Notions of Computation Determine Monads
Plotkin, Gordon; Power, John
2002-01-01
We model notions of computation using algebraic operations and equations. We show that these generate several of the monads of primary interest that have been used to model computational effects, with the striking ...
computational modeling of biological systems
Faculty listing for "computational modeling of biological systems" ... Research Interests: computational modeling of biological systems.
Typologies of Computation and Computational Models
Mark Burgin; Gordana Dodig-Crnkovic
2013-12-09
We need a much better understanding of information processing and computation as its primary form. Future progress of new computational devices capable of dealing with problems of big data, the internet of things, the semantic web, cognitive robotics and neuroinformatics depends on adequate models of computation. In this article we first present the current state of the art through a systematization of existing models and mechanisms, and outline a basic structural framework of computation. We argue that, defining computation as information processing, and given that there is no information without (physical) representation, the dynamics of information on the fundamental level is physical/intrinsic/natural computation. As a special case, intrinsic computation is used for designed computation in computing machinery. Intrinsic natural computation occurs on a variety of levels of physical processes, comprising the levels of computation of living organisms (including highly intelligent animals) as well as designed computational devices. The present article offers a typology of current models of computation and indicates future paths for the advancement of the field, both through the development of new computational models and by learning from nature how to compute better using different mechanisms of intrinsic computation.
Computational Models for Understanding Weather
Muraki, David J.
Computational Models for Understanding Weather. Mathematics for Atmospheric Science ... zonal jetstream in unstable weather ... Baroclinic Instability, Vortices
COLLEGE OF SCIENCE Computational Modeling & Data Analytics
Crawford, T. Daniel
COLLEGE OF SCIENCE Computational Modeling & Data Analytics. The Bachelor of Science in Computational Modeling and Data Analytics (CMDA) ... It imparts the unique blend of skills from Statistics, Mathematics, and Computer Science needed ...
LANL computer model boosts engine efficiency
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
The KIVA model has been instrumental in helping researchers and manufacturers understand...
2014-02-21 Issuance: Proposed Determination of Computer Servers...
Office of Environmental Management (EM)
2014-02-21 Issuance: Proposed Determination of Computer Servers as a Covered Consumer Product; Withdrawal. This document is a...
Sierra Toolkit computational mesh conceptual model.
Baur, David G.; Edwards, Harold Carter; Cochran, William K.; Williams, Alan B.; Sjaardema, Gregory D.
2010-03-01
The Sierra Toolkit computational mesh is a software library intended to support massively parallel multiphysics computations on dynamically changing unstructured meshes. This domain of intended use is inherently complex due to distributed memory parallelism, parallel scalability, heterogeneity of physics, heterogeneous discretization of an unstructured mesh, and runtime adaptation of the mesh. Management of this inherent complexity begins with a conceptual analysis and modeling of this domain of intended use; i.e., development of a domain model. The Sierra Toolkit computational mesh software library is designed and implemented based upon this domain model. Software developers using, maintaining, or extending the Sierra Toolkit computational mesh library must be familiar with the concepts/domain model presented in this report.
Cosmic Logic: a Computational Model
Vitaly Vanchurin
2015-07-05
We describe a simple computational model of cosmic logic suitable for analysis of, for example, discretized cosmological systems. The construction is based on a particular model of computation, developed by Alan Turing, with cosmic observers (CO), cosmic measures (CM) and cosmic symmetries (CS) described by Turing machines. CO machines always start with a blank tape, and CM machines take a CO's Turing number (also known as its description number or Gödel number) as input and output the corresponding probability. Similarly, CS machines take CO Turing numbers as input, but output either one if the CO machines are in the same equivalence class or zero otherwise. We argue that CS machines are more fundamental than CM machines and thus should be used as building blocks in constructing CM machines. We prove the noncomputability of a CS machine which discriminates between two classes of CO machines: mortal, which halt in finite time, and immortal, which run forever. In the context of eternal inflation this result implies that it is impossible to construct CM machines to compute probabilities using cutoff prescriptions, or that all of the cutoff measures are noncomputable.
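The mortal/immortal dichotomy in this abstract is exactly the halting problem, so the stated noncomputability follows from the classic diagonal argument. A minimal Python sketch of that argument (illustrative only; `is_mortal` is a hypothetical total decider, not anything defined in the paper):

```python
# Illustrative only: `is_mortal` stands for a hypothetical CS-style
# machine claiming to classify programs as mortal (halting) or immortal.
def make_diagonal(is_mortal):
    """Return a program that contradicts the decider's verdict on itself."""
    def diagonal():
        if is_mortal(diagonal):
            while True:          # decider said "mortal" -> run forever
                pass
        # decider said "immortal" -> halt immediately
    return diagonal

# Any concrete decider is defeated. E.g. one that calls everything immortal:
always_immortal = lambda prog: False
d = make_diagonal(always_immortal)
d()  # halts, contradicting the decider's "immortal" verdict
```

Whatever answer the decider gives about the diagonal program, the program does the opposite, so no such total decider can exist.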
Parallel computing in enterprise modeling.
Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.
2008-08-01
This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete-event simulation and agent-based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistic, economic and social simulations are members of this class where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive submodels, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.
Determining position inside building via laser rangefinder and handheld computer
Ramsey, Jr. James L. (Albuquerque, NM); Finley, Patrick (Albuquerque, NM); Melton, Brad (Albuquerque, NM)
2010-01-12
An apparatus, computer software, and a method of determining position inside a building comprising selecting on a PDA at least two walls of a room in a digitized map of a building or a portion of a building, pointing and firing a laser rangefinder at corresponding physical walls, transmitting collected range information to the PDA, and computing on the PDA a position of the laser rangefinder within the room.
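As an illustration of the geometry this abstract describes (a sketch under assumed conventions, not the patent's actual algorithm): if each selected wall i is the floor-plan line {x : n_i · x = c_i} with inward unit normal n_i, then a perpendicular range r_i places the device on the line n_i · x = c_i + r_i, and two non-parallel walls pin down the 2-D position via a 2x2 linear solve. All names and conventions here are hypothetical:

```python
def position_from_ranges(n1, c1, r1, n2, c2, r2):
    """Intersect the two range loci n_i . x = c_i + r_i (walls not parallel).

    n1, n2: inward unit normals (nx, ny) of the two selected walls;
    c1, c2: wall-line offsets on the floor plan; r1, r2: measured ranges.
    """
    b1, b2 = c1 + r1, c2 + r2
    det = n1[0] * n2[1] - n1[1] * n2[0]   # zero iff the walls are parallel
    if abs(det) < 1e-12:
        raise ValueError("walls are parallel; pick a different pair")
    x = (b1 * n2[1] - b2 * n1[1]) / det   # Cramer's rule
    y = (n1[0] * b2 - n2[0] * b1) / det
    return x, y

# Walls x = 0 and y = 0 with inward normals (1, 0) and (0, 1);
# ranges of 2 m and 3 m put the device at (2.0, 3.0).
print(position_from_ranges((1, 0), 0.0, 2.0, (0, 1), 0.0, 3.0))
```

Three or more walls would give an overdetermined system, solvable in the same spirit by least squares.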
Quantum computation beyond the circuit model
Jordan, Stephen Paul
2008-01-01
The quantum circuit model is the most widely used model of quantum computation. It provides both a framework for formulating quantum algorithms and an architecture for the physical construction of quantum computers. However, ...
Computable General Equilibrium Models for Sustainability Impact...
... and prospects ... References: computable general equilibrium models ... Abstract: "Sustainability Impact Assessment (SIA) of economic, environmental, and social effects...
Computer Modelling of Pigeon Navigation according to the "Map and Compass" Model
Nehmzow, Ulrich
Computer Modelling of Pigeon Navigation according to the "Map and Compass" Model. Ulrich Nehmzow (nehmzow@zoology.uni-frankfurt.de). Abstract: This paper presents a computer model of pigeon navigation (homing), based on Kramer's map-and-compass ... intersecting gradients which are used by the birds to determine the correct compass heading for home
On Continuous Models of Computation: Towards Computing the Distance Between
Schellekens, Michel P.
... with building formal, mathematical models both for aspects of the computational process and for features ... discuss this issue in Section 3.1. 6th Irish Workshop on Formal Methods (IWFM'03), eWiC, British Computer Society ... traditionally associated with computer science are logic and discrete mathematics, the latter including set theory ...
Theoretical, experimental, and computational aspects of optical property determination of
Mandelis, Andreas
density-wave source is applied to data from model phantoms. The combined theoretical, experimental, and computational ... media uniquely, as compared with PPTR, which exhibits uniqueness problems. From data sets obtained ... applications, especially with turbid media such as tissue [36]. In earlier studies [4,5], PPTR was used ...
Cupola Furnace Computer Process Model
Seymour Katz
2004-12-31
The cupola furnace generates more than 50% of the liquid iron used to produce the 9+ million tons of castings annually. The cupola converts iron and steel into cast iron. The main advantages of the cupola furnace are lower energy costs than those of competing (electric) furnaces and the ability to melt less expensive metallic scrap than the competing furnaces. However, the chemical and physical processes that take place in the cupola furnace are highly complex, making it difficult to operate the furnace in an optimal fashion. The results are low energy efficiency and poor recovery of important and expensive alloy elements due to oxidation. Between 1990 and 2004, under the auspices of the Department of Energy, the American Foundry Society and General Motors Corp., a computer simulation of the cupola furnace was developed that accurately describes the complex behavior of the furnace. When provided with the furnace input conditions, the model provides accurate values of the output conditions in a matter of seconds. It also provides key diagnostics. Using clues from the diagnostics, a trained specialist can infer changes in the operation that will move the system toward higher efficiency. Repeating the process in an iterative fashion leads to near-optimum operating conditions within just a few iterations. More advanced uses of the program have been examined. The program is currently being combined with an ''Expert System'' to permit optimization in real time. The program has been combined with ''neural network'' programs to effect very easy scanning of a wide range of furnace operation. Rudimentary efforts were successfully made to operate the furnace using a computer. References to these more advanced systems can be found in the ''Cupola Handbook'', Chapter 27, American Foundry Society, Des Plaines, IL (1999).
Sandia National Laboratories: Computational Modeling & Simulation
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Waste Isolation Pilot Plant Accident Investigation Analysis Support. On December 3, 2014, in Computational Modeling & Simulation, Energy, Materials Science, News, News & Events, ...
Modeling Computer Viruses MSc Thesis (Afstudeerscriptie)
Amsterdam, University of
Modeling Computer Viruses. MSc Thesis (Afstudeerscriptie) written by Luite Menno Pieter van Zelst ... About half a year ago, Alban Ponse, my thesis supervisor, suggested the topic of `computer viruses' ... industry and the creators of computer viruses. After all, the antivirus industry stands to lose a lot
Climate Modeling using HighPerformance Computing
Mirin, A A
2007-02-05
The Center for Applied Scientific Computing (CASC) and the LLNL Climate and Carbon Science Group of Energy and Environment (E and E) are working together to improve predictions of future climate by applying the best available computational methods and computer resources to this problem. Over the last decade, researchers at the Lawrence Livermore National Laboratory (LLNL) have developed a number of climate models that provide state-of-the-art simulations on a wide variety of massively parallel computers. We are now developing and applying a second generation of high-performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well.
Computer aided nuclear reactor modeling
Warraich, Khalid Sarwar
1995-01-01
at a thermal hydraulic analysis program, CENTAR, and present the problems encountered in the traditional modeling process. These problems are the difficulty the modeler faces to cope with the model size, complexity, rigidity, correctness, long modeling...
Computer aided nuclear reactor modeling
Warraich, Khalid Sarwar
1995-01-01
after the model has been sent to CENTAR). We then present an interactive, graphical, icon-based modeling program, Alpha, that lets the user "draw" the model on screen and translates it into a syntactically correct CENTAR input model which is also free...
Mason, Harris E. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Walsh, Stuart D. C. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); DuFrane, Wyatt L. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Carroll, Susan A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2014-06-17
The development of accurate, predictive models for use in determining wellbore integrity requires detailed information about the chemical and mechanical changes occurring in hardened Portland cements. X-ray computed tomography (XRCT) provides a method that can nondestructively probe these changes in three dimensions. Here, we describe a method for extracting subvoxel mineralogical and chemical information from synchrotron XRCT images by combining advanced image segmentation with geochemical models of cement alteration. The method relies on determining "effective linear activity coefficients" (ELAC) for the white-light source to generate calibration curves that relate the image grayscales to material composition. The resulting data set supports the modeling of cement alteration by CO2-rich brine with discrete increases in calcium concentration at reaction boundaries. The results of these XRCT analyses can be used to further improve coupled geochemical and mechanical models of cement alteration in the wellbore environment.
Modelling morphogenesis as an amorphous computation
Bhattacharyya, Arnab
2006-01-01
This thesis presents a programming-language viewpoint for morphogenesis, the process of shape formation during embryological development. We model morphogenesis as a self-organizing, self-repairing amorphous computation ...
Preliminary Phase Field Computational Model Development
Li, Yulan; Hu, Shenyang Y.; Xu, Ke; Suter, Jonathan D.; McCloy, John S.; Johnson, Bradley R.; Ramuhalli, Pradeep
2014-12-15
This interim report presents progress towards the development of mesoscale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses the inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, Fe-Cr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution of the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of systems large enough that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof of concept to investigate major-loop effects of single versus polycrystalline bulk iron and the effects of single nonmagnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single-crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field.
Because the phase-field model dimensions are limited relative to the size of most specimens used in experiments, special experimental methods were devised to create similar boundary conditions in the iron films. Preliminary MFM studies conducted on single and polycrystalline iron films with small sub-areas created with a focused ion beam have correlated quite well qualitatively with phase-field simulations. However, phase-field model dimensions are still small relative to experiments thus far. We are in the process of increasing the size of the models and decreasing specimen size so both have identical dimensions. Ongoing research is focused on validation of the phase-field model. Validation is being accomplished through comparison with experimentally obtained MFM images (in progress) and planned measurements of major hysteresis loops and first-order reversal curves. Extrapolation of simulation sizes to represent a more stochastic bulk-like system will require sampling of various simulations (i.e., with a single nonmagnetic defect, single magnetic defect, single grain boundary, single dislocation, etc.) with distributions of input parameters. These outputs can then be compared to laboratory magnetic measurements and ultimately used to simulate magnetic Barkhausen noise signals.
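For reference, the Landau-Lifshitz-Gilbert equation that the phase-field models above solve numerically has the standard Gilbert form (notation assumed here, not taken from the report): M is the magnetization, H_eff the effective field, gamma the gyromagnetic ratio, alpha the damping constant, and M_s the saturation magnetization.

```latex
\frac{\partial \mathbf{M}}{\partial t}
  = -\gamma \, \mathbf{M} \times \mathbf{H}_{\mathrm{eff}}
  + \frac{\alpha}{M_s} \, \mathbf{M} \times \frac{\partial \mathbf{M}}{\partial t}
```

The first term precesses the magnetization about the effective field; the second, damping, term relaxes it toward the field direction.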
Modelling Cloud Computing Infrastructure Marianne Hickey and Maher Rahmouni,
ParisSud XI, Université de
Modelling Cloud Computing Infrastructure. Marianne Hickey and Maher Rahmouni, HP Labs, Long Down ... and shared vocabularies. Keywords: Modelling, Cloud Computing, RDF, Ontology, Rules, Validation. Introduction: There is currently a shift towards cloud computing, which changes the model of provision ...
Regional weather modeling on parallel computers.
Baillie, C.; Michalakes, J.; Skalin, R.; Mathematics and Computer Science; NOAA Forecast Systems Lab.; Norwegian Meteorological Inst.
1997-01-01
This special issue on 'regional weather models' complements the October 1995 special issue on 'climate and weather modeling', which focused on global models. In this introduction we review the similarities and differences between regional and global atmospheric models. Next, the structure of regional models is described and we consider how the basic algorithms applied in these models influence the parallelization strategy. Finally, we give a brief overview of the eight articles in this issue and discuss some remaining challenges in the area of adapting regional weather models to parallel computers.
Computationally Efficient Modeling of High-Efficiency Clean Combustion...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines...
Computer modeling reveals how surprisingly potent hepatitis C...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Computer modeling reveals how surprisingly potent hepatitis C drug works. A study reveals how daclatasvir targets one of its proteins and causes the...
HIV virus spread and evolution studied through computer modeling
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
HIV virus spread and evolution studied through computer modeling. This approach distinguishes between susceptible and infected...
A Computational Model for Adaptive Emotion Regulation
Treur, Jan
A Computational Model for Adaptive Emotion Regulation. Tibor Bosse, Matthijs Pontier, and Jan Treur. Abstract: Emotion regulation describes how a subject can use certain strategies to affect emotion response levels. Usually, models for emotion regulation assume mechanisms based on feedback loops that indicate
Mechanistic Models in Computational Social Science
Holme, Petter
2015-01-01
Quantitative social science is not only about regression analysis or, in general, data inference. Computer simulations of social mechanisms have a history of over 60 years. They have been used for many different purposes: to test scenarios, to test the consistency of descriptive theories (proof-of-concept models), to explore emerging phenomena, for forecasting, etc. In this essay, we sketch these historical developments, the role of mechanistic models in the social sciences and the influences from the natural and formal sciences. We argue that mechanistic computational models form a natural common ground for the social and natural sciences, and look forward to possible future information flow across the social-natural divide.
Decision Model for Cloud Computing
Kondo, Derrick
with different pricing models for cost-cutting, resource-hungry users. Second, prices can differ dynamically ... Grenoble, France ... Tradeoffs ... "Spot" instance price varies dynamically; a spot instance is provided when the user's bid is greater ...
CDF computing and event data models
Snider, F.D.; /Fermilab
2005-12-01
The authors discuss the computing systems, usage patterns and event data models used to analyze Run II data from the CDFII experiment at the Tevatron collider. A critical analysis of the current implementation and design reveals some of the stronger and weaker elements of the system, which serve as lessons for future experiments. They highlight a need to maintain simplicity for users in the face of an increasingly complex computing environment.
Who Wants to Know What When? Privacy Preference Determinants in Ubiquitous Computing
Madiraju, Praveen
Who Wants to Know What When? Privacy Preference Determinants in Ubiquitous Computing. Scott Lederer ... disclosed through a ubiquitous computing system. We found that privacy preferences varied by inquirer ... more privacy in ubiquitous computing. Keywords: Ubiquitous Computing, Privacy, Social and Legal Issues
2014-02-21 Issuance: Proposed Determination of Computer and Battery...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
2014-02-21 Issuance: Proposed Determination of Computer and Battery Backup Systems as a Covered Consumer Product. This...
Modeling Computations in a Semantic Network
Marko A. Rodriguez; Johan Bollen
2007-05-31
Semantic network research has seen a resurgence from its early history in the cognitive sciences with the inception of the Semantic Web initiative. The Semantic Web effort has brought forth an array of technologies that support the encoding, storage, and querying of the semantic network data structure at the world stage. Currently, the popular conception of the Semantic Web is that of a data modeling medium where real and conceptual entities are related in semantically meaningful ways. However, new models have emerged that explicitly encode procedural information within the semantic network substrate. With these new technologies, the Semantic Web has evolved from a data modeling medium to a computational medium. This article provides a classification of existing computational modeling efforts and the requirements of supporting technologies that will aid in the further growth of this burgeoning domain.
High performance computing and numerical modelling
2014-01-01
Numerical methods play an ever more important role in astrophysics. This is especially true in theoretical works, but of course, even in purely observational projects, data analysis without massive use of computational methods has become unthinkable. The key utility of computer simulations comes from their ability to solve complex systems of equations that are either intractable with analytic techniques or only amenable to highly approximative treatments. Simulations are best viewed as a powerful complement to analytic reasoning, and as the method of choice to model systems that feature enormous physical complexity such as star formation in evolving galaxies, the topic of this 43rd Saas Fee Advanced Course. The organizers asked me to lecture about high performance computing and numerical modelling in this winter school, and to specifically cover the basics of numerically treating gravity and hydrodynamics in the context of galaxy evolution. This is still a vast field, and I necessarily had to select a subset ...
A Model for the Human Computer Interface Evaluation in Safety Critical Computer
Schreiber, Fabio A.
A Model for the Human Computer Interface Evaluation in Safety Critical Computer Applications. Fabio A. Schreiber ... of the IEEE International Conference and Workshop: Engineering of Computer-Based Systems, March 1998, Jerusalem, Israel
Molecular structure determination on a computational and data grid
Miller, Russ
...'s ability to present the user with a large computational infrastructure that will allow for the processing ... in a routine fashion to solve difficult atomic-resolution structures, containing as many as 1000 unique non-hydrogen ...
Sensitivity Analysis Methodology for a Complex System Computational Model
Sensitivity Analysis Methodology for a Complex System Computational Model. James J. Filliben ... of computational models to serve as predictive surrogates for the system. The use of such models increasingly ... of a computational model for a complex system is always an essential component in accepting/rejecting such a model
Kim, G.; Pesaran, A.; Smith, K.; Graf, P.; Jun, M.; Yang, C.; Li, G.; Li, S.; Hochman, A.; Tselepidakis, D.; White, J.
2014-06-01
This presentation discusses the significant enhancement of computational efficiency in a nonlinear multiscale battery model for computer-aided engineering in current research at NREL.
Advanced Computing Tools and Models for Accelerator Physics
Ryne, Robert D.
2008-01-01
TOOLS AND MODELS FOR ACCELERATOR PHYSICS, Robert D. Ryne ... computing tools for accelerator physics. Following an ... scale computing in accelerator physics. INTRODUCTION ...
Computational Modeling of Self-organization of Dislocations and...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Computational Modeling of Self-organization of Dislocations and Mesoscale Deformation of Metals. Event Sponsor: Mathematics and Computing Science, LANS Seminar. Start Date: Jun 19...
Computer Modelling of 3D Geological Surface
Kodge, B G
2011-01-01
Geological surveying presently uses methods and tools for the computer modeling of 3D structures of the geographical subsurface and geotechnical characterization, as well as the application of geoinformation systems for the management and analysis of spatial data and their cartographic presentation. The objective of this paper is to present a 3D geological surface model of the Latur district in the Maharashtra state of India. This study is undertaken through several processes, discussed in this paper, to generate and visualize the automated 3D geological surface model of the projected area.
Computing Approximate Solutions of the Protein Structure Determination Problem using
Dal Palù, Alessandro
... Dept. of Computer Science, New Mexico State University ... dramatically. In these cases, constraint programming can be exploited to generate suboptimal candidates ... by mining the protein data bank, e.g., a collection of rotamers can be introduced to provide additional ...
Wild Fire Computer Model Helps Firefighters
Canfield, Jesse
2014-06-02
A hightech computer model called HIGRAD/FIRETEC, the cornerstone of a collaborative effort between U.S. Forest Service Rocky Mountain Research Station and Los Alamos National Laboratory, provides insights that are essential for frontline fire fighters. The science team is looking into levels of bark beetleinduced conditions that lead to drastic changes in fire behavior and how variable or erratic the behavior is likely to be.
COMPUTATIONAL MODELING OF CIRCULATING FLUIDIZED BED REACTORS
Ibrahim, Essam A
2013-01-09
Details of numerical simulations of two-phase gas-solid turbulent flow in the riser section of a Circulating Fluidized Bed Reactor (CFBR) using the Computational Fluid Dynamics (CFD) technique are reported. Two CFBR riser configurations are considered and modeled. Each of these two riser models consists of an inlet, exit, connecting elbows and a main pipe. Both riser configurations are cylindrical and have the same diameter but differ in their inlet lengths and main pipe height to enable investigation of riser geometrical scaling effects. In addition, two types of solid particles are exploited in the solid phase of the two-phase gas-solid riser flow simulations to study the influence of solid loading ratio on flow patterns. The gaseous phase in the two-phase flow is represented by standard atmospheric air. The CFD-based FLUENT software is employed to obtain steady-state and transient solutions for flow modulations in the riser. The physical dimensions, types and numbers of computational meshes, and the solution methodology utilized in the present work are stated. Flow parameters, such as static and dynamic pressure, species velocity, and volume fractions are monitored and analyzed. The differences in the computational results between the two models, under steady and transient conditions, are compared, contrasted, and discussed.
Dang, Zhe
Bond Computing Systems: a Biologically Inspired and High-level Dynamics Model for Pervasive ... Targeting the modeling of the high-level dynamics of pervasive computing systems, we introduce Bond Computing Systems ... are regular, and study their computation power and verification problems. Among other results, we show ...
7. Business Models: Learnings from founding a Computer Vision Startup
Solem, Jan Erik
7. Business Models. Learnings from founding a Computer Vision Startup. How are you ... models (not only technology). Auction business model; bricks and clicks business model; collective business models; component business model; cutting out ...
Determination of matric adjoints using a digital computer
Guseman, Lawrence Frank
19620101T23:59:59.000Z
... the computer facility available. ... D. Drew for developing ... and for his advice on ... programming. TABLE OF CONTENTS: I. INTRODUCTION. II. A FINITE SEQUENTIALLY COMPACT PROCESS FOR THE ADJOINTS OF MATRICES OVER ARBITRARY INTEGRAL ... is probably nowhere more evident than when working with matrices. In this thesis an efficient technique for determining exact matric adjoints is developed. The technique is applicable to either singular or nonsingular matrices with integral entries...
Modeling-Computer Simulations At Stillwater Area (Wisian & Blackwell...
Activity: Modeling-Computer Simulations At Stillwater Area (Wisian & Blackwell, 2004) Exploration Activity Details Location Stillwater Area Exploration Technique...
An Interactive Computer Model of Two-Country Trade
Hamlen, Kevin W.
An Interactive Computer Model of Two-Country Trade. Bill Hamlen and Kevin Hamlen. Abstract: We introduce an interactive computer model of two-country trade that allows students to investigate ... is to present an interactive computer model of two-country international trade that allows students ...
Determinant Formulas for Matrix Model Free Energy
D. Vasiliev
20050711T23:59:59.000Z
The paper contains a new nonperturbative representation for the subleading contribution to the free energy of the multi-cut solution for the Hermitian matrix model. This representation is a generalisation of the formula proposed by Klemm, Marino and Theisen for the two-cut solution, which was obtained by comparing the cubic matrix model with the topological B-model on the local Calabi-Yau geometry $\hat {II}$ and was checked perturbatively. In this paper we give a direct proof of their formula and generalise it to the general multi-cut solution.
Computational procedures for determining parameters in Ramberg-Osgood
Office of Scientific and Technical Information (OSTI)
Computational models of intergroup competition and warfare.
Letendre, Kenneth (University of New Mexico); Abbott, Robert G.
20111101T23:59:59.000Z
This document reports on the research of Kenneth Letendre, the recipient of a Sandia Graduate Research Fellowship at the University of New Mexico. Warfare is an extreme form of intergroup competition in which individuals make extreme sacrifices for the benefit of their nation or other group to which they belong. Among animals, limited, non-lethal competition is the norm. It is not fully understood what factors lead to warfare. We studied the global variation in the frequency of civil conflict among countries of the world, and its positive association with variation in the intensity of infectious disease. We demonstrated that the burden of human infectious disease importantly predicts the frequency of civil conflict and tested a causal model for this association based on the parasite-stress theory of sociality. We also investigated the organization of social foraging by colonies of harvester ants in the genus Pogonomyrmex, using both field studies and computer models.
Robust Resource Allocations in Parallel Computing Systems: Model and Heuristics
Maciejewski, Anthony A. "Tony"
Robust Resource Allocations in Parallel Computing Systems: Model and Heuristics. Vladimir Shestak ... in parallel computer systems (including heterogeneous clusters) should be allocated to the computational ... was supported by the Colorado State University Center for Robustness in Computer Systems (funded by the Colorado ...
Dang, Zhe
Bond Computing Systems: a Biologically Inspired and High-level Dynamics Model for Pervasive ... their computation power and verification problems. Among other results, we show that the computing power ... techniques for pervasive computing systems. At a high level, there are at least two views in modeling ...
Math 574 Optimization Models in Computational Biology (Spring 2008)
Krishnamoorthy, Bala
Math 574 - Optimization Models in Computational Biology (Spring 2008). Course Title: Topics in Optimization: Models in Computational Biology. Time: Tue/Thu 12:00-1:15 pm. Credits: 3. Location: Webster B12. Bioinformatics and computational biology (BCB) is one of the "hottest" interdisciplinary areas of science today ...
Computer Virus Propagation Models Giuseppe Serazzi and Stefano Zanero
Zanero, Stefano
Computer Virus Propagation Models. Giuseppe Serazzi and Stefano Zanero, Dipartimento di Elettronica e ... .zanero@polimi.it. Abstract: The availability of reliable models of computer virus propagation would prove useful ... The concept of a computer virus is relatively old in the young and expanding field of information security ...
Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems
Olshausen, Bruno
Theoretical Neuroscience: Computational and Mathematical Modeling of Neural Systems. Peter Dayan ... The emergence of this book represents more than ... scientist was brought up on ``Kandel and Schwartz.'' Now, at last, the field of computational neuroscience ...
Cummings, P. T.
20100208T23:59:59.000Z
This document reports the outcomes of the Computational Nanoscience Project, "Integrated Multiscale Modeling of Molecular Computing Devices". It includes a list of participants and publications arising from the research supported.
Computational systems biology and in silico modeling of the
Borenstein, Elhanan
Computational systems biology and in silico modeling of the human microbiome. Elhanan Borenstein ... Professor at the Santa Fe Institute. His research interests include computational and evolutionary systems ... is a complex biological system with numerous interacting components across multiple organizational levels.
Integration of engineering models in computer-aided preliminary design
Lajoie, Ronnie M.
The problems of the integration of engineering models in computer-aided preliminary design are reviewed. This paper details the research, development, and testing of modifications to Paper Airplane, a LISP-based computer ...
Department of Computing: CSP||B modelling for railway verification
Doran, Simon J.
University of Surrey, Department of Computing, Computing Sciences Report CS1203. CSP||B modelling ... Schneider, Helen Treharne, March 30th 2012. CSP||B modelling for railway verification: the double junction ... work in verifying railway systems through CSP||B modelling and analysis. In particular we consider ...
Computational modeling of composite material fires.
Brown, Alexander L.; Erickson, Kenneth L.; Hubbard, Joshua Allen; Dodd, Amanda B.
20101001T23:59:59.000Z
Composite materials behave differently from conventional fuel sources and have the potential to smolder and burn for extended time periods. As the amount of composite materials on modern aircraft continues to increase, understanding the response of composites in fire environments becomes increasingly important. An effort is ongoing to enhance the capability to simulate composite material response in fires, including the decomposition of the composite and the interaction with a fire. To adequately model composite material in a fire, two physical model development tasks are necessary; first, the decomposition model for the composite material and second, the interaction with a fire. A porous media approach for the decomposition model, including a time-dependent formulation with the effects of heat, mass, species, and momentum transfer of the porous solid and gas phase, is being implemented in an engineering code, ARIA. ARIA is a Sandia National Laboratories multiphysics code including a range of capabilities such as incompressible Navier-Stokes equations, energy transport equations, species transport equations, non-Newtonian fluid rheology, linear elastic solid mechanics, and electrostatics. To simulate the fire, FUEGO, also a Sandia National Laboratories code, is coupled to ARIA. FUEGO represents the turbulent, buoyantly driven incompressible flow, heat transfer, mass transfer, and combustion. FUEGO and ARIA are uniquely able to solve this problem because they were designed using a common architecture (SIERRA) that enhances multiphysics coupling, and both codes are capable of massively parallel calculations, enhancing performance. The decomposition reaction model is developed from small-scale experimental data, including thermogravimetric analysis (TGA) and Differential Scanning Calorimetry (DSC) in both nitrogen and air for a range of heating rates, and from available data in the literature.
The response of the composite material subject to a radiant heat flux boundary condition is examined to study the propagation of decomposition fronts of the epoxy and carbon fiber and their dependence on ambient conditions such as oxygen concentration, surface flow velocity, and radiant heat flux. In addition to the computational effort, small-scale experimental efforts to obtain adequate data for validating model predictions are ongoing. The goal of this paper is to demonstrate the progress of the capability for a typical composite material and emphasize the path forward.
Disruptive technology business models in cloud computing
Krikos, Alexis Christopher
20100101T23:59:59.000Z
Cloud computing, a term whose origins have been in existence for more than a decade, has come to fruition due to technological capabilities and marketplace demands. Cloud computing can be defined as a scalable and flexible ...
Modeling-Computer Simulations At Central Nevada Seismic Zone...
Nevada Seismic Zone Region (Biasi, Et Al., 2009) Exploration Activity Details Location Central Nevada Seismic Zone Geothermal Region Exploration Technique Modeling-Computer...
Modeling-Computer Simulations At Nw Basin & Range Region (Pritchett...
Pritchett, 2004) Exploration Activity Details Location Northwest Basin and Range Geothermal Region Exploration Technique Modeling-Computer Simulations Activity Date Usefulness not...
Modeling-Computer Simulations At Desert Peak Area (Wisian & Blackwell...
Desert Peak Area (Wisian & Blackwell, 2004) Exploration Activity Details Location Desert Peak Area Exploration Technique Modeling-Computer Simulations Activity Date Usefulness not...
Modeling-Computer Simulations At Walker-Lane Transitional Zone...
Pritchett, 2004) Exploration Activity Details Location Walker-Lane Transition Zone Geothermal Region Exploration Technique Modeling-Computer Simulations Activity Date Usefulness...
Scientists use world's fastest computer to model materials under...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Scientists use world's fastest computer to model materials under extreme conditions. Materials scientists are for the first time attempting to ...
Modeling-Computer Simulations At Fish Lake Valley Area (Deymonaz...
Additional References. Retrieved from "http://en.openei.org/w/index.php?title=Modeling-Computer_Simulations_At_Fish_Lake_Valley_Area_(Deymonaz,_Et_Al.,_2008)&oldid=3876..."
Modeling-Computer Simulations At Nevada Test And Training Range...
ENERGY/Geothermal Home. Exploration Activity: Modeling-Computer Simulations At Nevada Test And Training Range Area (Sabin, Et Al., 2004) Exploration Activity Details Location...
Gering, Kevin L.
20130101T23:59:59.000Z
A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics. The computing system also analyzes the cell information of the electrochemical cell with a Butler-Volmer (BV) expression modified to determine exchange current density of the electrochemical cell by including kinetic performance information related to pulse-time dependence, electrode surface availability, or a combination thereof. A set of sigmoid-based expressions may be included with the modified-BV expression to determine kinetic performance as a function of pulse time. The determined exchange current density may be used with the modified-BV expression, with or without the sigmoid expressions, to analyze other characteristics of the electrochemical cell. Model parameters can be defined in terms of cell aging, making the overall kinetics model amenable to predictive estimates of cell kinetic performance along the aging timeline.
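The modified expression described in this record builds on the standard Butler-Volmer relation between overpotential and current density. A minimal numerical sketch of the standard (unmodified) relation follows; it is not the patent's modified form, and every parameter value below is an illustrative assumption:

```python
import math

# Standard Butler-Volmer relation:
#   i = i0 * (exp(alpha_a*F*eta/(R*T)) - exp(-alpha_c*F*eta/(R*T)))
# i0 is the exchange current density; alpha_a/alpha_c are transfer
# coefficients; eta is the overpotential. Values are placeholders.
F = 96485.0   # Faraday constant, C/mol
R = 8.314     # universal gas constant, J/(mol K)

def butler_volmer(eta, i0=1e-3, alpha_a=0.5, alpha_c=0.5, T=298.15):
    """Net current density (A/cm^2) at overpotential eta (V)."""
    return i0 * (math.exp(alpha_a * F * eta / (R * T))
                 - math.exp(-alpha_c * F * eta / (R * T)))

# At zero overpotential the anodic and cathodic terms cancel exactly:
print(butler_volmer(0.0))  # prints 0.0
```

The record's modification layers sigmoid-based pulse-time terms onto this baseline; fitting i0 across aging periods is what enables the predictive estimates mentioned above.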
Parameter Discovery for Stochastic Computational Models in Systems Biology Using Bayesian Model
Parameter Discovery for Stochastic Computational Models in Systems Biology Using Bayesian Model ... Parameterized probabilistic complex computational (P2C2) models are increasingly used in computational systems biology to study biochemical and physiological systems. A key challenge is to build mechanistic P2C2 models ...
Kinetic Model for Motion Compensation in Computed Tomography
Kinetic Model for Motion Compensation in Computed Tomography. Zhou Yu, Jean-Baptiste Thibault ... MBIR algorithms have recently been applied to computed tomography and demonstrated superior image quality performance [1], [2], [3]. These methods ...
A framework for modelling trojans and computer virus infection
Cairns, Paul
A framework for modelling trojans and computer virus infection. Harold Thimbleby, Stuart Anderson ... world, including the possibility of Trojan Horse programs and computer viruses, as simply a finite realisation of a Turing Machine. We consider the actions of Trojan Horses and viruses in real computer systems ...
"Creating computational models of biological systems to better combat
Zhigilei, Leonid V.
"Creating computational models of biological systems to better combat dangerous pathogens and human ..." ... Biomedical Engineering, University of Virginia, Charlottesville, VA, 434.924.8195. Computational Systems Biology ... system in biofuel and nutraceutical production. With the aid of computational techniques, we can predict ...
Resource Portfolio Model's Determination of Conservation's CostEffectiveness1
Resource Portfolio Model's Determination of Conservation's Cost-Effectiveness. The regional Resource Portfolio Model (RPM) finds large amounts of conservation cost-effective. ... ,008 average megawatts of conservation. The electricity price forecast used for this initial estimate ... The cost of some ...
Bytecode unification of geospatial computable models
KÃ¶bben, Barend
... heterogeneous to fix and reuse. Field-based and objects-based geospatial models often share common GIS data ... and object-based data models, and other challenges regarding synergy of geospatial systems that need to use both types of data models. Keywords: field-based model, object-based model, computability, managed ...
Towards Real Earth Models Computational Geophysics on Unstructured Tetrahedral Meshes?
Farquharson, Colin G.
Towards Real Earth Models: Computational Geophysics on Unstructured Tetrahedral Meshes? Colin ... Outline: Geological models. Advantages of unstructured tetrahedral meshes. EM geophysics on unstructured tetrahedral meshes. Disadvantages, difficulties, challenges. Conclusions.
Gering, Kevin L
20130827T23:59:59.000Z
A system includes an electrochemical cell, monitoring hardware, and a computing system. The monitoring hardware periodically samples performance characteristics of the electrochemical cell. The computing system determines cell information from the performance characteristics of the electrochemical cell. The computing system also develops a mechanistic-level model of the electrochemical cell to determine performance fade characteristics of the electrochemical cell, and analyzes the mechanistic-level model to estimate performance fade characteristics over the aging of a similar electrochemical cell. The mechanistic-level model uses first constant-current pulses applied to the electrochemical cell at a first aging period and at three or more current values bracketing a first exchange current density. The mechanistic-level model also is based on second constant-current pulses applied to the electrochemical cell at a second aging period and at three or more current values bracketing the second exchange current density.
Applying High Performance Computing to Analyzing by Probabilistic Model Checking
Schneider, Carsten
Applying High Performance Computing to Analyzing by Probabilistic Model Checking: Mobile Cellular ... on the use of high performance computing in order to analyze ... with the probabilistic model checker PRISM. ... 1. Introduction. We report in this paper on the use of high performance ...
Overview of ASC Capability Computing System Governance Model
Doebling, Scott W. [Los Alamos National Laboratory
20120711T23:59:59.000Z
This document contains a description of the Advanced Simulation and Computing Program's Capability Computing System Governance Model. Objectives of the Governance Model are to ensure that the capability system resources are allocated on a priority-driven basis according to the Program requirements; and to utilize ASC Capability Systems for the large capability jobs for which they were designed and procured.
The Two Server Problem Models of Online Computation
Bein, Wolfgang
The Two Server Problem: Models of Online Computation; Results. A Randomized Algorithm for Two Servers ... James Oravec; supported by NSF grant CCR-0312093. Wolfgang Bein. A Randomized Algorithm for Two Servers in Cross Polytope S ... The Randomized 2-Server ...
Los Alamos CCS (Center for Computer Security) formal computer security model
Dreicer, J.S.; Hunteman, W.J. (Los Alamos National Lab., NM (USA))
19890101T23:59:59.000Z
This paper provides a brief presentation of the formal computer security model currently being developed at the Los Alamos Department of Energy (DOE) Center for Computer Security (CCS). The initial motivation for this effort was the need to provide a method by which DOE computer security policy implementation could be tested and verified. The actual analytical model was a result of the integration of current research in computer security and previous modeling and research experiences. The model is being developed to define a generic view of the computer and network security domains, to provide a theoretical basis for the design of a security model, and to address the limitations of present models. Formal mathematical models for computer security have been designed and developed in conjunction with attempts to build secure computer systems since the early 1970s. The foundation of the Los Alamos DOE CCS model is a series of functionally dependent probability equations, relations, and expressions. The mathematical basis appears to be justified and is undergoing continued discrimination and evolution. We expect to apply the model to the discipline of the Bell-LaPadula abstract sets of objects and subjects. 5 refs.
Error models in quantum computation: an application of model selection
Lucia Schwarz; Steven van Enk
20130904T23:59:59.000Z
Threshold theorems for fault-tolerant quantum computing assume that errors are of certain types. But how would one detect whether errors of the "wrong" type occur in one's experiment, especially if one does not even know what type of error to look for? The problem is that for many qubits a full state description is impossible to analyze, and a full process description is even more impossible to analyze. As a result, one simply cannot detect all types of errors. Here we show through a quantum state estimation example (on up to 25 qubits) how to attack this problem using model selection. We use, in particular, the Akaike Information Criterion. The example indicates that the number of measurements that one has to perform before noticing errors of the wrong type scales polynomially both with the number of qubits and with the error size.
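The Akaike Information Criterion used in the record above trades goodness of fit against parameter count: AIC = 2k - 2 ln L, with lower values preferred. A minimal sketch with made-up log-likelihood numbers (the qubit setting and figures here are illustrative, not from the paper):

```python
def aic(k, log_likelihood):
    """Akaike Information Criterion: 2k - 2*ln(L). Lower is better."""
    return 2 * k - 2 * log_likelihood

# Toy comparison between two hypothetical error models: a simple one with
# 2 parameters and a richer one with 10. Log-likelihoods are made up.
simple = aic(k=2, log_likelihood=-120.0)   # 244.0
rich = aic(k=10, log_likelihood=-118.0)    # 256.0
best = "simple" if simple < rich else "rich"
print(best)  # prints "simple"
```

Here the richer model's small likelihood gain does not justify its 8 extra parameters, which is exactly the kind of penalty that lets model selection flag (or rule out) an unexpected error type.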
Computer Science, Texas A&M University: Modeling Heterogeneous User
Loguinov, Dmitri
Computer Science, Texas A&M University. Modeling Heterogeneous User Churn and Local Resilience of Unstructured P2P Networks.
Applications to Computer Closed Network Model
Shihada, Basem
· Suitable for modeling "virtual circuit" (VC) with window flow control. · Data sources/sinks are modeled explicitly. Model of a VC with Window Flow Control: packets are individually acknowledged. · A customer entering ...
Integrating Numerical Computation into the Modeling Instruction Curriculum
Caballero, Marcos D; Aiken, John M; Douglas, Scott S; Scanlon, Erin M; Thoms, Brian; Schatz, Michael F
20120101T23:59:59.000Z
We describe a way to introduce physics high school students with no background in programming to computational problem-solving experiences. Our approach builds on the great strides made by the Modeling Instruction reform curriculum. This approach emphasizes the practices of "Developing and using models" and "Computational thinking" highlighted by the NRC K-12 science standards framework. We taught 9th-grade students in a Modeling-Instruction-based physics course to construct computational models using the VPython programming environment. Numerical computation within the Modeling Instruction curriculum provides coherence among the curriculum's different force and motion models, links the various representations which the curriculum employs, and extends the curriculum to include real-world problems that are inaccessible to a purely analytic approach.
Modeling Time in Computing: A Taxonomy and a Comparative Survey
Carlo A. Furia; Dino Mandrioli; Angelo Morzenti; Matteo Rossi
20101011T23:59:59.000Z
The increasing relevance of areas such as real-time and embedded systems, pervasive computing, hybrid systems control, and biological and social systems modeling is bringing a growing attention to the temporal aspects of computing, not only in the computer science domain, but also in more traditional fields of engineering. This article surveys various approaches to the formal modeling and analysis of the temporal features of computer-based systems, with a level of detail that is suitable also for nonspecialists. In doing so, it provides a unifying framework, rather than just a comprehensive list of formalisms. The paper first lays out some key dimensions along which the various formalisms can be evaluated and compared. Then, a significant sample of formalisms for time modeling in computing are presented and discussed according to these dimensions. The adopted perspective is, to some extent, historical, going from "traditional" models and formalisms to more modern ones.
A computer program to determine the specific power of prismatic-core reactors
Dobranich, D.
19870501T23:59:59.000Z
A computer program has been developed to determine the maximum specific power for prismatic-core reactors as a function of maximum allowable fuel temperature, core pressure drop, and coolant velocity. The prismatic-core reactors consist of hexagonally shaped fuel elements grouped together to form a cylindrically shaped core. A gas coolant flows axially through circular channels within the elements, and the fuel is dispersed within the solid element material either as a composite or in the form of coated pellets. Different coolant, fuel, coating, and element materials can be selected to represent different prismatic-core concepts. The computer program allows the user to divide the core into any arbitrary number of axial levels to account for different axial power shapes. An option in the program allows the automatic determination of the core height that results in the maximum specific power. The results of parametric specific power calculations using this program are presented for various reactor concepts.
Computational load in model physics of the parallel NCAR community climate model
Michalakes, J.G.; Nanjundiah, R.S.
19941101T23:59:59.000Z
Maintaining a balance of computational load over processors is a crucial issue in parallel computing. For efficient parallel implementation, complex codes such as climate models need to be analyzed for load imbalances. In the present study we focus on the load imbalances in the physics portion of the community climate model's (CCM2) distributed-memory parallel implementation on the Intel Touchstone DELTA computer. We note that the major source of load imbalance is the diurnal variation in the computation of solar radiation. Convective weather patterns also cause some load imbalance. Land-ocean contrast is seen to have little effect on computational load in the present version of the model.
Effective Design And Use Of Computer Decision Models.
Fuerst, William L.
19840301T23:59:59.000Z
... decision model deficiencies, evaluates selected financial simulation model packages, and suggests design needs for expanding the use of decision models to a broader range of firms. Keywords: decision models, MIS design, simulation. ACM Categories: D.3, H ... considerations. For example, some organizations may use computers with a memory so limited that use of certain decision models may be impractical. (MIS Quarterly, March 1984.) ... Hardware Constraints. Technical Background and Expertise ...
Computational Modeling and Optimization of Proton Exchange Membrane Fuel Cells
Victoria, University of
Computational Modeling and Optimization of Proton Exchange Membrane Fuel Cells, by Marc Secanell Gallart. Bachelor in Engineering ... In this thesis, a computational framework for fuel cell analysis and optimization is presented ...
Inverse Modelling in Geology by Interactive Evolutionary Computation
Boschetti, Fabio
Inverse Modelling in Geology by Interactive Evolutionary Computation. Chris Wijns, Fabio ... of geological processes, in the absence of established numerical criteria to act as inversion targets, requires ... evolutionary computation provides for the inclusion of qualitative geological expertise within a rigorous ...
Clique-detection Models in Computational Biochemistry and Genomics
Butenko, Sergiy
Clique-detection Models in Computational Biochemistry and Genomics. S. Butenko and W. E. Wilhelm ... wilhelm}@tamu.edu. Abstract: Many important problems arising in computational biochemistry and genomics have been formulated ... and genomic aspects of the problems as well as to the graph-theoretic aspects of the solution approaches. Each ...
COMPUTATIONAL FLUID DYNAMICS MODELING OF SOLID OXIDE FUEL CELLS
COMPUTATIONAL FLUID DYNAMICS MODELING OF SOLID OXIDE FUEL CELLS. Ugur Pasaogullari and Chao ... A ...dimensional model has been developed to simulate solid oxide fuel cells (SOFC). The model fully couples ... current density operation. INTRODUCTION. Solid oxide fuel cells (SOFC) are among possible candidates ...
Computationally Efficient Modeling of High-Efficiency Clean Combustion...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Meeting, June 7-11, 2010, Washington D.C. ace012aceves2010o.pdf. More Documents & Publications: Computationally Efficient Modeling of High-Efficiency Clean Combustion Engines...
Computer simulation and topological modeling of radiation effects in zircon
Zhang, Yi, 1979
20060101T23:59:59.000Z
The purpose of this study is to understand on atomic level the structural response of zircon (ZrSiO4) to irradiation using molecular dynamics (MD) computer simulations, and to develop topological models that can describe ...
15.094 Systems Optimization: Models and Computation, Spring 2002
Freund, Robert Michael
A computational and application-oriented introduction to the modeling of large-scale systems in a wide variety of decision-making domains and the optimization of such systems using state-of-the-art optimization software. ...
Language acquisition and implication for language change: A computational model.
Clark, Robert A J
19970101T23:59:59.000Z
Computer modeling techniques, when applied to language acquisition problems, give an often unrealized insight into the diachronic change that occurs in language over successive generations. This paper shows that using ...
Computational Model of Forward and Opposed Smoldering Combustion in Microgravity
Rein, Guillermo; FernandezPello, Carlos; Urban, David
20060806T23:59:59.000Z
A novel computational model of smoldering combustion capable of predicting both forward and opposed propagation is developed. This is accomplished by considering the one-dimensional, transient, governing equations for ...
Journal of Computational Acoustics, FREQUENCY DOMAIN WAVE PROPAGATION MODELLING
Sheen, Dongwoo
Journal of Computational Acoustics, (c) IMACS. FREQUENCY DOMAIN WAVE PROPAGATION MODELLING ... effect of gas, brine or oil and gas-brine or gas-oil pore fluids on seismic velocities. Numerical ...
The Nuclear Energy Advanced Modeling and Simulation Enabling Computational Technologies FY09 Report
Diachin, L F; Garaizar, F X; Henson, V E; Pope, G
20091012T23:59:59.000Z
In this document we report on the status of the Nuclear Energy Advanced Modeling and Simulation (NEAMS) Enabling Computational Technologies (ECT) effort. In particular, we provide the context for ECT in the broader NEAMS program and describe the three pillars of the ECT effort, namely, (1) tools and libraries, (2) software quality assurance, and (3) computational facility (computers, storage, etc.) needs. We report on our FY09 deliverables to determine the needs of the integrated performance and safety codes (IPSCs) in these three areas and lay out the general plan for software quality assurance to meet the requirements of DOE and the DOE Advanced Fuel Cycle Initiative (AFCI). We conclude with a brief description of our interactions with the Idaho National Laboratory computer center to determine what is needed to expand their role as a NEAMS user facility.
Sandia National Laboratories: Computational Modeling & Simulation
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
primary purpose is to model severe-accident progression in light-water-reactor (LWR) nuclear power plants. Sandia developed MELCOR for the US Nuclear Regulatory ... Sandian...
Sandia National Laboratories: Computational Modeling & Simulation
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
and atmospheric chemistry that is expected to benefit auto and engine manufacturers, oil and gas utilities, and other industries that employ combustion models. A paper...
The Computational Limit to Quantum Determinism and the Black Hole Information Loss Paradox
Arkady Bolotin
20150608T23:59:59.000Z
The present paper scrutinizes the principle of quantum determinism, which maintains that the complete information about the initial quantum state of a physical system should determine the system's quantum state at any other time. As is shown in the paper, assuming the strong exponential time hypothesis, SETH, which conjectures that known algorithms for solving computational NP-complete problems (often brute-force algorithms) are optimal, the quantum deterministic principle cannot be used generally, i.e., for randomly selected physical systems, particularly macroscopic systems. In other words, even if the initial quantum state of an arbitrary system were precisely known, as long as SETH is true it might be impossible in the real world to predict the system's exact final quantum state. The paper suggests that the breakdown of quantum determinism in a process, in which a black hole forms and then completely evaporates, might actually be physical evidence supporting SETH.
Discrete transfinite computation models
Welch, Philip
such as Davies [Davies (2001)], and the models of Beggs and Tucker [Beggs and Tucker (2007)] that attempt ... functions or oracles, such as is done in [Beggs et al. (2008)]. We also shall not particularly consider ...
Computational model of miniature pulsating heat pipes.
Martinez, Mario J.; Givler, Richard C.
20130101T23:59:59.000Z
The modeling work described herein represents Sandia National Laboratories' (SNL) portion of a collaborative three-year project with Northrop Grumman Electronic Systems (NGES) and the University of Missouri to develop an advanced thermal ground plane (TGP), a planar device that delivers heat from a source to an ambient environment with high efficiency. Work at all three institutions was funded by DARPA/MTO; Sandia was funded under DARPA/MTO project number 015070924. This is the final report on this project for SNL. This report presents a numerical model of a pulsating heat pipe, a device employing a two-phase (liquid and its vapor) working fluid confined in a closed-loop channel etched/milled into a serpentine configuration in a solid metal plate. The device delivers heat from an evaporator (hot zone) to a condenser (cold zone). This new model includes key physical processes important to the operation of flat-plate pulsating heat pipes (e.g., dynamic bubble nucleation, evaporation, and condensation), together with conjugate heat transfer with the solid portion of the device. The model qualitatively and quantitatively predicts performance characteristics and metrics, which was demonstrated by favorable comparisons with experimental results on similar configurations. Application of the model also corroborated many previous performance observations with respect to key parameters such as heat load, fill ratio, and orientation.
Nonlinear regression models for Approximate Bayesian Computation
Robert, Christian P.
Nonlinear regression models for Approximate Bayesian Computation (ABC). Michael Blum, Olivier ... Blum and ... (2009) suggest the use of nonlinear conditional heteroscedastic regression models ... Linear regression-based ABC can sometimes be improved ... using stochastic simulations.
Computational Modeling of Brain Dynamics during Repetitive Head Motions
Burtscher, Martin
Computational Modeling of Brain Dynamics during Repetitive Head Motions. Igor Szczyrba ... the HIC scale to arbitrary head motions. Our simulations of the brain dynamics in sagittal and horizontal ... Keywords: injury modeling, resonance effects. Introduction: A rapid head motion can result in a severe brain injury.
Personal ComputerBased Model for Cool Storage Performance Simulation
Kasprowicz, L. M.; Jones, J. W.; Hitzfelder, J.
19900101T23:59:59.000Z
A personal-computer-based hourly simulation model was developed based on the CBS/ICE routines in the DOE-2.1 mainframe building simulation software. The menu-driven new model employs more efficient data and information handling than the previous...
Computational Fluid Dynamics (CFD) Modelling on Soot Yield for Fire
Yong S... Computational Fluid Dynamics (CFD) modelling is now widely used by fire safety engineers throughout the world as a tool for smoke control design, as part of performance-based fire safety design in the current industry.
Computing the Electricity Market Equilibrium: Uses of market equilibrium models
Baldick, Ross
... on power system operation greatly complicate the application of economic analysis to electricity markets. Abstract: In this paper we consider the formulation and uses of electricity market equilibrium models.
Computational social network modeling of terrorist recruitment.
Berry, Nina M.; Turnley, Jessica Glicken (Sandia National Laboratories, Albuquerque, NM); Smrcka, Julianne D. (Sandia National Laboratories, Albuquerque, NM); Ko, Teresa H.; Moy, Timothy David (Sandia National Laboratories, Albuquerque, NM); Wu, Benjamin C.
20041001T23:59:59.000Z
The Seldon terrorist model represents a multidisciplinary approach to developing organization software for the study of terrorist recruitment and group formation. The need to incorporate aspects of social science contributed significantly to the vision of the resulting Seldon toolkit. The unique addition of an abstract agent category provided a means for capturing social concepts such as cliques and mosques in a manner that represents their social conceptualization, not simply their role as physical or economic institutions. This paper provides an overview of the Seldon terrorist model, developed to study the formation of cliques, which serve as the major recruitment entity for terrorist organizations.
Protein Models Comparator: Scalable Bioinformatics Computing on ...
Krasnogor, Natalio
... of parameters of energy functions used in template-free modelling and refinement. Although many protein ... Engine cloud platform and is a showcase of how the emerging PaaS (Platform as a Service) technology could ... the predicted structure is compared against the target native structure. This type of evaluation is performed ...
Computing Biological Model Parameters by Parallel Statistical Model Checking
Tronci, Enrico
... of Treatments for Infertility Related Endocrinological Diseases, 600773). ... patient-specific model parameters
Models of quantum computation and quantum programming languages
J. A. Miszczak
20111203T23:59:59.000Z
The goal of the presented paper is to provide an introduction to the basic computational models used in quantum information theory. We review various models of quantum Turing machines, quantum circuits, and the quantum random access machine (QRAM), along with their classical counterparts. We also provide an introduction to quantum programming languages, which are developed using the QRAM model. We review the syntax of several existing quantum programming languages and discuss their features and limitations.
A gas kick model for the personal computer
Miller, Clayton Lowell
19870101T23:59:59.000Z
A Gas Kick Model for the Personal Computer. A thesis by Clayton Lowell Miller, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 1987. Major Subject: Petroleum Engineering. Approved as to style and content by the thesis committee.
Integrated Multiscale Modeling of Molecular Computing Devices
Weinan E
20120329T23:59:59.000Z
The main bottleneck in modeling transport in molecular devices is to develop the correct formulation of the problem and efficient algorithms for analyzing the electronic structure and dynamics using, for example, time-dependent density functional theory. We have divided this task into several steps. The first step is to develop the right mathematical formulation and numerical algorithms for analyzing the electronic structure using density functional theory. The second step is to study time-dependent density functional theory, particularly the far-field boundary conditions. The third step is to study electronic transport in molecular devices. We are now at the end of the first step. Under DOE support, we have made substantial progress in developing linear-scaling and sublinear-scaling algorithms for electronic structure analysis. Although there has been a huge amount of effort in the past on developing linear-scaling algorithms, most of the algorithms developed suffer from a lack of robustness and controllable accuracy. We have made the following progress: (1) We have analyzed thoroughly the localization properties of the wavefunctions. We have developed a clear understanding of the physical as well as mathematical origin of the decay properties. One important conclusion is that even for metals, one can choose wavefunctions that decay faster than any algebraic power. (2) We have developed algorithms that make use of these localization properties. Our algorithms are based on nonorthogonal formulations of density functional theory. Our key contribution is to add a localization step into the algorithm. The addition of this localization step makes the algorithm quite robust and much more accurate. Moreover, we can control the accuracy of these algorithms by changing the numerical parameters. (3) We have considerably improved the Fermi operator expansion (FOE) approach. Through pole expansion, we have developed an optimal-scaling FOE algorithm.
Computational model for high-energy laser-cutting process
Kim, M.J.; Majumdar, P. [Northern Illinois Univ., DeKalb, IL (United States). Dept. of Mechanical Engineering
19950601T23:59:59.000Z
A computational model for the simulation of a laser-cutting process has been developed using a finite element method. A transient heat transfer model is considered that treats the material-cutting process using a Gaussian continuous-wave laser beam. Numerical experimentation is carried out for mesh refinements and the rate of convergence in terms of groove shape and temperature. Results are also presented for the prediction of groove depth at different moving speeds.
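The abstract above describes a Gaussian continuous-wave beam as the heat source. As a generic illustration of that boundary condition (a sketch of the standard axisymmetric Gaussian flux profile, not the paper's finite-element model; the function name and the demo numbers are ours):

```python
import numpy as np

def gaussian_flux(r, P, w):
    """Surface heat flux (W/m^2) of a CW Gaussian beam with total power
    P (W) and 1/e^2 radius w (m), at distance r (m) from the beam axis."""
    return 2.0 * P / (np.pi * w**2) * np.exp(-2.0 * r**2 / w**2)

# Sanity check: integrating q(r) * 2*pi*r over the plane recovers P.
P, w = 100.0, 1e-4
n = 100_000
dr = 5 * w / n
r = (np.arange(n) + 0.5) * dr          # midpoint rule out to 5 beam radii
total = np.sum(gaussian_flux(r, P, w) * 2 * np.pi * r) * dr
```

The peak flux 2P/(πw²) and the normalization are generic properties of a Gaussian irradiance profile, which is why the numerical integral of the flux over the workpiece surface returns the total beam power.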
Faceted Models of Blog Feeds
Meng, Weiyi
Lifeng Jia, Department of Computer Science, University of Illinois. ABSTRACT: Faceted blog distillation aims at retrieving blogs that are not only relevant to a query ... blogs depict various topics related to the personal experiences of bloggers, while official blogs deliver ...
Open Learner Models at Birmingham Electronic, Electrical and Computer Engineering
Bull, Susan
Open Learner Models at Birmingham, Electronic, Electrical and Computer Engineering. Courses include: EE1B2 Circuits, Devices and Fields; EE1E1&2 C Programming; EE1F1 Introduction to Information ...; EE2D2 Introduction to Communications; EE2F1 Speech and Audio Technology; EE2G ... Engineering Principles and Methods.
Computational Modeling of Pancreatic Cancer Reveals Kinetics of Metastasis
Computational Modeling of Pancreatic Cancer Reveals Kinetics of Metastasis Suggesting ... and size distribution of metastases as well as patient survival. These findings were validated ... death and one of the most aggressive malignancies in humans, with a five-year relative survival rate ...
Thermal building simulation and computer generation of nodal models
ParisSud XI, Université de
Thermal building simulation and computer generation of nodal models. H. Boyer, J.P. Chabriat, B. ... in the development of several packages simulating the dynamic behaviour of buildings. This article presents the chosen method in the case of our thermal simulation program for buildings, CODYRUN.
Computer Modeling of Crystalline Electrolytes Lithium Thiophosphates and Phosphates
Holzwarth, Natalie
Computer Modeling of Crystalline Electrolytes: Lithium Thiophosphates and Phosphates. N. D. Lepley ... During the last 5 years, lithium thiophosphate solid electrolyte materials have ... properties of (thio)phosphate electrolyte materials, focusing on the "superionic" electrolyte Li7P3S11 ... migration.
NREL Computer Models Integrate Wind Turbines with Floating Platforms
Far off the shores of energy-hungry coastal cities, powerful winds blow over the open ocean, where the water is too deep for today's seabed-mounted offshore wind turbines. For the United States to tap into these vast offshore ...
Computational Model for Forced Expiration from Asymmetric Normal Lungs
Lutchen, Kenneth
Computational Model for Forced Expiration from Asymmetric Normal Lungs. Adam G. Polak ... losses along the airway branches. Calculations done for succeeding lung volumes result in the semi-dynamic ... to the choke points, characteristic differences of lung regional pressures and volumes, and a shape ...
Concurrent multiscale computational modeling for dense dry granular materials interfacing
Regueiro, Richard A.
... computational modeling of interfacial mechanics between granular materials and deformable solid bodies — between granular soil and tire, tool, or penetrometer — while properly representing far-field ... agricultural grains (in silo flows), dry soils (sand, silt, gravel), and lunar and martian regolith (soil found ...
Computational model of aortic valve surgical repair using grafted pericardium
Computational model of aortic valve surgical repair using grafted pericardium. Peter E. Hammer ... Keywords: aortic valve repair, membrane, surgical planning, leaflet graft, pericardium. ABSTRACT: Aortic valve ... leaflets. Difficulty is largely due to the complex geometry and function of the valve and the lower ...
Learning words from sights and sounds: a computational model
Learning words from sights and sounds: a computational model. Deb K. Roy, Alex P. Pentland, MIT. Email address: dkroy@media.mit.edu (D.K. Roy).
The use of computed radiography plates to determine light and radiation field coincidence
Kerns, James R. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 and Graduate School of Biomedical Sciences, The University of Texas Health Science Center Houston, Houston, Texas 77030 (United States)]; Anand, Aman [Department of Radiation Oncology, Mayo Clinic, Scottsdale, Arizona 85259 (United States)]
20131115T23:59:59.000Z
Purpose: Photostimulable phosphor computed radiography (CR) has characteristics that allow the output to be manipulated by both radiation and optical light. The authors have developed a method that uses these characteristics to carry out radiation field and light field coincidence quality assurance on linear accelerators. Methods: CR detectors from Kodak were used outside their cassettes to measure both radiation and light field edges from a Varian linear accelerator. The CR detector was first exposed to a radiation field and then to a slightly smaller light field. The light impinged on the detector's latent image, partially erasing the portion exposed to the light field. The detector was then digitally scanned. A MATLAB-based algorithm was developed to automatically analyze the images and determine the edges of the light and radiation fields, the vector between the field centers, and the crosshair center. Radiographic film was also used as a control to confirm the radiation field size. Results: Analysis showed a high degree of repeatability with the proposed method. Results between the proposed method and radiographic film showed excellent agreement on the radiation field. The effect of varying monitor units and light exposure time was tested and found to be very small. Radiation and light field sizes were determined with an uncertainty of less than 1 mm, and light and crosshair centers were determined within 0.1 mm. Conclusions: A new method was developed to digitally determine the radiation and light field size using CR photostimulable phosphor plates. The method is quick and reproducible, allowing for streamlined and robust assessment of light and radiation field coincidence, with no observer interpretation needed.
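The MATLAB-based analysis described above locates field edges automatically. A common convention (assumed here; the paper's exact algorithm is not reproduced, and all names below are hypothetical) places a field edge where the normalized profile crosses 50% of its maximum, with sub-pixel accuracy from linear interpolation:

```python
import numpy as np

def field_edges(profile, positions, threshold=0.5):
    """Locate where a 1-D profile crosses `threshold` of its maximum
    (the conventional 50% field-edge definition), interpolating linearly
    between samples. Assumes the profile rises above threshold somewhere
    strictly inside the scan."""
    p = np.asarray(profile, dtype=float)
    p = p / p.max()
    idx = np.flatnonzero(p >= threshold)
    i0, i1 = idx[0], idx[-1]
    def cross(a, b):
        # linear interpolation between adjacent samples a and b
        return positions[a] + (threshold - p[a]) * (positions[b] - positions[a]) / (p[b] - p[a])
    return cross(i0 - 1, i0), cross(i1, i1 + 1)

# Demo on a synthetic trapezoidal profile whose 50% level sits at +/- 5 mm.
x = np.linspace(-10.0, 10.0, 201)                      # positions in mm
profile = np.clip((6.0 - np.abs(x)) / 2.0, 0.0, 1.0)
left, right = field_edges(profile, x)
```

Applied to the light-field and radiation-field profiles separately, the offset between the two pairs of edges gives the coincidence error, which is the quantity the QA test reports.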
Nanostructure Determination by CoRefining Models to Multiple Datasets
Billinge, Simon
20110531T23:59:59.000Z
The results of the work are contained in the publications resulting from the grant (which are listed below). Here I summarize the main findings from the last period of the award, 2006-2007: • Published a paper in Science with Igor Levin outlining the “Nanostructure Problem”, our inability to solve structure at the nanoscale. • Published a paper in Nature demonstrating the first-ever ab initio structure determination of a nanoparticle from atomic pair distribution function (PDF) data. • Published one book and 3 overview articles on PDF methods and the nanostructure problem. • Completed a project that sought to find a structural response to the presence of the so-called “intermediate phase” in network glasses, which appears close to the rigidity percolation threshold in these systems. The main result was that we did not see convincing evidence for this, which drew into doubt the idea that GexSe1-x glasses were a model system exhibiting rigidity percolation.
Challenges for the CMS computing model in the first year
Fisk, I.; /Fermilab
20090501T23:59:59.000Z
CMS is in the process of commissioning a complex detector and a globally distributed computing infrastructure simultaneously. This represents a unique challenge. Even at the beginning there is not sufficient analysis or organized processing resources at CERN alone. In this presentation we discuss the unique computing challenges CMS expects to face during the first year of running and how they influence the baseline computing model decisions. During the early accelerator commissioning periods, CMS will attempt to collect as many events as possible when the beam is on in order to provide adequate early commissioning data. Some of these plans involve overdriving the Tier0 infrastructure during data collection with recovery when the beam is off. In addition to the larger number of triggered events, there will be pressure in the first year to collect and analyze more complete data formats as the summarized formats mature. The large event formats impact the required storage, bandwidth, and processing capacity across all the computing centers. While the understanding of the detector and the event selections is being improved, there will likely be a larger number of reconstruction passes and skims performed by both central operations and individual users. We discuss how these additional stresses impact the allocation of resources and the changes from the baseline computing model.
A New Perspective for the Calibration of Computational Predictor Models.
Crespo, Luis Guillermo
20141101T23:59:59.000Z
This paper presents a framework for calibrating computational models using data from several and possibly dissimilar validation experiments. The offset between model predictions and observations, which might be caused by measurement noise, model-form uncertainty, and numerical error, drives the process by which uncertainty in the model's parameters is characterized. The resulting description of uncertainty, along with the computational model, constitutes a predictor model. Two types of predictor models are studied: Interval Predictor Models (IPMs) and Random Predictor Models (RPMs). IPMs use sets to characterize uncertainty, whereas RPMs use random vectors. The propagation of a set through a model makes the response an interval-valued function of the state, whereas the propagation of a random vector yields a random process. Optimization-based strategies for calculating both types of predictor models are proposed. Whereas the formulations used to calculate IPMs target solutions leading to the interval-valued function of minimal spread containing all observations, those for RPMs seek to maximize the models' ability to reproduce the distribution of observations. Regarding RPMs, we choose a structure for the random vector (i.e., the assignment of probability to points in the parameter space) solely dependent on the prediction error. As such, the probabilistic description of uncertainty is not a subjective assignment of belief, nor is it expected to asymptotically converge to a fixed value; instead, it is a description of the model's ability to reproduce the experimental data. This framework enables evaluating the spread and distribution of the predicted response of target applications depending on the same parameters beyond the validation domain (i.e., roll-up and extrapolation).
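As a toy illustration of the interval-predictor idea (a sketch of the general concept only, not the paper's optimization formulation; the function names and data are ours): for a fixed linear center function, the minimal-spread uniform band containing all observations follows directly from the extreme residuals.

```python
import numpy as np

def interval_predictor(x, y):
    """Toy IPM: a least-squares line plus the tightest uniform band
    (offsets set by the extreme residuals) that covers every observation."""
    A = np.column_stack([x, np.ones_like(x)])
    (m, b), *_ = np.linalg.lstsq(A, y, rcond=None)
    r = y - (m * x + b)                  # residuals of the center function
    lo, hi = r.min(), r.max()            # extreme offsets -> minimal spread
    def bounds(xq):
        f = m * xq + b
        return f + lo, f + hi            # lower and upper envelope
    return bounds, hi - lo

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.2, 1.8, 3.1, 3.9])
bounds, spread = interval_predictor(x, y)
lower, upper = bounds(x)
```

By construction every residual lies in [lo, hi], so every observation lies inside the band; the paper's formulations generalize this coverage-with-minimal-spread idea to richer function classes via optimization.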
A Computational Model of Limb Impedance Control Based on Principles of Internal Model Uncertainty
Vijayakumar, Sethu
School of Informatics, University of Edinburgh, Edinburgh, United Kingdom; ATR Computational Neuroscience Laboratories ... uncertainties, along with energy and accuracy demands. The insights from this computational model could be used ... This is an effortless task; however, if suddenly a seemingly random wind gust perturbs the umbrella, you will typically ...
The origins of computer weather prediction and climate modeling
Lynch, Peter [Meteorology and Climate Centre, School of Mathematical Sciences, University College Dublin, Belfield (Ireland)], Email: Peter.Lynch@ucd.ie
20080320T23:59:59.000Z
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
A General Hippocampal Computational Model Combining Episodic and Spatial Memory in a Spiking Model
Aguiar, Paulo de Castro
The hippocampus, in humans and rats, plays crucial roles in spatial tasks and non-spatial tasks involving episodic-type memory. This thesis presents a novel computational model of the hippocampus (CA1, CA3 and dentate ...
ONSET OF CHAOS IN A MODEL OF QUANTUM COMPUTATION
G. Berman et al.
20010201T23:59:59.000Z
Recently, the question of the relevance of so-called quantum chaos has been raised in applications to quantum computation [2,3]. Indeed, according to the general approach to closed systems of a finite number of interacting Fermi particles (see, e.g., [4,5]), with an increase of the interaction between qubits a kind of chaos is expected to emerge in the energy spectra and structure of many-body states. Specifically, the fluctuations of energy levels and components of the eigenstates turn out to be very strong and described by Random Matrix Theory. Clearly, if this happens in a quantum computer, it may lead to a destruction of the coherence of quantum computations due to internal decoherence inside many-body states. It is important to stress that quantum chaos occurs not only in systems with random interaction, but also for purely dynamical interaction. In the latter case, the mechanism of chaos is due to a complex (nonlinear) form of a two-body interaction represented in the basis of noninteracting particles. Numerical analysis [2] of a simplest model of a quantum computer (a 2D model of 1/2-spins with a random inter-qubit interaction J) shows that with an increase of the number L of qubits, the chaos threshold J_cr decreases as J_cr ∝ 1/L. On this ground, it was claimed that the onset of quantum chaos could be dangerous for quantum computers, since their effectiveness requires L >> 1. On the other hand, in [3] it was argued that in order to treat this problem properly, one needs to distinguish between chaotic properties of stationary states and the dynamical process of quantum computation.
Nonlinear inversion modeling for Ultrasound Computer Tomography: transition from soft to hard
Université de Paris-Sud XI
... Marseille cedex 20, France. ABSTRACT: Ultrasound Computer Tomography (UCT) is an imaging technique which has ... experiments. Keywords: Ultrasound Computer Tomography, Inverse Born Approximation, Elliptical Projection
A Model Independent Approach for Determining the Fragmentation Functions
Christova, Ekaterina [Institute for Nuclear research and Nuclear Energy, Sofia (Bulgaria); Leader, Elliot [Imperial College, London (United Kingdom)
20090804T23:59:59.000Z
We show that the difference cross sections in unpolarized semi-inclusive deep inelastic scattering (SIDIS) e + N → e + h + X and pp hadron production p + p → h + X independently determine, in a model-independent way and in any order in Quantum Chromodynamics (QCD), the two FFs D_u^{h−h̄} and D_d^{h−h̄}, where h = π⁺, K⁺, or a sum over charged hadrons. If both K⁺ and K₂⁰ are measured, then e⁺e⁻ → K + X, e + N → e + K + X, and p + p → K + X present independent measurements of just one FF: D_{ud}^{K⁺+K⁻}. The above results allow testing of the existing parameterizations, obtained with various different assumptions about the FFs, and of the Q² evolution and factorization.
Final Report: Center for Programming Models for Scalable Parallel Computing
Mellor-Crummey, John [William Marsh Rice University]
20110913T23:59:59.000Z
As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.
Computationally Efficient Use of Derivatives in Emulation of Complex Computational Models
Williams, Brian J. [Los Alamos National Laboratory]; Marcy, Peter W. [University of Wyoming]
20120607T23:59:59.000Z
We will investigate the use of derivative information in complex computer model emulation when the correlation function is of the compactly supported Bohman class. To this end, a Gaussian process model similar to that used by Kaufman et al. (2011) is extended to a situation where first partial derivatives in each dimension are calculated at each input site (i.e., using gradients). A simulation study in the ten-dimensional case is conducted to assess the utility of the Bohman correlation function against strictly positive correlation functions when a high degree of sparsity is induced.
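The compact support of the Bohman correlation is what induces the sparsity mentioned above: correlations are exactly zero beyond a cutoff distance. A minimal sketch of the function itself (a standard closed form for the Bohman class; the helper name is ours):

```python
import numpy as np

def bohman(t):
    """Bohman correlation: rho(t) = (1 - t) cos(pi t) + sin(pi t) / pi
    for 0 <= t <= 1, and exactly zero for t > 1 (compact support)."""
    t = np.abs(np.asarray(t, dtype=float))
    core = (1.0 - t) * np.cos(np.pi * t) + np.sin(np.pi * t) / np.pi
    return np.where(t < 1.0, core, 0.0)
```

Feeding pairwise distances scaled by a correlation range through `bohman` yields a covariance matrix that is exactly zero between far-apart input sites, which is what enables sparse linear algebra in the emulator.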
Comprehensive computer model for magnetron sputtering. II. Charged particle transport
Jimenez, Francisco J., Email: fjimenez@ualberta.ca; Dew, Steven K. [Department of Electrical and Computer Engineering, University of Alberta, Edmonton T6G 2V4 (Canada); Field, David J. [Smith and Nephew (Alberta) Inc., Fort Saskatchewan T8L 4K4 (Canada)
20141101T23:59:59.000Z
Discharges for magnetron sputter thin-film deposition systems involve complex plasmas that are sensitively dependent on magnetic field configuration and strength, working gas species and pressure, chamber geometry, and discharge power. The authors present a numerical formulation for the general solution of these plasmas as a component of a comprehensive simulation capability for planar magnetron sputtering. This is an extensible, fully three-dimensional model supporting realistic magnetic fields and is self-consistently solvable on a desktop computer. The plasma model features a hybrid approach involving a Monte Carlo treatment of energetic electrons and ions, along with a coupled fluid model for thermalized particles. Validation against a well-known one-dimensional system is presented. Various strategies for improving numerical stability are investigated, as is the sensitivity of the solution to various model and process parameters. In particular, the effects of magnetic field, argon gas pressure, and discharge power are studied.
Computer Modeling of Violent Intent: A Content Analysis Approach
Sanfilippo, Antonio P.; Mcgrath, Liam R.; Bell, Eric B.
20140103T23:59:59.000Z
We present a computational approach to modeling the intent of a communication source representing a group or an individual to engage in violent behavior. Our aim is to identify and rank aspects of radical rhetoric that are endogenously related to violent intent to predict the potential for violence as encoded in written or spoken language. We use correlations between contentious rhetoric and the propensity for violent behavior found in documents from radical terrorist and nonterrorist groups and individuals to train and evaluate models of violent intent. We then apply these models to unseen instances of linguistic behavior to detect signs of contention that have a positive correlation with violent intent factors. Of particular interest is the application of violent intent models to social media, such as Twitter, that have proved to serve as effective channels in furthering sociopolitical change.
Interactive OffLine Computer Modeling for Powerhouse Operations
Delk, S. R.; Jones, W. G.
19820101T23:59:59.000Z
evaluate. To determine the additional operating cost of not having the gas turbine available, the computer operator would go back to the original input data... In the initial runs of the program, the savings have ranged from $100,000 per year to $800,000 per year... process heating loads in the Deer Park Complex plus several steam turbine drivers in the Chlorine Plant. The bulk of the steam supplied to this header comes from the 5th-stage extraction on the three turbogenerators. The No. 7 Boiler generates...
Computer Modeling and Simulation of an Active Vision Camera System. Ming-Chin Lu
Subbarao, Murali "Rao"
Computer simulation avoids the necessity of building actual camera systems. Based on the proposed model and algorithms, a computer simulation system called Active Vision Simulator ...
Determination of Retrofit Savings Using a Calibrated Building Energy Simulation Model
Reddy, S. N.; Hunn, B. D.; Hood, D. B.
19940101T23:59:59.000Z
This paper presents the development of a methodology to determine retrofit energy savings in buildings when few measured pre-retrofit data are available. Calibration of the DOE-2 building energy analysis computer program for a 250,000 ft2 building...
Models and People: an alternative view of the emergent properties of computational models
Boschetti, Fabio
be used to address scientific questions or support real-world decision making. At these higher levels an ecological model may be used i) to address a scientific question and ii) to show how the scientific insight so gained informs a management problem (Figure 1, top)... Computational models and scientific questions: Within...
Not Available
20120801T23:59:59.000Z
Cation degradation insights obtained by computational modeling could result in better performance and longer lifetime for alkaline membrane fuel cells.
Eldridge, R. Bruce
Novel Application of X-ray Computed Tomography: Determination of Gas/Liquid Contact Area and Liquid... X-ray computed tomography (CT) was utilized... Principles of X-ray Computed Tomography: X-ray computed tomography (CT) is used to noninvasively...
Computational fluid dynamics modeling of coal gasification in a pressurized spout-fluid bed
Zhongyi Deng; Rui Xiao; Baosheng Jin; He Huang; Laihong Shen; Qilei Song; Qianjun Li [Southeast University, Nanjing (China). Key Laboratory of Clean Coal Power Generation and Combustion Technology of Ministry of Education]
20080515T23:59:59.000Z
Computational fluid dynamics (CFD) modeling, which has recently proven to be an effective means of analysis and optimization of energy-conversion processes, has been extended to coal gasification in this paper. A 3D mathematical model has been developed to simulate the coal gasification process in a pressurized spout-fluid bed. This CFD model is composed of gas-solid hydrodynamics, coal pyrolysis, char gasification, and gas-phase reaction sub-models. The rates of the heterogeneous reactions are determined by combining the Arrhenius rate and the diffusion rate. The homogeneous gas-phase reactions are treated as secondary reactions. A comparison of the calculated and experimental data shows that most gasification performance parameters can be predicted accurately. This good agreement indicates that CFD modeling can be used for complex fluidized-bed coal gasification processes. 37 refs., 7 figs., 5 tabs.
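The combination of an Arrhenius kinetic rate with a diffusion rate that this abstract mentions is commonly done by treating the two as resistances in series; the sketch below assumes that form (the paper's exact expression is not given here), and the values of A, E, T, and k_diff are purely illustrative:

```python
import math

def effective_char_rate(A, E, T, k_diff, R=8.314):
    """Combine an Arrhenius kinetic rate with a gas-film diffusion rate.

    Assumed resistances-in-series closure (a common choice, not
    necessarily the paper's): k_eff = 1 / (1/k_kin + 1/k_diff).
    """
    k_kin = A * math.exp(-E / (R * T))  # Arrhenius kinetic rate constant
    return 1.0 / (1.0 / k_kin + 1.0 / k_diff)

# The effective rate is bounded above by the slower of the two steps.
k = effective_char_rate(A=1.0e4, E=1.2e5, T=1200.0, k_diff=0.05)
```

With these illustrative numbers the kinetic and diffusion rates are comparable, so neither step alone controls the effective rate.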
How Computational Models Predict the Behavior of Complex Systems
Boschetti, Fabio
How Computational Models Predict the Behavior of Complex Systems, John Symons and Fabio Boschetti... the role of prediction in the use of computational models in science. We focus on the consequences of the irreversibility of computational models and on the conditional, or ceteris paribus, nature of the kinds of predictions they make...
FINITE VOLUME METHODS APPLIED TO THE COMPUTATIONAL MODELLING OF WELDING PHENOMENA
Taylor, Gary
FINITE VOLUME METHODS APPLIED TO THE COMPUTATIONAL MODELLING OF WELDING PHENOMENA (Gareth A.Taylor@brunel.ac.uk) ABSTRACT: This paper presents the computational modelling of welding phenomena within a versatile numerical... and Computational Solid Mechanics (CSM). With regard to the CFD modelling of the weld pool fluid dynamics, heat...
Boutchko, R.
20140101T23:59:59.000Z
emission tomography systems and computational fluid dynamics... a computational fluid dynamics (CFD) model of the system... the computational domain. A Cartesian coordinate system was...
Modeling-Computer Simulations (Gritto & Majer) - Open Energy Information
Computing model independent perturbations in dark energy and modified gravity
Battye, Richard A. [Jodrell Bank Centre for Astrophysics, School of Physics and Astronomy, The University of Manchester, Manchester M13 9PL (United Kingdom); Pearson, Jonathan A., Email: richard.battye@manchester.ac.uk, Email: jonathan.pearson@durham.ac.uk [Department of Mathematical Sciences, Durham University, South Road, Durham, DH1 3LE (United Kingdom)
20140301T23:59:59.000Z
We present a methodology for computing model independent perturbations in dark energy and modified gravity. This is done from the Lagrangian for perturbations, by showing how field content, symmetries, and physical principles are often sufficient ingredients for closing the set of perturbed fluid equations. The fluid equations close once ''equations of state for perturbations'' are identified: these are linear combinations of fluid and metric perturbations which construct gauge invariant entropy and anisotropic stress perturbations for broad classes of theories. Our main results are the proof of the equation of state for perturbations presented in a previous paper, and the development of the required calculational tools.
Computing Limb Darkening Coefficients from Stellar Atmosphere Models
David Heyrovsky
20061024T23:59:59.000Z
We explore the sensitivity of limb darkening coefficients computed from stellar atmosphere models to different leastsquares fitting methods. We demonstrate that conventional methods are strongly biased to fitting the stellar limb. Our suggested method of fitting by minimizing the radially integrated squared residual yields improved fits with better flux conservation. The differences of the obtained coefficients from commonly used values are observationally significant. We show that the new values are in better agreement with solar limb darkening measurements as well as with coefficients reported from analyses of eclipsing binary light curves.
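As a rough sketch of the fitting comparison described above, the snippet below fits a one-coefficient linear limb-darkening law under two weighting schemes: uniform weights versus weights proportional to the disk radius r = sqrt(1 - mu^2), a crude stand-in for the radially integrated squared residual (the profile shape and all numbers are hypothetical, not the paper's data):

```python
import math

def fit_linear_ld(mus, intensities, weights):
    """Weighted least-squares fit of u in I(mu) = 1 - u*(1 - mu).

    Closed form for the single-coefficient linear law:
        u = sum(w * x * y) / sum(w * x**2),  x = 1 - mu, y = 1 - I.
    """
    num = sum(w * (1 - m) * (1 - i) for m, i, w in zip(mus, intensities, weights))
    den = sum(w * (1 - m) ** 2 for m, i, w in zip(mus, intensities, weights))
    return num / den

# Synthetic profile that a linear law cannot match exactly (a sqrt-law
# shape; hypothetical numbers for illustration only).
mus = [0.01 * k for k in range(1, 100)]
profile = [1 - 0.6 * (1 - math.sqrt(m)) for m in mus]

u_uniform = fit_linear_ld(mus, profile, [1.0] * len(mus))
# Radial weighting: w proportional to r = sqrt(1 - mu^2).
u_radial = fit_linear_ld(mus, profile, [math.sqrt(1 - m * m) for m in mus])
```

Because the linear law cannot reproduce the synthetic profile exactly, the two weightings return different coefficients, which is the kind of sensitivity the abstract reports.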
Computer vision determination of the stem/root joint on processing carrots
Batchelor, Matthew McMahon
19870101T23:59:59.000Z
This paper documents the methods, procedures, equipment, testing, and analysis which led to the conclusion that the Midpoint Method could perform the visual inspection operation needed for an automated carrot crown trimming device... Applying Computer Vision to Carrot Processing... Computer Vision Equipment and Algorithm Development... Description of Equipment... Carrots... Conveying Mechanism...
A Unified Computational Model for Solar and Stellar Flares
Allred, Joel C; Carlsson, Mats
20150101T23:59:59.000Z
We present a unified computational framework which can be used to describe impulsive flares on the Sun and on dMe stars. The models assume that the flare impulsive phase is caused by a beam of charged particles that is accelerated in the corona and propagates downward, depositing energy and momentum along the way. This rapidly heats the lower stellar atmosphere, causing it to explosively expand and dramatically brighten. Our models consist of flux tubes that extend from the sub-photosphere into the corona. We simulate how flare-accelerated charged particles propagate down one-dimensional flux tubes and heat the stellar atmosphere using Fokker-Planck kinetic theory. Detailed radiative transfer is included so that model predictions can be directly compared with observations. The flux of flare-accelerated particles drives return currents which additionally heat the stellar atmosphere. These effects are also included in our models. We examine the impact of the flare-accelerated particle beams on model solar and...
Modeling the Fracture of Ice Sheets on Parallel Computers
Waisman, Haim [Columbia University]; Tuminaro, Ray [Sandia National Labs]
20131010T23:59:59.000Z
The objective of this project was to investigate the complex fracture of ice and understand its role within larger ice sheet simulations and global climate change. This objective was achieved by developing novel physics-based models for ice, developing novel numerical tools to enable the modeling of the physics, and collaborating with ice community experts. At the present time, ice fracture is not explicitly considered within ice sheet models, due in part to the large computational costs associated with accurate modeling of this complex phenomenon. However, fracture not only plays an extremely important role in regional behavior but also influences ice dynamics over much larger zones in ways that are currently not well understood. To this end, our research findings through this project offer a significant advancement to the field and close a large gap in understanding and modeling the fracture of ice sheets in the polar regions. Thus, we believe that our objective has been achieved and our research accomplishments are significant. This is corroborated through a set of published papers, posters, and presentations at technical conferences in the field. In particular, significant progress has been made in the mechanics of ice, the fracture of ice sheets and ice shelves in polar regions, and sophisticated numerical methods that enable the solution of the physics in an efficient way.
Center for Programming Models for Scalable Parallel Computing
John MellorCrummey
20080229T23:59:59.000Z
Rice University's achievements as part of the Center for Programming Models for Scalable Parallel Computing include: (1) design and implementation of cafc, the first multi-platform CAF compiler for distributed and shared-memory machines, (2) performance studies of the efficiency of programs written using the CAF and UPC programming models, (3) a novel technique to analyze explicitly-parallel SPMD programs that facilitates optimization, (4) design, implementation, and evaluation of new language features for CAF, including communication topologies, multi-version variables, and distributed multithreading to simplify development of high-performance codes in CAF, and (5) a synchronization strength reduction transformation for automatically replacing barrier-based synchronization with more efficient point-to-point synchronization. The prototype Coarray Fortran compiler cafc developed in this project is available as open source software from http://www.hipersoft.rice.edu/caf.
The impact of the Self-Determined Learning Model of Instruction on student self-determination
Wehmeyer, Michael L.; Shogren, Karrie A.; Palmer, Susan B.; Williams-Diehm, Kendra; Little, Todd D.; Boulton, Aaron Jacob
20120101T23:59:59.000Z
possible, attrition of teachers/students at control campuses. After obtaining consent and assent to participate, we collected baseline data, including demographic information and two measures of self-determination. We also collected self-determination data at the end of Year 1 and of Year 2. As expected, there was attrition in the sample over time. At the end of Year 1, 103 control-group students had completed baseline and Year 1 posttest scores from the SDS (Wehmeyer & Kelchner, 1995) and 94...
A PKN Hydraulic Fracture Model Study and Formation Permeability Determination
Xiang, Jing
20120214T23:59:59.000Z
as a function of treatment parameters. There are various models used to approximately define the development of fracture geometry, which can be broadly classified into 2D and 3D categories. 2D models include the Perkins-Kern-Nordgren (PKN) fracture...
Determining Identifiable Parameterizations for Large-scale Physical Models in
Van den Hof, Paul
...Novem (Dutch Government). ISAPP (Integrated Systems Approach to Petroleum Production) is a joint project... as applied in the field of petroleum reservoir engineering. Starting from a large-scale, physics-based model... models in petroleum reservoir engineering. Petroleum reservoir engineering is concerned with maximizing...
Experimental validation of a kilovoltage xray source model for computing imaging dose
Poirier, Yannick, Email: yannick.poirier@cancercare.mb.ca [CancerCare Manitoba, 675 McDermot Ave, Winnipeg, Manitoba R3E 0V9 (Canada)]; Kouznetsov, Alexei; Koger, Brandon [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)]; Tambasco, Mauro, Email: mtambasco@mail.sdsu.edu [Department of Physics, San Diego State University, San Diego, California 92182-1233 and Department of Physics and Astronomy and Department of Oncology, University of Calgary, Calgary, Alberta T2N 1N4 (Canada)]
20140415T23:59:59.000Z
Purpose: To introduce and validate a kilovoltage (kV) x-ray source model and characterization method to compute absorbed dose accrued from kV x-rays. Methods: The authors propose a simplified virtual point source model and characterization method for a kV x-ray source. The source is modeled by: (1) characterizing the spatial spectral and fluence distributions of the photons at a plane at the isocenter, and (2) creating a virtual point source from which photons are generated to yield the derived spatial spectral and fluence distribution at the isocenter of an imaging system. The spatial photon distribution is determined by in-air relative dose measurements along the transverse (x) and radial (y) directions. The spectrum is characterized using transverse-axis half-value layer measurements and the nominal peak potential (kVp). This source modeling approach is used to characterize a Varian® on-board imager (OBI®) for four default cone-beam CT beam qualities: beams using a half bowtie filter (HBT) with 110 and 125 kVp, and a full bowtie filter (FBT) with 100 and 125 kVp. The source model and characterization method were validated by comparing dose computed by the authors' in-house software (kVDoseCalc) to relative dose measurements in a homogeneous and a heterogeneous block phantom comprised of tissue-, bone-, and lung-equivalent materials. Results: The characterized beam qualities and spatial photon distributions are comparable to reported values in the literature. Agreement between computed and measured percent depth-dose curves is within 2% in the homogeneous block phantom and within 2.5% in the heterogeneous block phantom. Transverse-axis profiles taken at depths of 2 and 6 cm in the homogeneous block phantom show an agreement within 4%. All transverse-axis dose profiles in water and in bone- and lung-equivalent materials for beams using a HBT have an agreement within 5%.
Measured profiles of FBT beams in bone- and lung-equivalent materials were higher than their computed counterparts, resulting in an agreement within 2.5%, 5%, and 8% within solid water, bone, and lung, respectively. Conclusions: The proposed virtual point source model and characterization method can be used to compute absorbed dose in both the homogeneous and heterogeneous block phantoms within 2%–8% of measured values, depending on the phantom and the beam quality. The authors' results also provide experimental validation for their kV dose computation software, kVDoseCalc.
Computational fluid dynamic modeling of fluidized-bed polymerization reactors
Rokkam, Ram [Ames Laboratory
20121102T23:59:59.000Z
Polyethylene is one of the most widely used plastics; over 60 million tons are produced worldwide every year. Polyethylene is obtained by the catalytic polymerization of ethylene in gas- and liquid-phase reactors. The gas-phase processes are more advantageous and use fluidized-bed reactors for production of polyethylene. Since they operate so close to the melting point of the polymer, agglomeration is an operational concern in all slurry and gas polymerization processes. Electrostatics and hot-spot formation are the main factors that contribute to agglomeration in gas-phase processes. Electrostatic charges in gas-phase polymerization fluidized-bed reactors are known to influence the bed hydrodynamics, particle elutriation, bubble size, bubble shape, etc. Accumulation of electrostatic charges in the fluidized bed can lead to operational issues. In this work a first-principles electrostatic model is developed and coupled with a multi-fluid computational fluid dynamic (CFD) model to understand the effect of electrostatics on the dynamics of a fluidized bed. The multi-fluid CFD model for gas-particle flow is based on kinetic theory of granular flow closures. The electrostatic model is based on a fixed, size-dependent charge for each type of particle (catalyst, polymer, polymer fines) phase. The combined CFD model is first verified using simple test cases, validated with experiments, and applied to a pilot-scale polymerization fluidized-bed reactor. The CFD model reproduced qualitative trends in particle segregation and entrainment due to electrostatic charges observed in experiments. For the scale-up of the fluidized-bed reactor, filtered models are developed and implemented on a pilot-scale reactor.
Modeling and Synthesizing Task Placement Constraints in Google Compute Clusters
Cortes, Corinna
Existing workload characterizations for high performance computing and grids focus on task resource requirements for CPU, memory... of compute clusters...
Molecular Structure Determination on a Computational & Data Grid Mark L. Green and Russ Miller
Miller, Russ
...intensive procedure can exploit the grid's ability to present the user with a large computational infrastructure... containing as many as 1000 unique non-hydrogen atoms, which could not be solved by traditional reciprocal...
An Exact Modeling of Signal Statistics in Energy-integrating X-ray Computed Tomography
An Exact Modeling of Signal Statistics in Energy-integrating X-ray Computed Tomography, Yi Fan... used by modern computed tomography (CT) scanners and has been an interesting research topic... In x-ray computed tomography (CT), the Poisson noise model has been widely used in noise...
Computational Biology and Bioinformatics 10.10 Models of substitution I: Basic Models A
Goldschmidt, Christina
& stochastic grammars 7.11 RNA structures 9.11 Finding signals in sequences 14.11 Challenges in genome... of structure & movements & shapes & grammars 28.11 Integrative genomics: the omics (DNA, mRNA, Protein, Metabolite, Phenotype) 30.11 Integrative genomics: mapping
Vladimir P. Gerdt; Vasily M. Severyanov
20051208T23:59:59.000Z
A C# package is presented that allows a user, for an input quantum circuit, to generate a set of multivariate polynomials over the finite field Z_2 whose total number of solutions in Z_2 determines the output of the quantum computation defined by the circuit. The generated polynomial system can further be converted to the canonical Groebner basis form, which provides a universal algorithmic tool for counting the number of common roots of the polynomials.
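As a toy illustration of the counting step this abstract describes, the sketch below enumerates the common roots of a small polynomial system over Z_2 by brute force, standing in for the Groebner-basis computation; the monomial encoding is a hypothetical choice for this sketch, not the package's API:

```python
from itertools import product

def count_z2_solutions(polys, n_vars):
    """Count common roots in Z_2^n of multivariate polynomials over Z_2.

    Each polynomial is a list of monomials; a monomial is a tuple of
    variable indices (the empty tuple is the constant 1). Addition mod 2
    is XOR, multiplication is AND.
    """
    count = 0
    for assignment in product((0, 1), repeat=n_vars):
        ok = True
        for poly in polys:
            val = 0
            for mono in poly:
                term = 1
                for idx in mono:
                    term &= assignment[idx]
                val ^= term
            if val != 0:
                ok = False
                break
        if ok:
            count += 1
    return count

# System: x0*x1 + x0 = 0 and x1 + 1 = 0 over Z_2. The second equation
# forces x1 = 1, after which the first reads x0 + x0 = 0, true for both
# values of x0, so there are 2 common roots.
n = count_z2_solutions([[(0, 1), (0,)], [(1,), ()]], n_vars=2)
```

Brute force is exponential in the number of variables; the Groebner-basis route the abstract mentions is what makes counting practical for larger circuits.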
GEOCITY: a computer model for systems analysis of geothermal district heating and cooling costs
Fassbender, L.L.; Bloomster, C.H.
19810601T23:59:59.000Z
GEOCITY is a computer-simulation model developed to study the economics of district heating/cooling using geothermal energy. GEOCITY calculates the cost of district heating/cooling based on climate, population, resource characteristics, and financing conditions. The basis for our geothermal-energy cost analysis is the unit cost of energy which will recover all the costs of production. The calculation of the unit cost of energy is based on life-cycle costing and discounted-cash-flow analysis. A wide variation can be expected in the range of potential geothermal district heating and cooling costs. The range of costs is determined by the characteristics of the resource, the characteristics of the demand, and the distance separating the resource and the demand. GEOCITY is a useful tool for estimating costs for each of the main parts of the production process and for determining the sensitivity of these costs to several significant parameters under a consistent set of assumptions.
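The unit cost "which will recover all the costs of production" via life-cycle costing and discounted cash flow is conventionally the ratio of discounted costs to discounted energy delivered; the sketch below assumes that standard formulation (GEOCITY's internals are not given in the abstract) with made-up numbers:

```python
def levelized_unit_cost(costs, energies, discount_rate):
    """Levelized unit energy cost via discounted cash flow.

    Assumed standard form: present value of annual costs divided by
    present value of annual energy delivered, with years counted from 1.
    """
    pv_cost = sum(c / (1 + discount_rate) ** t for t, c in enumerate(costs, 1))
    pv_energy = sum(e / (1 + discount_rate) ** t for t, e in enumerate(energies, 1))
    return pv_cost / pv_energy

# Three years of equal cost and equal output: the discount factors
# cancel, so the levelized cost equals annual cost per unit of energy.
lcoe = levelized_unit_cost([100.0] * 3, [50.0] * 3, 0.08)
```

With uneven cost or output streams the discounting no longer cancels, which is where the resource and demand characteristics the abstract lists enter.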
Shapiro, C.S.
19840801T23:59:59.000Z
The GLODEP2 computer code was utilized to determine biological impact to humans on a global scale using up-to-date estimates of biological risk. These risk factors use varied biological damage models for assessing effects. All the doses reported are the unsheltered, unweathered, smooth-terrain, external gamma dose. We assume an unperturbed atmosphere in determining injection and deposition. Effects due to ''nuclear winter'' may invalidate this assumption. The calculations also include scenarios that attempt to assess the impact of the changing nature of the nuclear stockpile. In particular, the shift from larger- to smaller-yield nuclear devices significantly changes the injection pattern into the atmosphere, and hence significantly affects the radiation doses that ensue. We have also looked at injections into the equatorial atmosphere. In total, we report here the results for 8 scenarios. 10 refs., 6 figs., 11 tabs.
A Polarizable QM/MM Explicit Solvent Model for Computational Electrochemistry in Water
Wang, LeePing
We present a quantum mechanical/molecular mechanical (QM/MM) explicit solvent model for the computation of standard reduction potentials E[subscript 0]. The QM/MM model uses density functional theory (DFT) to model the ...
Computation Modeling and Assessment of Nanocoatings for Ultra Supercritical Boilers
J. Shingledecker; D. Gandy; N. Cheruvu; R. Wei; K. Chan
20110621T23:59:59.000Z
Forced outages and boiler unavailability of coal-fired fossil plants are most often caused by fireside corrosion of boiler waterwalls and tubing. Reliable coatings are required for ultra-supercritical (USC) application to mitigate corrosion, since these boilers will operate at much higher temperatures and pressures than supercritical (565 C at 24 MPa) boilers. Computational modeling efforts have been undertaken to design and assess potential Fe-Cr-Ni-Al systems to produce stable nanocrystalline coatings that form a protective, continuous scale of either Al2O3 or Cr2O3. The computational modeling results identified a new series of Fe-25Cr-40Ni, with or without 10 wt.% Al, nanocrystalline coatings that maintain long-term stability by forming a diffusion barrier layer at the coating/substrate interface. The computational modeling predictions of microstructure, formation of a continuous Al2O3 scale, inward Al diffusion, grain growth, and sintering behavior were validated with experimental results. Advanced coatings, such as MCrAl (where M is Fe, Ni, or Co) nanocrystalline coatings, have been processed using different magnetron sputtering deposition techniques. Several coating trials were performed, and among the processing methods evaluated, the DC pulsed magnetron sputtering technique produced the best quality coating with a minimum number of shallow defects; the results of multiple deposition trials showed that the process is repeatable. The cyclic oxidation test results revealed that the nanocrystalline coatings offer better oxidation resistance, in terms of weight loss, localized oxidation, and formation of mixed oxides in the Al2O3 scale, than widely used MCrAlY coatings.
However, the ultrafine grain structure in these coatings, consistent with the computational model predictions, resulted in accelerated Al diffusion from the coating into the substrate. An effective diffusion-barrier interlayer coating was developed to prevent inward Al diffusion. The fireside corrosion test results showed that nanocrystalline coatings with a minimum number of defects have great potential for providing corrosion protection. The coating tested in the most aggressive environment showed no evidence of coating spallation and/or corrosion attack after 1050 hours of exposure. In contrast, evidence of coating spallation in isolated areas and corrosion attack of the base metal in the spalled areas was observed after 500 hours. These contrasting results after 500 and 1050 hours of exposure suggest that the premature coating spallation in isolated areas may be related to the variation of defects in the coating between the samples. It is suspected that the cauliflower-type defects in the coating were responsible for the coating spallation in isolated areas. Thus, a defect-free, good-quality coating is the key to the long-term durability of nanocrystalline coatings in corrosive environments, and additional process optimization work is required to produce defect-free coatings prior to development of a coating application method for production parts.
Larson, N. M.; Perey, F. G.
19801101T23:59:59.000Z
A method is described for determining the parameters of a model from experimental data based upon the utilization of Bayes' theorem. This method has several advantages over the least-squares method as it is commonly used; one important advantage is that the assumptions under which the parameter values have been determined are more clearly evident than in many results based upon least squares. Bayes' method has been used to develop a computer code which can be utilized to analyze neutron cross-section data by means of R-matrix theory. The required formulae from R-matrix theory are presented, and the computer implementation of both Bayes' equations and R-matrix theory is described. Details about the computer code and complete input/output information are given.
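The advantage the abstract cites, that the assumptions behind the fit are made explicit, can be seen in a minimal grid-based Bayes update: the prior is stated outright rather than implied. Everything below is illustrative only; the straight-line model, the Gaussian likelihood, and all numbers are hypothetical, and the report's R-matrix machinery is omitted entirely:

```python
import math

def grid_posterior(param_grid, prior, data, sigma, model):
    """Bayes' theorem on a 1D parameter grid.

    posterior(p) is proportional to prior(p) * likelihood(data | p),
    with an independent Gaussian likelihood of width sigma per datum.
    """
    post = []
    for p, pr in zip(param_grid, prior):
        chi2 = sum((d - model(p, x)) ** 2 for x, d in data) / sigma ** 2
        post.append(pr * math.exp(-0.5 * chi2))
    norm = sum(post)
    return [w / norm for w in post]

# Hypothetical straight-line model y = p * x with noisy data near p = 2.
data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]
grid = [1.0 + 0.1 * k for k in range(21)]        # p in [1.0, 3.0]
flat = [1.0 / len(grid)] * len(grid)             # explicit flat prior
posterior = grid_posterior(grid, flat, data, sigma=0.2, model=lambda p, x: p * x)
best = grid[posterior.index(max(posterior))]
```

Swapping the flat prior for an informative one changes the posterior in a visible, auditable way, whereas plain least squares buries the equivalent assumption.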
OptCDR: a general computational method for the design of antibody complementarity determining regions for targeted epitope binding
Maranas, Costas
regions for targeted epitope binding. R.J. Pantazes and C.D. Maranas, Department of Chemical Engineering... for each position. OptCDR is applied to three computational test cases: a peptide from the capsid... and affinity. They are composed of pairs of heavy and light chains folded into the well-known 'Y' shape. Two...
Hur, Jin [School of Computational Sciences, Korea Institute for Advanced Study, Seoul 130012 (Korea, Republic of); Min, Hyunsoo [Department of Physics, University of Seoul, Seoul 130743 (Korea, Republic of); School of Physics, Korea Institute for Advanced Study, Seoul 130012 (Korea, Republic of)
20080615T23:59:59.000Z
Recently the partial-wave cutoff method was developed as a new calculational scheme for a functional determinant of quantum field theory in radial backgrounds. For the contribution given by an infinite sum of large partial waves, we derive explicit radial WKB series in the angular momentum cutoff for d = 2, 3, 4, and 5 (d is the spacetime dimension), which have uniform validity irrespective of any specific values assumed for other parameters. Utilizing this series, precision evaluation of the renormalized functional determinant is possible with a relatively small number of low partial-wave contributions determined separately. We illustrate the power of this scheme with a numerically exact evaluation of the prefactor (expressed as a functional determinant) in the case of the false vacuum decay of 4D scalar field theory.
Using Parallel MCMC Sampling to Calibrate a Computer Model of a Geothermal Reservoir
Fox, Colin
Using Parallel MCMC Sampling to Calibrate a Computer Model of a Geothermal Reservoir, by T. Cui... of a geothermal field to achieve model 'calibration' from measured well-test data. We explore three scenarios...
SSRD+: A Privacy-aware Trust and Security Model for Resource Discovery in Pervasive Computing
Madiraju, Praveen
...makes the resource discovery process lightweight and secure. In this paper we present details of the trust and risk models... of devices running in a pervasive computing environment [10]. The resource discovery process demands models...
A Risk-aware Trust Based Secure Resource Discovery (RTSRD) Model for Pervasive Computing
Madiraju, Praveen
security threat to them. Thus, the resource discovery process demands models that ensure privacy... A Risk-aware Trust Based Secure Resource Discovery (RTSRD) Model for Pervasive Computing, Sheikh I.... In an ad hoc network of pervasive computing, a resource discovery model is needed that can resolve security and privacy...
Ray tracing computations in the smoothed SEG/EAGE Salt Model
Cerveny, Vlastislav
Ray tracing computations in the smoothed SEG/EAGE Salt Model, Václav Bucha, Department... to compute rays and synthetic seismograms of refracted and reflected P-waves in the smoothed SEG/EAGE Salt Model. The original 3D SEG/EAGE Salt Model (Aminzadeh et al. 1997) is a very complex model and cannot be used for ray...
A Three-Dimensional Computational Model of PEM Fuel Cell with Serpentine Gas Channels
Victoria, University of
A Three-Dimensional Computational Model of PEM Fuel Cell with Serpentine Gas Channels, by Phong... ABSTRACT: A three-dimensional computational fluid dynamics model of a Polymer Electrolyte Membrane (PEM) fuel cell with serpentine gas flow channels is presented in this thesis. This comprehensive model...
Computational Modeling for the American Chemical Society  GE...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
A New Model for Image-Based Humanities Computing
Brown, Jacob Hohmann
20090515T23:59:59.000Z
Image-based humanities computing, the computer-assisted study of digitally-represented “objects or artifacts of cultural heritage,” is an increasingly popular yet “established practice” located at the most recent intersections ...
High-Performance Computer Modeling of the Cosmos-Iridium Collision
Olivier, S; Cook, K; Fasenfest, B; Jefferson, D; Jiang, M; Leek, J; Levatin, J; Nikolaev, S; Pertica, A; Phillion, D; Springer, K; De Vries, W
20090828T23:59:59.000Z
This paper describes the application of a new, integrated modeling and simulation framework, encompassing the space situational awareness (SSA) enterprise, to the recent Cosmos-Iridium collision. This framework is based on a flexible, scalable architecture to enable efficient simulation of the current SSA enterprise and to accommodate future advancements in SSA systems. In particular, the code is designed to take advantage of massively parallel, high-performance computer systems available, for example, at Lawrence Livermore National Laboratory. We will describe the application of this framework to the recent collision of the Cosmos and Iridium satellites, including (1) detailed hydrodynamic modeling of the satellite collision and resulting debris generation, (2) orbital propagation of the simulated debris and analysis of the increased risk to other satellites, (3) calculation of the radar and optical signatures of the simulated debris and modeling of debris detection with space surveillance radar and optical systems, (4) determination of simulated debris orbits from modeled space surveillance observations and analysis of the resulting orbital accuracy, and (5) comparison of these modeling and simulation results with Space Surveillance Network observations. We will also discuss the use of this integrated modeling and simulation framework to analyze the risks and consequences of future satellite collisions and to assess strategies for mitigating or avoiding future incidents, including the addition of new sensor systems, used in conjunction with the Space Surveillance Network, for improving space situational awareness.
A computer procedure for analyzing bidding data and determining optimum markup
Howard, Robert Timothy
1970-01-01
Excerpts from lists of tables and figures: Data pertains to Concrete Job 21 listed in Appendix IV; Graphical versus computer result summary (Case One); Independent versus Gates result summary (Case Two, grading category); Independent versus Gates result summary (Case Two) ... Deck setup, program PROBE; Sample output for program PROBE (concrete category): list of data read in and bid/cost ratios, data from Appendix IV; Sample output for program PROBE (concrete category): "typical" competitor probabilities ...
AIR INGRESS ANALYSIS: PART 2 - COMPUTATIONAL FLUID DYNAMIC MODELS
Chang H. Oh; Eung S. Kim; Richard Schultz; Hans Gougar; David Petti; Hyung S. Kang
2011-01-01
The Idaho National Laboratory (INL), under the auspices of the U.S. Department of Energy, is performing research and development that focuses on key phenomena important during potential scenarios that may occur in very high temperature reactors (VHTRs). Phenomena Identification and Ranking Studies to date have ranked an air ingress event, following on the heels of a VHTR depressurization, as important with regard to core safety. Consequently, the development of advanced air ingress-related models and verification and validation data are a very high priority. Following a loss-of-coolant and system depressurization incident, air will enter the core of the High Temperature Gas Cooled Reactor through the break, possibly causing oxidation of the in-core and reflector graphite structures. Simple core and plant models indicate that, under certain circumstances, the oxidation may proceed at an elevated rate with additional heat generated from the oxidation reaction itself. Under postulated conditions of fluid flow and temperature, excessive degradation of the lower plenum graphite can lead to a loss of structural support. Excessive oxidation of core graphite can also lead to the release of fission products into the confinement, which could be detrimental to reactor safety. The computational fluid dynamics model developed in this study will improve our understanding of this phenomenon. This paper presents two-dimensional and three-dimensional CFD results for the quantitative assessment of the air ingress phenomena. A portion of the results for the density-driven stratified flow in the inlet pipe will be compared with experimental results.
Designing Computing System Architecture and Models for the HL-LHC era
Bauerdick, Lothar; Elmer, Peter; Gowdy, Stephen; Tadel, Matevz; Wuerthwein, Frank
2015-01-01
This paper describes a programme to study the computing model in CMS after the next long shutdown near the end of the decade.
Designing Computing System Architecture and Models for the HL-LHC era
Lothar Bauerdick; Brian Bockelman; Peter Elmer; Stephen Gowdy; Matevz Tadel; Frank Wuerthwein
2015-07-20
This paper describes a programme to study the computing model in CMS after the next long shutdown near the end of the decade.
Final report for the Center for Programming Models for Scalable Parallel Computing
Johnson, Ralph E
2013-04-10
This is the final report of the work on parallel programming patterns that was part of the Center for Programming Models for Scalable Parallel Computing.
Empirical model determines energy required to clean sand from well bore
Appah, D.; Ichara, M. (Univ. of Port Harcourt (Nigeria))
1994-02-28
An empirical hydraulic model has been developed for determining the energy required for cleaning a vertical and nearly vertical well bore plugged with sand particles. The model considers pressure losses and cleanout time and compares sand cleanout time during direct and reverse circulation of water. Good agreement was obtained between the model and experimental results.
Computational Modeling and Assessment Of Nanocoatings for Ultra Supercritical Boilers
David W. Gandy; John P. Shingledecker
2011-04-11
Forced outages and boiler unavailability in conventional coal-fired fossil power plants are most often caused by fireside corrosion of boiler waterwalls. Industry-wide, the rate of wall thickness corrosion wastage of fireside waterwalls in fossil-fired boilers has been of concern for many years. It is significant that the introduction of nitrogen oxide (NOx) emission controls with staged burner systems has increased reported waterwall wastage rates to as much as 120 mils (3 mm) per year. Moreover, the reducing environment produced by the low-NOx combustion process is the primary cause of accelerated corrosion rates of waterwall tubes made of carbon and low alloy steels. Improved coatings, such as the MCrAl nanocoatings evaluated here (where M is Fe, Ni, and Co), are needed to reduce or eliminate waterwall damage in subcritical, supercritical, and ultra-supercritical (USC) boilers. The first two tasks of this six-task project, jointly sponsored by EPRI and the U.S. Department of Energy (DE-FC26-07NT43096), have focused on computational modeling of an advanced MCrAl nanocoating system and evaluation of two nanocrystalline (iron and nickel base) coatings, which will significantly improve the corrosion and erosion performance of tubing used in USC boilers. The computational model results showed that about 40 wt.% is required in Fe-based nanocrystalline coatings for long-term durability, leading to a coating composition of Fe-25Cr-40Ni-10 wt.% Al. In addition, the long-term thermal exposure test results further showed accelerated inward diffusion of Al from the nanocrystalline coatings into the substrate. In order to enhance the durability of these coatings, it is necessary to develop a diffusion barrier interlayer coating such as TiN and/or AlN. The third task, 'Process Advanced MCrAl Nanocoating Systems', of the six-task project jointly sponsored by the Electric Power Research Institute (EPRI) and the U.S. Department of Energy (DE-FC26-07NT43096) has focused on processing of advanced nanocrystalline coating systems and development of diffusion barrier interlayer coatings. Among the diffusion interlayer coatings evaluated, the TiN interlayer coating was found to be the optimum one. This report describes the research conducted under the Task 3 work scope.
Computer modeling of the spatial resolution properties of a dedicated breast CT system
Yang, Kai; Kwan, Alexander L. C.; Boone, John M. [Department of Radiology, University of California, Davis Medical Center, 4860 Y Street, Suite 3100 Ellison Building, Sacramento, California 95817 (United States), and Department of Biomedical Engineering, University of California, Davis, California 95616 (United States)]
2007-06-15
Computer simulation methods were used to evaluate the spatial resolution properties of a dedicated cone-beam breast CT system. X-ray projection data of a 70 µm nickel-chromium wire were simulated. The modulation transfer function (MTF) was calculated from the reconstructed axial images at different radial positions from the isocenter to study the spatial dependency of the spatial resolution of the breast CT scanner. The MTF was also calculated in both the radial and azimuthal directions. Subcomponents of the cone-beam CT system that affect the MTF were modeled in the computer simulation in a serial manner, including the x-ray focal spot distribution, gantry rotation under the condition of continuous fluoroscopy, detector lag, and detector spatial resolution. Comparison between the computer-simulated and physically measured MTF values demonstrates reasonable accuracy in the simulation process, with a small systematic difference (≈9.5±6.4% difference, due to unavoidable uncertainties from physical measurement and system calibration). The intrinsic resolution in the radial direction determined by simulation was about 2.0 mm⁻¹ uniformly through the field of view. The intrinsic resolution in the azimuthal direction degrades from 2.0 mm⁻¹ at the isocenter to 1.0 mm⁻¹ at the periphery, 76.9 mm from the isocenter. The results elucidate the intrinsic spatial resolution properties of the prototype breast CT system, and suggest ways in which spatial resolution can be improved with system modification.
A Mathematical Model for Virus Infection in a System of Interacting Computers
Cipolatti, Rolci
A Mathematical Model for Virus Infection in a System of Interacting Computers. J. López Gondar & R. Cipolatti ... are explored and enlightened in this paper. 1. Introduction: The infection of computers by virtual viruses ... of virtual viruses in a system of interacting computers could be compared with a disease transmitted ...
Modeling Computational Security in Long-Lived Systems, Version 2. Ran Canetti
International Association for Cryptologic Research (IACR)
Modeling Computational Security in Long-Lived Systems, Version 2. Ran Canetti, Ling Cheung ... Introduction: Computational security in long-lived systems. Security properties of cryptographic protocols ... computational power. This type of security degrades progressively over the lifetime of a protocol. However, some ...
Modeling Computational Security in Long-Lived Systems. Ran Canetti
International Association for Cryptologic Research (IACR)
Modeling Computational Security in Long-Lived Systems. Ran Canetti, Ling Cheung, Dilsun Kaynar ... Introduction: Computational security in long-lived systems. Security properties of cryptographic protocols ... protocols, security relies on the assumption that adversarial entities have limited computational power ...
Novel properties generated by interacting computational systems: A minimal model. Fabio Boschetti
Boschetti, Fabio
Novel properties generated by interacting computational systems: A minimal model. Fabio Boschetti ... questions: First, what is the smallest number of components a computational system needs in order ... such as self-organisation and emergence have been discussed in computational terms within Complex System Science ...
Baer, Ferdinand
Optimizing Computations in Weather and Climate Prediction Models. F. Baer, Banglin Zhang, and Bing ... scenarios for many time scales, more computer power than is currently available will be needed. ... and sometimes with a biosphere included, are very complex and require so much computing power on available ...
CPT: An EnergyEfficiency Model for Multicore Computer Systems
Shi, Weisong
CPT: An Energy-Efficiency Model for Multicore Computer Systems. Weisong Shi, Shinan Wang and Bing ... efficiency of computer systems. These techniques affect the energy efficiency across different layers ... metric that represents the energy efficiency of a computer system, for a specific configuration, given ...
Pedram, Massoud
Trace-Based Analysis and Prediction of Cloud Computing User Behavior Using the Fractal Modeling ... technology. In this paper, we investigate the characteristics of the cloud computing requests received ... the alpha-stable distribution. Keywords: cloud computing; alpha-stable distribution; fractional order ...
Paris-Sud XI, Université de
... specially designed within the framework of this research. A computational heat transfer model is constructed. The developed mean model constitutes the basis of the computational stochastic heat transfer model that has been ... to the experimental ones. Keywords: computational heat transfer modeling, uncertainties, probabilistic modeling
Modeling of the Aging Viscoelastic Properties of Cement Paste Using Computational Methods
Li, Xiaodan
2012-07-16
... computational model using the finite element method to predict the viscoelastic behavior of cement paste, and using this model, virtual tests can be carried out to improve understanding of the mechanisms of viscoelastic behavior. The primary finding from...
A Calibrated Computer Model for the Thermal Simulation of Courtyard Microclimates
Bagneid, A.; Haberl, J.
2006-01-01
This paper describes a calibrated standalone courtyard microclimate model. This model is considered to be the first calibrated computer program for the simulation of courtyard microclimates. In order to accomplish this, a calibrated simplif...
MaRIE theory, modeling and computation roadmap executive summary...
Office of Scientific and Technical Information (OSTI)
in co-designing the elements of materials discovery, with theory and high performance computing, itself co-designed by constrained optimization of hardware and software,...
Cielo Computational Environment Usage Model With Mappings to...
Office of Scientific and Technical Information (OSTI)
Environment (ACE) Version 8.0 requirements. The ACE requirements reflect the high performance computing requirements for the Production Readiness Milestone user environment...
DEVELOPMENT OF AN INTERACTIVE COMPUTER SIMULATION MODEL FOR DESIGNING
... processes and high product quality standards required. In particular, computer simulation of industrial ... equilibrium and transient warp analysis, and displaying the results graphically. Four example panels were ...
Londergan, John Thomas
1987-01-01
Transport Parameter Determination and Modeling of Sodium and Strontium Plumes at the Idaho National Engineering Laboratory. A thesis by John Thomas Londergan, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, May 1987. Major subject: Geophysics.
Dessouky, Maged
A Hierarchical Task Model for Dispatching in Computer-Assisted Demand-Responsive Paratransit Operation. ABSTRACT ... Dispatch Training ... 1 INTRODUCTION: Demand-responsive paratransit service is on the rise. For example ...
Models of Parallel Computation (Parallel Programming, Fall 2001, 8/30/2001)
Browne, James C.
Models of Parallel Computation: Philosophy ... of parallel programming. ... We will discuss parallelism from the viewpoint of programming but with connections to other domains.
A Connectionist-Symbolic Approach to Modeling Agent ... (Lecture Notes in Computer Science)
DeMara, Ronald F.
Ronald F. DeMara, Intelligent Systems Laboratory, School of Electrical Engineering and Computer Science. CGFs are computer-controlled behavioral models of combatants used to serve as opponents against whom ... promise for providing "powerful learning models" in a recent National Research Council Report [5]. Also ...
Mathematical and Computer Modelling 53 (2011) 716-730. Contents lists available at ScienceDirect
Berges, John A.
2011-01-01
Mathematical and Computer Modelling, journal homepage: www.elsevier.com/locate/mcm. Dynamics of a virus ... Received 19 January 2010; accepted 17 October 2010. Keywords: virus-host dynamics; quota; bacteriophage
Ferroelectrics 342:73-82, 2006. Computational Modeling of Ferromagnetic Shape Memory Thin Films
Luskin, Mitchell
Computational Modeling of Ferromagnetic Shape Memory Thin Films ... films of Ni2MnGa ferromagnetic shape memory alloys in response to the application of a magnetic field. Keywords: ferromagnetic, shape memory, active thin film, computational modeling. INTRODUCTION: The Ni2MnGa ferromagnetic ...
On the computation of steady hopper flows I: stress determination for Coulomb materials.
... the above problems. In spite of wide-ranging applications going from the above industrial problems to soil mechanics for instance, the modeling of granular materials has not reached a level of maturity that is anywhere near what has been achieved in Fluid Mechanics, for instance. The culprit is found in the fact ...
Tarragon : a programming model for latencyhiding scientific computations
Cicotti, Pietro
2011-01-01
Table of contents excerpt: Chapter 2, Programming Model; Chapter 6, Dynamic programming; ... of related programming model implementations.
Bürger, Raimund
... dimensional model of sedimentation of suspensions of small solid particles dispersed in a viscous fluid. This ... accepted spatially one-dimensional sedimentation model [35] gives rise to one scalar, nonlinear hyperbolic ... International Journal of Numerical Analysis and Modeling, © 2011 Institute for Scientific Computing
Bürger, Raimund
... dimensional model of sedimentation of suspensions of small solid particles dispersed in a viscous fluid. This ... accepted spatially one-dimensional sedimentation model [35] gives rise to one scalar, nonlinear hyperbolic ... International Journal of Numerical Analysis and Modeling, © 2012 Institute for Scientific Computing
LaCava, W.; Xing, Y.; Guo, Y.; Moan, T.
2012-04-01
The Gearbox Reliability Collaborative (GRC) has conducted extensive field and dynamometer test campaigns on two heavily instrumented wind turbine gearboxes. In this paper, data from the planetary stage is used to evaluate the accuracy and computation time of numerical models of the gearbox. First, planet-bearing load and motion data is analyzed to characterize planetary stage behavior in different environments and to derive requirements for gearbox models and life calculations. Second, a set of models is constructed that represent different levels of fidelity. Simulations of the test conditions are compared to the test data, and the computational cost of the models is compared. The test data suggest that the planet-bearing life calculations should be made separately for each bearing on a row due to unequal load distribution. It also shows that tilting of the gear axes is related to planet load share. The modeling study concluded that fully flexible models were needed to predict planet-bearing loading in some cases, although less complex models were able to achieve good correlation in the field-loading case. Significant differences in planet load share were found in simulation and were dependent on the scope of the model and the bearing stiffness model used.
JACKSON VL
2011-08-31
The primary purpose of the tank mixing and sampling demonstration program is to mitigate the technical risks associated with the ability of the Hanford tank farm delivery and certification systems to measure and deliver a uniformly mixed high-level waste (HLW) feed to the Waste Treatment and Immobilization Plant (WTP). Uniform feed to the WTP is a requirement of 24590-WTP-ICD-MG-01-019, ICD-19 - Interface Control Document for Waste Feed, although the exact definition of uniform is evolving in this context. Computational Fluid Dynamics (CFD) modeling has been used to assist in evaluating scale-up issues, study operational parameters, and predict mixing performance at full scale.
Cummings, Peter [Vanderbilt University]
2009-11-15
The document is the final report of the DOE Computational Nanoscience Project DE-FG02-03ER46096: Integrated Multiscale Modeling of Molecular Computing Devices. It included references to 62 publications that were supported by the grant.
Multiscale Computational Modeling of Multiphase Composites with Damage
Cheng, Feifei
2013-11-01
A multiscale computational framework for multiphase composites considering damage is developed in this research. At the microscale, micromechanics-based homogenization methods are used to estimate effective elastic moduli ...
Automatic Symmetry Detection for Model Checking Using Computational Group Theory
Donaldson, A.F.; Miller, A.
Donaldson, A.F.; Miller, A. Proceedings of the 13th International Symposium on Formal Methods Europe (FME 2005). Lecture Notes in Computer Science, volume 3582, pp 481-496. Springer.
Applications of Computer Modelling to Fire Safety Design
Torero, Jose L; Steinhaus, Thomas
Tools in support of fire safety engineering design have proliferated in the last few years due to the increased performance of computers. These tools are currently being used in a generalized manner in areas such as egress, ...
The Need for Biological Computation System Models  GE Global...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
2012.10.09 Hello everyone, I'm Maria Zavodszky and I work in the Computational Biology and Biostatistics Lab at GE Global Research in Niskayuna, New York. This being our...
Cloud computing adoption model for governments and large enterprises
Trivedi, Hrishikesh
2013-01-01
Cloud Computing has held organizations across the globe spellbound with its promise. As it moves from being a buzzword and hype into adoption, organizations are faced with the question of how to best adopt cloud. Existing ...
Simulated movement of musculature in a computer generated model
Ten Wolde, Kristian Bernard
2000-01-01
Designing a computer generated character involves many steps, including the structure that is responsible for moving the character in an organic manner. There are several ways to develop a character to control the motion exhibited by the skin...
A Fire Model for 2D Computer Animation
Yu, JH.; Patterson, J.W.
Yu, J.-H.; Patterson, J.W. Proceedings of the EUROGRAPHICS Workshop on Computer Animation '96, Poitiers, France. Published in Eurographics Series (Boulic, R. and Hegron, G., Eds.), pp 49-60. Springer.
Center for Programming Models for Scalable Parallel Computing: Future Programming Models
Gao, Guang R.
2008-07-24
The mission of the pmodel center project is to develop software technology to support scalable parallel programming models for terascale systems. The goal of the specific UD subproject is, in this context, to develop an efficient and robust methodology and tools for HPC programming. More specifically, the focus is on developing new programming models which help programmers port their applications onto parallel high performance computing systems. During the course of the research in the past 5 years, the landscape of microprocessor chip architecture has witnessed a fundamental change: multicore/manycore chip architectures appear to be becoming the mainstream technology and will have a major impact on future generation parallel machines. The programming model for shared-address-space machines is becoming critical to such multicore architectures. Our research highlight is the in-depth study of proposed fine-grain parallelism/multithreading support on such future generation multicore architectures. Our research has demonstrated the significant impact such a fine-grain multithreading model can have on the productivity of parallel programming models and their efficient implementation.
Protein translocation without specific quality control in a computational model of the Tat system
Chitra R. Nayak; Aidan I. Brown; Andrew D. Rutenberg
2014-08-20
The twin-arginine translocation (Tat) system transports folded proteins of various sizes across both bacterial and plant thylakoid membranes. The membrane-associated TatA protein is an essential component of the Tat translocon, and a broad distribution of different sized TatA clusters is observed in bacterial membranes. We assume that the size dynamics of TatA clusters are affected by substrate binding, unbinding, and translocation to associated TatBC clusters, where clusters with bound translocation substrates favour growth and those without associated substrates favour shrinkage. With a stochastic model of substrate binding and cluster dynamics, we numerically determine the TatA cluster size distribution. We include a proportion of targeted but non-translocatable (NT) substrates, with the simplifying hypothesis that the substrate translocatability does not directly affect cluster dynamical rate constants or substrate binding or unbinding rates. This amounts to a translocation model without specific quality control. Nevertheless, NT substrates will remain associated with TatA clusters until unbound and so will affect cluster sizes and translocation rates. We find that the number of larger TatA clusters depends on the NT fraction $f$. The translocation rate can be optimized by tuning the rate of spontaneous substrate unbinding, $\Gamma_U$. We present an analytically solvable three-state model of substrate translocation without cluster size dynamics that follows our computed translocation rates, and that is consistent with in vitro Tat-translocation data in the presence of NT substrates.
Tesfatsion, Leigh
Bounded Computing Capacity · Explicit Space · Local Interactions · Non-... explanatory notion. ... Sugarscape: Events unfold on a landscape of renewable ...
A.24 Enhancing the Capability of Computational Earth System Models and NASA Data ... computational support of Earth system modeling. 2.1 Acceleration of Operational Use of Research Data
Discovering Novel Cancer Therapies: A Computational Modeling and Search Approach
Flann, Nicholas
... of new blood vessels (angiogenesis) is an important approach in cancer treatment. However, the complexity ... -based approach for the discovery of novel potential cancer treatments using a high fidelity simulation ... in cancer treatment [2]. This paper introduces a computational approach to search for novel intervention ...
A Computational Model to Connect Gestalt Perception and Natural Language
Roy, Deb
Thesis supervisor: Deb K. Roy, Assistant Professor of Media Arts and Sciences. Accepted by Alex P. Pentland, Toshiba Professor of Media Arts and Sciences, Massachusetts Institute of Technology.
Modelling Photochemical Pollution using Parallel and Distributed Computing Platforms
Abramson, David
... of photochemical air pollution (smog) in industrialised cities. However, computational hardware demands can ... that have been used as part of an air pollution study being conducted in Melbourne, Australia. We also ... necessary to perform real air pollution studies. The system is used as part of the Melbourne Airshed study.
Modeling/Computer Simulations at Dixie Valley Geothermal Area...
... that created the model did a run that took about 15 hours on a Pentium 4 processor running at 3 GHz, with 1.7 GB of RAM being used. The model revealed an electrical structure ...
Modeling/Computer Simulations at Kilauea East Rift Geothermal...
... understand the heat flow patterns in the East Rift Zone. Notes: Three models were made from data collected in the exploratory well HGP-A. The models simulated constant heat sources ...
An Axiomatisation of Computationally Adequate Domain Theoretic Models of FPC
Fiore, Marcelo P; Plotkin, Gordon
1994-01-01
Categorical models of the metalanguage FPC (a type theory with sums, products, exponentials and recursive types) are defined. Then, domain-theoretic models of FPC are axiomatised and a wide subclass of them —the ...
Computer representation of the model covariance function resulting from traveltime tomography
Cerveny, Vlastislav
Computer representation of the model covariance function resulting from traveltime tomography. Lud... ... a supplement to the paper by Klimeš (2002b) on the stochastic travel-time tomography. It contains brief ... covariance function is a function of 6 coordinates with pronounced singularities. The computer ...
Illinois at Chicago, University of
2007-01-01
Resources, Conservation and Recycling 51 (2007) 847-869. Modeling obsolete computer stock under ... and recycling systems using GIS, and demonstrate the potential economic benefits from diverting electronic ... buildings. © 2007 Elsevier B.V. All rights reserved. Keywords: computer recycling; product inventory
Calibration procedures for a computational model of ductile fracture
Hutchinson, John W.
Calibration procedures for a computational model of ductile fracture. Z. Xue ... Keywords: ductile fracture; computational fracture; shear fracture; damage parameters. Abstract: A recent extension of the ... cup-cone fracture mode in the neck of a round tensile bar. Ductility of a notched round bar provides ...
Experimental Evaluations of Expert and Non-expert Computer Users' Mental Models of Security Risks
Camp, L. Jean
Experimental Evaluations of Expert and Non-expert Computer Users' Mental Models of Security Risks ... risks and thereby enable informed decisions by naive users. Yet computer security has not been engaged with the scholarship of risk communication. While the existence of malicious actors may appear at first to distinguish ...
Probabilistic Model Checking and Power-Aware Computing. Marta Kwiatkowska, Gethin Norman, David Parker
Oxford, University of
... operating system control, can be switched either on and off or between several power states of varying power ... Power-aware computing aims either to maximise the performance of a system under certain constraints on its power ...
Math 484: Mathematical & Computational Modeling Course Information and Syllabus Spring 2013
Math 484: Mathematical & Computational Modeling. Course Information and Syllabus, Spring 2013. ... problems that arise in industry, government, science and engineering. 1. Main objective is to teach math ... Prerequisites: Math 407 or equivalent first course in scientific computing; Math 455 or equivalent first course ...
Math 484: Mathematical & Computational Modeling Course Information and Syllabus Spring 2012
Math 484: Mathematical & Computational Modeling. Course Information and Syllabus, Spring 2012. ... problems that arise in industry, government, science and engineering. 1. Main objective is to teach math ... Prerequisites: Math 407 or equivalent first course in scientific computing; Math 455 or equivalent first course ...
Continuum-based Multiscale Computational Damage Modeling of Cementitious Composites
Kim, SunMyung
2011-08-08
... Advisory Committee: Dr. Rashid K. Abu Al-Rub. Based on continuum damage mechanics (CDM), an isotropic and anisotropic damage model coupled with a novel plasticity model for plain concrete is proposed in this research. Two different damage evolution laws ... in the commercial finite element analysis program Abaqus, and the overall performance of the proposed model is verified by comparing the model predictions to various experimental data on the macroscopic level. Using the proposed coupled plasticity ...
Computational Fluid Dynamics Modeling of a Lithium/Thionyl Chloride Battery with Electrolyte Flow
Wang, ChaoYang
Computational Fluid Dynamics Modeling of a Lithium/Thionyl Chloride Battery with Electrolyte Flow ... -dimensional model is developed to simulate discharge of a primary lithium/thionyl chloride battery. The model ... to the first task, with important examples of lead-acid, nickel-metal hydride, and lithium-based batteries ...
Data Reduction Using Multiple Models Integration (Lecture Notes in Computer Science)
Obradovic, Zoran
... the models constructed on previously considered data samples. In addition to random sampling, controllable sampling based on the boosting algorithm is proposed, where the models are combined using a weighted voting ...
Model Discovery for Energy-Aware Computing Systems: An Experimental Evaluation
Stoller, Scott
... experimentally. The process of model discovery for energy-aware systems ... in advance of controller design. Such models are also prerequisites for the application of control theory to energy-aware systems. ... (i.e., the computing system to be controlled) using system identification; (2) use the plant model to design ...
A computational contact model for nanoscale rubber adhesion Roger A. Sauer
A computational contact model for nanoscale rubber adhesion. Roger A. Sauer, Institute for Continuum Mechanics, Leibniz Universität Hannover, Germany. Published in Constitutive Models for Rubber VI ... a mechanical contact model which is capable of describing and simulating rubber adhesion at the nanometer scale
A Computational Model of Knowledge-Intensive Learning and Problem Solving
Aamodt, Agnar
A Computational Model of Knowledge-Intensive Learning and Problem Solving. Agnar Aamodt ... If knowledge-based systems are to become more competent and robust in solving real-world problems, they need ... model: a framework for knowledge-intensive problem solving and learning from experience. The model has ...
Gedeon, Tomas
... from those appearing in physiology and ecology to Earth systems modeling, often experience critical ...
Liquefied Natural Gas (LNG) Vapor Dispersion Modeling with Computational Fluid Dynamics Codes
Qi, Ruifeng
20121019T23:59:59.000Z
Federal regulation 49 CFR 193 and standard NFPA 59A require the use of validated consequence models to determine the vapor cloud dispersion exclusion zones for accidental liquefied natural gas (LNG) releases. For modeling purposes, the physical...
Daigle, Matthew
... and availability. Prognostics deals with determining the health state of components, and projecting ... predictions. Model-based prognostics approaches perform these tasks with the aid of a model that captures ...
Computational Modeling of Conventionally Reinforced Concrete Coupling Beams
Shastri, Ajay Seshadri
20120214T23:59:59.000Z
The model is developed in the finite element analysis software ABAQUS. The concrete damaged plasticity model was used to simulate the behavior of concrete. A calibration model using a cantilever beam was produced to generate key parameters in the model...
Huang, Yongxin
20100116T23:59:59.000Z
using MPI. The results show the cluster system can simultaneously support up to 32 processes for an MPI program with high performance of interprocess communication. The parallel computations of the phase-field model of magnetic materials implemented by an MPI...
Rybkowski, Zofia K.; Wong, JohnMichael; Ballard, Glenn; Tommelein, Iris D.
20080101T23:59:59.000Z
Simulation games may be used to introduce lean principles to those who are considering implementing them. However, they can also function as controlled experiments against which to calibrate a computer model and they can even be adapted to serve...
Seagraves, Andrew Nathan
20100101T23:59:59.000Z
In this thesis a new parallel computational method is proposed for modeling threedimensional dynamic fracture of brittle solids. The method is based on a combination of the discontinuous Galerkin (DG) formulation of the ...
Financial Impact of Good Condenser Vacuum in Industrial Steam Turbines: Computer Modeling Techniques
Viar, W. L.
19840101T23:59:59.000Z
power is made, for example. Its performance affects the entire steam system and must be monitored persistently. Because of the complexities (and advantages) of systems analyses, computer modeling is demonstrated in this paper to fully evaluate...
Ma, Yongting
20110111T23:59:59.000Z
This thesis presents development of mathematical models for multimedia interaction process using Eulerian description and associated computational infrastructure to obtain numerical solution of the initial value problems ...
Curry, Benjamin David
A goal of Artificial Intelligence is to develop computational models of what would be considered intelligent behaviour in a human. One such task is that of musical performance. This research specifically focuses on aspects ...
Rothe, Jeanne Marie
19830101T23:59:59.000Z
A COMPUTER SIMULATION MODEL FOR THE PREDICTION OF TEMPERATURE DISTRIBUTIONS IN RADIOFREQUENCY HYPERTHERMIA TREATMENT. A Thesis by JEANNE MARIE ROTHE, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of MASTER OF SCIENCE, December 1983. Major Subject: Bioengineering
A new, efficient computational model for the prediction of fluid seal flowfields
Hibbs, Robert Irwin
19880101T23:59:59.000Z
A NEW, EFFICIENT COMPUTATIONAL MODEL FOR THE PREDICTION OF FLUID SEAL FLOWFIELDS. A Thesis by ROBERT IRWIN HIBBS, JR., submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirement for the degree of MASTER OF SCIENCE, December 1988. Major Subject: Mechanical Engineering. Approved as to style and content by: David L. Rhode...
Formal computational models and nonstandard finiteness
Ayala-Rincón, Mauricio
Visscher, P.B.
19880101T23:59:59.000Z
Computer simulations have been performed, aimed at achieving a better understanding of the geological and physical processes involved in the formation of sedimentary basins in general and the Black Warrior basin of Alabama and Mississippi in particular. Microscopic-level computer modeling of sandstone porosity reduction has been done, elucidating the detailed small-scale dynamics which lead to the geological phenomenon of pressure solution. A new technique has been developed for 1D burial and thermal modeling of sedimentary basins based on stratigraphic data from test wells. It is significantly faster than previous methods, and can be used in an interactive, menu-oriented program requiring relatively little learning time or prior computer experience. This allows a geologist to rapidly determine the results of various different hypotheses about basin formation, providing insight which may help determine which is correct. A program has also been written to simulate tectonic-plate collisions and rifting processes using viscoelastic hydrodynamics.
Final Report for Integrated Multiscale Modeling of Molecular Computing Devices
Glotzer, Sharon C.
20130828T23:59:59.000Z
In collaboration with researchers at Vanderbilt University, North Carolina State University, Princeton, and Oak Ridge National Laboratory, we developed multiscale modeling and simulation methods capable of modeling the synthesis, assembly, and operation of molecular electronics devices. Our role in this project included the development of coarse-grained molecular and mesoscale models and simulation methods capable of simulating the assembly of millions of organic conducting molecules and other molecular components into nanowires, crossbars, and other organized patterns.
New partnership uses advanced computer science modeling to address...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
to accelerate the development and application of fully coupled, advanced Earth system models for scientific and energy applications. Fourteen institutions will work...
Modeling-Computer Simulations At Long Valley Caldera Geothermal...
surrounding a vertically dipping prolate spheroid source during an active period of time-dependent deformation between 1995 and 2000 at Long Valley caldera. We model a rapid...
Modeling-Computer Simulations At San Juan Volcanic Field Area...
... San Juan Basin Since Late Cretaceous Times And Its Relationship To San Juan Mountains Thermal Sources ...
Modeling-Computer Simulations At Dixie Valley Geothermal Area...
and variations in structure on a generic basin-and-range geothermal reservoir. Structure, heat input, and permeability were variables used in the numerical models. Dixie Valley was...
Modeling-Computer Simulations At Valles Caldera - Redondo Geothermal...
volcanics, and the basement sections, respectively (Fig. 8). Although correlation with well data was done whenever possible, there is some uncertainty to the model because of...
Modeling-Computer Simulations At Valles Caldera - Sulphur Springs...
volcanics, and the basement sections, respectively (Fig. 8). Although correlation with well data was done whenever possible, there is some uncertainty to the model because of...
Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area...
Discrete Fractures Using GEOCRACK Daniel Swenson, Robert DuTeau, Timothy Sprecker (1995) Modeling Flow in a Jointed Geothermal Reservoir Donald W. Brown (1994) Summary of...
Modeling-Computer Simulations At Long Valley Caldera Geothermal...
using modeled solutions for a flow system consisting of a rock matrix with finite hydraulic conductivity cut by a steeply dipping fracture with infinite hydraulic conductivity....
Advanced Computing Tools and Models for Accelerator Physics
Ryne, Robert D.
20080101T23:59:59.000Z
ADVANCED COMPUTING TOOLS AND MODELS FOR ACCELERATOR PHYSICS. Robert D. Ryne, Lawrence... ... tools for accelerator physics. Following an introduction, I ... computing in accelerator physics. INTRODUCTION: To begin, I ...
Vassilis Geroyannis; Georgios Kleftogiannis
20140614T23:59:59.000Z
We revisit the problem of radial pulsations of neutron stars by computing four general-relativistic polytropic models, in which "density" and "adiabatic index" are involved with their discrete meanings: (i) "rest-mass density" or (ii) "mass-energy density" regarding the density, and (i) "constant" or (ii) "variable" regarding the adiabatic index. Considering the resulting four discrete combinations, we construct corresponding models and compute for each model the frequencies of the lowest three radial modes. Comparisons with previous results are made. The deviations of respective frequencies of the resolved models seem to exhibit a systematic behavior, an issue discussed here in detail.
Modeling civil violence: An agentbased computational approach
Tesfatsion, Leigh
"...," I do so advisedly, recognizing that no political or social order is represented in the model ... of revolutions properly speaking. The dynamics of decentralized upheaval, rather than its political substance ... Against Central Authority: This model involves two categories of actors. "Agents" are members ...
Gaussian Process Modeling and Computation in Engineering Applications
Pourhabib, Arash
20140708T23:59:59.000Z
... and predictive modeling for large datasets. First, we develop a spatial-temporal model for local wind fields in a wind farm with more than 200 wind turbines. Our framework utilizes the correlation among the derivatives of wind speeds to find a neighborhood...
Huang, SuYun
factors; simulation codes with calibration parameters. Example: Designing Cellular Heat Exchangers in Qian et al. (2006, ASME). Related to the autoregressive model in Kennedy and O'Hagan (2000) ...
Computer modeling reveals how surprisingly potent hepatitis C drug works
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
ComputerAided Construction of Combustion Chemistry Models
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
A Computational Model of How the Basal Ganglia Produce Sequences
Berns, Gregory S.
closely on known anatomy and physiology. First, we assume that the thalamic targets, which relay ascending ... the external globus pallidus (GPe) and the subthalamic nucleus (STN). As a test of the model, the system ...
When Model Checking Met Deduction
Clarke, Edmund M.
... Park, CA, Sep 19, 2014. "It is of course important that some efforts be made to verify ... hold in each case." Alan Turing (quoted by D. MacKenzie in Risk and Reason). N. Shankar, Model checking ...
A Method for Computing Conditional Probabilities in Probabilistic Library Model
Computational modeling of biological cells and soft tissues
Unnikrishnan, Ginu U.
20090515T23:59:59.000Z
formulation accounting for the inhomogeneity of the cytoplasm due to stress fibers and actin cortex is developed in this work using the Mori-Tanaka method of homogenization. Mechanical modeling of single cells would be extremely useful in understanding its...
Computationally Efficient Strategy for Modeling the Effect of Ion ...

20070314T23:59:59.000Z
... predictions using a Markov model with mass-action binding of the modifiers to ... each HHMod quantity, a three-step process was used. First, the HHMod ... interval prolongation on drug discovery and development," Nat. Rev. Drug Discov.
Modeling-Computer Simulations At Fenton Hill HDR Geothermal Area...
24 potential sites for EGS development across the U.S., as well as modeling of the representative geologic systems in which promising EGS sites occur. References Fraser Goff,...
Modeling-Computer Simulations At White Mountains Area (Goff ...
24 potential sites for EGS development across the U.S., as well as modeling of the representative geologic systems in which promising EGS sites occur. References Fraser Goff,...
Scalable computational architecture for integrating biological pathway models
Shiva, V. A
20070101T23:59:59.000Z
A grand challenge of systems biology is to model the cell. The cell is an integrated network of cellular functions. Each cellular function, such as immune response, cell division, metabolism or apoptosis, is defined by an ...
Slepoy, Alexander; Mitchell, Scott A.; Backus, George A.; McNamara, Laura A.; Trucano, Timothy Guy
20080901T23:59:59.000Z
Sandia National Laboratories is investing in projects that aim to develop computational modeling and simulation applications that explore human cognitive and social phenomena. While some of these modeling and simulation projects are explicitly research oriented, others are intended to support or provide insight for people involved in high-consequence decision-making. This raises the issue of how to evaluate computational modeling and simulation applications in both research and applied settings where human behavior is the focus of the model: when is a simulation 'good enough' for the goals its designers want to achieve? In this report, we discuss two years' worth of review and assessment of the ASC program's approach to computational model verification and validation, uncertainty quantification, and decision-making. We present a framework that extends the principles of the ASC approach into the area of computational social and cognitive modeling and simulation. In doing so, we argue that the potential for evaluation is a function of how the modeling and simulation software will be used in a particular setting. In making this argument, we move from strict, engineering- and physics-oriented approaches to V&V to a broader project of model evaluation, which asserts that the systematic, rigorous, and transparent accumulation of evidence about a model's performance under conditions of uncertainty is a reasonable and necessary goal for model evaluation, regardless of discipline. How to achieve the accumulation of evidence in areas outside physics and engineering is a significant research challenge, but one that requires addressing as modeling and simulation tools move out of research laboratories and into the hands of decision makers.
This report provides an assessment of our thinking on ASC Verification and Validation, and argues for further extending V&V research in the physical and engineering sciences toward a broader program of model evaluation in situations of high consequence decisionmaking.
Coupling remote sensing with computational fluid dynamics modelling to estimate lake chlorophyll
... 17 October 2000; accepted 1 June 2001. Abstract: A remotely sensed image of Loch Leven, a shallow ... in the remotely sensed image. It is proposed that CFD modelling benefits the interpretation of remotely sensed ...
Tentzeris, Manos
Modeling and Optimization of RF-MEMS Reconfigurable Tuners with Computationally Efficient Time ... of Technology, Atlanta, GA 30332; Raytheon Company, Tucson, AZ 85734. Abstract: Modern RF-MEMS device design ... methods in which the FDTD technique can be used to model a reconfigurable RF-MEMS tuner. A new method ...
Modeling and Design of RF MEMS Structures Using Computationally Efficient Numerical Techniques
Tentzeris, Manos
Modeling and Design of RF MEMS Structures Using Computationally Efficient Numerical Techniques ... Abstract: The modeling of MEMS structures using MRTD is presented. Many complex RF structures have been ... communication systems efficiently and accurately. Specifically, micromachined structures such as MEMS ...
Toward Cost-Sensitive Modeling for Intrusion Detection
Toward Cost-Sensitive Modeling for Intrusion Detection. Wenke Lee, Computer Science Department, North ... @cs.columbia.edu. Abstract: Intrusion detection systems need to maximize security while minimizing costs. In this paper, we study the problem of building cost-sensitive intrusion detection models. We examine the major cost ...
GWU Department of Mathematics Topics in Model Theory: Classical and Computable
Harizanov, Valentina S.
framework for the notions of language, meaning, and truth. A model, a concept used in all of the sciences ... The course will be, in some sense, self-contained. We will start by reviewing the fundamental concepts ... (survey chapter without proofs). (3) V. Harizanov, "Pure computable model theory," in the volume: Handbook ...
Hawick, Ken
Simulation Modelling and Visualisation: Toolkits for Building Artificial Worlds. Computational Science Technical Note CSTN-052. Daniel Peter Playne, Anton P. Gerdelan, Arno Leist, Chris J. Scogings ...
An efficient computational model for macroscale simulations of moving contact lines
Boyer, Edmond
with CO2, for example). A major challenge in numerical simulations of moving contact lines ... simulation of moving contact lines. The main purpose is to formulate and test a model wherein the macroscale ...
Computable General Equilibrium Models for the Analysis of Energy and Climate Policies
Wing, Ian Sue
(i) how a model may be calibrated using the economic data in a social accounting matrix, (ii) how ... of their size or apparent complexity), the ... key features of their data base and the calibration methods ...
Fluid computation of the performance-energy tradeoff in large scale Markov models
Imperial College, London
Fluid computation of the performance-energy tradeoff in large scale Markov models. Anton Stefanek ... energy consumption while maintaining multiple service level agreements. 2. VIRTUALISED EXECUTION MODEL ... optimisation. We show how the fluid analysis naturally leads to a constrained global optimisation problem
Computational modeling of thermal conductivity of single walled carbon nanotube polymer composites
Maruyama, Shigeo
... was developed to study the thermal conductivity of single-walled carbon nanotube (SWNT)-polymer composites ... resistance on effective conductivity of composites were quantified. The present model is a useful tool
Model Discovery for Energy-Aware Computing Systems: An Experimental Evaluation
Zadok, Erez
... energy-aware systems. Such models are also prerequisites for the application of control theory to energy-aware systems. ... is a critical first step in designing advanced controllers that can dynamically manage the energy ...
A computational model for predicting damage evolution in laminated composite plates
Phillips, Mark Lane
19990101T23:59:59.000Z
computationally tenable is shown herein. Due to the complicated nature of the many cracks and their interactions, a multiscale micro-meso-local-global methodology is employed in order to model damage modes. Interface degradation is first modeled analytically...
A Unified RANS-LES Model: Computational Development, Accuracy and Cost
Heinz, Stefan
... Stokes (RANS) methods, applies modeling assumptions to all the scales of motion. The use of LES methods ... Harish Gopalan, Stefan Heinz, Michael K. Stöllinger, Mechanical Engineering Department, University of Wyoming, 1000 E ...
Computational Fuel Cell Research and SOFC Modeling at Penn State
multidisciplinary research on fuel cells and advanced batteries for vehicle propulsion, distributed power generation ... science, multiphase transport, reactive flow, CFD modeling, experimental diagnostics, in-vehicle testing, DMFC, and SOFC. ECEC Facilities (>5,000 sq ft): Fuel Cell/Battery Experimental Labs, Fuel Cell ...
Model construction: elements of a computational mechanism. Jan M. Żytkow
Ras, Zbigniew W.
... Academy of Sciences, Warsaw, Poland. zytkow@uncc.edu. Abstract: Model construction is one of the key scientific ... of the main steps. As a body of mass m rolls down, its kinetic energy grows from zero to mv^2/2, where v is the final velocity. At the same time, its potential energy decreases from gmh to zero, where g is the Earth acceleration
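The energy balance quoted in this snippet (potential energy gmh converting to kinetic energy mv^2/2, with rotation and friction neglected) fixes the final speed as v = sqrt(2gh). A minimal sketch in Python; the function name and the sample height are illustrative, not from the source:

```python
import math

def final_speed(h, g=9.81):
    """Final speed v from the energy balance m*g*h = m*v**2/2.

    The mass m cancels; rotational energy and friction are
    neglected, as in the quoted snippet.
    """
    return math.sqrt(2 * g * h)

# A body descending h = 5 m reaches about 9.9 m/s.
print(round(final_speed(5.0), 2))
```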
A Computational Framework for Modelling Aneurysm Inception due
... Growth. Project Thesis, November 23, 2010. Matthias Kirchhart, Thorolf Schulte, Florian Lubisch, Michael Woopen ... is used. The artery is modelled to consist of elastin and collagen fibres, arranged in double helical ...
A quantitative model of computation
Ghica, Dan
languages. We define a Hyland-Ong-style games framework called slot games, which consists of HO games ... be nevertheless difficult, if not impossible, to handle using known operational techniques. Categories and Subject Descriptors: ... Modeling techniques. General Terms: Languages, Performance, Theory, Verification. Keywords: Game semantics
Computational Modelling of Particle Degradation in Dilute Phase Pneumatic Conveyors
Christakis, Nikolaos
the calculation of degradation propensity is coupled with a flow model of the solids and gas phases in the pipeline. Numerical results are presented for degradation of granulated sugar in an industrial-scale handling ... because of the change in particle properties such as particle size distribution, shape and ...
HIGH RESOLUTION FORWARD AND INVERSE EARTHQUAKE MODELING ON TERASCALE COMPUTERS
Shewchuk, Jonathan
highly populated seismic region in the U.S., it has well-characterized geological structures (including ... ). ... in characterizing earthquake source and basin material properties, a critical remaining challenge is to invert ... basin geology and earthquake sources, and to use this capability to model and forecast strong ground ...
CASE FOR SUPPORT: Computational Modeling of Salience Sensitive Control
Heinke, Dietmar
in neural network modeling, machine learning, adaptive systems in general and self-organising systems ... and verification of real-time systems [6]. A large amount of this research has been performed using the CADP verification environment, which is one of the most powerful tool suites available, boasting a spectrum ...
Computational Fluid Dynamics Modeling of the John Day Dam Tailrace
Rakowski, Cynthia L.; Perkins, William A.; Richmond, Marshall C.; Serkowski, John A.
20100708T23:59:59.000Z
US Army Corps of Engineers, Portland District required that two-dimensional (2D) depth-averaged and three-dimensional (3D) free-surface numerical models be developed and validated for the John Day tailrace. These models were used to assess the potential impact of a select group of structural and operational alternatives to tailrace flows aimed at improving fish survival at John Day Dam. The 2D model was used for the initial assessment of the alternatives in conjunction with a reduced-scale physical model of the John Day Project. A finer-resolution 3D model was used to more accurately model the details of flow in the stilling basin and near-project tailrace hydraulics. Three-dimensional model results were used as input to the Pacific Northwest National Laboratory particle tracking software, and particle paths and times to pass a downstream cross section were used to assess the relative differences in travel times resulting from project operations and structural scenarios for multiple total river flows. Streamlines and neutrally buoyant particles were seeded in all turbine and spill bays with flows. For a Total River of 250 kcfs running with the Fish Passage Plan spill pattern and a spillwall, the mean residence times for all particles were little changed; however, the tails of the distribution were truncated for both spillway and powerhouse release points, and, for the powerhouse releases, the residence time for 75% of the particles to pass a downstream cross section was reduced from 45.5 minutes to 41.3 minutes. For a total river of 125 kcfs configured with the operations from the Fish Passage Plan for the temporary spillway weirs and for a proposed spillwall, the neutrally buoyant particle tracking data showed that the river with a spillwall in place had the overall mean residence time increase; however, the residence time for 75% of the powerhouse-released particles to pass a downstream cross section was reduced from 102.4 minutes to 89 minutes.
Rakowski, Cynthia L.; Serkowski, John A.; Richmond, Marshall C.; Perkins, William A.
20101201T23:59:59.000Z
Although fisheries biology studies are frequently performed at US Army Corps of Engineers (USACE) projects along the Columbia and Snake Rivers, there is currently no consistent definition of the ``forebay'' and ``tailrace'' regions for these studies. At this time, each study may use somewhat arbitrary lines (e.g., the Boat Restriction Zone) to define the upstream and downstream limits of the study, which may be significantly different at each project. Fisheries researchers are interested in establishing a consistent definition of project forebay and tailrace regions for the hydroelectric projects on the lower Columbia and Snake rivers. The Hydraulic Extent of a project was defined by USACE (Brad Eppard, USACE-CENWP) as follows: The river reach directly upstream (forebay) and downstream (tailrace) of a project that is influenced by the normal range of dam operations. Outside this reach, for a particular river discharge, changes in dam operations cannot be detected by hydraulic measurement. The purpose of this study was to, in consultation with USACE and regional representatives, develop and apply a consistent set of criteria for determining the hydraulic extent of each of the projects in the lower Columbia and Snake rivers. A 2D depth-averaged river model, MASS2, was applied to the Snake and Columbia Rivers. New computational meshes were developed for most reaches and the underlying bathymetric data updated to the most current survey data. The computational meshes resolved each spillway bay and turbine unit at each project and extended from project to project. MASS2 was run for a range of total river flows, and each flow for a range of project operations at each project. The modeled flow was analyzed to determine the range of velocity magnitude differences and the range of flow direction differences at each location in the computational mesh for each total river flow. Maps of the differences in flow direction and velocity magnitude were created.
USACE fishery biologists requested data analysis to determine the project hydraulic extent based on the following criteria: 1) For areas where the mean velocities are less than 4 ft/s, the water velocity differences between operations are not greater than 0.5 ft/sec and /or the differences in water flow direction are not greater than 10 degrees, 2) If mean water velocity is 4.0 ft/second or greater the boundary is determined using the differences in water flow direction (i.e., not greater than 10 degrees). Based on these criteria, and excluding areas with a mean velocity of less than 0.1 ft/s (within the error of the model), a final set of graphics were developed that included data from all flows and all operations. Although each hydroelectric project has a different physical setting, there were some common results. The downstream hydraulic extent tended to be greater than the hydraulic extent in the forebay. The hydraulic extent of the projects tended to be larger at the midrange flows. At higher flows, the channel geometry tends to reduce the impact of project operations.
Deductive-reductive determination of the model of our observed Universe
V. Skalsky
20000917T23:59:59.000Z
According to observations, in our expansive and isotropic relativistic Universe the non-modified Newtonian relations are valid for gravitational phenomena in the Newtonian approximation. The general Friedmann equations of isotropic and homogeneous universe dynamics describe an infinite number of models of an expansive and isotropic relativistic universe in the Newtonian approximation, but only in one of them are the non-modified Newtonian relations valid. These facts give a possibility, not considered until now, for an unambiguous deductive-reductive determination of the Friedmannian model which describes our observed Universe.
Solar wind modeling: a computational tool for the classroom
Woolsey, Lauren N
20150101T23:59:59.000Z
This article presents a Python model and library that can be used for student investigation of the application of fundamental physics on a specific problem: the role of magnetic field in solar wind acceleration. The paper begins with a short overview of the open questions in the study of the solar wind and how they relate to many commonly taught physics courses. The physics included in the model, The Efficient Modified Parker Equation Solving Tool (TEMPEST), is laid out for the reader. Results using TEMPEST on a magnetic field structure representative of the minimum phase of the Sun's activity cycle are presented and discussed. The paper suggests several ways to use TEMPEST in an educational environment and provides access to the current version of the code.
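The abstract describes TEMPEST only at a high level. As a hedged illustration of the underlying physics (the textbook isothermal Parker wind, not TEMPEST's actual implementation), the supersonic branch of Parker's critical solution, w^2 - ln(w^2) = 4 ln(x) + 4/x - 3 with w = v/c_s and x = r/r_c, can be solved by bisection:

```python
import math

def parker_speed(x, tol=1e-10):
    """Wind speed v/c_s at radius r/r_c (x >= 1), isothermal Parker model.

    Solves w^2 - ln(w^2) = 4 ln(x) + 4/x - 3 on the supersonic branch
    (w >= 1 for x >= 1), where the left side increases monotonically in w.
    """
    rhs = 4.0 * math.log(x) + 4.0 / x - 3.0
    lo, hi = 1.0, 10.0
    while hi - lo > tol:
        w = 0.5 * (lo + hi)
        if w * w - math.log(w * w) < rhs:
            lo = w
        else:
            hi = w
    return 0.5 * (lo + hi)

v_sonic = parker_speed(1.0)   # exactly the sonic point, v = c_s
v_2rc = parker_speed(2.0)     # roughly 1.67 c_s at twice the critical radius
```

A classroom tool like TEMPEST would add magnetic-field geometry and non-isothermal heating on top of this baseline.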
CASTING DEFECT MODELING IN AN INTEGRATED COMPUTATIONAL MATERIALS ENGINEERING APPROACH
Sabau, Adrian S [ORNL
20150101T23:59:59.000Z
To accelerate the introduction of new cast alloys, the simultaneous modeling and simulation of multiphysical phenomena needs to be considered in the design and optimization of the mechanical properties of cast components. The required models related to casting defects, such as microporosity and hot tears, are reviewed. Three aluminum alloys are considered: A356, 356, and 319. Data on calculated solidification shrinkage is presented and its effects on microporosity levels are discussed. Examples are given for predicting microporosity defects and microstructure distribution for a plate casting. Models to predict fatigue life and yield stress are briefly highlighted here for completeness and to illustrate how the length scales of the microstructure features, as well as porosity defects, are taken into account in modeling the mechanical properties. Thus, data on casting defects, including microstructure features, is crucial for evaluating the final performance-related properties of the component. ACKNOWLEDGEMENTS: This work was performed under a Cooperative Research and Development Agreement (CRADA) with Nemak Inc. and Chrysler Co. for the project "High Performance Cast Aluminum Alloys for Next Generation Passenger Vehicle Engines." The author would also like to thank Amit Shyam for reviewing the paper and Andres Rodriguez of Nemak Inc. Research sponsored by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Office, as part of the Propulsion Materials Program under contract DE-AC05-00OR22725 with UT-Battelle, LLC. Part of this research was conducted through Oak Ridge National Laboratory's High Temperature Materials Laboratory User Program, which is sponsored by the U.S. Department of Energy, Office of Energy Efficiency and Renewable Energy, Vehicle Technologies Program.
Computational models for the berry phase in semiconductor quantum dots
Prabhakar, S., Email: rmelnik@wlu.ca; Melnik, R. V. N., Email: rmelnik@wlu.ca [M2NeT Lab, Wilfrid Laurier University, 75 University Ave W, Waterloo, ON N2L 3C5 (Canada); Sebetci, A. [Department of Mechanical Engineering, Mevlana University, 42003, Konya (Turkey)
20141006T23:59:59.000Z
By developing a new model and its finite element implementation, we analyze the Berry phase in low-dimensional semiconductor nanostructures, focusing on quantum dots (QDs). In particular, we solve the Schrödinger equation and investigate the evolution of the spin dynamics during the adiabatic transport of the QDs in the 2D plane along a circular trajectory. Based on this study, we reveal that the Berry phase is highly sensitive to the Rashba and Dresselhaus spin-orbit lengths.
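The paper's Berry phase comes from a finite-element Schrödinger model; a much simpler textbook analogue, a spin-1/2 eigenstate transported around a circular loop of field directions at fixed polar angle, shows how such phases are evaluated numerically via a gauge-invariant Wilson-loop product of overlaps. This is an illustrative sketch, not the paper's method; the result should agree with the solid-angle formula pi*(1 - cos(theta)) modulo 2*pi.

```python
import numpy as np

SX = np.array([[0, 1], [1, 0]], dtype=complex)
SY = np.array([[0, -1j], [1j, 0]])
SZ = np.array([[1, 0], [0, -1]], dtype=complex)

def berry_phase(theta, n=1000):
    """Discrete Berry phase of the lower eigenstate of H = n_hat . sigma
    as the field direction n_hat circles the z-axis at polar angle theta."""
    phis = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    states = []
    for phi in phis:
        h = (np.sin(theta) * np.cos(phi) * SX
             + np.sin(theta) * np.sin(phi) * SY
             + np.cos(theta) * SZ)
        _, vecs = np.linalg.eigh(h)
        states.append(vecs[:, 0])          # lower-energy eigenstate
    prod = 1.0 + 0.0j
    for k in range(n):
        # Per-point gauge phases cancel around the closed loop.
        prod *= np.vdot(states[k], states[(k + 1) % n])
    return float(-np.angle(prod))          # Berry phase, defined mod 2*pi
```

The same Wilson-loop construction carries over to numerically obtained spinor wavefunctions, which is the relevance to the finite-element setting above.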
A Variable Refrigerant Flow Heat Pump Computer Model in EnergyPlus
Raustad, Richard A. [Florida Solar Energy Center
20130101T23:59:59.000Z
This paper provides an overview of the variable refrigerant flow heat pump computer model included with the Department of Energy's EnergyPlus(TM) whole-building energy simulation software. The mathematical model for a variable refrigerant flow heat pump operating in cooling or heating mode, and a detailed model for the variable refrigerant flow direct-expansion (DX) cooling coil, are described in detail.
Verification of a VRF Heat Pump Computer Model in EnergyPlus
Nigusse, Bereket; Raustad, Richard
20130601T23:59:59.000Z
This paper provides verification results for the EnergyPlus variable refrigerant flow (VRF) heat pump computer model using manufacturer's performance data. The paper provides an overview of the VRF model, presents the verification methodology, and discusses the results. The verification provides a quantitative comparison of full- and part-load performance to manufacturer's data in cooling-only and heating-only modes of operation. The VRF heat pump computer model uses dual-range biquadratic performance curves to represent capacity and Energy Input Ratio (EIR) as functions of indoor and outdoor air temperatures, and dual-range quadratic performance curves as functions of part-load ratio for modeling part-load performance. These performance curves are generated directly from manufacturer's published performance data. The verification compared the simulation output directly to manufacturer's performance data, and found that the dual-range equation-fit VRF heat pump computer model predicts the manufacturer's performance data very well over a wide range of indoor and outdoor temperatures and part-load conditions. The predicted capacity and electric power deviations are comparable to those of equation-fit HVAC computer models commonly used for packaged and split unitary HVAC equipment.
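The curve forms named above are standard equation fits. A minimal sketch with made-up coefficients (not manufacturer data, and not the EnergyPlus source): a biquadratic in indoor and outdoor temperature for the capacity or EIR ratio, and a quadratic in part-load ratio.

```python
def biquadratic(c, t_in, t_out):
    """c = (a, b, c2, d, e, f): a + b*Ti + c2*Ti^2 + d*To + e*To^2 + f*Ti*To."""
    a, b, c2, d, e, f = c
    return a + b * t_in + c2 * t_in**2 + d * t_out + e * t_out**2 + f * t_in * t_out

def quadratic(c, plr):
    """c = (a, b, c2): a + b*PLR + c2*PLR^2, the part-load modifier curve."""
    a, b, c2 = c
    return a + b * plr + c2 * plr**2

# Placeholder coefficients for illustration only:
cap_ratio = biquadratic((0.9, 0.01, 0.0, -0.002, 0.0, 0.0), 19.4, 35.0)
eir_plr_mod = quadratic((0.1, 0.6, 0.3), 0.8)
```

In the dual-range scheme described above, two such coefficient sets per quantity would cover the low and high outdoor-temperature (or part-load) ranges.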
A Review of Ground Coupled Heat Pump Models Used in WholeBuilding Computer Simulation Programs
Do, S. L.; Haberl, J. S.
constant injection or extraction rate. The DST model implemented in TRNSYS relies on an analytical solution for the steady-flux process and a numerical method for the global and local processes to model the ground heat exchanger. COMPUTER SIMULATION ... related to whole-building energy simulation: the DOE-2.1e program, eQUEST, EnergyPlus, TRNSYS, and EnergyGauge USA. Three of these computer simulation programs (the DOE-2.1e program, eQUEST, and EnergyPlus) have been used by a number of individuals and organizations ...
Computational Human Performance Modeling For Alarm System Design
Jacques Hugo
20120701T23:59:59.000Z
The introduction of new technologies like adaptive automation systems and advanced alarm processing and presentation techniques in nuclear power plants is already having an impact on the safety and effectiveness of plant operations, and also on the role of the control room operator. This impact is expected to escalate dramatically as more and more nuclear power utilities embark on upgrade projects in order to extend the lifetime of their plants. One of the most visible impacts in control rooms will be the need to replace aging alarm systems. Because most of these alarm systems use obsolete technologies, the methods, techniques, and tools that were used to design the previous generation of alarm systems are no longer effective and need to be updated. The same applies to the need to analyze and redefine operators’ alarm handling tasks. In the past, methods for analyzing human tasks and workload have relied on crude, paper-based methods that often lacked traceability. New approaches are needed to allow analysts to model and represent the new concepts of alarm operation and human-system interaction. State-of-the-art task simulation tools are now available that offer a cost-effective and efficient method for examining the effect of operator performance in different conditions and operational scenarios. A discrete event simulation system was used by human factors researchers at the Idaho National Laboratory to develop a generic alarm handling model to examine operator performance with a simulated modern alarm system. It allowed analysts to evaluate alarm generation patterns as well as critical task times and the human workload predicted by the system.
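As a toy illustration of the discrete-event idea (not the INL tool itself; the arrival rate and handling time are invented), a single operator clearing a Poisson stream of alarms can be simulated with the Lindley recursion, giving the mean queueing delay per alarm as a crude workload proxy:

```python
import random

def mean_alarm_wait(rate_per_min=2.0, handle_min=0.4, n_alarms=20000, seed=1):
    """Mean time an alarm waits before the operator starts handling it."""
    rng = random.Random(seed)
    wait = 0.0    # queueing delay of the current alarm
    total = 0.0
    for _ in range(n_alarms):
        gap = rng.expovariate(rate_per_min)        # time since previous alarm
        # Lindley recursion: carry over any backlog from the previous alarm.
        wait = max(0.0, wait + handle_min - gap)
        total += wait
    return total / n_alarms
```

With these defaults the operator is 80% utilized, so queueing theory (M/D/1) predicts a mean wait near 0.8 minutes; a full task-simulation tool would layer alarm priorities, task interleaving, and operator workload models on top of this skeleton.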
Whitney, Daniel
to include concurrent engineering. In both cases, the Unit has established strong ties with the Computer ... with defining product data models that will support concurrent engineering. Both fabrication and assembly in manufacturing must be structured like concurrent engineering activities: the users of the research must be part ...
A Bayesian Approach for Parameter Estimation and Prediction using a Computationally Intensive Model
Dave Higdon; Jordan D. McDonnell; Nicolas Schunck; Jason Sarich; Stefan M. Wild
20140917T23:59:59.000Z
Bayesian methods have been very successful in quantifying uncertainty in physics-based problems of parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\\eta(\\theta)$, where $\\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \\eta(\\theta) + \\epsilon$, where $\\epsilon$ accounts for measurement error, and possibly other error sources. When nonlinearity is present in $\\eta(\\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and nonstandard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. While quite generally applicable, MCMC requires thousands, or even millions, of evaluations of the physics model $\\eta(\\cdot)$. This is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory (DFT) model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory (ANL).
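The calibration idea can be sketched on a toy problem (nothing here is the paper's DFT study; the model, prior, design, and noise level are all invented): run the "expensive" model at a handful of design points, fit a cheap emulator to those runs, then drive Metropolis sampling with the emulator alone.

```python
import numpy as np

rng = np.random.default_rng(0)
eta = lambda th: np.sin(th) + 0.5 * th              # stand-in for the slow model
theta_true = 1.2
y = eta(theta_true) + rng.normal(0, 0.05, size=20)  # noisy measurements

# Ensemble of "expensive" model runs, then a polynomial emulator fit to them.
design = np.linspace(0.0, 2.0, 15)
emulator = np.poly1d(np.polyfit(design, eta(design), 5))

def log_post(th, sigma=0.05):
    if not 0.0 <= th <= 2.0:                        # flat prior on [0, 2]
        return -np.inf
    return -0.5 * np.sum((y - emulator(th)) ** 2) / sigma**2

# Metropolis sampling: only the emulator is evaluated inside the loop.
th, lp, chain = 1.0, log_post(1.0), []
for _ in range(20000):
    prop = th + rng.normal(0.0, 0.1)
    lpp = log_post(prop)
    if np.log(rng.random()) < lpp - lp:
        th, lp = prop, lpp
    chain.append(th)
post_mean = np.mean(chain[5000:])                   # discard burn-in
```

The real method additionally models the emulator's own uncertainty (e.g., with a Gaussian process), so that sparse ensembles do not produce overconfident posteriors.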
Abundance determinations in HII regions: model fitting versus Te-method
L. S. Pilyugin
20021114T23:59:59.000Z
The discrepancy between the oxygen abundances in high-metallicity HII regions determined through the Te-method (and/or through the corresponding "strong lines - oxygen abundance" calibration) and those determined through model fitting (and/or through the corresponding "strong lines - oxygen abundance" calibration) is discussed. It is suggested to use the interstellar oxygen abundance in the solar vicinity, derived with very high precision from high-resolution observations of the weak interstellar absorption lines towards stars, as a "Rosetta stone" to verify the validity of the oxygen abundances derived in HII regions with the Te-method at high abundances. The agreement between the value of the oxygen abundance at the solar galactocentric distance traced by the abundances derived in HII regions through the Te-method and that derived from the interstellar absorption lines towards stars is strong evidence that i) the two-zone model for Te seems to be a realistic interpretation of the temperature structure within HII regions, and ii) the classic Te-method provides accurate oxygen abundances in HII regions. It is concluded that the "strong lines - oxygen abundance" calibrations must be based on HII regions with oxygen abundances derived with the Te-method, not on the existing grids of models for HII regions.
Modeling of BWR core meltdown accidents for application in the MELRPI.MOD2 computer code
Koh, B R; Kim, S H; Taleyarkhan, R P; Podowski, M Z; Lahey, Jr, R T
19850401T23:59:59.000Z
This report summarizes improvements and modifications made in the MELRPI computer code. A major difference between this new, updated version of the code, called MELRPI.MOD2, and the one reported previously, concerns the inclusion of a model for the BWR emergency core cooling systems (ECCS). This model and its computer implementation, the ECCRPI subroutine, account for various emergency injection modes, for both intact and rubblized geometries. Other changes to MELRPI deal with an improved model for canister wall oxidation, rubble bed modeling, and numerical integration of system equations. A complete documentation of the entire MELRPI.MOD2 code is also given, including an input guide, list of subroutines, sample input/output and program listing.
Computer modeling approach for microsphere-packed bone scaffold. Pallavi Lal, Wei Sun*
Sun, Wei
bone graft [5,6], for structural and human cellular assessment of scaffolds for bone repair [7] ... modeling approach for constructing a three-dimensional microsphere-packed bone graft structure is presented ... packing model to determine the number of microspheres packed in a synthesized bone graft. The pore size
Zhang, Jiapu
20100101T23:59:59.000Z
Evolutionary algorithms are parallel computing algorithms, and the simulated annealing algorithm is a sequential computing algorithm. This paper inserts simulated annealing into evolutionary computations and successfully develops a hybrid Self-Adaptive Evolutionary Strategy $\\mu+\\lambda$ method and a hybrid Self-Adaptive Classical Evolutionary Programming method. Numerical results on more than 40 benchmark test problems of global optimization show that the hybrid methods presented in this paper are very effective. Lennard-Jones potential energy minimization is another benchmark for testing new global optimization algorithms. It is studied through the amyloid fibril constructions in this paper. To date, there is little molecular structural data available on the AGAAAAGA palindrome in the hydrophobic region (113-120) of prion proteins. This region belongs to the N-terminal unstructured region (1-123) of prion proteins, the structure of which has proved hard to determine using NMR spectroscopy or X-ray crystallography ...
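A minimal sketch of the hybrid idea on the sphere benchmark (my own simplification, not the paper's self-adaptive method; population sizes, cooling rate, and mutation scale are invented): a (mu+lambda) evolution strategy whose offspring pool also admits worse children via a simulated-annealing Metropolis test.

```python
import math, random

def sphere(x):
    """Benchmark objective: sum of squares, global minimum 0 at the origin."""
    return sum(v * v for v in x)

def hybrid_es(dim=5, mu=5, lam=20, gens=300, seed=0):
    rng = random.Random(seed)
    pop = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(mu)]
    temp = 1.0
    for _ in range(gens):
        sigma = max(0.05, 0.3 * temp)      # mutation scale shrinks as we cool
        offspring = []
        for _ in range(lam):
            parent = rng.choice(pop)
            child = [v + rng.gauss(0, sigma) for v in parent]
            delta = sphere(child) - sphere(parent)
            # SA-style acceptance: worse children may still enter the pool.
            if delta <= 0 or rng.random() < math.exp(-delta / temp):
                offspring.append(child)
        pop = sorted(pop + offspring, key=sphere)[:mu]   # elitist (mu+lambda)
        temp *= 0.98                                     # geometric cooling
    return sphere(pop[0])
```

The annealing acceptance keeps some diversity early on, while the elitist selection guarantees the best-so-far is never lost; the paper's self-adaptive variants evolve the mutation scales themselves rather than using a fixed schedule.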
Computational Modeling and the Experimental Plasma Research Program: A White Paper Submitted ... of the fusion energy program. The experimental plasma research (EPR) program is well positioned to make major ... in fusion development and promote scientific discovery. Experimental plasma research projects explore ...
A Simulation Technique for Performance Analysis of Generic Petri Net Models of Computer Systems
Cintra, Marcelo
A Simulation Technique for Performance Analysis of Generic Petri Net Models of Computer Systems. Abstract: Many timed extensions for Petri nets have been proposed in the literature, but their analytical solutions impose limitations on the time distributions and the net topology. To overcome these limitations ...
Computational Modeling of Electrolyte/Cathode Interfaces in Proton Exchange Membrane Fuel Cells
Bjørnstad, Ottar Nordal
Computational Modeling of Electrolyte/Cathode Interfaces in Proton Exchange Membrane Fuel Cells ... Proton exchange membrane fuel cells (PEMFCs) are alternative energy conversion devices that efficiently ... The fundamental relationship between operating conditions and device performance will help to optimize the device
A computational strategy for multiscale systems with applications to Lorenz 96 model
Van Den Eijnden, Eric
A computational strategy for multiscale systems with applications to Lorenz 96 model. Ibrahim ... 2004. Abstract: Numerical schemes for systems with multiple spatiotemporal scales are investigated. The multiscale schemes use asymptotic results for this type of system which guarantee ...
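The fragment above refers to the Lorenz 96 testbed. For orientation, a minimal single-scale Lorenz 96 integration with RK4 (the paper's multiscale schemes couple fast variables to each slow one and go well beyond this sketch):

```python
import numpy as np

def l96_rhs(x, forcing=8.0):
    # dx_k/dt = (x_{k+1} - x_{k-2}) * x_{k-1} - x_k + F, indices cyclic.
    return (np.roll(x, -1) - np.roll(x, 2)) * np.roll(x, 1) - x + forcing

def integrate_l96(x0, dt=0.01, steps=1000, forcing=8.0):
    """Classic fourth-order Runge-Kutta integration of the Lorenz 96 system."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        k1 = l96_rhs(x, forcing)
        k2 = l96_rhs(x + 0.5 * dt * k1, forcing)
        k3 = l96_rhs(x + 0.5 * dt * k2, forcing)
        k4 = l96_rhs(x + dt * k3, forcing)
        x = x + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)
    return x

# 40 variables near the unstable fixed point x = F, with a tiny perturbation.
state = integrate_l96(8.0 + 0.01 * np.eye(40)[0])
```

With F = 8 the system is chaotic, which is exactly why it is a standard stress test for multiscale and data-assimilation schemes.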
In-Vehicle Testing and Computer Modeling of Electric Vehicle Batteries
Wang, ChaoYang
In-Vehicle Testing and Computer Modeling of Electric Vehicle Batteries. B. Thomas, W.B. Gu ... Abstract: A combined simulation and testing approach has been developed to evaluate battery packs in real ... accelerates the battery development cycle, and enables innovative battery design and optimization. Several
Modeling and Computing Two-settlement Oligopolistic Equilibrium in a Congested Electricity Network
Modeling and Computing Two-settlement Oligopolistic Equilibrium in a Congested Electricity Network ... with equilibrium constraints (EPEC), in which each firm solves a mathematical program with equilibrium ... problems and on parametric LCP pivoting. Numerical examples demonstrate the effectiveness of the MPEC and EPEC algorithms
Broader source: Energy.gov [DOE]
The objective of this Funding Opportunity Announcement (FOA) is to leverage scientific advancements in mathematics and computation for application to power system models and software tools, with the longterm goal of enabling realtime protection and control based on widearea sensor measurements.
Margaliot, Michael
Entrainment to Periodic Initiation and Transition Rates in a Computational Model for Gene ... the biological system must entrain or phase-lock to the periodic excitation. Entrainment is also important in synthetic biology. For example, connecting several artificial biological systems that entrain to a common
Sontag, Eduardo
Entrainment to Periodic Initiation and Transition Rates in a Computational Model for Gene ... to the solar day. In the terminology of systems theory, the biological system must entrain or phase-lock to the periodic excitation. Entrainment is also important in synthetic biology. For example, connecting several
3D Bone Microarchitecture Modeling and Fracture Risk Prediction. Department of Computer
Buffalo, State University of New York
3D Bone Microarchitecture Modeling and Fracture Risk Prediction. Hui Li, Department of Computer ... will also rise. It calls for innovative research on understanding of osteoporosis and fracture mechanisms ... state-of-the-art probabilistic approach to analyze bone fracture risk factors including demographic attributes and life styles
ParisSud XI, Université de
Computing combustion noise by combining Large Eddy Simulations with analytical models. Presented by Ignacio Duran. Abstract: Two mechanisms control combustion noise generation, as shown by Marble. A method to calculate combustion-generated noise has been implemented in a tool called CHORUS. The method
Baker, Jack W.
Conditional Spectrum Computation Incorporating Multiple Causal Earthquakes and Ground-Motion Prediction Models, by Ting Lin, Stephen C. Harmsen, Jack W. Baker, and Nicolas Luco. Abstract: The conditional ... uncertainties in all earthquake scenarios and resulting ground motions, as well as the epistemic uncertainties
A versatile computer model for the design and analysis of electric and hybrid vehicles
Stevens, Kenneth Michael
19960101T23:59:59.000Z
The primary purpose of the work reported in this thesis was to develop a versatile computer model to facilitate the design and analysis of hybrid vehicle drivetrains. A hybrid vehicle is one in which power for propulsion comes from two distinct...
Preclinical evaluation of ceramic femoral head resurfacing prostheses using computational models ... in resurfacing hip replacement (RHR) have been reported as early femoral neck fracture, infection, and loosening ... 0.534 kN to 5.34 kN. In worst-case tests representing a complete lack of superior femoral head bone ...
P&P: a Combined Push-Pull Model for Resource Monitoring in Cloud Computing Environment
Wang, Liqiang
P&P: a Combined Push-Pull Model for Resource Monitoring in Cloud Computing Environment. He Huang ... various platforms and software. Resource monitoring involves collecting information on system resources to facilitate decision making by other components in a Cloud environment. It is the foundation of many major
Teaching canal hydraulics and control using a computer game or a scale model canal
ParisSud XI, Université de
Teaching canal hydraulics and control using a computer game or a scale model canal. Pierre ... systems with automatic control algorithms. Modernization can also improve the quality of service to water ... irrigation canals are now designed and built using modern technologies allowing advanced control procedures ...
Computational Modeling of Neural Plasticity for Self-Organization of Neural Networks
Jin, Yaochu
Computational Modeling of Neural Plasticity for Self-Organization of Neural Networks. Joseph Chrol ... on the learning performance of neural networks for accomplishing machine learning tasks such as classification ... dynamics and learning performance of neural networks remains elusive. The purpose of this article
AN ADVANCED COMPUTATIONAL APPROACH TO SYSTEM MODELING OF TOKAMAK POWER PLANTS. Zoran Dragojlovic
Najmabadi, Farrokh
AN ADVANCED COMPUTATIONAL APPROACH TO SYSTEM MODELING OF TOKAMAK POWER PLANTS. Zoran Dragojlovic ... power plant system studies is being developed for the ARIES program. An operational design space has ... power plants. This allows examination of a multidimensional trade space as opposed to traditional ...
Mathematical and Computer Modelling 35 (2002) 1371-1375
Gorban, Alexander N.
20020101T23:59:59.000Z
Application to the Efficiency of Free Flow Turbines. A. Gorban', Institute of Computational Modeling, Russian ... obstacle is considered. Its application to estimating the efficiency of free flow turbines is discussed ... hydraulic turbines, i.e., the turbines that work without dams [1]. For this kind of turbine, the term
Building ventilation: a pressure airflow model, computer generation and elements of
ParisSud XI, Université de
Building ventilation: a pressure airflow model, computer generation and elements of validation ... when heating a residential building, approximately 30% of the energy loss is due to air renewal [1]. Thus ... in tropical climates, natural ventilation essentially affects the inside comfort by favouring
Many Task Computing for Modeling the Fate of Oil Discharged from the Deep Water Horizon Well
Abstract: The Deep Water Horizon well blowout on April 20th, 2010 discharged between 40,000 and 1.2 million ... O. M. Knio, Dept. of Mechanical Engineering, Johns Hopkins University, Baltimore, MD ...
COMPUTATIONAL CHALLENGES IN THE NUMERICAL TREATMENT OF LARGE AIR POLLUTION MODELS
Ostromsky, Tzvetan
COMPUTATIONAL CHALLENGES IN THE NUMERICAL TREATMENT OF LARGE AIR POLLUTION MODELS. I. Dimov, K. Georgiev, Tz. Ostromsky, R. J. van der Pas, and Z. Zlatev. Abstract: Air pollution, and especially the reduction of air pollution to some acceptable levels, is an important environmental problem, which
A Computational Model of Aging and Calcification in the Aortic Heart Valve
Mofrad, Mohammad R. K.
A Computational Model of Aging and Calcification in the Aortic Heart Valve. Eli J. Weinberg ... Abstract: The aortic heart valve undergoes geometric and mechanical changes over time. The cusps of a normal, healthy valve thicken and become less extensible over time. In the disease calcific aortic
S5 × S5 × S5 lacks the finite model property. Dept. of Computer Science
Kurucz, Agi
S5 × S5 × S5 lacks the finite model property. A. Kurucz, Dept. of Computer Science, King's College London. Abstract: It follows from algebraic results of Maddux that every multimodal logic L such that [S5, S5, . . . , S5] ⊆ L ⊆ S5^n is undecidable whenever n ≥ 3. This implies that the product logic S5 × S5 × S5
International Association for Cryptologic Research (IACR)
Modeling Computational Security in Long-Lived Systems. Ran Canetti, Ling Cheung, Dilsun ... Introduction: Computational security in long-lived systems. Security properties of cryptographic protocols ... computational power. This type of security degrades progressively over the lifetime of a protocol. However, some
Buyya, Rajkumar
CloudAnalyst: A CloudSim-based Visual Modeller for Analysing Cloud Computing Environments and Applications. Bhathiya Wickremasinghe, Rodrigo N. Calheiros, and Rajkumar Buyya. The Cloud Computing and Distributed Systems (CLOUDS) Laboratory, Department of Computer Science and Software Engineering, The University
Test of the notch technique for determining the radial sensitivity of the optical model potential
Lei Yang; ChengJian Lin; Huiming Jia; XinXing Xu; NanRu Ma; LiJie Sun; Feng Yang; HuanQiao Zhang; ZuHua Li; DongXi Wang
20150810T23:59:59.000Z
Detailed investigations of the notch technique are performed on ideal data generated by the optical model potential parameters extracted from the 16O+208Pb system at a laboratory energy of 129.5 MeV, to study the sensitivity of this technique to the model parameters as well as to the experimental data. It is found that, for the perturbation parameters, a sufficiently large reduced fraction and an appropriately small perturbation width are necessary to determine the accurate radial sensitivity, while for the potential parameters almost no dependence was observed. For the experimental measurements, the number of data points has little influence for the heavy target system, and the relative inner information of the nuclear potential can be derived when the measurement is extended to a lower cross section.
Yock, Adam D., Email: ADYock@mdanderson.org; Kudchadker, Rajat J. [Department of Radiation Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 and The Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas 77030 (United States)]; Rao, Arvind [Department of Bioinformatics and Computational Biology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 and The Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas 77030 (United States)]; Dong, Lei [Scripps Proton Therapy Center, San Diego, California 92121 and The Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas 77030 (United States)]; Beadle, Beth M.; Garden, Adam S. [Department of Radiation Oncology, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 (United States)]; Court, Laurence E.
[Department of Radiation Physics and Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas 77030 and The Graduate School of Biomedical Sciences, The University of Texas Health Science Center at Houston, Houston, Texas 77030 (United States)]
20140515T23:59:59.000Z
Purpose: The purpose of this work was to develop and evaluate the accuracy of several predictive models of variation in tumor volume throughout the course of radiation therapy. Methods: Nineteen patients with oropharyngeal cancers were imaged daily with CT-on-rails for image-guided alignment per an institutional protocol. The daily volumes of 35 tumors in these 19 patients were determined and used to generate (1) a linear model in which tumor volume changed at a constant rate, (2) a general linear model that utilized the power fit relationship between the daily and initial tumor volumes, and (3) a functional general linear model that identified and exploited the primary modes of variation between time series describing the changing tumor volumes. Primary and nodal tumor volumes were examined separately. The accuracy of these models in predicting daily tumor volumes was compared with that of static and linear reference models using leave-one-out cross-validation. Results: In predicting the daily volume of primary tumors, the general linear model and the functional general linear model were more accurate than the static reference model by 9.9% (range: -11.6% to 23.8%) and 14.6% (range: -7.3% to 27.5%), respectively, and were more accurate than the linear reference model by 14.2% (range: -6.8% to 40.3%) and 13.1% (range: -1.5% to 52.5%), respectively. In predicting the daily volume of nodal tumors, only the 14.4% (range: -11.1% to 20.5%) improvement in accuracy of the functional general linear model compared to the static reference model was statistically significant. Conclusions: A general linear model and a functional general linear model trained on data from a small population of patients can predict the primary tumor volume throughout the course of radiation therapy with greater accuracy than standard reference models.
These more accurate models may increase the prognostic value of information about the tumor garnered from pretreatment computed tomography images and facilitate improved treatment management.
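The model comparison via leave-one-out cross-validation can be sketched on synthetic data (the cohort below is simulated, not the patient data, and the "linear model" here is a deliberately simple stand-in): each patient is held out in turn, a shared per-day shrinkage rate is fit to the others, and the held-out patient's daily volumes are predicted.

```python
import numpy as np

rng = np.random.default_rng(3)
days = np.arange(30)
# Synthetic cohort: initial volumes 10-40 cc shrinking ~1.5%/day plus noise.
cohort = [(v0, v0 * np.exp(-0.015 * days) + rng.normal(0, 0.3, days.size))
          for v0 in rng.uniform(10, 40, 12)]

def loocv_errors():
    """Mean absolute daily-volume error: static model vs shared-rate linear model."""
    static_err, linear_err = [], []
    for i, (v0, vols) in enumerate(cohort):
        train = [c for j, c in enumerate(cohort) if j != i]
        # Shared fractional per-day rate estimated from the training patients.
        rate = np.mean([np.polyfit(days, v / u0, 1)[0] for u0, v in train])
        static_err.append(np.mean(np.abs(vols - v0)))              # volume stays at day 0
        linear_err.append(np.mean(np.abs(vols - (v0 + v0 * rate * days))))
    return float(np.mean(static_err)), float(np.mean(linear_err))

static_mae, linear_mae = loocv_errors()
```

The paper's general linear and functional models refine exactly this template, replacing the single shared rate with power-fit and principal-mode structure.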
Unit physics testing of a mix model in an Eulerian fluid computation
Vold, Erik [Los Alamos National Laboratory; Douglass, Rod [Los Alamos National Laboratory
20100101T23:59:59.000Z
A K-L turbulence mix model driven with a drag-buoyancy source term is tested in an Eulerian code in a series of basic unit-physics tests, as part of a mix validation milestone. The model and the closure coefficient values are derived in the work of Dimonte-Tipton [DT] in Phys. Fluids 18, 085101 (2006), and many of the test problems were reported there, where the mix model operated in Lagrange computations. The drag-buoyancy K-L mix model was implemented within the Eulerian code framework by A.J. Scannapieco. Mix model performance is evaluated in terms of mix width growth rates compared to experiments in select regimes. Results in our Eulerian code are presented for several unit-physics 1D test problems, including the decay of homogeneous isotropic turbulence (HIT), Rayleigh-Taylor (RT) unstable mixing, shock amplification of initial turbulence, and Richtmyer-Meshkov (RM) mixing in several single-shock test cases and in comparison to two RM experiments including reshock (Vetter-Sturtevant and Poggi, et al.). Sensitivity to model parameters, to Atwood number, and to initial conditions is examined. Results here are in good agreement in some tests (HIT, RT) with the previous results reported for the mix model in the Lagrange calculations. The HIT turbulent decay agrees closely with analytic expectations, and the RT growth rate matches experimental values for the default values of the model coefficients proposed in [DT]. Results for RM characterized with a power-law growth rate differ from the previous mix model work but are still within the range for reasonable agreement with experiments. Sensitivity to IC values in the RM studies is examined; results are sensitive to initial values of L[t=0], which largely determines the RM mix layer growth rate and generally differs from the IC values used in the RT studies. Result sensitivity to initial turbulence, K[t=0], is seen to be small but significant above a threshold value.
Initial conditions can be adjusted so that single shock RM mix width results match experiments but we have not been able to obtain a good match for first shock and reshock growth rates in the same experiment with a single set of parameters and Ie. Problematic issues with KH test problems are described. Resolution studies for an RM test problem show the KL mix growth rate decreases as it converges at a supralinear rate, and, convergence requires a fine grid (on the order of 10 microns). For comparison, a resolution study of a second mix model [Scannapieco and Cheng, Phys.Lett.A, 299(1),49, (2002)] acting on a two fluid interface problem was examined. The mix in this case was found to increase with grid resolution at low to moderate resolutions, but converged at comparably fine resolutions. In conclusion, these tests indicate that the Eulerian code KL model, using the Dimonte Tipton default model closure coefficients, achieve reasonable results across many of the unitphysics experimental conditions. However, we were unable to obtain good matches simultaneously for shock and reshock mix in a single experiment. Results are sensitive to initial conditions in the regimes under study, with different IC best suited to RT or RM mix. It is reasonable to expect IC sensitivity in extrapolating to high energy density regimes, or to experiments with deceleration due to arbitrary combinations of RT and RM. As a final comparison, the atomically generated mix fraction and the mix width were each compared for the KL mix model and the Scannapieco model on an identical RM test problem. The Scannapieco mix fraction and width grow linearly. The KL mix fraction and width grow with the same power law exponent, in contrast to expectations from analysis. 
In future work it is proposed to do more head-to-head comparisons between these two models and other mix model options on a full suite of physics test problems, such as interfacial deceleration due to pressure buildup during an idealized ICF implosion.
Computational modeling of GTA (gas tungsten arc) welding with emphasis on surface tension effects
Zacharia, T.; David, S.A.
19900101T23:59:59.000Z
A computational study of the convective heat transfer in the weld pool during gas tungsten arc (GTA) welding of Type 304 stainless steel is presented. The solution of the transport equations is based on a control volume approach which utilizes directly the integral form of the governing equations. The computational model considers buoyancy, electromagnetic, and surface tension forces in the solution of convective heat transfer in the weld pool. In addition, the model treats the weld pool surface as a deformable free surface. The computational model includes weld metal vaporization and temperature-dependent thermophysical properties. The results indicate that consideration of weld pool vaporization effects and temperature-dependent thermophysical properties significantly influences the weld model predictions. Theoretical predictions of the weld pool surface temperature distributions and the cross-sectional weld pool size and shape were compared with corresponding experimental measurements. Comparison of the theoretically predicted and the experimentally obtained surface temperature profiles indicated agreement within ±8%. The predicted weld cross-section profiles were found to agree very well with actual weld cross-sections for the best theoretical models. 26 refs., 8 figs.
High Performance Computing Modeling Advances Accelerator Science for High Energy Physics
Amundson, James; Macridin, Alexandru; Spentzouris, Panagiotis
20140429T23:59:59.000Z
The development and optimization of particle accelerators are essential for advancing our understanding of the properties of matter, energy, space and time. Particle accelerators are complex devices whose behavior involves many physical effects on multiple scales. Therefore, advanced computational tools utilizing high-performance computing (HPC) are essential for accurately modeling them. In the past decade, the DOE SciDAC program has produced such accelerator-modeling tools, which have been employed to tackle some of the most difficult accelerator science problems. In this article we discuss the Synergia beam-dynamics framework and its applications to high-intensity particle accelerator physics. Synergia is an accelerator simulation package capable of handling the entire spectrum of beam dynamics simulations. We present the design principles, key physical and numerical models in Synergia, and its performance on HPC platforms. Finally, we present the results of Synergia applications for the Fermilab proton source upgrade, known as the Proton Improvement Plan (PIP).
Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.
20100504T23:59:59.000Z
A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extension, are addressed, which enables the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature, and concentration of species.
Computational Modeling of Blood Flow in the TrapEase Inferior Vena Cava Filter
Singer, M A; Henshaw, W D; Wang, S L
20080204T23:59:59.000Z
To evaluate the flow hemodynamics of the TrapEase vena cava filter using three-dimensional computational fluid dynamics, including simulated thrombi of multiple shapes, sizes, and trapping positions. The study was performed to identify potential areas of recirculation and stagnation and areas in which trapped thrombi may influence intrafilter thrombosis. Computer models of the TrapEase filter, thrombi (volumes ranging from 0.25 mL to 2 mL, 3 different shapes), and a 23-mm-diameter cava were constructed. The hemodynamics of steady-state flow at Reynolds number 600 was examined for the unoccluded and partially occluded filter. Axial velocity contours and wall shear stresses were computed. Flow in the unoccluded TrapEase filter experienced minimal disruption, except near the superior and inferior tips where low-velocity flow was observed. For spherical thrombi in the superior trapping position, stagnant and recirculating flow was observed downstream of the thrombus; the volume of stagnant flow and the peak wall shear stress increased monotonically with thrombus volume. For inferiorly trapped spherical thrombi, marked disruption to the flow was observed along the cava wall ipsilateral to the thrombus and in the interior of the filter. Spherically shaped thrombi produced a lower peak wall shear stress than conically shaped thrombi and a larger peak stress than ellipsoidal thrombi. We have designed and constructed a computer model of the flow hemodynamics of the TrapEase IVC filter with varying shapes, sizes, and positions of thrombi. The computer model offers several advantages over in vitro techniques, including improved resolution, ease of evaluating different thrombus sizes and shapes, and easy adaptation for new filter designs and flow parameters.
Results from the model also support a previously reported finding from photochromic experiments that suggest the inferior trapping position of the TrapEase IVC filter leads to an intrafilter region of recirculating/stagnant flow with very low shear stress that may be thrombogenic.
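The simulations above fix a single dimensionless operating point: steady flow at Reynolds number 600 in a 23-mm cava. A minimal sketch of the defining relation Re = rho * U * D / mu, useful for back-converting such a Reynolds number into a mean velocity; the blood property values in the usage note are illustrative assumptions, not taken from the study.

```python
def reynolds(rho, velocity, diameter, mu):
    """Reynolds number Re = rho * U * D / mu for pipe-like flow (SI units)."""
    return rho * velocity * diameter / mu

def velocity_for_re(re, rho, diameter, mu):
    """Mean velocity that yields a target Reynolds number."""
    return re * mu / (rho * diameter)
```

With typical assumed blood properties (rho ≈ 1060 kg/m³, mu ≈ 3.5 mPa·s) and D = 0.023 m, `velocity_for_re(600, 1060.0, 0.023, 0.0035)` gives a mean velocity of roughly 0.09 m/s.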
Carbajo, Juan (Oak Ridge National Laboratory, Oak Ridge, TN); Jeong, Hae-Yong (Korea Atomic Energy Research Institute, Daejeon, Korea); Wigeland, Roald (Idaho National Laboratory, Idaho Falls, ID); Corradini, Michael (University of Wisconsin, Madison, WI); Schmidt, Rodney Cannon; Thomas, Justin (Argonne National Laboratory, Argonne, IL); Wei, Tom (Argonne National Laboratory, Argonne, IL); Sofu, Tanju (Argonne National Laboratory, Argonne, IL); Ludewig, Hans (Brookhaven National Laboratory, Upton, NY); Tobita, Yoshiharu (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Ohshima, Hiroyuki (Japan Atomic Energy Agency, Ibaraki-ken, Japan); Serre, Frederic (Centre d'études nucléaires de Cadarache, CEA, France)
20110601T23:59:59.000Z
This report summarizes the results of an expert-opinion elicitation activity designed to qualitatively assess the status and capabilities of currently available computer codes and models for accident analysis and reactor safety calculations of advanced sodium fast reactors, and to identify important gaps. The twelve-member panel consisted of representatives from five U.S. National Laboratories (SNL, ANL, INL, ORNL, and BNL), the University of Wisconsin, KAERI, JAEA, and CEA. The major portion of this elicitation activity occurred during a two-day meeting held on Aug. 10-11, 2010 at Argonne National Laboratory. There were two primary objectives of this work: (1) identify computer codes currently available for SFR accident analysis and reactor safety calculations; and (2) assess the status and capability of current US computer codes to adequately model the required accident scenarios and associated phenomena, and identify important gaps. During the review, panel members identified over 60 computer codes that are currently available in the international community to perform different aspects of SFR safety analysis for various event scenarios and accident categories. A brief description of each of these codes together with references (when available) is provided. An adaptation of the Predictive Capability Maturity Model (PCMM) for computational modeling and simulation is described for use in this work. The panel's assessment of the available US codes is presented in the form of nine tables, organized into groups of three for each of three risk categories considered: anticipated operational occurrences (AOOs), design basis accidents (DBA), and beyond design basis accidents (BDBA). A set of summary conclusions is drawn from the results obtained.
At the highest level, the panel judged that current US code capabilities are adequate for licensing given reasonable margins, but expressed concern that US code development activities had stagnated and that the experienced user base and the experimental validation base were decaying away quickly.
Victoria, University of
On the Use of Computational Models for Wave Climate Assessment in Support of the Wave Energy Industry
Effective, economic extraction of ocean wave energy requires an intimate understanding of the ocean wave ...
Steinhaus, Thomas
20100101T23:59:59.000Z
Computational Fluid Dynamics (CFD) codes are being increasingly used in the field of fire safety engineering. They provide, amongst other things, velocity, species and heat flux distributions throughout the computational ...
DualTrust: A Trust Management Model for Swarm-Based Autonomic Computing Systems
Maiden, Wendy M.
20100501T23:59:59.000Z
Trust management techniques must be adapted to the unique needs of the application architectures and problem domains to which they are applied. For autonomic computing systems that utilize mobile agents and ant colony algorithms for their sensor layer, certain characteristics of the mobile agent ant swarm (its lightweight, ephemeral nature and indirect communication) make this adaptation especially challenging. This thesis looks at the trust issues and opportunities in swarm-based autonomic computing systems and finds that by monitoring the trustworthiness of the autonomic managers rather than the swarming sensors, the trust management problem becomes much more scalable and still serves to protect the swarm. After analyzing the applicability of trust management research as it has been applied to architectures with similar characteristics, this thesis specifies the required characteristics for trust management mechanisms used to monitor the trustworthiness of entities in a swarm-based autonomic computing system and describes a trust model that meets these requirements.
Guarneri, Christine E.
20110808T23:59:59.000Z
John Bongaarts' proximate determinants model of fertility has accounted for over 90 percent of variation in the total fertility rate (TFR) of primarily developing nations and historical populations. Recently, dramatically low fertility rates across...
Ortiz Prada, Rubiel Paul
20120214T23:59:59.000Z
focused on developing and extending a new technology for determining optimal well spacing in tight gas reservoirs that maximize profitability. To achieve the research objectives, an integrated multiwell reservoir and decision model that fully incorporates...
James P. Crutchfield; Christopher J. Ellison; Ryan G. James; John R. Mahoney
20100730T23:59:59.000Z
We adapt tools from information theory to analyze how an observer comes to synchronize with the hidden states of a finitary, stationary stochastic process. We show that synchronization is determined by both the process's internal organization and by an observer's model of it. We analyze these components using the convergence of stateblock and blockstate entropies, comparing them to the previously known convergence properties of the Shannon block entropy. Along the way, we introduce a hierarchy of information quantifiers as derivatives and integrals of these entropies, which parallels a similar hierarchy introduced for block entropy. We also draw out the duality between synchronization properties and a process's controllability. The tools lead to a new classification of a process's alternative representations in terms of minimality, synchronizability, and unifilarity.
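The Shannon block entropy whose convergence properties the authors build on can be estimated directly from data. A minimal sketch, using the standard overlapping-block estimator (an illustrative choice, not necessarily the paper's exact estimator):

```python
from collections import Counter
from math import log2

def block_entropy(seq, L):
    """Empirical Shannon entropy H(L), in bits, of length-L blocks of seq."""
    blocks = [tuple(seq[i:i + L]) for i in range(len(seq) - L + 1)]
    counts = Counter(blocks)
    n = len(blocks)
    return -sum((c / n) * log2(c / n) for c in counts.values())
```

Differences such as H(L) - H(L-1) estimate the entropy rate, and their further derivatives and integrals give the kind of hierarchy of quantifiers the abstract refers to. For a strictly alternating binary sequence, for instance, H(1) is 1 bit while H(2) - H(1) is nearly 0, reflecting full predictability once one symbol is seen.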
Compare Energy Use in Variable Refrigerant Flow Heat Pumps Field Demonstration and Computer Model
Sharma, Chandan; Raustad, Richard
20130601T23:59:59.000Z
Variable Refrigerant Flow (VRF) heat pumps are often regarded as energy-efficient air-conditioning systems which offer electricity savings as well as reduction in peak electric demand while providing improved individual zone setpoint control. One of the key advantages of VRF systems is minimal duct losses, which provide significant reduction in energy use and duct space. However, there is limited data available to show their actual performance in the field. Since VRF systems are increasingly gaining market share in the US, it is highly desirable to have more actual field performance data for these systems. An effort was made in this direction to monitor VRF system performance over an extended period of time in a US national lab test facility. Due to increasing demand by the energy modeling community, an empirical model to simulate VRF systems was implemented in the building simulation program EnergyPlus. This paper presents the comparison of energy consumption as measured in the national lab and as predicted by the program. For increased accuracy in the comparison, a customized weather file was created by using measured outdoor temperature and relative humidity at the test facility. Other inputs to the model included building construction, a VRF system model based on lab-measured performance, occupancy of the building, lighting/plug loads, and thermostat setpoints. Infiltration model inputs were adjusted in the beginning to tune the computer model, and then subsequent field measurements were compared to the simulation results. Differences between the computer model results and actual field measurements are discussed. The computer-generated VRF performance closely resembled the field measurements.
Superior model for fault tolerance computation in designing nanosized circuit systems
Singh, N. S. S., Email: narinderjit@petronas.com.my; Muthuvalu, M. S., Email: msmuthuvalu@gmail.com [Fundamental and Applied Sciences Department, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, Perak (Malaysia); Asirvadam, V. S., Email: vijanthsagayan@petronas.com.my [Electrical and Electronics Engineering Department, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, Perak (Malaysia)
20141024T23:59:59.000Z
As CMOS technology scales into the nanometer regime, reliability turns out to be a decisive subject in the design methodology of nanosized circuit systems. As a result, several computational approaches have been developed to compute and evaluate the reliability of desired nanoelectronic circuits. The process of computing reliability becomes very troublesome and time-consuming as the computational complexity builds up with the desired circuit size. Therefore, being able to measure reliability quickly and accurately is fast becoming necessary in designing modern logic integrated circuits. For this purpose, the paper firstly looks into the development of an automated reliability evaluation tool based on the generalization of the Probabilistic Gate Model (PGM) and Boolean Difference-based Error Calculator (BDEC) models. The MATLAB-based tool allows users to significantly speed up the task of reliability analysis for a very large number of nanoelectronic circuits. Secondly, by using the developed automated tool, the paper explores a comparative study involving reliability computation and evaluation by the PGM and BDEC models for different implementations of same-functionality circuits. Based on the reliability analysis, BDEC gives exact and transparent reliability measures, but as the complexity of the same-functionality circuits with respect to gate error increases, the reliability measure by BDEC tends to be lower than the reliability measure by PGM. The lower reliability measure by BDEC is explained in this paper using the distribution of different signal input patterns over time for same-functionality circuits. Simulation results conclude that the reliability measure by BDEC depends not only on faulty gates but also on circuit topology, the probability of input signals being one or zero, and the probability of error on signal lines.
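The core of a probabilistic gate model computation is simple to illustrate: propagate signal probabilities through the netlist, letting each gate flip its ideal output with a gate error probability eps. A minimal sketch under that assumption (the gate set and flip model here are chosen for illustration; the paper's generalized PGM and BDEC formulations are more elaborate):

```python
def pgm_signal(p_inputs, gate, eps):
    """Probability that the gate output is logic 1, given the probabilities
    that each input is 1 and a per-gate error probability eps (the output
    flips with probability eps). Assumes statistically independent inputs."""
    if gate == "NAND":
        p_ideal = 1.0 - p_inputs[0] * p_inputs[1]  # ideal output-1 probability
    elif gate == "NOT":
        p_ideal = 1.0 - p_inputs[0]
    else:
        raise ValueError("unsupported gate: " + gate)
    # with probability eps the faulty gate inverts its ideal output
    return (1.0 - eps) * p_ideal + eps * (1.0 - p_ideal)
```

Chaining such calls gate by gate yields the output signal probability of a small netlist, and comparing it against the error-free value gives a reliability estimate. The independence assumption across gate inputs is where reconvergent fan-out can make an approximate PGM-style analysis diverge from an exact method such as BDEC, consistent with the differences the abstract reports.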
A Computational Model based on Gross' Emotion Regulation Theory. Tibor Bosse (tbosse@few.vu.nl)
Treur, Jan
A computational model for emotion regulation, formalizing the model informally described by Gross (1998). The model covers both quantitative aspects (such as levels of emotional response) and qualitative aspects (such as decisions to regulate one's emotion).
Parametric Studies and Optimization of Eddy Current Techniques through Computer Modeling
Todorov, E. I. [EWI, Engineering and NDE, 1250 Arthur E. Adams Dr., Columbus, OH 432213585 (United States)
20070321T23:59:59.000Z
The paper demonstrates the use of computer models for parametric studies and optimization of surface and subsurface eddy current techniques. The study with a high-frequency probe investigates the effect of eddy current frequency and probe shape on the detectability of flaws in the steel substrate. The low-frequency sliding probe study addresses the effect of conductivity between the fastener and the hole, frequency, and coil separation distance on the detectability of flaws in subsurface layers.
NREL Computer Models Integrate Wind Turbines with Floating Platforms (Fact Sheet)
Not Available
20110701T23:59:59.000Z
Far off the shores of energy-hungry coastal cities, powerful winds blow over the open ocean, where the water is too deep for today's seabed-mounted offshore wind turbines. For the United States to tap into these vast offshore wind energy resources, wind turbines must be mounted on floating platforms to be cost-effective. Researchers at the National Renewable Energy Laboratory (NREL) are supporting that development with computer models that allow detailed analyses of such floating wind turbines.
Computer simulation and modeling; you've got quite a task before you.
Vonessen, Nikolaus
An Additive Bivariate Hierarchical Model for Functional Data and Related Computations
Redd, Andrew Middleton
20111021T23:59:59.000Z
Estimating the penalties is computationally intensive. In addition, the space for finding the penalty parameters is four-dimensional: two for each predictor variable, one each for the mean and principal component functions. Optimizing over a four-dimensional ... specified by the structure of the formula used. I implement two new formula operators that only work with the pfda package; %&% binds together variables: on the left side of the formula it indicates the paired model, on the right an additive variable ...
Mork, B; Nelson, R; Kirkendall, B; Stenvig, N
20091130T23:59:59.000Z
Application of BPL technologies to existing overhead high-voltage power lines would benefit greatly from improved simulation tools capable of predicting performance, such as the electromagnetic fields radiated from such lines. Existing EMTP-based frequency-dependent line models are attractive since their parameters are derived from physical design dimensions which are easily obtained. However, to calculate the radiated electromagnetic fields, detailed current distributions need to be determined. This paper presents a method of using EMTP line models to determine the current distribution on the lines, as well as a technique for using these current distributions to determine the radiated electromagnetic fields.
Unit physics performance of a mix model in Eulerian fluid computations
Vold, Erik [Los Alamos National Laboratory]; Douglass, Rod [Los Alamos National Laboratory]
20110125T23:59:59.000Z
In this report, we evaluate the performance of a K-L drag-buoyancy mix model, described in a reference study by Dimonte-Tipton [1], hereafter denoted as [DT]. The model was implemented in an Eulerian multi-material AMR code, and the results are discussed here for a series of unit physics tests. The tests were chosen to calibrate the model coefficients against empirical data, principally from RT (Rayleigh-Taylor) and RM (Richtmyer-Meshkov) experiments, and the present results are compared to experiments and to results reported in [DT]. Results show the Eulerian implementation of the mix model agrees well with expectations for test problems in which there is no convective flow of the mass-averaged fluid, i.e., in RT mix or in the decay of homogeneous isotropic turbulence (HIT). In RM shock-driven mix, the mix layer moves through the Eulerian computational grid, and there are differences with the previous results computed in a Lagrange frame [DT]. The differences are attributed to the mass-averaged fluid motion and examined in detail. Shock and reshock mix are not well matched simultaneously. Results are also presented and discussed regarding model sensitivity to coefficient values and to initial conditions (IC), grid convergence, and the generation of atomically mixed volume fractions.
Leishear, Robert A.; Lee, Si Y.; Poirier, Michael R.; Steeper, Timothy J.; Ervin, Robert C.; Giddings, Billy J.; Stefanko, David B.; Harp, Keith D.; Fowley, Mark D.; Van Pelt, William B.
20121007T23:59:59.000Z
Computational fluid dynamics (CFD) is recognized as a powerful engineering tool. That is, CFD has advanced over the years to the point where it can now give us deep insight into the analysis of very complex processes. There is a danger, though, that an engineer can place too much confidence in a simulation. If a user is not careful, it is easy to believe that if you plug in the numbers, the answer comes out, and you are done. This assumption can lead to significant errors. As we discovered in the course of a study on behalf of the Department of Energy's Savannah River Site in South Carolina, CFD models fail to capture some of the large variations inherent in complex processes. These variations, or scatter, in experimental data emerge from physical tests and are inadequately captured or expressed by calculated mean values for a process. This anomaly between experiment and theory can lead to serious errors in engineering analysis and design unless a correction factor, or safety factor, is experimentally validated. For this study, blending times for the mixing of salt solutions in large storage tanks were the process of concern under investigation. This study focused on the blending processes needed to mix salt solutions to ensure homogeneity within waste tanks, where homogeneity is required to control radioactivity levels during subsequent processing. Two of the requirements for this task were to determine the minimum number of submerged centrifugal pumps required to blend the salt mixtures in a full-scale tank in half a day or less, and to recommend reasonable blending times to achieve nearly homogeneous salt mixtures. A full-scale, low-flow pump with a total discharge flow rate of 500 to 800 gpm was recommended with two opposing 2.27-inch-diameter nozzles. To make this recommendation, both experimental and CFD modeling were performed.
Lab researchers found that, although CFD provided good estimates of an average blending time, experimental blending times varied significantly from the average.
Paris-Sud XI, Université de
A Determinant of Stirling Cycle Numbers Counts Unlabeled Acyclic Single-Source Automata. David Callan. Revised 7 May 2008, accepted 18 May 2008. We show that a determinant of Stirling cycle numbers counts unlabeled acyclic single-source automata, giving a formula for the number of acyclic automata with a given set of sources. Keywords: Stirling cycle number
Huang, Yongxin
20100116T23:59:59.000Z
A computer cluster is a high-performance computing system consisting of a group of computers. The cluster, composed of multiple identical high-performance computers (compute nodes) and a high-performance network, is an ideal platform ... retains a reasonable price without compromising computation power. Due to these advantages, the computer cluster has become the mainstream of high-performance computation and is widely used in the fields of science, engineering, and business. From the TOP500 ...
Quantum Analogical Modeling: A General Quantum Computing Algorithm for Predicting Language Behavior
Royal Skousen
20051018T23:59:59.000Z
This paper proposes a general quantum algorithm that can be applied to any classical computer program. Each computational step is written using reversible operators, but the operators remain classical in that the qubits take on values of only zero and one. This classical restriction on the quantum states allows the copying of qubits, a necessary requirement for doing general classical computation. Parallel processing of the quantum algorithm proceeds because of the superpositioning of qubits, the only aspect of the algorithm that is strictly quantum mechanical. Measurement of the system collapses the superposition, leaving only one state that can be observed. In most instances, the loss of information as a result of measurement would be unacceptable. But the linguistically motivated theory of Analogical Modeling (AM) proposes that the probabilistic nature of language behavior can be accurately modeled in terms of the simultaneous analysis of all possible contexts (referred to as supracontexts) providing one selects a single supracontext from those supracontexts that are homogeneous in behavior (namely, supracontexts that allow no increase in uncertainty). The amplitude for each homogeneous supracontext is proportional to its frequency of occurrence, with the result that the probability of selecting one particular supracontext to predict the behavior of the system is proportional to the square of its frequency.
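The selection rule at the end of the abstract (amplitude proportional to frequency, so selection probability proportional to the square of frequency) can be mimicked classically. A hypothetical sketch, with supracontext labels and counts that are illustrative rather than drawn from AM's linguistic data:

```python
import random

def select_supracontext(freqs, rng=random):
    """Sample a homogeneous supracontext with probability proportional to
    the square of its frequency of occurrence (squared-amplitude rule)."""
    weights = {k: f * f for k, f in freqs.items()}  # squared "amplitudes"
    total = sum(weights.values())
    r = rng.random() * total
    for k, w in weights.items():
        r -= w
        if r <= 0:
            return k
    return k  # guard against floating-point round-off
```

With frequencies {"a": 3, "b": 1}, "a" is chosen with probability 9/10 rather than the 3/4 a frequency-proportional rule would give, reproducing the quadratic bias that measurement of the superposed state induces.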
Dr. Chenn Zhou
20081015T23:59:59.000Z
Pulverized coal injection (PCI) into the blast furnace (BF) has been recognized as an effective way to decrease the coke and total energy consumption along with minimization of environmental impacts. However, increasing the amount of coal injected into the BF is currently limited by the lack of knowledge of some issues related to the process. It is therefore important to understand the complex physical and chemical phenomena in the PCI process. Due to the difficulty of obtaining true BF measurements, computational fluid dynamics (CFD) modeling has been identified as a useful technology to provide such knowledge. CFD simulation is powerful for providing detailed information on flow properties and performing parametric studies for process design and optimization. In this project, comprehensive 3D CFD models have been developed to simulate the PCI process under actual furnace conditions. These models provide raceway size and flow property distributions. The results have provided guidance for optimizing the PCI process.
Briceno, Luis Diego [Colorado State University, Fort Collins]; Khemka, Bhavesh [Colorado State University, Fort Collins]; Siegel, Howard Jay [Colorado State University, Fort Collins]; Maciejewski, Anthony A [ORNL]; Groer, Christopher S [ORNL]; Koenig, Gregory A [ORNL]; Okonski, Gene D [ORNL]; Poole, Stephen W [ORNL]
20110101T23:59:59.000Z
This study considers a heterogeneous computing system and corresponding workload being investigated by the Extreme Scale Systems Center (ESSC) at Oak Ridge National Laboratory (ORNL). The ESSC is part of a collaborative effort between the Department of Energy (DOE) and the Department of Defense (DoD) to deliver research, tools, software, and technologies that can be integrated, deployed, and used in both DOE and DoD environments. The heterogeneous system and workload described here are representative of a prototypical computing environment being studied as part of this collaboration. Each task can exhibit a time-varying importance or utility to the overall enterprise. In this system, an arriving task has an associated priority and precedence. The priority is used to describe the importance of a task, and precedence is used to describe how soon the task must be executed. These two metrics are combined to create a utility function curve that indicates how valuable it is for the system to complete a task at any given moment. This research focuses on using time-utility functions to generate a metric that can be used to compare the performance of different resource schedulers in a heterogeneous computing system. The contributions of this paper are: (a) a mathematical model of a heterogeneous computing system where tasks arrive dynamically and need to be assigned based on their priority, precedence, utility characteristic class, and task execution type, (b) the use of priority and precedence to generate time-utility functions that describe the value a task has at any given time, (c) the derivation of a metric based on the total utility gained from completing tasks to measure the performance of the computing environment, and (d) a comparison of the performance of resource allocation heuristics in this environment.
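As a sketch of how priority and precedence might combine into a time-utility function and a total-utility metric, consider the following. The decaying-exponential shape, the parameter names, and the numbers are assumptions for illustration; the ESSC study's actual curves are not reproduced here.

```python
import math

def utility(priority_value, soft_deadline, decay_rate, t):
    """Hypothetical time-utility function: a task completed at time t earns
    its full priority-derived value up to a precedence-derived soft
    deadline, after which the value decays exponentially (one plausible
    shape, not the exact curve used in the study)."""
    if t <= soft_deadline:
        return float(priority_value)
    return priority_value * math.exp(-decay_rate * (t - soft_deadline))

def total_utility(completions):
    """Scheduler performance metric: total utility accrued over all
    completed tasks, each given as a tuple
    (priority_value, soft_deadline, decay_rate, completion_time)."""
    return sum(utility(p, d, k, t) for p, d, k, t in completions)
```

Comparing two schedulers then reduces to comparing `total_utility` of their completion schedules for the same workload.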
Zhou, Shujia
20090101T23:59:59.000Z
"Acceleration of Numerical Weather Prediction," Proceedings ... Computer Systems for Climate and Weather Models. Shujia Zhou. ... processes in climate and weather models demands a continual ...
Shimizu, Y.
20010111T23:59:59.000Z
This report provides computational results of an extensive study to examine the following: (1) infinite-media neutron-multiplication factors; (2) material bucklings; (3) bounding infinite-media critical concentrations; (4) bounding finite critical dimensions of water-reflected and homogeneously water-moderated one-dimensional systems (i.e., spheres, cylinders of infinite length, and slabs that are infinite in two dimensions) that were comprised of various proportions and densities of plutonium oxides and uranium oxides, each having various isotopic compositions; and (5) sensitivity coefficients of delta-k(eff) with respect to delta changes in critical geometry dimensions, determined for each of the three geometries that were studied. The study was undertaken to support the development of a standard that is sponsored by the International Standards Organization (ISO) under Technical Committee 85, Nuclear Energy (TC 85); Subcommittee 5, Nuclear Fuel Technology (SC 5); Working Group 8, Standardization of Calculations, Procedures and Practices Related to Criticality Safety (WG 8). The designation and title of the ISO TC 85/SC 5/WG 8 standard working draft is WD 14941, ''Nuclear energy - Fissile materials - Nuclear criticality control and safety of plutonium-uranium oxide fuel mixtures outside of reactors.'' Various ISO member participants performed similar computational studies using their indigenous computational codes to provide comparative results for analysis in the development of the standard.
Gerkmann, Ralf
J. Theis, Computational Modeling in Biology, Institute of Bioinformatics and Systems Biology. Title: From data analysis to network modeling, with applications in systems biology. Author: Fabian ... at detailed models of the system of interest. Our application focus is biological networks, namely gene ...
PM10 Open Fugitive-Dust-Source computer model (for microcomputers). Model-Simulation
Elmore, L.
19900401T23:59:59.000Z
The computer programs in the package are based on the material presented in the document, Control of Open Fugitive Dust Sources, EPA-450/3-88-008. The programs on these diskettes serve two purposes. Their primary purpose is to facilitate the process of data entry, allowing the user not only to enter and verify the data which he/she possesses, but also to access additional data which might not be readily available. The second purpose is to calculate emission rates for the particular source category selected, using the data previously entered and verified. Software Description: The program is written in the BASIC programming language for implementation on an IBM PC/AT and compatible machines using the DOS 2.X or higher operating system. A hard disk with a 5 1/4 inch disk drive or two disk drives, and a wide-carriage printer (132-character) or a printer capable of printing text in condensed mode, are required. A text editor or word processing program capable of manipulating ASCII or DOS text files is optional.
Computational Model of Population Dynamics Based on the Cell Cycle and Local Interactions
Oprisan, Sorinel Adrian [Department of Psychology, University of New Orleans, 2000 Lakeshore Dr., New Orleans, LA 70148 (United States); Oprisan, Ana [Department of Physics, University of New Orleans, 2000 Lakeshore Dr., New Orleans, LA 70148 (United States)
20050331T23:59:59.000Z
Our study bridges cellular (mesoscopic) level interactions and global population (macroscopic) dynamics of carcinoma. The morphological differences and transitions between well- and smoothly-defined benign tumors and tentacular malignant tumors suggest a theoretical analysis of tumor invasion based on the development of mathematical models exhibiting bifurcations of spatial patterns in the density of tumor cells. Our computational model views the most representative and clinically relevant features of oncogenesis as a fight between two distinct subsystems: the immune system of the host and the neoplastic system. We implemented the neoplastic subsystem using a three-stage cell cycle: active, dormant, and necrosis. The second considered subsystem consists of cytotoxic active (effector) cells (EC), with a very broad phenotype ranging from NK cells to CTL cells, macrophages, etc. Based on extensive numerical simulations, we correlated the fractal dimensions for carcinoma, which could be obtained from tumor imaging, with the malignant stage. Our computational model was also able to simulate the effects of surgical, chemotherapeutical, and radiotherapeutical treatments.
Energy and agriculture in the Haitian economy: A computable general equilibrium model
Jones, D.W.; Wu, M.T.C.; Das, S.; Cohn, S.M.
19880201T23:59:59.000Z
This report documents a computable general equilibrium (CGE) model of the economy of Haiti, emphasizing energy use in agriculture. CGE models compare favorably with econometric models for developing countries in terms of their ability to take advantage of available data. The model of Haiti contains ten production sectors: manufacturing, services, transportation, electricity, rice, coffee, sugar cane, sugar refining, general agriculture, and fuelwood and charcoal. All production functions use functional forms which permit factor substitution. Consumption is specified for three income categories of consumers and a government sector with a linear expenditure system (LES) of demand equations. The economy exports four categories of products and imports six. Balanced trade and capital accounts are required for equilibrium. Total sectoral allocations of land, labor and capital are constrained to equal the quantities of these inputs in the Haitian economy as of the early 1980s. The model can be used to study the consequences of fiscal and trade policies and sectorally oriented productivity improvement policies. Guidance is offered regarding how to use the model to study economic growth and technological change. Limitations of the model are also pointed out, as well as user strategies which can lessen or work around some of those limitations. 19 refs.
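The linear expenditure system (LES) mentioned above has a simple closed form: each consumer first buys a subsistence bundle, then splits remaining ("supernumerary") income across goods in fixed budget shares. A minimal sketch with invented two-good numbers (not parameters from the Haiti model):

```python
def les_demand(prices, gammas, betas, income):
    """Linear expenditure system: x_i = gamma_i + beta_i * supernumerary
    income / p_i, where supernumerary income is what remains after buying
    the subsistence bundle gamma.  The betas (marginal budget shares)
    must sum to 1."""
    subsistence_cost = sum(p * g for p, g in zip(prices, gammas))
    supernumerary = income - subsistence_cost
    return [g + b * supernumerary / p
            for p, g, b in zip(prices, gammas, betas)]

# Illustrative two-good example (all numbers hypothetical):
x = les_demand(prices=[2.0, 4.0], gammas=[1.0, 0.5], betas=[0.6, 0.4],
               income=20.0)
```

Because the shares sum to one, the demands exhaust the budget exactly, which is what makes the LES convenient inside a general equilibrium model.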
Use of model calibration to achieve high accuracy in analysis of computer networks
Frogner, Bjorn; Guarro, Sergio; Scharf, Guy
20040511T23:59:59.000Z
A system and method are provided for creating a network performance prediction model, and calibrating the prediction model, through application of network load statistical analyses. The method includes characterizing the measured load on the network, which may include background load data obtained over time, and may further include directed load data representative of a transaction-level event. Probabilistic representations of load data are derived to characterize the statistical persistence of the network performance variability and to determine delays throughout the network. The probabilistic representations are applied to the network performance prediction model to adapt the model for accurate prediction of network performance. Certain embodiments of the method and system may be used for analysis of the performance of a distributed application characterized as data packet streams.
Supercomputing  Computational Engineering  ORNL
Reznik, Alexander N., Email: reznik@ipm.scinnov.ru [Institute for Physics of Microstructures of the Russian Academy of Sciences, GSP105, Nizhniy Novgorod 603950, Russia, and Lobachevsky State University of Nizhny Novgorod, 23, pr. Gagarina, N. Novgorod 603950 (Russian Federation)
20140825T23:59:59.000Z
An electrodynamic model is proposed for the tunneling microwave microscope with subnanometer space resolution as developed by Lee et al. [Appl. Phys. Lett. 97, 183111 (2010)]. Tip-sample impedance Z{sub a} was introduced and studied in the tunneling and non-tunneling regimes. At tunneling breakdown, the microwave current between probe and sample flows along two parallel channels characterized by impedances Z{sub p} and Z{sub t} that add up to form overall impedance Z{sub a}. Quantity Z{sub p} is the capacitive impedance determined by the near field of the probe and Z{sub t} is the impedance of the tunnel junction. By taking into account the distance dependences of effective tip radius r{sub 0}(z) and tunnel resistance R{sub t}(z) = Re[Z{sub t}(z)], we were able to explain the experimentally observed dependences of resonance frequency f{sub r}(z) and quality factor Q{sub L}(z) of the microscope. The obtained microwave resistance R{sub t}(z) and direct-current tunnel resistance R{sub t}{sup dc}(z) exhibit qualitatively similar behavior, although being largely different in both magnitude and the characteristic scale of height dependence. Interpretation of the microwave images of the atomic structure of test samples proved possible by taking into account the inductive component of tunnel impedance Im Z{sub t} = ωL{sub t}. The relation ωL{sub t}/R{sub t} ≈ 0.235 was obtained.
... and NCAR in the development of a comprehensive earth systems model. This model incorporates the most ... performance climate models. Through the addition of relevant physical processes, we are developing an earth systems modeling capability as well. Our collaborators in climate research include the National Center ...
Blough, E.; Russell, W.; Leach, J.W.
19900801T23:59:59.000Z
Computer models have been developed for evaluating conceptual designs of integrated coal gasification combined cycle power plants. An overall system model was developed for performing thermodynamic cycle analyses, and detailed models were developed for predicting performance characteristics of fixed bed coal gasifiers and hot gas clean up subsystem components. The overall system model performs mass and energy balances and does chemical equilibrium analyses to determine the effects of changes in operating conditions, or to evaluate proposed design changes. An existing plug flow model for fixed bed gasifiers known as the Wen II model was revised and updated. Also, a spread sheet model of zinc ferrite sulfur sorbent regeneration subsystem was developed. Parametric analyses were performed to determine how performance depends on variables in the system design. The work was done to support CRS Sirrine Incorporated in their study of standardized air blown coal gasifier gas turbine concepts.
Model computations of blue stragglers and W UMa-type stars in globular clusters
Stepien, Kazimierz
20150101T23:59:59.000Z
It was recently demonstrated that contact binaries occur in globular clusters (GCs) only immediately below the turnoff point and in the region of blue straggler stars (BSs). In addition, observations indicate that at least a significant fraction of BSs in these clusters was formed by the binary mass-transfer mechanism. The aim of our present investigation is to obtain and analyze a set of evolutionary models of cool, close detached binaries with a low metal abundance, which are characteristic of GCs. We computed the evolution of 975 models of initially detached, cool close binaries with different initial parameters. The models include mass exchange between components as well as mass and angular momentum loss due to the magnetized winds for very low-metallicity binaries with Z = 0.001. The models are interpreted in the context of existing data on contact binary and blue straggler members of GCs. The model parameters agree well with the observed positions of the GC contact binaries in the Hertzsprung-Russell diagra...
Building ventilation: A pressure airflow model computer generation and elements of validation
Boyer, H; Adelard, L; Mara, T A
20120101T23:59:59.000Z
The calculation of airflows is of great importance for detailed building thermal simulation codes, since these airflows frequently constitute an important thermal coupling between the building and the outside on one hand, and between the different thermal zones on the other. The driving effects of air movement, namely wind and thermal buoyancy, are briefly outlined, and we look closely at their coupling in the case of buildings, exploring the difficulties associated with large openings. Some numerical problems tied to solving the resulting nonlinear system are also covered. As part of a detailed simulation software package (CODYRUN), the numerical implementation of this airflow model is explained, with emphasis on the data organization and processing that allow the calculation of the airflows. Comparisons are then made between the model results and, on one hand, analytical expressions and, on the other, experimental measurements in the case of a collective dwelling.
Leishear, R.; Poirier, M.; Fowley, M.
20110526T23:59:59.000Z
The Salt Disposition Integration (SDI) portfolio of projects provides the infrastructure within existing Liquid Waste facilities to support the startup and long-term operation of the Salt Waste Processing Facility (SWPF). Within SDI, the Blend and Feed Project will equip existing waste tanks in the Tank Farms to serve as Blend Tanks, where 300,000-800,000 gallons of salt solution will be blended in 1.3-million-gallon tanks and qualified for use as feedstock for SWPF. Blending requires the miscible salt solutions from potentially multiple source tanks per batch to be well mixed without disturbing settled sludge solids that may be present in a Blend Tank. Disturbing solids may be problematic both from a feed quality perspective and from a process safety perspective, where hydrogen release from the sludge is a potential flammability concern. To develop the necessary technical basis for the design and operation of blending equipment, Savannah River National Laboratory (SRNL) completed scaled blending and transfer pump tests and computational fluid dynamics (CFD) modeling. A 94-inch-diameter pilot-scale blending tank, including tank internals such as the blending pump, transfer pump, removable cooling coils, and center column, was used in this research. The test tank represents a 1/10.85-scale version of an 85-foot-diameter, Type IIIA, nuclear waste tank that may be typical of Blend Tanks used in SDI. Specifically, Tank 50 was selected as the tank to be modeled by the SRR Project Engineering Manager. SRNL blending tests investigated various fixed-position, non-rotating, dual-nozzle pump designs, including a blending pump model provided by the blend pump vendor, Curtiss Wright (CW). Primary research goals were to assess blending times and to evaluate incipient sludge disturbance for waste tanks.
Incipient sludge disturbance was defined by SRR and SRNL as minor blending of settled sludge from the tank bottom into suspension due to blending pump operation, where the sludge level was shown to remain constant. To experimentally model the sludge layer, a very thin, pourable sludge simulant was conservatively used for all testing. To experimentally model the liquid supernate layer above the sludge in waste tanks, two salt solution simulants were used, which provided a bounding range of supernate properties. One solution was water (H{sub 2}O + NaOH), and the other was an inhibited, more viscous salt solution. The research performed and data obtained significantly advance the understanding of fluid mechanics, mixing theory, and CFD modeling for nuclear waste tanks by benchmarking CFD results against actual experimental data. This research significantly bridges the gap between previous CFD models and actual field experiences in real waste tanks. A finding of the 2009 DOE Slurry Retrieval, Pipeline Transport and Plugging, and Mixing Workshop was that CFD models were inadequate to assess blending processes in nuclear waste tanks. One recommendation from that Workshop was that a validation, or benchmarking, program be performed for CFD modeling versus experiment. This research provided experimental data to validate and correct CFD models as they apply to mixing and blending in nuclear waste tanks. Extensive SDI research was a significant step toward benchmarking and applying CFD modeling. This research showed that CFD models not only agreed with experiment, but demonstrated that the large variance in actual experimental data accounts for misunderstood discrepancies between CFD models and experiments. Having documented this finding, SRNL was able to provide correction factors to be used with CFD models to statistically bound full-scale CFD results.
Through the use of pilot scale tests performed for both types of pumps and available engineering literature, SRNL demonstrated how to effectively apply CFD results to salt batch mixing in full scale waste tanks. In other words, CFD models were in error prior to development of experimental correction factors determined during this research, which provided a technique to use CFD models fo
Nair, S.K.; Chambers, D.B.; Park, S.H.; Hoffman, F.O. [Senes Oak Ridge, Inc., TN (United States). Center for Risk Analysis]
19971101T23:59:59.000Z
The objective of this study is to examine the usefulness and effectiveness of currently existing models that simulate the release of uranium hexafluoride from UF{sub 6}-handling facilities, subsequent reactions of UF{sub 6} with atmospheric moisture, and the dispersion of UF{sub 6} and reaction products in the atmosphere. The study evaluates screening-level and detailed public-domain models that were specifically developed for UF{sub 6} and models that were originally developed for the treatment of dense gases but are applicable to UF{sub 6} release, reaction, and dispersion. The model evaluation process is divided into three specific tasks: model-component evaluation; applicability evaluation; and user interface and quality assurance and quality control (QA/QC) evaluation. Within the model-component evaluation process, a model's treatment of source term, thermodynamics, and atmospheric dispersion is considered and model predictions are compared with actual observations. Within the applicability evaluation process, a model's applicability to Integrated Safety Analysis, Emergency Response Planning, and Post-Accident Analysis, and to site-specific considerations, is assessed. Finally, within the user interface and QA/QC evaluation process, a model's user-friendliness, presence and clarity of documentation, ease of use, etc. are assessed, along with its handling of QA/QC. This document presents the complete methodology used in the evaluation process.
Diggavi, Suhas
... the measurement of velocity fields and pressures in a hydraulic turbine. The development of a new probing system ... Development of a measurement system able to determine the flow velocity field on models of hydraulic turbines. Christian Landry. Motivations & Objectives: The project was driven by the need to improve ...
Quantum Chaos & Quantum Computers
D. L. Shepelyansky
20000615T23:59:59.000Z
The standard generic quantum computer model is studied analytically and numerically, and the border for the emergence of quantum chaos, induced by imperfections and residual inter-qubit couplings, is determined. This phenomenon appears in an isolated quantum computer without any external decoherence. The onset of quantum chaos leads to quantum computer hardware melting, strong quantum entropy growth, and destruction of computer operability. The time scales for the development of quantum chaos and ergodicity are determined. In spite of the fact that this phenomenon is rather dangerous for quantum computing, it is shown that the quantum chaos border for inter-qubit coupling is exponentially larger than the energy level spacing between quantum computer eigenstates and drops only linearly with the number of qubits n. As a result, the ideal multi-qubit structure of the computer remains rather robust against imperfections. This opens a broad parameter region for a possible realization of a quantum computer. The obtained results are related to the recent studies of quantum chaos in such many-body systems as nuclei, complex atoms and molecules, finite Fermi systems, and quantum spin glass shards, which are also reviewed in the paper.
Determination of Retrofit Savings Using a Calibrated Building Energy Simulation Model
Reddy, S. N.; Hunn, B. D.; Hood, D. B.
19940101T23:59:59.000Z
... for whole-building electric, cooling, and heating energy use, and were compared with savings calculated using a regression model developed under the LoanSTAR program. Finally, to validate the model, post-retrofit DOE-2 results were compared with measured...
Study of behavior and determination of customer lifetime value (CLV) using Markov chain model
Permana, Dony, Email: donypermana@students.itb.ac.id [Statistics Research Division, Faculty of Mathematics and Natural Science, Bandung Institute of Technology, Indonesia and Statistics Study Program, Faculty of Mathematics and Natural Sciences, Padang State University (Indonesia); Indratno, Sapto Wahyu; Pasaribu, Udjianna S. [Statistics Research Division, Faculty of Mathematics and Natural Science, Bandung Institute of Technology (Indonesia)
20140324T23:59:59.000Z
Customer Lifetime Value (CLV) is a metric used in interactive marketing to help a company plan its budget for new customer acquisition and customer retention. CLV can also be used to segment customers for financial planning. A fairly new stochastic model for CLV uses a Markov chain, in which the customer retention probability and the new customer acquisition probability play important roles. This model was originally introduced by Pfeifer and Carraway in 2000 [1]. They introduced several CLV models, one of which involves only customers and former customers. In this paper we extend that model by adding the assumption of a transition from former customer back to customer. In the proposed model, the CLV is higher than the CLV obtained from the Pfeifer and Carraway model, but our model requires a longer convergence time.
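The extension described in the abstract (allowing a former customer to return) can be sketched with a two-state Markov chain. The transition probabilities, margins, and discount rate below are invented for illustration; they are not the paper's parameters.

```python
def clv(P, margins, discount, horizon):
    """Expected discounted value of a customer over `horizon` periods for
    a 2-state (customer / former-customer) Markov chain.  P is the 2x2
    transition matrix, `margins` the per-period net contribution of each
    state, `discount` the per-period rate d (values discounted by 1/(1+d)).
    A sketch in the Pfeifer-Carraway style, not the paper's exact model."""
    state = [1.0, 0.0]                      # start as a customer
    value, factor = 0.0, 1.0
    for _ in range(horizon):
        value += factor * sum(s * m for s, m in zip(state, margins))
        state = [sum(state[i] * P[i][j] for i in range(2)) for j in range(2)]
        factor /= 1.0 + discount
    return value

# Retention 0.7; former customers return with probability 0.2 (hypothetical).
P = [[0.7, 0.3],
     [0.2, 0.8]]
v = clv(P, margins=[50.0, 0.0], discount=0.1, horizon=3)
```

Setting the return probability P[1][0] to zero recovers the simpler customer/former-customer model and, as the abstract notes, yields a lower CLV.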
Lyon, Richard Harry, 1981
20040101T23:59:59.000Z
Correct modeling of the space environment, including radiative forces, is an important aspect of space situational awareness for geostationary (GEO) spacecraft. Solar radiation pressure has traditionally been modeled using ...
Shepelyansky, Dima
Quantum computing of quantum chaos in the kicked rotator model. B. Lévi, B. Georgeot, and D. L. Shepelyansky. ... model, a system that displays rich physical properties and enables the study of problems of quantum chaos ... are robust in the presence of imperfections. This implies that the algorithm can simulate the dynamics of quantum ...
Whitton, Mary C.
... or action. · Scalability: the system should require a human setup that is at most sublinear in the number ... time rendering system. Figure 1. A view of our 15-million-polygon model of a coal-fired power plant. The MMR: Massive Model Rendering System. The Challenge: Overview. Computer-aided design (CAD) applications ...
Hawick, Ken
Computational Science Technical Note CSTN-163. Transients in a Forest-Fire Simulation Model with Varying Combustion Neighbourhoods and Watercourse Firebreaks. K. A. Hawick, 2012. Forest and bush fires ... Schwabl forest-fire model is investigated with various localised combustion neighbourhoods. The transient ...
20040101T23:59:59.000Z
Mathematics and Computers in Simulation 65 (2004) 557-577. Parallel runs of a large air pollution ... 20 January 2004; accepted 21 January 2004. Abstract: Large-scale air pollution models can successfully ... The mathematical description of a large-scale air pollution model will be discussed in this paper. The principles ...
Winters, W.S.
19840101T23:59:59.000Z
An overview of the computer code TOPAZ (Transient One-Dimensional Pipe Flow Analyzer) is presented. TOPAZ models the flow of compressible and incompressible fluids through complex and arbitrary arrangements of pipes, valves, flow branches, and vessels. Heat transfer to and from the fluid containment structures (i.e., vessel and pipe walls) can also be modeled. This document includes discussions of the fluid flow equations and containment heat conduction equations. The modeling philosophy, numerical integration technique, code architecture, and methods for generating the computational mesh are also discussed.
Robinson, Mark R. (Albuquerque, NM); Ward, Kenneth J. (Albuquerque, NM); Eaton, Robert P. (Albuquerque, NM); Haaland, David M. (Albuquerque, NM)
19900101T23:59:59.000Z
The characteristics of a biological fluid sample having an analyte are determined from a model constructed from plural known biological fluid samples. The model is a function of the concentration of materials in the known fluid samples as a function of absorption of wideband infrared energy. The wideband infrared energy is coupled to the analyte-containing sample so there is differential absorption of the infrared energy as a function of the wavelength of the wideband infrared energy incident on the analyte-containing sample. The differential absorption causes intensity variations of the infrared energy incident on the analyte-containing sample as a function of sample wavelength of the energy, and concentration of the unknown analyte is determined from the thus-derived intensity variations of the infrared energy as a function of wavelength from the model absorption-versus-wavelength function.
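As a much-simplified sketch of the calibration idea (build a model from known samples relating absorption to concentration, then invert it for an unknown), here is a one-wavelength, one-analyte least-squares fit. The patent describes a multivariate absorption-versus-wavelength model; the single-wavelength Beer-Lambert form and the numbers below are illustrative assumptions.

```python
def fit_beer_lambert(absorbances, concentrations):
    """Least-squares slope (no intercept) relating absorbance to
    concentration at a single wavelength: a toy stand-in for the
    multivariate absorption-vs-wavelength calibration model."""
    num = sum(a * c for a, c in zip(absorbances, concentrations))
    den = sum(a * a for a in absorbances)
    return num / den

def predict(slope, absorbance):
    """Invert the calibration: concentration predicted from absorbance."""
    return slope * absorbance

# Known samples: synthetic data with absorbance 0.1 per unit concentration.
slope = fit_beer_lambert([0.1, 0.2, 0.4], [1.0, 2.0, 4.0])
c_unknown = predict(slope, 0.3)
```

In practice the model would use absorbances at many wavelengths and a multivariate regression, but the fit-then-invert structure is the same.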
Cerveny, Vlastislav
Ray tracing computations in the smoothed SEG/EAGE Salt Model. V. Bucha*, Department of Geophysics. ... in comparison with precise methods, e.g., finite differences or finite elements, are the speed of computation ... because ray tracing computations need smooth velocity macro-models (Bucha, Bulant & Klimes 2003) ...
Nair, S.K.; Chambers, D.B.; Park, S.H.; Radonjic, Z.R.; Coutts, P.T.; Lewis, C.J.; Hammonds, J.S.; Hoffman, F.O. [Senes Oak Ridge, Inc., TN (United States). Center for Risk Analysis
19971101T23:59:59.000Z
Three uranium hexafluoride (UF{sub 6})-specific models (HGSYSTEM/UF{sub 6}, Science Application International Corporation (SAIC), and RTM96); three dense-gas models (DEGADIS, SLAB, and the Chlorine Institute methodology); and one toxic chemical model (AFTOX) are evaluated on their capabilities to simulate the chemical reactions, thermodynamics, and atmospheric dispersion of UF{sub 6} released from accidents at nuclear fuel-cycle facilities, to support Integrated Safety Analysis, Emergency Response Planning, and Post-Accident Analysis. These models are also evaluated for user-friendliness and for quality assurance and quality control features, to ensure the validity and credibility of the results. Model performance evaluations are conducted for the three UF{sub 6}-specific models, using field data on releases of UF{sub 6} and other heavy gases. Predictions from the HGSYSTEM/UF{sub 6} and SAIC models are within an order of magnitude of the field data, but the SAIC model overpredicts beyond an order of magnitude for a few UF{sub 6}-specific data points. The RTM96 model provides overpredictions within a factor of 3 for all data points beyond 400 m from the source. For one data set, however, the RTM96 model severely underpredicts the observations within 200 m of the source. Outputs of the models are most sensitive to the meteorological parameters at large distances from the source and to certain source-specific and meteorological parameters at distances close to the source. Specific recommendations are being made to improve the applicability and usefulness of the three models and to choose a specific model to support the intended analyses. Guidance is also provided on the choice of input parameters for initial dilution, building wake effects, and distance to completion of UF{sub 6} reaction with water.
Kramer, Sharlotte Lorraine Bolyard; Scherzinger, William M.
20140901T23:59:59.000Z
The Virtual Fields Method (VFM) is an inverse method for constitutive model parameter identification that relies on full-field experimental measurements of displacements. VFM is an alternative to standard approaches that require several experiments of simple geometries to calibrate a constitutive model. VFM is one of several techniques that use full-field experimental data, including Finite Element Method Updating (FEMU) techniques, but VFM is computationally fast, not requiring iterative FEM analyses. This report describes the implementation and evaluation of VFM primarily for finite-deformation plasticity constitutive models. VFM was successfully implemented in MATLAB and evaluated using simulated FEM data that included representative experimental noise found in the Digital Image Correlation (DIC) optical technique that provides full-field displacement measurements. VFM was able to identify constitutive model parameters for the BCJ plasticity model even in the presence of simulated DIC noise, demonstrating VFM as a viable alternative inverse method. Further research is required before VFM can be adopted as a standard method for constitutive model parameter identification, but this study is a foundation for ongoing research at Sandia for improving constitutive model calibration.
Automatic Interface Generation for Enumerative Model ... Computer Science Annual Workshop 2006
Pace, Gordon J.
Sandro Spina, Dept. of Computer Science and A.I., New Computing Building, University of Malta, Malta, sandro.spina@um.edu.mt. Gordon Pace, Dept. of Computer Science and A.I., New Computing Building, University of Malta, Malta, gordon ... techniques. CSAW '06, CSAI Department, University of Malta. Processes can be described using some formal ...
Wachsman, E.D.; Duncan, K.L.; Ebrahimi, F.
20050127T23:59:59.000Z
The objectives of this project were to: provide fundamental relationships between SOFC performance and operating conditions and transient (time dependent) transport properties; extend models to thermomechanical stability, thermochemical stability, and multilayer structures; incorporate microstructural effects such as grain boundaries and grainsize distribution; experimentally verify models and devise strategies to obtain relevant material constants; and assemble software package for integration into SECA failure analysis models.
Emergency Response Equipment and Related Training: Airborne Radiological Computer System (Model II)
David P. Colton
20070228T23:59:59.000Z
The materials included in the Airborne Radiological Computer System, Model-II (ARCS-II) were assembled with several considerations in mind. First, the system was designed to measure and record airborne gamma radiation levels and the corresponding latitude and longitude coordinates, and to provide a first overview of the extent and severity of an accident's impact. Second, the portable system had to be light enough and durable enough that it could be mounted in an aircraft, ground vehicle, or watercraft. Third, the system had to control the collection and storage of the data, as well as provide a real-time display of the data collection results to the operator. The notebook computer and color graphics printer components of the system would only be used for analyzing and plotting the data. In essence, the provided equipment is composed of an acquisition system and an analysis system. The data can be transferred from the acquisition system to the analysis system at the end of the data collection or at some other agreeable time.
Michael V. Glazoff; Piyush Sabharwall; Akira Tokuhiro
20140901T23:59:59.000Z
An evaluation of the thermodynamic aspects of hot corrosion of the superalloys Haynes 242 and Hastelloy N in eutectic mixtures of KF and ZrF4 was carried out in support of the development of the Advanced High Temperature Reactor (AHTR). This work models the behavior of several superalloys, potential candidates for the AHTR, using the computational thermodynamics tool Thermo-Calc, leading to a thermodynamic description of the molten salt eutectic mixtures and, on that basis, a mechanistic prediction of hot corrosion. The results from these studies indicated that the principal mechanism of hot corrosion was chromium leaching for all of the superalloys described above. However, Hastelloy N displayed the best hot corrosion performance, which is not surprising given that it was developed originally to withstand the harsh conditions of a molten salt environment. The results obtained in this study provide confidence in the employed methods of computational thermodynamics and could be used in future alloy design efforts. Finally, several potential solutions to mitigate hot corrosion were proposed for further exploration, including coating development and controlled scaling of intermediate compounds in the KF-ZrF4 system.
Ghosh, Somnath
PREFACE: Recent times have seen a surge in computational modeling of materials and processes. New research initiatives like the Materials Genome Initiative (MGI) and Integrated Computational Materials Science & Engineering (ICMSE) are creating unprecedented opportunities for unraveling new...
Computer modeling of arc welds to predict effects of critical variables on weld penetration
Zacharia, T.; David, S.A.
19910101T23:59:59.000Z
In recent years, there have been several attempts to study the effect of critical variables on welding by computational modeling. It is widely recognized that temperature distributions and weld pool shapes are keys to quality weldments. It would be very useful to obtain relevant information about the thermal cycle experienced by the weld metal, the size and shape of the weld pool, the local solidification rates, temperature distributions in the heat-affected zone (HAZ), and associated phase transformations. The solution of moving boundary problems, such as weld pool fluid flow and heat transfer, that involve melting and/or solidification is inherently difficult because the location of the solid-liquid interface is not known a priori and must be obtained as part of the solution. Because of the nonlinearity of the governing equations, exact analytical solutions can be obtained only for a limited number of idealized cases. Therefore, considerable interest has been directed toward the use of numerical methods to obtain time-dependent solutions for theoretical models that describe the welding process. Numerical methods can be employed to predict the transient development of the weld pool as an integral part of the overall heat transfer conditions. The structure of the model allows each phenomenon to be addressed individually, thereby gaining more insight into their competing interactions. 19 refs., 6 figs., 1 tab.
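The kind of time-dependent numerical solution the abstract refers to can be illustrated, in heavily simplified form, by an explicit finite-difference scheme for 1D transient heat conduction. This is a generic textbook sketch with invented values, not the weld pool model (no fluid flow, no phase change, constant properties).

```python
import numpy as np

def step(T, alpha, dx, dt):
    """One explicit finite-difference step of dT/dt = alpha * d2T/dx2."""
    Tn = T.copy()
    Tn[1:-1] = T[1:-1] + alpha * dt / dx**2 * (T[2:] - 2*T[1:-1] + T[:-2])
    return Tn  # boundary values are held fixed

alpha, dx = 1e-5, 1e-3           # thermal diffusivity (m^2/s), grid spacing (m)
dt = 0.4 * dx**2 / alpha         # stable explicit time step (r = 0.4 < 0.5)
T = np.full(101, 300.0)          # initial temperature field (K)
T[0] = 1800.0                    # fixed hot boundary, e.g. near the heat source
for _ in range(500):
    T = step(T, alpha, dx, dt)
```

The transient temperature profile that results is the simplest analogue of the "thermal cycle" information the abstract describes; real weld models add convection, a moving source, and the solid-liquid interface tracking discussed above.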
Okuma, Tomohisa, Email: okuma@msic.med.osakacu.ac.jp; Matsuoka, Toshiyuki; Yamamoto, Akira; Oyama, Yoshimasa; Hamamoto, Shinichi [Osaka City University Graduate School of Medicine, Department of Radiology (Japan); Toyoshima, Masami [Kobe City Medical Center West Hospital, Department of Radiology (Japan); Nakamura, Kenji; Miki, Yukio [Osaka City University Graduate School of Medicine, Department of Radiology (Japan)
20100815T23:59:59.000Z
The purpose of this study was to retrospectively determine the local control rate and the factors contributing to local progression after computed tomography (CT)-guided radiofrequency ablation (RFA) for unresectable lung tumor. This study included 138 lung tumors in 72 patients (56 men and 16 women; age 70.0 ± 11.6 years (range 31-94); mean tumor size 2.1 ± 1.2 cm (range 0.2-9)) who underwent lung RFA between June 2000 and May 2009. Mean follow-up periods for patients and tumors were 14 and 12 months, respectively. The local progression-free rate and survival rate were calculated to determine the contributing factors to local progression. During follow-up, 44 of 138 (32%) lung tumors showed local progression. The 1-, 2-, 3-, and 5-year overall local control rates were 61, 57, 57, and 38%, respectively. The risk factors for local progression were age (≥70 years), tumor size (≥2 cm), sex (male), and no achievement of roll-off during RFA (P < 0.05). Multivariate analysis identified tumor size ≥2 cm as the only independent factor for local progression (P = 0.003). For tumors <2 cm, 17 of 68 (25%) showed local progression, and the 1-, 2-, and 3-year overall local control rates were 77, 73, and 73%, respectively. Multivariate analysis identified age ≥70 years as an independent determinant of local progression for tumors <2 cm in diameter (P = 0.011). The present study showed that 32% of lung tumors developed local progression after CT-guided RFA. The significant risk factor for local progression after RFA for lung tumors was tumor size ≥2 cm.
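The reported 1-, 2-, and 3-year local control rates are survival-curve quantities. A minimal sketch of a Kaplan-Meier-style progression-free estimate is shown below; a common way to obtain such rates, though the paper's exact statistical procedure is not given here, and the follow-up data are invented (untied event times, so no tie handling is needed).

```python
def km_curve(times, events):
    """Kaplan-Meier product-limit estimate.

    times:  follow-up in months, one entry per tumor
    events: 1 = local progression observed, 0 = censored
    Returns a list of (time, progression-free fraction) pairs.
    """
    order = sorted(range(len(times)), key=lambda i: times[i])
    at_risk, surv, curve = len(times), 1.0, []
    for i in order:
        if events[i]:                       # an observed progression
            surv *= (at_risk - 1) / at_risk
        curve.append((times[i], surv))      # censored cases leave surv unchanged
        at_risk -= 1                        # either way, one fewer at risk
    return curve

# Invented follow-up data for eight tumors (months, progression flag)
times  = [3, 5, 8, 12, 14, 20, 24, 30]
events = [1, 0, 1,  1,  0,  0,  1,  0]
curve = km_curve(times, events)
```

Reading the curve at 12, 24, and 36 months would give the kind of 1-, 2-, and 3-year control rates quoted in the abstract.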
Anderson, C.A.
19810101T23:59:59.000Z
Six years ago the Reactor Safety Research Division of the Nuclear Regulatory Commission (NRC) approached the Los Alamos National Laboratory to develop a comprehensive concrete structural analysis code to predict the static and dynamic behavior of Prestressed Concrete Reactor Vessels (PCRVs) that serve as the containment structure of a High-Temperature Gas-Cooled Reactor. The PCRV is a complex concrete structure that must be modeled in three dimensions and possesses other complicating features, such as a steel liner for the reactor cavity and woven cables embedded vertically in the PCRV and wound circumferentially on its outside. The cables, or tendons, are used for prestressing the reactor vessel. In addition to developing the computational capability to predict inelastic three-dimensional concrete structural behavior, the code response was verified against documented experiments on concrete structural behavior. This code development/verification effort is described.
Kaper, Tasso J., Email: tasso@bu.edu; Kramer, Mark A., Email: mak@bu.edu [Department of Mathematics and Statistics, Boston University, Boston, Massachusetts 02215 (United States); Rotstein, Horacio G., Email: horacio@njit.edu [Department of Mathematical Sciences, New Jersey Institute of Technology, Newark, New Jersey 07102 (United States)
20131215T23:59:59.000Z
Rhythmic neuronal oscillations across a broad range of frequencies, as well as spatiotemporal phenomena such as waves and bumps, have been observed in various areas of the brain and proposed as critical to brain function. While there is a long and distinguished history of studying rhythms in nerve cells and neuronal networks in healthy organisms, the association and analysis of rhythms in disease are more recent developments. Indeed, it is now thought that certain aspects of diseases of the nervous system, such as epilepsy, schizophrenia, Parkinson's, and sleep disorders, are associated with transitions or disruptions of neurological rhythms. This focus issue brings together articles presenting modeling, computational, analytical, and experimental perspectives on rhythms, and on the dynamic transitions between them, that are associated with various diseases.
PierreLuc DallaireDemers; Frank K. Wilhelm
20150818T23:59:59.000Z
Many phenomena of strongly correlated materials are encapsulated in the Fermi-Hubbard model, whose thermodynamical properties can be computed from its grand canonical potential according to standard procedures. In general, there is no closed-form solution for lattices of more than one spatial dimension, but solutions can be approximated with cluster perturbation theory. To model long-range effects such as order parameters, a powerful method to compute the cluster's Green's function consists of finding its self-energy through a variational principle of the grand canonical potential. This opens the possibility of studying various phase transitions at finite temperature in the Fermi-Hubbard model. However, a classical cluster solver quickly hits an exponential wall in the memory (or computation time) required to store the computation variables. Here it is shown theoretically that the cluster solver can be mapped to a subroutine on a quantum computer whose quantum memory scales as the number of orbitals in the simulated cluster. A quantum computer with a few tens of qubits could therefore simulate the thermodynamical properties of complex fermionic lattices inaccessible to classical supercomputers.
The Use of Computational Human Performance Modeling as a Task Analysis Tool
Jacques Hugo; David Gertman
20120701T23:59:59.000Z
During a review of the Advanced Test Reactor safety basis at the Idaho National Laboratory, human factors engineers identified ergonomic and human reliability risks involving the inadvertent exposure of a fuel element to the air during manual fuel movement and inspection in the canal. There were clear indications that these risks increased the probability of human error and possible severe physical outcomes to the operator. In response to this concern, a detailed study was conducted to determine the probability of the inadvertent exposure of a fuel element. Due to practical and safety constraints, the task network analysis technique was employed to study the work procedures at the canal. Discrete-event simulation software was used to model the entire procedure as well as the salient physical attributes of the task environment, such as distances walked, the effect of dropped tools, the effect of hazardous body postures, and physical exertion due to strenuous tool handling. The model also allowed analysis of the effect of cognitive processes such as visual perception demands, auditory information, and verbal communication. The model made it possible to obtain reliable predictions of operator performance and workload estimates. It was also found that operator workload, as well as the probability of human error in the fuel inspection and transfer task, was influenced by the concurrent nature of certain phases of the task and the associated demand on cognitive and physical resources. More importantly, it was possible to determine with reasonable accuracy the stages, as well as the physical locations in the fuel handling task, where operators would be most at risk of losing their balance and falling into the canal. The model also provided sufficient information for a human reliability analysis, which indicated that the postulated fuel exposure accident was less than credible.
Michigan, University of
, and the IZMEM/DMSP model was constructed using ground-magnetometer measurements and calibrated against DMSP ion... These convection models must accommodate a raft of geophysical conditions, particularly IMF orientation, solar... and Maynard, 1987; Rich and Maynard, 1989) and supported the two-cell convection pattern predicted...
Aalborg Universitet: Determining HT-PEM Electrode Parameters Using a Mechanistic Impedance Model
Berning, Torsten
International Conference on Medium and High Temperature Proton Exchange Membrane Fuel Cells, Copenhagen, Denmark. ...realistic parameters. The fuel cell model is implemented using a 2D finite-volume approach, taking into account... at the School of Chemical Engineering and Advanced Materials, Newcastle University. The membrane was PBI-doped...
Goodarz Ahmadi
20020701T23:59:59.000Z
In this project, a computational modeling approach for analyzing flow and ash transport and deposition in filter vessels was developed. An Eulerian-Lagrangian formulation for studying the hot-gas filtration process was established. The approach uses an Eulerian analysis of gas flows in the filter vessel and makes use of Lagrangian trajectory analysis for particle transport and deposition. Particular attention was given to the Siemens-Westinghouse filter vessel at the Power System Development Facility in Wilsonville, Alabama. Details of the hot-gas flow in this tangential-flow filter vessel are evaluated. The simulation results show that the rapidly rotating flow in the spacing between the shroud and the vessel refractory acts as a cyclone, removing a large fraction of the larger particles from the gas stream. Several alternate designs for the filter vessel are considered: a vessel with a short shroud, a filter vessel with no shroud, and a vessel with a deflector plate. The hot-gas flow and particle transport and deposition in the various vessels are evaluated, and the deposition patterns are compared. It is shown that certain filter vessel designs allow the large particles to remain suspended in the gas stream and to deposit on the filters. The presence of the larger particles in the filter cake lowers its mechanical strength, allowing the back-pulse process to more easily remove the filter cake. A laboratory-scale filter vessel for testing the cold-flow condition was designed and fabricated. A laser-based flow visualization technique was used, and the gas flow condition in the laboratory-scale vessel was experimentally studied. A computer model for the experimental vessel was also developed, and the gas flow and particle transport patterns are evaluated.
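The Lagrangian half of such an Eulerian-Lagrangian formulation can be sketched as the integration of one particle's equation of motion with Stokes drag in a prescribed gas velocity field. The field, particle response time, and all numbers below are illustrative assumptions, not the project's vessel geometry or code.

```python
import numpy as np

def track(x0, v0, u_gas, tau_p, dt, steps):
    """Integrate one particle trajectory with Stokes drag.

    u_gas(x): gas velocity at position x (the "Eulerian" input)
    tau_p:    particle response time (s); small tau_p follows the gas closely
    """
    x, v = np.array(x0, float), np.array(v0, float)
    path = [x.copy()]
    for _ in range(steps):
        v += dt * (u_gas(x) - v) / tau_p    # drag relaxes v toward the gas
        x += dt * v
        path.append(x.copy())
    return np.array(path)

u_gas = lambda x: np.array([0.0, -1.0])     # uniform 1 m/s downward gas flow
path = track([0.0, 0.0], [0.5, 0.0], u_gas, tau_p=0.05, dt=0.005, steps=400)
```

With a spatially varying `u_gas` taken from a CFD solution of the vessel, the same loop yields the deposition trajectories the abstract describes; heavier particles (larger `tau_p`) deviate more from the gas streamlines, which is why the cyclonic region can separate them out.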
Donna Post Guillen; Tami Grimmett; Anastasia M. Gribik; Steven P. Antal
20100901T23:59:59.000Z
The Hybrid Energy Systems Testing (HYTEST) Laboratory is being established at the Idaho National Laboratory to develop and test hybrid energy systems, with the principal objective of safeguarding U.S. energy security by reducing dependence on foreign petroleum. A central component of HYTEST is the slurry bubble column reactor (SBCR), in which gas-to-liquid reactions will be performed to synthesize transportation fuels using the Fischer-Tropsch (FT) process. SBCRs are cylindrical vessels in which gaseous reactants (for example, synthesis gas or syngas) are sparged into a slurry of liquid reaction products and finely dispersed catalyst particles. The catalyst particles are suspended in the slurry by the rising gas bubbles and serve to promote the chemical reaction that converts syngas to a spectrum of longer-chain hydrocarbon products, which can be upgraded to gasoline, diesel, or jet fuel. These SBCRs operate in the churn-turbulent flow regime, which is characterized by complex hydrodynamics, coupled with reacting flow chemistry and heat transfer, that affect reactor performance. The purpose of this work is to develop a computational multiphase fluid dynamics (CMFD) model to aid in understanding the physicochemical processes occurring in the SBCR. Our team is developing a robust methodology to couple reaction kinetics and mass transfer into a four-field model (consisting of the bulk liquid, small bubbles, large bubbles, and solid catalyst particles) that includes twelve species: (1) CO reactant, (2) H2 reactant, (3) hydrocarbon product, and (4) H2O product in small bubbles, large bubbles, and the bulk fluid. Properties of the hydrocarbon product were specified by vapor-liquid equilibrium calculations. The absorption and kinetic models, specifically changes in species concentrations, have been incorporated into the mass continuity equation. The reaction rate is determined based on the macrokinetic model for a cobalt catalyst developed by Yates and Satterfield [1].
The model includes heat generation due to the exothermic chemical reaction, as well as heat removal from a constant temperature heat exchanger. Results of the CMFD simulations (similar to those shown in Figure 1) will be presented.
Computer Modeling VRF Heat Pumps in Commercial Buildings using EnergyPlus
Raustad, Richard
20130601T23:59:59.000Z
Variable Refrigerant Flow (VRF) heat pumps are increasingly used in commercial buildings in the United States. Monitored energy use of field installations has shown, in some cases, savings exceeding 30% compared to conventional heating, ventilating, and air-conditioning (HVAC) systems. A simulation study was conducted to identify the installation or operational characteristics that lead to energy savings for VRF systems. The study used the Department of Energy EnergyPlus building simulation software and four reference building models. Computer simulations were performed in eight U.S. climate zones. The baseline reference HVAC system incorporated packaged single-zone direct-expansion cooling with gas heating (PSZ-AC) or variable-air-volume systems (VAV with reheat). An alternate baseline HVAC system using a heat pump (PSZ-HP) was included for some buildings to directly compare gas and electric heating results. These baseline systems were compared to a VRF heat pump model to identify differences in energy use. VRF systems combine multiple indoor units with one or more outdoor unit(s). These systems move refrigerant between the outdoor and indoor units, which eliminates the need for ductwork in most cases. Since many applications install ductwork in unconditioned spaces, this leads to installation differences between VRF systems and conventional HVAC systems. To characterize installation differences, a duct heat gain model was included to identify the energy impacts of installing ducts in unconditioned spaces. The configuration of variable refrigerant flow heat pumps will ultimately eliminate or significantly reduce energy use due to duct heat transfer. Fan energy is also studied to identify savings associated with non-ducted VRF terminal units. VRF systems incorporate a variable-speed compressor, which may lead to operational differences compared to single-speed compression systems.
To characterize operational differences, the computer model performance curves used to simulate cooling operation are also evaluated. The information in this paper is intended to provide a relative difference in system energy use and compare various installation practices that can impact performance. Comparative results of VRF versus conventional HVAC systems include energy use differences due to duct location, differences in fan energy when ducts are eliminated, and differences associated with electric versus fossil fuel type heating systems.
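The duct heat gain accounting mentioned above can be sketched, in its crudest steady-state form, as a UA-deltaT pickup on a supply duct running through an unconditioned space. This is a generic illustration with invented numbers, not the EnergyPlus duct model or the study's inputs.

```python
def duct_heat_gain(ua, t_ambient, t_supply):
    """Steady-state heat gain (W) of a supply duct in a hot space.

    ua:        overall duct conductance-area product (W/K)
    t_ambient: temperature of the unconditioned space (C)
    t_supply:  supply air temperature inside the duct (C)
    """
    return ua * (t_ambient - t_supply)

q_cool = 10_000.0   # W of cooling delivered at the coil (illustrative)
q_duct = duct_heat_gain(ua=25.0, t_ambient=40.0, t_supply=13.0)
loss_fraction = q_duct / q_cool   # share of coil output lost to the attic
```

Even this simple balance shows why a non-ducted VRF terminal unit avoids a penalty that a conventional ducted system placed in an unconditioned attic must pay.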
Johnson, J. D. (Prostat, Mesa, AZ); Oberkampf, William Louis; Helton, Jon Craig (Arizona State University, Tempe, AZ); Storlie, Curtis B. (North Carolina State University, Raleigh, NC)
20061001T23:59:59.000Z
Evidence theory provides an alternative to probability theory for the representation of epistemic uncertainty in model predictions that derives from epistemic uncertainty in model inputs, where the descriptor epistemic indicates uncertainty that derives from a lack of knowledge with respect to the appropriate values to use for various inputs to the model. The potential benefit, and hence appeal, of evidence theory is that it allows a less restrictive specification of uncertainty than is possible within the axiomatic structure on which probability theory is based. Unfortunately, the propagation of an evidence theory representation for uncertainty through a model is more computationally demanding than the propagation of a probabilistic representation, and this difficulty constitutes a serious obstacle to the use of evidence theory in the representation of uncertainty in predictions obtained from computationally intensive models. This presentation describes and illustrates a sampling-based computational strategy for the representation of epistemic uncertainty in model predictions with evidence theory. Preliminary trials indicate that the presented strategy can be used to propagate uncertainty representations based on evidence theory in analysis situations where naive sampling-based (i.e., unsophisticated Monte Carlo) procedures are impracticable due to computational cost.
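A minimal sketch of sampling-based evidence-theory propagation (illustrative only, not the authors' strategy): the uncertain input is described by focal intervals with basic probability assignments (BPAs), each interval is pushed through the model by sampling, and belief and plausibility of an output condition are accumulated over the focal elements.

```python
import numpy as np

def bel_pl(focal, masses, model, cond, n=1000, seed=0):
    """Belief/plausibility of cond(model(x)) for interval-valued evidence.

    focal:  list of (lo, hi) input intervals (focal elements)
    masses: BPA mass of each focal element (sums to 1)
    cond:   vectorized predicate on model outputs
    """
    rng = np.random.default_rng(seed)
    bel = pl = 0.0
    for (lo, hi), m in zip(focal, masses):
        ys = model(rng.uniform(lo, hi, n))   # sample the image of this interval
        hit = cond(ys)
        if hit.all():   # condition holds everywhere on the image: adds to belief
            bel += m
        if hit.any():   # condition holds somewhere: adds to plausibility
            pl += m
    return bel, pl

model = lambda x: x**2
focal = [(0.0, 1.0), (0.5, 2.0), (1.5, 3.0)]
masses = [0.5, 0.3, 0.2]
bel, pl = bel_pl(focal, masses, model, lambda y: y <= 1.0)
```

The gap between belief and plausibility (here 0.5 vs 0.8) is the less restrictive uncertainty statement the abstract refers to; a single probability distribution would collapse it to one number.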
Computer determination of bacterial volume
Griffis, David William
19780101T23:59:59.000Z
Shadmehr, Reza
JoAnn Kluzik, Jörn Diedrichsen, Reza Shadmehr, and Amy J. Bastian. "...learn an internal model of the tool or adapt the model of our arm?" J Neurophysiol 100: 1455-1464, 2008. We considered a well-studied adaptation task in which people made goal-directed reaching...
Determination of the proper operating range for the CAFCA IIB fuel cycle model
Warburton, Jamie (Jamie L.)
20070101T23:59:59.000Z
The fuel cycle simulation tool, CAFCA II was previously modified to produce the most recent version, CAFCA IIB. The code tracks the mass distribution of transuranics in the fuel cycle in one model and also projects costs ...
Technosocial Modeling for Determining the Status and Nature of a State’s Nuclear Activities
Gastelum, Zoe N.; Harvey, Julia B.
20090925T23:59:59.000Z
In The International Atomic Energy Agency State Evaluation Process: The Role of Information Analysis in Reaching Safeguards Conclusions (Mathews et al. 2008), several examples of nonproliferation models using analytical software were developed that may assist the IAEA with collecting, visualizing, analyzing, and reporting information in support of the State Evaluation Process. This paper focuses on one of the examples: a set of models developed in the Proactive Scenario Production, Evidence Collection, and Testing (ProSPECT) software that evaluates the status and nature of a state's nuclear activities. The models use three distinct subject areas to perform this assessment: the presence of nuclear activities, the consistency of those nuclear activities with national nuclear energy goals, and the geopolitical context in which those nuclear activities are taking place. As a proof of concept for the models, a crude case study was performed. The study, which attempted to evaluate the nuclear activities taking place in Syria prior to September 2007, yielded illustrative, yet inconclusive, results. Due to the inconclusive nature of the case study results, changes that may improve the models' efficiency and accuracy are proposed.
Gross, Louis J.
Computational Ecology / Bioinformatics. The biological sciences have become increasingly quantitative, with entirely new subdisciplines having developed recently which apply modern computational methods to basic... (Computational Biology, Spring 1998; Text: Models in Biology: Mathematics...; Location: TBA; Section Number: 59692)
Harting, Jens
Steering in computational science: mesoscale modelling and simulation. J. Chin, J. Harting, S. Jha... steering for high-performance computing applications. Lattice-Boltzmann mesoscale fluid simulations... there is currently considerable interest in mesoscale models. These models coarse-grain most of the atomic...
McConnell, Joshua B
20000101T23:59:59.000Z
to produce a temperature profile in the liner thickness. An analytical stress model, using the results of the derived equations and the numerical thermal model, was constructed to determine the magnitude of the stresses the liner is subjected to after...
Computer modelling of the reduction of rare earth dopants in barium aluminate
Rezende, Marcos V. dos S; Valerio, Mario E.G. [Department of Physics, Federal University of Sergipe, 49100000 Sao Cristovao, SE (Brazil); Jackson, Robert A., Email: r.a.jackson@chem.keele.ac.uk [School of Physical and Geographical Sciences, Keele University, Keele, Staffordshire ST5 5BG (United Kingdom)
20110815T23:59:59.000Z
Long-lasting phosphorescence in barium aluminates can be achieved by doping with rare earth ions in divalent charge states. The rare earth ions are initially in a trivalent charge state, but are reduced to a divalent charge state before being doped into the material. In this paper, the reduction of trivalent rare earth ions in the BaAl₂O₄ lattice is studied by computer simulation, with the energetics of the whole reduction and doping process being modelled by two methods: one based on single-ion doping and one which allows dopant concentrations to be taken into account. A range of different reduction schemes are considered and the most energetically favourable schemes identified. Graphical abstract: the doping and subsequent reduction of a rare earth ion into the barium aluminate lattice. Highlights: the doping of barium aluminate with rare earth ions reduced in a range of atmospheres has been modelled; the overall solution energy for the doping process for each ion in each reducing atmosphere is calculated using two methods; the lowest-energy reduction process is predicted and compared with experimental results.
Wind Turbine Modeling for Computational Fluid Dynamics: December 2010  December 2012
Tossas, L. A. M.; Leonardi, S.
20130701T23:59:59.000Z
With the shortage of fossil fuel and increasing environmental awareness, wind energy is becoming more and more important. As the market for wind energy grows, wind turbines and wind farms are becoming larger. Current utility-scale turbines extend a significant distance into the atmospheric boundary layer. Therefore, the interaction between the atmospheric boundary layer and the turbines and their wakes needs to be better understood. The turbulent wakes of upstream turbines affect the flow field of the turbines behind them, decreasing power production and increasing mechanical loading. With a better understanding of this type of flow, wind farm developers could plan better-performing, less maintenance-intensive wind farms. Simulating this flow using computational fluid dynamics is one important way to gain a better understanding of wind farm flows. In this study, we compare the performance of actuator disc and actuator line models in producing wind turbine wakes and the wake-turbine interaction between multiple turbines. We also examine parameters that affect the performance of these models, such as grid resolution, the use of a tip-loss correction, and the way in which the turbine force is projected onto the flow field.
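The 1D momentum theory underlying an actuator disc model can be sketched as follows: the turbine is replaced by a uniform thrust over the rotor disc, parameterized by the axial induction factor a. This is the standard textbook relation, not the report's CFD implementation, which additionally projects this force onto the flow-field grid; the rotor size and wind speed below are illustrative.

```python
import math

def actuator_disc(a, rho=1.225, U=8.0, R=50.0):
    """Thrust/power of an ideal actuator disc from 1D momentum theory.

    a:   axial induction factor (velocity deficit at the disc / freestream)
    rho: air density (kg/m^3), U: freestream wind speed (m/s), R: rotor radius (m)
    """
    A = math.pi * R**2
    Ct = 4 * a * (1 - a)            # thrust coefficient
    Cp = 4 * a * (1 - a)**2         # power coefficient
    thrust = 0.5 * rho * A * U**2 * Ct   # N, applied to the flow as a body force
    power = 0.5 * rho * A * U**3 * Cp    # W
    return Ct, Cp, thrust, power

Ct, Cp, T, P = actuator_disc(a=1/3)  # a = 1/3 gives the Betz optimum Cp = 16/27
```

Actuator line models refine this by distributing blade forces along rotating lines, which is why the two models produce different near-wake structures, one of the comparisons the study makes.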
Development of Computational Tools for Metabolic Model Curation, Flux Elucidation and Strain Design
Maranas, Costas D
20120521T23:59:59.000Z
An overarching goal of the Department of Energy's mission is the efficient deployment and engineering of microbial and plant systems to enable biomass conversion in pursuit of high-energy-density liquid biofuels. This has spurred the pace at which new organisms are sequenced and annotated. This torrent of genomic information has opened the door to understanding metabolism not just in skeletal pathways and a handful of microorganisms, but through truly genome-scale reconstructions derived for hundreds of microbes and plants. Understanding and redirecting metabolism is crucial because metabolic fluxes are unique descriptors of cellular physiology that directly assess the current cellular state and quantify the effect of genetic engineering interventions. At the same time, however, trying to keep pace with the rate of genomic data generation has ushered in a number of modeling and computational challenges related to (i) the automated assembly, testing, and correction of genome-scale metabolic models, (ii) metabolic flux elucidation using labeled isotopes, and (iii) comprehensive identification of engineering interventions leading to the desired metabolism redirection.
Kilina, Svetlana; Yarotski, Dzmitry A.; Talin, A. Alec; Tretiak, Sergei; Taylor, Antoinette J.; Balatsky, Alexander V.
20110101T23:59:59.000Z
We present a combined approach that relies on computational simulations and scanning tunneling microscopy (STM) measurements to reveal morphological properties and stability criteria of carbon nanotube-DNA (CNT-DNA) constructs. Application of STM allows direct observation of very stable CNT-DNA hybrid structures with a well-defined DNA wrapping angle of 63.4° and a coiling period of 3.3 nm. Using force field simulations, we determine how the DNA-CNT binding energy depends on the sequence and binding geometry of a single-strand DNA. This dependence allows us to quantitatively characterize the stability of a hybrid structure with optimal π-stacking between DNA nucleotides and the tube surface, and to better interpret the STM data. Our simulations clearly demonstrate the existence of a very stable DNA binding geometry for the (6,5) CNT, as evidenced by the presence of a well-defined minimum in the binding energy as a function of the angle between the DNA strand and the nanotube chiral vector. This novel approach demonstrates the feasibility of CNT-DNA geometry studies with subnanometer resolution and paves the way towards complete characterization of the structural and electronic properties of drug-delivering systems based on DNA-CNT hybrids as a function of DNA sequence and nanotube chirality.
Modeling a Reconfigurable System for Computing the FFT in Place via Rewriting-Logic
AyalaRincón, Mauricio
hardware implementation of the Fast Fourier Transform (FFT) using rewriting-logic. It is shown... on general-purpose processors, reconfigurable computing delivers more processing power due... electronic market. There are several taxonomies applied to reconfigurable computing. Concerning the specific...
Wu, K.T.; Li, B.; Payne, R.
19920601T23:59:59.000Z
This manual presents and describes a package of computer models uniquely developed for boiler thermal performance and emissions evaluations by the Energy and Environmental Research Corporation. The model package permits boiler heat transfer, fuel combustion, and pollutant emissions predictions related to a number of practical boiler operations such as fuel-switching, fuel cofiring, and reburning NOx reductions. The models are adaptable to most boiler/combustor designs and can handle burner fuels in solid, liquid, gaseous, and slurried forms. The models are also capable of performing predictions for combustion applications involving gaseous-fuel reburning and cofiring of solid/gas, liquid/gas, gas/gas, and slurry/gas fuels. The model package is named BPACK (Boiler Package) and consists of six computer codes, of which three are main computational codes and the other three are input codes. The three main codes are: (a) a two-dimensional furnace heat-transfer and combustion code; (b) a detailed chemical-kinetics code; and (c) a boiler convective passage code. This user's manual presents the computer model package in two volumes. Volume 1 describes in detail a number of topics of general user interest, including the physical and chemical basis of the models and a complete description of the model applicability, options, input/output, and the default inputs. Volume 2 contains a detailed record of worked examples to assist users in applying the models and to illustrate the versatility of the codes.
Bergman, Keren
System-level simulations and results for rack-scale photonic interconnection networks for high-performance computing, at the scale of high-performance computer clusters and warehouse-scale data centers, addressing the power consumption [3], latency [4] and bandwidth challenges [5] of high-performance computing (HPC)...
Stockman, Mark (Georgia State University Research Foundation); Gray, Steven (Argonne National Laboratory)
20140221T23:59:59.000Z
The program is directed toward development of new computational approaches to photoprocesses in nanostructures whose geometry and composition are tailored to obtain desirable optical responses. The emphasis of this specific program is on the development of computational methods and prediction and computational theory of new phenomena of optical energy transfer and transformation on the extreme nanoscale (down to a few nanometers).
Pannala, S; D'Azevedo, E; Zacharia, T
20020226T23:59:59.000Z
The goal of the radiation modeling effort was to develop and implement a radiation algorithm that is fast and accurate for the underhood environment. As part of this CRADA, a net-radiation model was chosen to simulate radiative heat transfer in the underhood of a car. The assumptions (diffuse-gray and uniform radiative properties in each element) reduce the problem tremendously, and all the view factors for thermal radiation calculations can be calculated once and for all at the beginning of the simulation. The cost of online integration of heat exchanges due to radiation is found to be less than 15% of the baseline CHAD code and thus very manageable. The offline view factor calculation is constructed to be very modular; it has been completely integrated to read CHAD grid files, and the output from this code can be read into the latest version of CHAD. Further integration has to be performed to accomplish the same with STAR-CD. The main outcome of this effort is a highly scalable and portable simulation capability to model view factors for the underhood environment (e.g., a view factor calculation that took 14 hours on a single processor took only 14 minutes on 64 processors). The code has also been validated using a simple test case for which analytical solutions are available. This simulation capability gives underhood designers in the automotive companies the ability to account for thermal radiation, which is usually critical in the underhood environment and also turns out to be one of the most computationally expensive components of underhood simulations. This report starts with the original work plan as elucidated in the proposal in section B. This is followed by the technical work plan to accomplish the goals of the project in section C. In section D, background to the current work is provided, with references to the previous efforts this project leverages. The results are discussed in section E. This report ends with conclusions and future scope of work in section F.
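The net-radiation (radiosity) method described in this abstract can be illustrated with a minimal sketch for a diffuse-gray enclosure (my own illustration, not the CHAD implementation; the emissivities, view factors and temperatures below are hypothetical):

```python
import numpy as np

def radiosity_exchange(eps, F, T):
    """Solve the net-radiation (radiosity) equations for a diffuse-gray
    enclosure: J_i = eps_i*sigma*T_i^4 + (1 - eps_i) * sum_j F_ij * J_j,
    then return the net heat flux leaving each surface,
    q_i = eps_i/(1 - eps_i) * (sigma*T_i^4 - J_i)."""
    sigma = 5.670374419e-8                  # Stefan-Boltzmann constant, W/m^2 K^4
    eps = np.asarray(eps, dtype=float)
    T = np.asarray(T, dtype=float)
    Eb = sigma * T**4                       # blackbody emissive power of each surface
    A = np.eye(len(eps)) - (1.0 - eps)[:, None] * F   # (I - (1-eps)F) J = eps*Eb
    J = np.linalg.solve(A, eps * Eb)        # radiosities
    return eps / (1.0 - eps) * (Eb - J)     # net flux, W/m^2

# Two infinite parallel plates (view factors F12 = F21 = 1), hypothetical data
q = radiosity_exchange([0.8, 0.8],
                       np.array([[0.0, 1.0], [1.0, 0.0]]),
                       [600.0, 300.0])
```

For the two-plate case this reproduces the analytic result q = sigma*(T1^4 - T2^4)/(1/eps1 + 1/eps2 - 1), and energy conservation gives q[0] = -q[1].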
Bentz, Dale P.
REFERENCE: Bentz, D. P. and Stutzman, P. E., "SEM Analysis and Computer Modelling of Hydration of Portland Cement Particles," Petrography of Cementitious Materials, Sharon M. DeHayes and David Stark, Eds., American Society for Testing and Materials, Philadelphia, 1994. ABSTRACT: Characterization of cement...
Gomez, Hector
Experimental and computational modeling of oscillatory flow within a baffled tube. Describes numerical simulation and matching experimental results for oscillatory flow within a baffled tube, and the basic mechanism of OFM in a horizontal single-orifice baffled tube as the fluid passes through...
Quantum method of determination of penetrability in FRW model with radiation
Sergei P. Maydanyuk
20140718T23:59:59.000Z
In this paper, the closed Friedmann-Robertson-Walker model with quantization in the presence of a positive cosmological constant, radiation, and Chaplygin gas is studied. To analyze the tunneling probability for the birth of an asymptotically de Sitter, inflationary Universe as a function of the radiation energy, a new definition of a "free" wave propagating inside strong fields is introduced. Vilenkin's tunneling boundary condition is corrected, and the penetrability and reflection are calculated in a fully quantum stationary approach.
Nishino, Takafumi
20120101T23:59:59.000Z
Modelling of turbine blade-induced turbulence (BIT) is discussed within the framework of three-dimensional Reynolds-averaged Navier-Stokes (RANS) actuator disk computations. We first propose a generic (baseline) BIT model, which is applied only to the actuator disk surface, does not include any model coefficients (other than those used in the original RANS turbulence model) and is expected to be valid in the limiting case where BIT is fully isotropic and in energy equilibrium. The baseline model is then combined with correction functions applied to the region behind the disk to account for the effect of rotor tip vortices causing a mismatch of Reynolds shear stress between short- and long-time averaged flow fields. Results are compared with wake measurements of a two-bladed wind turbine model of Medici and Alfredsson [Wind Energy, Vol. 9, 2006, pp. 219-236] to demonstrate the capability of the new model.
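As context for actuator disk computations like the one above, classical one-dimensional momentum theory relates the disk thrust coefficient to the axial induction factor and the far-wake velocity. A small sketch of these standard textbook relations (my own illustration, not the BIT model itself):

```python
def axial_induction(ct):
    """Axial induction factor a from thrust coefficient CT via 1-D momentum
    theory, CT = 4a(1 - a); valid below the turbulent-wake state (a < ~0.4)."""
    if not 0.0 <= ct < 1.0:
        raise ValueError("momentum theory requires 0 <= CT < 1")
    return 0.5 * (1.0 - (1.0 - ct) ** 0.5)

def wake_velocity(u_inf, ct):
    """Far-wake velocity behind an ideal actuator disk: U_w = U_inf * (1 - 2a)."""
    return u_inf * (1.0 - 2.0 * axial_induction(ct))

# Betz-optimal loading: CT = 8/9 corresponds to a = 1/3
a = axial_induction(8.0 / 9.0)
```

At the Betz-optimal thrust coefficient CT = 8/9, the induction factor is a = 1/3 and the far-wake velocity drops to one third of the freestream value.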
Schneider, JeanGuy
System-level optimisation. Keywords: green computing; Cloud computing; energy consumption; performance. In 1st ICSE Workshop on Green and Sustainable Software Engineering, Zurich, Switzerland, 3rd June 2012: An Energy Consumption Model and Analysis Tool for Cloud Computing Environments, Fei-Fei Chen, Jean-Guy Schneider et al.
Eric Wachsman; Keith L. Duncan
20060930T23:59:59.000Z
This research was focused on two distinct but related issues. The first issue concerned using defect modeling to understand the relationship between point defect concentration and the electrochemical, thermochemical and mechanochemical properties of typical solid oxide fuel cell (SOFC) materials. The second concerned developing relationships between the microstructural features of SOFC materials and their electrochemical performance. To understand the role point defects play in ceramics, a coherent analytical framework was used to develop expressions for the dependence of thermal expansion and elastic modulus on point defect concentration in ceramics. These models, collectively termed the continuum-level electrochemical model (CLEM), were validated through fits to experimental data from electrical conductivity, I-V characteristics, elastic modulus and thermochemical expansion experiments for (nominally pure) ceria, gadolinia-doped ceria (GDC) and yttria-stabilized zirconia (YSZ), with consistently good fits. The same values for the material constants were used in all of the fits, further validating our approach. As predicted by the continuum-level electrochemical model, the results reveal that the concentration of defects has a significant effect on the physical properties of ceramic materials and related devices. Specifically, for pure ceria and GDC, the elastic modulus decreased while the chemical expansion increased considerably in low partial pressures of oxygen. Conversely, the physical properties of YSZ remained insensitive to changes in oxygen partial pressure within the studied range. Again, the findings concurred with the predictions of our analytical model. Indeed, further analysis of the results suggests that an increase in the point defect content weakens the attractive forces between atoms in fluorite-structured oxides.
The effects of reduction treatment on the flexural strength and the fracture toughness of pure ceria were also evaluated at room temperature. The results reveal that the flexural strength decreases significantly after heat treatment in very low oxygen partial pressure environments; in contrast, however, fracture toughness increased by 30-40% when the oxygen partial pressure was decreased to the 10^-20 to 10^-22 atm range. Fractographic studies show that microcracks developed at 800 °C upon hydrogen reduction are responsible for the decreased strength. To understand the role of microstructure in electrochemical performance, electrical impedance spectra from symmetric LSM/YSZ/LSM cells were deconvoluted to obtain the key electrochemical components of electrode performance, namely charge transfer resistance, surface diffusion of reactive species and bulk gas diffusion through the electrode pores. These properties were then related to microstructural features, such as triple-phase boundary length and tortuosity. From these experiments we found that the impedance due to oxygen adsorption obeys a power law with pore surface area, while the impedance due to charge transfer obeys a power law with respect to triple-phase boundary length. A model based on kinetic theory explaining the observed power-law relationships was then developed. Finally, during our EIS work on the symmetric LSM/YSZ/LSM cells, a technique was developed to improve the quality of high-frequency impedance data and their subsequent deconvolution.
Ambiguity in the Determination of the Free Energy for a Model of the Circle Map
Brian G. Kenny; Tony W. Dixon
20060809T23:59:59.000Z
We consider a simple model to describe the widths of the mode-locked intervals for the critical circle map. Using two different partitions of the rational numbers, based on Farey series and Farey tree levels respectively, we calculate the free energy analytically at selected points for each partition. It is found that the result of the calculation depends on the method of partition. An implication of this is that the generalized dimensions $D_q$ are different for each partition except when $q=0$, i.e. only the Hausdorff dimension is the same in each case.
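The critical circle map underlying this abstract is easy to experiment with numerically. A sketch that estimates the winding number of the sine circle map at criticality (K = 1), illustrating mode locking (my own illustration, not the authors' free-energy calculation):

```python
from math import sin, pi

def winding_number(omega, k=1.0, n=20000):
    """Estimate the winding number W = lim (theta_n - theta_0)/n of the
    lift of the sine circle map, theta -> theta + omega - (k/2pi) sin(2pi theta).
    k = 1 is the critical case for which mode-locked intervals are studied."""
    theta = 0.0
    for _ in range(n):
        theta = theta + omega - (k / (2.0 * pi)) * sin(2.0 * pi * theta)
    return theta / n

# Inside a mode-locked interval W is rational; by symmetry W = 1/2 at omega = 0.5
w = winding_number(0.5)
```

At omega = 0.5 the orbit starting from theta = 0 visits half-integer lattice points exactly (sin vanishes there), so the estimate converges to W = 1/2 immediately.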
Musial, W.; Lawson, M.; Rooney, S.
20130201T23:59:59.000Z
The Marine and Hydrokinetic Technology (MHK) Instrumentation, Measurement, and Computer Modeling Workshop was hosted by the National Renewable Energy Laboratory (NREL) in Broomfield, Colorado, July 9-10, 2012. The workshop brought together over 60 experts in marine energy technologies to disseminate technical information to the marine energy community and collect information to help identify ways in which the development of a commercially viable marine energy industry can be accelerated. The workshop comprised plenary sessions that reviewed the state of the marine energy industry and technical sessions that covered specific topics of relevance. Each session consisted of presentations, followed by facilitated discussions. During the facilitated discussions, the session chairs posed several prepared questions to the presenters and audience to encourage communication and the exchange of ideas between technical experts. Following the workshop, attendees were asked to provide written feedback on their takeaways and their best ideas on how to accelerate the pace of marine energy technology development. The first four sections of this document give a general overview of the workshop format, provide presentation abstracts and discussion session notes, and list responses to the post-workshop questions. The final section presents key findings and conclusions from the workshop that suggest how the U.S. Department of Energy and national laboratory resources can be utilized to most effectively assist the marine energy industry.
Experimental determination and thermodynamic modeling of the Ni-Re binary system
Yaqoob, Khurram; Joubert, Jean-Marc (e-mail: jean-marc.joubert@icmpe.cnrs.fr) - Chimie Metallurgique des Terres Rares (CMTR), Institut de Chimie et des Materiaux Paris-Est (ICMPE), 2-8 rue Henri Dunant, 94320 Thiais Cedex, France
20121215T23:59:59.000Z
The phase diagram of the Ni-Re binary system has been partially reinvestigated by chemical, structural and thermal characterization of arc-melted alloys. The experimental results obtained during the present investigation were combined with the literature data, and a new phase diagram of the Ni-Re binary system is proposed. In comparison with the Ni-Re phase diagram proposed by Nash et al. in 1985 [1], significant differences in the homogeneity domains, freezing ranges and peritectic reaction temperature were evidenced. In addition, thermodynamic modeling of the studied system using the new experimental information has been carried out with the help of the CALPHAD method. The calculated Ni-Re phase diagram showed good agreement with the selected experimental information. Graphical abstract: Ni-Re phase diagram according to the present study. Highlights: • Reinvestigation of the Ni-Re phase diagram. • Extended phase field of the hcp phase. • Different freezing ranges and peritectic reaction temperature. • Thermodynamic modeling of the studied system using the CALPHAD method.
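As background to the CALPHAD modeling mentioned above, the simplest solution-phase Gibbs energy description combines an ideal mixing entropy with a zeroth-order Redlich-Kister excess term. A sketch with a hypothetical interaction parameter (not a fitted Ni-Re coefficient):

```python
from math import log

R = 8.314462618  # gas constant, J/(mol K)

def gibbs_mixing(x, t, l0):
    """Molar Gibbs energy of mixing (J/mol) for a binary substitutional
    solution in the simplest CALPHAD form: ideal configurational entropy
    plus a zeroth-order Redlich-Kister excess term L0 * x * (1 - x).
    x: mole fraction of component B (0 < x < 1); t: temperature in K;
    l0: interaction parameter in J/mol (hypothetical, for illustration)."""
    ideal = R * t * (x * log(x) + (1.0 - x) * log(1.0 - x))
    excess = l0 * x * (1.0 - x)
    return ideal + excess

# Hypothetical positive L0 (demixing tendency) at 1500 K
g = gibbs_mixing(0.5, 1500.0, 10000.0)
```

With only a zeroth-order term the curve is symmetric about x = 0.5; asymmetric systems require higher-order Redlich-Kister coefficients.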
Pesaran, A.; Kim, G. H.; Smith, K.; Santhanagopalan, S.; Lee, K. J.
20120501T23:59:59.000Z
This 2012 Annual Merit Review presentation gives an overview of the Computer-Aided Engineering of Batteries (CAEBAT) project and introduces the Multi-Scale, Multi-Dimensional model for modeling lithium-ion batteries for electric vehicles.
Perry, Richard Jay
19850101T23:59:59.000Z
4.5 percentage units less fat (21.64% vs 26.1% carcass fat) than nonimplanted steers of the same carcass weight and rate of gain (350 kg and ADG of 1.0 kg/d). Fat as a percentage of gain averaged 67.85, 52.05 and 39.59 for non... model which can be used for projecting changes in yield grade and quality grade with performance of cattle in the feedlot over time. The primary objectives of this research were: 1) To determine the "shelf life" of a steer in the feedlot. 2...
Lester, Christopher G; Parker, Michael A; White, Martin J
arXiv:hep-ph/0508143v2, 30 Aug 2005. Preprint typeset in JHEP style - HYPER VERSION. CAV-HEP-2005-15, ATL-PHYS-PUB-2005-013, ATL-COM-PHYS-2005-033. Determining SUSY model parameters and masses at the LHC using cross-sections, kinematic edges and other observables. Christopher G. Lester, Michael A. Parker and Martin J. White, Cavendish Laboratory, Madingley Road, Cambridge CB3 0HE, UK. Abstract: We address the problem of mass measurements of supersymmetric particles at the Large Hadron... been used to map the interesting region of the parameter space with fewer points than would be required in a scan in order to obtain similar performance. To demonstrate this, consider the following. There are four and a half parameters...
Price forecasting for notebook computers
Rutherford, Derek Paul
19970101T23:59:59.000Z
life cycle of a notebook. Since all data are publicly available, this approach can be used to assist managerial decision making in the notebook computer industry, for example, in determining when and how to upgrade a model and when to introduce a new...
M.F. Simpson; K.R. Kim
20101201T23:59:59.000Z
In support of closing the nuclear fuel cycle using non-aqueous separations technology, this project aims to develop computational models of electrorefiners based on fundamental chemical and physical processes. Spent driver fuel from Experimental Breeder Reactor-II (EBR-II) is currently being electrorefined in the Fuel Conditioning Facility (FCF) at Idaho National Laboratory (INL), and the Korea Atomic Energy Research Institute (KAERI) is developing electrorefining technology for future application to spent fuel treatment and management in the Republic of Korea (ROK). Electrorefining is a critical component of pyroprocessing, a non-aqueous chemical process which separates spent fuel into four streams: (1) uranium metal, (2) U/TRU metal, (3) metallic high-level waste containing cladding hulls and noble metal fission products, and (4) ceramic high-level waste containing sodium and active metal fission products. Having rigorous yet flexible electrorefiner models will facilitate process optimization and assist in troubleshooting as necessary. To attain such models, INL/UI has focused on approaches to develop a computationally light and portable two-dimensional (2D) model, while KAERI/SNU has investigated approaches to develop a computationally intensive three-dimensional (3D) model for detailed and fine-tuned simulation.
Data-Driven Optimization for Modeling in Computer Graphics and Vision
Yu, Lap Fai
20130101T23:59:59.000Z
Advances in 3D scanning, 3D display, and 3D printing have made these technologies... the recent trend of 3D printing in computer-aided design.
Perez-Sanchez, Danyl (CIEMAT, Avenida Complutense 40, 28040 Madrid, Spain)
20130701T23:59:59.000Z
As a result of a pilot project developed at the old Spanish 'Junta de Energia Nuclear' to extract uranium from ores, tailings materials were generated. Most of these residual materials were sent back to different uranium mines, but a small amount was mixed with conventional building materials and deposited near the old plant until the surrounding ground was flattened. The affected land is included in an area under institutional control and used as a recreational area. At the time of processing, uranium isotopes were separated, but other radionuclides of the uranium decay series such as Th-230, Ra-226 and daughters remain in the residue. Recently, the analyses of samples taken at different depths in the ground confirmed their presence. This paper presents the methodology used to calculate the derived concentration level ensuring that the reference dose level of 0.1 mSv y^-1 used as the radiological criterion is not exceeded. In this study, a radiological impact assessment was performed modeling the area as a recreational scenario. The modeling study was carried out with the code RESRAD, considering as exposure pathways external irradiation, inadvertent ingestion of soil, inhalation of resuspended particles, and inhalation of radon (Rn-222). It was concluded that, if the concentration of Ra-226 in the first 15 cm of soil is lower than 0.34 Bq g^-1, the dose would not exceed the reference dose. Applying this value as a derived concentration level and comparing with the results of measurements on the ground, some areas with a concentration of activity slightly higher than the latter were found. In these zones the remediation proposal has been to cover with a layer of 15 cm of clean material. This action represents a reduction of 85% of the dose and ensures compliance with the reference dose. (authors)
Illinois at Urbana-Champaign, University of
Center for Reliable and High-Performance Computing, Coordinated Science Laboratory. From International Computer Performance and Dependability Symposium, Erlangen, Germany, April 1995, pp. 285-294: MODELING RECYCLE: A CASE STUDY IN THE INDUSTRIAL USE OF MEASUREMENT AND MODELING, Luai M...
Alejandro Perdomo; Colin Truncik; Ivan Tubert-Brohman; Geordie Rose; Alán Aspuru-Guzik
20080516T23:59:59.000Z
In this report, we explore the use of a quantum optimization algorithm for obtaining low-energy conformations of protein models. We discuss mappings between protein models and optimization variables, which are in turn mapped to a system of coupled quantum bits. General strategies are given for constructing Hamiltonians to be used to solve optimization problems of physical/chemical/biological interest via quantum computation by adiabatic evolution. As an example, we implement the Hamiltonian corresponding to the Hydrophobic-Polar (HP) model for protein folding. Furthermore, we present an approach to reduce the resulting Hamiltonian to two-body terms, gearing towards an experimental realization.
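The HP lattice model whose Hamiltonian the authors implement assigns energy -1 to each pair of H residues that are lattice neighbors but not chain neighbors. A classical (non-quantum) sketch of that contact energy, with a toy sequence and conformation of my own choosing:

```python
def hp_energy(sequence, path):
    """Energy of an HP-model conformation on the 2-D square lattice:
    -1 for every pair of H residues that are lattice neighbors but not
    adjacent along the chain (the classical Dill HP contact energy)."""
    assert len(sequence) == len(path) and len(set(path)) == len(path), \
        "conformation must be self-avoiding and match the sequence length"
    pos = {p: i for i, p in enumerate(path)}   # lattice site -> residue index
    energy = 0
    for i, (x, y) in enumerate(path):
        if sequence[i] != 'H':
            continue
        for nb in ((x + 1, y), (x, y + 1)):    # +x/+y only: count each contact once
            j = pos.get(nb)
            if j is not None and sequence[j] == 'H' and abs(i - j) > 1:
                energy -= 1
    return energy

# Toy example: four H residues folded into a 2x2 square -> one H-H contact
e = hp_energy("HHHH", [(0, 0), (1, 0), (1, 1), (0, 1)])
```

Minimizing this function over self-avoiding walks is the combinatorial problem that the adiabatic quantum approach encodes into a spin Hamiltonian.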
Coupling Multi-Component Models with MPH on Distributed Memory Computer Architectures
He, Yun; Ding, Chris
20050101T23:59:59.000Z
Among these, NASA's Earth System Modeling Framework (ESMF) aims to facilitate coupling earth system model components and to...
Wang, S L; Singer, M A
20090713T23:59:59.000Z
The purpose of this report is to evaluate the hemodynamic effects of renal vein inflow and filter position on unoccluded and partially occluded IVC filters using three-dimensional computational fluid dynamics. Three-dimensional models of the TrapEase and Gunther Celect IVC filters, spherical thrombi, and an IVC with renal veins were constructed. Hemodynamics of steady-state flow were examined for unoccluded and partially occluded TrapEase and Gunther Celect IVC filters in varying proximity to the renal veins. Flow past the unoccluded filters demonstrated minimal disruption. Natural regions of stagnant/recirculating flow in the IVC are observed superior to the bilateral renal vein inflows, and high flow velocities and elevated shear stresses are observed in the vicinity of renal inflow. Spherical thrombi induce stagnant and/or recirculating flow downstream of the thrombus. Placement of the TrapEase filter in the suprarenal position resulted in a large area of low shear stress/stagnant flow within the filter just downstream of thrombus trapped in the upstream trapping position. Filter position with respect to renal vein inflow influences the hemodynamics of filter trapping. Placement of the TrapEase filter in a suprarenal location may be thrombogenic, with redundant areas of stagnant/recirculating flow and low shear stress along the caval wall due to the upstream trapping position and the naturally occurring region of stagnant flow from the renal veins. Infrarenal placement of IVC filters in a near juxtarenal position, with the downstream cone near the renal vein inflow, likely confers increased levels of mechanical lysis of trapped thrombi due to increased shear stress from renal vein inflow.
MODELING STRATEGIES TO COMPUTE NATURAL CIRCULATION USING CFD IN A VHTR AFTER A LOFA
Yu-Hsin Tung; Richard W. Johnson; Ching-Chang Chieng; Yuh-Ming Ferng
20121101T23:59:59.000Z
A prismatic gas-cooled very high temperature reactor (VHTR) is being developed under the next generation nuclear plant program (NGNP) of the U.S. Department of Energy, Office of Nuclear Energy. In the design of the prismatic VHTR, hexagonal shaped graphite blocks are drilled to allow insertion of fuel pins, made of compacted TRISO fuel particles, and coolant channels for the helium coolant. One of the concerns for the reactor design is the effects of a loss of flow accident (LOFA) where the coolant circulators are lost for some reason, causing a loss of forced coolant flow through the core. In such an event, it is desired to know what happens to the (reduced) heat still being generated in the core and if it represents a problem for the fuel compacts, the graphite core or the reactor vessel (RV) walls. One of the mechanisms for the transport of heat out of the core is by the natural circulation of the coolant, which is still present. That is, how much heat may be transported by natural circulation through the core and upwards to the top of the upper plenum? It is beyond current capability for a computational fluid dynamic (CFD) analysis to perform a calculation on the whole RV with a sufficiently refined mesh to examine the full potential of natural circulation in the vessel. The present paper reports the investigation of several strategies to model the flow and heat transfer in the RV. It is found that it is necessary to employ representative geometries of the core to estimate the heat transfer. However, by taking advantage of global and local symmetries, a detailed estimate of the strength of the resulting natural circulation and the level of heat transfer to the top of the upper plenum is obtained.
Does the Church-Turing thesis apply outside computer science? (hal-00157955, version 1, 27 Jun 2007)
Paris-Sud XI, Université de
...can be seen as the first step of computer science. They determined the invention of computers... All the above-mentioned computational models are based on unbounded discrete time, unbounded...
Jeffrey D. Evanseck; Jeffry D. Madura; Jonathan P. Mathews
20050527T23:59:59.000Z
We have made progress in carrying out large-scale molecular dynamics simulations using the CHARMM force field in order to refine our coal/guest interactions. There have been two issues facing us over the last year. First, we have had to create a completely new topology and parameter definition for coal. Since we are using a classical force field, we have adopted the strategy of treating coal as composed of individual common fragments based upon a distribution of mass, composition, and bonding. Our procedure is similar to treating a protein as being composed of a discrete set of amino acids. Second, we have had to incorporate the quality CO2 parameters that we have developed over the last two years. These involve geometric and arithmetic combining procedures, which we have successfully implemented. We have utilized computational molecular modeling to generate a state-of-the-art large-scale structural representation of a bituminous coal of low volatile bituminous rank. This structure has been used to investigate the molecular forces between the bituminous coal structure (or idealized pores) and the molecular species CH4 and CO2. We are close to carrying out molecular dynamics simulations, which will allow us to explore and test the newly created model of coal.
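The "geometric and arithmetic" procedures mentioned above presumably refer to Lorentz-Berthelot-style combining rules for cross-interaction parameters. A sketch with hypothetical Lennard-Jones parameters (illustrative values only, not the authors' fitted CO2/coal parameters):

```python
def lorentz_berthelot(sigma_i, eps_i, sigma_j, eps_j):
    """Cross-interaction Lennard-Jones parameters from the arithmetic
    (Lorentz) rule for the size parameter sigma and the geometric
    (Berthelot) rule for the well depth epsilon, as commonly used when
    mixing force-field atom types."""
    sigma_ij = 0.5 * (sigma_i + sigma_j)     # arithmetic mean of sizes
    eps_ij = (eps_i * eps_j) ** 0.5          # geometric mean of well depths
    return sigma_ij, eps_ij

# Hypothetical atom-type parameters (sigma in Angstrom, epsilon in kcal/mol)
sigma_ij, eps_ij = lorentz_berthelot(3.0, 0.1, 3.4, 0.4)
```

CHARMM-style force fields apply exactly this kind of pairwise mixing when no explicit off-diagonal (NBFIX-type) parameters are supplied.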
An experimental validation of the PRO model for parallel and distributed computation
Boyer, Edmond
The Parallel Resource-Optimal (PRO) model was introduced by Gebremedhin et al. [2002] as a framework for the design and analysis of parallel algorithms; it is similar to the BSP model. For a discussion of the PRO model and its key features, see Gebremedhin et al. [2002].
Using the FLUENT computational fluid dynamics code to model the NACOK corrosion test
Parks, Benjamin T
20040101T23:59:59.000Z
As a part of advancing nuclear technology, computational fluid dynamics (CFD) analysis offers safer and lower-cost results relative to experimental work. Its use as a safety analysis tool is gaining much broader acceptance ...
Wu, Yi-Chieh, Ph. D. Massachusetts Institute of Technology
20140101T23:59:59.000Z
Computational techniques have long been applied to biological data to address a wide range of evolutionary questions. In phylogenetics, methods for reconstructing gene histories from sequence data have enabled researchers ...
Garba, M.T.; Gonzalez-Velez, H.; Roach, D.L.
20101126T23:59:59.000Z
This paper examines the initial parallel implementation of SCATTER, a computationally intensive inelastic neutron scattering routine with polycrystalline averaging capability, for the General Utility Lattice Program (GULP). Of particular importance...
Modeling Aspects and Computational Methods for Some Recent Problems of Tomographic Imaging
Allmaras, Moritz
20120214T23:59:59.000Z
Computationally, we implemented backprojection by counting the number of particle trajectories intersecting each voxel of a regular rectangular grid covering the domain of detection. For collimated measurements, we derived confidence estimates indicating when...
Computer Energy Modeling Techniques for Simulating Large-Scale Correctional Institutes in Texas
Heneghan, T.; Haberl, J. S.; Saman, N.; Bou-Saada, T. E.
19960101T23:59:59.000Z
Building energy simulation programs have seen increasing use for evaluating energy consumption and energy conservation retrofits in buildings. Utilization of computer simulation programs for large facilities with multiple buildings, however...
Christopher Beetle; Benjamin Bromley; Richard H. Price
20060208T23:59:59.000Z
The periodic standing wave approach to binary inspiral assumes rigid rotation of gravitational fields and hence helically symmetric solutions. To exploit the symmetry, numerical computations must solve for "helical scalars," fields that are functions only of corotating coordinates, the labels on the helical Killing trajectories. Here we present the formalism for describing linearized general relativity in terms of helical scalars, and we present solutions to the mixed partial differential equations of the linearized gravity problem (and to a toy nonlinear problem) using the adapted coordinates and numerical techniques previously developed for scalar periodic standing wave computations. We argue that the formalism developed may suffice for periodic standing wave computations in the post-Minkowskian approximation and for full general relativity.
DEVELOPING A NEW APPROACH OF COMPUTER USE 'KISS MODELING' FOR DESIGN-IDEAS ALTERNATIVES OF FORM
Wael ...; YOSHIHIRO KOBAYASHI. South Valley University, Faculty of Fine Arts at Luxor, Egypt. Form generation through computational power is more prominent in the two dimensions than the three...
Computational Model of Forward and Opposed Smoldering Combustion with Improved Chemical Kinetics
Rein, Guillermo
A computational study has been carried out to investigate smoldering ignition and propagation in polyurethane foam. The one-dimensional, transient, governing equations for smoldering combustion in a porous fuel are solved ...
Koniges, A; Eder, E; Liu, W; Barnard, J; Friedman, A; Logan, G; Fisher, A; Masers, N; Bertozzi, A
20111104T23:59:59.000Z
The Neutralized Drift Compression Experiment II (NDCX II) is an induction accelerator planned for initial commissioning in 2012. The final design calls for a 3 MeV, Li+ ion beam, delivered in a bunch with characteristic pulse duration of 1 ns, and transverse dimension of order 1 mm. The NDCX II will be used in studies of material in the warm dense matter (WDM) regime, and ion beam/hydrodynamic coupling experiments relevant to heavy ion based inertial fusion energy. We discuss recent efforts to adapt the 3D ALEAMR code to model WDM experiments on NDCX II. The code, which combines Arbitrary Lagrangian Eulerian (ALE) hydrodynamics with Adaptive Mesh Refinement (AMR), has physics models that include ion deposition, radiation hydrodynamics, thermal diffusion, anisotropic material strength with material time history, and advanced models for fragmentation. Experiments at NDCXII will explore the process of bubble and droplet formation (twophase expansion) of superheated metal solids using ion beams. Experiments at higher temperatures will explore equation of state and heavy ion fusion beamtotarget energy coupling efficiency. Ion beams allow precise control of local beam energy deposition providing uniform volumetric heating on a timescale shorter than that of hydrodynamic expansion. The ALEAMR code does not have any export control restrictions and is currently running at the National Energy Research Scientific Computing Center (NERSC) at LBNL and has been shown to scale well to thousands of CPUs. New surface tension models that are being implemented and applied to WDM experiments. Some of the approaches use a diffuse interface surface tension model that is based on the advective CahnHilliard equations, which allows for droplet breakup in divergent velocity fields without the need for imposed perturbations. Other methods require seeding or other methods for droplet breakup. 
We also briefly discuss the effects of the move to exascale computing and related computational changes on general modeling codes in fusion energy.
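For reference, the advective Cahn-Hilliard system underlying a diffuse-interface surface tension model of the kind mentioned above takes the following standard form (the symbols and scalings here follow common usage in the phase-field literature, not necessarily the ALE-AMR implementation):

```latex
\frac{\partial c}{\partial t} + \mathbf{u}\cdot\nabla c
  = \nabla\cdot\left(M\,\nabla\mu\right),
\qquad
\mu = f'(c) - \epsilon^{2}\,\nabla^{2}c ,
```

where $c$ is the phase field distinguishing liquid from vapor, $\mathbf{u}$ the advecting velocity, $M$ a mobility, $\mu$ the chemical potential, $f(c)$ a double-well free energy density, and $\epsilon$ the interface-width parameter. Surface tension emerges from the energy stored in the diffuse interface, which is why droplet breakup can occur without seeded perturbations.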
I. Robertson; A. Beaudoin; J. Lambros
20050131T23:59:59.000Z
Development and validation of constitutive models for polycrystalline materials subjected to high-strain-rate loading over a range of temperatures are needed to predict the response of engineering materials to in-service-type conditions (foreign-object damage, high-strain-rate forging, high-speed sheet forming, deformation behavior during forming, response to extreme conditions, etc.). Accounting accurately for the complex effects that can occur during extreme and variable loading conditions requires significant and detailed computational and modeling efforts. These efforts must be closely coupled with precise and targeted experimental measurements that not only verify the predictions of the models, but also provide input about the fundamental processes responsible for the macroscopic response. Achieving this coupling between modeling and experimentation is the guiding principle of this program. Specifically, this program seeks to bridge the length scale between discrete dislocation interactions with grain boundaries and continuum models for polycrystalline plasticity. Achieving this goal requires incorporating these complex dislocation-interface interactions into the well-defined behavior of single crystals. Despite the widespread study of metal plasticity, this aspect is not well understood for simple loading conditions, let alone extreme ones. Our experimental approach includes determining the high-strain-rate response as a function of strain and temperature with post-mortem characterization of the microstructure, quasi-static testing of pre-deformed material, and direct observation of the dislocation behavior during reloading by using the in situ transmission electron microscope deformation technique. These experiments will provide the basis for the development and validation of physically based constitutive models, which will include dislocation-grain-boundary interactions for polycrystalline systems.
One aspect of the program will involve the direct observation of specific mechanisms of microplasticity, as these will indicate the boundary value problem that should be addressed. This focus on the pre-yield region in the quasi-static effort (the elasto-plastic transition) is also a tractable one from an experimental and modeling viewpoint. In addition, our approach will minimize the need to fit model parameters to experimental data to obtain convergence. These are critical steps toward the primary objective of simulating and modeling material performance under extreme loading conditions. Achieving these goals required assembling a multidisciplinary team, see Table 1, with key collaborators at the National Laboratories. One of the major issues for the team members was to learn about the expertise available and how to communicate across disciplines. The communication issue is a challenging one and is being addressed in part with weekly meetings in which the graduate students present lectures on the fundamentals of their respective areas to the entire group. Breakthroughs in science are presented, but these, by necessity, assume a tutorial nature; examples of student-led meetings can be found at our website http://hrdg.mse.uiuc.edu/. For example, interpreting electron micrographs and understanding what can be achieved by using electron microscopy is challenging for the modeling expert, just as comprehending the input and limitations of crystal plasticity codes is for an electron microscopist. Significant progress has been made in dissolving these barriers, and the students are able to work across the disciplines.
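A common rate-dependent constitutive form used in crystal plasticity codes of the kind this program targets is the power-law slip rule, in which the shear rate on each slip system depends on the resolved shear stress and a slip resistance that evolves with hardening. The sketch below is a generic textbook form with illustrative parameter values, not the program's actual model:

```python
import math

def slip_rate(tau, g, gamma0=1e-3, m=0.05):
    """Power-law viscoplastic slip rate for a single slip system.

    tau    : resolved shear stress on the slip system
    g      : current slip resistance (hardening state variable)
    gamma0 : reference slip rate (illustrative value)
    m      : rate-sensitivity exponent (illustrative value)

    Returns the slip rate gamma_dot = gamma0 * |tau/g|**(1/m) * sign(tau).
    """
    return gamma0 * abs(tau / g) ** (1.0 / m) * math.copysign(1.0, tau)
```

With a small rate-sensitivity exponent `m`, the response is nearly rate-independent: slip is negligible until the resolved stress approaches the slip resistance `g`, which is how the elasto-plastic (pre-yield) transition discussed above enters such models.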
Computational modeling of the brain limbic system and its application in control engineering
Shahmirzadi, Danial
20051101T23:59:59.000Z
… Chapter IV of this thesis shows the utilization of the Brain Emotional Learning (BEL) model in different applications of control and signal fusion systems. The main effort is focused on applying the model to control systems, where the model acts…
Bradley, D.R.; Gardner, D.R.; Brockmann, J.E.; Griffith, R.O. [Sandia National Labs., Albuquerque, NM (United States)]
19931001T23:59:59.000Z
The CORCON-Mod3 computer code was developed to mechanistically model the important core-concrete interaction phenomena, including those phenomena relevant to the assessment of containment failure and radionuclide release. The code can be applied to a wide range of severe accident scenarios and reactor plants, and it represents the current state of the art for simulating core debris interactions with concrete. This document comprises the user's manual and gives a brief description of the models and of the assumptions and limitations in the code. Also discussed are the input parameters and the code output. Two sample problems are also given.
Refining and Extending the Business Model with Information Technology: Dell Computer Corporation
Kraemer, Kenneth L; Dedrick, Jason; Yamashiro, Sandra
19990101T23:59:59.000Z
…of Dell’s Direct Business Model Fuels Fifteenth Consecutive… Center for Research on Information Technology
Gimpel, Rodney F.; Kruger, Albert A.
20131216T23:59:59.000Z
The purpose of this calculation is to determine the LAW glass former recipe and additives with their respective amounts. The methodology and equations contained herein are to be used in the G2 and ACM models until better information is supplied by R&T efforts. This revision includes calculations that determine the mass and volume of the bulk chemicals/minerals needed per batch. It also contains calculations (for the G2 model) to help prevent overflow in the LAW Feed Preparation Vessel.
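The per-batch mass and volume bookkeeping described above amounts to scaling each additive's mass fraction by the batch glass mass and converting to bulk volume via density. The sketch below illustrates that arithmetic only; the additive names, mass fractions, and bulk densities are hypothetical placeholders, not values from the actual recipe calculation:

```python
def batch_amounts(glass_mass_per_batch, recipe):
    """Mass and bulk volume of each glass-former additive for one batch.

    glass_mass_per_batch : target glass mass for the batch (kg)
    recipe : dict mapping additive name -> (mass fraction of glass,
             bulk density in kg/m^3)

    Returns dict mapping additive name -> (mass in kg, bulk volume in m^3).
    """
    amounts = {}
    for name, (fraction, bulk_density) in recipe.items():
        mass = glass_mass_per_batch * fraction  # kg of this additive
        volume = mass / bulk_density            # m^3 of bulk material to stage
        amounts[name] = (mass, volume)
    return amounts

# Hypothetical example: 60% silica and 10% boric acid by glass mass
example = batch_amounts(1000.0, {
    "silica":     (0.60, 1201.0),
    "boric acid": (0.10, 871.0),
})
```

Summing the per-additive volumes against the working volume of the feed-preparation vessel is the kind of check that supports the overflow-prevention calculation mentioned for the G2 model.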