Air Transport Optimization Model | NISAC
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
NISAC: Air Transport Optimization Model. Network Optimization Models (RNAS and ATOM), posted Mar 1, 2012. Many critical infrastructures can be represented by a network of interconnected nodes and links. Mathematically sound nonlinear optimization techniques can then be applied to these networks to understand their behavior under normal and disrupted situations. Network optimization models are particularly useful for evaluating transportation system
Pyomo : Python Optimization Modeling Objects.
Siirola, John; Laird, Carl Damon; Hart, William Eugene; Watson, Jean-Paul
2010-11-01
The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g. using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders decomposition.
HOMER: The Micropower Optimization Model
Not Available
2004-03-01
HOMER, the micropower optimization model, helps users design micropower systems for off-grid and grid-connected power applications. HOMER models micropower systems with one or more power sources, including wind turbines, photovoltaics, biomass power, hydropower, cogeneration, diesel engines, batteries, fuel cells, and electrolyzers. Users can explore a range of design questions, such as which technologies are most cost-effective, what size components should be, how project economics are affected by changes in loads or costs, and whether the renewable resource is adequate.
Network Optimization Models (RNAS and ATOM) | NISAC
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
been used to study policy options concerning the movement of toxic chemicals by rail. Air Transport Optimization Model (ATOM): ATOM is a network-optimization model designed to...
Biotrans: Cost Optimization Model | Open Energy Information
URI: cleanenergysolutions.org/content/biotrans-cost-optimization-model Language: English Policies: Deployment Programs: Demonstration &...
Scientists use world's fastest supercomputer to model origins of the
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
unseen universe. The model aims to look at galaxy-scale mass concentrations above and beyond quantities seen in state-of-the-art sky surveys. October 26, 2009. Los Alamos National Laboratory sits on top of a once-remote mesa in northern New Mexico, with the Jemez mountains as a backdrop to research and innovation covering multiple disciplines, from bioscience to sustainable energy
Optimal Initial Conditions for Coupling Ice Sheet Models to Earth...
Office of Scientific and Technical Information (OSTI)
Optimal Initial Conditions for Coupling Ice Sheet Models to Earth System Models. Citation ... Country of Publication: United States. Language: English.
optimal initial conditions for coupling ice sheet models to earth...
Office of Scientific and Technical Information (OSTI)
optimal initial conditions for coupling ice sheet models to earth system models. Perego, Mauro (Sandia National Laboratories); Price, Stephen F...
Chen, T.L.; Lin, Z.S.; Chen, Y.L.
1995-10-01
The purpose of this study was to estimate the original gas in place (OGIP) of a water-drive reservoir in the Port Arthur field, Texas, US, using an optimization algorithm. The properties of the associated aquifer were also obtained. The good agreement between the results of this study and those of a simulation study is demonstrated in this paper. In this study, the material balance equation for a gas reservoir and the van Everdingen-Hurst model for an aquifer were solved simultaneously to calculate cumulative gas production. The result was then compared with the cumulative gas production measured in the field at each pressure. The following parameters were adjusted: OGIP, thickness of the aquifer, water encroachment angle, ratio of aquifer to reservoir radius, and aquifer permeability. Initially the parameters were adjusted manually; the procedure was then automated with the simplex technique, an optimization algorithm. When the difference between the calculated and observed cumulative gas production was minimal, the parameters used in the model were taken as the results. A water-drive gas reservoir, the "C" sand gas reservoir in the Port Arthur field, which had produced for about 12 years, was analyzed successfully. The results showed that the OGIP of 60.6 BCF estimated in this study compared favorably with the 56.2 BCF obtained by a numerical simulator in another study. In addition, the aquifer properties, which were unavailable from the conventional plotting method, could be estimated from this study and compared favorably with the core data.
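The history-matching loop can be sketched with a drastically simplified p/z material balance (no aquifer term) and a one-parameter grid search standing in for the simplex algorithm; all numbers except the reported 60.6 BCF OGIP are invented:

```python
# Toy history match: adjust OGIP (G) until modeled cumulative production
# matches "observed" data. The paper's actual model couples material
# balance with the van Everdingen-Hurst aquifer model and fits five
# parameters with the simplex method; this sketch fits only G.

def gp_model(G, p_over_z, pi_over_zi):
    """Volumetric material balance: Gp = G * (1 - (p/z) / (pi/zi))."""
    return G * (1.0 - p_over_z / pi_over_zi)

pi_over_zi = 5000.0                       # initial p/z (hypothetical)
p_over_z_history = [4500.0, 4000.0, 3500.0, 3000.0]
true_G = 60.6                             # BCF, the value reported in the study
observed = [gp_model(true_G, p, pi_over_zi) for p in p_over_z_history]

def sse(G):
    """Sum of squared errors between modeled and observed production."""
    return sum((gp_model(G, p, pi_over_zi) - q) ** 2
               for p, q in zip(p_over_z_history, observed))

# A coarse grid search (40.0 to 80.0 BCF, 0.1 steps) stands in for simplex.
best_G = min((g / 10.0 for g in range(400, 801)), key=sse)
```

The minimizer recovers the OGIP used to generate the synthetic data, which is the same logic the study applies with field measurements.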
Model Identification for Optimal Diesel Emissions Control
Stevens, Andrew J.; Sun, Yannan; Song, Xiaobo; Parker, Gordon
2013-06-20
In this paper we develop a model-based controller for diesel emission reduction using system identification methods. Specifically, our method minimizes the downstream readings from a production NOx sensor while injecting a minimal amount of urea upstream. Based on the linear quadratic estimator, we derive the closed-form solution to a cost function that accounts for the case where some of the system inputs are not controllable. Our cost function can also be tuned to trade off between input usage and output optimization. Our approach performs better than a production controller in simulation: our NOx conversion efficiency was 92.7%, while the production controller achieved 92.4%. For NH3 conversion, our efficiency was 98.7% compared to 88.5% for the production controller.
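The tunable quadratic trade-off described above has an analytic minimizer; a minimal sketch follows (the symbols a, b, q, r are illustrative stand-ins, not the paper's identified model):

```python
# For a linear output y = a*u + b and cost J(u) = q*y**2 + r*u**2,
# setting dJ/du = 0 gives the closed-form optimal input.

def cost(u, a, b, q, r):
    """Quadratic cost trading off output error (q) against input usage (r)."""
    return q * (a * u + b) ** 2 + r * u ** 2

def u_star(a, b, q, r):
    """argmin_u J(u): dJ/du = 2*q*a*(a*u + b) + 2*r*u = 0."""
    return -q * a * b / (q * a * a + r)

a, b, q, r = 0.8, 2.0, 1.0, 0.1   # hypothetical plant gain, bias, weights
u0 = u_star(a, b, q, r)
```

Raising r penalizes urea usage and pulls u0 toward zero; raising q prioritizes driving the NOx reading down, which mirrors the tuning knob the abstract describes.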
Quark-Gluon Plasma Model and Origin of Magic Numbers
Ghahramany, N.; Ghanaatian, M.; Hooshmand, M.
2008-04-21
Using the Boltzmann distribution in a quark-gluon plasma sample, it is possible to obtain all existing magic numbers and their extensions without applying the spin and spin-orbit couplings. In this model it is assumed that, in a quark-gluon thermodynamic plasma, quarks have no interactions and tend to form nucleons. By considering a lattice of a central quark and the surrounding quarks and using a statistical approach to find the maximum number of microstates, the origin of the magic numbers is explained and a new magic number is obtained.
Modeling and Multidimensional Optimization of a Tapered Free...
Office of Scientific and Technical Information (OSTI)
Journal Article: Modeling and Multidimensional Optimization of a Tapered Free Electron Laser. Citation Details ... Publication Date: 2013-03-28. OSTI Identifier: 1074231. Report ...
Stochastic Robust Mathematical Programming Model for Power System Optimization
Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
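The "expectation of worst cases" objective can be sketched in a few lines: for each uncertainty set, take the worst-case second-stage cost, then weight by the scenario probability (all scenario data below are invented for illustration):

```python
# Stochastic robust objective sketch: E over scenarios of (max over each
# scenario's uncertainty set). Candidate costs per set are hypothetical.
uncertainty_sets = {
    "low_wind": [10.0, 12.0, 15.0],   # second-stage costs in this set
    "high_wind": [4.0, 7.0],
}
probs = {"low_wind": 0.3, "high_wind": 0.7}   # scenario probabilities

objective = sum(probs[s] * max(costs)          # worst case within each set
                for s, costs in uncertainty_sets.items())
```

A purely robust model would take the single max across everything (15.0 here), while a purely stochastic model would average expected costs; the hybrid lands in between.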
Use Computational Model to Design and Optimize Welding Conditions to
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Suppress Helium Cracking during Welding | Department of Energy. Today, welding is widely used for the repair, maintenance, and upgrade of nuclear reactor components. As a critical technology to extend the service life of nuclear power plants beyond 60 years, weld technology must be
Model-Based Transient Calibration Optimization for Next Generation Diesel
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Engines | Department of Energy. 2005 Diesel Engine Emissions Reduction (DEER) Conference Presentations and Posters. 2005_deer_atkinson.pdf (585.55 KB). More Documents & Publications: Future Diesel Engine Thermal Efficiency Improvement and Emissions Control Technology; Integrated Engine and Aftertreatment Technology Roadmap for EPA
Contingency contractor optimization. Phase 3, model description and formulation.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Durfee, Justin D.; Jones, Dean A.; Martin, Nathaniel; Detry, Richard Joseph; Nanco, Alan Stewart; Nozick, Linda Karen
2013-10-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during the Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to account for the variability of the Total Force Mix when there is uncertainty in mission requirements.
Contingency contractor optimization. phase 3, model description and formulation.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Detry, Richard Joseph; Durfee, Justin D.; Jones, Dean A.; Martin, Nathaniel; Nanco, Alan Stewart; Nozick, Linda Karen
2013-06-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during the Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilians, and contractors that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower use rules. An additional feature allows the model to account for the variability of the Total Force Mix when there is uncertainty in mission requirements.
He, Yi; Scheraga, Harold A.; Liwo, Adam
2015-12-28
Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in its all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.
The origins of computer weather prediction and climate modeling
Lynch, Peter [Meteorology and Climate Centre, School of Mathematical Sciences, University College Dublin, Belfield (Ireland)], E-mail: Peter.Lynch@ucd.ie
2008-03-20
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against each other and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses Hyper-Dual numbers to determine the sensitivities, and a Meta-Model method that uses the Hyper-Dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation, where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the Meta-Model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame. The determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte-Carlo simulation along with reduced simulations using Latin Hypercube and the stochastic reduced order model sampling technique. Both techniques produced accurate results, with the stochastic reduced order modeling technique producing less error than exhaustive sampling for the majority of methods.
A Physical Model For The Origin Of Volcanism Of The Tyrrhenian...
Of Neapolitan Area. Journal Article: A Physical Model For The Origin Of Volcanism Of The Tyrrhenian Margin - The...
Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors
Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine
2007-06-01
Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
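The cache-blocking idea the paper evaluates can be illustrated with a 2D 5-point Jacobi sweep: the blocked version visits the grid tile by tile so each tile's working set can stay in cache (pure Python here for clarity; the performance effect only materializes in compiled code):

```python
# Naive vs. cache-blocked traversal of a 5-point stencil sweep.
def sweep_naive(a):
    n = len(a)
    out = [row[:] for row in a]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            out[i][j] = 0.25 * (a[i-1][j] + a[i+1][j] + a[i][j-1] + a[i][j+1])
    return out

def sweep_blocked(a, bs=4):
    """Same arithmetic, but iterated over bs x bs tiles for cache reuse."""
    n = len(a)
    out = [row[:] for row in a]
    for ii in range(1, n - 1, bs):          # loop over tile origins
        for jj in range(1, n - 1, bs):
            for i in range(ii, min(ii + bs, n - 1)):
                for j in range(jj, min(jj + bs, n - 1)):
                    out[i][j] = 0.25 * (a[i-1][j] + a[i+1][j]
                                        + a[i][j-1] + a[i][j+1])
    return out
```

Both traversals perform identical per-point arithmetic, so the results agree exactly; only the memory access order changes, which is precisely the knob the paper's cache-aware optimizations turn.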
Pumping Optimization Model for Pump and Treat Systems - 15091
Baker, S.; Ivarson, Kristine A.; Karanovic, M.; Miller, Charles W.; Tonkin, M.
2015-01-15
Pump and Treat systems are being utilized to remediate contaminated groundwater in the Hanford 100 Areas adjacent to the Columbia River in Eastern Washington. Design of the systems was supported by a three-dimensional (3D) fate and transport model. This model provides sophisticated simulation capabilities but requires many hours to calculate results for each simulation considered. Many simulations are required to optimize system performance, so a two-dimensional (2D) model was created to reduce run time. The 2D model was developed as an equivalent-property version of the 3D model that derives boundary conditions and aquifer properties from the 3D model. It produces predictions that are very close to the 3D model predictions, allowing it to be used for comparative remedy analyses. Any potential system modifications identified using the 2D version are verified by running the 3D model to confirm performance. The 2D model was incorporated into a comprehensive analysis system (the Pumping Optimization Model, POM) to simplify analysis of multiple simulations. It allows rapid turnaround through a graphical user interface that (1) allows operators to create hypothetical scenarios for system operation, (2) feeds the input to the 2D fate and transport model, and (3) displays the scenario results to evaluate performance improvement. All of the above is accomplished within the user interface. Complex analyses can be completed within a few hours, and multiple simulations can be compared side by side. The POM utilizes standard office computing equipment and established groundwater modeling software.
Vrugt, Jasper A; Wohling, Thomas
2008-01-01
Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multi-objective optimization and Bayesian Model Averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multi-objective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM, and used to generate four different model ensembles. These ensembles are post-processed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are: (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increases with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multi-objective optimization and BMA framework proposed in this paper is very useful to generate forecast ensembles of soil hydraulic models.
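The mean BMA forecast referenced in conclusion (1) is simply the posterior-weighted average of the member forecasts; a minimal sketch (weights and pressure-head forecasts below are invented, not the study's values):

```python
# BMA mean forecast: weight each member model's prediction by its
# posterior model probability and sum.
weights = [0.5, 0.3, 0.2]           # posterior model probabilities (sum to 1)
forecasts = [-1.10, -0.95, -1.30]   # pressure head predicted by each model (m)

bma_mean = sum(w * f for w, f in zip(weights, forecasts))
```

The BMA predictive variance adds a between-model spread term on top of each member's own variance, which is why the uncertainty ranges in conclusion (2) can widen even when individual models are confident.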
Modeling Microinverters and DC Power Optimizers in PVWatts
MacAlpine, S.; Deline, C.
2015-02-01
Module-level distributed power electronics including microinverters and DC power optimizers are increasingly popular in residential and commercial PV systems. Consumers are realizing their potential to increase design flexibility, monitor system performance, and improve energy capture. It is becoming increasingly important to accurately model PV systems employing these devices. This document summarizes existing published documents to provide uniform, impartial recommendations for how the performance of distributed power electronics can be reflected in NREL's PVWatts calculator (http://pvwatts.nrel.gov/).
Optimal control of CPR procedure using hemodynamic circulation model
Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok
2007-12-25
A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
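The difference-equation core of such a circulation model can be sketched with a single-compartment stand-in (the patent describes a full system of difference equations; the coefficients and pressure profiles below are illustrative only):

```python
# Toy discrete-time circulation update driven by a chest-pressure profile.
# x[k+1] = alpha * x[k] + beta * u[k], with u[k] the applied chest pressure.
def simulate(pressure_profile, alpha=0.9, beta=0.05):
    """Return total blood flow produced by a given pressure timing pattern."""
    flow = 0.0
    total = 0.0
    for p in pressure_profile:
        flow = alpha * flow + beta * p   # one step of the difference equation
        total += flow
    return total

# Two candidate timing patterns; an optimal control search would compare
# many such profiles and pick the one maximizing total flow.
steady = simulate([50.0] * 20)
pulsed = simulate([100.0, 0.0] * 10)
```

An OC algorithm would treat the profile itself as the decision variable and maximize the simulated flow subject to pressure limits, rather than comparing a handful of hand-picked patterns.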
Computer model for characterizing, screening, and optimizing electrolyte systems
Gering, Kevin L.
2015-06-15
Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Given that the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.
Optimal Control of Distributed Energy Resources using Model Predictive Control
Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.; Zhang, Wei; Lu, Shuai; Samaan, Nader A.; Butler-Purry, Karen
2012-07-22
In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage, and demand response can be used to complement fossil-fueled generators. The uncertainty and variability due to high penetration of wind make reliable system operations and controls challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in power output of diesel generators, minimizing costs associated with low battery life of energy storage, and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem, and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
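The receding-horizon mechanism behind MPC can be sketched with a toy storage-dispatch problem: at each step, enumerate action sequences over a short horizon, apply only the first move of the best sequence, then re-plan (the paper solves a full multi-objective formulation instead; everything below is a simplified stand-in):

```python
import itertools

def mpc_step(soc, net_load, horizon_loads, actions=(-1.0, 0.0, 1.0)):
    """Pick the first storage action of the best sequence over the horizon.

    soc: current state of charge; actions: charge (-1), idle (0), discharge (+1).
    Cost is the load deviation the diesel generator must cover each step.
    """
    best, best_cost = None, float("inf")
    for seq in itertools.product(actions, repeat=len(horizon_loads) + 1):
        s, cost = soc, 0.0
        for u, load in zip(seq, [net_load] + list(horizon_loads)):
            s = min(max(s + u, 0.0), 10.0)   # enforce state-of-charge limits
            cost += abs(load - u)            # deviation left for the diesel
        if cost < best_cost:
            best, best_cost = seq[0], cost
    return best
```

With a steady 1.0 net load and sufficient charge, the controller discharges 1.0 each step, exactly covering the load; real MPC replaces the enumeration with an optimization solver but keeps the same apply-first-move, re-plan loop.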
OPF incorporating load models maximizing net revenue. [Optimal Power Flow
Dias, L.G.; El-Hawary, M.E. . Dept. of Electrical Engineering)
1993-02-01
Studies of the effects of load modeling in optimal power flow (OPF) studies using minimum-cost and minimum-loss objectives reveal that a main disadvantage of cost minimization is the reduction of the objective via a reduction in the power demand. This inevitably lowers the total revenue and, in most cases, reduces net revenue as well. An alternative approach for incorporating load models in security-constrained OPF (SCOPF) studies apparently avoids reducing the total power demand for the intact system, but reduces the voltages. A study of the behavior of conventional OPF solutions in the presence of loads not controlled by ULTCs shows that this results in a reduction of the total power demand for the intact system. In this paper, the authors propose an objective that avoids the tendency to lower the total power demand, total revenue, and net revenue, both for OPF neglecting contingencies (normal OPF) and for security-constrained OPF. The minimum-cost objective is modified by subtracting the total power demand from the total fuel cost. This is equivalent to maximizing the net revenue.
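The effect of the modified objective can be illustrated with two hypothetical operating points: minimizing cost alone favors the point that sheds demand, while minimizing cost minus demand (i.e., maximizing net revenue, with demand valued at a uniform unit price in this sketch) favors the one that serves it:

```python
# Two invented operating points: A sheds demand to cut cost, B serves it.
points = {
    "A": {"cost": 900.0, "demand": 700.0},   # lower cost, lower demand
    "B": {"cost": 1000.0, "demand": 850.0},  # higher cost, higher demand
}

# Plain cost minimization vs. the modified objective (cost - demand).
by_cost = min(points, key=lambda k: points[k]["cost"])
by_net = min(points, key=lambda k: points[k]["cost"] - points[k]["demand"])
```

Point A wins on cost alone (900 vs. 1000), but B wins on the modified objective (150 vs. 200), reproducing in miniature the paper's argument that subtracting demand from fuel cost removes the incentive to lower the load.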
Applying the Battery Ownership Model in Pursuit of Optimal Battery Use Strategies (Presentation)
Neubauer, J.; Ahmad, P.; Brooker, A.; Wood, E.; Smith, K.; Johnson, C.; Mendelsohn, M.
2012-05-01
This Annual Merit Review presentation describes the application of the Battery Ownership Model for strategies for optimal battery use in electric drive vehicles (PEVs, PHEVs, and BEVs).
Optimal SCR Control Using Data-Driven Models
Stevens, Andrew J.; Sun, Yannan; Lian, Jianming; Devarakonda, Maruthi N.; Parker, Gordon
2013-04-16
We present an optimal control solution for urea injection in a heavy-duty diesel (HDD) selective catalytic reduction (SCR) system. The approach taken here is useful beyond SCR and could be applied to any system where a control strategy is desired and input-output data are available. For example, the strategy could also be used for the diesel oxidation catalyst (DOC) system. In this paper, we identify and validate a one-step-ahead Kalman state-space estimator for downstream NOx using bench reactor data from an SCR core sample. The test data were acquired using a 2010 Cummins 6.7L ISB production engine with a 2010 Cummins production aftertreatment system. We used a surrogate HDD federal test procedure (FTP), developed at Michigan Technological University (MTU), which simulates the representative transients of the standard FTP cycle but has fewer engine speed/load points. The identified state-space model is then used to develop a tunable cost function that simultaneously minimizes NOx emissions and urea usage. The cost function is quadratic and univariate, so the minimum can be computed analytically. We show the performance of the closed-loop controller using a reduced-order discrete SCR simulator developed at MTU. Our experiments with the surrogate HDD-FTP data show that the strategy developed in this paper can be used to identify performance bounds for urea dose controllers.
Continuously Optimized Reliable Energy (CORE) Microgrid: Models & Tools (Fact Sheet)
Not Available
2013-07-01
This brochure describes Continuously Optimized Reliable Energy (CORE), a trademarked process NREL employs to produce conceptual microgrid designs. This systems-based process enables designs to be optimized for economic value, energy surety, and sustainability. Capabilities NREL offers in support of microgrid design are explained.
Oneida Tribe of Indians of Wisconsin Energy Optimization Model
Troge, Michael
2014-12-01
Oneida Nation is located in northeast Wisconsin. The reservation is approximately 96 square miles (8 miles x 12 miles), or 65,000 acres. The greater Green Bay area is east of and adjacent to the reservation. A county line roughly splits the reservation in half; the west half is in Outagamie County and the east half is in Brown County. Land use is predominantly agricultural on the west 2/3 and suburban on the east 1/3 of the reservation. Nearly 5,000 tribally enrolled members live on the reservation, out of a total population of about 21,000. Tribal ownership is scattered across the reservation and totals about 23,000 acres. Currently, the Oneida Tribe of Indians of Wisconsin (OTIW) community members and facilities receive the vast majority of electrical and natural gas services from two of the largest investor-owned utilities in the state, WE Energies and Wisconsin Public Service. All urban and suburban buildings have access to natural gas. About 15% of the population and five Tribal facilities are in rural locations and therefore use propane as a primary heating fuel. Wood and oil are also used as primary or supplemental heat sources by a small percentage of the population. Very few renewable energy systems for generating electricity and heat have been installed on the Oneida Reservation. This project was an effort to develop a reasonable renewable energy portfolio that will help Oneida take a leadership role in developing a clean energy economy. The Energy Optimization Model (EOM) is an exploration of the energy opportunities available to the Tribe; it is intended to provide a decision framework that allows the Tribe to make the wisest choices in energy investment, with an organizational desire to establish a renewable portfolio standard (RPS).
Diwekar, U.; Shastri, Y.; Subramanayan, K.; Zitney, S.
2007-01-01
APECS (Advanced Process Engineering Co-Simulator) is an integrated software suite that combines the power of process simulation with high-fidelity computational fluid dynamics (CFD) for improved design, analysis, and optimization of process engineering systems. The APECS system uses commercial process simulation (e.g., Aspen Plus) and CFD (e.g., FLUENT) software integrated through the process-industry standard CAPE-OPEN (CO) interfaces. This breakthrough capability allows engineers to better understand and optimize the fluid mechanics that drive overall power plant performance and efficiency. The focus of this paper is the CAPE-OPEN compliant stochastic modeling and reduced-order model computational capability built around the APECS system. The usefulness of these capabilities is illustrated with a coal-fired, gasification-based FutureGen power plant simulation, where they are used to generate efficient reduced-order models and to optimize model complexity.
High-throughput generation, optimization and analysis of genome-scale metabolic models.
Henry, C. S.; DeJongh, M.; Best, A. A.; Frybarger, P. M.; Linsay, B.; Stevens, R. L.
2010-09-01
Genome-scale metabolic models have proven to be valuable for predicting organism phenotypes from genotypes. Yet efforts to develop new models are failing to keep pace with genome sequencing. To address this problem, we introduce the Model SEED, a web-based resource for high-throughput generation, optimization and analysis of genome-scale metabolic models. The Model SEED integrates existing methods and introduces techniques to automate nearly every step of this process, taking {approx}48 h to reconstruct a metabolic model from an assembled genome sequence. We apply this resource to generate 130 genome-scale metabolic models representing a taxonomically diverse set of bacteria. Twenty-two of the models were validated against available gene essentiality and Biolog data, with the average model accuracy determined to be 66% before optimization and 87% after optimization.
Building Restoration Operations Optimization Model Beta Version 1.0
Energy Science and Technology Software Center (OSTI)
2007-05-31
The Building Restoration Operations Optimization Model (BROOM), developed by Sandia National Laboratories, is a software product designed to aid in the restoration of large facilities contaminated by a biological material. BROOM's integrated data collection, data management, and visualization software improves the efficiency of cleanup operations, minimizes facility downtime, and provides a transparent basis for reopening the facility. Secure remote access to building floor plans: Floor plan drawings and knowledge of the HVAC system are critical to the design and implementation of effective sampling plans. In large facilities, access to these data may be complicated by their sheer abundance and the disorganized state in which they are often stored. BROOM avoids potentially costly delays by providing a means of organizing and storing mechanical and floor plan drawings in a secure remote database that is easily accessed. Sampling design tools: BROOM provides an array of tools to answer the question of where to sample and how many samples to take. In addition to simple judgmental and random sampling plans, the software includes two sophisticated methods of adaptively developing a sampling strategy. Both tools strive to choose sampling locations that best satisfy a specified objective (e.g., minimizing kriging variance) but use numerically different strategies to do so. Surface samples are collected early in the restoration process to characterize the extent of contamination and then again later to verify that the facility is safe to reenter. BROOM supports sample collection using a ruggedized PDA equipped with a barcode scanner and laser range finder. The PDA displays building floor drawings, sampling plans, and electronic forms for data entry. Barcodes are placed on sample containers for the purpose of tracking the specimen and linking acquisition data (e.g., location, surface type, texture) to laboratory results. Sample location is determined by activating the integrated
An Optimization Model for Plug-In Hybrid Electric Vehicles
Malikopoulos, Andreas; Smith, David E
2011-01-01
The necessity for environmentally conscious vehicle designs, in conjunction with increasing concerns regarding U.S. dependency on foreign oil and climate change, has induced significant investment toward enhancing the propulsion portfolio with new technologies. More recently, plug-in hybrid electric vehicles (PHEVs) have held great intuitive appeal and have attracted considerable attention. PHEVs have the potential to reduce petroleum consumption and greenhouse gas (GHG) emissions in the commercial transportation sector. They are especially appealing in situations where daily commuting covers a small number of miles with excessive stop-and-go driving. The research effort outlined in this paper aims to investigate the implications of motor/generator and battery size on fuel economy and GHG emissions in a medium-duty PHEV. An optimization framework is developed and applied to two different parallel powertrain configurations, namely pre-transmission and post-transmission, to derive the optimal design with respect to motor/generator and battery size. A comparison between the conventional and PHEV configurations with equivalent size and performance under the same driving conditions is conducted, allowing an assessment of the potential improvement in fuel economy and GHG emissions. The post-transmission parallel configuration yields higher fuel economy and lower GHG emissions than the pre-transmission configuration, partly attributable to its enhanced regenerative braking efficiency.
Optimization of Depletion Modeling and Simulation for the High...
Office of Scientific and Technical Information (OSTI)
for the high-fidelity modeling and simulation of the ... Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method, ...
Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional- to global-scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization than with parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
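The single-site vs. cross-site trade-off above can be sketched with a deliberately tiny stand-in model. This is illustrative only (synthetic numbers, not CFLUX): one light-use-efficiency parameter is fitted either per site or pooled across sites, and the fits are compared on a site they must generalize to.

```python
# Illustrative sketch only (synthetic data, not the CFLUX model): fit a single
# light-use-efficiency parameter eps in flux = eps * (FPAR * PAR), either at
# one site or pooled across sites, then compare cross-site prediction error.
import math
import random

random.seed(0)

def make_site(eps_true, n=50):
    xs = [random.uniform(5, 25) for _ in range(n)]        # FPAR*PAR driver
    ys = [eps_true * x + random.gauss(0, 1) for x in xs]  # noisy "observed" flux
    return xs, ys

def fit_eps(xs, ys):
    # Closed-form least squares for the no-intercept model y ~ eps * x
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

def rmse(eps, xs, ys):
    return math.sqrt(sum((y - eps * x) ** 2 for x, y in zip(xs, ys)) / len(xs))

site_a, site_b = make_site(0.45), make_site(0.55)  # sites differ in true eps
eps_a = fit_eps(*site_a)                           # single-site fit at A
eps_pooled = fit_eps(site_a[0] + site_b[0], site_a[1] + site_b[1])

# Parameters fitted only at site A transfer worse to site B than pooled ones
print(rmse(eps_a, *site_b), rmse(eps_pooled, *site_b))
```

The pooled parameter splits the difference between the sites, so its cross-site error is smaller, mirroring the abstract's finding that cross-site optimization generalizes better within a biome.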
Optimal bispectrum constraints on single-field models of inflation
Anderson, Gemma J.; Regan, Donough (D.Regan@sussex.ac.uk); Seery, David
2014-07-01
We use WMAP 9-year bispectrum data to constrain the free parameters of an 'effective field theory' describing fluctuations in single-field inflation. The Lagrangian of the theory contains a finite number of operators associated with unknown mass scales. Each operator produces a fixed bispectrum shape, which we decompose into partial waves in order to construct a likelihood function. Based on this likelihood we are able to constrain four linearly independent combinations of the mass scales. As an example of our framework we specialize our results to the case of 'Dirac-Born-Infeld' and 'ghost' inflation and obtain the posterior probability for each model, which in Bayesian schemes is a useful tool for model comparison. Our results suggest that DBI-like models with two or more free parameters are disfavoured by the data by comparison with single-parameter models in the same class.
A system-level cost-of-energy wind farm layout optimization with landowner modeling
Chen, Le [Ames Laboratory; MacDonald, Erin [Ames Laboratory
2013-10-01
This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming a continuous piece of land is available for the wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally sized square land plots and unequal rectangular land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability.
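The binary-string encoding of landowner participation lends itself to a small worked example. The sketch below uses made-up cost numbers (not the NREL/LBNL/Windustry model): each bit says whether a land plot participates, and an exhaustive search finds the participation pattern with the lowest cost of energy.

```python
# Simplified sketch (hypothetical costs, not the paper's model): landowner
# participation encoded as a binary string, exhaustively searched for the
# lowest cost of energy (COE).
from itertools import product

plots = [  # (annual MWh per turbine, turbines that fit, landowner fee $/yr)
    (6000, 2, 8000),
    (6500, 3, 15000),
    (5500, 1, 3000),
    (7000, 2, 20000),
]
FIXED_COST = 250000.0   # assumed annual farm-level fixed cost, $/yr
PER_TURBINE = 90000.0   # assumed annualized cost per turbine, $/yr

def coe(participation):
    """Cost of energy ($/MWh) for one binary participation pattern."""
    energy = cost = 0.0
    for used, (mwh, n_turbines, fee) in zip(participation, plots):
        if used:
            energy += mwh * n_turbines
            cost += fee + PER_TURBINE * n_turbines
    if energy == 0:
        return float("inf")
    return (FIXED_COST + cost) / energy

best = min(product((0, 1), repeat=len(plots)), key=coe)
print(best, round(coe(best), 2))  # (1, 1, 1, 1) 19.92
```

With these numbers every plot's marginal cost per MWh is below the farm-wide COE, so full participation wins; changing the fee on one plot can flip its bit, which is the kind of sensitivity the paper's model exposes to developers.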
THE APPLICATION OF AN EVOLUTIONARY ALGORITHM TO THE OPTIMIZATION OF A MESOSCALE METEOROLOGICAL MODEL
Werth, D.; O'Steen, L.
2008-02-11
We show that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root mean square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). We find that the optimization can be efficient with relatively modest computer resources; thus, operational implementation is possible. The optimization efficiency, however, is found to depend strongly on the procedure used to perturb the 'child' parameters relative to their 'parents' within the evolutionary algorithm. In addition, the meteorological variables included in the rms error, and their weighting, are found to be an important factor with respect to finding the global optimum.
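The evolutionary loop described above can be reduced to a few lines. This is a minimal (1+lambda)-style sketch, not RAMS: the "synthetic observations" are generated by a hidden target parameter vector, and the cost is the rms error of a candidate against that target. The mutation scale `sigma` plays the role of the child-perturbation procedure the abstract says the efficiency depends on.

```python
# Minimal elitist evolutionary sketch (toy cost, not RAMS): minimize rms error
# between candidate parameters and a hidden target that stands in for the
# parameters behind the synthetic observations.
import math
import random

random.seed(1)
TARGET = [0.3, -1.2, 2.5]  # hidden "true" parameters (toy, 3-D not 23-D)

def rms_error(params):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(params, TARGET)) / len(TARGET))

def evolve(generations=200, children=10, sigma=0.3):
    # sigma is the child-perturbation scale; the abstract notes efficiency
    # depends strongly on how this perturbation is done.
    parent = [random.uniform(-3, 3) for _ in TARGET]
    for _ in range(generations):
        brood = [[p + random.gauss(0, sigma) for p in parent] for _ in range(children)]
        best_child = min(brood, key=rms_error)
        if rms_error(best_child) < rms_error(parent):  # elitist selection
            parent = best_child
    return parent

found = evolve()
print(rms_error(found))
```

With a fixed sigma the search stalls at a sigma-scale neighborhood of the optimum; adaptive perturbation schedules are the usual remedy, consistent with the sensitivity the paper reports.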
A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation
Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin
2016-01-01
This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost and demand charge, but also several performance indices, including voltage deviation, network power loss and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries, considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear program (MILP). Numerical simulation results show the effectiveness of the proposed model.
Optimization of large-scale heterogeneous system-of-systems models.
Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.
2012-01-01
Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.
Optimization of a Two-Fluid Hydrodynamic Model of Churn-Turbulent Flow
Donna Post Guillen
2009-07-01
A hydrodynamic model of two-phase, churn-turbulent flows is being developed using the computational multiphase fluid dynamics (CMFD) code NPHASE-CMFD. The numerical solutions obtained by this model are compared with experimental data obtained at the TOPFLOW facility of the Institute of Safety Research at the Forschungszentrum Dresden-Rossendorf. The TOPFLOW data is a high-quality experimental database of upward, co-current air-water flows in a vertical pipe suitable for validation of computational fluid dynamics (CFD) codes. A five-field CMFD model was developed for the continuous liquid phase and four bubble size groups using mechanistic closure models for the ensemble-averaged Navier-Stokes equations. Mechanistic models for the drag and non-drag interfacial forces are implemented to capture the governing physics describing the hydrodynamic forces that control the gas distribution. The closure models provide the functional form of the interfacial forces, with user-defined coefficients to adjust the force magnitude. An optimization strategy was devised for these coefficients using commercial design optimization software. This paper demonstrates an approach to calibrating CMFD model parameters using design optimization. Computed radial void fraction profiles predicted by the NPHASE-CMFD code are compared to experimental data for four bubble size groups.
Simulation and optimization of pressure swing adsorption systems using reduced-order modeling
Agarwal, A.; Biegler, L.; Zitney, S.
2009-01-01
Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generate cost-efficient low-order models which can be used as surrogate models in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), which is a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, thus replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step, with different POD modes for each of them. A significant reduction in the number of states has been achieved. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate that the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization purposes.
Optimization of ultrasonic array inspections using an efficient hybrid model and real crack shapes
Felice, Maria V.; Velichko, Alexander; Wilcox, Paul D.; Barden, Tim; Dunhill, Tony
2015-03-31
Models which simulate the interaction of ultrasound with cracks can be used to optimize ultrasonic array inspections, but this approach can be time-consuming. To overcome this issue, an efficient hybrid model is implemented which includes a finite element method that requires only a single layer of elements around the crack shape. Scattering matrices are used to capture the scattering behavior of the individual cracks, and a discussion of the angular degrees of freedom of elastodynamic scatterers is included. Real crack shapes are obtained from X-ray computed tomography images of cracked parts, and these shapes are input into the hybrid model. The effect of using real crack shapes instead of straight notch shapes is demonstrated. An array optimization methodology which incorporates the hybrid model, an approximate single-scattering relative noise model and the real crack shapes is then described.
Suthar, B; Northrop, PWC; Braatz, RD; Subramanian, VR
2014-07-30
This paper illustrates the application of dynamic optimization in obtaining the optimal current profile for charging a lithium-ion battery while restricting the intercalation-induced stresses to a pre-determined limit estimated using a pseudo two-dimensional (P2D) model. This paper focuses on the problem of maximizing the charge stored in a given time while restricting capacity fade due to intercalation-induced stresses. Conventional charging profiles for lithium-ion batteries (e.g., constant current followed by constant voltage, or CC-CV) are not derived by considering capacity fade mechanisms; they are not only inefficient in terms of lifetime usage of the batteries but are also slower because they do not take into account the changing dynamics of the system.
Equation-based languages – A new paradigm for building energy modeling, simulation and optimization
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.
2016-04-01
Most of the state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper and explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. Exploiting the equation-based language led to a 2,200 times faster solution.
Reduced order model based on principal component analysis for process simulation and optimization
Lang, Y.; Malacina, A.; Biegler, L.; Munteanu, S.; Madsen, J.; Zitney, S.
2009-01-01
It is well known that distributed-parameter computational fluid dynamics (CFD) models provide more accurate results than the conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced-order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. By considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and that the CPU time is significantly reduced: typically, it takes at most several CPU seconds to evaluate the ROM, compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, demonstrate the benefits of the proposed ROM methodology for process simulation and optimization.
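The core of a PCA-based ROM is the observation that high-dimensional outputs often vary along a few dominant directions. The bare-bones sketch below uses toy two-dimensional "snapshots" (not Aspen Plus/FLUENT output) and finds the first principal mode by power iteration, then represents each snapshot by a single coefficient along it.

```python
# Bare-bones sketch of the idea behind a PCA reduced-order model (toy data):
# collect output snapshots, extract the dominant principal direction with
# power iteration, and reduce each snapshot to one coefficient.
import math
import random

random.seed(2)

# Synthetic snapshots that vary mostly along one hidden direction
direction = [0.6, 0.8]
snapshots = []
for _ in range(200):
    amp = random.gauss(0, 2.0)                        # dominant mode amplitude
    noise = [random.gauss(0, 0.05) for _ in direction]
    snapshots.append([amp * d + n for d, n in zip(direction, noise)])

# 2x2 covariance matrix of the (zero-mean) snapshot ensemble
n = len(snapshots)
cov = [[sum(s[i] * s[j] for s in snapshots) / n for j in range(2)] for i in range(2)]

# Power iteration for the dominant eigenvector (the first PCA/POD mode)
v = [1.0, 1.0]
for _ in range(100):
    w = [cov[i][0] * v[0] + cov[i][1] * v[1] for i in range(2)]
    norm = math.hypot(*w)
    v = [x / norm for x in w]

# Reduced model: each snapshot ~ coefficient * v (a 2-D state becomes 1-D)
coeffs = [s[0] * v[0] + s[1] * v[1] for s in snapshots]
recon_err = max(
    math.hypot(s[0] - c * v[0], s[1] - c * v[1]) for s, c in zip(snapshots, coeffs)
)
print(recon_err)  # residual left after keeping only one mode
```

Evaluating the one-coefficient representation is trivially cheap, which is the ROM-vs-CFD speedup the abstract quantifies; real ROMs keep as many modes as the snapshot variance requires.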
Model Predictive Control-based Optimal Coordination of Distributed Energy Resources
Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming; Elizondo, Marcelo A.
2013-01-07
Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.
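The look-ahead dispatch loop described above can be sketched with a toy receding-horizon controller. This is not the paper's model: the net load, discrete diesel setpoints, and battery bounds are all assumed, and the short-horizon subproblem is solved by brute-force enumeration rather than a real optimizer.

```python
# Toy receding-horizon (MPC-style) dispatch sketch with assumed numbers:
# at each step, enumerate discrete diesel setpoints over a short look-ahead,
# apply only the first action, then roll the horizon forward.
from itertools import product

net_load = [3, 5, 2, 6, 4, 1]   # assumed load minus wind, kW per step
DIESEL_LEVELS = (0, 2, 4, 6)    # discrete diesel setpoints, kW
FUEL_COST = 1.0                 # $/kWh of diesel energy
CAP = 5.0                       # battery capacity, kWh

def plan_cost(soc, demands, plan):
    """Fuel cost of a candidate plan; inf if the battery goes out of bounds."""
    cost = 0.0
    for d, gen in zip(demands, plan):
        soc += gen - d           # the battery buffers the mismatch
        if not 0.0 <= soc <= CAP:
            return float("inf")
        cost += FUEL_COST * gen
    return cost

def mpc_dispatch(horizon=3, soc=2.5):
    schedule = []
    for t in range(len(net_load)):
        window = net_load[t:t + horizon]
        best = min(product(DIESEL_LEVELS, repeat=len(window)),
                   key=lambda plan: plan_cost(soc, window, plan))
        gen = best[0]            # apply only the first move (receding horizon)
        soc += gen - net_load[t]
        schedule.append(gen)
    return schedule, soc

schedule, final_soc = mpc_dispatch()
print(schedule, final_soc)
```

Re-solving at every step is what lets MPC absorb forecast errors; the open-loop comparison in the paper corresponds to solving once over the whole horizon and never correcting.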
Array Optimization for Tidal Energy Extraction in a Tidal Channel: A Numerical Modeling Analysis
Yang, Zhaoqing; Wang, Taiping; Copping, Andrea
2014-04-18
This paper presents an application of a hydrodynamic model to simulate tidal energy extraction in a tidally dominated estuary on the Pacific Northwest coast. A series of numerical experiments were carried out to simulate tidal energy extraction with different turbine array configurations, including location, spacing and array size. Preliminary model results suggest that array optimization for tidal energy extraction at a real-world site is a very complex process that requires consideration of multiple factors. Numerical models can be used effectively to assist turbine siting and array arrangement in a tidal turbine farm for tidal energy extraction.
Observations on the Optimality Tolerance in the CAISO 33% RPS Model
Yao, Y; Meyers, C; Schmidt, A; Smith, S; Streitz, F
2011-09-22
In 2008 Governor Schwarzenegger of California issued an executive order requiring that 33 percent of all electricity in the state in the year 2020 come from renewable resources such as wind, solar, geothermal, biomass, and small hydroelectric facilities. This 33% renewable portfolio standard (RPS) was further codified and signed into law by Governor Brown in 2011. To assess the market impacts of such a requirement, the California Public Utilities Commission (CPUC) initiated a study to quantify the cost, risk, and timing of achieving a 33% RPS by 2020. The California Independent System Operator (CAISO) was contracted to manage this study. The production simulation model used in this study was developed using the PLEXOS software package, which allows energy planners to optimize long-term system planning decisions under a wide variety of system constraints. In this note we describe our observations on varying the optimality tolerance in the CAISO 33% RPS model. In particular, we observe that relaxing the optimality tolerance from 0.05% to 0.5% yields solutions over 5 times faster, on average, while producing very similar solutions with a negligible difference in overall distance from optimality.
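What the optimality tolerance controls can be made concrete with a small numeric example (made-up objective values, not PLEXOS output): a branch-and-bound solver may stop as soon as the relative gap between the best known solution and the best proven bound drops below the tolerance.

```python
# Numeric illustration of the optimality tolerance (relative MIP gap),
# using hypothetical objective values rather than CAISO model output.

def relative_gap(incumbent, best_bound):
    """Relative gap for a minimization problem: how far the incumbent might
    still be from the true optimum, as a fraction of the incumbent."""
    return (incumbent - best_bound) / abs(incumbent)

incumbent, bound = 1_000_000.0, 999_100.0  # hypothetical $ objective values
gap = relative_gap(incumbent, bound)
print(gap)  # 0.0009, i.e. 0.09%

# A 0.05% tolerance forces more branching; a 0.5% tolerance accepts this
# solution immediately, which is the speedup the note observes.
assert not gap <= 0.0005   # keeps searching at the tight tolerance
assert gap <= 0.005        # stops at the loose tolerance
```

Loosening the tolerance trades a bounded amount of solution quality for solve time, which is why the observed solutions remain very similar.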
Hour-by-Hour Cost Modeling of Optimized Central Wind-Based Water Electrolysis Production
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Hour-by-Hour Cost Modeling of Optimized Central Wind-Based Water Electrolysis Production. Genevieve Saur (PI), Chris Ainscough (Presenter), Kevin Harrison, Todd Ramsden, National Renewable Energy Laboratory, January 17, 2013. This presentation does not contain any proprietary, confidential, or otherwise restricted information. Acknowledgements: This work was made possible by support from the U.S. Department of Energy's Fuel Cell Technologies Office within the Office of Energy Efficiency and
Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant
Kumar, Rajeeva; Kumar, Aditya; Dai, Dan; Seenumani, Gayathri; Down, John; Lopez, Rodrigo
2012-12-31
This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location and number of sensors used in a network for online condition monitoring. In particular, the focus of this work is to develop software tools for optimal sensor placement (OSP) and to use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing system design for online condition monitoring in a broad range of applications. The overall approach consists of (i) defining condition monitoring requirements in terms of OSP and mapping these requirements into mathematical terms for the OSP algorithm, (ii) analyzing trade-offs among alternative OSP algorithms, down-selecting the most relevant ones and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithms to design the optimal sensor network required for condition monitoring of an IGCC gasifier refractory and RSC fouling. Two key requirements for OSP for condition monitoring are the desired precision for the monitoring variables (e.g., refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem where the key requirements of precision and reliability are imposed as constraints. The optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as being relevant to OSP for condition monitoring: one based on an LMI formulation and the other a standard INLP formulation. Various algorithms to solve
Dynamic optimization model of energy related economic planning and development for the Navajo nation
Beladi, S.A.
1983-01-01
The Navajo reservation, located in portions of Arizona, New Mexico, and Utah, is rich in low-sulfur coal deposits ideal for strip-mining operations. The Navajo Nation has been leasing the mineral resources to non-Indian enterprises for purposes of extraction. Since the early 1950s the Navajo Nation has entered into extensive coal leases with several large companies and utilities, and contracts have committed huge quantities of Navajo coal for mining. This research was directed at evaluating the shadow prices of Navajo coal and identifying optimal coal extraction. An economic model of coal resource extraction over time was structured within an optimal control theory framework, with the control problem formulated as a discrete dynamic optimization problem. A comparison of the shadow prices of coal deposits derived from the dynamic model with the royalty payments the tribe receives on the basis of the present long-term lease contracts indicates that, in most cases, the tribe is paid considerably less than the royalty projected by the model. Part of these discrepancies may be explained by the low coal demand at the time of leasing and by greater uncertainties with respect to the geologic information and other risks associated with mining operations. However, changes in the demand for coal with rigidly fixed royalty rates will lead to non-optimal extraction of coal. A corrective tax scheme is suggested on the basis of the results of this research: the proposed tax per unit of coal shipped from a site is the difference between the shadow price and the present royalty rate. The estimated tax rates over time are derived.
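The discrete dynamic optimization framing above, with the shadow price as the marginal value of an extra unit of resource in the ground, can be illustrated with a tiny dynamic program. The profit function and all numbers are hypothetical and much simpler than the dissertation's model:

```python
from functools import lru_cache

PRICE, COST, DISCOUNT = 10.0, 1.0, 0.9   # hypothetical economic parameters

@lru_cache(maxsize=None)
def extraction_value(stock, periods):
    """Maximum discounted profit from `stock` integer units of resource with
    `periods` left, where extracting q units earns PRICE*q - COST*q**2."""
    if periods == 0 or stock == 0:
        return 0.0
    best = 0.0
    for q in range(stock + 1):                 # extraction this period
        profit = PRICE * q - COST * q ** 2     # concave per-period profit
        best = max(best, profit + DISCOUNT * extraction_value(stock - q, periods - 1))
    return best

def shadow_price(stock, periods):
    """Marginal in-ground value of one extra unit: the quantity the research
    compares against the contractual royalty rate."""
    return extraction_value(stock + 1, periods) - extraction_value(stock, periods)
```

In this toy setting the shadow price is the amount by which the optimal discounted profit rises with one more unit of reserves; the dissertation's corrective tax is the gap between this quantity and the royalty actually paid.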
Hyperparameter Optimization. In machine learning, parameters are the values that describe a model and are usually chosen by a learning algorithm. Hyperparameters, on the other hand, are parameters of the learning algorithm itself. The process of searching for the best hyperparameters for a machine learning algorithm is called hyperparameter optimization. Several pieces of software support hyperparameter optimization, including single-node scikit-learn grid search and random search.
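Grid search, the simplest hyperparameter optimization strategy mentioned above, just evaluates every combination of candidate values. A minimal pure-Python sketch (the toy score function and the parameter names `lr` and `depth` are hypothetical stand-ins for a real learner's cross-validated score):

```python
from itertools import product

def grid_search(score_fn, grid):
    """Evaluate every hyperparameter combination in `grid` and return the
    best-scoring one (higher score is better)."""
    keys = list(grid)
    best_params, best_score = None, float("-inf")
    for values in product(*(grid[k] for k in keys)):
        params = dict(zip(keys, values))
        score = score_fn(**params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Toy "validation score" with a known optimum at lr=0.1, depth=3:
score = lambda lr, depth: -(lr - 0.1) ** 2 - (depth - 3) ** 2
best, _ = grid_search(score, {"lr": [0.01, 0.1, 1.0], "depth": [1, 3, 5]})
```

Random search follows the same pattern but samples combinations instead of enumerating them, which scales better when only a few hyperparameters matter.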
Optimizing the transverse thermal conductivity of 2D-SiCf/SiC composites, I. Modeling
Youngblood, Gerald E.; Senor, David J.; Jones, Russell H.
2002-12-31
For potential fusion applications, considerable fabrication efforts have been directed to obtaining transverse thermal conductivity (Keff) values in excess of 30 W/mK (unirradiated) in the 800-1000°C temperature range for 2D-SiCf/SiC composites. To gain insight into the factors affecting Keff, at PNNL we have tested three different analytic models for predicting Keff in terms of constituent (fiber, matrix and interphase) properties. The tested models were: the Hasselman-Johnson (H-J) “2-Cylinder” model, which examines the effects of fiber-matrix (f/m) thermal barriers; the Markworth “3-Cylinder” model, which specifically examines the effects of interphase thickness and thermal conductivity; and a newly-developed Anisotropic “3-Square” model, which examines the potential effect of introducing a fiber coating with anisotropic properties to enhance (or diminish) f/m thermal coupling. The first two models are effective medium models, while the third model is a simple combination of parallel and series conductances. Model predictions suggest specific designs and/or development efforts directed to optimize the overall thermal transport performance of 2D-SiCf/SiC.
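The third model above is described as a simple combination of parallel and series conductances. The two building blocks of such a model can be sketched directly; the constituent conductivities and volume fractions below are hypothetical, not PNNL's measured values:

```python
def series_k(ks, fracs):
    """Effective conductivity of layers in series: thermal resistances add,
    weighted by volume (thickness) fraction."""
    return 1.0 / sum(f / k for k, f in zip(ks, fracs))

def parallel_k(ks, fracs):
    """Effective conductivity of slabs in parallel: conductances add."""
    return sum(f * k for k, f in zip(ks, fracs))

# Hypothetical fiber/interphase/matrix conductivities (W/mK) and fractions:
ks, fracs = (20.0, 2.0, 60.0), (0.4, 0.1, 0.5)
```

The series value is always the lower bound and the parallel value the upper bound on the true effective conductivity, which is why a low-conductivity interphase (the 2 W/mK layer here) dominates transverse transport even at a small volume fraction.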
Development of an entrained flow gasifier model for process optimization study
Biagini, E.; Bardi, A.; Pannocchia, G.; Tognotti, L.
2009-10-15
Coal gasification is a versatile process for converting a solid fuel into syngas, which can be further converted and separated into hydrogen, a valuable and environmentally acceptable energy carrier. Different technologies (fixed beds, fluidized beds, entrained flow reactors) are used, operating under different conditions of temperature, pressure, and residence time. Process studies should be performed to define the best plant configurations and operating conditions. Although 'gasification models' simulating equilibrium reactors can be found in the literature, a more detailed approach is required for process analysis and optimization procedures. In this work, a gasifier model is developed using AspenPlus as a tool to be implemented in a comprehensive process model for the production of hydrogen via coal gasification. It is developed as a multizonal model by interconnecting each step of gasification (preheating, devolatilization, combustion, gasification, quench) according to the reactor configuration, that is, an entrained flow reactor. The model removes the equilibrium hypothesis by introducing the kinetics of all steps, and solves the heat balance by relating the gasification temperature to the operating conditions. The model predicts the syngas composition as well as quantifying the heat recovery (for calculating the plant efficiency), byproducts, and residual char. Finally, in view of future work, the development of a 'gasifier model' instead of a 'gasification model' will allow different reactor configurations to be compared.
Tessier, Tracey E.; Caves, Carlton M.; Deutsch, Ivan H.; Eastin, Bryan; Bacon, Dave
2005-09-15
We present a model, motivated by the criterion of reality put forward by Einstein, Podolsky, and Rosen and supplemented by classical communication, which correctly reproduces the quantum-mechanical predictions for measurements of all products of Pauli operators on an n-qubit GHZ state (or 'cat state'). The n-2 bits employed by our model are shown to be optimal for the allowed set of measurements, demonstrating that the required communication overhead scales linearly with n. We formulate a connection between the generation of the local values utilized by our model and the stabilizer formalism, which leads us to conjecture that a generalization of this method will shed light on the content of the Gottesman-Knill theorem.
A Mathematical Tumor Model with Immune Resistance and Drug Therapy: An Optimal Control Approach
De Pillis, L. G.; Radunskaya, A.
2001-01-01
We present a competition model of cancer tumor growth that includes both the immune system response and drug therapy. This is a four-population model that includes tumor cells, host cells, immune cells, and drug interaction. We analyze the stability of the drug-free equilibria with respect to the immune response in order to look for target basins of attraction. One of our goals was to simulate qualitatively the asynchronous tumor-drug interaction known as "Jeff's phenomenon." The model we develop is successful in generating this asynchronous response behavior. Our other goal was to identify treatment protocols that could improve standard pulsed chemotherapy regimens. Using optimal control theory with constraints and numerical simulations, we obtain new therapy protocols that we then compare with traditional pulsed periodic treatment. The optimal-control-generated therapies produce larger oscillations in the tumor population over time. However, by the end of the treatment period, total tumor size is smaller than that achieved through traditional pulsed therapy, and the normal cell population suffers nearly no oscillations.
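Competition models of this kind are systems of coupled nonlinear ODEs. A forward-Euler sketch of a two-population predator-prey-style competition system, a deliberately simplified stand-in for the paper's four-population tumor/host/immune/drug model, with all coefficients hypothetical:

```python
def simulate(T0, I0, dt=0.001, steps=20000):
    """Integrate a toy tumor (T) / immune (I) competition system with
    forward Euler: logistic tumor growth minus an immune kill term, and
    immune recruitment proportional to tumor burden minus natural decay."""
    T, I = T0, I0
    for _ in range(steps):
        dT = 0.5 * T * (1.0 - T) - 0.3 * T * I   # growth minus kill
        dI = 0.2 * T * I - 0.1 * I               # recruitment minus decay
        T, I = T + dt * dT, I + dt * dI
    return T, I
```

The cross terms (here `0.3*T*I` and `0.2*T*I`) are what couple the populations and produce the oscillatory, asynchronous responses the paper analyzes; adding host cells and a drug state extends the same integration loop to four equations.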
A Technical Review on Biomass Processing: Densification, Preprocessing, Modeling and Optimization
Jaya Shankar Tumuluru; Christopher T. Wright
2010-06-01
It is now a well-acclaimed fact that burning fossil fuels and deforestation are major contributors to climate change. Biomass from plants can serve as an alternative renewable and carbon-neutral raw material for the production of bioenergy. Low densities of 40–60 kg/m3 for lignocellulosic and 200–400 kg/m3 for woody biomass limit their application for energy purposes; prior to use in energy applications these materials need to be densified. Densified biomass can have bulk densities over 10 times that of the raw material, helping to significantly reduce technical limitations associated with storage, loading, and transportation. Pelleting, briquetting, and extrusion processing are commonly used methods for densification. The aim of the present research is to develop a comprehensive review of biomass processing that includes densification, preprocessing, modeling, and optimization. The specific objectives include carrying out a technical review of (a) mechanisms of particle bonding during densification; (b) methods of densification including extrusion, briquetting, pelleting, and agglomeration; (c) effects of process and feedstock variables and biomass biochemical composition on densification; (d) effects of preprocessing such as grinding, preheating, steam explosion, and torrefaction on biomass quality and binding characteristics; (e) models for understanding the compression characteristics; and (f) procedures for response surface modeling and optimization.
Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.
2010-05-04
A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
Oneida Tribe of Indians Energy Optimization Model Development and Energy Audits
Energy Optimization Model Development & Energy Audits, U.S. DOE Tribal Energy Program, 11/14/2012. Overview:
► Reservation size of 65,430 acres (roughly 8 x 12 miles), with Oneida ownership of approximately 24,173 acres
► Membership of 16,877, with 7,360 members living on the Reservation or in the immediate area
► Repurchase and restoration of lands has been a priority since the casino started in 1993
► Suburban sprawl from Green Bay and rising land prices
Energy Team ►
Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope
Quan, Wei; Lv, Lin; Liu, Baiqi
2014-11-15
To improve the atomic spin gyroscope's (ASG) operational accuracy and compensate for the random error caused by the nonlinear and weakly stable character of the random ASG drift, a hybrid random-drift error model is established based on autoregressive (AR) modeling and genetic programming (GP) combined with a genetic algorithm (GA). The time series of the random ASG drift, acquired by analyzing and preprocessing measured ASG data, is taken as the study object. The linear section of the model is established with the AR technique. The nonlinear section is then built with GP, and GA is used to optimize the coefficients of the mathematical expression produced by GP in order to obtain a more accurate model. Simulation results indicate that this hybrid model effectively reflects the characteristics of the ASG's random drift: the square error of the random drift is reduced by 92.40%, and compared with the AR technique alone and the GP + GA technique alone, the random drift is reduced by a further 9.34% and 5.06%, respectively. The hybrid modeling method thus effectively compensates the ASG's random drift and improves the stability of the system.
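The linear (AR) section of such a hybrid model can be illustrated with the simplest case, a least-squares AR(1) fit. This is a minimal sketch of the AR idea only, not the paper's full AR + GP/GA pipeline:

```python
def fit_ar1(series):
    """Least-squares estimate of phi in x[t] = phi * x[t-1] + e[t],
    the one-lag autoregressive model of a drift time series."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

# Recover phi exactly from a noise-free AR(1) sequence:
x = [1.0]
for _ in range(50):
    x.append(0.8 * x[-1])
```

In the paper's scheme, the residual left after the AR section (the part the linear model cannot explain) is what the GP-derived nonlinear expression, with GA-tuned coefficients, is fitted to.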
A dynamic model for the optimization of oscillatory low grade heat engines
Markides, Christos N.; Smith, Thomas C. B.
2015-01-22
The efficiency of a thermodynamic system is a key quantity on which its usefulness and wider application relies. This is especially true for a device that operates with marginal energy sources and close to ambient temperatures. Various definitions of efficiency are available, each of which reveals a certain performance characteristic of a device. Of these, some consider only the thermodynamic cycle undergone by the working fluid, whereas others contain additional information, including relevant internal components of the device that are not part of the thermodynamic cycle. Yet others attempt to factor out the conditions of the surroundings with which the device is interfacing thermally during operation. In this paper we present a simple approach for the modeling of complex oscillatory thermal-fluid systems capable of converting low grade heat into useful work. We apply the approach to the NIFTE, a novel low temperature difference heat utilization technology currently under development. We use the results from the model to calculate various efficiencies and comment on the usefulness of the different definitions in revealing performance characteristics. We show that the approach can be applied to make design optimization decisions, and suggest features for optimal efficiency of the NIFTE.
Gneiding, N.; Zhuromskyy, O.; Peschel, U.; Shamonina, E.
2014-10-28
Metamaterials are comprised of metallic structures with a strong response to incident electromagnetic radiation, such as split ring resonators. The interaction of resonator ensembles with electromagnetic waves can be simulated with finite-difference or finite-element algorithms; however, above a certain ensemble size such simulations become inadmissibly time or memory consuming. Alternatively, a circuit description of metamaterials, a well developed modelling tool at radio and microwave frequencies, makes it possible to significantly increase the simulated ensemble size. This approach can be extended to the IR spectral range with an appropriate set of circuit element parameters accounting for physical effects such as electron inertia and finite conductivity. The model is verified by comparing the coupling coefficients with those obtained from full-wave numerical simulations, and is used to optimize a nano-antenna design with improved radiation characteristics.
Gebraad, P. M. O.; Teeuwisse, F. W.; van Wingerden, J. W.; Fleming, Paul A.; Ruben, S. D.; Marden, J. R.; Pao, L. Y.
2016-01-01
This article presents a wind plant control strategy that optimizes the yaw settings of wind turbines for improved energy production of the whole wind plant by taking into account wake effects. The optimization controller is based on a novel internal parametric model for wake effects, called the FLOw Redirection and Induction in Steady-state (FLORIS) model. The FLORIS model predicts the steady-state wake locations and the effective flow velocities at each turbine, and the resulting turbine electrical energy production levels, as a function of the axial induction and the yaw angle of the different rotors. The FLORIS model has a limited number of parameters that are estimated based on turbine electrical power production data. In high-fidelity computational fluid dynamics simulations of a small wind plant, we demonstrate that the optimization control based on the FLORIS model increases the energy production of the wind plant, with a reduction of loads on the turbines as an additional effect.
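The core trade-off exploited by yaw-based wake steering, sacrificing some upstream power to deflect the wake away from a downstream turbine, can be shown with a toy two-turbine model. This is not the FLORIS model; the cosine-cubed yaw loss and the Gaussian yaw-dependent deficit below are hypothetical stand-ins:

```python
from math import cos, radians, exp

def plant_power(yaw_deg):
    """Normalized total power of a hypothetical two-turbine row. Yawing the
    upstream rotor costs it roughly cos^3(yaw) of its power, but deflects
    the wake so the downstream velocity deficit (toy Gaussian in yaw)
    shrinks; downstream power scales with velocity cubed."""
    up = cos(radians(yaw_deg)) ** 3
    deficit = 0.5 * exp(-(yaw_deg / 15.0) ** 2)   # hypothetical wake model
    down = (1.0 - deficit) ** 3
    return up + down

# Brute-force search over integer yaw angles (degrees):
best_yaw = max(range(0, 41), key=plant_power)
```

Even in this crude sketch the optimum is a nonzero yaw angle: the downstream gain from wake deflection outweighs the upstream cosine loss, which is the effect the FLORIS-based controller optimizes with a parametric wake model fitted to power data.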
Stamp, Jason E.; Eddy, John P.; Jensen, Richard P.; Munoz-Ramos, Karina
2016-01-01
Microgrids are a focus of localized energy production that support resiliency, security, local control, and increased access to renewable resources (among other potential benefits). The Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) Joint Capability Technology Demonstration (JCTD) program between the Department of Defense (DOD), Department of Energy (DOE), and Department of Homeland Security (DHS) resulted in the preliminary design and deployment of three microgrids at military installations. This paper is focused on the analysis process and supporting software used to determine optimal designs for energy surety microgrids (ESMs) in the SPIDERS project. There are two key pieces of software: an existing application developed by Sandia National Laboratories (SNL) called Technology Management Optimization (TMO), and a new simulation developed for SPIDERS called the performance reliability model (PRM). TMO is a decision support tool that performs multi-objective optimization over a mixed discrete/continuous search space for which the performance measures are unrestricted in form. The PRM is able to statistically quantify the performance and reliability of a microgrid operating in islanded mode (disconnected from any utility power source). Together, these two software applications were used as part of the ESM process to generate the preliminary designs presented by the SNL-led DOE team to the DOD. Acknowledgements: Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: Mike Hightower, who has been the key driving force for Energy Surety Microgrids; Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations; Merrill Smith, U.S. Department of Energy SPIDERS Program Manager; Ross Roley and Rich Trundy from U.S. Pacific Command; Bill Waugaman and Bill Beary from U.S. Northern Command; Tarek Abdallah, Melanie
Urniezius, Renaldas
2011-03-14
The principle of maximum relative entropy optimization was analyzed for dead-reckoning localization of a rigid body using observation data collected from two attached accelerometers. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise on each accelerometer axis can be successfully filtered by exploiting both the dependency between channels and the dependency within the time series data: the dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived from the dependency within the time series data. Data from an autocalibration experiment was revisited by removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead-reckoning localization.
Lenhart, S. |; Protopopescu, V.
1994-09-01
Recent years have witnessed a dramatic shift of the world's military, political, and economic paradigm from a bi-polar competitive gridlock to a more fluid, multi-player environment. This change has necessarily been followed by a re-evaluation of strategic thinking and by a reassessment of mutual positions, options, and decisions. The essential attributes of the new situation are modeled by a system of nonlinear evolution equations with competitive/cooperative interactions. The mathematical setting is general enough to accommodate models related to military confrontation, arms control, economic competition, political negotiations, etc. Irrespective of the specific details, all these situations share a common denominator, namely the presence of various players with different and often changing interests and goals. The interests, ranging from conflicting to consensual, are defined in a context of interactions between the players that vary from competitive to cooperative. Players with converging interests tend to build cooperative coalitions, while coalitions with diverging interests usually compete among themselves, but this is not an absolute requirement (one may have groups with converging interests and competitive interactions, and vice versa). Appurtenance to a coalition may change in time according to shifts in one's perceptions, interests, or obligations. During the time evolution, the players try to modify their strategies so as to best achieve their respective goals. An objective functional quantifying the rate of success (payoff) vs. effort (cost) measures the degree of goal attainment for all players involved, thus selecting an optimal strategy based on optimal controls. While the technical details may vary from problem to problem, the general approach described here establishes a standard framework for a host of concrete situations that may arise from tomorrow's "next competition".
A Full Demand Response Model in Co-Optimized Energy and Reserve Market
Liu, Guodong; Tomsovic, Kevin
2014-01-01
It has been widely accepted that demand response will play an important role in the reliable and economic operation of future power systems and electricity markets. Demand response can not only influence prices in the energy market through demand shifting, but also participate in the reserve market. In this paper, we propose a full model of demand response in which demand flexibility is fully utilized through price-responsive shiftable demand bids in the energy market as well as spinning reserve bids in the reserve market. A co-optimized day-ahead energy and spinning reserve market is proposed to minimize the expected net cost under all credible system states, i.e., the expected total cost of operation minus the total benefit of demand, and is solved by mixed integer linear programming. Numerical simulation results on the IEEE Reliability Test System show the effectiveness of this model. Compared to conventional demand shifting bids, the proposed full demand response model can further reduce the committed capacity from generators, the starting up and shutting down of units, and the overall system operating costs.
A mathematical liver model and its application to system optimization and texture analysis
Cargill, E.B.
1989-01-01
This dissertation presents realistic mathematical models of normal and diseased livers and a nuclear medicine camera. The mathematical model of a normal liver is developed by creating a data set of points on the surface of the liver and fitting it to a truncated set of spherical harmonics. We model the depth-dependent MTF of a scintillation camera taking into account the effects of Compton scatter, linear attenuation, intrinsic detector resolution, collimator resolution, and Poisson noise. The differential diagnosis on a liver scan includes normal, focal disease, and diffuse disease. Object classes of normal livers are created by randomly perturbing the spherical harmonic coefficients. Object classes of livers with focal disease are created by introducing cold ellipsoids within the liver volume. Cirrhotic livers are created by modeling the gross morphological changes, heterogeneous uptake, and decreased overall uptake. Simulated nuclear medicine images are made by projecting livers through nuclear imaging systems. The combination of object classes of simulated livers and models of different imaging systems is applied to imaging-system design optimization in a psychophysical study. Human observer performance on simulated liver images made on nine different systems is compared to the Hotelling trace criterion (HTC). The system with the best observer performance is judged to be the best system. The correlation between the human performance metric d_a and the HTC for this study was 0.829, suggesting that the HTC may have value as a predictor of observer performance. Texture in a liver scan is related to the three-dimensional distribution of functional acini, which changes with disease. One measure of texture is the fractal dimension, related to the Fourier power spectrum. We measured the average radial power spectra of 70 liver scans.
Optimization of Depletion Modeling and Simulation for the High Flux Isotope Reactor
Betzler, Benjamin R; Ade, Brian J; Chandler, David; Ilas, Germina; Sunny, Eva E
2015-01-01
Monte Carlo based depletion tools used for the high-fidelity modeling and simulation of the High Flux Isotope Reactor (HFIR) come at a great computational cost; finding sufficient approximations is necessary to make the use of these tools feasible. The optimization of the neutronics and depletion model for the HFIR is based on two factors: (i) the explicit representation of the involute fuel plates with sets of polyhedra and (ii) the treatment of depletion mixtures and control element position during depletion calculations. A very fine representation (i.e., more polyhedra in the involute plate approximation) does not significantly improve simulation accuracy. The recommended representation closely represents the physical plates and ensures sufficient fidelity in regions with high flux gradients. Including the fissile targets in the central flux trap of the reactor as depletion mixtures has the greatest effect on the calculated cycle length, while localized effects (e.g., the burnup of specific isotopes or the power distribution evolution over the cycle) are more noticeable consequences of including a critical control element search or depleting burnable absorbers outside the fuel region.
Nelson, R.A. Jr.; Pimentel, D.A.; Jolly-Woodruff, S.; Spore, J.
1998-04-01
In this report, a phenomenological model of simultaneous bottom-up and top-down quenching is developed and discussed. The model was implemented in the TRAC-PF1/MOD2 computer code. Two sets of closure relationships were compared within the study, the Absolute set and the Conditional set. The Absolute set is frequently viewed as the pure set because its correlations utilize the original coefficients suggested by their developers; the Conditional set is a modified set of correlations with changes to the correlation coefficients only. The two sets yield quite similar results. This report also summarizes initial results of an effort to investigate nonlinear optimization techniques applied to closure-model development. The results suggest that such techniques can provide advantages for future model development work, but that extensive expertise is required to utilize them (i.e., the model developer must fully understand both the physics of the process being represented and the computational techniques being employed). The computer may then be used to improve the agreement between computational results and experiments.
Numerical research of the optimal control problem in the semi-Markov inventory model
Gorshenin, Andrey K.
2015-03-10
This paper is devoted to the numerical simulation of a stochastic inventory-management system using a controlled semi-Markov process. Results obtained with special-purpose software for studying the system and finding the optimal control are presented.
Optimal Compensation with Hidden Action and Lump-Sum Payment in a Continuous-Time Model
Cvitanic, Jaksa; Wan, Xuhu; Zhang, Jianfeng
2009-02-15
We consider a problem of finding optimal contracts in continuous time, when the agent's actions are unobservable by the principal, who pays the agent with a one-time payoff at the end of the contract. We fully solve the case of quadratic cost and separable utility, for general utility functions. The optimal contract is, in general, a nonlinear function of the final outcome only, while in the previously solved cases, for exponential and linear utility functions, the optimal contract is linear in the final output value. In a specific example we compute, the first-best principal's utility is infinite, while it becomes finite with hidden action, which is increasing in value of the output. In the second part of the paper we formulate a general mathematical theory for the problem. We apply the stochastic maximum principle to give necessary conditions for optimal contracts. Sufficient conditions are hard to establish, but we suggest a way to check sufficiency using non-convex optimization.
Tan, Sirui; Huang, Lianjie
2014-11-01
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial, not temporal, dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling that controls dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function that minimizes the relative errors of the phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than optimized schemes using the standard stencil at similar modeling accuracy for a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent at similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
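The dispersion analysis mentioned above compares numerical to exact phase velocity as a function of wavenumber. For the standard second-order leapfrog discretization of the 1D wave equation (not the paper's optimized stencil), the discrete dispersion relation sin(w*dt/2) = C*sin(k*h/2), with Courant number C = c*dt/h, gives the ratio in closed form:

```python
from math import sin, asin

def phase_velocity_ratio(kh, courant):
    """Numerical/exact phase-velocity ratio for the standard 2nd-order
    leapfrog scheme for the 1D wave equation. From
    sin(w*dt/2) = C*sin(k*h/2):  v_num/c = 2*asin(C*sin(kh/2)) / (C*kh),
    where kh is the normalized wavenumber k*h and C the Courant number."""
    return 2.0 * asin(courant * sin(kh / 2.0)) / (courant * kh)
```

At small kh the ratio approaches 1 (well-resolved waves travel at the right speed), and at C = 1 the 1D scheme is dispersion-free; for C < 1 and coarse sampling the ratio drops below 1, which is exactly the combined space-time error the paper's optimized coefficients are designed to minimize over a wavenumber range and all propagation directions.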
Hart, W.E.; Istrail, S. [Sandia National Labs., Albuquerque, NM (United States). Algorithms and Discrete Mathematics Dept.
1996-08-09
This paper considers the protein structure prediction problem for lattice and off-lattice protein folding models that explicitly represent side chains. Lattice models of proteins have proven extremely useful tools for reasoning about protein folding in unrestricted continuous space through analogy. This paper provides the first illustration of how rigorous algorithmic analyses of lattice models can lead to rigorous algorithmic analyses of off-lattice models. The authors consider two side chain models: a lattice model that generalizes the HP model (Dill 85) to explicitly represent side chains on the cubic lattice, and a new off-lattice model, the HP Tangent Spheres Side Chain model (HP-TSSC), that generalizes this model further by representing the backbone and side chains of proteins with tangent spheres. They describe algorithms for both of these models with mathematically guaranteed error bounds. In particular, the authors describe a linear time performance guaranteed approximation algorithm for the HP side chain model that constructs conformations whose energy is better than 86% of optimal in a face centered cubic lattice, and they demonstrate how this provides a 70% performance guarantee for the HP-TSSC model. This is the first algorithm in the literature for off-lattice protein structure prediction that has a rigorous performance guarantee. The analysis of the HP-TSSC model builds on the work of Dancik and Hannenhalli, who developed a 16/30 approximation algorithm for the HP model on the hexagonal close packed lattice. Further, the analysis provides a mathematical methodology for transferring performance guarantees on lattices to off-lattice models. These results partially answer the open question of Karplus et al. concerning the complexity of protein folding models that include side chains.
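In the HP model underlying this work, a conformation's energy is the negative count of hydrophobic (H) monomer pairs that are lattice neighbors but not adjacent along the chain. A minimal sketch of that energy function on the 2D square lattice (the paper's models use the cubic and FCC lattices and add side chains, which this sketch omits):

```python
def hp_energy(conformation, sequence):
    """Energy of an HP-model conformation on the square lattice: -1 for each
    pair of H monomers that are lattice neighbors but not chain neighbors.
    `conformation` is a self-avoiding list of (x, y) sites, one per residue."""
    pos = {p: i for i, p in enumerate(conformation)}
    energy = 0
    for i, (x, y) in enumerate(conformation):
        if sequence[i] != "H":
            continue
        # Scan only the +x and +y neighbors so each contact is counted once.
        for q in ((x + 1, y), (x, y + 1)):
            j = pos.get(q)
            if j is not None and sequence[j] == "H" and abs(i - j) > 1:
                energy -= 1
    return energy

# A 2x2 square fold of the chain HHHH forms one non-bonded H-H contact:
square = [(0, 0), (1, 0), (1, 1), (0, 1)]
```

Structure prediction in this model is the (NP-hard, in general) search for the conformation minimizing this energy, which is why constant-factor approximation guarantees like the 86% bound above are the objects of study.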
MULTI-SCALE MODELING AND APPROXIMATION ASSISTED OPTIMIZATION OF BARE TUBE HEAT EXCHANGERS
Bacellar, Daniel; Ling, Jiazhen; Aute, Vikrant; Radermacher, Reinhard; Abdelaziz, Omar
2014-01-01
Air-to-refrigerant heat exchangers are very common in air-conditioning, heat pump and refrigeration applications. In these heat exchangers, there is a great benefit in terms of size, weight, refrigerant charge and heat transfer coefficient in moving from conventional channel sizes (~ 9mm) to smaller channel sizes (< 5mm). This work investigates new designs for air-to-refrigerant heat exchangers with tube outer diameter ranging from 0.5 to 2.0mm. The goal of this research is to develop and optimize the design of these heat exchangers and compare their performance with existing state-of-the-art designs. The air-side performance of various tube bundle configurations is analyzed using a Parallel Parameterized CFD (PPCFD) technique. PPCFD allows for fast parametric CFD analyses of various geometries with topology change. Approximation techniques drastically reduce the number of CFD evaluations required during optimization. The Maximum Entropy Design method is used for sampling and the Kriging method is used for metamodeling. Metamodels are developed for the air-side heat transfer coefficients and pressure drop as a function of tube-bundle dimensions and air velocity. The metamodels are then integrated with an air-to-refrigerant heat exchanger design code. This integration allows a multi-scale analysis of heat exchanger air-side performance, including air-to-refrigerant heat transfer and phase change. Overall optimization is carried out using a multi-objective genetic algorithm. The optimal designs found can exhibit 50 percent size reduction, 75 percent decrease in air-side pressure drop, and doubled air-side heat transfer coefficients compared to a high performance compact microchannel heat exchanger with the same capacity and flow rates.
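The surrogate-assisted step can be sketched in a few lines. The snippet below is a minimal Kriging-style Gaussian-process interpolator with a fixed-hyperparameter RBF kernel; the stand-in function, sample count, and length scale are all illustrative placeholders, not the study's Maximum Entropy sampling or actual CFD data:

```python
import numpy as np

# Minimal sketch: replace an expensive evaluation (here a cheap stand-in for
# an air-side CFD run) with a Kriging-style surrogate fit to sampled designs.
def expensive_cfd(x):                      # hypothetical response vs. design x
    return np.sin(3 * x) + 0.5 * x

def rbf(a, b, length=0.3):                 # squared-exponential kernel
    return np.exp(-0.5 * (a[:, None] - b[None, :]) ** 2 / length ** 2)

X = np.linspace(0.0, 2.0, 12)              # sampled designs (e.g. tube pitch)
y = expensive_cfd(X)
K = rbf(X, X) + 1e-8 * np.eye(X.size)      # jitter for numerical stability
alpha = np.linalg.solve(K, y)

def surrogate(xq):
    """Kriging-style mean prediction at query points xq."""
    return rbf(np.atleast_1d(xq), X) @ alpha
```

In the study's workflow, a genetic algorithm would then query `surrogate` thousands of times at negligible cost instead of launching CFD runs.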
Reference Model MHK Turbine Array Optimization Study within a Generic River System.
Johnson, Erick; Barco Mugg, Janet; James, Scott; Roberts, Jesse D.
2011-12-01
Increasing interest in marine hydrokinetic (MHK) energy has spurred significant research on optimal placement of emerging technologies to maximize energy conversion and minimize potential effects on the environment. These devices will be deployed as arrays in order to reduce the cost of energy, yet little work has been done to understand the impact these arrays will have on flow dynamics, sediment-bed transport, and benthic habitats, or how best to optimize these arrays for both performance and environmental considerations. An "MHK-friendly" routine has been developed and implemented by Sandia National Laboratories (SNL) into the flow, sediment dynamics and water-quality code, SNL-EFDC. This routine has been verified and validated against three separate sets of experimental data. With SNL-EFDC, water quality and array optimization studies can be carried out to optimize an MHK array in a resource and study its effects on the environment. The present study examines the effect streamwise and spanwise spacing has on array performance. Various hypothetical MHK array configurations are simulated within a trapezoidal river channel. Results show a non-linear increase in array-power efficiency as turbine spacing is increased in each direction, which matches the trends seen experimentally. While the sediment transport routines were not used in these simulations, the flow acceleration seen around the MHK arrays has the potential to significantly affect the sediment transport characteristics and benthic habitat of a resource.
Bein, A.; Dutton, A.R.
1993-06-01
Na-Cl, halite Ca-Cl, and gypsum Ca-Cl brines with salinities from 45 to >300 g/L are identified and mapped in four hydrostratigraphic units in the Permian Basin area beneath western Texas and Oklahoma and eastern New Mexico, providing spatial and lithologic constraints on the interpretation of the origin and movement of brine. Na-Cl brine is derived from meteoric water as young as 5-10 Ma that dissolved anhydrite and halite, whereas Ca-Cl brine is interpreted to be ancient, modified-connate Permian brine that now is mixing with, and being displaced by, the Na-Cl brine. Displacement fronts appear as broad mixing zones with no significant salinity gradients. Evolution of Ca-Cl brine composition from ideal evaporated sea water is attributed to dolomitization and syndepositional recycling of halite and bittern salts by intermittent influx of fresh water and sea water. Halite Ca-Cl brine in the evaporite section in the northern part of the basin differs from gypsum Ca-Cl brine in the south-central part in salinity and Na/Cl ratio and reflects segregation between halite- and gypsum-precipitating lagoons during the Permian. Ca-Cl brine moved downward through the evaporite section into the underlying Lower Permian and Pennsylvanian marine section that is now the deep-basin brine aquifer, mixing there with pre-existing sea water. Buoyancy-driven convection of brine dominated local flow for most of basin history, with regional advection governed by topographically related forces dominant only for the past 5 to 10 Ma. 71 refs., 11 figs.
Bradonjic, Milan
2009-01-01
In this paper we study reputation mechanisms and show how the notion of reputation can help in building truthful online auction mechanisms. From the mechanism design perspective, we derive the conditions for, and design, a truthful online auction mechanism. Moreover, in the case when some agents may lie or cannot have real knowledge about the other agents' reputations, we derive the resolution of the auction such that the mechanism is truthful. We then move on to the optimal one-gambler/one-seller problem and explain how that problem is a refinement of the previously discussed online auction design in the presence of a reputation mechanism. In the setting of the optimal one-gambler problem, we naturally raise and solve a specific question: what is an agent's optimal strategy to maximize his revenue? We stress that our analysis goes beyond what game theory usually discusses under the notion of reputation. We model one-player games by introducing a new parameter (reputation), which helps in predicting the agent's behavior in real-world situations, such as the behavior of a gambler, a real-estate dealer, etc.
First report on non-thermal plasma reactor scaling criteria and optimization models
Rosocha, L.A.; Korzekwa, R.A.
1998-01-13
The purpose of SERDP project CP-1038 is to evaluate and develop non-thermal plasma (NTP) reactor technology for Department of Defense (DoD) air emissions control applications. The primary focus is on oxides of nitrogen (NO{sub x}), with a secondary focus on hazardous air pollutants (HAPs), especially volatile organic compounds (VOCs). Example NO{sub x} sources are jet engine test cells (JETCs) and diesel engine powered electrical generators. Example VOCs are organic solvents used in painting, paint stripping, and parts cleaning. To design and build NTP reactors that are optimized for particular DoD applications, one must understand the basic decomposition chemistry of the target compound(s) and how the decomposition of a particular chemical species depends on the air emissions stream parameters and the reactor operating parameters. This report is intended to serve as an overview of reactor scaling and optimization. It discusses the basic decomposition chemistry of nitric oxide (NO) and two representative VOCs, trichloroethylene and carbon tetrachloride, and the connection between the basic plasma chemistry, the target species properties, and the reactor operating parameters (in particular, the operating plasma energy density). System architecture, that is, how NTP reactors can be combined or ganged to achieve higher capacity, will also be briefly discussed.
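A commonly used first-order scaling in the NTP literature relates the remaining fraction of a pollutant to the deposited plasma energy density E through an e-folding ("beta") energy density: X/X0 = exp(-E/beta). The sketch below uses this generic scaling with a placeholder beta value, not measured values from this report:

```python
import math

# Generic exponential removal scaling for NTP reactors; beta is a
# species- and reactor-dependent e-folding energy density (illustrative
# value here, not a measurement from the report).
def remaining_fraction(energy_density, beta):
    """Fraction of pollutant remaining after energy_density (same units as beta)."""
    return math.exp(-energy_density / beta)

# e.g. a species with beta = 100 J/L needs ln(10)*beta ~ 230 J/L for 90% removal
e_90 = 100.0 * math.log(10.0)
```

This is why the operating plasma energy density is singled out in the report: for a given species, it largely determines removal efficiency and hence reactor sizing.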
From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; Brandt, Steven R.; Ciznicki, Milosz; Kierzynka, Michal; Löffler, Frank; Schnetter, Erik; Tao, Jian
2013-01-01
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
Hour-by-Hour Cost Modeling of Optimized Central Wind-Based Water Electrolysis Production
Broader source: Energy.gov [DOE]
Presentation slides from the US DOE Fuel Cell Technologies Office webinar, Wind-to-Hydrogen Cost Modeling and Project Findings, held on January 17, 2013.
Bond-Lamberty, Benjamin; Calvin, Katherine V.; Jones, Andrew D.; Mao, Jiafu; Patel, Pralit L.; Shi, Xiaoying; Thomson, Allison M.; Thornton, Peter E.; Zhou, Yuyu
2014-01-01
Human activities are significantly altering biogeochemical cycles at the global scale, posing a significant problem for earth system models (ESMs), which may incorporate static land-use change inputs but do not actively simulate policy or economic forces. One option to address this problem is to couple an ESM with an economically oriented integrated assessment model. Here we have implemented and tested a coupling mechanism between the carbon cycles of an ESM (CLM) and an integrated assessment model (GCAM), examining the best proxy variables to share between the models, and quantifying our ability to distinguish climate- and land-use-driven flux changes. CLM's net primary production and heterotrophic respiration outputs were found to be the most robust proxy variables by which to manipulate GCAM's assumptions of long-term ecosystem steady state carbon, with short-term forest production strongly correlated with long-term biomass changes in climate-change model runs. By leveraging the fact that carbon-cycle effects of anthropogenic land-use change are short-term and spatially limited relative to widely distributed climate effects, we were able to distinguish these effects successfully in the model coupling, passing only the latter to GCAM. By allowing climate effects from a full earth system model to dynamically modulate the economic and policy decisions of an integrated assessment model, this work provides a foundation for linking these models in a robust and flexible framework capable of examining two-way interactions between human and earth system processes.
Origin of Macrostrains and Microstrains in Diamond-SiC Nanocomposites Based on the Core-Shell Model
Palosz,B.; Stelmakh, S.; Grzanka, E.; Gierlotka, S.; Nauyoks, S.; Zerda, T.; Palosz, W.
2007-01-01
SiC-diamond nanocomposites were synthesized from nanodiamond and nanosilicon powders. A core-shell model of the composite nanocrystals was examined, assuming that interatomic distances in the grain interior (the core) and at the surface shell (grain boundaries in nanocrystalline solids) are different. The samples were investigated by x-ray diffraction using a synchrotron source. The powder diffractograms were analyzed using the apparent lattice parameter methodology. The structure of the composites and its dependence on the sintering conditions is discussed. It is shown that as the sintering temperature increases, the interatomic distances in the grain cores decrease, while the opposite occurs in the grain shells (forming the grain boundaries). At some sintering temperature the interatomic distances in the core and in the shell become equal. However, for diamond this happens at a different temperature than for SiC, so internal strains in the composites are unavoidable.
An Integrated Approach to Coal Gasifier Testing, Modeling, and Process Optimization
Sundaram, S. K.; Johnson, Kenneth I.; Matyas, Josef; Williford, Ralph E.; Pilli, Siva Prasad; Korolev, Vladimir N.
2009-10-01
Gasification is an important method of converting coal into clean burning fuels and high-value industrial chemicals. However, gasifier reliability can be severely limited by rapid degradation of the refractory lining in hot-wall gasifiers. The Pacific Northwest National Laboratory (PNNL) is performing multidisciplinary research to provide the experimental data and the engineering models needed to control gasifier operation for extended refractory life. Our experimental program includes prediction of slag viscosity using empirical viscosity models encompassing US coals, characterization of selected slag-refractory interaction including transport of slag/refractory components at the slag-refractory interface, and measurement of slag penetration into refractories as a function of time and temperature. The experimental data is used in slag flow, slag penetration, and refractory damage models to predict the operating temperature limits for increased refractory life. A simplified entrained flow gasifier model is also being developed to simulate one-dimensional axial flow with average axial velocity, coal devolatilization, and combustion kinetics. Combining the slag flow, refractory degradation, and gasifier models will provide a powerful tool to predict the coal and oxidant feed rates and control the gasifier operation to balance coal conversion efficiency with increased refractory life. A research scale gasifier has also been constructed at PNNL to provide syngas for coal conversion and carbon sequestration research, and also valuable datasets on operating conditions for validating the modeling results.
Gu, Pei-Hong
2014-12-01
We propose an SO(10) × SO(10)' model to simultaneously realize a seesaw for Dirac neutrino masses and a leptogenesis for ordinary and dark matter-antimatter asymmetries. A (16 × 16-bar'){sub H} scalar crossing the SO(10) and SO(10)' sectors plays an essential role in this seesaw-leptogenesis scenario. As a result of lepton number conservation, the lightest dark nucleon as the dark matter particle should have a determined mass around 15 GeV to explain the comparable fractions of ordinary and dark matter in the present universe. The (16 × 16-bar'){sub H} scalar also mediates a U(1){sub em} × U(1)'{sub em} kinetic mixing after the ordinary and dark left-right symmetry breaking so that we can expect a dark nucleon scattering in direct detection experiments and/or a dark nucleon decay in indirect detection experiments. Furthermore, we can impose a softly broken mirror symmetry to simplify the parameter choice.
An integrated approach to coal gasifier testing, modeling, and process optimization
S.K. Sundaram; K.I. Johnson; J. Matyas; R.E. Williford; S.P. Pilli; V.N. Korolev
2009-09-15
Gasification is an important method of converting coal into clean-burning fuels and high-value industrial chemicals. However, gasifier reliability can be severely limited by rapid degradation of the refractory lining in hot-wall gasifiers. This paper describes an integrated approach to provide the experimental data and engineering models needed to better understand how to control gasifier operation for extended refractory life. The experimental program includes slag viscosity testing and measurement of slag penetration into refractories as a function of time and temperature. The experimental data is used in slag flow, slag penetration, and refractory damage models to predict the limits on operating temperature for increased refractory life. A simplified entrained flow gasifier model is also described to simulate one-dimensional axial flow with average axial velocity, coal devolatilization, and combustion kinetics. The goal of this experimental and model program is to predict coal and oxidant feed rates and to control the gasifier operation to balance coal conversion efficiency with increased refractory life. 26 refs., 7 figs., 3 tabs.
Rafique, Rashid; Kumar, Sandeep; Luo, Yiqi; Kiely, Gerard; Asrar, Ghassem R.
2015-02-01
The accurate calibration of complex biogeochemical models is essential for the robust estimation of soil greenhouse gases (GHG) as well as other environmental conditions and parameters that are used in research and policy decisions. DayCent is a popular biogeochemical model used both nationally and internationally for this purpose. Despite DayCent's popularity, its complex parameter estimation is often based on experts' knowledge, which is somewhat subjective. In this study we used the inverse modelling parameter estimation software PEST to calibrate the DayCent model based on sensitivity and identifiability analysis. Using previously published N2O and crop yield data as the basis of our calibration approach, we found that half of the 140 parameters used in this study were the primary drivers of calibration differences (i.e. the most sensitive), and the remaining parameters could not be identified given the data set and parameter ranges we used. The post-calibration results showed improvement over the pre-calibration parameter set: a 79% decrease in residual differences for N2O fluxes and 84% for crop yield, and an increase in the coefficient of determination of 63% for N2O fluxes and 72% for corn yield. The results of our study suggest that future studies need to better characterize germination temperature, number of degree-days, and the temperature dependency of plant growth; these processes were highly sensitive and could not be adequately constrained by the data used in our study. Furthermore, the sensitivity and identifiability analysis was helpful in providing deeper insight into important processes and associated parameters that can lead to further improvement in calibration of the DayCent model.
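The inverse-modelling idea behind a PEST-style calibration can be illustrated with a toy model (not DayCent itself): adjust model parameters to minimize the residuals between simulated and observed outputs. The parameter names and model form below are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

# Toy inverse calibration: recover parameters of a simple flux-decay model
# from (here synthetic, noiseless) observations.
t = np.linspace(0.0, 10.0, 50)

def model(params, t):
    rate, decay = params               # hypothetical parameter names
    return rate * np.exp(-decay * t)

true = np.array([2.0, 0.3])
obs = model(true, t)                   # stand-in for observed N2O-like fluxes

def residuals(params):
    return model(params, t) - obs

fit = least_squares(residuals, x0=[1.0, 0.1], bounds=([0, 0], [10, 2]))
# fit.x approximately recovers [2.0, 0.3]; fit.jac gives the sensitivities
# that underlie identifiability analysis.
```

With 140 parameters and noisy field data, as in the study, the Jacobian becomes rank-deficient for insensitive parameters, which is exactly why only about half of them could be identified.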
Ely, James H.; Siciliano, Edward R.; Swinhoe, Martyn T.; Lintereur, Azaree T.
2013-01-01
This report details the results of the modeling and simulation work accomplished for the ‘Neutron Detection without Helium-3’ project during the 2011 and 2012 fiscal years. The primary focus of the project is to investigate commercially available technologies that might be used in safeguards applications in the relatively near term. Other technologies that are being developed may be more applicable in the future, but are outside the scope of this study.
J. Vernon Cole; Abhra Roy; Ashok Damle; Hari Dahr; Sanjiv Kumar; Kunal Jain; Ned Djilai
2012-10-02
Water management in Proton Exchange Membrane (PEM) fuel cells is challenging because of the inherent conflicts between the requirements for efficient low- and high-power operation. Particularly at low powers, adequate water must be supplied to sufficiently humidify the membrane, or protons will not move through it adequately and resistance losses will decrease the cell efficiency. At high power density operation, more water is produced at the cathode than is necessary for membrane hydration. This excess water must be removed effectively or it will accumulate in the Gas Diffusion Layers (GDLs) between the gas channels and catalysts, blocking diffusion paths for reactants to reach the catalysts and potentially flooding the electrode. As the power density of the cells is increased, the challenges arising from water management are expected to become more difficult to overcome, simply due to the increased rate of liquid water generation relative to fuel cell volume. Thus, effectively addressing water-management issues is a key challenge in the successful application of PEMFC systems. In this project, CFDRC and our partners used a combination of experimental characterization, controlled experimental studies of important processes governing how water moves through the fuel cell materials, and detailed models and simulations to improve understanding of water management in operating hydrogen PEM fuel cells. The characterization studies provided key data that is used as inputs to all state-of-the-art models for commercially important GDL materials. Experimental studies and microscopic-scale models of how water moves through the GDLs showed that the water follows preferential paths, not branching like a river, as it moves toward the surface of the material. Experimental studies and detailed models of water and airflow in fuel cell channels demonstrated that such models can be used as an effective design tool to reduce operating pressure drop in the channels and the associated
SU-E-T-583: Optimizing the MLC Model Parameters for IMRT in the RayStation Treatment Planning System
Chen, S; Yi, B; Xu, H; Yang, X; Prado, K; D'Souza, W
2014-06-01
Purpose: To optimize the MLC model parameters for IMRT in the RayStation v.4.0 planning system for a Varian C-series linac with a 120-leaf Millennium MLC. Methods: The RayStation treatment planning system models a rounded leaf-end MLC with the following parameters: average transmission, leaf-tip width, tongue-and-groove width, and position offset. The position offset was provided by Varian. The leaf-tip width was iteratively evaluated by comparing computed and measured transverse dose profiles of MLC-defined fields at dmax in water. The profile comparison was also used to verify the MLC position offset. The transmission factor and leaf tongue width were derived iteratively by optimizing five clinical patient IMRT QA cases: brain, lung, pancreas, head-and-neck (HN), and prostate. The HN and prostate cases involved split fields. Verifications were performed with MapCHECK 2 measurements and Monte Carlo calculations. Finally, the MLC model was validated using five test IMRT cases from the AAPM TG-119 report. Absolute gamma analyses (3mm/3% and 2mm/2%) were applied. In addition, computed output factors for MLC-defined small fields (2×2, 3×3, 4×4, and 6×6 cm) of both 6MV and 18MV were compared to those measured by the Radiological Physics Center (RPC). Results: Both 6MV and 18MV models were determined to have the same MLC parameters: 2.5% transmission, 0.05 cm tongue-and-groove width, and 0.3 cm leaf-tip width. IMRT QA analysis for the five TG-119 cases resulted in a 100% passing rate with 3mm/3% gamma analysis for 6MV, and >97.5% for 18MV. With 2mm/2% gamma analysis, the passing rate was >94.6% for 6MV and >90.9% for 18MV. The difference between computed output factors in RayStation and RPC measurements was less than 2% for all MLC-defined fields, which meets the RPC's acceptance criterion. Conclusion: The rounded leaf-end MLC model in the RayStation 4.0 planning system was verified and IMRT commissioning was clinically acceptable. The IMRT commissioning was well validated using guidance from the
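The gamma analysis used for the pass rates above can be sketched in one dimension. This is only an illustration of the gamma-index concept (dose-difference and distance-to-agreement combined); clinical tools such as the one used in the abstract work on 2-D/3-D dose grids with interpolation:

```python
import numpy as np

# Minimal 1-D gamma index (e.g. 3%/3 mm): for each reference point, find the
# closest evaluated point in combined dose-difference / distance space.
def gamma_1d(x, d_ref, d_eval, dose_tol=0.03, dist_tol=3.0):
    """Per-point gamma; a point passes where gamma <= 1."""
    dmax = d_ref.max()                                # global normalization
    g = np.empty(x.size)
    for i, (xi, di) in enumerate(zip(x, d_ref)):
        dd = (d_eval - di) / (dose_tol * dmax)        # dose-difference term
        dx = (x - xi) / dist_tol                      # distance term (mm)
        g[i] = np.sqrt(dx ** 2 + dd ** 2).min()
    return g

x = np.linspace(-50, 50, 101)                         # positions in mm
ref = np.exp(-(x / 20.0) ** 2)                        # toy dose profile
g_same = gamma_1d(x, ref, ref)                        # identical profiles
pass_rate = 100.0 * np.mean(g_same <= 1.0)
```

For identical profiles every gamma value is zero and the pass rate is 100%; differences in leaf-tip width or transmission show up as localized gamma failures near field edges.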
Modeling and Optimization of Direct Chill Casting to Reduce Ingot Cracking
Das, S.K.; Ningileri, S.; Long, Z.; Saito, K.; Khraisheh, M.; Hassan, M.H.; Kuwana, K.; Han, Q.; Viswanathan, S.; Sabau, A.S.; Clark, J.; Hyrn, J. (ANL)
2006-08-15
Approximately 68% of the aluminum produced in the United States is first cast into ingots prior to further processing into sheet, plate, extrusions, or foil. The direct chill (DC) semi-continuous casting process has been the mainstay of the aluminum industry for the production of ingots due largely to its robust nature and relative simplicity. Though the basic process of DC casting is in principle straightforward, the interaction of process parameters with heat extraction, microstructural evolution, and development of solidification stresses is too complex to analyze by intuition or practical experience. One issue in DC casting is the formation of stress cracks [1-15]. In particular, the move toward larger ingot cross-sections, the use of higher casting speeds, and an ever-increasing array of mold technologies have increased industry efficiencies but have made it more difficult to predict the occurrence of stress crack defects. The Aluminum Industry Technology Roadmap [16] has recognized the challenges inherent in the DC casting process and the control of stress cracks and selected the development of 'fundamental information on solidification of alloys to predict microstructure, surface properties, and stresses and strains' as a high-priority research need, and the 'lack of understanding of mechanisms of cracking as a function of alloy' and 'insufficient understanding of the aluminum solidification process', which is 'difficult to model', as technology barriers in aluminum casting processes. The goal of this Aluminum Industry of the Future (IOF) project was to assist the aluminum industry in reducing the incidence of stress cracks from the current level of 5% to 2%. Decreasing stress crack incidence is important for improving product quality and consistency as well as for saving resources and energy, since considerable amounts of cast metal could be saved by eliminating ingot cracking, by reducing the scalping thickness of the ingot before rolling, and by
Optimization of the parameters of plasma liners with zero-dimensional models
Oreshkin, V. I.
2013-11-15
The efficiency of conversion of the energy stored in the capacitor bank of a high-current pulse generator into the kinetic energy of an imploding plasma liner is analyzed. The analysis is performed by using a model consisting of LC circuit equations and equations of motion of a cylindrical shell. It is shown that efficient energy conversion can be attained only with a low-inductance generator. The mode of an 'ideal' load is considered where the load current at the final stage of implosion is close to zero. The advantages of this mode are, first, high efficiency of energy conversion (80%) and, second, improved stability of the shell implosion. In addition, for inertial confinement fusion realized by the scheme of a Z pinch dynamic hohlraum, not one but several fusion targets can be placed in the cavity on the pinch axis due to the large length of the liner.
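A zero-dimensional model of this kind couples an LC circuit to the radial equation of motion of a thin cylindrical shell through the shell's changing inductance. The sketch below uses generic coaxial-inductance and J×B-force expressions with illustrative circuit and liner parameters (not values from the paper):

```python
import numpy as np
from scipy.integrate import solve_ivp

# Zero-dimensional liner implosion sketch: capacitor C (charged to V0) drives
# current through circuit inductance L0, resistance R, and a liner whose
# inductance grows as it implodes. All numbers are illustrative only.
MU0 = 4e-7 * np.pi
C, V0, L0, R = 1.0e-6, 1.0e5, 30.0e-9, 5.0e-3    # F, V, H, ohm
length, r0, r_ret, m = 0.02, 0.01, 0.05, 2.0e-6  # liner length (m), initial
                                                 # radius, return-conductor
                                                 # radius, liner mass (kg)

def lp(r):                                       # coaxial liner inductance
    return MU0 * length / (2 * np.pi) * np.log(r_ret / r)

def rhs(t, y):
    r, v, i, q = y                               # radius, velocity, current, charge
    dv = -MU0 * length * i ** 2 / (4 * np.pi * r * m)        # inward J x B force
    di = (q / C - R * i
          + MU0 * length / (2 * np.pi) * v / r * i) / (L0 + lp(r))
    return [v, dv, di, -i]

def imploded(t, y):                              # stop at 10:1 convergence
    return y[0] - 0.1 * r0
imploded.terminal = True

sol = solve_ivp(rhs, (0.0, 2.0e-6), [r0, 0.0, 0.0, C * V0],
                events=imploded, max_step=1e-9, rtol=1e-8)
kinetic = 0.5 * m * sol.y[1, -1] ** 2
efficiency = kinetic / (0.5 * C * V0 ** 2)       # stored-to-kinetic conversion
```

Sweeping L0 in such a model reproduces the paper's qualitative conclusion: conversion efficiency rises sharply as the fixed generator inductance is reduced relative to the liner's inductance swing.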
U.S. Energy Information Administration (EIA) Indexed Site
Table OS-1. Domestic coal distribution, by origin State, 1st Quarter 2010 Origin: Alabama (thousand short tons) Coal Destination State...
Rong Xing; Ghaly, Michael; Frey, Eric C.
2013-06-15
Purpose: In yttrium-90 ({sup 90}Y) microsphere brachytherapy (radioembolization) of unresectable liver cancer, posttherapy {sup 90}Y bremsstrahlung single photon emission computed tomography (SPECT) has been used to document the distribution of microspheres in the patient and to help predict potential side effects. The energy window used during projection acquisition can have a significant effect on image quality. Thus, using an optimal energy window is desirable. However, there has been great variability in the choice of energy window due to the continuous and broad energy distribution of {sup 90}Y bremsstrahlung photons. The area under the receiver operating characteristic curve (AUC) for the ideal observer (IO) is a widely used figure of merit (FOM) for optimizing the imaging system for detection tasks. The IO implicitly assumes a perfect model of the image formation process. However, for {sup 90}Y bremsstrahlung SPECT there can be substantial model-mismatch (i.e., difference between the actual image formation process and the model of it assumed in reconstruction), and the amount of the model-mismatch depends on the energy window. It is thus important to account for the degradation of the observer performance due to model-mismatch in the optimization of the energy window. The purpose of this paper is to optimize the energy window for {sup 90}Y bremsstrahlung SPECT for a detection task while taking into account the effects of the model-mismatch. Methods: An observer, termed the ideal observer with model-mismatch (IO-MM), has been proposed previously to account for the effects of the model-mismatch on IO performance. In this work, the AUC for the IO-MM was used as the FOM for the optimization. To provide a clinically realistic object model and imaging simulation, the authors used a background-known-statistically and signal-known-statistically task. The background was modeled as multiple compartments in the liver with activity parameters independently following a
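The AUC figure of merit discussed above can be estimated directly from observer test statistics with the Mann-Whitney rank estimator. The sketch below uses synthetic Gaussian score distributions as a stand-in; the IO-MM itself requires the full imaging model and is not reproduced here:

```python
import numpy as np

# Generic AUC estimate from signal-absent and signal-present test statistics.
rng = np.random.default_rng(0)
absent = rng.normal(0.0, 1.0, 2000)      # observer scores, signal absent
present = rng.normal(1.5, 1.0, 2000)     # observer scores, signal present

def auc(neg, pos):
    """Mann-Whitney estimate of P(pos > neg) + 0.5*P(tie) = area under ROC."""
    diff = pos[:, None] - neg[None, :]
    return np.mean(diff > 0) + 0.5 * np.mean(diff == 0)

value = auc(absent, present)   # analytic value here is Phi(1.5/sqrt(2)) ~ 0.86
```

In the energy-window optimization, this scalar is recomputed per candidate window, and model-mismatch lowers it relative to the ideal observer with a perfect imaging model.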
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant; others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
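The optimizer's role reduces to choosing an operating parameter that balances the income and cost signals. The income and cost curves below are hypothetical placeholders (the patent's actual process models are not public), so this is only a structural sketch:

```python
from scipy.optimize import minimize_scalar

# Structural sketch of the optimizer: pick operating parameter x maximizing
# income(x) - cost(x). Both curves are invented for illustration.
def income(x):                     # e.g. revenue rises with load but saturates
    return 100.0 * x / (1.0 + x)

def cost(x):                       # e.g. operating cost grows superlinearly
    return 10.0 * x ** 1.5

res = minimize_scalar(lambda x: -(income(x) - cost(x)),
                      bounds=(0.0, 5.0), method="bounded")
x_opt = res.x                      # optimized operating parameter solution
```

In the patented system, the income and cost inputs would come from the income algorithm (230) and cost algorithm (225) rather than closed-form curves.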
DISSELKAMP RS
2011-01-06
Boehmite (i.e., aluminum oxyhydroxide) is a major non-radioactive component in Hanford and Savannah River nuclear tank waste sludge. Boehmite dissolution from sludge using caustic at elevated temperatures is being planned at Hanford to minimize the mass of material disposed of as high-level waste (HLW) during operation of the Waste Treatment Plant (WTP). To more thoroughly understand the chemistry of this dissolution process, we have developed an empirical kinetic model for aluminate production due to boehmite dissolution. Application of this model to Hanford tank wastes would allow predictability and optimization of the caustic leaching of aluminum solids, potentially yielding significant improvements to overall processing time, disposal cost, and schedule. This report presents an empirical kinetic model that can be used to estimate the aluminate production from the leaching of boehmite in Hanford waste as a function of the following parameters: (1) hydroxide concentration; (2) temperature; (3) specific surface area of boehmite; (4) initial soluble aluminate plus gibbsite present in waste; (5) concentration of boehmite in the waste; and (6) (pre-fit) Arrhenius kinetic parameters. The model was fit to laboratory, non-radioactive (e.g., 'simulant boehmite') leaching results, providing best-fit values of the Arrhenius A-factor, A, and apparent activation energy, E{sub A}, of A = 5.0 x 10{sup 12} hour{sup -1} and E{sub A} = 90 kJ/mole. These parameters were then used to predict boehmite leaching behavior observed in previously reported actual waste leaching studies. Acceptable aluminate versus leaching time profiles were predicted for waste leaching data from both Hanford and Savannah River site studies.
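The fitted Arrhenius parameters reported above (A = 5.0 x 10^12 hr^-1, E_A = 90 kJ/mol) already imply a strong temperature dependence of the dissolution rate constant. The sketch below evaluates only this Arrhenius factor; the full model's dependence on hydroxide concentration, surface area, and the other listed parameters is not reproduced:

```python
import math

# Arrhenius rate constant using the report's best-fit parameters.
R = 8.314        # J/(mol K)
A = 5.0e12       # 1/hour (report's A-factor)
EA = 90.0e3      # J/mol  (report's apparent activation energy)

def k_boehmite(temp_k):
    """Arrhenius rate constant, per hour: k = A * exp(-EA / (R*T))."""
    return A * math.exp(-EA / (R * temp_k))

# Raising the leach temperature from 80 C to 100 C speeds dissolution
# roughly fivefold:
speedup = k_boehmite(373.15) / k_boehmite(353.15)
```

This steep sensitivity is why leaching temperature is a primary lever for trading processing time against energy cost.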
Application of optimal prediction to molecular dynamics
Barber IV, John Letherman
2004-12-01
Optimal prediction is a general system reduction technique for large sets of differential equations. In this method, which was devised by Chorin, Hald, Kast, Kupferman, and Levy, a projection operator formalism is used to construct a smaller system of equations governing the dynamics of a subset of the original degrees of freedom. This reduced system consists of an effective Hamiltonian dynamics, augmented by an integral memory term and a random noise term. Molecular dynamics is a method for simulating large systems of interacting fluid particles. In this thesis, I construct a formalism for applying optimal prediction to molecular dynamics, producing reduced systems from which the properties of the original system can be recovered. These reduced systems require significantly less computational time than the original system. I initially consider first-order optimal prediction, in which the memory and noise terms are neglected. I construct a pair approximation to the renormalized potential, and ignore three-particle and higher interactions. This produces a reduced system that correctly reproduces static properties of the original system, such as energy and pressure, at low-to-moderate densities. However, it fails to capture dynamical quantities, such as autocorrelation functions. I next derive a short-memory approximation, in which the memory term is represented as a linear frictional force with configuration-dependent coefficients. This allows the use of a Fokker-Planck equation to show that, in this regime, the noise is {delta}-correlated in time. This linear friction model reproduces not only the static properties of the original system, but also the autocorrelation functions of dynamical variables.
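The short-memory regime described above, in which the memory term reduces to a linear friction and the noise is δ-correlated in time, is formally a Langevin equation. The sketch below (Python) integrates such a reduced system with an Euler-Maruyama step; the harmonic potential, friction coefficient, and temperature are illustrative placeholders, not values from the thesis.

```python
import math, random

# Illustrative parameters (placeholders, not values from the thesis)
gamma = 1.0   # linear friction coefficient
kT = 1.0      # temperature in energy units
m = 1.0       # particle mass
dt = 1e-3     # time step

def force(x):
    """Force from a harmonic stand-in for the renormalized potential U(x) = x^2/2."""
    return -x

def langevin_step(x, v, rng=random):
    """One Euler-Maruyama step of the Langevin equation:
    dv = (F/m - gamma*v) dt + sqrt(2*gamma*kT/m) dW."""
    noise = math.sqrt(2.0 * gamma * kT / m * dt) * rng.gauss(0.0, 1.0)
    v_new = v + (force(x) / m - gamma * v) * dt + noise
    x_new = x + v_new * dt
    return x_new, v_new

# Short trajectory; in equilibrium <v^2> should approach kT/m
random.seed(0)
x, v = 0.0, 0.0
v2_sum, n = 0.0, 200_000
for _ in range(n):
    x, v = langevin_step(x, v)
    v2_sum += v * v
print(f"<v^2> = {v2_sum / n:.3f} (theory: {kT / m:.3f})")
```

Recovering the equipartition value kT/m is the kind of static check the thesis reports; capturing autocorrelation functions is the harder, dynamical test that the linear friction model is designed to pass.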
Zhou, Zhi; de Bedout, Juan Manuel; Kern, John Michael; Biyik, Emrah; Chandra, Ramu Sharat
2013-01-22
A system for optimizing customer utility usage in a utility network of customer sites, each having one or more utility devices, where customer site information is communicated between each of the customer sites and an optimization server having software for optimizing customer utility usage, over one or more networks, including private and public networks. A customer site model for each of the customer sites is generated based upon the customer site information, and the customer utility usage is optimized based upon the customer site information and the customer site model. The optimization server can be hosted by an external source or within the customer site. In addition, the optimization processing can be partitioned between the customer site and an external source.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
gasifier optimization The Gasifier Optimization and Plant Supporting Systems key technology area focuses on improving the performance and reducing the costs of advanced gasifiers. Specifically, current projects focus on creating models to better understand the kinetics and particulate behavior of fuel inside a gasifier. This work supports development of a highly advanced gasifier optimally configured with its supporting systems, which would incorporate the most aggressive and successful
Brigantic, Robert T.; Papatyi, Anthony F.; Perkins, Casey J.
2010-09-30
This report summarizes a study and corresponding model development conducted in support of the United States Pacific Command (USPACOM) as part of the Federal Energy Management Program (FEMP) American Reinvestment and Recovery Act (ARRA). This research was aimed at developing a mathematical programming framework and accompanying optimization methodology in order to simultaneously evaluate energy efficiency (EE) and renewable energy (RE) opportunities. Once developed, this research then demonstrated this methodology at a USPACOM installation - Camp H.M. Smith, Hawaii. We believe this is the first time such an integrated, joint EE and RE optimization methodology has been constructed and demonstrated.
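The joint EE/RE selection problem has the flavor of a capital-constrained portfolio choice. The toy model below (Python) illustrates that structure with a brute-force 0-1 selection maximizing energy benefit under a budget; every project name, cost, savings figure, and the budget are invented for illustration and have no connection to the Camp H.M. Smith study.

```python
from itertools import combinations

# Hypothetical candidate projects: (name, capital cost in $k, annual MWh saved or generated)
projects = [
    ("LED lighting retrofit",  120, 310),   # energy efficiency
    ("HVAC controls upgrade",  200, 450),   # energy efficiency
    ("Rooftop PV array",       600, 820),   # renewable energy
    ("Solar hot water",        150, 190),   # renewable energy
]
budget = 800  # $k

def best_portfolio(projects, budget):
    """Brute-force 0-1 knapsack: maximize annual MWh within the capital budget."""
    best, best_mwh = (), 0.0
    for r in range(len(projects) + 1):
        for combo in combinations(projects, r):
            cost = sum(p[1] for p in combo)
            mwh = sum(p[2] for p in combo)
            if cost <= budget and mwh > best_mwh:
                best, best_mwh = combo, mwh
    return best, best_mwh

chosen, mwh = best_portfolio(projects, budget)
print(f"Selected ({sum(p[1] for p in chosen)} $k): {[p[0] for p in chosen]}")
print(f"Total annual MWh: {mwh}")
```

A real installation-scale study would use a mathematical programming solver rather than enumeration, but the point of evaluating EE and RE candidates *jointly* against one budget is the same: the optimal mix here is neither all-EE nor all-RE.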
Control and optimization system
Lou, Xinsheng
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
Eranki, Pragnya L.; Manowitz, David H.; Bals, Bryan D.; Izaurralde, Roberto C.; Kim, Seungdo; Dale, Bruce E.
2013-07-23
An array of feedstocks is being evaluated as potential raw material for cellulosic biofuel production. Thorough assessments are required in regional landscape settings before these feedstocks can be cultivated and sustainable management practices can be implemented. On the processing side, a potential solution to the logistical challenges of large biorefineries is provided by a network of distributed processing facilities called local biomass processing depots. A large-scale cellulosic ethanol industry is likely to emerge soon in the United States, and we have the opportunity to influence the sustainability of this emerging industry. The watershed-scale optimized and rearranged landscape design (WORLD) model estimates land allocations for different cellulosic feedstocks at biorefinery scale without displacing current animal nutrition requirements. This model also incorporates a network of the aforementioned depots. An integrated life cycle assessment is then conducted over the unified system of optimized feedstock production, processing, and associated transport operations to evaluate net energy yields (NEYs) and environmental impacts.
March-Leuba, S.; Jansen, J.F.; Kress, R.L.; Babcock, S.M.; Dubey, R.V. (Dept. of Mechanical and Aerospace Engineering)
1992-08-01
A new program package, Symbolic Manipulator Laboratory (SML), for the automatic generation of both kinematic and static manipulator models in symbolic form is presented. Critical design parameters may be identified and optimized using symbolic models, as shown in the sample application presented for the Future Armor Rearm System (FARS) arm. The computer-aided development of the symbolic models yields equations with reduced numerical complexity. Particular attention has been given to the simplification of closed-form solutions and to user-friendly operation. The main emphasis of this research is the development of a methodology, implemented in a computer program, capable of generating symbolic kinematic and static force models of manipulators. The fact that the models are obtained in trigonometrically reduced form is among the most significant results of this work and was the most difficult to implement. Mathematica, a commercial program that allows symbolic manipulation, is used to implement the program package. SML is written so that the user can easily change any of the subroutines or create new ones. To assist the user, on-line help has been written to make SML a user-friendly package. Some sample applications are presented. The design and optimization of the 5-degrees-of-freedom (DOF) FARS manipulator using SML is discussed. Finally, the kinematic and static models of two different 7-DOF manipulators are derived symbolically.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Optimizing Performance: Storage Optimization. Optimizing the sizes of the files you store in HPSS, and minimizing the number of tapes they span, will lead to the most efficient use of NERSC HPSS: file sizes of about 1 GB or larger give the best network performance; file sizes greater than about 500 GB can be more difficult to work with and lead to longer transfer times; files larger than 15 TB cannot be uploaded to HPSS; and groups of small files should be aggregated before storage.
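The guidance above amounts to: bundle many small files into archives of roughly a gigabyte or more before storing them. A minimal sketch of that aggregation step using Python's standard tarfile module (the directory and file names are hypothetical; on NERSC systems the dedicated htar utility is the usual tool for this on HPSS):

```python
import os, tarfile, tempfile

def aggregate(src_dir, archive_path):
    """Bundle a directory of small files into one tar archive prior to
    transferring it to an archival system such as HPSS."""
    with tarfile.open(archive_path, "w") as tar:
        tar.add(src_dir, arcname=os.path.basename(src_dir))
    return os.path.getsize(archive_path)

# Demo with a throwaway directory of small files
tmp = tempfile.mkdtemp()
src = os.path.join(tmp, "run_output")
os.makedirs(src)
for i in range(10):
    with open(os.path.join(src, f"part_{i}.dat"), "w") as f:
        f.write("x" * 1024)
size = aggregate(src, os.path.join(tmp, "run_output.tar"))
print(f"archive size: {size} bytes for 10 small files")
```

One archive then costs a single tape mount and one network transfer instead of ten.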
HOPSPACK: Hybrid Optimization Parallel Search Package.
Gray, Genetha A.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica
2008-12-01
In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.
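HOPSPACK's own API is not reproduced here; the sketch below (Python, with a toy quadratic objective) only illustrates the core idea it builds on: a derivative-free pattern search whose trial-point evaluations are farmed out in parallel, so that expensive objective functions can be polled concurrently.

```python
from concurrent.futures import ThreadPoolExecutor

def objective(x):
    """Toy objective: a shifted quadratic with minimum at (1, -2)."""
    return (x[0] - 1.0) ** 2 + (x[1] + 2.0) ** 2

def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=200):
    """Coordinate pattern search; all 2n trial points of each poll are
    evaluated in parallel, as a hybrid framework would dispatch them to workers."""
    x = list(x0)
    with ThreadPoolExecutor(max_workers=4) as pool:
        fx = f(x)
        for _ in range(max_iter):
            trials = []
            for i in range(len(x)):
                for d in (+step, -step):
                    t = list(x)
                    t[i] += d
                    trials.append(t)
            values = list(pool.map(f, trials))        # parallel evaluations
            best = min(range(len(trials)), key=values.__getitem__)
            if values[best] < fx:
                x, fx = trials[best], values[best]    # successful poll: move
            else:
                step *= 0.5                           # failed poll: contract
                if step < tol:
                    break
    return x, fx

x, fx = pattern_search(objective, [0.0, 0.0])
print(f"minimum near {x}, f = {fx:.2e}")
```

In a real hybrid run, the poll points would come from several cooperating solvers rather than a single fixed stencil; the parallel evaluation queue is the piece the platform shares among them.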
Vanderbei, Robert J.; Pınar, Mustafa C.; Bozkaya, Efe B.
2013-02-15
An American option (or, warrant) is the right, but not the obligation, to purchase or sell an underlying equity at any time up to a predetermined expiration date for a predetermined amount. A perpetual American option differs from a plain American option in that it does not expire. In this study, we solve the optimal stopping problem of a perpetual American option (both call and put) in discrete time using linear programming duality. Under the assumption that the underlying stock price follows a discrete time and discrete state Markov process, namely a geometric random walk, we formulate the pricing problem as an infinite dimensional linear programming (LP) problem using the excessive-majorant property of the value function. This formulation allows us to solve complementary slackness conditions in closed-form, revealing an optimal stopping strategy which highlights the set of stock-prices where the option should be exercised. The analysis for the call option reveals that such a critical value exists only in some cases, depending on a combination of state-transition probabilities and the economic discount factor (i.e., the prevailing interest rate) whereas it ceases to be an issue for the put.
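The closed-form LP-duality analysis is not reproduced here, but the underlying optimal stopping problem is easy to approximate numerically. The sketch below (Python) runs value iteration for a perpetual American put on a geometric random walk truncated to a finite price grid; the strike, up/down factor, transition probability, and discount factor are illustrative choices, not parameters from the paper.

```python
# Perpetual American put on a geometric random walk (illustrative parameters)
K = 100.0      # strike
u = 1.1        # up factor; the down factor is 1/u
p = 0.5        # up-move probability
beta = 0.95    # one-period discount factor

# State space: prices S0 * u^k for k in [-N, N], truncated
S0, N = 100.0, 40
prices = [S0 * u ** k for k in range(-N, N + 1)]

def payoff(s):
    return max(K - s, 0.0)

# Value iteration on V(s) = max(payoff(s), beta * E[V(next state)])
V = [payoff(s) for s in prices]
for _ in range(2000):
    newV = []
    for i, s in enumerate(prices):
        up = V[min(i + 1, 2 * N)]      # hold value at the truncation boundary
        dn = V[max(i - 1, 0)]
        cont = beta * (p * up + (1 - p) * dn)
        newV.append(max(payoff(s), cont))
    V = newV

# Exercise region: states where immediate payoff attains the value
exercise = [s for s, v in zip(prices, V) if payoff(s) >= v - 1e-9]
print(f"exercise the put for prices up to {max(exercise):.2f}")
```

The computed exercise region is an interval of low prices up to a critical threshold strictly below the strike, which is exactly the stopping-boundary structure the complementary slackness analysis characterizes in closed form for the put.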
Lincoln, Don
2014-08-07
The Higgs boson was discovered in July of 2012 and is generally understood to be the origin of mass. While those statements are true, they are incomplete. It turns out that the Higgs boson is responsible for only about 2% of the mass of ordinary matter. In this dramatic new video, Dr. Don Lincoln of Fermilab tells us the rest of the story.
Energy Science and Technology Software Center (OSTI)
2015-08-04
Electrolyte systems are common in advanced electrochemical devices and have numerous other industrial, scientific, and medical applications. For example, contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental effort. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. This technology earned an R&D 100 award in 2014. Although it is applied most frequently to lithium-ion and sodium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
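A heavily simplified illustration of that simulate-then-optimize loop is given below (Python). The plant model, the triangular membership functions, and the candidate gain values are all invented for the sketch, not taken from the patent: a one-input fuzzy controller is placed in a feedback loop with a crude first-order plant model, and its output-scaling decision parameter is tuned by grid search against the simulation.

```python
def membership(error):
    """Triangular memberships for NEG / ZERO / POS on error in [-1, 1]."""
    neg = max(0.0, min(1.0, -error))
    pos = max(0.0, min(1.0, error))
    zero = max(0.0, 1.0 - abs(error))
    return neg, zero, pos

def fuzzy_control(error, gain):
    """Weighted-average defuzzification; 'gain' is the decision parameter."""
    neg, zero, pos = membership(error)
    # Rule outputs: NEG -> -gain, ZERO -> 0, POS -> +gain
    num = neg * (-gain) + zero * 0.0 + pos * gain
    return num / (neg + zero + pos)

def simulate(gain, setpoint=1.0, steps=100, dt=0.1):
    """Closed feedback loop against a crude first-order plant model y' = u - y.
    Returns integrated squared tracking error (lower is better)."""
    y, cost = 0.0, 0.0
    for _ in range(steps):
        u = fuzzy_control(setpoint - y, gain)
        y += (u - y) * dt
        cost += (setpoint - y) ** 2 * dt
    return cost

# Grid-search the decision parameter against the simulated model
gains = [0.5 * k for k in range(1, 11)]
best_gain = min(gains, key=simulate)
print(f"best gain: {best_gain}, cost: {simulate(best_gain):.3f}")
```

The patent's method replaces this toy grid search with models fitted per machine operating region and a proper optimization of the fuzzy decision parameters, but the structure is the same: close the loop around a model, score the controller, adjust.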
Ivanov, A.; Sanchez, V.; Imke, U.; Ivanov, K.
2012-07-01
In order to increase the accuracy and the degree of spatial resolution of core design studies, coupled three-dimensional (3D) neutronics (deterministic and Monte Carlo) and 3D thermal-hydraulics (CFD and sub-channel) codes are being developed worldwide. In this paper the optimization of a coupling between the MCNP5 code and the in-house thermal-hydraulics code SUBCHANFLOW is presented. Various improvements of the coupling methodology are presented. With the help of a novel interpolation tool, a consistent methodology for the preparation of the thermal scattering data library has been developed, ensuring that inelastic scattering from bound nuclei is treated at the correct moderator temperature. Through the use of a hybrid coupling with the discrete-energy Monte Carlo code KENO, a methodology for accelerating the coupled calculation is demonstrated. In this approach an additional coupling between KENO and SUBCHANFLOW was developed, the converged results of which are used as initial conditions for the MCNP-SUBCHANFLOW coupling. Acceleration of fission source distribution convergence, by sampling the fission source from the power distribution obtained by KENO, is also demonstrated. (authors)
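The MCNP5-SUBCHANFLOW coupling itself is proprietary detail; the sketch below (Python) only illustrates the generic operator-splitting (Picard) iteration that such multi-physics couplings use, with stand-in single-channel physics invented for the illustration. A cheap low-order solve could seed the temperature field, playing the role the KENO pre-coupling plays above.

```python
def neutronics(T_fuel):
    """Stand-in for the Monte Carlo solve: power falls as fuel heats up
    (negative Doppler feedback), crude linear model around 900 K."""
    return [100.0 - 0.05 * (t - 900.0) for t in T_fuel]

def thermal_hydraulics(power):
    """Stand-in for the sub-channel solve: fuel temperature tracks power."""
    return [600.0 + 3.0 * p for p in power]

def picard_coupling(n_nodes=5, tol=1e-8, max_outer=100):
    """Fixed-point (Picard) iteration between the two single-physics solves."""
    T = [800.0] * n_nodes                    # initial temperature guess
    for outer in range(max_outer):
        P = neutronics(T)                    # power from current temperatures
        T_new = thermal_hydraulics(P)        # temperatures from that power
        if max(abs(a - b) for a, b in zip(T_new, T)) < tol:
            return T_new, P, outer + 1
        T = T_new
    return T, P, max_outer

T, P, iters = picard_coupling()
print(f"converged in {iters} outer iterations; T = {T[0]:.2f} K, P = {P[0]:.2f}")
```

With the gentle feedback chosen here the iteration contracts quickly; real couplings converge much more slowly, which is why seeding from a converged cheap solve and accelerating the fission source are worth the extra machinery.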
Michael Harold; Vemuri Balakotaiah
2010-05-31
In this project a combined experimental and theoretical approach was taken to advance our understanding of lean NOx trap (LNT) technology. Fundamental kinetics studies were carried out on model LNT catalysts containing variable loadings of precious metals (Pt, Rh) and storage components (BaO, CeO{sub 2}). The Temporal Analysis of Products (TAP) reactor provided transient data under well-characterized conditions for both powder and monolith catalysts, enabling the identification of key reaction pathways and estimation of the corresponding kinetic parameters. The performance of model NOx storage and reduction (NSR) monolith catalysts was evaluated in a bench-scale NOx trap using synthetic exhaust, with attention placed on the effect of the pulse timing and composition on the instantaneous and cycle-averaged product distributions. From these experiments we formulated a global model that predicts the main spatio-temporal features of the LNT, and a mechanism-based microkinetic model that incorporates a detailed understanding of the chemistry and predicts more detailed selectivity features of the LNT. The NOx trap models were evaluated against the bench-scale data and ultimately used to assess alternative LNT designs and operating strategies. The four-year project led to the training of several doctoral students and the dissemination of the findings in 47 presentations at conferences, catalysis societies, and academic departments, as well as 23 manuscripts in peer-reviewed journals. A condensed review of NOx storage and reduction was published in an encyclopedia of technology.
Arefinia, Zahra [Research Institute for Applied Physics and Astronomy, University of Tabriz, Tabriz 51666-14766 (Iran, Islamic Republic of); Asgari, Asghar, E-mail: asgari@tabrizu.ac.ir [Research Institute for Applied Physics and Astronomy, University of Tabriz, Tabriz 51666-14766 (Iran, Islamic Republic of); School of Electrical, Electronic, and Computer Engineering, University of Western Australia, Crawley, WA 6009 (Australia)
2014-05-21
Based on the ability of In{sub x}Ga{sub 1-x}N materials to optimally span the solar spectrum and on their superior radiation resistance, solar cells based on p-type In{sub x}Ga{sub 1-x}N with low indium content, interfaced with a graphene film (G/In{sub x}Ga{sub 1-x}N), are proposed to exploit the transparency and work-function tunability of graphene. Their solar power conversion efficiency was modeled and optimized using a new analytical approach taking into account all recombination processes and accurate carrier mobility. Furthermore, their performance was compared with graphene-on-silicon counterparts: G/p-In{sub x}Ga{sub 1-x}N showed a relatively smaller short-circuit current ({approx}7 mA/cm{sup 2}) and significantly higher open-circuit voltage ({approx}4 V) and efficiency ({approx}30%). The thickness, doping concentration, and indium content of the p-In{sub x}Ga{sub 1-x}N, as well as the graphene work function, were found to substantially affect the performance of G/p-In{sub x}Ga{sub 1-x}N.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Optimization Performance and Optimization Performance Monitoring Last edited: 2012-01-09 12:31:03...
Robert C. Haight; John L. Ullmann; Daniel D. Strottman; Paul E. Koehler; Franz Kaeppeler
2000-01-01
This Workshop was held on September 3--4, 1999, following the 10th International Symposium on Capture Gamma-Ray Spectroscopy. Presentations were made by 14 speakers, 6 from the US and 8 from other countries on topics relevant to s-, r- and rp-process nucleosynthesis. Laboratory experiments, both present and planned, and astrophysical observations were represented as were astrophysical models. Approximately 50 scientists participated in this Workshop. These Proceedings consist of copies of vu-graphs presented at the Workshop. For further information, the interested readers are referred to the authors.
Dr. Ralph E. White; Dr. Branko N. Popov
2002-04-01
The dissolution of NiO cathodes during cell operation is a limiting factor in the successful commercialization of molten carbonate fuel cells (MCFCs). A lithium cobalt oxide coating on the porous nickel electrode has been adopted to modify the conventional MCFC cathode, which is believed to increase the stability of the cathode in the carbonate melt. The material used for surface modification should possess thermodynamic stability in the molten carbonate and should also be electrocatalytically active for MCFC reactions. Two approaches have been adopted to obtain a stable cathode material: the first is the use of LiNi{sub 0.8}Co{sub 0.2}O{sub 2}, a commercially available lithium battery cathode material, and the second is the use of tape-cast electrodes prepared from cobalt-coated nickel powders. The morphology and structure of the LiNi{sub 0.8}Co{sub 0.2}O{sub 2} and tape-cast Co-coated nickel powder electrodes were studied using scanning electron microscopy and X-ray diffraction, respectively. The electrochemical performance of the two materials was investigated by electrochemical impedance spectroscopy and polarization studies. A three-phase homogeneous model was developed to simulate the performance of the molten carbonate fuel cell cathode. The homogeneous model is based on volume averaging of different variables in the three phases over a small volume element. The model gives a good fit to the experimental data and has been used to analyze MCFC cathode performance under a wide range of operating conditions.
Moro, Erik A.
2012-06-07
The design of an intensity-modulated interferometric sensor depends on an appropriate performance function (e.g., desired displacement range, accuracy, robustness, etc.). In this dissertation, the performance limitations of a bundled differential intensity-modulated displacement sensor are analyzed, where the bundling configuration has been designed to optimize performance. The performance limitations of a white-light Fabry-Perot displacement sensor are also analyzed. Both of these sensors are non-contacting, but they have access to different regions of the performance space. Further, both sensors have different degrees of sensitivity to experimental uncertainty. Made in conjunction with careful analysis, the decision of which sensor to deploy need not be an uninformed one.
Lovley, Derek R
2012-12-28
The goal of this research was to provide computational tools to predictively model the behavior of two microbial communities of direct relevance to Department of Energy interests: 1) the microbial community responsible for in situ bioremediation of uranium in contaminated subsurface environments; and 2) the microbial community capable of harvesting electricity from waste organic matter and renewable biomass. During this project the concept of microbial electrosynthesis, a novel form of artificial photosynthesis for the direct production of fuels and other organic commodities from carbon dioxide and water was also developed and research was expanded into this area as well.
Kohut, Sviataslau V.; Staroverov, Viktor N.; Ryabinkin, Ilya G.
2014-05-14
We describe a method for constructing a hierarchy of model potentials approximating the functional derivative of a given orbital-dependent exchange-correlation functional with respect to the electron density. Each model is derived by assuming a particular relationship between the self-consistent solutions of the Kohn-Sham (KS) and generalized Kohn-Sham (GKS) equations for the same functional. In the KS scheme, the functional is differentiated with respect to the density; in the GKS scheme, with respect to the orbitals. The lowest-level approximation is the orbital-averaged effective potential (OAEP) built with the GKS orbitals. The second-level approximation, termed the orbital-consistent effective potential (OCEP), is based on the assumption that the KS and GKS orbitals are the same. It has the form of the OAEP plus a correction term. The highest-level approximation is the density-consistent effective potential (DCEP), derived under the assumption that the KS and GKS electron densities are equal. The analytic expression for a DCEP is the OCEP formula augmented with kinetic-energy-density-dependent terms. In the case of the exact-exchange functional, the OAEP is the Slater potential, the OCEP is roughly equivalent to the localized Hartree-Fock approximation and related models, and the DCEP is practically indistinguishable from the true optimized effective potential for exact exchange. All three levels of the proposed hierarchy require solutions of the GKS equations as input and have the same affordable computational cost.
SIAM conference on optimization
Not Available
1992-05-10
Abstracts are presented of 63 papers on the following topics: large-scale optimization, interior-point methods, algorithms for optimization, problems in control, network optimization methods, and parallel algorithms for optimization problems.
THE COSMIC ORIGINS SPECTROGRAPH
Green, James C.; Michael Shull, J.; Snow, Theodore P.; Stocke, John [Department of Astrophysical and Planetary Sciences, University of Colorado, 391-UCB, Boulder, CO 80309 (United States); Froning, Cynthia S.; Osterman, Steve; Beland, Stephane; Burgh, Eric B.; Danforth, Charles; France, Kevin [Center for Astrophysics and Space Astronomy, University of Colorado, 389-UCB, Boulder, CO 80309 (United States); Ebbets, Dennis [Ball Aerospace and Technologies Corp., 1600 Commerce Street, Boulder, CO 80301 (United States); Heap, Sara H. [NASA Goddard Space Flight Center, Code 681, Greenbelt, MD 20771 (United States); Leitherer, Claus; Sembach, Kenneth [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Linsky, Jeffrey L. [JILA, University of Colorado and NIST, Boulder, CO 80309-0440 (United States); Savage, Blair D. [Department of Astronomy, University of Wisconsin-Madison, 475 North Charter Street, Madison, WI 53706 (United States); Siegmund, Oswald H. W. [Astronomy Department, University of California, Berkeley, CA 94720 (United States); Spencer, John; Alan Stern, S. [Southwest Research Institute, 1050 Walnut Street, Suite 300, Boulder, CO 80302 (United States); Welsh, Barry [Space Sciences Laboratory, University of California, 7 Gauss Way, Berkeley, CA 94720 (United States); and others
2012-01-01
The Cosmic Origins Spectrograph (COS) is a moderate-resolution spectrograph with unprecedented sensitivity that was installed into the Hubble Space Telescope (HST) in 2009 May, during HST Servicing Mission 4 (STS-125). We present the design philosophy and summarize the key characteristics of the instrument that will be of interest to potential observers. For faint targets, with flux F{sub {lambda}} {approx} 1.0 x 10{sup -14} erg cm{sup -2} s{sup -1} A{sup -1}, COS can achieve comparable signal-to-noise (when compared to Space Telescope Imaging Spectrograph echelle modes) in 1%-2% of the observing time. This has led to a significant increase in the total data volume and data quality available to the community. For example, in the first 20 months of science operation (2009 September-2011 June) the cumulative redshift pathlength of extragalactic sight lines sampled by COS is nine times that sampled at moderate resolution in the 19 previous years of Hubble observations. COS programs had observed 214 distinct lines of sight suitable for study of the intergalactic medium as of 2011 June. COS has measured, for the first time with high reliability, broad Ly{alpha} absorbers and Ne VIII in the intergalactic medium, and observed the He II reionization epoch along multiple sightlines. COS has detected the first CO emission and absorption in the UV spectra of low-mass circumstellar disks at the epoch of giant planet formation, and detected multiple ionization states of metals in extra-solar planetary atmospheres. In the coming years, COS will continue its census of intergalactic gas, probe galactic and cosmic structure, and explore physics in our solar system and Galaxy.
An optimization framework for workplace charging strategies ...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
addressing different eligible levels of charging technology and employees' demographic distributions. The optimization model is to minimize the lifetime cost of...
Origin of primordial magnetic fields
Souza, Rafael S. de; Opher, Reuven
2008-02-15
Magnetic fields of intensities similar to those in our galaxy are also observed in high-redshift galaxies, where a mean-field dynamo would not have had time to produce them. Therefore, a primordial origin is indicated. It has been suggested that magnetic fields were created at various primordial eras: during inflation, the electroweak phase transition, the quark-hadron phase transition (QHPT), during the formation of the first objects, and during reionization. We suggest here that the large-scale fields ~μG, observed in galaxies at both high and low redshifts by Faraday rotation measurements (FRMs), have their origin in the electromagnetic fluctuations that naturally occurred in the dense hot plasma that existed just after the QHPT. We evolve the predicted fields to the present time. The size of the region containing a coherent magnetic field increased due to the fusion of smaller regions. Magnetic fields (MFs) ~10 μG over a comoving ~1 pc region are predicted at redshift z ~ 10. These fields are orders of magnitude greater than those predicted in previous scenarios for creating primordial magnetic fields. Line-of-sight average MFs ~10⁻² μG, valid for FRMs, are obtained over a 1 Mpc comoving region at the redshift z ~ 10. In the collapse to a galaxy (comoving size ~30 kpc) at z ~ 10, the fields are amplified to ~10 μG. This indicates that the MFs created immediately after the QHPT (10⁻⁴ s), predicted by the fluctuation-dissipation theorem, could be the origin of the ~μG fields observed by FRMs in galaxies at both high and low redshifts. Our predicted MFs are shown to be consistent with present observations. We discuss the possibility that the predicted MFs could cause non-negligible deflections of ultrahigh-energy cosmic rays and help create the observed isotropic distribution of their incoming directions. We also discuss the importance of the volume average magnetic field
Optimal lattice-structured materials
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Messner, Mark C.
2016-07-09
This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.
Putting combustion optimization to work
Spring, N.
2009-05-15
New plants and plants that are retrofitting can benefit from combustion optimization. Boiler tuning and optimization can complement each other. The continuous emissions monitoring system (CEMS) and tunable diode laser absorption spectroscopy (TDLAS) can be used for optimization. NeuCO's CombustionOpt neural network software can determine optimal fuel and air set points. Babcock and Wilcox Power Generation Group Inc.'s Flame Doctor can be used in conjunction with other systems to diagnose and correct coal-fired burner performance. The four units of the Colstrip power plant in Colstrip, Montana were recently fitted with combustion optimization systems based on advanced model predictive multivariable controls (MPCs), ABB's Predict & Control tool. Unit 4 of Tampa Electric's Big Bend plant in Florida is fitted with Emerson's SmartProcess fuzzy neural model based combustion optimization system. 1 photo.
Optimal Electric Utility Expansion
Energy Science and Technology Software Center (OSTI)
1989-10-10
SAGE-WASP is designed to find the optimal generation expansion policy for an electrical utility system. New units can be automatically selected from a user-supplied list of expansion candidates which can include hydroelectric and pumped storage projects. The existing system is modeled. The calculational procedure takes into account user restrictions to limit generation configurations to an area of economic interest. The optimization program reports whether the restrictions acted as a constraint on the solution. All expansion configurations considered are required to pass a user-supplied reliability criterion. The discount rate and escalation rate are treated separately for each expansion candidate and for each fuel type. All expenditures are separated into local and foreign accounts, and a weighting factor can be applied to foreign expenditures.
Optimal segmentation and packaging process
Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.
1999-01-01
A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation, and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
Penser Original Contract - Hanford Site
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Original contract issued on June 15, 2009. The following are links to Portable Document Format (PDF) format documents. You will need the Adobe Acrobat Reader in order to view the
Magnetic nematicity: A debated origin
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Vaknin, David
2016-01-22
Different experimental studies based on nuclear magnetic resonance and inelastic neutron scattering reach opposing conclusions regarding the origin of magnetic nematicity in iron chalcogenides.
Sootblowing optimization for improved boiler performance
James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J
2013-07-30
A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.
Sootblowing optimization for improved boiler performance
James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J.
2012-12-25
A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.
Ayad, G.; Barriere, T.; Gelin, J. C. [Femto-ST Institute/LMA, ENSMM, 26 Rue de l'Epitaphe, 25000 Besancon (France); Song, J. [Femto-ST Institute/LMA, ENSMM, 26 Rue de l'Epitaphe, 25000 Besancon (France); Department of Applied Mechanics and Engineering, Southwest Jiaotong University, 610031 Chengdu (China); Liu, B. [Department of Applied Mechanics and Engineering, Southwest Jiaotong University, 610031 Chengdu (China)
2007-05-17
The paper is concerned with optimization and parametric identification of the Powder Injection Molding process, which consists first of injection of a powder mixture with a polymer binder and then of sintering of the resulting powder parts by solid-state diffusion. The first part describes an original methodology to optimize the injection stage based on the combination of Design Of Experiments and an adaptive Response Surface Modeling. The second part of the paper describes the identification strategy proposed for the sintering stage, using the identification of sintering parameters from dilatometer curves followed by the optimization of the sintering process. The proposed approaches are applied to the optimization of manufacturing of a ceramic femoral implant. It is demonstrated that the proposed approach gives satisfactory results.
An Optimized Swinging Door Algorithm for Wind Power Ramp Event...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
... An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas ...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Performance and Optimization Performance and Optimization Benchmarking Software on Hopper and Carver PURPOSE Test the performance impact of multithreading with representative...
Wind Electrolysis: Hydrogen Cost Optimization
Saur, G.; Ramsden, T.
2011-05-01
This report describes a hydrogen production cost analysis of a collection of optimized central wind-based water electrolysis production facilities. The basic modeled wind electrolysis facility includes a number of low-temperature electrolyzers and a co-located wind farm encompassing a number of 3-MW wind turbines that provide electricity for the electrolyzer units.
Modal test optimization using VETO (Virtual Environment for Test Optimization)
Klenke, S.E.; Reese, G.M.; Schoof, L.A.; Shierling, C.
1996-01-01
We present a software environment integrating analysis and test-based models to support optimal modal test design through a Virtual Environment for Test Optimization (VETO). A goal in developing this software tool is to provide test and analysis organizations with a capability of mathematically simulating the complete test environment in software. Derived models of test equipment, instrumentation and hardware can be combined within the VETO to provide the user with a unique analysis and visualization capability to evaluate new and existing test methods. The VETO assists analysis and test engineers in maximizing the value of each modal test. It is particularly advantageous for structural dynamics model reconciliation applications. The VETO enables an engineer to interact with a finite element model of a test object to optimally place sensors and exciters and to investigate the selection of data acquisition parameters needed to conduct a complete modal survey. Additionally, the user can evaluate the use of different types of instrumentation such as filters, amplifiers and transducers for which models are available in the VETO. The dynamic response of most of the virtual instruments (including the device under test) is modeled in the state space domain. Design of modal excitation levels and appropriate test instrumentation are facilitated by the VETO's ability to simulate such features as unmeasured external inputs, A/D quantization effects, and electronic noise. Measures of the quality of the experimental design, including the Modal Assurance Criterion, and the Normal Mode Indicator Function are available.
Origin of magnetic fields in galaxies
Souza, Rafael S. de; Opher, Reuven
2010-03-15
Microgauss magnetic fields are observed in all galaxies at low and high redshifts. The origin of these intense magnetic fields is a challenging question in astrophysics. We show here that the natural plasma fluctuations in the primordial Universe (assumed to be random), predicted by the fluctuation-dissipation theorem, predict ~0.034 μG fields over ~0.3 kpc regions in galaxies. If the dipole magnetic fields predicted by the fluctuation-dissipation theorem are not completely random, microgauss fields over regions ≳ 0.34 kpc are easily obtained. The model is thus a strong candidate for resolving the problem of the origin of magnetic fields in ≲ 10⁹ years in high-redshift galaxies.
Original Workshop Proposal and Description
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Notes for Vis Requirements » Original Workshop Proposal and Description. Visualization Requirements for Computational Science and Engineering Applications: Proposal for a DOE Workshop to Be Held at the Berkeley Marina Radisson Hotel, Berkeley, California, June 5, 2002 (date and location are tentative). Workshop Co-organizers: Bernd Hamann, University of California-Davis / Lawrence Berkeley Nat'l Lab.; E. Wes Bethel, Lawrence Berkeley Nat'l Lab.
Energy Science and Technology Software Center (OSTI)
2007-03-01
Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.
Thermodynamic Metrics and Optimal Paths
Sivak, David; Crooks, Gavin
2012-05-08
A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
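The key relations of this framework can be written compactly. The following sketch reconstructs them from the abstract's description of the linear-response friction tensor; the notation is assumed, not taken verbatim from the paper:

```latex
% Mean excess (dissipated) work of a finite-time protocol \lambda(t), t \in [0,\tau],
% in the linear-response regime, with friction tensor \zeta(\lambda):
\langle W_{\mathrm{ex}} \rangle \;\approx\; \int_0^{\tau}
  \dot{\lambda}(t)^{\mathsf{T}}\, \zeta\!\bigl(\lambda(t)\bigr)\, \dot{\lambda}(t)\, \mathrm{d}t .
% The associated thermodynamic length
\mathcal{L} \;=\; \int_0^{\tau}
  \sqrt{\dot{\lambda}(t)^{\mathsf{T}}\, \zeta\bigl(\lambda(t)\bigr)\, \dot{\lambda}(t)}\;\mathrm{d}t
% satisfies \langle W_{\mathrm{ex}} \rangle \ge \mathcal{L}^2/\tau (by Cauchy-Schwarz),
% with equality for geodesics of the metric \zeta traversed at constant speed;
% these geodesics are the optimal protocols referred to in the abstract.
```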
Optimized nanoporous materials.
Braun, Paul V.; Langham, Mary Elizabeth; Jacobs, Benjamin W.; Ong, Markus D.; Narayan, Roger J.; Pierson, Bonnie E.; Gittard, Shaun D.; Robinson, David B.; Ham, Sung-Kyoung; Chae, Weon-Sik; Gough, Dara V.; Wu, Chung-An Max; Ha, Cindy M.; Tran, Kim L.
2009-09-01
Nanoporous materials have maximum practical surface areas for electrical charge storage; every point in an electrode is within a few atoms of an interface at which charge can be stored. Metal-electrolyte interfaces make best use of surface area in porous materials. However, ion transport through long, narrow pores is slow. We seek to understand and optimize the tradeoff between capacity and transport. Modeling and measurements of nanoporous gold electrodes have allowed us to determine design principles, including the fact that these materials can deplete salt from the electrolyte, increasing resistance. We have developed fabrication techniques to demonstrate architectures inspired by these principles that may overcome identified obstacles. A key concept is that electrodes should be as close together as possible; this is likely to involve an interpenetrating pore structure. However, this may prove extremely challenging to fabricate at the finest scales; a hierarchically porous structure can be a worthy compromise.
Price, R; Veltchev, I; Cherian, G; Ma, C
2014-06-01
Purpose: Multiple publications exist concerning fixed-jaw utilization to avoid linac carriage shifts and reduce intensity modulated radiotherapy (IMRT) treatment times. The purpose of this work is to demonstrate delivery QA discrepancies and illustrate the need for improved treatment planning system (TPS) commissioning for non-routine use. Methods: A 6 cm diameter spherical target was delineated on a virtual phantom containing the Iba Matrixx linear array within the Varian Eclipse TPS. Optimization was performed for target coverage for the following 3 scenarios: a single open, zero-degree field where the X and Y jaws completely cover the target; the same field using an asymmetric, fixed-jaw technique where the upper Y jaw does not cover the superior 2 cm of the target; and both of the aforementioned directed at the target at 315 and 45 degree gantry angles, respectively. This final orientation was also irradiated on a linac for delivery analysis. A sarcoma patient case was also analyzed where the fixed-jaw technique was utilized for kidney sparing. Results: The open beam results were as predicted, but the fixed-jaw results demonstrate a pronounced fluence increase along the asymmetric, upper jaw. Analysis of the delivery of the combined beam plan resulted in 83% of pixels evaluated passing gamma criteria of 3%, 3 mm DTA. Analysis for the sarcoma patient, in the plane of the shielded kidney, indicated 93% passing, although the maximum dose discrepancies in this region were approximately 23%. Conclusion: Optimization within the target is routinely performed using MLC leaf-end characteristics. The fixed-jaw technique forces optimization of target coverage to utilize the penumbra profiles of the associated beam-defining jaw. If the profiles were collected using a common 0.125 cc ionization chamber, the resolution may be insufficient, resulting in a plan-vs.-delivery mismatch. It is recommended that high-resolution beam characteristics be considered when non-routine planning
Constant-complexity stochastic simulation algorithm with optimal binning
Sanft, Kevin R.; Othmer, Hans G.
2015-08-21
At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
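The direct-method SSA that the table-based variant improves upon can be sketched in a few lines. This is a minimal illustrative implementation of Gillespie's original algorithm for a two-channel birth-death process, not the constant-complexity formulation the paper describes: each step scans every reaction channel, which is exactly the linear cost the paper's binning strategy removes.

```python
import math
import random

def ssa_direct(rates, x0, t_end, seed=0):
    # Gillespie direct method for a birth-death process:
    #   channel 0: 0 -> X  (constant rate `birth`)
    #   channel 1: X -> 0  (rate `death` * x)
    # Each step recomputes and scans all propensities, so the per-step cost
    # scales with the number of channels.
    birth, death = rates
    rng = random.Random(seed)
    t, x = 0.0, x0
    history = [(t, x)]
    while t < t_end:
        a = [birth, death * x]                     # channel propensities
        a0 = sum(a)
        if a0 == 0.0:                              # no reaction possible
            break
        t += -math.log(1.0 - rng.random()) / a0    # exponential waiting time
        x += 1 if rng.random() * a0 < a[0] else -1 # choose channel proportionally
        history.append((t, x))
    return history
```

Because the trajectory is exact in distribution, long runs settle around the steady-state mean birth/death; the table-based method of the paper produces statistically identical trajectories at constant cost per step.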
Murphy, Edward
2012-11-20
The world around us is made of atoms. Did you ever wonder where these atoms came from? How was the gold in our jewelry, the carbon in our bodies, and the iron in our cars made? In this lecture, we will trace the origin of a gold atom from the Big Bang to the present day, and beyond. You will learn how the elements were forged in the nuclear furnaces inside stars, and how, when they die, these massive stars spread the elements into space. You will learn about the origin of the building blocks of matter in the Big Bang, and we will speculate on the future of the atoms around us today.
Murphy, Edward
2014-08-06
The world around us is made of atoms. Did you ever wonder where these atoms came from? How was the gold in our jewelry, the carbon in our bodies, and the iron in our cars made? In this lecture, we will trace the origin of a gold atom from the Big Bang to the present day, and beyond. You will learn how the elements were forged in the nuclear furnaces inside stars, and how, when they die, these massive stars spread the elements into space. You will learn about the origin of the building blocks of matter in the Big Bang, and we will speculate on the future of the atoms around us today.
Optimal segmentation and packaging process
Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.
1999-08-10
A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation, and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.
CASL - Materials and Performance Optimization (MPO)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Materials and Performance Optimization (MPO) The Materials and Performance Optimization (MPO) focus area within CASL has recently developed and released a 3D modeling framework known as MAMBA (MPO Advanced Model for Boron Analysis) to predict CRUD deposition on nuclear fuel rods. CRUD, which refers to Chalk River Unidentified Deposit, is predominantly a nickel-ferrite spinel corrosion product that deposits on hot fuel clad surfaces in nuclear reactors. CRUD has a lower thermal conductivity than
Microscopic origin of volume modulus inflation
Cicoli, Michele; Muia, Francesco; Pedro, Francisco Gil
2015-12-21
High-scale string inflationary models are in well-known tension with low-energy supersymmetry. A promising solution involves models where the inflaton is the volume of the extra dimensions so that the gravitino mass relaxes from large values during inflation to smaller values today. We describe a possible microscopic origin of the scalar potential of volume modulus inflation by exploiting non-perturbative effects, string loop and higher derivative perturbative corrections to the supergravity effective action together with contributions from anti-branes and charged hidden matter fields. We also analyse the relation between the size of the flux superpotential and the position of the late-time minimum and the inflection point around which inflation takes place. We perform a detailed study of the inflationary dynamics for a single modulus and a two moduli case where we also analyse the sensitivity of the cosmological observables on the choice of initial conditions.
origins.indd | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
origins.indd (6.28 MB). More Documents & Publications: Fehner and Gosling, Origins of the Nevada Test Site; Fehner and Gosling, Atmospheric Nuclear Weapons Testing, 1951-1963; Battlefield of the Cold War: The Nevada Test Site, Volume I; NTS_History.indd
dynamic-origin-destination-matrix
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Dynamic Origin-Destination Matrix Estimation in TRANSIMS Using Direction-Guided Parallel Heuristic Search Algorithms. Adel W. Sadek, Ph.D., Associate Professor, University at Buffalo, The State University of New York, 233 Ketter Hall, Buffalo, NY 14260. Phone: (716) 645-4367. FAX: (716) 645-3733. List of Authors: Adel W. Sadek, Ph.D.; Shan Huang; Liya Guo; University at Buffalo, The State
Spearmint - Bayesian Hyperparameter Optimization
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Spearmint - Bayesian Hyperparameter Optimization. Spearmint is a Python Bayesian optimization codebase. Using Spearmint: module load spearmint; then run spearmint -c path/to/config.json. config.json must have the following form: { "language" : "PYTHON", "experiment-name" : "any name you want", "polling-time" : 1, "resources" : { "my-machine" : { "scheduler" : "local", "max-concurrent"
Cold Climates Heat Pump Design Optimization
Abdelaziz, Omar [ORNL]; Shen, Bo [ORNL]
2012-01-01
Heat pumps provide an efficient heating method; however, they suffer from severe capacity and performance degradation at low ambient conditions, which has deterred market penetration in cold climates. There is a continuing effort to find an efficient air-source cold climate heat pump that maintains acceptable capacity and performance at low ambient conditions. Systematic optimization techniques provide a reliable approach for the design of such systems. This paper presents a step-by-step approach for the design optimization of cold climate heat pumps. We first describe the optimization problem: objective function, constraints, and design space. Then we illustrate how to perform this design optimization using an open-source, publicly available optimization toolbox. The response of the heat pump design was evaluated using a validated component-based vapor compression model. This model was treated as a black box within the optimization framework. Optimum designs for different system configurations are presented. These optimum results were further analyzed to understand the performance tradeoffs and selection criteria. The paper ends with a discussion of the use of systematic optimization for cold climate heat pump design.
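The workflow described in that abstract (define an objective, constraints, and a design space, then let an optimizer query a black-box performance model) can be sketched as follows. The heat-pump model and every coefficient here are invented for illustration only; they stand in for the validated vapor-compression model and the optimization toolbox used in the paper.

```python
import random

def cop_model(comp_speed, airflow):
    # Hypothetical black-box heat-pump model returning (capacity, COP) at a
    # low ambient temperature. The formulas are invented placeholders, not
    # the paper's validated component-based model.
    capacity = 0.8 * comp_speed + 2.0 * airflow
    cop = 4.0 - 0.002 * comp_speed - 0.5 * (airflow - 1.5) ** 2
    return capacity, cop

def optimize_design(n_trials=2000, min_capacity=50.0, seed=0):
    # Random search over the design space, rejecting designs that violate
    # the capacity constraint: a simple stand-in for a toolbox optimizer.
    rng = random.Random(seed)
    best, best_cop = None, float("-inf")
    for _ in range(n_trials):
        speed = rng.uniform(20.0, 120.0)   # compressor speed (illustrative units)
        airflow = rng.uniform(0.5, 3.0)    # indoor airflow (illustrative units)
        capacity, cop = cop_model(speed, airflow)
        if capacity >= min_capacity and cop > best_cop:
            best, best_cop = (speed, airflow), cop
    return best, best_cop
```

Treating the model as a black box, as the paper does, means the optimizer only ever sees (inputs, outputs); swapping in a gradient-free toolbox method would not change the interface.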
Hopper Performance and Optimization
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Performance and Optimization. Compiler Comparisons: comparison of different compilers with different options on several benchmarks. Using OpenMP Effectively...
Optimizing Data Transfer Nodes
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Optimizing Data Transfer Nodes using Packet Pacing Nathan Hanford University of California ... An important performance problem that we foresee with Data Transfer Nodes (DTNs) in the ...
Optimize Parallel Pumping Systems
Broader source: Energy.gov [DOE]
This tip sheet describes how to optimize the performance of multiple pumps operating continuously as part of a parallel pumping system.
Blasi, Pasquale [INAF/Arcetri, Italy and Fermilab]
2010-01-08
Cosmic rays reach the Earth from space with energies of up to more than 10²⁰ eV, carrying information on the most powerful particle accelerators that Nature has been able to assemble. Understanding where and how cosmic rays originate has required almost one century of investigations, and, although the last word is not written yet, recent observations and theory seem now to fit together to provide us with a global picture of the origin of cosmic rays of unprecedented clarity. Here we will describe what we learned from recent observations of astrophysical sources (such as supernova remnants and active galaxies) and we will illustrate what these observations tell us about the physics of particle acceleration and transport. We will also discuss the 'end' of the Galactic cosmic ray spectrum, which bridges our attention towards the so-called ultra-high-energy cosmic rays (UHECRs). At ~10²⁰ eV the gyration scale of cosmic rays in cosmic magnetic fields becomes large enough to allow us to point back to their sources, thereby allowing us to perform 'cosmic ray astronomy', as confirmed by the recent results obtained with the Pierre Auger Observatory. We will discuss the implications of these observations for the understanding of UHECRs, as well as some questions which will likely remain unanswered and will be the target of the next generation of cosmic ray experiments.
Stillwater Hybrid Geo-Solar Power Plant Optimization Analyses...
Office of Scientific and Technical Information (OSTI)
The validated Stillwater hybrid plant models were used to evaluate hybrid plant operating strategies including turbine IGV position optimization, ACC fan speed and turbine IGV ...
Oneida Tribe of Indians of Wisconsin- 2011 Energy Optimization Project
Broader source: Energy.gov [DOE]
The creation of this Oneida Nation Energy Optimization (ONEO) model is the next stage in the living document known as the Oneida Energy Security Plan.
Optimal design of reverse osmosis module networks
Maskan, F.; Wiley, D.E.; Johnston, L.P.M.; Clements, D.J.
2000-05-01
The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, capital cost for the process units, and operating costs associated with energy consumption and maintenance. Several dual-stage reverse osmosis systems were optimized and compared. It was found that optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.
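A toy version of that profit-maximizing formulation can illustrate the tradeoff. All coefficients below are invented placeholders (not values from the study), and a small exhaustive grid search stands in for the constrained multivariable nonlinear optimizer.

```python
def annual_profit(pressure, area, *, water_price=2.0, energy_cost=0.02,
                  permeability=1.0, osmotic=28.0, capital_rate=12.0):
    # Toy annual-profit model for a single RO stage; every coefficient is an
    # illustrative placeholder in arbitrary units.
    flux = permeability * max(pressure - osmotic, 0.0)  # flux ~ net driving pressure
    permeate = flux * area                              # annual permeate production
    revenue = water_price * permeate
    energy = energy_cost * pressure * permeate          # pumping cost grows with pressure
    capital = capital_rate * area                       # membrane/module capital charge
    return revenue - energy - capital

def best_design(pressures, areas):
    # Exhaustive search over a small design grid, standing in for the
    # constrained multivariable nonlinear optimization used in the study.
    return max(((p, a) for p in pressures for a in areas),
               key=lambda pa: annual_profit(*pa))
```

With these placeholders, revenue grows linearly with driving pressure while pumping cost grows roughly quadratically, so an interior optimum in operating pressure emerges, while profit stays monotone in membrane area, echoing the study's observation that the most productive designs dominate.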
POET: Parameterized Optimization for Empirical Tuning
Yi, Q; Seymour, K; You, H; Vuduc, R; Quinlan, D
2007-01-29
The excessive complexity of both machine architectures and applications has made it difficult for compilers to statically model and predict application behavior. This observation motivates the recent interest in performance tuning using empirical techniques. We present a new embedded scripting language, POET (Parameterized Optimization for Empirical Tuning), for parameterizing complex code transformations so that they can be empirically tuned. The POET language aims to significantly improve the generality, flexibility, and efficiency of existing empirical tuning systems. We have used the language to parameterize and empirically tune three loop optimizations (interchange, blocking, and unrolling) for two linear algebra kernels. We show experimentally that the time required to tune these optimizations using POET, which does not require any program analysis, is significantly shorter than when using a full compiler-based source-code optimizer that performs sophisticated program analysis and optimizations.
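As a rough Python illustration of the kind of transformation POET parameterizes (this is not POET syntax), here is a matrix multiply with a tunable blocking factor B; an empirical tuner would sweep B and keep the fastest variant:

```python
def matmul_blocked(A, B_mat, B=32):
    """Cache-blocked matrix multiply; B is the tunable block size."""
    n = len(A)
    C = [[0.0] * n for _ in range(n)]
    for ii in range(0, n, B):
        for kk in range(0, n, B):
            for jj in range(0, n, B):
                for i in range(ii, min(ii + B, n)):
                    for k in range(kk, min(kk + B, n)):
                        a_ik = A[i][k]
                        for j in range(jj, min(jj + B, n)):
                            C[i][j] += a_ik * B_mat[k][j]
    return C

# Sanity check: multiplying by the identity leaves the matrix unchanged.
I4 = [[1.0 if i == j else 0.0 for j in range(4)] for i in range(4)]
M = [[float(4 * i + j) for j in range(4)] for i in range(4)]
out = matmul_blocked(I4, M, B=2)
```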
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Optimization Performance: Running Jobs Efficiently (defines job efficiency and how to measure the efficiency of your jobs); PDSF IO Monitoring (plots of continuous IO monitoring for the eliza file systems and project). Last edited: 2016-04-29 11:35:20
Crowe, B.; Yucel, V.; Rawlinson, S.; Black, P.; Carilli, J.; DiSanza, F.
2002-02-25
The U.S. Department of Energy (DOE), National Nuclear Security Administration Nevada Operations Office (NNSA/NV) operates and maintains two active facilities on the Nevada Test Site (NTS) that dispose of defense-generated low-level radioactive waste (LLW), mixed radioactive waste, and "classified waste" in shallow trenches and pits. The operation and maintenance of the LLW disposal sites are self-regulated by the DOE under DOE Order 435.1. This Order requires formal review of a performance assessment (PA) and composite analysis (CA; an assessment of all interacting radiological sources) for each LLW disposal system, followed by an active maintenance program that extends through and beyond the site closure program. The Nevada disposal facilities continue to receive NTS-generated LLW and defense-generated LLW from across the DOE complex. The PA/CAs for the sites have been conditionally approved, and the facilities are now under a formal maintenance program that requires testing of conceptual models, quantifying and attempting to reduce uncertainty, and implementing confirmatory and long-term background monitoring, all leading to eventual closure of the disposal sites. To streamline and reduce the cost of the maintenance program, the NNSA/NV is converting the deterministic PA/CAs to probabilistic models using GoldSim, a probabilistic simulation computer code. The output of the probabilistic models will provide expanded information supporting long-term decision objectives for the NTS disposal sites.
Nature and Origin of the Cuprate Pseudogap
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Nature and Origin of the Cuprate Pseudogap Nature and Origin of the Cuprate Pseudogap Print Wednesday, 30 May 2007 00:00 The workings of high-temperature superconductive (HTSC)...
Energy Science and Technology Software Center (OSTI)
2014-05-13
ROL provides interfaces to and implementations of algorithms for gradient-based unconstrained and constrained optimization. ROL can be used to optimize the response of any client simulation code that evaluates scalar-valued response functions. If the client code can provide gradient information for the response function, ROL will take advantage of it, resulting in faster runtimes. ROL's interfaces are matrix-free; in other words, ROL only uses evaluations of scalar-valued and vector-valued functions. ROL can be used to solve optimal design problems and inverse problems based on a variety of simulation software.
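A minimal Python sketch of the matrix-free idea (all names here are invented, not ROL's API): the optimizer touches the client code only through a scalar-valued response function and, optionally, its gradient:

```python
def minimize_matrix_free(f, grad, x0, step=0.1, iters=200):
    """Gradient descent driven purely by function/gradient callbacks;
    no matrices are ever formed, mirroring a matrix-free interface."""
    x = list(x0)
    for _ in range(iters):
        g = grad(x)
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# "Client simulation code": a scalar response function and its gradient.
f = lambda x: (x[0] - 3.0) ** 2 + (x[1] + 1.0) ** 2
grad = lambda x: [2.0 * (x[0] - 3.0), 2.0 * (x[1] + 1.0)]
x_star = minimize_matrix_free(f, grad, [0.0, 0.0])
```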
Dalziel, I.W.D. (Inst. for Geophysics)
1992-01-01
Laurentia, the Precambrian core of the North American continent, is surrounded by late Precambrian rift systems and therefore constitutes a 'suspect terrane'. A geometric and geological fit can be achieved between the Atlantic margin of Laurentia and the Pacific margin of the Gondwana craton. The enigmatic Arequipa massif along the southern Peruvian coast, which yields ca. 2.0 Ga radiometric ages, is juxtaposed with the Makkovik-Ketilidian province of the same age range in Labrador and southern Greenland. The Grenville belt continues beneath the ensialic Andes of the present day to join up with the 1.3-1.0 Ga San Ignacio and Sunsas-Aguapei orogens of the Transamazonian craton. Together with the recent identification of possible continuations of the Grenville orogen in East Antarctica and of the Taconic Appalachians in southern South America, the fit supports suggestions that Laurentia originated between East Antarctica-Australia and embryonic South America prior to the opening of the Pacific Ocean basin and amalgamation of the Gondwana Cordilleran and Appalachian margins. This implies that there may have been two supercontinents during the Neoproterozoic, before and after opening of the Pacific Ocean. As Laurentia and Gondwana appear to have collided on at least two occasions during the Paleozoic, this scenario calls into question the existence of so-called supercontinental cycles. The Arica bight of the present day may reflect a primary reentrant in the South American continental margin that controlled subduction processes along the Andean margin and eventually led to uplift of the Altiplano.
Synthesis of optimal adsorptive carbon capture processes.
Chang, Y.; Cozad, A.; Kim, H.; Lee, A.; Vouzis, P.; Konda, M.; Simon, A.; Sahinidis, N.; Miller, D.
2011-01-01
Solid sorbent carbon capture systems have the potential to require significantly lower regeneration energy compared to aqueous monoethanolamine (MEA) systems. To date, the majority of work on solid sorbents has focused on developing the sorbent materials themselves. In order to advance these technologies, it is necessary to design systems that can exploit the full potential and unique characteristics of these materials. The Department of Energy (DOE) recently initiated the Carbon Capture Simulation Initiative (CCSI) to develop computational tools to accelerate the commercialization of carbon capture technology. Solid sorbents are the subject of the first Industry Challenge Problem considered under this initiative. An early goal of the initiative is to demonstrate a superstructure-based framework to synthesize an optimal solid sorbent carbon capture process. For a given solid sorbent, there are a number of potential reactors and reactor configurations, consisting of various fluidized bed reactors, moving bed reactors, and fixed bed reactors. Detailed process models for these reactors have been developed using Aspen Custom Modeler; however, such models are computationally intractable for large optimization-based process synthesis. Thus, in order to facilitate the use of these models for process synthesis, we have developed an approach for generating simple algebraic surrogate models that can be used in an optimization formulation. This presentation will describe the superstructure formulation, which uses these surrogate models to choose among various process alternatives, and will describe the resulting optimal process configuration.
TOOLKIT FOR ADVANCED OPTIMIZATION
Energy Science and Technology Software Center (OSTI)
2000-10-13
The TAO project focuses on the development of software for large-scale optimization problems. TAO uses an object-oriented design to create a flexible toolkit with a strong emphasis on the reuse of external tools where appropriate. Our design enables bi-directional connection to lower-level linear algebra support (for example, parallel sparse matrix data structures) as well as higher-level application frameworks. The Toolkit for Advanced Optimization (TAO) is aimed at the solution of large-scale optimization problems on high-performance architectures. Our main goals are portability, performance, scalable parallelism, and an interface independent of the architecture. TAO is suitable for both single-processor and massively parallel architectures. The current version of TAO has algorithms for unconstrained and bound-constrained optimization.
Library for Nonlinear Optimization
Energy Science and Technology Software Center (OSTI)
2001-10-09
OPT++ is a C++ object-oriented library for nonlinear optimization. This incorporates an improved implementation of an existing capability and two new algorithmic capabilities based on existing journal articles and freely available software.
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Kawase, Mitsuhiro
2009-11-22
The zipped file contains a directory of data and routines used in the NNMREC turbine depth optimization study (Kawase et al., 2011), and calculation results thereof. For further info, please contact Mitsuhiro Kawase at kawase@uw.edu. Reference: Mitsuhiro Kawase, Patricia Beba, and Brian Fabien (2011), Finding an Optimal Placement Depth for a Tidal In-Stream Conversion Device in an Energetic, Baroclinic Tidal Channel, NNMREC Technical Report.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Optimization Performance: Compiler Comparisons (comparison of different compilers with different options on several benchmarks); Using OpenMP Effectively (performance implications and case studies of codes combining MPI and OpenMP); Reordering MPI Ranks (reordering MPI ranks can result in improved application performance depending on the communication patterns of the application); Application Performance Variability on Hopper (how an application is
Optimized Triple-Junction Solar Cells Using Inverted Metamorphic Approach (Presentation)
Geisz, J. F.
2008-11-01
Record efficiencies were achieved with triple-junction inverted metamorphic designs; modeling is useful for optimization, and operating conditions should be considered before choosing a design.
Optimization and Control of Electric Power Systems
Lesieutre, Bernard C.; Molzahn, Daniel K.
2014-10-17
The analysis and optimization needs for planning and operation of the electric power system are challenging due to the scale and the form of the model representations. The connected network spans the continent, and the mathematical models are inherently nonlinear. Traditionally, computational limits have necessitated the use of very simplified models for grid analysis, and this has resulted in less secure operation, less efficient operation, or both. The research conducted in this project advances techniques for power system optimization problems that will enhance reliable and efficient operation. The results of this work appear in numerous publications and address different application problems, including optimal power flow (OPF), unit commitment, demand response, reliability margins, planning, and transmission expansion, as well as general tools and algorithms.
NUG Single Node Optimization Presentation.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Cray Opteron systems, PGI. GNU compilers were on Franklin, but at that time GNU Fortran optimization was poor. Next came Pathscale because of superior optimization. ...
Forecourt and Gas Infrastructure Optimization | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Forecourt and Gas Infrastructure Optimization: presentation by Bruce Kelly of Nexant at the Joint Meeting on Hydrogen Delivery Modeling and Analysis, May 8-9, 2007. deliv_analysis_kelly.pdf (113.91 KB). More Documents & Publications: H2A Hydrogen Delivery Infrastructure Analysis Models and Conventional Pathway Options Analysis Results - Interim Report; H2A Delivery Components Model and Analysis; Hydrogen Delivery Analysis Models
Energy Science and Technology Software Center (OSTI)
1998-07-01
GenOpt is a generic optimization program for nonlinear, constrained optimization. For evaluating the objective function, any simulation program that communicates over text files can be coupled to GenOpt without code modification. No analytic properties of the objective function are used by GenOpt. Optimization algorithms and numerical methods can be implemented in a library and shared among users. GenOpt offers an interface between the optimization algorithm and its kernel to make the implementation of new algorithms fast and easy. Different algorithms of constrained and unconstrained minimization can be added to a library. Algorithms for approximating derivatives and performing line search will be implemented. The objective function is evaluated as a black-box function by an external simulation program. The kernel of GenOpt deals with the data I/O, result storage and reporting, the interface to the external simulation program, and error handling. An abstract optimization class offers methods to interface the GenOpt kernel and the optimization algorithm library.
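The text-file coupling can be mimicked in a few lines of Python. Here a local function stands in for the external simulation program (in practice it would be launched with subprocess), and the file names and output format are invented for illustration:

```python
import os
import tempfile

def run_simulation(in_path, out_path):
    """Stand-in for an external simulator that reads/writes text files."""
    with open(in_path) as fh:
        x = float(fh.read().strip())
    with open(out_path, "w") as fh:
        fh.write("objective = %r\n" % ((x - 2.0) ** 2))

def evaluate(x, workdir):
    """Write the input file, run the 'simulation', parse the objective back."""
    in_path = os.path.join(workdir, "sim.in")
    out_path = os.path.join(workdir, "sim.out")
    with open(in_path, "w") as fh:
        fh.write(repr(x))
    run_simulation(in_path, out_path)
    with open(out_path) as fh:
        return float(fh.read().split("=")[1])

with tempfile.TemporaryDirectory() as d:
    # A coarse search stands in for GenOpt's algorithm library.
    best_x = min((k * 0.1 for k in range(50)), key=lambda x: evaluate(x, d))
```

Because the optimizer only reads and writes text files, the "simulation" can be swapped for any executable without changing the optimization code, which is the design point of the tool.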
Roughness Optimization at High Modes for GDP CHx Microshells
Theobald, M.; Dumay, B.; Chicanne, C.; Barnouin, J.; Legaie, O.; Baclet, P.
2004-03-15
For the 'Megajoule' Laser (LMJ) facility of the CEA, amorphous hydrogenated carbon (a-C:H) is the nominal ablator to be used for inertial confinement fusion (ICF) experiments. These capsules contain the fusible deuterium-tritium mixture needed to achieve ignition. Coatings are prepared by glow discharge polymerization (GDP) with trans-2-butene and hydrogen, and the film properties have been investigated. Laser fusion targets must have optimized characteristics: a diameter of about 2.4 mm for LMJ targets, a thickness up to 175 {mu}m, a sphericity and a thickness concentricity better than 99%, and outer and inner roughnesses lower than 20 nm at high modes. The surface finish of these laser fusion targets must be extremely smooth to minimize hydrodynamic instabilities. Movchan and Demchishin, and later Thornton, introduced a structure zone model (SZM) based on both evaporated and sputtered metals. They investigated the influence of base temperature and sputtering gas pressure on the structure and properties of thick polycrystalline coatings of nickel, titanium, tungsten, and aluminum oxide. An original cross-sectional analysis by atomic force microscopy (AFM) allows amorphous materials characterization and permits an analogy between the amorphous GDP material and the existing model (SZM). The purpose of this work is to understand the relationship between the deposition parameters, the growing structures, and the surface roughness. The coating structure as a function of deposition parameters was first studied on plane silicon substrates and then optimized on PAMS shells. By adjusting the coating parameters, the structures are modified and, in some cases, the high-mode roughness decreases dramatically.
Asynchronous parallel pattern search for nonlinear optimization
P. D. Hough; T. G. Kolda; V. J. Torczon
2000-01-01
Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10-50) and by expensive objective function evaluations, such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly coupled parallel machines, is not well suited to the more heterogeneous, loosely coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
Penser Original Contract (EM0003383) - Hanford Site
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
DOE-RL Contracts/Procurements Prime Contracts Penser Original Contract (EM0003383) DOE-RL Contracts/Procurements RL Contracts & Procurements Home Prime Contracts Current Solicitations Other Sources DOE RL Contracting Officers DOE RL Contracting Officer Representatives Penser Original Contract (EM0003383) Email Email Page | Print Print Page | Text Increase Font Size Decrease Font Size Original contract issued on Date September 15, 2014 The following are links to Portable Document Format (PDF)
Wasserman, H.; Lubeck, O.M.; Luo, Y.; Bassetti, F.
1997-11-01
In this paper the authors compare single-processor performance of the SGI Origin and PowerChallenge and utilize a previously reported performance model for hierarchical memory systems to explain the results. Both the Origin and PowerChallenge use the same microprocessor (MIPS R10000) but have significant differences in their memory subsystems. Their memory model includes the effect of overlap between CPU and memory operations and allows them to infer the individual contributions of all three improvements in the Origin's memory architecture and relate the effectiveness of each improvement to application characteristics.
Methodology for optimizing the development and operation of gas storage fields
Mercer, J.C.; Ammer, J.R.; Mroz, T.H.
1995-04-01
The Morgantown Energy Technology Center is pursuing the development of a methodology that uses geologic modeling and reservoir simulation for optimizing the development and operation of gas storage fields. Several Cooperative Research and Development Agreements (CRADAs) will serve as the vehicle to implement this product. CRADAs have been signed with National Fuel Gas and Equitrans, Inc. A geologic model is currently being developed for the Equitrans CRADA. Results from the CRADA with National Fuel Gas are discussed here. The first phase of the CRADA, based on original well data, was completed last year and reported at the 1993 Natural Gas RD&D Contractors Review Meeting. Phase 2 analysis was completed based on additional core and geophysical well log data obtained during a deepening/relogging program conducted by the storage operator. Good matches, within 10 percent, of wellhead pressure were obtained using a numerical simulator to history match 2 1/2 injection withdrawal cycles.
Laser scribe optimization study. Final report
Wannamaker, A.L.
1996-09-01
The laser scribe characterization/optimization project was initiated to better understand what factors influence response variables of the laser marking process. The laser marking system is utilized to indelibly identify weapon system components. Many components have limited field life, and traceability to production origin is critical. In many cases, the reliability of the weapon system and the safety of the users can be attributed to individual and subassembly component fabrication processes. Laser beam penetration of the substrate material may affect product function. The design agency for the DOE had requested that Federal Manufacturing and Technologies characterize the laser marking process and implement controls on critical process parameters.
McMordie Stoughton, Kate; Duan, Xiaoli; Wendel, Emily M.
2013-08-26
This technology evaluation was prepared by Pacific Northwest National Laboratory on behalf of the U.S. Department of Energy's Federal Energy Management Program (FEMP). The technology evaluation assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. This evaluation provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. The evaluation is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system optimization options, enabling them to make informed decisions during the system design process for either new projects or recommissioning of existing equipment. This evaluation is focused on commercial-sized RO systems generally treating more than 80 gallons per hour.
Lidar arc scan uncertainty reduction through scanning geometry optimization
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Wang, H.; Barthelmie, R. J.; Pryor, S. C.; Brown, G.
2015-10-07
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n-minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain, and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty scales with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30% of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span, and lowering the number of beams per arc scan. Large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation when arc scans are used for wind resource assessment.
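The quantitative rule of thumb above can be written down directly; the helper name is ours, not from the paper:

```python
def wind_speed_rel_error(turbulence_intensity):
    """Relative standard error of the 10 min mean wind speed from an arc scan,
    using the ~30%-of-turbulence-intensity scaling reported above."""
    return 0.3 * turbulence_intensity

# At a turbulence intensity of 0.10 (10%), the relative error is about 3%.
err = wind_speed_rel_error(0.10)
```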
Terascale Optimal PDE Simulations
David Keyes
2009-07-28
The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
On the origin of porphyritic chondrules
Blander, M.; Unger, L.; Pelton, A.; Ericksson, G.
1994-05-01
A computer program for the complex equilibria in a cooling nebular gas was used to explore a possible origin of porphyritic chondrules, the major class of chondrules in chondritic meteorites. It uses a method of accurately calculating the thermodynamic properties of molten multicomponent aluminosilicates, which deduces the silicate condensates versus temperature and pressure of a nebular gas. This program is coupled with a chemical equilibrium algorithm for systems with at least 1000 chemical species; it has a database of over 5000 solid, liquid, and gaseous species. The results are metastable subcooled liquid aluminosilicates with compositions resembling types IA and II porphyritic chondrules at two different temperatures at any pressure between 10^-2 and 1 (or possibly 10^-3 to 5) atm. The different types of chondrules (types I, II, III) could have been produced from the same gas and do not need a different gas for each apparent oxidation state; thus, the reheating of different solids to just below their liquidus temperatures in different locations, required by current models of porphyritic chondrule formation, is not necessary. Initiation of a stage of crystallization just below the liquidus is part of the natural crystallization (recalescence) process from metastable subcooled liquids and does not require an improbable heating mechanism. 2 tabs.
Modeling and Optimization of Superhydrophobic Condensation (Journal...
Office of Scientific and Technical Information (OSTI)
Research Org: Energy Frontier Research Centers (EFRC); Solid-State Solar-Thermal Energy Conversion Center (S3TEC) Sponsoring Org: USDOE SC Office of Basic Energy Sciences (SC-22) ...
Optimizing multiphase aquifer remediation using ITOUGH2
Finsterle, S.; Pruess, K.
1994-06-01
The T2VOC computer model for simulating the transport of organic chemical contaminants in non-isothermal multiphase systems has been coupled to the ITOUGH2 code which solves parameter optimization problems. This allows one to use nonlinear programming and simulated annealing techniques to solve groundwater management problems, i.e. the optimization of multiphase aquifer remediation. This report contains three illustrative examples to demonstrate the optimization of remediation operations by means of simulation-minimization techniques. The code iteratively determines an optimal remediation strategy (e.g. pumping schedule) which minimizes, for instance, pumping and energy costs, the time for cleanup, and residual contamination. While minimizing the objective function is straightforward, the relative weighting of different performance measures--e.g. pumping costs versus cleanup time versus residual contaminant content--is subject to a management decision process. The intended audience of this report is someone who is familiar with numerical modeling of multiphase flow of contaminants, and who might actually use T2VOC in conjunction with ITOUGH2 to optimize the design of aquifer remediation operations.
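The weighted objective behind this simulation-minimization approach can be sketched as follows. The weights and the two candidate schedules below are invented for illustration; as the abstract notes, the actual weighting is a management decision:

```python
def remediation_objective(pump_cost, cleanup_time, residual_mass,
                          w_cost=1.0, w_time=0.5, w_residual=2.0):
    """Weighted sum of pumping/energy cost, cleanup time, and residual
    contamination; the weights encode a management trade-off."""
    return (w_cost * pump_cost
            + w_time * cleanup_time
            + w_residual * residual_mass)

# Two hypothetical pumping schedules: A is cheap but slow and dirty,
# B pumps harder but cleans up faster and leaves less contaminant.
plan_a = remediation_objective(pump_cost=120.0, cleanup_time=400.0, residual_mass=5.0)
plan_b = remediation_objective(pump_cost=200.0, cleanup_time=250.0, residual_mass=2.0)
```

An optimizer such as the nonlinear programming or simulated annealing techniques mentioned above would search the schedule space for the minimizer of this scalar objective.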
Development of a Dynamic DOE Calibration Model
Broader source: Energy.gov [DOE]
A dynamic heavy duty diesel engine model was developed. The model can be applied for calibration and control system optimization.
COOPR: A COmmon Optimization Python Repository v. 1.0
Energy Science and Technology Software Center (OSTI)
2008-08-14
Coopr integrates Python packages for defining optimizers, modeling optimization applications, and managing computational experiments. A major driver for Coopr development is the Pyomo package that can be used to define abstract problems, create concrete problem instances, and solve these instances with standard solvers. Other Coopr packages include EXACT, a framework for managing computational experiments, SUCASA, a tool for customizing integer programming solvers, and OPT, a generic optimization interface.
Quasi-Optimal Elimination Trees for 2D Grids with Singularities
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Paszyńska, A.; Paszyński, M.; Jopek, K.; Woźniak, M.; Goik, D.; Gurgul, P.; AbouEisha, H.; Moshkov, M.; Calo, V. M.; Lenharth, A.; et al
2015-01-01
We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(Ne log Ne), where Ne is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.
Coupled Thermal-Hydrological-Mechanical-Chemical Model And Experiments...
Broader source: Energy.gov (indexed) [DOE]
Coupled Thermal-Hydrological-Mechanical-Chemical Model and Experiments for Optimization of ...
Centralized Stochastic Optimal Control of Complex Systems
Malikopoulos, Andreas
2015-01-01
In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain using the long-run expected average cost per unit time criterion, and we show that the control policy yielding the Pareto optimal solution minimizes the average cost criterion online. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion.
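For a fixed policy, the long-run expected average cost criterion reduces to a stationary-distribution calculation. The two-state chain below, with invented transition probabilities and costs, is a minimal sketch of that criterion (the paper itself treats a controlled chain and optimizes over policies):

```python
# Transition matrix under one fixed control policy, and per-stage costs (invented).
P = [[0.9, 0.1],
     [0.4, 0.6]]
cost = [1.0, 5.0]

# For a two-state chain, the stationary distribution has a closed form:
# pi0 = P[1][0] / (P[0][1] + P[1][0]).
pi0 = P[1][0] / (P[0][1] + P[1][0])
pi = [pi0, 1.0 - pi0]

# Long-run expected average cost per unit time under this policy.
avg_cost = sum(p_i * c_i for p_i, c_i in zip(pi, cost))
```

Comparing avg_cost across candidate policies is the discrete analogue of the online policy comparison described above.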
Stencil Computation Optimization
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Stencil Computation Optimization and Auto-tuning on State-of-the-Art Multicore Architectures Kaushik Datta ∗† , Mark Murphy † , Vasily Volkov † , Samuel Williams ∗† , Jonathan Carter ∗ , Leonid Oliker ∗† , David Patterson ∗† , John Shalf ∗ , and Katherine Yelick ∗† ∗ CRD/NERSC, Lawrence Berkeley National Laboratory, Berkeley, CA 94720, USA † Computer Science Division, University of California at Berkeley, Berkeley, CA 94720, USA Abstract Understanding the most
OriginOil Inc | Open Energy Information
Inc Place: Los Angeles, California Zip: 90016 Product: California-based OTC-quoted algae-to-oil technology developer. References: OriginOil Inc1 This article is a stub. You...
Nature and Origin of the Cuprate Pseudogap
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Nature and Origin of the Cuprate Pseudogap Print The workings of high-temperature superconductive (HTSC) materials are a mystery wrapped in an enigma. However, a team of researchers from the ALS, Brookhaven National Laboratory, and Cornell University has taken a major step in understanding part of this mystery: the nature and origin of the pseudogap. Using angle-resolved photoemission spectroscopy (ARPES) and scanning tunneling microscopy (STM), they have determined the electronic structure of
TAS: 89 0227: TAS Recovery Act - Optimization and Control of Electric Power Systems: ARRA
Chiang, Hsiao-Dong
2014-02-01
The name SuperOPF refers to several projects, problem formulations, and software tools intended to extend, improve, and redefine some of the standard methods of optimizing electric power systems. Our work included applying primal-dual interior point methods to large standard AC optimal power flow problems, as well as extensions of this problem to include co-optimization of multiple scenarios. The original SuperOPF formulation co-optimized a base scenario along with multiple post-contingency scenarios, with all AC power flow models and constraints enforced for each, to find optimal energy contracts, endogenously determined locational reserves, and appropriate nodal energy prices for a single-period optimal power flow problem with uncertainty. This led to example nonlinear programming problems on the order of 1 million constraints and half a million variables. The second-generation SuperOPF formulation extends this by adding multiple periods and multiple base scenarios per period; it also incorporates additional variables and constraints to model load-following reserves, ramping costs, and storage resources. A third generation of the multi-period SuperOPF adds both integer variables and a receding-horizon framework in which the problem type is more challenging (mixed integer), the size is even larger, and it must be solved more frequently, pushing the limits of currently available algorithms and solvers. The consideration of transient stability constraints in optimal power flow (OPF) problems has become increasingly important in modern power systems. Transient stability constrained OPF (TSCOPF) is a nonlinear optimization problem subject to a set of algebraic and differential equations. Solving a TSCOPF problem can be challenging due to (i) the differential-equation constraints in an optimization problem and (ii) the lack of a true analytical expression for transient stability in OPF. To handle the dynamics in TSCOPF, the set
Parallel performance optimizations on unstructured mesh-based simulations
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid
2015-06-01
This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
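One common form of "predictive ordering for higher cache efficiency" is to renumber cells along a space-filling curve so that spatially nearby cells become nearby in memory. The sketch below uses a Morton (Z-order) curve on integer cell centers; it illustrates the general locality-reordering idea and is not MPAS-Ocean code:

```python
# Hedged sketch: Morton (Z-order) reordering of unstructured-grid cells.
# Cells whose (x, y) centers are close end up with nearby indices, which
# tends to improve cache reuse when neighboring cells are accessed together.

def morton_key(x, y, bits=16):
    """Interleave the bits of non-negative integer coordinates (x, y)."""
    key = 0
    for i in range(bits):
        key |= ((x >> i) & 1) << (2 * i)       # x bits go to even positions
        key |= ((y >> i) & 1) << (2 * i + 1)   # y bits go to odd positions
    return key

def reorder(cells):
    """cells: list of (x, y) integer cell centers -> permutation of indices."""
    return sorted(range(len(cells)), key=lambda i: morton_key(*cells[i]))
```

Sorting cell indices by `morton_key` gives the classic Z-pattern traversal: (0,0), (1,0), (0,1), (1,1) for a 2×2 block.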
GMG: A Guaranteed, Efficient Global Optimization Algorithm for Remote Sensing.
D'Helon, CD
2004-08-18
The monocular passive ranging (MPR) problem in remote sensing consists of identifying the precise range of an airborne target (missile, plane, etc.) from its observed radiance. This inverse problem may be cast as a global optimization problem (GOP) in which the difference between the observed and model-predicted radiances is minimized over the possible ranges and atmospheric conditions. Using additional information about the error function between the predicted and observed radiances of the target, we developed GMG, a new algorithm that finds the Global Minimum with a Guarantee. The algorithm transforms the original continuous GOP into a discrete search problem, thereby guaranteeing that the position of the global minimum is found in a reasonably short time. The algorithm is first applied to the golf course problem, which serves as a litmus test for its performance in the presence of both complete and degraded additional information. GMG is further assessed on a set of standard benchmark functions and then applied to various realizations of the MPR problem.
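The general principle behind turning a continuous GOP into a guaranteed discrete search can be illustrated with a Lipschitz bound: if the error function f has known Lipschitz constant L, sampling on a grid of spacing d brackets the global minimum value to within L·d/2. The sketch below is this textbook idea with an illustrative objective, not GMG itself or the MPR radiance mismatch:

```python
# Hedged sketch: guaranteed grid search for a 1-D Lipschitz function.
# With Lipschitz constant L and grid spacing `step`, the best grid value is
# within L*step/2 of the true global minimum over [lo, hi].

def guaranteed_grid_min(f, lo, hi, L, tol):
    """Return a grid point whose f-value is within tol of the global min."""
    step = 2.0 * tol / L                       # chosen so L*step/2 <= tol
    n = int((hi - lo) / step) + 1
    xs = [lo + i * step for i in range(n + 1)] # discrete search set
    return min(xs, key=f)
```

Because the guarantee depends only on L and the spacing, the continuous problem reduces to evaluating finitely many candidates, which is what makes the search's runtime predictable.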
Detachment faults: Evidence for a low-angle origin
Scott, R.J.; Lister, G.S.
1992-09-01
The origin of low-angle normal faults or detachment faults mantling metamorphic core complexes in the southwestern United States remains controversial. If σ1 is vertical during extension, the formation of, or even slip along, such low-angle normal faults is mechanically implausible. No records exist of earthquakes on low-angle normal faults in areas currently undergoing continental extension, except from an area of actively forming core complexes in the Solomon Sea, Papua New Guinea. In light of such geophysical and mechanical arguments, W.R. Buck and B. Wernicke and G.J. Axen proposed models in which detachment faults originate as high-angle normal faults, but rotate to low angles and become inactive as extension proceeds. These models are inconsistent with critical field relations in several core complexes. The Rawhide fault, an areally extensive detachment fault in western Arizona, propagated at close to its present subhorizontal orientation late in the Tertiary extension of the region. Neither the Wernicke and Axen nor Buck models predict such behavior; in fact, both models preclude the operation of low-angle normal faults. The authors argue that alternative explanations or modifications of existing models are needed to explain the evidence that detachment faults form and operate with gentle dips.
An Optimization Framework for Dynamic Hybrid Energy Systems
Wenbo Du; Humberto E Garcia; Christiaan J.J. Paredis
2014-03-01
A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problem takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.
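The coupling pattern described (a dynamic simulator inside an optimization loop) can be illustrated with a toy stand-in: a first-order "battery" lag smooths a fluctuating net-load signal, and an outer loop scans the smoothing time constant to minimize output variability. The model, signal, and candidate values are assumptions standing in for the Modelica/Dymola and MATLAB/FMI coupling:

```python
# Hedged sketch of simulator-in-the-loop optimization. simulate() plays the
# role of the dynamic HES model; optimize() is the outer loop that searches
# over a design parameter (here, a battery smoothing time constant).
import math

def simulate(tau, steps=200, dt=0.1):
    """First-order lag on a sinusoidal net-load signal; return output variance."""
    out, y = [], 0.0
    for k in range(steps):
        demand = math.sin(0.3 * k)          # fluctuating input signal
        y += dt / tau * (demand - y)        # battery smoothing dynamics
        out.append(y)
    mean = sum(out) / len(out)
    return sum((v - mean) ** 2 for v in out) / len(out)

def optimize():
    """Outer loop: pick the candidate time constant with least variability."""
    return min((simulate(tau), tau) for tau in (0.5, 1.0, 2.0, 4.0, 8.0))
```

A larger time constant filters more of the fluctuation, so the scan selects the slowest candidate; in a real HES study the outer loop would also weigh cost and constraint terms, as the second problem in the abstract does.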
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Optimization Strategies for Cori. NERSC User Services, Wednesday Feb 25, 2015. Introduction to Cori: what is different about Cori? Edison (Ivy Bridge): 12 cores per CPU, 24 virtual cores per CPU, 2.4-3.2 GHz, 4 double-precision operations per cycle (plus multiply/add), 2.5 GB of memory per core, ~100 GB/s memory bandwidth. Cori (Knights Landing): 60+ physical cores per CPU, 240+ virtual cores per CPU, much lower clock rate, can do 8 double
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Modeling & Analysis, News, News & Events, Photovoltaic, Renewable Energy, Research & Capabilities, Solar, Solar Newsletter, SunShot, Systems Analysis Sandia Develops Stochastic ...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Monte Carlo modeling it was found that for noisy signals with a significant background component, accuracy is improved by fitting the total emission data which includes the...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Science and Actuarial Practice" Read More Permalink New Project Is the ACME of Computer Science to Address Climate Change Analysis, Climate, Global Climate & Energy, Modeling, ...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Solar Sandia Labs Releases New Version of PVLib Toolbox Sandia has released version 1.3 of PVLib, its widely used Matlab toolbox for modeling photovoltaic (PV) power ...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
... Sandia Will Host PV Bankability Workshop at Solar Power International (SPI) 2013 Computational Modeling & Simulation, Distribution Grid Integration, Energy, Facilities, Grid ...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Though adequate for modeling mean transport, this approach does not address ... Microphysics such as diffusive transport and chemical kinetics are represented by ...
Accelerating PDE-Constrained Optimization Problems using Adaptive...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Accelerating PDE-Constrained Optimization Problems using Adaptive Reduced-Order Models January 15, 2016 10:30AM to 11:30AM Presenter Matthew Zahr, Wilkinson Interviewee Location...
Optimal recovery sequencing for critical infrastructure resilience assessment.
Vugrin, Eric D.; Brown, Nathanael J. K.; Turnquist, Mark Alan
2010-09-01
Critical infrastructure resilience has become a national priority for the U. S. Department of Homeland Security. System resilience has been studied for several decades in many different disciplines, but no standards or unifying methods exist for critical infrastructure resilience analysis. This report documents the results of a late-start Laboratory Directed Research and Development (LDRD) project that investigated the identification of optimal recovery strategies that maximize resilience. To this end, we formulate a bi-level optimization problem for infrastructure network models. In the 'inner' problem, we solve for network flows, and we use the 'outer' problem to identify the optimal recovery modes and sequences. We draw from the literature of multi-mode project scheduling problems to create an effective solution strategy for the resilience optimization model. We demonstrate the application of this approach to a set of network models, including a national railroad model and a supply chain for Army munitions production.
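The bi-level structure can be illustrated at miniature scale: an outer loop enumerates repair sequences for damaged links, and an inner evaluation computes the flow delivered after each repair, so the best sequence maximizes cumulative delivery. The network here (parallel links with given capacities, one repair per period, a fixed demand) is a toy assumption, far simpler than the report's railroad and supply-chain models:

```python
# Hedged sketch of bi-level recovery sequencing. The inner problem is
# deliberately trivial: for parallel links from source to sink, delivered
# flow is min(restored capacity, demand). The outer problem searches over
# repair orders to maximize cumulative delivered flow.
from itertools import permutations

def evaluate(sequence, caps, demand):
    """Cumulative flow delivered over the repair horizon for one sequence."""
    restored, total = 0.0, 0.0
    for link in sequence:                  # one repair per time period
        restored += caps[link]             # link capacity comes back online
        total += min(restored, demand)     # inner problem: delivered flow
    return total

def best_sequence(caps, demand):
    """Outer problem: enumerate repair orders and keep the best."""
    return max(permutations(range(len(caps))),
               key=lambda seq: evaluate(seq, caps, demand))
```

With capacities (5, 1, 3) and demand 6, repairing the largest link first recovers service fastest, which is exactly the resilience-maximizing behavior the outer problem is meant to find; real instances replace enumeration with scheduling heuristics.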
Optimal design of distributed wastewater treatment networks
Galan, B.; Grossmann, I.E.
1998-10-01
This paper deals with the optimal design of a distributed wastewater network in which multicomponent streams are processed by units that reduce the concentrations of several contaminants. The proposed model gives rise to a nonconvex nonlinear problem that often exhibits local minima and causes convergence difficulties. A search procedure is proposed that is based on the successive solution of a relaxed linear model and the original nonconvex nonlinear problem. Several examples illustrate that the proposed method often yields global or near-global optimum solutions. The model is also extended to select among different treatment technologies and to handle membrane separation modules.
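The two-stage strategy (solve a cheap relaxation for a starting point, then refine on the original nonconvex problem) can be shown in one dimension. The objective, its convex underestimator, and the finite-difference descent below are all illustrative assumptions, not the wastewater-network NLP or its actual LP relaxation:

```python
# Hedged sketch: relaxation-then-refinement for a nonconvex problem.
# Stage 1 minimizes a convex relaxation to get a good starting point;
# stage 2 runs local descent on the original nonconvex objective from there.

def original(x):
    return (x ** 2 - 1) ** 2 + 0.3 * x     # nonconvex: two local minima

def relaxed(x):
    return x ** 2 - 1.0                    # assumed convex underestimator

def local_descent(f, x, step=1e-3, iters=20000):
    """Plain gradient descent with a central finite-difference gradient."""
    for _ in range(iters):
        grad = (f(x + 1e-6) - f(x - 1e-6)) / 2e-6
        x -= step * grad
    return x

start = local_descent(relaxed, 2.0)        # stage 1: solve the relaxation
xstar = local_descent(original, start)     # stage 2: refine on the original
```

Starting local descent from the relaxation's minimizer steers it into the basin of the better local minimum (near x ≈ -1.03 here) rather than the poorer one near x ≈ +1, which is the intuition behind the successive relaxed-LP/NLP procedure.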
Bower, Stanley
2011-12-31
A 5.0L V8 twin-turbocharged direct injection engine was designed, built, and tested to assess, in the F-Series pickup, the fuel economy and performance of the Dual Fuel engine concept and of an E85-optimized FFV engine. Additionally, production 3.5L gasoline turbocharged direct injection (GTDI) EcoBoost engines were converted to Dual Fuel capability and used to evaluate the cold-start emissions and fuel-system robustness of the Dual Fuel engine concept. Project objectives were: to develop a roadmap to demonstrate a minimized fuel economy penalty for an F-Series FFV truck with a highly boosted, high compression ratio spark ignition engine optimized to run with ethanol fuel blends up to E85; to reduce FTP 75 energy consumption by 15%-20% compared to an equally powered vehicle with a current production gasoline engine; and to meet ULEV emissions, with a stretch target of ULEV II / Tier II Bin 4. All project objectives were met or exceeded.
Origin and dynamics of vortex rings in drop splashing
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Lee, Ji San; Park, Su Ji; Lee, Jun Ho; Weon, Byung Mook; Fezzaa, Kamel; Je, Jung Ho
2015-09-04
A vortex is a flow phenomenon very commonly observed in nature. For more than a century, the vortex ring that forms during drop splashing has caught the attention of many scientists due to its importance in understanding fluid mixing and mass transport processes. However, the origin of the vortices and their dynamics remain unclear, mostly due to the lack of appropriate visualization methods. Here, with ultrafast X-ray phase-contrast imaging, we show that the formation of vortex rings originates from the energy transfer by capillary waves generated at the moment of drop impact. Interestingly, we find a row of vortex rings along the drop wall, as demonstrated by a phase diagram established here, with different power-law dependencies of the angular velocities on the Reynolds number. These results provide important insight toward understanding and modelling any type of vortex ring in nature, beyond just vortex rings during drop splashing.