Air Transport Optimization Model | NISAC
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Network Optimization Models (RNAS and ATOM). Posted by Admin on Mar 1, 2012. Many critical infrastructures ...
Pyomo : Python Optimization Modeling Objects.
Siirola, John; Laird, Carl Damon; Hart, William Eugene; Watson, Jean-Paul
2010-11-01
The Python Optimization Modeling Objects (Pyomo) package [1] is an open source tool for modeling optimization applications within Python. Pyomo provides an object-oriented approach to optimization modeling, and it can be used to define symbolic problems, create concrete problem instances, and solve these instances with standard solvers. While Pyomo provides a capability that is commonly associated with algebraic modeling languages such as AMPL, AIMMS, and GAMS, Pyomo's modeling objects are embedded within a full-featured high-level programming language with a rich set of supporting libraries. Pyomo leverages the capabilities of the Coopr software library [2], which integrates Python packages (including Pyomo) for defining optimizers, modeling optimization applications, and managing computational experiments. A central design principle within Pyomo is extensibility. Pyomo is built upon a flexible component architecture [3] that allows users and developers to readily extend the core Pyomo functionality. Through these interface points, extensions and applications can have direct access to an optimization model's expression objects. This facilitates the rapid development and implementation of new modeling constructs as well as high-level solution strategies (e.g., using decomposition- and reformulation-based techniques). In this presentation, we will give an overview of the Pyomo modeling environment and model syntax, and present several extensions to the core Pyomo environment, including support for Generalized Disjunctive Programming (Coopr GDP), Stochastic Programming (PySP), a generic Progressive Hedging solver [4], and a tailored implementation of Benders Decomposition.
HOMER: The Micropower Optimization Model
Not Available
2004-03-01
HOMER, the micropower optimization model, helps users design micropower systems for off-grid and grid-connected power applications. HOMER models micropower systems with one or more power sources, including wind turbines, photovoltaics, biomass power, hydropower, cogeneration, diesel engines, batteries, fuel cells, and electrolyzers. Users can explore a range of design questions, such as which technologies are most cost-effective, what size components should be, how project economics are affected by changes in loads or costs, and whether the renewable resource is adequate.
Network Optimization Models (RNAS and ATOM) | NISAC
U.S. Department of Energy (DOE) - all webpages (Extended Search)
been used to study policy options concerning the movement of toxic chemicals by rail. Air Transport Optimization Model (ATOM) ATOM is a network-optimization model designed to...
Biotrans: Cost Optimization Model | Open Energy Information
OpenEI (Open Energy Information) [EERE & EIA]
URI: cleanenergysolutions.org/content/biotrans-cost-optimization-model. Language: English. Policies: Deployment Programs. Deployment Programs: Demonstration &...
Model-Based Transient Calibration Optimization for Next Generation...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Model-Based Transient Calibration Optimization for Next Generation Diesel Engines. 2005 Diesel Engine...
optimal initial conditions for coupling ice sheet models to earth...
Office of Scientific and Technical Information (OSTI)
Optimal initial conditions for coupling ice sheet models to earth system models. Perego, Mauro (Sandia National Laboratories); Price, Stephen F. ...
Optimal Initial Conditions for Coupling Ice Sheet Models to Earth...
Office of Scientific and Technical Information (OSTI)
Optimal Initial Conditions for Coupling Ice Sheet Models to Earth System Models. Country of Publication: United States. Language: English.
Model Identification for Optimal Diesel Emissions Control
Stevens, Andrew J.; Sun, Yannan; Song, Xiaobo; Parker, Gordon
2013-06-20
In this paper we develop a model-based controller for diesel emission reduction using system identification methods. Specifically, our method minimizes the downstream readings from a production NOx sensor while injecting a minimal amount of urea upstream. Based on the linear quadratic estimator, we derive the closed-form solution to a cost function that accounts for the case where some of the system inputs are not controllable. Our cost function can also be tuned to trade off between input usage and output optimization. Our approach performs better than a production controller in simulation: our NOx conversion efficiency was 92.7%, while the production controller achieved 92.4%. For NH3 conversion, our efficiency was 98.7% compared to 88.5% for the production controller.
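The structure of such a closed-form solution can be sketched for a scalar case. Here the plant gain, disturbance, and weights are invented; only the pattern (quadratic cost with an uncontrollable input, minimized analytically) follows the abstract:

```python
# Sketch: minimize J(u) = q*y^2 + r*u^2 where y = a*u + d and the
# disturbance d is an uncontrollable input. Setting
# dJ/du = 2*q*a*(a*u + d) + 2*r*u = 0 yields the closed form below.
# All numbers are illustrative, not from the paper.

def optimal_input(a, d, q, r):
    """Analytic minimizer of q*(a*u + d)**2 + r*u**2."""
    return -q * a * d / (q * a * a + r)

def cost(u, a, d, q, r):
    y = a * u + d
    return q * y * y + r * u * u

a, d, q, r = 2.0, 5.0, 1.0, 0.5   # illustrative gain, disturbance, weights
u_star = optimal_input(a, d, q, r)

# Sanity check against a coarse grid search around the analytic optimum.
grid = [u_star + 0.01 * k for k in range(-100, 101)]
u_grid = min(grid, key=lambda u: cost(u, a, d, q, r))
print(u_star, u_grid)
```

The weight `r` plays the role of the tuning knob the abstract mentions: raising it trades output performance for reduced input (urea) usage.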
Chen, T.L.; Lin, Z.S.; Chen, Y.L.
1995-10-01
The purpose of this study was to estimate the original gas in place (OGIP) of a water-drive reservoir in the Port Arthur field, Texas, US, using an optimization algorithm. The properties of the associated aquifer were also obtained. Good agreement between the results of this study and those of a simulation study is demonstrated in this paper. In this study, the material balance equation for a gas reservoir and the van Everdingen-Hurst model for an aquifer were solved simultaneously to calculate cumulative gas production. The result was then compared with the cumulative gas production observed in the field at each pressure. The following parameters were first adjusted manually: OGIP, aquifer thickness, water encroachment angle, ratio of aquifer to reservoir radius, and aquifer permeability. The simplex technique, an optimization algorithm, was then applied to adjust the parameters automatically; the parameter values that minimized the difference between calculated and observed cumulative gas production were taken as the results. A water-drive gas reservoir, the "C" sand gas reservoir in the Port Arthur field, which had produced for about 12 years, was analyzed successfully. The results showed that the OGIP of 60.6 BCF estimated in this study compared favorably with the 56.2 BCF obtained by a numerical simulator in another study. In addition, aquifer properties that are unavailable from the conventional plotting method could be estimated; the estimated aquifer properties compared favorably with the core data.
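The automatic history-matching loop can be sketched in miniature. A simple volumetric tank model stands in for the material-balance plus van Everdingen-Hurst system, golden-section search stands in for the simplex technique, and all data are synthetic; only the workflow (minimize calculated-vs-observed mismatch over a reservoir parameter) follows the abstract:

```python
# Sketch: automatic history matching of OGIP by minimizing the mismatch
# between calculated and observed cumulative gas production.
# Toy model Gp = G*(1 - p/p_i); all data are synthetic.

PHI = (5 ** 0.5 - 1) / 2  # golden-ratio conjugate, ~0.618

def calc_gp(G, p, p_i=5000.0):
    """Cumulative production predicted by a simple volumetric tank model."""
    return G * (1.0 - p / p_i)

def mismatch(G, pressures, observed):
    return sum((calc_gp(G, p) - gp) ** 2 for p, gp in zip(pressures, observed))

def golden_section(f, a, b, tol=1e-6):
    """1-D derivative-free minimization, standing in for the simplex method."""
    c, d = b - PHI * (b - a), a + PHI * (b - a)
    while b - a > tol:
        if f(c) < f(d):
            b, d = d, c
            c = b - PHI * (b - a)
        else:
            a, c = c, d
            d = a + PHI * (b - a)
    return 0.5 * (a + b)

# Synthetic "field" data generated with a known OGIP of 60.0 (arbitrary units).
pressures = [4800.0, 4500.0, 4200.0, 3900.0, 3600.0]
observed = [calc_gp(60.0, p) for p in pressures]

G_fit = golden_section(lambda G: mismatch(G, pressures, observed), 10.0, 100.0)
print(G_fit)
```

With noise-free synthetic data the search recovers the generating OGIP; the real study fits several aquifer parameters simultaneously, which is why a multidimensional method like the simplex technique is needed there.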
Quark-Gluon Plasma Model and Origin of Magic Numbers
Ghahramany, N.; Ghanaatian, M.; Hooshmand, M.
2008-04-21
Using the Boltzmann distribution in a quark-gluon plasma sample, it is possible to obtain all existing magic numbers and their extensions without invoking spin and spin-orbit couplings. In this model it is assumed that, in a thermodynamic quark-gluon plasma, quarks have no interactions and are trying to form nucleons. Considering a lattice of a central quark and its surrounding quarks, and using a statistical approach to find the maximum number of microstates, the origin of the magic numbers is explained and a new magic number is obtained.
Modeling and Multidimensional Optimization of a Tapered Free...
Office of Scientific and Technical Information (OSTI)
Journal Article: Modeling and Multidimensional Optimization of a Tapered Free Electron Laser. Publication Date: 2013-03-28. OSTI Identifier: 1074231.
Optimization of Processing and Modeling Issues for Thin Film...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Optimization of Processing and Modeling Issues for Thin Film Solar Cell Devices, Including Concepts for the Development of Polycrystalline Multijunctions. Newark, Delaware. ...
Stochastic Robust Mathematical Programming Model for Power System Optimization
Liu, Cong; Changhyeok, Lee; Haoyong, Chen; Mehrotra, Sanjay
2016-01-01
This paper presents a stochastic robust framework for two-stage power system optimization problems with uncertainty. The model optimizes the probabilistic expectation of different worst-case scenarios with different uncertainty sets. A case study of unit commitment shows the effectiveness of the proposed model and algorithms.
Use Computational Model to Design and Optimize Welding Conditions to
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Use Computational Model to Design and Optimize Welding Conditions to Suppress Helium Cracking during Welding | Department of Energy. Today, welding is widely used for repair, maintenance, and upgrade of nuclear reactor components. As a critical technology to extend the service life of nuclear power plants beyond 60 years, weld technology must be
Contingency contractor optimization. Phase 3, model description and formulation.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Durfee, Justin D.; Jones, Dean A.; Martin, Nathaniel; Detry, Richard Joseph; Nanco, Alan Stewart; Nozick, Linda Karen
2013-10-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during the Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilian, and contractor personnel that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower-use rules. An additional feature allows the model to capture the variability of the Total Force Mix when there is uncertainty in mission requirements.
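The core selection problem can be sketched as a tiny enumeration. The real CCO model is a far larger optimization with manpower-use rules; the costs, caps, and requirement below are invented for illustration:

```python
# Sketch: choose the cheapest mix of military, DoD-civilian, and contractor
# personnel that meets a mission requirement under per-category caps.
# All numbers are illustrative, not from the report.
from itertools import product

COST = {"military": 9.0, "civilian": 7.0, "contractor": 5.0}  # cost per unit
CAP = {"military": 6, "civilian": 4, "contractor": 8}          # units available
REQUIRED = 12                                                   # units needed

def cheapest_mix():
    best, best_cost = None, float("inf")
    for mil, civ, con in product(range(CAP["military"] + 1),
                                 range(CAP["civilian"] + 1),
                                 range(CAP["contractor"] + 1)):
        if mil + civ + con < REQUIRED:
            continue  # mission requirement not met
        c = (mil * COST["military"] + civ * COST["civilian"]
             + con * COST["contractor"])
        if c < best_cost:
            best, best_cost = (mil, civ, con), c
    return best, best_cost

mix, cost = cheapest_mix()
print(mix, cost)
```

Brute force works only at toy scale; a production tool would pose the same structure as an integer program and hand it to a solver, which is why an optimization model rather than enumeration is used in the project.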
Contingency contractor optimization. Phase 3, model description and formulation.
Gearhart, Jared Lee; Adair, Kristin Lynn; Jones, Katherine A.; Bandlow, Alisa; Detry, Richard Joseph; Durfee, Justin D.; Jones, Dean A.; Martin, Nathaniel; Nanco, Alan Stewart; Nozick, Linda Karen
2013-06-01
The goal of Phase 3 of the OSD ATL Contingency Contractor Optimization (CCO) project is to create an engineering prototype of a tool for the contingency contractor element of total force planning during the Support for Strategic Analysis (SSA). An optimization model was developed to determine the optimal mix of military, Department of Defense (DoD) civilian, and contractor personnel that accomplishes a set of user-defined mission requirements at the lowest possible cost while honoring resource limitations and manpower-use rules. An additional feature allows the model to capture the variability of the Total Force Mix when there is uncertainty in mission requirements.
He, Yi; Scheraga, Harold A.; Liwo, Adam
2015-12-28
Coarse-grained models are useful tools to investigate the structural and thermodynamic properties of biomolecules. They are obtained by merging several atoms into one interaction site. Such simplified models try to capture as much information as possible about the original biomolecular system in all-atom representation, but the resulting parameters of these coarse-grained force fields still need further optimization. In this paper, a force field optimization method, based on maximum-likelihood fitting of the simulated to the experimental conformational ensembles and least-squares fitting of the simulated to the experimental heat-capacity curves, is applied to optimize the Nucleic Acid united-RESidue 2-point (NARES-2P) model for coarse-grained simulations of nucleic acids recently developed in our laboratory. The optimized NARES-2P force field reproduces the structural and thermodynamic data of small DNA molecules much better than the original force field.
Scientists use world's fastest supercomputer to model origins...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
One of the largest-ever computer models explores dark matter and dark energy, two cosmic constituents that remain a mystery ...
The origins of computer weather prediction and climate modeling
Lynch, Peter [Meteorology and Climate Centre, School of Mathematical Sciences, University College Dublin, Belfield (Ireland)], E-mail: Peter.Lynch@ucd.ie
2008-03-20
Numerical simulation of an ever-increasing range of geophysical phenomena is adding enormously to our understanding of complex processes in the Earth system. The consequences for mankind of ongoing climate change will be far-reaching. Earth System Models are capable of replicating climate regimes of past millennia and are the best means we have of predicting the future of our climate. The basic ideas of numerical forecasting and climate modeling were developed about a century ago, long before the first electronic computer was constructed. There were several major practical obstacles to be overcome before numerical prediction could be put into practice. A fuller understanding of atmospheric dynamics allowed the development of simplified systems of equations; regular radiosonde observations of the free atmosphere and, later, satellite data, provided the initial conditions; stable finite difference schemes were developed; and powerful electronic computers provided a practical means of carrying out the prodigious calculations required to predict the changes in the weather. Progress in weather forecasting and in climate modeling over the past 50 years has been dramatic. In this presentation, we will trace the history of computer forecasting through the ENIAC integrations to the present day. The useful range of deterministic prediction is increasing by about one day each decade, and our understanding of climate change is growing rapidly as Earth System Models of ever-increasing sophistication are developed.
Determining Reduced Order Models for Optimal Stochastic Reduced Order Models
Bonney, Matthew S.; Brake, Matthew R.W.
2015-08-01
The use of parameterized reduced order models (PROMs) within the stochastic reduced order model (SROM) framework is a logical progression for both methods. In this report, five different parameterized reduced order models are selected and critiqued against one another and against the truth model for the example of the Brake-Reuss beam. The models are: a Taylor series using finite differences, a proper orthogonal decomposition of the output, a Craig-Bampton representation of the model, a method that uses hyper-dual numbers to determine the sensitivities, and a meta-model method that uses the hyper-dual results and constructs a polynomial curve to better represent the output data. The methods are compared against a parameter sweep and a distribution propagation, where the first four statistical moments are used as a comparison. Each method produces very accurate results, with the Craig-Bampton reduction being the least accurate. The models are also compared on the time required to evaluate each model, where the meta-model requires the least computation time by a significant amount. Each of the five models provided accurate results in a reasonable time frame; the determination of which model to use depends on the availability of the high-fidelity model and how many evaluations can be performed. The output distribution is examined using a large Monte Carlo simulation along with reduced simulations using Latin hypercube sampling and the stochastic reduced order model sampling technique. Both techniques produced accurate results, and the stochastic reduced order modeling technique produced less error, when compared to the exhaustive sampling, for the majority of methods.
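The hyper-dual-number idea mentioned above can be illustrated generically: augmenting a value with two nilpotent parts makes a single function evaluation return exact first and second derivatives, with none of the step-size error of finite differences. This is a standalone sketch of the technique, not the report's implementation:

```python
# Hyper-dual arithmetic: x = f + f1*eps1 + f2*eps2 + f12*eps1*eps2,
# with eps1^2 = eps2^2 = 0. Evaluating func(x + eps1 + eps2) yields
# f(x) in the real part, f'(x) in the eps1 part, f''(x) in the eps1*eps2 part.

class HyperDual:
    def __init__(self, f, f1=0.0, f2=0.0, f12=0.0):
        self.f, self.f1, self.f2, self.f12 = f, f1, f2, f12

    def __add__(self, other):
        o = other if isinstance(other, HyperDual) else HyperDual(other)
        return HyperDual(self.f + o.f, self.f1 + o.f1,
                         self.f2 + o.f2, self.f12 + o.f12)

    __radd__ = __add__

    def __mul__(self, other):
        # Product rule falls out of eps1^2 = eps2^2 = 0.
        o = other if isinstance(other, HyperDual) else HyperDual(other)
        return HyperDual(
            self.f * o.f,
            self.f * o.f1 + self.f1 * o.f,
            self.f * o.f2 + self.f2 * o.f,
            self.f * o.f12 + self.f1 * o.f2 + self.f2 * o.f1 + self.f12 * o.f)

    __rmul__ = __mul__

def derivatives(func, x):
    """Return (f(x), f'(x), f''(x)) from one hyper-dual evaluation."""
    h = func(HyperDual(x, 1.0, 1.0, 0.0))
    return h.f, h.f1, h.f12

# Example: f(x) = x^3 + 2x at x = 2 -> f = 12, f' = 14, f'' = 12.
f, fp, fpp = derivatives(lambda x: x * x * x + 2 * x, 2.0)
print(f, fp, fpp)
```

Because the derivatives are exact to machine precision, sensitivities computed this way are well suited to building the polynomial meta-model the report describes.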
Optimization and Performance Modeling of Stencil Computations on Modern Microprocessors
Datta, Kaushik; Kamil, Shoaib; Williams, Samuel; Oliker, Leonid; Shalf, John; Yelick, Katherine
2007-06-01
Stencil-based kernels constitute the core of many important scientific applications on block-structured grids. Unfortunately, these codes achieve a low fraction of peak performance, due primarily to the disparity between processor and main memory speeds. In this paper, we explore the impact of trends in memory subsystems on a variety of stencil optimization techniques and develop performance models to analytically guide our optimizations. Our work targets cache reuse methodologies across single and multiple stencil sweeps, examining cache-aware algorithms as well as cache-oblivious techniques on the Intel Itanium2, AMD Opteron, and IBM Power5. Additionally, we consider stencil computations on the heterogeneous multicore design of the Cell processor, a machine with an explicitly managed memory hierarchy. Overall our work represents one of the most extensive analyses of stencil optimizations and performance modeling to date. Results demonstrate that recent trends in memory system organization have reduced the efficacy of traditional cache-blocking optimizations. We also show that a cache-aware implementation is significantly faster than a cache-oblivious approach, while the explicitly managed memory on Cell enables the highest overall efficiency: Cell attains 88% of algorithmic peak while the best competing cache-based processor achieves only 54% of algorithmic peak performance.
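Cache blocking, the baseline optimization discussed above, can be sketched with a 2D 5-point Jacobi stencil. In pure Python this only demonstrates the tiled traversal order (not the speedups the paper measures, which require compiled code on real memory hierarchies); the grid and tile size are invented:

```python
# Sketch of cache blocking for a 2D 5-point Jacobi stencil: the blocked
# sweep visits the grid in small tiles so each tile's working set can stay
# resident in cache. Both sweeps compute identical results.

def jacobi_naive(grid, n):
    out = [row[:] for row in grid]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            out[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                grid[i][j - 1] + grid[i][j + 1])
    return out

def jacobi_blocked(grid, n, tile=4):
    out = [row[:] for row in grid]
    for ii in range(1, n - 1, tile):          # block row
        for jj in range(1, n - 1, tile):      # block column
            for i in range(ii, min(ii + tile, n - 1)):
                for j in range(jj, min(jj + tile, n - 1)):
                    out[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j] +
                                        grid[i][j - 1] + grid[i][j + 1])
    return out

n = 10
grid = [[float(i * n + j) for j in range(n)] for i in range(n)]
assert jacobi_blocked(grid, n) == jacobi_naive(grid, n)  # same sweep, new order
print("blocked and naive sweeps agree")
```

The paper's finding is that on then-modern memory systems this classic transformation helps less than it once did, while explicitly managed local stores (Cell) still reward the tiled structure.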
A Physical Model For The Origin Of Volcanism Of The Tyrrhenian...
OpenEI (Open Energy Information) [EERE & EIA]
Journal Article: A Physical Model For The Origin Of Volcanism Of The Tyrrhenian Margin - The Neapolitan Area ...
Pumping Optimization Model for Pump and Treat Systems - 15091
Baker, S.; Ivarson, Kristine A.; Karanovic, M.; Miller, Charles W.; Tonkin, M.
2015-01-15
Pump-and-treat systems are being utilized to remediate contaminated groundwater in the Hanford 100 Areas adjacent to the Columbia River in Eastern Washington. Design of the systems was supported by a three-dimensional (3D) fate and transport model. This model provides sophisticated simulation capabilities but requires many hours to calculate results for each simulation considered. Because many simulations are required to optimize system performance, a two-dimensional (2D) model was created to reduce run time. The 2D model was developed as an equivalent-property version of the 3D model that derives its boundary conditions and aquifer properties from the 3D model. It produces predictions that are very close to the 3D model's, allowing it to be used for comparative remedy analyses; any potential system modification identified using the 2D version is verified by running the 3D model to confirm performance. The 2D model was incorporated into a comprehensive analysis system, the Pumping Optimization Model (POM), to simplify analysis of multiple simulations. It allows rapid turnaround through a graphical user interface that (1) allows operators to create hypothetical scenarios for system operation, (2) feeds the input to the 2D fate and transport model, and (3) displays the scenario results so performance improvement can be evaluated. All of the above is accomplished within the user interface. Complex analyses can be completed within a few hours, and multiple simulations can be compared side by side. The POM runs on standard office computing equipment and established groundwater modeling software.
Vrugt, Jasper A; Wohling, Thomas
2008-01-01
Most studies in vadose zone hydrology use a single conceptual model for predictive inference and analysis. Focusing on the outcome of a single model is prone to statistical bias and underestimation of uncertainty. In this study, we combine multi-objective optimization and Bayesian Model Averaging (BMA) to generate forecast ensembles of soil hydraulic models. To illustrate our method, we use observed tensiometric pressure head data at three different depths in a layered vadose zone of volcanic origin in New Zealand. A set of seven different soil hydraulic models is calibrated using a multi-objective formulation with three different objective functions that each measure the mismatch between observed and predicted soil water pressure head at one specific depth. The Pareto solution space corresponding to these three objectives is estimated with AMALGAM, and used to generate four different model ensembles. These ensembles are post-processed with BMA and used for predictive analysis and uncertainty estimation. Our most important conclusions for the vadose zone under consideration are: (1) the mean BMA forecast exhibits similar predictive capabilities as the best individual performing soil hydraulic model, (2) the size of the BMA uncertainty ranges increase with increasing depth and dryness in the soil profile, (3) the best performing ensemble corresponds to the compromise (or balanced) solution of the three-objective Pareto surface, and (4) the combined multi-objective optimization and BMA framework proposed in this paper is very useful to generate forecast ensembles of soil hydraulic models.
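The ensemble-averaging step of the approach above can be sketched generically: each model receives a weight from its fit to held-out observations, and the BMA forecast is the weight-averaged prediction. The models, data, and Gaussian error assumption below are invented; AMALGAM and the soil hydraulic models themselves are not reproduced:

```python
# Sketch of Bayesian Model Averaging over a small forecast ensemble.
import math

def gaussian_loglik(pred, obs, sigma=1.0):
    """Log-likelihood of observations under a Gaussian error model."""
    return sum(-0.5 * ((p - o) / sigma) ** 2
               - math.log(sigma * math.sqrt(2 * math.pi))
               for p, o in zip(pred, obs))

def bma_weights(predictions, obs):
    logliks = [gaussian_loglik(p, obs) for p in predictions]
    m = max(logliks)
    raw = [math.exp(ll - m) for ll in logliks]   # shift for numerical stability
    total = sum(raw)
    return [r / total for r in raw]

def bma_forecast(predictions, weights):
    return [sum(w * p[k] for w, p in zip(weights, predictions))
            for k in range(len(predictions[0]))]

obs = [1.0, 2.0, 3.0]
predictions = [[1.1, 2.1, 2.9],    # good model -> large weight
               [0.5, 1.0, 1.5],    # biased model -> small weight
               [1.0, 2.2, 3.3]]    # decent model

w = bma_weights(predictions, obs)
forecast = bma_forecast(predictions, w)
print(w, forecast)
```

In the study the ensemble members come from the Pareto set of the multi-objective calibration, and the BMA predictive variance (within-model plus between-model spread) supplies the depth-dependent uncertainty ranges the conclusions refer to.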
Modeling Microinverters and DC Power Optimizers in PVWatts
MacAlpine, S.; Deline, C.
2015-02-01
Module-level distributed power electronics including microinverters and DC power optimizers are increasingly popular in residential and commercial PV systems. Consumers are realizing their potential to increase design flexibility, monitor system performance, and improve energy capture. It is becoming increasingly important to accurately model PV systems employing these devices. This document summarizes existing published documents to provide uniform, impartial recommendations for how the performance of distributed power electronics can be reflected in NREL's PVWatts calculator (http://pvwatts.nrel.gov/).
WE-D-BRE-04: Modeling Optimal Concurrent Chemotherapy Schedules
Jeong, J; Deasy, J O
2014-06-15
Purpose: Concurrent chemo-radiation therapy (CCRT) has become a more common cancer treatment option, with a better tumor control rate for several tumor sites, including head and neck and lung cancer. In this work, possible optimal chemotherapy schedules were investigated by implementing chemotherapy cell kill into a tumor response model of RT. Methods: The chemotherapy effect was added into a published model (Jeong et al., PMB (2013) 58:4897) in which the tumor response to RT can be simulated with the effects of hypoxia and proliferation. Based on a two-compartment pharmacokinetic model, the temporal concentration of the chemotherapy agent was estimated. Log cell kill was assumed, and the cell-kill constant was estimated from the observed increase in local control due to concurrent chemotherapy. For a simplified two-cycle CCRT regimen, several different starting times and intervals were simulated with a conventional RT regimen (2 Gy/fx, 5 fx/wk). The effectiveness of CCRT was evaluated in terms of the reduction in radiation dose required for 50% control to find the optimal chemotherapy schedule. Results: Assuming a typical slope of the dose response curve (γ50 = 2), the observed 10% increase in local control rate was evaluated to be equivalent to an extra RT dose of about 4 Gy, from which the cell-kill rate of chemotherapy was derived to be about 0.35. The best response was obtained when chemotherapy was started about 3 weeks after RT began. As the interval between the two cycles decreases, the efficacy of chemotherapy increases, with a broader range of optimal starting times. Conclusion: The effect of chemotherapy has been implemented into the resource-conservation tumor response model to investigate CCRT. The results suggest that concurrent chemotherapy might be more effective when delayed for about 3 weeks, due to lower tumor burden and a larger fraction of proliferating cells after reoxygenation.
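The model shape described in the Methods (two-compartment pharmacokinetics driving log cell kill) can be sketched with invented rate constants; this is an illustration of the structure, not the paper's fitted model:

```python
# Sketch: a bolus dose moves between central and peripheral compartments
# and is eliminated from the central one; the log of the surviving tumor
# fraction decreases in proportion to central-compartment exposure
# (log cell kill). All rate constants are illustrative.
import math

def simulate(dose, hours, dt=0.01, k12=0.3, k21=0.2, k_el=0.5, kill=0.05):
    c_central, c_periph = dose, 0.0   # concentrations after a bolus dose
    log_survival = 0.0
    for _ in range(int(hours / dt)):
        flow = k12 * c_central - k21 * c_periph   # central <-> peripheral
        dc = -k_el * c_central - flow             # elimination + exchange
        dp = flow
        log_survival -= kill * c_central * dt     # log kill ~ drug exposure
        c_central += dc * dt
        c_periph += dp * dt
    return c_central, c_periph, math.exp(log_survival)

c, p, surviving = simulate(dose=10.0, hours=24.0)
print(c, p, surviving)
```

In the paper this cell-kill term is layered onto the radiotherapy response model, and the schedule search varies when the two chemotherapy cycles start relative to the RT course.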
Hour-by-Hour Cost Modeling of Optimized Central Wind-Based Water...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Hour-by-Hour Cost Modeling of Optimized Central Wind-Based Water Electrolysis Production ...
Optimal control of CPR procedure using hemodynamic circulation model
Lenhart, Suzanne M.; Protopopescu, Vladimir A.; Jung, Eunok
2007-12-25
A method for determining a chest pressure profile for cardiopulmonary resuscitation (CPR) includes the steps of representing a hemodynamic circulation model based on a plurality of difference equations for a patient, applying an optimal control (OC) algorithm to the circulation model, and determining a chest pressure profile. The chest pressure profile defines a timing pattern of externally applied pressure to a chest of the patient to maximize blood flow through the patient. A CPR device includes a chest compressor, a controller communicably connected to the chest compressor, and a computer communicably connected to the controller. The computer determines the chest pressure profile by applying an OC algorithm to a hemodynamic circulation model based on the plurality of difference equations.
Computer model for characterizing, screening, and optimizing electrolyte systems
Gering, Kevin L.
2015-06-15
Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed to characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.
Optimal Control of Distributed Energy Resources using Model Predictive Control
Mayhorn, Ebony T.; Kalsi, Karanjit; Elizondo, Marcelo A.; Zhang, Wei; Lu, Shuai; Samaan, Nader A.; Butler-Purry, Karen
2012-07-22
In an isolated power system (rural microgrid), Distributed Energy Resources (DERs) such as renewable energy resources (wind, solar), energy storage, and demand response can be used to complement fossil-fueled generators. The uncertainty and variability due to high penetration of wind make reliable system operation and control challenging. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation. The problem is formulated as a multi-objective optimization problem with the goals of minimizing fuel costs and changes in the power output of diesel generators, minimizing costs associated with low battery life of energy storage, and maintaining system frequency at the nominal operating value. Two control modes are considered for controlling the energy storage to compensate for either net load variability or wind variability. Model predictive control (MPC) is used to solve the aforementioned problem, and the performance is compared to an open-loop look-ahead dispatch problem. Simulation studies using high and low wind profiles, as well as different MPC prediction horizons, demonstrate the efficacy of the closed-loop MPC in compensating for uncertainties in wind and demand.
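The receding-horizon mechanism at the heart of MPC can be sketched with a toy scalar battery: at each step, search the action sequence over a short horizon, apply only the first action, then re-plan with updated information. The dynamics, action set, and cost terms below are invented; the paper's multi-objective formulation, diesel units, and frequency dynamics are not reproduced:

```python
# Sketch of receding-horizon control (MPC) for dispatching storage
# against a varying net load. All numbers are illustrative.
from itertools import product

ACTIONS = [-1.0, -0.5, 0.0, 0.5, 1.0]   # battery power (+ charge, - discharge)

def plan(soc, forecast, horizon):
    """Brute-force the action sequence minimizing mismatch + usage cost."""
    best_seq, best_cost = None, float("inf")
    for seq in product(ACTIONS, repeat=horizon):
        s, cost = soc, 0.0
        for a, net_load in zip(seq, forecast):
            s = min(1.0, max(0.0, s + 0.1 * a))         # SOC dynamics, clipped
            cost += (net_load + a) ** 2 + 0.01 * a * a  # mismatch + usage penalty
        if cost < best_cost:
            best_seq, best_cost = seq, cost
    return best_seq[0]

def run_mpc(soc, net_loads, horizon=3):
    trajectory = []
    for t in range(len(net_loads) - horizon):
        a = plan(soc, net_loads[t:t + horizon], horizon)  # re-plan each step
        soc = min(1.0, max(0.0, soc + 0.1 * a))           # apply first action only
        trajectory.append(soc)
    return trajectory

net_loads = [0.8, -0.4, 0.6, -0.9, 0.2, 0.5, -0.3]
traj = run_mpc(0.5, net_loads)
print(traj)
```

Re-planning at every step is what lets closed-loop MPC absorb forecast error, which is the advantage the paper's comparison against open-loop look-ahead dispatch demonstrates.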
Computer model for characterizing, screening, and optimizing electrolyte systems
Energy Science and Technology Software Center
2015-06-15
Electrolyte systems in contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Because the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed to characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental matrix. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. Although it is applied most frequently to lithium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.
OPF incorporating load models maximizing net revenue. [Optimal Power Flow
Dias, L.G.; El-Hawary, M.E. (Dept. of Electrical Engineering)
1993-02-01
Studies of the effects of load modeling in optimal power flow (OPF), using minimum-cost and minimum-loss objectives, reveal that a main disadvantage of cost minimization is reduction of the objective via a reduction in the power demand. This inevitably lowers the total revenue and, in most cases, the net revenue as well. An alternative approach for incorporating load models in security-constrained OPF (SCOPF) studies apparently avoids reducing the total power demand for the intact system, but reduces the voltages. A study of the behavior of conventional OPF solutions in the presence of loads not controlled by ULTCs shows that this results in a reduction of the total power demand for the intact system. In this paper, the authors propose an objective that avoids the tendency to lower the total power demand, total revenue, and net revenue, both for OPF neglecting contingencies (normal OPF) and for security-constrained OPF. The minimum-cost objective is modified by subtracting the total power demand from the total fuel cost. This is equivalent to maximizing the net revenue.
Applying the Battery Ownership Model in Pursuit of Optimal Battery Use Strategies (Presentation)
Neubauer, J.; Ahmad, P.; Brooker, A.; Wood, E.; Smith, K.; Johnson, C.; Mendelsohn, M.
2012-05-01
This Annual Merit Review presentation describes the application of the Battery Ownership Model for strategies for optimal battery use in electric drive vehicles (PEVs, PHEVs, and BEVs).
Optimal SCR Control Using Data-Driven Models
Stevens, Andrew J.; Sun, Yannan; Lian, Jianming; Devarakonda, Maruthi N.; Parker, Gordon
2013-04-16
We present an optimal control solution for the urea injection for a heavy-duty diesel (HDD) selective catalytic reduction (SCR). The approach taken here is useful beyond SCR and could be applied to any system where a control strategy is desired and input-output data is available. For example, the strategy could also be used for the diesel oxidation catalyst (DOC) system. In this paper, we identify and validate a one-step ahead Kalman state-space estimator for downstream NOx using the bench reactor data of an SCR core sample. The test data was acquired using a 2010 Cummins 6.7L ISB production engine with a 2010 Cummins production aftertreatment system. We used a surrogate HDD federal test procedure (FTP), developed at Michigan Technological University (MTU), which simulates the representative transients of the standard FTP cycle, but has less engine speed/load points. The identified state-space model is then used to develop a tunable cost function that simultaneously minimizes NOx emissions and urea usage. The cost function is quadratic and univariate, thus the minimum can be computed analytically. We show the performance of the closed-loop controller in using a reduced-order discrete SCR simulator developed at MTU. Our experiments with the surrogate HDD-FTP data show that the strategy developed in this paper can be used to identify performance bounds for urea dose controllers.
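Because the paper's cost function is quadratic and univariate, its minimizer is available in closed form. The sketch below illustrates that idea with an invented one-step NOx-reduction gain and invented weights; it is not the identified SCR model from the paper.

```python
# Sketch of a tunable univariate quadratic dosing cost. The gain of the
# one-step NOx predictor and the weights are illustrative assumptions,
# not the paper's identified state-space model.

def optimal_urea_dose(nox_pred, w_nox=1.0, w_urea=0.1, gain=0.8):
    # J(u) = w_nox*(nox_pred - gain*u)**2 + w_urea*u**2 is quadratic in u,
    # so dJ/du = 0 gives the minimizer analytically:
    return w_nox * gain * nox_pred / (w_nox * gain**2 + w_urea)

u_star = optimal_urea_dose(100.0)

# sanity check the closed form against a brute-force scan of the same cost
J = lambda u: 1.0 * (100.0 - 0.8 * u)**2 + 0.1 * u**2
brute = min((x * 0.01 for x in range(20000)), key=J)
assert abs(u_star - brute) < 0.01
```

Raising `w_urea` relative to `w_nox` shifts the optimum toward lower urea usage, which is the "tunable" trade-off the abstract describes.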
Continuously Optimized Reliable Energy (CORE) Microgrid: Models & Tools (Fact Sheet)
Not Available
2013-07-01
This brochure describes Continuously Optimized Reliable Energy (CORE), a trademarked process NREL employs to produce conceptual microgrid designs. This systems-based process enables designs to be optimized for economic value, energy surety, and sustainability. Capabilities NREL offers in support of microgrid design are explained.
Optimal initial conditions for coupling ice sheet models to earth system models (Conference)
Office of Scientific and Technical Information (OSTI)
Perego, Mauro (Sandia National Laboratories); Price, Stephen F. (Los Alamos National Laboratory); Stadler, Georg (Institute for Computational Engineering and Sciences, Univ. of Texas at Austin)
Oneida Tribe of Indians of Wisconsin Energy Optimization Model
Troge, Michael
2014-12-01
Oneida Nation is located in northeast Wisconsin. The reservation is approximately 96 square miles (8 miles x 12 miles), or 65,000 acres. The greater Green Bay area is east of and adjacent to the reservation. A county line roughly splits the reservation in half; the west half is in Outagamie County and the east half is in Brown County. Land use is predominantly agricultural on the west 2/3 and suburban on the east 1/3 of the reservation. Nearly 5,000 tribally enrolled members live on the reservation, out of a total population of about 21,000. Tribal ownership, about 23,000 acres, is scattered across the reservation. Currently, the Oneida Tribe of Indians of Wisconsin (OTIW) community members and facilities receive the vast majority of electrical and natural gas services from two of the largest investor-owned utilities in the state, WE Energies and Wisconsin Public Service. All urban and suburban buildings have access to natural gas. About 15% of the population and five Tribal facilities are in rural locations and therefore use propane as a primary heating fuel. Wood and oil are also used as primary or supplemental heat sources for a small percentage of the population. Very few renewable energy systems for generating electricity and heat have been installed on the Oneida Reservation. This project was an effort to develop a reasonable renewable energy portfolio that will help Oneida provide a leadership role in developing a clean energy economy. The Energy Optimization Model (EOM) is an exploration of energy opportunities available to the Tribe, intended to provide a decision framework that allows the Tribe to make the wisest energy-investment choices in pursuit of an organizational renewable portfolio standard (RPS).
Diwekar, U.; Shastri, Y.; Subramanayan, K.; Zitney, S.
2007-01-01
APECS (Advanced Process Engineering Co-Simulator) is an integrated software suite that combines the power of process simulation with high-fidelity computational fluid dynamics (CFD) for improved design, analysis, and optimization of process engineering systems. The APECS system integrates commercial process simulation (e.g., Aspen Plus) and CFD (e.g., FLUENT) software through the process-industry standard CAPE-OPEN (CO) interfaces. This breakthrough capability allows engineers to better understand and optimize the fluid mechanics that drive overall power plant performance and efficiency. The focus of this paper is the CAPE-OPEN compliant stochastic modeling and reduced-order model computational capability built around the APECS system. The usefulness of these capabilities is illustrated with a coal-fired, gasification-based FutureGen power plant simulation, in which they are used to generate efficient reduced-order models and to optimize model complexity.
High-throughput generation, optimization and analysis of genome-scale metabolic models.
Henry, C. S.; DeJongh, M.; Best, A. A.; Frybarger, P. M.; Linsay, B.; Stevens, R. L.
2010-09-01
Genome-scale metabolic models have proven to be valuable for predicting organism phenotypes from genotypes. Yet efforts to develop new models are failing to keep pace with genome sequencing. To address this problem, we introduce the Model SEED, a web-based resource for high-throughput generation, optimization and analysis of genome-scale metabolic models. The Model SEED integrates existing methods and introduces techniques to automate nearly every step of this process, taking ~48 h to reconstruct a metabolic model from an assembled genome sequence. We apply this resource to generate 130 genome-scale metabolic models representing a taxonomically diverse set of bacteria. Twenty-two of the models were validated against available gene essentiality and Biolog data, with the average model accuracy determined to be 66% before optimization and 87% after optimization.
Building Restoration Operations Optimization Model Beta Version 1.0
2007-05-31
The Building Restoration Operations Optimization Model (BROOM), developed by Sandia National Laboratories, is a software product designed to aid in the restoration of large facilities contaminated by a biological material. BROOM's integrated data collection, data management, and visualization software improves the efficiency of cleanup operations, minimizes facility downtime, and provides a transparent basis for reopening the facility. Secure remote access to building floor plans: Floor plan drawings and knowledge of the HVAC system are critical to the design and implementation of effective sampling plans. In large facilities, access to these data may be complicated by their sheer abundance and the disorganized state in which they are often stored. BROOM avoids potentially costly delays by providing a means of organizing and storing mechanical and floor plan drawings in a secure, easily accessed remote database. Sampling design tools: BROOM provides an array of tools to answer the questions of where to sample and how many samples to take. In addition to simple judgmental and random sampling plans, the software includes two sophisticated methods of adaptively developing a sampling strategy. Both tools strive to choose sampling locations that best satisfy a specified objective (e.g., minimizing kriging variance) but use numerically different strategies to do so. Surface samples are collected early in the restoration process to characterize the extent of contamination, and then again later to verify that the facility is safe to reenter. BROOM supports sample collection using a ruggedized PDA equipped with a barcode scanner and laser range finder. The PDA displays building floor drawings, sampling plans, and electronic forms for data entry. Barcodes are placed on sample containers to track each specimen and link acquisition data (e.g., location, surface type, texture) to laboratory results. Sample location is determined by activating the integrated laser
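A small sketch of variance-driven adaptive sampling in the spirit of BROOM's "minimize kriging variance" objective: the next sample is placed where the kriging (Gaussian-process) predictive variance is highest. The Gaussian kernel, its length scale, and the toy floor grid are assumptions for illustration, not BROOM's actual model.

```python
import numpy as np

# Illustrative adaptive-sampling step (assumed kernel and grid, not BROOM):
# choose the candidate location with the largest simple-kriging variance.

def kernel(a, b, length=2.0):
    d = np.linalg.norm(np.asarray(a, dtype=float) - np.asarray(b, dtype=float))
    return np.exp(-0.5 * (d / length) ** 2)

def predictive_variance(x, sampled, noise=1e-6):
    # simple-kriging variance at x given already-sampled locations
    K = np.array([[kernel(s, t) for t in sampled] for s in sampled])
    K += noise * np.eye(len(sampled))
    k = np.array([kernel(x, s) for s in sampled])
    return 1.0 - k @ np.linalg.solve(K, k)

# candidate grid over a 10 m x 10 m floor area; two samples taken so far
candidates = [(i, j) for i in range(11) for j in range(11)]
sampled = [(0.0, 0.0), (10.0, 10.0)]
next_loc = max(candidates, key=lambda x: predictive_variance(x, sampled))
# the chosen location lies in a corner far from both existing samples
```

Repeating this pick-then-sample loop greedily drives down the map-wide kriging variance, which is one plausible reading of the adaptive strategy the entry describes.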
An Optimization Model for Plug-In Hybrid Electric Vehicles
Malikopoulos, Andreas; Smith, David E
2011-01-01
The necessity for environmentally conscious vehicle designs in conjunction with increasing concerns regarding U.S. dependency on foreign oil and climate change have induced significant investment towards enhancing the propulsion portfolio with new technologies. More recently, plug-in hybrid electric vehicles (PHEVs) have held great intuitive appeal and have attracted considerable attention. PHEVs have the potential to reduce petroleum consumption and greenhouse gas (GHG) emissions in the commercial transportation sector. They are especially appealing in situations where daily commuting is within a small amount of miles with excessive stop-and-go driving. The research effort outlined in this paper aims to investigate the implications of motor/generator and battery size on fuel economy and GHG emissions in a medium-duty PHEV. An optimization framework is developed and applied to two different parallel powertrain configurations, e.g., pre-transmission and post-transmission, to derive the optimal design with respect to motor/generator and battery size. A comparison between the conventional and PHEV configurations with equivalent size and performance under the same driving conditions is conducted, thus allowing an assessment of the fuel economy and GHG emissions potential improvement. The post-transmission parallel configuration yields higher fuel economy and less GHG emissions compared to pre-transmission configuration partly attributable to the enhanced regenerative braking efficiency.
Applying the Battery Ownership Model in Pursuit of Optimal Battery Use Strategies
Office of Energy Efficiency and Renewable Energy (EERE)
Presentation from the 2012 DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting (es123_neubauer_2012_o.pdf).
Optimization of Depletion Modeling and Simulation for the High...
Office of Scientific and Technical Information (OSTI)
for the high-fidelity modeling and simulation of the ... Mathematics and Computation (M&C), Supercomputing in Nuclear Applications (SNA) and the Monte Carlo (MC) Method, ...
Hour-by-Hour Cost Modeling of Optimized Central Wind-Based Water Electrolysis Production
Office of Energy Efficiency and Renewable Energy (EERE)
Presentation slides from the U.S. Department of Energy Fuel Cell Technologies Office webinar "Wind-to-Hydrogen Cost Modeling and Project Findings," held on January 17, 2013.
Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional- to global-scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors, or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization than when using parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
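The single-site vs. cross-site comparison can be sketched with synthetic data. The linear light-use-efficiency relation, the driver ranges, and the noise level below are invented stand-ins for the CFLUX setup; the point is only the mechanics of fitting one parameter per site versus pooling all sites.

```python
import numpy as np

# Illustrative sketch (synthetic data, not CFLUX): fit a light-use-
# efficiency parameter per site vs. one pooled cross-site parameter,
# then compare prediction error at a withheld site.

rng = np.random.default_rng(0)
true_eps = 0.48                    # shared "biome" parameter (assumed)
sites = []
for _ in range(4):
    apar = rng.uniform(5, 30, size=50)              # absorbed-PAR driver
    gpp = true_eps * apar + rng.normal(0, 0.5, size=50)  # noisy flux
    sites.append((apar, gpp))

# single-site least-squares fits (slope through the origin)
site_eps = [float(np.sum(a * g) / np.sum(a * a)) for a, g in sites]

# cross-site fit pools all observations
A = np.concatenate([a for a, _ in sites])
G = np.concatenate([g for _, g in sites])
cross_eps = float(np.sum(A * G) / np.sum(A * A))

def rmse(eps, a, g):
    return float(np.sqrt(np.mean((g - eps * a) ** 2)))

# predict site 0 with a parameter fitted only at site 1 (single-site
# transfer) vs. with the pooled cross-site parameter
transfer_err = rmse(site_eps[1], *sites[0])
cross_err = rmse(cross_eps, *sites[0])
```

With more pooled observations the cross-site estimate has lower variance, which is one mechanism behind the paper's finding that cross-site parameters transfer better.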
Optimal bispectrum constraints on single-field models of inflation
Anderson, Gemma J.; Regan, Donough; Seery, David
2014-07-01
We use WMAP 9-year bispectrum data to constrain the free parameters of an 'effective field theory' describing fluctuations in single-field inflation. The Lagrangian of the theory contains a finite number of operators associated with unknown mass scales. Each operator produces a fixed bispectrum shape, which we decompose into partial waves in order to construct a likelihood function. Based on this likelihood we are able to constrain four linearly independent combinations of the mass scales. As an example of our framework we specialize our results to the case of 'Dirac-Born-Infeld' and 'ghost' inflation and obtain the posterior probability for each model, which in Bayesian schemes is a useful tool for model comparison. Our results suggest that DBI-like models with two or more free parameters are disfavoured by the data by comparison with single-parameter models in the same class.
A system-level cost-of-energy wind farm layout optimization with landowner modeling
Chen, Le [Ames Laboratory]; MacDonald, Erin [Ames Laboratory]
2013-10-01
This work applies an enhanced levelized wind farm cost model, including landowner remittance fees, to determine optimal turbine placements under three landowner participation scenarios and two land-plot shapes. Instead of assuming that a continuous piece of land is available for wind farm construction, as in most layout optimizations, the problem formulation represents landowner participation scenarios as a binary string variable, along with the number of turbines. The cost parameters and model are a combination of models from the National Renewable Energy Laboratory (NREL), Lawrence Berkeley National Laboratory, and Windustry. The system-level cost-of-energy (COE) optimization model is also tested under two land-plot shapes: equally sized square land plots and unequal rectangular land plots. The optimal COE results are compared to actual COE data and found to be realistic. The results show that landowner remittances account for approximately 10% of farm operating costs across all cases. Irregular land-plot shapes are easily handled by the model. We find that larger land plots do not necessarily receive higher remittance fees. The model can help site developers identify the most crucial land plots for project success and the optimal positions of turbines, with realistic estimates of costs and profitability.
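A toy version of the formulation described in the entry: landowner participation encoded as a binary string, remittance fees folded into a levelized cost of energy, and the string enumerated to find the cheapest farm. Every number (energy yields, remittances, fixed cost) is invented for illustration; the real model's cost structure is far richer.

```python
from itertools import product

# Hypothetical sketch of the binary-participation idea, with invented
# per-plot energy yields and landowner remittance fees.

plots = [  # (annual energy per turbine, MWh; landowner remittance, $/yr)
    (3000, 9000),
    (2800, 7000),
    (2600, 5000),
]
FIXED_COST = 400_000   # assumed annualized capital + O&M per turbine ($)

def coe(participation):
    # one turbine per participating plot; COE = annual cost / annual energy
    energy = sum(e for (e, _), on in zip(plots, participation) if on)
    cost = sum(FIXED_COST + r for (_, r), on in zip(plots, participation) if on)
    return cost / energy if energy else float("inf")

best = min(product([0, 1], repeat=len(plots)), key=coe)
```

In this toy instance only the windiest plot is worth leasing: adding a plot whose stand-alone COE exceeds the current farm average raises the overall COE, which is why the optimizer prunes participation rather than maximizing it.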
THE APPLICATION OF AN EVOLUTIONARY ALGORITHM TO THE OPTIMIZATION OF A MESOSCALE METEOROLOGICAL MODEL
Werth, D.; O'Steen, L.
2008-02-11
We show that a simple evolutionary algorithm can optimize a set of mesoscale atmospheric model parameters with respect to agreement between the mesoscale simulation and a limited set of synthetic observations. This is illustrated using the Regional Atmospheric Modeling System (RAMS). A set of 23 RAMS parameters is optimized by minimizing a cost function based on the root mean square (rms) error between the RAMS simulation and synthetic data (observations derived from a separate RAMS simulation). We find that the optimization can be efficient with relatively modest computer resources, thus operational implementation is possible. The optimization efficiency, however, is found to depend strongly on the procedure used to perturb the 'child' parameters relative to their 'parents' within the evolutionary algorithm. In addition, the meteorological variables included in the rms error and their weighting are found to be an important factor with respect to finding the global optimum.
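The entry's method can be sketched as a small elitist evolutionary algorithm minimizing an rms-error cost over a parameter vector, with the child-perturbation scale as the knob the authors found critical. The five-element target vector and the annealing schedule are synthetic assumptions; this is not RAMS or its 23-parameter set.

```python
import math
import random

# Minimal elitist EA (synthetic target, not RAMS): minimize rms error
# between candidate parameters and a "truth" vector; sigma controls how
# strongly children are perturbed relative to their parents.

random.seed(1)
TARGET = [0.3, -1.2, 2.5, 0.0, 1.1]   # synthetic "true" parameters

def cost(params):
    return math.sqrt(sum((p - t) ** 2 for p, t in zip(params, TARGET))
                     / len(TARGET))

def evolve(pop_size=20, generations=200, sigma0=0.3, decay=0.98):
    pop = [[random.uniform(-3, 3) for _ in TARGET] for _ in range(pop_size)]
    for g in range(generations):
        sigma = sigma0 * decay ** g          # anneal the perturbation scale
        pop.sort(key=cost)
        parents = pop[: pop_size // 2]       # truncation selection, elitist
        children = [[p + random.gauss(0, sigma)
                     for p in random.choice(parents)]
                    for _ in range(pop_size // 2)]
        pop = parents + children
    return min(pop, key=cost)

best = evolve()
```

Because the elite half survives unchanged, the best cost is non-increasing; the decay schedule plays the role of the perturbation-procedure tuning the abstract reports as decisive for efficiency.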
A MILP-Based Distribution Optimal Power Flow Model for Microgrid Operation
Liu, Guodong; Starke, Michael R; Zhang, Xiaohu; Tomsovic, Kevin
2016-01-01
This paper proposes a distribution optimal power flow (D-OPF) model for the operation of microgrids. The proposed model minimizes not only the operating cost, including fuel cost, purchasing cost, and demand charge, but also several performance indices, including voltage deviation, network power loss, and power factor. It co-optimizes the real and reactive power from distributed generators (DGs) and batteries considering their capacity and power factor limits. The D-OPF is formulated as a mixed-integer linear program (MILP). Numerical simulation results show the effectiveness of the proposed model.
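A toy rendering of the mixed-integer flavor of such a model: a binary DG commitment variable plus a dispatch level, chosen to minimize fuel plus grid-purchase cost plus a loss proxy. Brute-force enumeration stands in for a MILP solver, and every parameter is invented; the paper's actual D-OPF has full network constraints.

```python
# Toy D-OPF-style dispatch (invented numbers, brute force in place of a
# MILP solver): pick DG on/off status and output to serve a fixed load
# at minimum fuel + purchase cost + quadratic loss proxy on grid import.

LOAD = 120.0                # kW
GRID_PRICE = 0.12           # $/kWh purchased from the grid
DG_COST = 0.09              # $/kWh DG fuel
DG_MIN, DG_MAX = 30.0, 80.0 # DG operating range when committed
LOSS_W = 0.0005             # weight on loss proxy (grid import squared)

def objective(dg_kw):
    grid_kw = LOAD - dg_kw
    return DG_COST * dg_kw + GRID_PRICE * grid_kw + LOSS_W * grid_kw ** 2

best = None
for dg_on in (0, 1):        # the integer (commitment) variable
    levels = ([0.0] if not dg_on
              else [DG_MIN + i for i in range(int(DG_MAX - DG_MIN) + 1)])
    for dg_kw in levels:
        cand = (objective(dg_kw), dg_on, dg_kw)
        if best is None or cand[0] < best[0]:
            best = cand
```

With DG fuel cheaper than grid energy and losses penalizing imports, the optimum commits the DG at its upper limit, mirroring how the MILP trades cost terms against the loss index.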
Optimization of large-scale heterogeneous system-of-systems models.
Parekh, Ojas; Watson, Jean-Paul; Phillips, Cynthia Ann; Siirola, John; Swiler, Laura Painton; Hough, Patricia Diane; Lee, Herbert K. H.; Hart, William Eugene; Gray, Genetha Anne; Woodruff, David L.
2012-01-01
Decision makers increasingly rely on large-scale computational models to simulate and analyze complex man-made systems. For example, computational models of national infrastructures are being used to inform government policy, assess economic and national security risks, evaluate infrastructure interdependencies, and plan for the growth and evolution of infrastructure capabilities. A major challenge for decision makers is the analysis of national-scale models that are composed of interacting systems: effective integration of system models is difficult, there are many parameters to analyze in these systems, and fundamental modeling uncertainties complicate analysis. This project is developing optimization methods to effectively represent and analyze large-scale heterogeneous system of systems (HSoS) models, which have emerged as a promising approach for describing such complex man-made systems. These optimization methods enable decision makers to predict future system behavior, manage system risk, assess tradeoffs between system criteria, and identify critical modeling uncertainties.
Optimization of a Two-Fluid Hydrodynamic Model of Churn-Turbulent Flow
Donna Post Guillen
2009-07-01
A hydrodynamic model of two-phase, churn-turbulent flows is being developed using the computational multiphase fluid dynamics (CMFD) code NPHASE-CMFD. The numerical solutions obtained by this model are compared with experimental data obtained at the TOPFLOW facility of the Institute of Safety Research at the Forschungszentrum Dresden-Rossendorf. The TOPFLOW data are a high-quality experimental database of upward, co-current air-water flows in a vertical pipe, suitable for validation of computational fluid dynamics (CFD) codes. A five-field CMFD model was developed for the continuous liquid phase and four bubble size groups using mechanistic closure models for the ensemble-averaged Navier-Stokes equations. Mechanistic models for the drag and non-drag interfacial forces are implemented to capture the governing physics of the hydrodynamic forces controlling the gas distribution. The closure models provide the functional form of the interfacial forces, with user-defined coefficients to adjust the force magnitude. An optimization strategy for these coefficients was devised using commercial design optimization software, demonstrating an approach to tuning CMFD model parameters via design optimization. Computed radial void fraction profiles predicted by the NPHASE-CMFD code are compared to experimental data for the four bubble size groups.
Simulation and optimization of pressure swing adsorption systems using reduced-order modeling
Agarwal, A.; Biegler, L.; Zitney, S.
2009-01-01
Over the past three decades, pressure swing adsorption (PSA) processes have been widely used as energy-efficient gas separation techniques, especially for high-purity hydrogen purification from refinery gases. Models for PSA processes are multiple instances of partial differential equations (PDEs) in time and space with periodic boundary conditions that link the processing steps together. The solution of this coupled stiff PDE system is governed by steep fronts moving with time. As a result, the optimization of such systems represents a significant computational challenge to current differential algebraic equation (DAE) optimization techniques and nonlinear programming algorithms. Model reduction is one approach to generating cost-efficient low-order models that can serve as surrogates in the optimization problems. This study develops a reduced-order model (ROM) based on proper orthogonal decomposition (POD), a low-dimensional approximation to a dynamic PDE-based model. The proposed method leads to a DAE system of significantly lower order, replacing the one obtained from spatial discretization and making the optimization problem computationally efficient. The method has been applied to the dynamic coupled PDE-based model of a two-bed, four-step PSA process for separation of hydrogen from methane. Separate ROMs have been developed for each operating step, with different POD modes for each, achieving a significant reduction in the number of state variables. The reduced-order model has been successfully used to maximize hydrogen recovery by manipulating operating pressures, step times, and feed and regeneration velocities, while meeting product purity and tight bounds on these parameters. Current results indicate that the proposed ROM methodology is a promising surrogate modeling technique for cost-effective optimization.
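The POD step behind such a ROM can be sketched in a few lines: assemble a snapshot matrix, take its SVD, and keep the leading left singular vectors as the reduced basis. The synthetic field below is exactly rank 3, so three modes reconstruct it almost perfectly; the steep moving fronts of a real PSA column require many more modes, which is precisely the challenge the paper addresses.

```python
import numpy as np

# Sketch of POD basis extraction (synthetic rank-3 field, not the PSA
# model): snapshots -> SVD -> truncated basis -> reduced coordinates.

x = np.linspace(0.0, 1.0, 200)          # spatial grid
t = np.linspace(0.0, 1.0, 50)           # snapshot times
snapshots = (np.outer(np.sin(np.pi * x), np.exp(-t))
             + 0.5 * np.outer(np.sin(2 * np.pi * x), np.cos(3 * t))
             + 0.1 * np.outer(np.sin(3 * np.pi * x), t))   # 200 x 50

U, s, Vt = np.linalg.svd(snapshots, full_matrices=False)
r = 3                                   # retained POD modes
Ur = U[:, :r]                           # POD basis (200 x 3)
reduced = Ur.T @ snapshots              # reduced coordinates (3 x 50)
rel_err = (np.linalg.norm(snapshots - Ur @ reduced)
           / np.linalg.norm(snapshots))
```

Projecting the governing equations onto `Ur` is what turns the large spatially discretized DAE system into the low-order one the abstract describes.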
Optimization of ultrasonic array inspections using an efficient hybrid model and real crack shapes
Felice, Maria V.; Velichko, Alexander; Wilcox, Paul D.; Barden, Tim; Dunhill, Tony
2015-03-31
Models which simulate the interaction of ultrasound with cracks can be used to optimize ultrasonic array inspections, but this approach can be time-consuming. To overcome this issue, an efficient hybrid model is implemented which includes a finite element method that requires only a single layer of elements around the crack shape. Scattering matrices are used to capture the scattering behavior of the individual cracks, and a discussion of the angular degrees of freedom of elastodynamic scatterers is included. Real crack shapes are obtained from X-ray computed tomography images of cracked parts, and these shapes are input into the hybrid model. The effect of using real crack shapes instead of straight notch shapes is demonstrated. An array optimization methodology which incorporates the hybrid model, an approximate single-scattering relative noise model, and the real crack shapes is then described.
A partitioner-centric model for SAMR partitioning trade-off optimization : Part II.
Steensland, Johan; Ray, Jaideep
2004-03-01
Optimal partitioning of structured adaptive mesh applications necessitates dynamically determining and optimizing for the most time-inhibiting factor, such as data migration and communication volume. However, trivially monitoring an application evaluates the current partitioning rather than the inherent properties of the grid hierarchy. We present a model that, given a structured adaptive grid, determines ab initio to what extent the partitioner should focus on reducing data migration in order to reduce execution time. This model contributes to the meta-partitioner, our ultimate aim of being able to select and configure the optimal partitioner based on the dynamic properties of the grid hierarchy and the computer. We validate the predictions of this model by comparing them with actual measurements (via traces) from four different adaptive simulations. The results show that the proposed model generally captures the inherent optimization needs of SAMR applications. We conclude that our model is a useful contribution, since tracking and adapting to the dynamic behavior of such applications leads to potentially large decreases in execution times.
Suthar, B; Northrop, PWC; Braatz, RD; Subramanian, VR
2014-07-30
This paper illustrates the application of dynamic optimization to obtaining the optimal current profile for charging a lithium-ion battery while restricting the intercalation-induced stresses to a pre-determined limit estimated using a pseudo-two-dimensional (P2D) model. The paper focuses on the problem of maximizing the charge stored in a given time while restricting capacity fade due to intercalation-induced stresses. Conventional charging profiles for lithium-ion batteries (e.g., constant current followed by constant voltage, or CC-CV) are not derived by considering capacity fade mechanisms; they are not only inefficient in terms of battery lifetime but also slower, because they do not take into account the changing dynamics of the system.
Equation-based languages – A new paradigm for building energy modeling, simulation and optimization
Wetter, Michael; Bonvini, Marco; Nouidui, Thierry S.
2016-04-01
Most state-of-the-art building simulation programs implement models in imperative programming languages. This complicates modeling and excludes the use of certain efficient methods for simulation and optimization. In contrast, equation-based modeling languages declare relations among variables, thereby allowing the use of computer algebra to enable much simpler schematic modeling and to generate efficient code for simulation and optimization. We contrast the two approaches in this paper and explain how such manipulations support new use cases. In the first of two examples, we couple models of the electrical grid, multiple buildings, HVAC systems, and controllers to test a controller that adjusts building room temperatures and PV inverter reactive power to maintain power quality. In the second example, we contrast the computing time for solving an optimal control problem for a room-level model predictive controller with and without symbolic manipulations. Exploiting the equation-based language led to a 2,200-times faster solution.
Reduced order model based on principal component analysis for process simulation and optimization
Lang, Y.; Malacina, A.; Biegler, L.; Munteanu, S.; Madsen, J.; Zitney, S.
2009-01-01
It is well known that distributed-parameter computational fluid dynamics (CFD) models provide more accurate results than the conventional, lumped-parameter unit operation models used in process simulation. Consequently, the use of CFD models in process/equipment co-simulation offers the potential to optimize overall plant performance with respect to complex thermal and fluid flow phenomena. Because solving CFD models is time-consuming compared to the overall process simulation, we consider the development of fast reduced-order models (ROMs) based on CFD results to closely approximate the high-fidelity equipment models in the co-simulation. Considering process equipment items with complicated geometries and detailed thermodynamic property models, this study proposes a strategy to develop ROMs based on principal component analysis (PCA). Taking advantage of commercial process simulation and CFD software (for example, Aspen Plus and FLUENT), we are able to develop systematic CFD-based ROMs for equipment models in an efficient manner. In particular, we show that the validity of the ROM is more robust within a well-sampled input domain and that the CPU time is significantly reduced: it typically takes at most several CPU seconds to evaluate the ROM, compared to several CPU hours or more to solve the CFD model. Two case studies, involving two power plant equipment examples, are described and demonstrate the benefits of the proposed ROM methodology for process simulation and optimization.
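The PCA-surrogate workflow described above can be sketched end to end: sample an "expensive" model over an input domain, compress the output fields with PCA, and fit a cheap map from inputs to the reduced coordinates. The quadratic test function below stands in for a CFD run and is purely an assumption, as are the feature choice and sample counts.

```python
import numpy as np

# Sketch of a PCA-based ROM (toy stand-in for a CFD model): sample,
# compress output fields with PCA, regress reduced coordinates on
# simple input features, then evaluate the surrogate cheaply.

rng = np.random.default_rng(3)

def expensive_model(u):            # scalar input -> 100-point "field"
    z = np.linspace(0, 1, 100)
    return np.sin(np.pi * z) * u + 0.3 * z * u ** 2

inputs = rng.uniform(0.5, 2.0, size=40)
fields = np.column_stack([expensive_model(u) for u in inputs])  # 100 x 40

mean = fields.mean(axis=1, keepdims=True)
U, s, Vt = np.linalg.svd(fields - mean, full_matrices=False)
Ur = U[:, :2]                                 # 2 principal components
coords = Ur.T @ (fields - mean)               # reduced coordinates (2 x 40)

# regress coordinates on features [1, u, u^2]
F = np.column_stack([np.ones_like(inputs), inputs, inputs ** 2])
W, *_ = np.linalg.lstsq(F, coords.T, rcond=None)

def rom(u):                                   # fast surrogate evaluation
    return mean[:, 0] + Ur @ (np.array([1.0, u, u ** 2]) @ W)

err = np.linalg.norm(rom(1.3) - expensive_model(1.3))
```

Because the toy field lies exactly in a two-dimensional subspace, two components suffice; for a real CFD response the number of retained components is chosen from the singular-value decay, and the ROM is only trusted inside the sampled input domain, as the abstract cautions.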
Model Predictive Control-based Optimal Coordination of Distributed Energy Resources
Mayhorn, Ebony T.; Kalsi, Karanjit; Lian, Jianming; Elizondo, Marcelo A.
2013-01-07
Distributed energy resources, such as renewable energy resources (wind, solar), energy storage and demand response, can be used to complement conventional generators. The uncertainty and variability due to high penetration of wind makes reliable system operations and controls challenging, especially in isolated systems. In this paper, an optimal control strategy is proposed to coordinate energy storage and diesel generators to maximize wind penetration while maintaining system economics and normal operation performance. The goals of the optimization problem are to minimize fuel costs and maximize the utilization of wind while considering equipment life of generators and energy storage. Model predictive control (MPC) is used to solve a look-ahead dispatch optimization problem and the performance is compared to an open loop look-ahead dispatch problem. Simulation studies are performed to demonstrate the efficacy of the closed loop MPC in compensating for uncertainties and variability caused in the system.
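The receding-horizon idea in the entry above can be sketched with a toy microgrid: at each step, search a short look-ahead horizon for the diesel schedule that minimizes fuel cost while the battery absorbs the wind/load mismatch, then apply only the first move. The net-load forecast, diesel setpoints, battery capacity, and fuel price are all invented, and brute-force enumeration replaces a real optimizer.

```python
from itertools import product

# Toy look-ahead (MPC-style) dispatch with invented numbers: brute-force
# the diesel schedule over a short horizon, apply the first setpoint,
# advance the battery state of charge, and repeat.

net_load = [40, 10, 60, 30, 50, 20]   # load minus wind forecast (kW per step)
DIESEL_LEVELS = [0, 20, 40, 60]       # allowed diesel setpoints (kW)
BATT_CAP, H, FUEL = 50.0, 3, 0.3      # battery kWh cap, horizon, $/kWh fuel

def plan(soc, forecast):
    """Best first diesel setpoint over the look-ahead horizon."""
    best = None
    for schedule in product(DIESEL_LEVELS, repeat=len(forecast)):
        s, cost, feasible = soc, 0.0, True
        for d, nl in zip(schedule, forecast):
            s += d - nl               # battery absorbs the mismatch
            if not 0.0 <= s <= BATT_CAP:
                feasible = False
                break
            cost += FUEL * d
        if feasible and (best is None or cost < best[0]):
            best = (cost, schedule[0])
    return best[1]

soc, dispatch = 25.0, []
for k in range(len(net_load)):
    d = plan(soc, net_load[k:k + H])  # closed loop: re-plan every step
    soc += d - net_load[k]
    dispatch.append(d)
```

Re-planning each step with the updated state of charge is what lets the closed-loop controller compensate for wind variability, the property the paper contrasts against an open-loop look-ahead dispatch.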
ARRAY OPTIMIZATION FOR TIDAL ENERGY EXTRACTION IN A TIDAL CHANNEL A NUMERICAL MODELING ANALYSIS
Yang, Zhaoqing; Wang, Taiping; Copping, Andrea
2014-04-18
This paper presents an application of a hydrodynamic model to simulate tidal energy extraction in a tidally dominated estuary on the Pacific Northwest coast. A series of numerical experiments were carried out to simulate tidal energy extraction with different turbine array configurations, including location, spacing, and array size. Preliminary model results suggest that array optimization for tidal energy extraction at a real-world site is a very complex process that requires consideration of multiple factors. Numerical models can be used effectively to assist turbine siting and array arrangement in a tidal turbine farm for tidal energy extraction.
Observations on the Optimality Tolerance in the CAISO 33% RPS Model
Yao, Y; Meyers, C; Schmidt, A; Smith, S; Streitz, F
2011-09-22
In 2008 Governor Schwarzenegger of California issued an executive order requiring that 33 percent of all electricity in the state in the year 2020 should come from renewable resources such as wind, solar, geothermal, biomass, and small hydroelectric facilities. This 33% renewable portfolio standard (RPS) was further codified and signed into law by Governor Brown in 2011. To assess the market impacts of such a requirement, the California Public Utilities Commission (CPUC) initiated a study to quantify the cost, risk, and timing of achieving a 33% RPS by 2020. The California Independent System Operator (CAISO) was contracted to manage this study. The production simulation model used in this study was developed using the PLEXOS software package, which allows energy planners to optimize long-term system planning decisions under a wide variety of system constraints. In this note we describe our observations on varying the optimality tolerance in the CAISO 33% RPS model. In particular, we observe that loosening the optimality tolerance from 0.05% to 0.5% yields solutions more than five times faster on average, while producing very similar solutions with a negligible difference in overall distance from optimality.
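The mechanism behind this observation can be sketched on a toy problem: in branch and bound, a looser optimality ("MIP gap") tolerance prunes any node that cannot beat the incumbent by more than the gap, so fewer nodes are explored while the returned solution is guaranteed to be within the gap of optimal. The knapsack data below is invented purely for illustration.

```python
VALUES  = [10, 13, 7, 8, 12, 6, 9, 11]
WEIGHTS = [ 5,  7, 3, 4,  6, 3, 5,  6]
CAP = 17

# pre-sort items by value density for the fractional-relaxation bound
order = sorted(range(len(VALUES)), key=lambda j: -VALUES[j] / WEIGHTS[j])
V = [VALUES[j] for j in order]
W = [WEIGHTS[j] for j in order]

def frac_bound(i, cap, acc):
    """Upper bound: fill remaining capacity with items taken fractionally."""
    for j in range(i, len(V)):
        if W[j] <= cap:
            cap -= W[j]
            acc += V[j]
        else:
            return acc + V[j] * cap / W[j]
    return acc

def branch_and_bound(tol):
    best, nodes = 0.0, 0
    stack = [(0, CAP, 0.0)]   # (next item index, remaining capacity, value so far)
    while stack:
        i, cap, val = stack.pop()
        nodes += 1
        best = max(best, val)
        if i == len(V):
            continue
        if frac_bound(i, cap, val) <= best * (1.0 + tol):
            continue          # node cannot beat incumbent by more than the gap
        stack.append((i + 1, cap, val))                    # branch: skip item i
        if W[i] <= cap:
            stack.append((i + 1, cap - W[i], val + V[i]))  # branch: take item i
    return best, nodes

tight_val, tight_nodes = branch_and_bound(0.0005)  # 0.05% gap
loose_val, loose_nodes = branch_and_bound(0.005)   # 0.5% gap
```

On a problem this small both tolerances recover the optimum; on production-scale models like the CAISO RPS runs, the extra pruning from the looser gap is what produces the large speedups reported above.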
Hour-by-Hour Cost Modeling of Optimized Central Wind-Based Water Electrolysis Production
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Hour-by-Hour Cost Modeling of Optimized Central Wind-Based Water Electrolysis Production. Genevieve Saur (PI), Chris Ainscough (Presenter), Kevin Harrison, Todd Ramsden; National Renewable Energy Laboratory; January 17, 2013. This presentation does not contain any proprietary, confidential, or otherwise restricted information. Acknowledgements: this work was made possible by support from the U.S. Department of Energy's Fuel Cell Technologies Office within the Office of Energy Efficiency and Renewable Energy.
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
in PEM Fuel Cells: Advanced Modeling, Material Selection, Testing, and Design Optimization. J. Vernon Cole and Ashok Gidwani, CFDRC. Prepared for the DOE Hydrogen Fuel Cell Kickoff Meeting, February 13, 2007. Background: water management issues arise from the generation of water by the cathodic reaction, membrane humidification requirements, and capillary-pressure-driven transport through porous MEA and GDL materials.
Model Based Optimal Sensor Network Design for Condition Monitoring in an IGCC Plant
Kumar, Rajeeva; Kumar, Aditya; Dai, Dan; Seenumani, Gayathri; Down, John; Lopez, Rodrigo
2012-12-31
This report summarizes the achievements and final results of this program. The objective of this program is to develop a general model-based sensor network design methodology and tools to address key issues in the design of an optimal sensor network configuration: the type, location, and number of sensors used in a network for online condition monitoring. In particular, the focus of this work is to develop software tools for optimal sensor placement (OSP) and to use these tools to design optimal sensor network configurations for online condition monitoring of gasifier refractory wear and radiant syngas cooler (RSC) fouling. The methodology developed will be applicable to sensing system design for online condition monitoring in a broad range of applications. The overall approach consists of (i) defining condition monitoring requirements in terms of OSP and mapping these requirements into mathematical terms for the OSP algorithm, (ii) analyzing the trade-offs of alternate OSP algorithms, down-selecting the most relevant ones, and developing them for IGCC applications, (iii) enhancing the gasifier and RSC models as required by the OSP algorithms, and (iv) applying the developed OSP algorithm to design the optimal sensor network required for condition monitoring of IGCC gasifier refractory and RSC fouling. Two key requirements of OSP for condition monitoring are the desired precision for the monitoring variables (e.g., refractory wear) and the reliability of the proposed sensor network in the presence of expected sensor failures. The OSP problem is naturally posed within a Kalman filtering approach as an integer programming problem where the key requirements of precision and reliability are imposed as constraints. The optimization is performed over the overall network cost. Based on an extensive literature survey, two formulations were identified as being relevant to OSP for condition monitoring: one based on an LMI formulation and the other a standard INLP formulation. Various algorithms to solve
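A common, much simpler relative of the integer-programming OSP formulation above is greedy sensor selection: add sensors one at a time to minimize the trace of the posterior error covariance (the "precision" requirement). The sketch below does this for a static two-state estimate; the candidate sensors, noise variances, and costs are all invented.

```python
# candidate sensors: name -> (measurement vector h, noise variance r, cost)
SENSORS = {
    "A": ([1.0, 0.0], 0.5, 1.0),
    "B": ([0.0, 1.0], 0.5, 1.0),
    "C": ([1.0, 1.0], 0.2, 2.0),
    "D": ([1.0, -1.0], 1.0, 0.5),
}
P0_INV = [[0.1, 0.0], [0.0, 0.1]]   # weak prior precision on the 2 states

def inv2(m):
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [[ m[1][1] / det, -m[0][1] / det],
            [-m[1][0] / det,  m[0][0] / det]]

def post_trace(chosen):
    """Trace of posterior covariance: prior precision + sum of h h^T / r."""
    info = [row[:] for row in P0_INV]
    for name in chosen:
        h, r, _ = SENSORS[name]
        for i in range(2):
            for j in range(2):
                info[i][j] += h[i] * h[j] / r
    cov = inv2(info)
    return cov[0][0] + cov[1][1]

def greedy_place(k):
    chosen = []
    for _ in range(k):
        rest = [s for s in SENSORS if s not in chosen]
        chosen.append(min(rest, key=lambda s: post_trace(chosen + [s])))
    return chosen

picked = greedy_place(2)
```

The report's INLP/LMI formulations additionally impose reliability under sensor failures and optimize network cost; the greedy trace-minimization here is only the precision half of that trade-off.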
Dynamic optimization model of energy related economic planning and development for the Navajo nation
Beladi, S.A.
1983-01-01
The Navajo reservation located in portions of Arizona, New Mexico and Utah is rich in low sulfur coal deposits, ideal for strip mining operations. The Navajo Nation has been leasing the mineral resources to non-Indian enterprises for purposes of extraction. Since the early 1950s the Navajo Nation has entered into extensive coal leases with several large companies and utilities. Contracts have committed huge quantities of Navajo coal for mining. This research was directed at evaluating the shadow prices of Navajo coal and identifying optimal coal extraction paths. An economic model of coal resource extraction over time was structured within an optimal control theory framework. The control problem was formulated as a discrete dynamic optimization problem. A comparison of the shadow prices of coal deposits derived from the dynamic model with the royalty payments the tribe receives on the basis of the present long-term lease contracts indicates that, in most cases, the tribe is paid considerably less than the amount of royalty projected by the model. Part of these discrepancies may be explained in terms of the low coal demand at the time of leasing and of greater uncertainties with respect to the geologic information and other risks associated with mining operations. However, changes in the demand for coal with rigidly fixed royalty rates will lead to non-optimal extraction of coal. A corrective tax scheme is suggested on the basis of the results of this research. The proposed tax per unit of coal shipped from a site is the difference between the shadow price and the present royalty rate. The estimated tax rates over time are derived.
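The core idea, a shadow price from a discrete dynamic optimization, can be shown in a few lines: solve a finite-horizon extraction problem by dynamic programming and approximate the shadow price of the stock as the marginal value of one more unit in the ground. The prices, costs, and horizon below are invented, not the study's data.

```python
from functools import lru_cache

T, STOCK = 4, 6                      # periods, initial units of coal in the ground
PRICE = [10.0, 11.0, 12.0, 13.0]     # assumed price path
DISCOUNT = 0.9

def cost(q):
    return q * q                     # convex extraction cost

@lru_cache(maxsize=None)
def V(t, s):
    """Maximum discounted profit from period t with s units remaining."""
    if t == T or s == 0:
        return 0.0
    return max(PRICE[t] * q - cost(q) + DISCOUNT * V(t + 1, s - q)
               for q in range(s + 1))

# shadow price: marginal value of one extra unit of the resource stock
shadow_price = V(0, STOCK) - V(0, STOCK - 1)
```

The study's policy conclusion maps directly onto this quantity: if the contracted royalty per unit is below `shadow_price`, the tribe is undercompensated relative to the model, and the proposed corrective tax is the difference.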
Optimizing the transverse thermal conductivity of 2D-SiCf/SiC composites, I. Modeling
Youngblood, Gerald E.; Senor, David J.; Jones, Russell H.
2002-12-31
For potential fusion applications, considerable fabrication efforts have been directed to obtaining transverse thermal conductivity (Keff) values in excess of 30 W/mK (unirradiated) in the 800-1000°C temperature range for 2D-SiCf/SiC composites. To gain insight into the factors affecting Keff, at PNNL we have tested three different analytic models for predicting Keff in terms of constituent (fiber, matrix and interphase) properties. The tested models were: the Hasselman-Johnson (H-J) “2-Cylinder” model, which examines the effects of fiber-matrix (f/m) thermal barriers; the Markworth “3-Cylinder” model, which specifically examines the effects of interphase thickness and thermal conductivity; and a newly-developed Anisotropic “3-Square” model, which examines the potential effect of introducing a fiber coating with anisotropic properties to enhance (or diminish) f/m thermal coupling. The first two models are effective medium models, while the third model is a simple combination of parallel and series conductances. Model predictions suggest specific designs and/or development efforts directed to optimize the overall thermal transport performance of 2D-SiCf/SiC.
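The "simple combination of parallel and series conductances" behind the 3-Square model can be sketched generically. This is not the Hasselman-Johnson or Markworth formula, and all conductivities and layer fractions below are invented, but it shows how an interphase layer in series can throttle the transverse path through a high-conductivity fiber.

```python
def series_k(layers):
    """Effective conductivity of (thickness_fraction, k) layers in series."""
    return 1.0 / sum(f / k for f, k in layers)

def parallel_k(paths):
    """Effective conductivity of (area_fraction, k) paths in parallel."""
    return sum(a * k for a, k in paths)

# transverse path through a fiber: matrix / interphase / fiber / interphase / matrix
k_matrix, k_fiber, k_interphase = 12.0, 40.0, 2.0   # W/m-K, illustrative only
through_fiber = series_k([(0.2, k_matrix), (0.05, k_interphase),
                          (0.5, k_fiber), (0.05, k_interphase),
                          (0.2, k_matrix)])

# composite: paths through fibers in parallel with pure-matrix paths
k_eff = parallel_k([(0.6, through_fiber), (0.4, k_matrix)])
```

Even with a thin, low-conductivity interphase, the series combination pulls `through_fiber` well below `k_fiber`, which is why the paper focuses on fiber/matrix thermal coupling as the lever for optimizing Keff.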
Hyperparameter Optimization. In machine learning, parameters are the values that describe a machine learning model and are usually chosen by a learning algorithm. Hyperparameters, on the other hand, are parameters for the learning algorithm. The process of looking for the best hyperparameters for a machine learning algorithm is called hyperparameter optimization. We support a few pieces of software for hyperparameter optimization. Single Node: Scikit-Learn Grid Search, Random
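The two search strategies named above can be contrasted in a self-contained way without any ML library (in practice, scikit-learn's `GridSearchCV` and `RandomizedSearchCV` automate this). The toy "model" is a one-weight regressor trained by gradient descent; its learning rate and ridge penalty are the hyperparameters, and both searches spend the same evaluation budget.

```python
import random

random.seed(0)
xs = [i / 10 for i in range(20)]
ys = [2.0 * x + random.gauss(0, 0.1) for x in xs]

def val_loss(lr, alpha, epochs=200):
    """Train a slope-only ridge model by gradient descent; return final MSE."""
    w = 0.0
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs) + 2 * alpha * w
        w -= lr * grad
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# grid search: every combination of the two hyperparameters
grid = [(lr, a) for lr in (0.01, 0.05, 0.1) for a in (0.0, 0.01, 0.1)]
best_grid = min(grid, key=lambda p: val_loss(*p))

# random search: same 9-evaluation budget, sampled from continuous ranges
samples = [(random.uniform(0.01, 0.1), random.uniform(0.0, 0.1)) for _ in range(9)]
best_rand = min(samples, key=lambda p: val_loss(*p))
```

Random search covers continuous ranges rather than a fixed lattice, which is why it often matches or beats grid search when only a few hyperparameters actually matter.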
Development of an entrained flow gasifier model for process optimization study
Biagini, E.; Bardi, A.; Pannocchia, G.; Tognotti, L.
2009-10-15
Coal gasification is a versatile process for converting a solid fuel into syngas, which can be further converted and separated into hydrogen, a valuable and environmentally acceptable energy carrier. Different technologies (fixed beds, fluidized beds, entrained flow reactors) are used, operating under different conditions of temperature, pressure, and residence time. Process studies should be performed to define the best plant configurations and operating conditions. Although 'gasification models' simulating equilibrium reactors can be found in the literature, a more detailed approach is required for process analysis and optimization procedures. In this work, a gasifier model is developed using AspenPlus as a tool to be implemented in a comprehensive process model for the production of hydrogen via coal gasification. It is developed as a multizonal model by interconnecting each step of gasification (preheating, devolatilization, combustion, gasification, quench) according to the reactor configuration, that is, an entrained flow reactor. The model removes the equilibrium hypothesis by introducing the kinetics of all steps and solves the heat balance by relating the gasification temperature to the operating conditions. The model predicts the syngas composition and quantifies the heat recovery (for calculating the plant efficiency), 'byproducts', and residual char. Finally, in view of future work, the development of a 'gasifier model' instead of a 'gasification model' will allow different reactor configurations to be compared.
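The multizonal structure can be sketched as a chain of small stream transformations, one per zone, applied in the entrained-flow order. This is only a structural illustration: the conversion fractions and temperatures are placeholders, not AspenPlus kinetics.

```python
def preheat(s):
    s = dict(s); s["T"] = 600.0; return s

def devolatilize(s):
    s = dict(s)
    vol = 0.35 * s["coal"]                  # volatile yield (placeholder fraction)
    s["coal"] -= vol; s["gas"] += vol; return s

def combust(s):
    s = dict(s)
    burned = 0.2 * s["coal"]                # char burned to heat the reactor
    s["coal"] -= burned; s["gas"] += burned; s["T"] = 1800.0; return s

def gasify(s):
    s = dict(s)
    conv = 0.9 * s["coal"]                  # kinetic conversion, not equilibrium
    s["coal"] -= conv; s["gas"] += conv; s["T"] = 1400.0; return s

def quench(s):
    s = dict(s); s["T"] = 900.0; return s

stream = {"coal": 100.0, "gas": 0.0, "T": 300.0}
for zone in (preheat, devolatilize, combust, gasify, quench):
    stream = zone(stream)
residual_char = stream["coal"]
```

Chaining zones this way is what lets a 'gasifier model' report residual char and zone temperatures, quantities an equilibrium 'gasification model' cannot resolve.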
Fukazawa, Yasushi; Itoh, Ryosuke; Tokuda, Shin'ya; Finke, Justin; Stawarz, Łukasz; Tanaka, Yasuyuki
2015-01-10
We performed a systematic X-ray study of eight nearby γ-ray bright radio galaxies with Suzaku in order to understand the origins of their X-ray emissions. The Suzaku spectra for five of those have been presented previously, while the remaining three (M87, PKS 0625–354, and 3C 78) are presented here for the first time. Based on the Fe-K line strength, X-ray variability, and X-ray power-law photon indices, and using additional information on the [O III] line emission, we argue for a jet origin of the observed X-ray emission in these three sources. We also analyzed five years of Fermi Large Area Telescope (LAT) GeV gamma-ray data on PKS 0625–354 and 3C 78 to understand these sources within the blazar paradigm. We found significant γ-ray variability in the former object. Overall, we note that the Suzaku spectra for both PKS 0625–354 and 3C 78 are rather soft, while the LAT spectra are unusually hard when compared with other γ-ray detected low-power (FR I) radio galaxies. We demonstrate that the constructed broadband spectral energy distributions of PKS 0625–354 and 3C 78 are well described by a one-zone synchrotron/synchrotron self-Compton model. The results of the modeling indicate lower bulk Lorentz factors compared to those typically found in other BL Lacertae (BL Lac) objects, but consistent with the values inferred from modeling other LAT-detected FR I radio galaxies. Interestingly, the modeling also implies very high peak (∼10^16 Hz) synchrotron frequencies in the two analyzed sources, contrary to previously suggested scenarios for Fanaroff-Riley (FR) type I/BL Lac unification. We discuss the implications of our findings in the context of the FR I/BL Lac unification schemes.
Tessier, Tracey E.; Caves, Carlton M.; Deutsch, Ivan H.; Eastin, Bryan; Bacon, Dave
2005-09-15
We present a model, motivated by the criterion of reality put forward by Einstein, Podolsky, and Rosen and supplemented by classical communication, which correctly reproduces the quantum-mechanical predictions for measurements of all products of Pauli operators on an n-qubit GHZ state (or 'cat state'). The n-2 bits employed by our model are shown to be optimal for the allowed set of measurements, demonstrating that the required communication overhead scales linearly with n. We formulate a connection between the generation of the local values utilized by our model and the stabilizer formalism, which leads us to conjecture that a generalization of this method will shed light on the content of the Gottesman-Knill theorem.
A Mathematical Tumor Model with Immune Resistance and Drug Therapy: An Optimal Control Approach
De Pillis, L. G.; Radunskaya, A.
2001-01-01
We present a competition model of cancer tumor growth that includes both the immune system response and drug therapy. This is a four-population model that includes tumor cells, host cells, immune cells, and drug interaction. We analyze the stability of the drug-free equilibria with respect to the immune response in order to look for target basins of attraction. One of our goals was to simulate qualitatively the asynchronous tumor-drug interaction known as “Jeff's phenomenon.” The model we develop is successful in generating this asynchronous response behavior. Our other goal was to identify treatment protocols that could improve standard pulsed chemotherapy regimens. Using optimal control theory with constraints and numerical simulations, we obtain new therapy protocols that we then compare with traditional pulsed periodic treatment. The optimal control generated therapies produce larger oscillations in the tumor population over time. However, by the end of the treatment period, total tumor size is smaller than that achieved through traditional pulsed therapy, and the normal cell population suffers nearly no oscillations.
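A stripped-down version of this kind of model (not the authors' exact four-population equations; all coefficients are invented) can be integrated with forward Euler to compare a pulsed drug schedule against no treatment:

```python
def step(T, I, u, dose, dt=0.01):
    # logistic tumor growth, immune predation, drug kill; drug level decays
    dT = 0.4 * T * (1 - T) - 0.3 * I * T - 0.8 * u * T
    dI = 0.05 + 0.1 * I * T - 0.2 * I
    du = dose - 0.5 * u
    return T + dt * dT, I + dt * dI, u + dt * du

def simulate(pulse_on, pulse_period=10.0, steps=5000, dt=0.01):
    """Drug infused at unit rate for the first `pulse_on` time units of each period."""
    T, I, u = 0.8, 0.5, 0.0
    for k in range(steps):
        dose = 1.0 if (k * dt) % pulse_period < pulse_on else 0.0
        T, I, u = step(T, I, u, dose, dt)
    return T

final_pulsed = simulate(pulse_on=2.0)
final_untreated = simulate(pulse_on=0.0)
```

In the paper, the pulse timing itself becomes the control variable, and optimal control theory searches over schedules like this one subject to constraints on the drug and normal-cell populations.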
A Technical Review on Biomass Processing: Densification, Preprocessing, Modeling and Optimization
Jaya Shankar Tumuluru; Christopher T. Wright
2010-06-01
It is now a well-acclaimed fact that burning fossil fuels and deforestation are major contributors to climate change. Biomass from plants can serve as an alternative renewable and carbon-neutral raw material for the production of bioenergy. Low densities of 40–60 kg/m3 for lignocellulosic and 200–400 kg/m3 for woody biomass limit their application for energy purposes. Prior to use in energy applications, these materials need to be densified. The densified biomass can have bulk densities over 10 times those of the raw material, helping to significantly reduce technical limitations associated with storage, loading, and transportation. Pelleting, briquetting, and extrusion processing are commonly used methods for densification. The aim of the present research is to develop a comprehensive review of biomass processing that includes densification, preprocessing, modeling, and optimization. The specific objectives include carrying out a technical review of (a) mechanisms of particle bonding during densification; (b) methods of densification including extrusion, briquetting, pelleting, and agglomeration; (c) effects of process and feedstock variables and biomass biochemical composition on densification; (d) effects of preprocessing such as grinding, preheating, steam explosion, and torrefaction on biomass quality and binding characteristics; (e) models for understanding the compression characteristics; and (f) procedures for response surface modeling and optimization.
Vandersall, Jennifer A.; Gardner, Shea N.; Clague, David S.
2010-05-04
A computational method and computer-based system of modeling DNA synthesis for the design and interpretation of PCR amplification, parallel DNA synthesis, and microarray chip analysis. The method and system include modules that address the bioinformatics, kinetics, and thermodynamics of DNA amplification and synthesis. Specifically, the steps of DNA selection, as well as the kinetics and thermodynamics of DNA hybridization and extensions, are addressed, which enable the optimization of the processing and the prediction of the products as a function of DNA sequence, mixing protocol, time, temperature and concentration of species.
Modeling and optimizing of the random atomic spin gyroscope drift based on the atomic spin gyroscope
Quan, Wei; Lv, Lin; Liu, Baiqi
2014-11-15
In order to improve the atomic spin gyroscope's operational accuracy and compensate for the random error caused by the nonlinear and weakly stable characteristics of the random atomic spin gyroscope (ASG) drift, a hybrid random drift error model based on autoregressive (AR) modeling and genetic programming (GP) plus a genetic algorithm (GA) is established. The time series of random ASG drift is taken as the study object; it is acquired by analyzing and preprocessing the measured ASG data. The linear section model is established based on the AR technique. After that, the nonlinear section model is built based on GP, and GA is used to optimize the coefficients of the mathematical expression produced by GP in order to obtain a more accurate model. The simulation results indicate that this hybrid model can effectively reflect the characteristics of the ASG's random drift. The square error of the ASG's random drift is reduced by 92.40%. Compared with the AR technique and the GP + GA technique alone, the random drift is reduced by 9.34% and 5.06%, respectively. The hybrid modeling method can effectively compensate for the ASG's random drift and improve the stability of the system.
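The linear (AR) half of the hybrid model is easy to make concrete: fit an AR(2) model to a drift-like series by least squares and check how much of the signal power the one-step predictor explains. The synthetic series below is an invented stand-in for measured ASG drift.

```python
import math, random

random.seed(1)
# synthetic stand-in for measured ASG drift: slow oscillation plus noise
y = [0.5 * math.sin(0.3 * k) + random.gauss(0, 0.02) for k in range(300)]

# least-squares AR(2): y[k] ~ a1*y[k-1] + a2*y[k-2] (2x2 normal equations)
ks = range(2, len(y))
s11 = sum(y[k - 1] * y[k - 1] for k in ks)
s12 = sum(y[k - 1] * y[k - 2] for k in ks)
s22 = sum(y[k - 2] * y[k - 2] for k in ks)
b1 = sum(y[k] * y[k - 1] for k in ks)
b2 = sum(y[k] * y[k - 2] for k in ks)
det = s11 * s22 - s12 * s12
a1 = (s22 * b1 - s12 * b2) / det
a2 = (-s12 * b1 + s11 * b2) / det

resid = sum((y[k] - a1 * y[k - 1] - a2 * y[k - 2]) ** 2 for k in ks)
naive = sum(y[k] ** 2 for k in ks)
reduction = 1 - resid / naive               # fraction of drift power explained
```

In the paper's hybrid scheme, the residual left after this linear fit is exactly what the GP + GA nonlinear section is trained to absorb.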
A dynamic model for the optimization of oscillatory low grade heat engines
Markides, Christos N.; Smith, Thomas C. B.
2015-01-22
The efficiency of a thermodynamic system is a key quantity on which its usefulness and wider application relies. This is especially true for a device that operates with marginal energy sources and close to ambient temperatures. Various definitions of efficiency are available, each of which reveals a certain performance characteristic of a device. Of these, some consider only the thermodynamic cycle undergone by the working fluid, whereas others contain additional information, including relevant internal components of the device that are not part of the thermodynamic cycle. Yet others attempt to factor out the conditions of the surroundings with which the device is interfacing thermally during operation. In this paper we present a simple approach for the modeling of complex oscillatory thermal-fluid systems capable of converting low grade heat into useful work. We apply the approach to the NIFTE, a novel low temperature difference heat utilization technology currently under development. We use the results from the model to calculate various efficiencies and comment on the usefulness of the different definitions in revealing performance characteristics. We show that the approach can be applied to make design optimization decisions, and suggest features for optimal efficiency of the NIFTE.
Gneiding, N.; Zhuromskyy, O.; Peschel, U.; Shamonina, E.
2014-10-28
Metamaterials are comprised of metallic structures with a strong response to incident electromagnetic radiation, such as split ring resonators. The interaction of resonator ensembles with electromagnetic waves can be simulated with finite difference or finite element algorithms; however, above a certain ensemble size, simulations become inadmissibly time or memory consuming. Alternatively, a circuit description of metamaterials, a well-developed modelling tool at radio and microwave frequencies, allows the simulated ensemble size to be increased significantly. This approach can be extended to the IR spectral range with an appropriate set of circuit element parameters accounting for physical effects such as electron inertia and finite conductivity. The model is verified by comparing the coupling coefficients with those obtained from full wave numerical simulations, and is used to optimize a nano-antenna design with improved radiation characteristics.
Gebraad, P. M. O.; Teeuwisse, F. W.; van Wingerden, J. W.; Fleming, Paul A.; Ruben, S. D.; Marden, J. R.; Pao, L. Y.
2016-01-01
This article presents a wind plant control strategy that optimizes the yaw settings of wind turbines for improved energy production of the whole wind plant by taking into account wake effects. The optimization controller is based on a novel internal parametric model for wake effects, called the FLOw Redirection and Induction in Steady-state (FLORIS) model. The FLORIS model predicts the steady-state wake locations and the effective flow velocities at each turbine, and the resulting turbine electrical energy production levels, as a function of the axial induction and the yaw angle of the different rotors. The FLORIS model has a limited number of parameters that are estimated based on turbine electrical power production data. In high-fidelity computational fluid dynamics simulations of a small wind plant, we demonstrate that the optimization control based on the FLORIS model increases the energy production of the wind plant, with a reduction of loads on the turbines as an additional effect.
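The wake-steering trade-off the FLORIS controller exploits can be shown with a two-turbine toy (this is not the FLORIS model; the wake offset, deficit, and all constants are invented): yawing the upstream turbine costs it power roughly as cos³ of the yaw angle, but deflects its wake off the downstream rotor.

```python
import math

def plant_power(yaw_deg, spacing=7.0):
    """Two-turbine toy: total power as a function of upstream yaw angle."""
    yaw = math.radians(yaw_deg)
    p_up = math.cos(yaw) ** 3                  # upstream turbine loses power when yawed
    deflection = 0.3 * yaw * spacing           # crude lateral wake offset downstream
    overlap = max(0.0, 1.0 - abs(deflection))  # fraction of downstream rotor still waked
    deficit = 0.4 * overlap                    # wind speed deficit at downstream rotor
    p_down = (1.0 - deficit) ** 3              # power scales with wind speed cubed
    return p_up + p_down

best_yaw = max(range(0, 41), key=plant_power)  # brute-force sweep, degrees
```

The plant-level optimum sits at a nonzero yaw: the downstream gain outweighs the upstream sacrifice, which is the effect the parametric FLORIS model is built to predict cheaply enough to optimize over.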
Stamp, Jason E.; Eddy, John P.; Jensen, Richard P.; Munoz-Ramos, Karina
2016-01-01
Microgrids are a focus of localized energy production that support resiliency, security, local control, and increased access to renewable resources (among other potential benefits). The Smart Power Infrastructure Demonstration for Energy Reliability and Security (SPIDERS) Joint Capability Technology Demonstration (JCTD) program between the Department of Defense (DOD), Department of Energy (DOE), and Department of Homeland Security (DHS) resulted in the preliminary design and deployment of three microgrids at military installations. This paper is focused on the analysis process and supporting software used to determine optimal designs for energy surety microgrids (ESMs) in the SPIDERS project. There are two key pieces of software: an existing software application developed by Sandia National Laboratories (SNL) called Technology Management Optimization (TMO) and a new simulation developed for SPIDERS called the performance reliability model (PRM). TMO is a decision support tool that performs multi-objective optimization over a mixed discrete/continuous search space for which the performance measures are unrestricted in form. The PRM is able to statistically quantify the performance and reliability of a microgrid operating in islanded mode (disconnected from any utility power source). Together, these two software applications were used as part of the ESM process to generate the preliminary designs presented by the SNL-led DOE team to the DOD. Acknowledgements: Sandia National Laboratories and the SPIDERS technical team would like to acknowledge the following for help in the project: Mike Hightower, who has been the key driving force for Energy Surety Microgrids; Juan Torres and Abbas Akhil, who developed the concept of microgrids for military installations; Merrill Smith, U.S. Department of Energy SPIDERS Program Manager; Ross Roley and Rich Trundy from U.S. Pacific Command; Bill Waugaman and Bill Beary from U.S. Northern Command; Tarek Abdallah, Melanie
Urniezius, Renaldas
2011-03-14
The principle of Maximum relative Entropy optimization was analyzed for dead reckoning localization of a rigid body using observation data collected from two attached accelerometers. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise in each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency between time series data. The dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency between time series data. Data from an autocalibration experiment were revisited, removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead reckoning localization.
Lenhart, S. |; Protopopescu, V.
1994-09-01
Recent years have witnessed a dramatic shift of the world's military, political, and economic paradigm from a bi-polar competitive gridlock to a more fluid, multi-player environment. This change has necessarily been followed by a re-evaluation of strategic thinking and by a reassessment of mutual positions, options, and decisions. The essential attributes of the new situation are modeled by a system of nonlinear evolution equations with competitive/cooperative interactions. The mathematical setting is general enough to accommodate models related to military confrontation, arms control, economic competition, political negotiations, etc. Irrespective of the specific details, all these situations share a common denominator, namely the presence of various players with different and often changing interests and goals. The interests, ranging from conflicting to consensual, are defined in a context of interactions between the players that vary from competitive to cooperative. Players with converging interests tend to build up cooperative coalitions while coalitions with diverging interests usually compete among themselves, but this is not an absolute requirement (namely, one may have groups with converging interests and competitive interactions, and vice versa). Appurtenance to a coalition may change in time according to shifts in one's perceptions, interests, or obligations. During the time evolution, the players try to modify their strategies so as to best achieve their respective goals. An objective functional quantifying the rate of success (payoff) vs. effort (cost) measures the degree of goal attainment for all players involved, thus selecting an optimal strategy based on optimal controls. While the technical details may vary from problem to problem, the general approach described here establishes a standard framework for a host of concrete situations that may arise from tomorrow's "next competition".
A Full Demand Response Model in Co-Optimized Energy and
Liu, Guodong; Tomsovic, Kevin
2014-01-01
It has been widely accepted that demand response will play an important role in the reliable and economic operation of future power systems and electricity markets. Demand response can not only influence prices in the energy market by demand shifting, but can also participate in the reserve market. In this paper, we propose a full model of demand response in which demand flexibility is fully utilized through price-responsive shiftable demand bids in the energy market as well as spinning reserve bids in the reserve market. A co-optimized day-ahead energy and spinning reserve market is proposed to minimize the expected net cost under all credible system states, i.e., the expected total cost of operation minus the total benefit of demand, and is solved by mixed integer linear programming. Numerical simulation results on the IEEE Reliability Test System show the effectiveness of this model. Compared to conventional demand shifting bids, the proposed full demand response model can further reduce the committed capacity from generators, the starting up and shutting down of units, and the overall system operating costs.
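What "co-optimizing" energy and reserve means can be seen in a two-generator miniature. The paper solves a MILP over many system states; here a single dispatch with a reserve requirement is solved by a fine grid search, and all costs and limits are illustrative assumptions.

```python
DEMAND, RESERVE_REQ = 120.0, 20.0
GENS = [  # (capacity MW, energy cost $/MWh, reserve cost $/MW)
    (100.0, 20.0, 2.0),   # cheap energy, expensive reserve
    (80.0,  35.0, 1.0),   # expensive energy, cheap reserve
]

def total_cost(g1):
    """Energy + reserve cost when unit 1 produces g1 and unit 2 covers the rest."""
    g2 = DEMAND - g1
    if not (0 <= g1 <= GENS[0][0] and 0 <= g2 <= GENS[1][0]):
        return float("inf")
    headroom = (GENS[0][0] - g1) + (GENS[1][0] - g2)
    if headroom < RESERVE_REQ:
        return float("inf")    # not enough spinning headroom for reserve
    # reserve bought from the cheaper reserve offer (unit 2) first
    r2 = min(GENS[1][0] - g2, RESERVE_REQ)
    r1 = RESERVE_REQ - r2
    return g1 * GENS[0][1] + g2 * GENS[1][1] + r1 * GENS[0][2] + r2 * GENS[1][2]

best_g1 = min((g / 10 for g in range(0, 1001)), key=total_cost)
```

The coupling is the point: how much energy each unit produces determines how much headroom is left to sell as reserve, so clearing the two products separately can miss the joint optimum. Demand-side reserve bids enter the same way, as additional headroom offers.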
Munaò, Gianmarco; Costa, Dino; Caccamo, Carlo; Gámez, Francisco; Giacometti, Achille
2015-06-14
We investigate thermodynamic properties of anisotropic colloidal dumbbells in the frameworks provided by the Reference Interaction Site Model (RISM) theory and an Optimized Perturbation Theory (OPT), the latter based on a fourth-order high-temperature perturbative expansion of the free energy, recently generalized to molecular fluids. Our model is constituted by two identical tangent hard spheres surrounded by square-well attractions with the same widths and progressively different depths. Gas-liquid coexistence curves are obtained by predicting pressures, free energies, and chemical potentials. In comparison with previous simulation results, RISM and OPT agree in reproducing the progressive reduction of the gas-liquid phase separation as the anisotropy of the interaction potential becomes more pronounced; in particular, the RISM theory provides reasonable predictions for all coexistence curves, bar the strong anisotropy regime, whereas OPT performs generally less well. Both theories predict a linear dependence of the critical temperature on the interaction strength, reproducing in this way the mean-field behavior observed in simulations; the critical density (which drastically drops as the anisotropy increases) turns out to be less accurate. Our results appear as a robust benchmark for further theoretical studies, in support of the simulation approach, of self-assembly in model colloidal systems.
A mathematical liver model and its application to system optimization and texture analysis
Cargill, E.B.
1989-01-01
This dissertation presents realistic mathematical models of normal and diseased livers and a nuclear medicine camera. The mathematical model of a normal liver is developed by creating a data set of points on the surface of the liver and fitting it to a truncated set of spherical harmonics. We model the depth-dependent MTF of a scintillation camera taking into account the effects of Compton scatter, linear attenuation, intrinsic detector resolution, collimator resolution, and Poisson noise. The differential diagnosis on a liver scan includes normal, focal disease, and diffuse disease. Object classes of normal livers are created by randomly perturbing the spherical harmonic coefficients. Object classes of livers with focal disease are created by introducing cold ellipsoids within the liver volume. Cirrhotic livers are created by modeling the gross morphological changes, heterogeneous uptake, and decreased overall uptake. Simulated nuclear medicine images are made by projecting livers through nuclear imaging systems. The combination of object classes of simulated livers and models of different imaging systems is applied to imaging-system design optimization in a psycho-physical study. Human observer performance on simulated liver images made on nine different systems is compared to the Hotelling trace criterion (HTC). The system with the best observer performance is judged to be the best system. The correlation between the human performance metric d_a and the HTC for this study was 0.829, suggesting that the HTC may have value as a predictor of observer performance. Texture in a liver scan is related to the three-dimensional distribution of functional acini, which changes with disease. One measure of texture is the fractal dimension, related to the Fourier power spectrum. We measured the average radial power spectra of 70 liver scans.
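The power-spectrum texture measure mentioned at the end can be sketched in one dimension: estimate the log-log slope of the Fourier power spectrum, which for fractal-like signals is tied to the fractal dimension. A random walk (expected spectral slope near -2) serves as an invented stand-in for a scan profile, and only the low-frequency band is fit, where the power-law behavior holds.

```python
import cmath, math, random

random.seed(3)
N = 256
walk = [0.0]
for _ in range(N - 1):
    walk.append(walk[-1] + random.gauss(0, 1))   # fractal-like test signal

def low_freq_power(x, kmax):
    """Naive DFT power at frequencies 1..kmax-1 (low-frequency band only)."""
    n = len(x)
    return [abs(sum(x[j] * cmath.exp(-2j * math.pi * k * j / n)
                    for j in range(n))) ** 2 for k in range(1, kmax)]

spec = low_freq_power(walk, N // 8)
pts = [(math.log(k + 1), math.log(p)) for k, p in enumerate(spec)]
m = len(pts)
mx = sum(u for u, _ in pts) / m
my = sum(v for _, v in pts) / m
slope = (sum((u - mx) * (v - my) for u, v in pts)
         / sum((u - mx) ** 2 for u, _ in pts))
```

For 2-D scans the same idea uses the radially averaged power spectrum; a change in the fitted slope then signals a change in the underlying tissue texture.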
Optimization of Depletion Modeling and Simulation for the High Flux Isotope Reactor
Betzler, Benjamin R; Ade, Brian J; Chandler, David; Ilas, Germina; Sunny, Eva E
2015-01-01
Monte Carlo based depletion tools used for the high-fidelity modeling and simulation of the High Flux Isotope Reactor (HFIR) come at a great computational cost; finding sufficient approximations is necessary to make the use of these tools feasible. The optimization of the neutronics and depletion model for the HFIR is based on two factors: (i) the explicit representation of the involute fuel plates with sets of polyhedra and (ii) the treatment of depletion mixtures and control element position during depletion calculations. A very fine representation (i.e., more polyhedra in the involute plate approximation) does not significantly improve simulation accuracy. The recommended representation closely represents the physical plates and ensures sufficient fidelity in regions with high flux gradients. Including the fissile targets in the central flux trap of the reactor as depletion mixtures has the greatest effect on the calculated cycle length, while localized effects (e.g., the burnup of specific isotopes or the power distribution evolution over the cycle) are more noticeable consequences of including a critical control element search or depleting burnable absorbers outside the fuel region.
Nelson, R.A. Jr.; Pimentel, D.A.; Jolly-Woodruff, S.; Spore, J.
1998-04-01
In this report, a phenomenological model of simultaneous bottom-up and top-down quenching is developed and discussed. The model was implemented in the TRAC-PF1/MOD2 computer code. Two sets of closure relationships were compared within the study, the Absolute set and the Conditional set. The Absolute set is frequently viewed as the pure set because the correlations utilize their original coefficients as suggested by the developer. The Conditional set is a modified set of correlations with changes to the correlation coefficients only. The two sets produce quite similar results. This report also summarizes initial results of an effort to investigate nonlinear optimization techniques applied to closure model development. Results suggest that such techniques can provide advantages for future model development work, but that extensive expertise is required to utilize them (i.e., the model developer must fully understand both the physics of the process being represented and the computational techniques being employed). The computer may then be used to improve the correlation of computational results with experiments.
Optical modeling toward optimizing monitoring of intestinal perfusion in trauma patients
Akl, Tony; Wilson, Mark A.; Ericson, Milton Nance; Cote, Gerard L.
2013-01-01
Trauma is the number one cause of death for people between the ages of 1 and 44 years in the United States. In addition, according to the Centers for Disease Control and Prevention, injury results in over 31 million emergency department visits annually. Minimizing the resuscitation period in major abdominal injuries increases survival rates by correcting impaired tissue oxygen delivery. Optimization of resuscitation requires a monitoring method to determine sufficient tissue oxygenation. Oxygenation can be assessed by determining the adequacy of tissue perfusion. In this work, we present the design of a wireless perfusion and oxygenation sensor based on photoplethysmography. Through optical modeling, the benefit of using the visible wavelengths 470, 525 and 590nm (around the 525nm hemoglobin isobestic point) for intestinal perfusion monitoring is compared to the typical near infrared (NIR) wavelengths (805nm isobestic point) used in such sensors. Specifically, NIR wavelengths penetrate through the thin intestinal wall (~4mm), leading to high background signals, whereas these visible wavelengths have a penetration depth roughly half that of the NIR wavelengths. Monte-Carlo simulations show that the transmittance of the three selected wavelengths is lower by 5 orders of magnitude, depending on the perfusion state. Due to the high absorbance of hemoglobin in the visible range, the perfusion signal carried by diffusely reflected light is also enhanced by an order of magnitude while oxygenation signal levels are maintained. In addition, short source-detector separations proved to be beneficial for limiting the probing depth to the thickness of the intestinal wall.
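The transmittance contrast quoted above follows from exponential attenuation. Below is a back-of-the-envelope Beer-Lambert sketch, not the paper's Monte Carlo model: the effective attenuation coefficients are illustrative placeholders, since the real values depend on perfusion state and scattering.

```python
import math

THICKNESS_CM = 0.4            # ~4 mm intestinal wall

# Assumed effective attenuation coefficients (1/cm) -- illustrative only.
mu_eff = {"visible_525nm": 35.0, "nir_805nm": 6.0}

def transmittance(mu, d):
    """Fraction of light surviving a path of length d (Beer-Lambert)."""
    return math.exp(-mu * d)

t_vis = transmittance(mu_eff["visible_525nm"], THICKNESS_CM)
t_nir = transmittance(mu_eff["nir_805nm"], THICKNESS_CM)
ratio = t_nir / t_vis   # NIR background is orders of magnitude stronger
```

With these placeholder coefficients the NIR transmittance exceeds the visible transmittance by roughly five orders of magnitude, which is the mechanism behind the background-signal argument in the abstract.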
Gebraad, P. M. O.; Teeuwisse, F. W.; van Wingerden, J. W.; Fleming, Paul A.; Ruben, S. D.; Marden, J. R.; Pao, L. Y.
2016-01-01
This article presents a wind plant control strategy that optimizes the yaw settings of wind turbines for improved energy production of the whole wind plant by taking into account wake effects. The optimization controller is based on a novel internal parametric model for wake effects, called the FLOw Redirection and Induction in Steady-state (FLORIS) model. The FLORIS model predicts the steady-state wake locations and the effective flow velocities at each turbine, and the resulting turbine electrical energy production levels, as a function of the axial induction and the yaw angle of the different rotors. The FLORIS model has a limited number of parameters that are estimated based on turbine electrical power production data. In high-fidelity computational fluid dynamics simulations of a small wind plant, we demonstrate that the optimization control based on the FLORIS model increases the energy production of the wind plant, with a reduction of loads on the turbines as an additional effect.
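The trade-off FLORIS exploits can be shown with a two-turbine toy model. This is not the FLORIS model itself: the exponent, deflection gain, deficit, and wake width below are all assumed illustrative values. Yawing the upstream rotor sacrifices some of its own power but deflects its wake away from the downstream rotor, and the plant total can rise.

```python
import numpy as np

PP = 1.88          # assumed yaw-power exponent for the upstream rotor
K_DEFL = 1.2       # assumed wake-deflection gain (rotor diameters / rad)
DEFICIT0 = 0.5     # assumed centered wake velocity deficit
SIGMA = 0.4        # assumed Gaussian wake width (rotor diameters)

def plant_power(yaw_rad):
    """Normalized power of a 2-turbine row for a given upstream yaw."""
    p_up = np.cos(yaw_rad) ** PP                   # upstream power loss
    offset = K_DEFL * np.sin(yaw_rad)              # lateral wake offset
    deficit = DEFICIT0 * np.exp(-0.5 * (offset / SIGMA) ** 2)
    p_down = (1.0 - deficit) ** 3                  # power scales as u^3
    return p_up + p_down

yaws = np.deg2rad(np.linspace(0.0, 50.0, 501))
best = yaws[np.argmax(plant_power(yaws))]          # optimal yaw (rad)
gain = float(plant_power(best) / plant_power(0.0)) - 1.0
```

In this toy setting a substantial nonzero yaw is optimal and the plant gains power overall; the real FLORIS model does the same search with wake parameters identified from turbine power data.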
Numerical research of the optimal control problem in the semi-Markov inventory model
Gorshenin, Andrey K.
2015-03-10
This paper is devoted to the numerical simulation of a stochastic inventory-management system modeled as a controlled semi-Markov process. The results of specialized software for studying such systems and finding the optimal control are presented.
Optimal Compensation with Hidden Action and Lump-Sum Payment in a Continuous-Time Model
Cvitanic, Jaksa; Wan, Xuhu; Zhang, Jianfeng
2009-02-15
We consider a problem of finding optimal contracts in continuous time, when the agent's actions are unobservable by the principal, who pays the agent with a one-time payoff at the end of the contract. We fully solve the case of quadratic cost and separable utility, for general utility functions. The optimal contract is, in general, a nonlinear function of the final outcome only, while in the previously solved cases, for exponential and linear utility functions, the optimal contract is linear in the final output value. In a specific example that we compute, the first-best principal's utility is infinite, while it becomes finite with hidden action and is increasing in the value of the output. In the second part of the paper we formulate a general mathematical theory for the problem. We apply the stochastic maximum principle to give necessary conditions for optimal contracts. Sufficient conditions are hard to establish, but we suggest a way to check sufficiency using non-convex optimization.
Tan, Sirui; Huang, Lianjie
2014-11-01
For modeling scalar-wave propagation in geophysical problems using finite-difference schemes, optimizing the coefficients of the finite-difference operators can reduce numerical dispersion. Most optimized finite-difference schemes for modeling seismic-wave propagation suppress only spatial but not temporal dispersion errors. We develop a novel optimized finite-difference scheme for numerical scalar-wave modeling to control dispersion errors not only in space but also in time. Our optimized scheme is based on a new stencil that contains a few more grid points than the standard stencil. We design an objective function for minimizing relative errors of phase velocities of waves propagating in all directions within a given range of wavenumbers. Dispersion analysis and numerical examples demonstrate that our optimized finite-difference scheme is computationally up to 2.5 times faster than the optimized schemes using the standard stencil for similar modeling accuracy on a given 2D or 3D problem. Compared with the high-order finite-difference scheme using the same new stencil, our optimized scheme reduces the computational cost by 50 percent for similar modeling accuracy. This new optimized finite-difference scheme is particularly useful for large-scale 3D scalar-wave modeling and inversion.
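The core idea of dispersion-optimized coefficients can be illustrated on the ordinary symmetric stencil (space only; the paper's scheme additionally uses off-axis points to control temporal dispersion). For a plane wave exp(ikx), a symmetric (2M+1)-point second-derivative stencil gives an effective k_eff^2 = (2/h^2) * sum_m a_m (1 - cos(m k h)); instead of matching Taylor terms at k = 0, we fit the a_m by least squares over a target wavenumber band. This is a hedged sketch of the general technique, not the authors' scheme.

```python
import numpy as np

M = 4                                # stencil half-width
h = 1.0                              # grid spacing
k = np.linspace(0.01, 2.0, 200)      # target band (kh up to 2 rad)

# Design matrix: column m holds 2*(1 - cos(m*k*h))/h^2 for m = 1..M.
A = np.stack([2.0 * (1.0 - np.cos(m * k * h)) / h**2
              for m in range(1, M + 1)], axis=1)
a_opt, *_ = np.linalg.lstsq(A, k**2, rcond=None)

def max_phase_error(a):
    """Worst relative phase-velocity error over the band."""
    k_eff = np.sqrt(np.maximum(A @ a, 0.0))
    return float(np.max(np.abs(k_eff / k - 1.0)))

a_std = np.array([1.0, 0.0, 0.0, 0.0])   # standard 2nd-order 3-point stencil
err_opt, err_std = max_phase_error(a_opt), max_phase_error(a_std)
```

The least-squares coefficients keep the phase-velocity error small across the whole band, whereas the standard stencil's error grows rapidly toward the high-wavenumber end.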
Hart, W.E.; Istrail, S. [Sandia National Labs., Albuquerque, NM (United States). Algorithms and Discrete Mathematics Dept.]
1996-08-09
This paper considers the protein structure prediction problem for lattice and off-lattice protein folding models that explicitly represent side chains. Lattice models of proteins have proven to be extremely useful tools for reasoning, through analogy, about protein folding in unrestricted continuous space. This paper provides the first illustration of how rigorous algorithmic analyses of lattice models can lead to rigorous algorithmic analyses of off-lattice models. The authors consider two side chain models: a lattice model that generalizes the HP model (Dill 85) to explicitly represent side chains on the cubic lattice, and a new off-lattice model, the HP Tangent Spheres Side Chain model (HP-TSSC), that generalizes this model further by representing the backbone and side chains of proteins with tangent spheres. They describe algorithms for both of these models with mathematically guaranteed error bounds. In particular, the authors describe a linear time performance guaranteed approximation algorithm for the HP side chain model that constructs conformations whose energy is better than 86% of optimal in a face centered cubic lattice, and they demonstrate how this provides a 70% performance guarantee for the HP-TSSC model. This is the first algorithm in the literature for off-lattice protein structure prediction that has a rigorous performance guarantee. The analysis of the HP-TSSC model builds on the work of Dancik and Hannenhalli, who developed a 16/30 approximation algorithm for the HP model on the hexagonal close packed lattice. Further, the analysis provides a mathematical methodology for transferring performance guarantees on lattices to off-lattice models. These results partially answer the open question of Karplus et al. concerning the complexity of protein folding models that include side chains.
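For readers unfamiliar with the HP model, the objective being approximated is simple to state. Below is a minimal sketch of the backbone-only HP energy on the cubic lattice (the side-chain models in the paper extend this with explicit side-chain sites): each residue is H (hydrophobic) or P (polar), and the energy is -1 for every H-H pair that are lattice neighbors but not chain neighbors.

```python
def hp_energy(coords, sequence):
    """HP lattice energy.

    coords: list of integer (x, y, z) lattice points, one per residue,
    in chain order.  sequence: string over {'H', 'P'} of the same length.
    """
    assert len(coords) == len(sequence)
    pos = {tuple(c): i for i, c in enumerate(coords)}
    assert len(pos) == len(coords), "self-avoidance violated"
    energy = 0
    for i, (x, y, z) in enumerate(coords):
        # Check positive axis directions only, so each pair counts once.
        for dx, dy, dz in ((1, 0, 0), (0, 1, 0), (0, 0, 1)):
            j = pos.get((x + dx, y + dy, z + dz))
            if j is not None and abs(i - j) > 1:      # topological contact
                if sequence[i] == 'H' and sequence[j] == 'H':
                    energy -= 1
    return energy

# A 4-residue 'U' shape: residues 0 and 3 are adjacent but not bonded.
conf = [(0, 0, 0), (1, 0, 0), (1, 1, 0), (0, 1, 0)]
print(hp_energy(conf, "HPPH"))   # -> -1 (one H-H contact)
```

Approximation algorithms like the one in the paper construct conformations whose energy provably reaches a fixed fraction of the (NP-hard to find) optimum of this objective.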
Reference Model MHK Turbine Array Optimization Study within a Generic River System.
Johnson, Erick; Barco Mugg, Janet; James, Scott; Roberts, Jesse D.
2011-12-01
Increasing interest in marine hydrokinetic (MHK) energy has spurred significant research on optimal placement of emerging technologies to maximize energy conversion and minimize potential effects on the environment. However, these devices will be deployed as arrays in order to reduce the cost of energy, and little work has been done to understand the impact these arrays will have on flow dynamics, sediment-bed transport, and benthic habitats, and how best to optimize these arrays for both performance and environmental considerations. An "MHK-friendly" routine has been developed and implemented by Sandia National Laboratories (SNL) into the flow, sediment dynamics, and water-quality code SNL-EFDC. This routine has been verified and validated against three separate sets of experimental data. With SNL-EFDC, water quality and array optimization studies can be carried out to optimize an MHK array in a resource and study its effects on the environment. The present study examines the effect streamwise and spanwise spacing has on array performance. Various hypothetical MHK array configurations are simulated within a trapezoidal river channel. Results show a non-linear increase in array-power efficiency as turbine spacing is increased in each direction, which matches the trends seen experimentally. While the sediment transport routines were not used in these simulations, the flow acceleration seen around the MHK arrays has the potential to significantly affect the sediment transport characteristics and benthic habitat of a resource.
MULTI-SCALE MODELING AND APPROXIMATION ASSISTED OPTIMIZATION OF BARE TUBE HEAT EXCHANGERS
Bacellar, Daniel; Ling, Jiazhen; Aute, Vikrant; Radermacher, Reinhard; Abdelaziz, Omar
2014-01-01
Air-to-refrigerant heat exchangers are very common in air-conditioning, heat pump and refrigeration applications. In these heat exchangers, there is a great benefit in terms of size, weight, refrigerant charge and heat transfer coefficient in moving from conventional channel sizes (~9mm) to smaller channel sizes (< 5mm). This work investigates new designs for air-to-refrigerant heat exchangers with tube outer diameter ranging from 0.5 to 2.0mm. The goal of this research is to develop and optimize the design of these heat exchangers and compare their performance with existing state-of-the-art designs. The air-side performance of various tube bundle configurations is analyzed using a Parallel Parameterized CFD (PPCFD) technique. PPCFD allows for fast parametric CFD analyses of various geometries with topology change. Approximation techniques drastically reduce the number of CFD evaluations required during optimization. The Maximum Entropy Design method is used for sampling and the Kriging method is used for metamodeling. Metamodels are developed for the air-side heat transfer coefficients and pressure drop as a function of tube-bundle dimensions and air velocity. The metamodels are then integrated with an air-to-refrigerant heat exchanger design code. This integration allows a multi-scale analysis of heat exchanger performance, including air-to-refrigerant heat transfer and phase change. Overall optimization is carried out using a multi-objective genetic algorithm. The optimal designs found can exhibit 50 percent size reduction, 75 percent decrease in air-side pressure drop and doubled air heat transfer coefficients compared to a high-performance compact microchannel heat exchanger with the same capacity and flow rates.
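The metamodeling step above replaces expensive CFD runs with a cheap surrogate that the optimizer can query freely. Below is a minimal ordinary-Kriging / Gaussian-process-regression sketch in numpy, not the authors' Maximum Entropy Design + Kriging toolchain; the 1-D response, kernel length scale, and sample count are all made up for illustration.

```python
import numpy as np

def rbf(a, b, length=0.3):
    """Squared-exponential correlation between two 1-D sample sets."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / length) ** 2)

def fit_kriging(x, y, nugget=1e-8):
    """Interpolating surrogate through (x, y); returns a predictor."""
    K = rbf(x, x) + nugget * np.eye(len(x))
    alpha = np.linalg.solve(K, y)
    return lambda xq: rbf(xq, x) @ alpha

# Pretend each training point is one CFD evaluation of, say, air-side
# pressure drop versus a normalized tube-bundle dimension (made-up
# smooth response for the sketch).
x_train = np.linspace(0.0, 1.0, 8)
y_train = np.sin(2 * np.pi * x_train) + 0.3 * x_train

surrogate = fit_kriging(x_train, y_train)
x_dense = np.linspace(0.0, 1.0, 401)
x_best = x_dense[np.argmin(surrogate(x_dense))]   # cheap inner-loop search
```

In the paper the same pattern runs in higher dimension, with a multi-objective genetic algorithm querying the surrogate instead of a grid search.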
Bradonjic, Milan
2009-01-01
In this paper we study reputation mechanisms, and show how the notion of reputation can help us in building truthful online auction mechanisms. From the mechanism design perspective, we derive the conditions on, and the design of, a truthful online auction mechanism. Moreover, in the case when some agents may lie or cannot have real knowledge about the other agents' reputations, we derive the resolution of the auction such that the mechanism is truthful. Consequently, we move forward to the optimal one-gambler/one-seller problem, and explain how that problem is a refinement of the previously discussed online auction design in the presence of a reputation mechanism. In the setting of the optimal one-gambler problem, we naturally raise and solve the specific question: what is an agent's optimal strategy for maximizing his revenue? We would like to stress that our analysis goes beyond the scope of what game theory usually discusses under the notion of reputation. We model one-player games by introducing a new parameter (reputation), which helps us in predicting the agent's behavior in real-world situations, such as the behavior of a gambler, real-estate dealer, etc.
Virtual Wind Simulator Will Help Optimize Offshore Energy Production
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Department of Energy, September 13, 2016 - An advanced modeling tool funded by the Energy Department is now available to help offshore wind plant developers, wind turbine original equipment manufacturers, and researchers design offshore turbine and foundation systems. Created by the University of Minnesota, the Virtual Flow Simulator
From Physics Model to Results: An Optimizing Framework for Cross-Architecture Code Generation
Blazewicz, Marek; Hinder, Ian; Koppelman, David M.; Brandt, Steven R.; Ciznicki, Milosz; Kierzynka, Michal; Löffler, Frank; Schnetter, Erik; Tao, Jian
2013-01-01
Starting from a high-level problem description in terms of partial differential equations using abstract tensor notation, the Chemora framework discretizes, optimizes, and generates complete high performance codes for a wide range of compute architectures. Chemora extends the capabilities of Cactus, facilitating the usage of large-scale CPU/GPU systems in an efficient manner for complex applications, without low-level code tuning. Chemora achieves parallelism through MPI and multi-threading, combining OpenMP and CUDA. Optimizations include high-level code transformations, efficient loop traversal strategies, dynamically selected data and instruction cache usage strategies, and JIT compilation of GPU code tailored to the problem characteristics. The discretization is based on higher-order finite differences on multi-block domains. Chemora's capabilities are demonstrated by simulations of black hole collisions. This problem provides an acid test of the framework, as the Einstein equations contain hundreds of variables and thousands of terms.
First report on non-thermal plasma reactor scaling criteria and optimization models
Rosocha, L.A.; Korzekwa, R.A.
1998-01-13
The purpose of SERDP project CP-1038 is to evaluate and develop non-thermal plasma (NTP) reactor technology for Department of Defense (DoD) air emissions control applications. The primary focus is on oxides of nitrogen (NO{sub x}) and a secondary focus on hazardous air pollutants (HAPs), especially volatile organic compounds (VOCs). Example NO{sub x} sources are jet engine test cells (JETCs) and diesel engine powered electrical generators. Example VOCs are organic solvents used in painting, paint stripping, and parts cleaning. To design and build NTP reactors that are optimized for particular DoD applications, one must understand the basic decomposition chemistry of the target compound(s) and how the decomposition of a particular chemical species depends on the air emissions stream parameters and the reactor operating parameters. This report is intended to serve as an overview of the subject of reactor scaling and optimization and will discuss the basic decomposition chemistry of nitric oxide (NO) and two representative VOCs, trichloroethylene and carbon tetrachloride, and the connection between the basic plasma chemistry, the target species properties, and the reactor operating parameters (in particular, the operating plasma energy density). System architecture, that is how NTP reactors can be combined or ganged to achieve higher capacity, will also be briefly discussed.
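A first-order scaling relation commonly used for non-thermal plasma reactors connects the operating plasma energy density mentioned above to pollutant removal: the surviving fraction of a dilute compound falls off exponentially with specific energy deposition E, X/X0 = exp(-E/beta), where beta is a compound- and mixture-dependent e-folding energy density. The sketch below applies this relation with placeholder beta values, not measured values from this project.

```python
import math

def energy_for_removal(fraction_removed, beta_j_per_liter):
    """Specific energy (J/L) needed to remove the given fraction,
    assuming first-order scaling X/X0 = exp(-E/beta)."""
    return -beta_j_per_liter * math.log(1.0 - fraction_removed)

# Illustrative e-folding energy densities (J/L) -- hypothetical values.
beta = {"NO": 50.0, "TCE": 15.0, "CCl4": 200.0}

# Energy cost of 90% removal: ln(10) ~ 2.3 e-folding energies each.
e_90 = {k: energy_for_removal(0.90, b) for k, b in beta.items()}
```

Reactor scaling then reduces to estimating beta for the target compound and emission stream, and sizing (or ganging) reactors to deliver the required energy density at the required flow rate.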
Bein, A.; Dutton, A.R.
1993-06-01
Na-Cl, halite Ca-Cl, and gypsum Ca-Cl brines with salinities from 45 to >300 g/L are identified and mapped in four hydrostratigraphic units in the Permian Basin area beneath western Texas and Oklahoma and eastern New Mexico, providing spatial and lithologic constraints on the interpretation of the origin and movement of brine. Na-Cl brine is derived from meteoric water as young as 5-10 Ma that dissolved anhydrite and halite, whereas Ca-Cl brine is interpreted to be ancient, modified-connate Permian brine that now is mixing with, and being displaced by, the Na-Cl brine. Displacement fronts appear as broad mixing zones with no significant salinity gradients. Evolution of Ca-Cl brine composition from ideal evaporated sea water is attributed to dolomitization and syndepositional recycling of halite and bittern salts by intermittent influx of fresh water and sea water. Halite Ca-Cl brine in the evaporite section in the northern part of the basin differs from gypsum Ca-Cl brine in the south-central part in salinity and Na/Cl ratio and reflects segregation between halite- and gypsum-precipitating lagoons during the Permian. Ca-Cl brine moved downward through the evaporite section into the underlying Lower Permian and Pennsylvanian marine section that is now the deep-basin brine aquifer, mixing there with pre-existing sea water. Buoyancy-driven convection of brine dominated local flow for most of basin history, with regional advection governed by topographically related forces dominant only for the past 5 to 10 Ma. 71 refs., 11 figs.
Bond-Lamberty, Benjamin; Calvin, Katherine V.; Jones, Andrew D.; Mao, Jiafu; Patel, Pralit L.; Shi, Xiaoying; Thomson, Allison M.; Thornton, Peter E.; Zhou, Yuyu
2014-01-01
Human activities are significantly altering biogeochemical cycles at the global scale, posing a significant problem for earth system models (ESMs), which may incorporate static land-use change inputs but do not actively simulate policy or economic forces. One option to address this problem is to couple an ESM with an economically oriented integrated assessment model. Here we have implemented and tested a coupling mechanism between the carbon cycles of an ESM (CLM) and an integrated assessment model (GCAM), examining the best proxy variables to share between the models, and quantifying our ability to distinguish climate- and land-use-driven flux changes. CLM's net primary production and heterotrophic respiration outputs were found to be the most robust proxy variables by which to manipulate GCAM's assumptions of long-term ecosystem steady-state carbon, with short-term forest production strongly correlated with long-term biomass changes in climate-change model runs. By leveraging the fact that carbon-cycle effects of anthropogenic land-use change are short-term and spatially limited relative to widely distributed climate effects, we were able to distinguish these effects successfully in the model coupling, passing only the latter to GCAM. By allowing climate effects from a full earth system model to dynamically modulate the economic and policy decisions of an integrated assessment model, this work provides a foundation for linking these models in a robust and flexible framework capable of examining two-way interactions between human and earth system processes.
An Integrated Approach to Coal Gasifier Testing, Modeling, and Process Optimization
Sundaram, S. K.; Johnson, Kenneth I.; Matyas, Josef; Williford, Ralph E.; Pilli, Siva Prasad; Korolev, Vladimir N.
2009-10-01
Gasification is an important method of converting coal into clean burning fuels and high-value industrial chemicals. However, gasifier reliability can be severely limited by rapid degradation of the refractory lining in hot-wall gasifiers. The Pacific Northwest National Laboratory (PNNL) is performing multidisciplinary research to provide the experimental data and the engineering models needed to control gasifier operation for extended refractory life. Our experimental program includes prediction of slag viscosity using empirical viscosity models encompassing US coals, characterization of selected slag-refractory interaction including transport of slag/refractory components at the slag-refractory interface, and measurement of slag penetration into refractories as a function of time and temperature. The experimental data is used in slag flow, slag penetration, and refractory damage models to predict the operating temperature limits for increased refractory life. A simplified entrained flow gasifier model is also being developed to simulate one-dimensional axial flow with average axial velocity, coal devolatilization, and combustion kinetics. Combining the slag flow, refractory degradation, and gasifier models will provide a powerful tool to predict the coal and oxidant feed rates and control the gasifier operation to balance coal conversion efficiency with increased refractory life. A research scale gasifier has also been constructed at PNNL to provide syngas for coal conversion and carbon sequestration research, and also valuable datasets on operating conditions for validating the modeling results.
Origin of Macrostrains and Microstrains in Diamond-SiC Nanocomposites Based on the Core-Shell Model
Palosz,B.; Stelmakh, S.; Grzanka, E.; Gierlotka, S.; Nauyoks, S.; Zerda, T.; Palosz, W.
2007-01-01
SiC-diamond nanocomposites were synthesized from nanodiamond and nanosilicon powders. A core-shell model of the composite nanocrystals was examined, assuming that interatomic distances in the grain interior (the core) and at the surface shell (grain boundaries in nanocrystalline solids) are different. The samples were investigated by x-ray diffraction using a synchrotron source. The powder diffractograms were analyzed using the apparent lattice parameter methodology. The structure of the composites and its dependence on the sintering conditions is discussed. It is shown that as the sintering temperature increases the interatomic distances in the grain cores decrease, while the opposite occurs in the grain shells (forming the grain boundaries). At a certain sintering temperature the interatomic distances in the core and in the shell become equal. However, for diamond this happens at a different temperature than for SiC, thus internal strains in the composites are unavoidable.
Gu, Pei-Hong
2014-12-01
We propose an SO(10) × SO(10)' model to simultaneously realize a seesaw for Dirac neutrino masses and a leptogenesis for ordinary and dark matter-antimatter asymmetries. A (16 × 16-bar'){sub H} scalar crossing the SO(10) and SO(10)' sectors plays an essential role in this seesaw-leptogenesis scenario. As a result of lepton number conservation, the lightest dark nucleon as the dark matter particle should have a determined mass around 15 GeV to explain the comparable fractions of ordinary and dark matter in the present universe. The (16 × 16-bar'){sub H} scalar also mediates a U(1){sub em} × U(1)'{sub em} kinetic mixing after the ordinary and dark left-right symmetry breaking, so that we can expect a dark nucleon scattering in direct detection experiments and/or a dark nucleon decay in indirect detection experiments. Furthermore, we can impose a softly broken mirror symmetry to simplify the parameter choice.
An integrated approach to coal gasifier testing, modeling, and process optimization
S.K. Sundaram; K.I. Johnson; J. Matyas; R.E. Williford; S.P. Pilli; V.N. Korolev
2009-09-15
Gasification is an important method of converting coal into clean-burning fuels and high-value industrial chemicals. However, gasifier reliability can be severely limited by rapid degradation of the refractory lining in hot-wall gasifiers. This paper describes an integrated approach to provide the experimental data and engineering models needed to better understand how to control gasifier operation for extended refractory life. The experimental program includes slag viscosity testing and measurement of slag penetration into refractories as a function of time and temperature. The experimental data is used in slag flow, slag penetration, and refractory damage models to predict the limits on operating temperature for increased refractory life. A simplified entrained flow gasifier model is also described to simulate one-dimensional axial flow with average axial velocity, coal devolatilization, and combustion kinetics. The goal of this experimental and model program is to predict coal and oxidant feed rates and to control the gasifier operation to balance coal conversion efficiency with increased refractory life. 26 refs., 7 figs., 3 tabs.
Rafique, Rashid; Kumar, Sandeep; Luo, Yiqi; Kiely, Gerard; Asrar, Ghassem R.
2015-02-01
The accurate calibration of complex biogeochemical models is essential for the robust estimation of soil greenhouse gases (GHG) as well as other environmental conditions and parameters that are used in research and policy decisions. DayCent is a popular biogeochemical model used both nationally and internationally for this purpose. Despite DayCent's popularity, its complex parameter estimation is often based on experts' knowledge, which is somewhat subjective. In this study we used the inverse modelling parameter estimation software PEST to calibrate the DayCent model based on sensitivity and identifiability analysis. Using previously published N2O and crop yield data as the basis of our calibration approach, we found that half of the 140 parameters used in this study were the primary drivers of calibration differences (i.e. the most sensitive) and the remaining parameters could not be identified given the data set and parameter ranges we used in this study. The post-calibration results showed improvement over the pre-calibration parameter set based on a decrease in residual differences (79% for N2O fluxes and 84% for crop yield) and an increase in the coefficient of determination (63% for N2O fluxes and 72% for corn yield). The results of our study suggest that future studies need to better characterize germination temperature, number of degree-days, and temperature dependency of plant growth; these processes were highly sensitive and could not be adequately constrained by the data used in our study. Furthermore, the sensitivity and identifiability analysis was helpful in providing deeper insight into important processes and associated parameters that can lead to further improvement in the calibration of the DayCent model.
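The sensitivity/identifiability idea behind a PEST-style analysis can be sketched on a toy model (this is not PEST or DayCent; the three-parameter response below is invented so that two parameters act almost identically): build the Jacobian of model outputs with respect to parameters by finite differences, rank parameters by column norm (sensitivity), and flag parameter pairs whose columns are nearly collinear, since those cannot be separately constrained by the data.

```python
import numpy as np

def model(p, t):
    """Made-up 3-parameter response; p2 and p3 act almost identically."""
    p1, p2, p3 = p
    return p1 * np.exp(-0.5 * t) + p2 * t + p3 * (t + 0.01 * t**2)

t = np.linspace(0.0, 2.0, 25)          # observation times
p0 = np.array([1.0, 0.5, 0.5])         # nominal parameter values

# Finite-difference Jacobian: one column per parameter.
eps = 1e-6
J = np.stack([(model(p0 + eps * np.eye(3)[i], t) - model(p0, t)) / eps
              for i in range(3)], axis=1)

sensitivity = np.linalg.norm(J, axis=0)     # larger = more sensitive
Jn = J / sensitivity                        # unit-norm columns
collinearity = np.abs(Jn.T @ Jn)            # near 1 => not identifiable
# collinearity[1, 2] is ~1: p2 and p3 cannot both be constrained.
```

In the study, the analogous diagnosis is what separated the ~70 identifiable, calibration-driving DayCent parameters from those the N2O and yield data could not constrain.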
Ely, James H.; Siciliano, Edward R.; Swinhoe, Martyn T.; Lintereur, Azaree T.
2013-01-01
This report details the results of the modeling and simulation work accomplished for the ‘Neutron Detection without Helium-3’ project during the 2011 and 2012 fiscal years. The primary focus of the project is to investigate commercially available technologies that might be used in safeguards applications in the relatively near term. Other technologies that are being developed may be more applicable in the future, but are outside the scope of this study.
J. Vernon Cole; Abhra Roy; Ashok Damle; Hari Dahr; Sanjiv Kumar; Kunal Jain; Ned Djilai
2012-10-02
Water management in Proton Exchange Membrane, PEM, Fuel Cells is challenging because of the inherent conflicts between the requirements for efficient low and high power operation. Particularly at low powers, adequate water must be supplied to sufficiently humidify the membrane or protons will not move through it adequately and resistance losses will decrease the cell efficiency. At high power density operation, more water is produced at the cathode than is necessary for membrane hydration. This excess water must be removed effectively or it will accumulate in the Gas Diffusion Layers, GDLs, between the gas channels and catalysts, blocking diffusion paths for reactants to reach the catalysts and potentially flooding the electrode. As power density of the cells is increased, the challenges arising from water management are expected to become more difficult to overcome simply due to the increased rate of liquid water generation relative to fuel cell volume. Thus, effectively addressing water management based issues is a key challenge in successful application of PEMFC systems. In this project, CFDRC and our partners used a combination of experimental characterization, controlled experimental studies of important processes governing how water moves through the fuel cell materials, and detailed models and simulations to improve understanding of water management in operating hydrogen PEM fuel cells. The characterization studies provided key data that is used as inputs to all state-of-the-art models for commercially important GDL materials. Experimental studies and microscopic scale models of how water moves through the GDLs showed that the water follows preferential paths, not branching like a river, as it moves toward the surface of the material. Experimental studies and detailed models of water and airflow in fuel cells channels demonstrated that such models can be used as an effective design tool to reduce operating pressure drop in the channels and the associated
SU-E-T-583: Optimizing the MLC Model Parameters for IMRT in the RayStation Treatment Planning System
Chen, S; Yi, B; Xu, H; Yang, X; Prado, K; D'Souza, W
2014-06-01
Purpose: To optimize the MLC model parameters for IMRT in the RayStation v.4.0 planning system for a Varian C-series linac with a 120-leaf Millennium MLC. Methods: The RayStation treatment planning system models a rounded leaf-end MLC with the following parameters: average transmission, leaf-tip width, tongue-and-groove width, and position offset. The position offset was provided by Varian. The leaf-tip width was iteratively evaluated by comparing computed and measured transverse dose profiles of MLC-defined fields at dmax in water. The profile comparison was also used to verify the MLC position offset. The transmission factor and leaf tongue width were derived iteratively by optimizing IMRT QA results for five clinical patient cases: brain, lung, pancreas, head-and-neck (HN), and prostate. The HN and prostate cases involved split fields. Verifications were performed with MapCHECK 2 measurements and Monte Carlo calculations. Finally, the MLC model was validated using five test IMRT cases from the AAPM TG-119 report. Absolute gamma analyses (3mm/3% and 2mm/2%) were applied. In addition, computed output factors for MLC-defined small fields (2x2, 3x3, 4x4, and 6x6 cm) of both 6MV and 18MV were compared to those measured by the Radiological Physics Center (RPC). Results: Both 6MV and 18MV models were determined to have the same MLC parameters: 2.5% transmission, 0.05cm tongue-and-groove, and 0.3cm leaf-tip. IMRT QA analysis for the five TG-119 cases resulted in a 100% passing rate with 3mm/3% gamma analysis for 6MV, and >97.5% for 18MV. With 2mm/2% gamma analysis, the passing rate was >94.6% for 6MV and >90.9% for 18MV. The difference between computed output factors in RayStation and RPC measurements was less than 2% for all MLC-defined fields, which meets the RPC's acceptance criterion. Conclusion: The rounded leaf-end MLC model in the RayStation 4.0 planning system was verified and IMRT commissioning was clinically acceptable. The IMRT commissioning was well validated using guidance from the
Modeling and Optimization of Direct Chill Casting to Reduce Ingot Cracking
Das, S.K.; Ningileri, S.; Long, Z.; Saito, K.; Khraisheh, M.; Hassan, M.H.; Kuwana, K.; Han, Q.; Viswanathan, S.; Sabau, A.S.; Clark, J.; Hyrn, J. (ANL)
2006-08-15
Approximately 68% of the aluminum produced in the United States is first cast into ingots prior to further processing into sheet, plate, extrusions, or foil. The direct chill (DC) semi-continuous casting process has been the mainstay of the aluminum industry for the production of ingots due largely to its robust nature and relative simplicity. Though the basic process of DC casting is in principle straightforward, the interaction of process parameters with heat extraction, microstructural evolution, and development of solidification stresses is too complex to analyze by intuition or practical experience. One issue in DC casting is the formation of stress cracks [1-15]. In particular, the move toward larger ingot cross-sections, the use of higher casting speeds, and an ever-increasing array of mold technologies have increased industry efficiencies but have made it more difficult to predict the occurrence of stress crack defects. The Aluminum Industry Technology Roadmap [16] has recognized the challenges inherent in the DC casting process and the control of stress cracks and selected the development of 'fundamental information on solidification of alloys to predict microstructure, surface properties, and stresses and strains' as a high-priority research need, and the 'lack of understanding of mechanisms of cracking as a function of alloy' and 'insufficient understanding of the aluminum solidification process', which is 'difficult to model', as technology barriers in aluminum casting processes. The goal of this Aluminum Industry of the Future (IOF) project was to assist the aluminum industry in reducing the incidence of stress cracks from the current level of 5% to 2%. Decreasing stress crack incidence is important for improving product quality and consistency as well as for saving resources and energy, since considerable amounts of cast metal could be saved by eliminating ingot cracking, by reducing the scalping thickness of the ingot before rolling, and by
Optimization of the parameters of plasma liners with zero-dimensional models
Oreshkin, V. I.
2013-11-15
The efficiency of conversion of the energy stored in the capacitor bank of a high-current pulse generator into the kinetic energy of an imploding plasma liner is analyzed. The analysis is performed by using a model consisting of LC circuit equations and equations of motion of a cylindrical shell. It is shown that efficient energy conversion can be attained only with a low-inductance generator. The mode of an 'ideal' load is considered where the load current at the final stage of implosion is close to zero. The advantages of this mode are, first, high efficiency of energy conversion (80%) and, second, improved stability of the shell implosion. In addition, for inertial confinement fusion realized by the scheme of a Z pinch dynamic hohlraum, not one but several fusion targets can be placed in the cavity on the pinch axis due to the large length of the liner.
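The zero-dimensional model described above (LC circuit equations coupled to the equation of motion of a cylindrical shell) can be sketched numerically. This is a minimal illustration, not the author's calculation: the capacitor bank values, liner mass, and dimensions below are hypothetical placeholders, and a simple forward-Euler integration stands in for whatever scheme was actually used.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def implode(C=10e-6, U0=50e3, L0=20e-9, ell=0.02, r0=0.02, m=1e-6, dt=1e-10):
    """0-D liner implosion: series LC circuit driving a cylindrical shell.

    C, U0 : capacitor bank capacitance (F) and charge voltage (V)
    L0    : fixed generator inductance (H); ell, r0: liner length/radius (m)
    m     : liner mass (kg).  All values are illustrative placeholders.
    """
    q = I = v = t = 0.0
    r = r0
    while r > 0.1 * r0 and t < 2e-6:
        Lp = (MU0 * ell / (2 * math.pi)) * math.log(r0 / r)  # liner inductance
        dLp_dt = -(MU0 * ell / (2 * math.pi)) * v / r        # grows as r shrinks
        Vc = U0 - q / C                                      # capacitor voltage
        dI_dt = (Vc - I * dLp_dt) / (L0 + Lp)                # circuit equation
        a = -(MU0 * ell / (4 * math.pi)) * I * I / (r * m)   # inward magnetic force / mass
        q += I * dt; I += dI_dt * dt; v += a * dt; r += v * dt; t += dt
    kinetic = 0.5 * m * v * v
    stored = 0.5 * C * U0 * U0
    return r, v, kinetic / stored  # final radius, velocity, conversion efficiency

r, v, eff = implode()
```

Sweeping L0 downward in a sketch like this reproduces the qualitative conclusion above: the kinetic-energy conversion efficiency rises only when the generator inductance is small compared with the inductance swing of the imploding liner.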
Rong Xing; Ghaly, Michael; Frey, Eric C.
2013-06-15
Purpose: In yttrium-90 ({sup 90}Y) microsphere brachytherapy (radioembolization) of unresectable liver cancer, posttherapy {sup 90}Y bremsstrahlung single photon emission computed tomography (SPECT) has been used to document the distribution of microspheres in the patient and to help predict potential side effects. The energy window used during projection acquisition can have a significant effect on image quality. Thus, using an optimal energy window is desirable. However, there has been great variability in the choice of energy window due to the continuous and broad energy distribution of {sup 90}Y bremsstrahlung photons. The area under the receiver operating characteristic curve (AUC) for the ideal observer (IO) is a widely used figure of merit (FOM) for optimizing the imaging system for detection tasks. The IO implicitly assumes a perfect model of the image formation process. However, for {sup 90}Y bremsstrahlung SPECT there can be substantial model-mismatch (i.e., difference between the actual image formation process and the model of it assumed in reconstruction), and the amount of the model-mismatch depends on the energy window. It is thus important to account for the degradation of the observer performance due to model-mismatch in the optimization of the energy window. The purpose of this paper is to optimize the energy window for {sup 90}Y bremsstrahlung SPECT for a detection task while taking into account the effects of the model-mismatch. Methods: An observer, termed the ideal observer with model-mismatch (IO-MM), has been proposed previously to account for the effects of the model-mismatch on IO performance. In this work, the AUC for the IO-MM was used as the FOM for the optimization. To provide a clinically realistic object model and imaging simulation, the authors used a background-known-statistically and signal-known-statistically task. The background was modeled as multiple compartments in the liver with activity parameters independently following a
DISSELKAMP RS
2011-01-06
Boehmite (i.e., aluminum oxyhydroxide) is a major non-radioactive component in Hanford and Savannah River nuclear tank waste sludge. Boehmite dissolution from sludge using caustic at elevated temperatures is being planned at Hanford to minimize the mass of material disposed of as high-level waste (HLW) during operation of the Waste Treatment Plant (WTP). To more thoroughly understand the chemistry of this dissolution process, we have developed an empirical kinetic model for aluminate production due to boehmite dissolution. Application of this model to Hanford tank wastes would allow predictability and optimization of the caustic leaching of aluminum solids, potentially yielding significant improvements to overall processing time, disposal cost, and schedule. This report presents an empirical kinetic model that can be used to estimate the aluminate production from the leaching of boehmite in Hanford waste as a function of the following parameters: (1) hydroxide concentration; (2) temperature; (3) specific surface area of boehmite; (4) initial soluble aluminate plus gibbsite present in waste; (5) concentration of boehmite in the waste; and (6) (pre-fit) Arrhenius kinetic parameters. The model was fit to laboratory, non-radioactive (i.e., 'simulant boehmite') leaching results, providing best-fit values of the Arrhenius A-factor, A, and apparent activation energy, E{sub A}, of A = 5.0 x 10{sup 12} hour{sup -1} and E{sub A} = 90 kJ/mole. These parameters were then used to predict boehmite leaching behavior observed in previously reported actual waste leaching studies. Acceptable aluminate versus leaching time profiles were predicted for waste leaching data from both Hanford and Savannah River site studies.
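As a quick numerical illustration, the fitted Arrhenius parameters quoted above (A = 5.0 x 10{sup 12} hour{sup -1}, E{sub A} = 90 kJ/mole) can be evaluated at typical leach temperatures. This sketch only computes the rate constant k(T) = A exp(-E_A/RT); the full empirical model in the report also depends on hydroxide concentration, boehmite surface area, and the other listed parameters.

```python
import math

R = 8.314e-3   # gas constant, kJ/(mol K)
A = 5.0e12     # best-fit Arrhenius A-factor, 1/hour (from the report)
E_A = 90.0     # best-fit apparent activation energy, kJ/mol (from the report)

def rate_constant(temp_celsius):
    """Arrhenius rate constant k = A * exp(-E_A / (R T)), in 1/hour."""
    T = temp_celsius + 273.15
    return A * math.exp(-E_A / (R * T))

for t_c in (80, 90, 100):
    print(f"{t_c} C -> k = {rate_constant(t_c):.3g} 1/hour")
```

With a 90 kJ/mol apparent activation energy, the rate constant roughly doubles for every ~10 degree C increase near 90 degrees C, which is why leach temperature dominates the overall processing time.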
Integrated controls design optimization
Lou, Xinsheng; Neuschaefer, Carl H.
2015-09-01
A control system (207) for optimizing a chemical looping process of a power plant includes an optimizer (420), an income algorithm (230), a cost algorithm (225), and chemical looping process models. The process models are used to predict the process outputs from process input variables. Some of the process input and output variables are related to the income of the plant, while others are related to the cost of the plant operations. The income algorithm (230) provides an income input to the optimizer (420) based on a plurality of input parameters (215) of the power plant. The cost algorithm (225) provides a cost input to the optimizer (420) based on a plurality of output parameters (220) of the power plant. The optimizer (420) determines an optimized operating parameter solution based on at least one of the income input and the cost input, and supplies the optimized operating parameter solution to the power plant.
Energy Information Administration (EIA) (indexed site)
Table OS-1. Domestic coal distribution, by origin State, 1st Quarter 2010 Origin: Alabama (thousand short tons) Coal Destination State...
Original Impact Calculations, from the Tool Kit Framework: Small Town University Energy Program (STEP).
Application of optimal prediction to molecular dynamics
Barber IV, John Letherman
2004-12-01
Optimal prediction is a general system reduction technique for large sets of differential equations. In this method, which was devised by Chorin, Hald, Kast, Kupferman, and Levy, a projection operator formalism is used to construct a smaller system of equations governing the dynamics of a subset of the original degrees of freedom. This reduced system consists of an effective Hamiltonian dynamics, augmented by an integral memory term and a random noise term. Molecular dynamics is a method for simulating large systems of interacting fluid particles. In this thesis, I construct a formalism for applying optimal prediction to molecular dynamics, producing reduced systems from which the properties of the original system can be recovered. These reduced systems require significantly less computational time than the original system. I initially consider first-order optimal prediction, in which the memory and noise terms are neglected. I construct a pair approximation to the renormalized potential, and ignore three-particle and higher interactions. This produces a reduced system that correctly reproduces static properties of the original system, such as energy and pressure, at low-to-moderate densities. However, it fails to capture dynamical quantities, such as autocorrelation functions. I next derive a short-memory approximation, in which the memory term is represented as a linear frictional force with configuration-dependent coefficients. This allows the use of a Fokker-Planck equation to show that, in this regime, the noise is {delta}-correlated in time. This linear friction model reproduces not only the static properties of the original system, but also the autocorrelation functions of dynamical variables.
Zhou, Zhi; de Bedout, Juan Manuel; Kern, John Michael; Biyik, Emrah; Chandra, Ramu Sharat
2013-01-22
A system for optimizing customer utility usage in a utility network of customer sites, each having one or more utility devices, where customer site information is communicated between each of the customer sites and an optimization server having software for optimizing customer utility usage over one or more networks, including private and public networks. A customer site model for each of the customer sites is generated based upon the customer site information, and the customer utility usage is optimized based upon the customer site information and the customer site model. The optimization server can be hosted by an external source or within the customer site. In addition, the optimization processing can be partitioned between the customer site and an external source.
Brigantic, Robert T.; Papatyi, Anthony F.; Perkins, Casey J.
2010-09-30
This report summarizes a study and corresponding model development conducted in support of the United States Pacific Command (USPACOM) as part of the Federal Energy Management Program (FEMP) American Reinvestment and Recovery Act (ARRA). This research was aimed at developing a mathematical programming framework and accompanying optimization methodology in order to simultaneously evaluate energy efficiency (EE) and renewable energy (RE) opportunities. Once developed, this research then demonstrated this methodology at a USPACOM installation - Camp H.M. Smith, Hawaii. We believe this is the first time such an integrated, joint EE and RE optimization methodology has been constructed and demonstrated.
Control and optimization system
Xinsheng, Lou
2013-02-12
A system for optimizing a power plant includes a chemical loop having an input for receiving an input parameter (270) and an output for outputting an output parameter (280), a control system operably connected to the chemical loop and having a multiple controller part (230) comprising a model-free controller. The control system receives the output parameter (280), optimizes the input parameter (270) based on the received output parameter (280), and outputs an optimized input parameter (270) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
Eranki, Pragnya L.; Manowitz, David H.; Bals, Bryan D.; Izaurralde, Roberto C.; Kim, Seungdo; Dale, Bruce E.
2013-07-23
An array of feedstock is being evaluated as potential raw material for cellulosic biofuel production. Thorough assessments are required in regional landscape settings before these feedstocks can be cultivated and sustainable management practices can be implemented. On the processing side, a potential solution to the logistical challenges of large biorefi neries is provided by a network of distributed processing facilities called local biomass processing depots. A large-scale cellulosic ethanol industry is likely to emerge soon in the United States. We have the opportunity to influence the sustainability of this emerging industry. The watershed-scale optimized and rearranged landscape design (WORLD) model estimates land allocations for different cellulosic feedstocks at biorefinery scale without displacing current animal nutrition requirements. This model also incorporates a network of the aforementioned depots. An integrated life cycle assessment is then conducted over the unified system of optimized feedstock production, processing, and associated transport operations to evaluate net energy yields (NEYs) and environmental impacts.
March-Leuba, S.; Jansen, J.F.; Kress, R.L.; Babcock, S.M.; Dubey, R.V. (Dept. of Mechanical and Aerospace Engineering)
1992-08-01
A new program package, Symbolic Manipulator Laboratory (SML), for the automatic generation of both kinematic and static manipulator models in symbolic form is presented. Critical design parameters may be identified and optimized using symbolic models, as shown in the sample application presented for the Future Armor Rearm System (FARS) arm. The computer-aided development of the symbolic models yields equations with reduced numerical complexity. Particular attention has been paid to simplification of the closed-form solutions and to user-friendly operation. The main emphasis of this research is the development of a methodology, implemented in a computer program, capable of generating symbolic kinematic and static force models of manipulators. The fact that the models are obtained in trigonometrically reduced form is among the most significant results of this work and was the most difficult to implement. Mathematica, a commercial program that allows symbolic manipulation, is used to implement the program package. SML is written such that the user can change any of the subroutines or create new ones easily. To assist the user, on-line help has been written to make SML a user-friendly package. Some sample applications are presented. The design and optimization of the 5-degree-of-freedom (DOF) FARS manipulator using SML is discussed. Finally, the kinematic and static models of two different 7-DOF manipulators are calculated symbolically.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Optimizing Performance: Storage Optimization. Optimizing the sizes of the files you store in HPSS and minimizing the number of tapes they are on will lead to the most efficient use of NERSC HPSS: File sizes of about 1 GB or larger will give the best network performance (see graph below). File sizes greater than about 500 GB can be more difficult to work with and lead to longer transfer times. Files larger than 15 TB cannot be uploaded to HPSS. Aggregate groups of small files
HOPSPACK: Hybrid Optimization Parallel Search Package.
Gray, Genetha A.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica
2008-12-01
In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.
Vanderbei, Robert J.; Pınar, Mustafa C.; Bozkaya, Efe B.
2013-02-15
An American option (or, warrant) is the right, but not the obligation, to purchase or sell an underlying equity at any time up to a predetermined expiration date for a predetermined amount. A perpetual American option differs from a plain American option in that it does not expire. In this study, we solve the optimal stopping problem of a perpetual American option (both call and put) in discrete time using linear programming duality. Under the assumption that the underlying stock price follows a discrete time and discrete state Markov process, namely a geometric random walk, we formulate the pricing problem as an infinite dimensional linear programming (LP) problem using the excessive-majorant property of the value function. This formulation allows us to solve complementary slackness conditions in closed-form, revealing an optimal stopping strategy which highlights the set of stock-prices where the option should be exercised. The analysis for the call option reveals that such a critical value exists only in some cases, depending on a combination of state-transition probabilities and the economic discount factor (i.e., the prevailing interest rate) whereas it ceases to be an issue for the put.
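The optimal stopping structure described above can be illustrated with a small value-iteration sketch (rather than the paper's LP-duality derivation). The up-factor, transition probability, and discount factor below are hypothetical, chosen only so that an exercise boundary appears for the put.

```python
def perpetual_put(K=100.0, s0=100.0, u=1.1, p=0.5, beta=0.95, n=40, tol=1e-10):
    """Value iteration for a perpetual American put on a geometric random walk.

    States are s0 * u**i for i in -n..n; each step the price moves up by the
    factor u with probability p, down by 1/u otherwise; beta is the discount.
    """
    prices = [s0 * u**i for i in range(-n, n + 1)]
    payoff = [max(K - s, 0.0) for s in prices]
    V = payoff[:]
    while True:
        Vn = V[:]
        for i in range(1, len(prices) - 1):
            cont = beta * (p * V[i + 1] + (1 - p) * V[i - 1])
            Vn[i] = max(payoff[i], cont)   # Bellman step: exercise vs. continue
        if max(abs(a - b) for a, b in zip(Vn, V)) < tol:
            # critical price: highest state where immediate exercise is optimal
            crit = max(s for s, v, g in zip(prices, Vn, payoff)
                       if g > 0 and v <= g + 1e-9)
            return prices, Vn, crit
        V = Vn
```

The converged value function is excessive and majorizes the payoff, which is exactly the property the paper exploits in its infinite-dimensional LP formulation; the returned critical price separates the exercise region from the continuation region.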
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Human Genome Research: DOE Origins Resources with Additional Information Charles DeLisi Charles DeLisi The genesis of the Department of Energy (DOE) human genome project took place ...
Energy Science and Technology Software Center
2015-08-04
Electrolyte systems are common in advanced electrochemical devices and have numerous other industrial, scientific, and medical applications. For example, contemporary batteries are tasked with operating under increasing performance requirements. All battery operation is in some way tied to the electrolyte and how it interacts with various regions within the cell environment. Since the electrolyte plays a crucial role in battery performance and longevity, it is imperative that accurate, physics-based models be developed that will characterize key electrolyte properties while keeping pace with the increasing complexity of these liquid systems. Advanced models are needed since laboratory measurements require significant resources to carry out for even a modest experimental effort. The Advanced Electrolyte Model (AEM) developed at the INL is a proven capability designed to explore molecular-to-macroscale level aspects of electrolyte behavior, and can be used to drastically reduce the time required to characterize and optimize electrolytes. This technology earned an R&D 100 award in 2014. Although it is applied most frequently to lithium-ion and sodium-ion battery systems, it is general in its theory and can be used toward numerous other targets and intended applications. This capability is unique, powerful, relevant to present and future electrolyte development, and without peer. It redefines electrolyte modeling for highly-complex contemporary systems, wherein significant steps have been taken to capture the reality of electrolyte behavior in the electrochemical cell environment. This capability can have a very positive impact on accelerating domestic battery development to support aggressive vehicle and energy goals in the 21st century.
Ivanov, A.; Sanchez, V.; Imke, U.; Ivanov, K.
2012-07-01
In order to increase the accuracy and the degree of spatial resolution of core design studies, coupled three-dimensional (3D) neutronics (deterministic and Monte Carlo) and 3D thermal-hydraulics (CFD and sub-channel) codes are being developed worldwide. In this paper the optimization of a coupling between the MCNP5 code and the in-house thermal-hydraulics code SUBCHANFLOW is presented. Various improvements of the coupling methodology are presented. With the help of a novel interpolation tool, a consistent methodology for the preparation of a thermal scattering data library has been developed, ensuring that inelastic scattering from bound nuclei is treated at the correct moderator temperature. Through the utilization of a hybrid coupling with the discrete-energy Monte Carlo code KENO, a methodology for acceleration of the coupled calculation is demonstrated. In this approach an additional coupling between KENO and SUBCHANFLOW was developed, the converged results of which are used as initial conditions for the MCNP-SUBCHANFLOW coupling. Acceleration of fission source distribution convergence, by sampling the fission source distribution from the power distribution obtained by KENO, is also demonstrated. (authors)
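The code-to-code exchange described here (a neutronics solve producing a power field from temperatures, and a thermal-hydraulics solve producing temperatures from power) is, at its core, a fixed-point (Picard) iteration. The toy solver functions, feedback coefficients, and relaxation factor below are illustrative stand-ins, not the actual MCNP5/SUBCHANFLOW interfaces.

```python
def picard_coupling(neutronics, thermal_hydraulics, T0, omega=0.5,
                    tol=1e-6, max_iter=200):
    """Fixed-point iteration between two single-physics solvers.

    neutronics:         temperature field -> power field
    thermal_hydraulics: power field -> temperature field
    omega:              under-relaxation factor stabilizing the exchange
    """
    T = list(T0)
    for it in range(1, max_iter + 1):
        P = neutronics(T)
        T_new = thermal_hydraulics(P)
        if max(abs(a - b) for a, b in zip(T_new, T)) < tol:
            return P, T_new, it
        T = [omega * tn + (1.0 - omega) * t for tn, t in zip(T_new, T)]
    raise RuntimeError("coupled iteration did not converge")

# Toy physics: power drops as the fuel heats up (Doppler-like feedback),
# and coolant temperature rises linearly with local power.
power = lambda T: [1000.0 / (1.0 + 0.001 * (t - 300.0)) for t in T]
temps = lambda P: [300.0 + 0.1 * p for p in P]
P, T, iters = picard_coupling(power, temps, [300.0, 320.0])
```

A converged lower-fidelity pass (as KENO provides in the hybrid scheme above) can simply be used as the initial guess T0 here, which is exactly how it accelerates the more expensive coupled calculation.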
Michael Harold; Vemuri Balakotaiah
2010-05-31
In this project a combined experimental and theoretical approach was taken to advance our understanding of lean NOx trap (LNT) technology. Fundamental kinetics studies were carried out on model LNT catalysts containing variable loadings of precious metals (Pt, Rh) and storage components (BaO, CeO{sub 2}). The Temporal Analysis of Products (TAP) reactor provided transient data under well-characterized conditions for both powder and monolith catalysts, enabling the identification of key reaction pathways and estimation of the corresponding kinetic parameters. The performance of model NOx storage and reduction (NSR) monolith catalysts was evaluated in a bench-scale NOx trap using synthetic exhaust, with attention placed on the effect of the pulse timing and composition on the instantaneous and cycle-averaged product distributions. From these experiments we formulated a global model that predicts the main spatio-temporal features of the LNT and mechanistic microkinetic models that incorporate a detailed understanding of the chemistry and predict more detailed selectivity features of the LNT. The NOx trap models were used to determine their ability to simulate bench-scale data and ultimately to evaluate alternative LNT designs and operating strategies. The four-year project led to the training of several doctoral students and the dissemination of the findings as 47 presentations in conferences, catalysis societies, and academic departments, as well as 23 manuscripts in peer-reviewed journals. A condensed review of NOx storage and reduction was published in an encyclopedia of technology.
Fuzzy logic controller optimization
Sepe, Jr., Raymond B; Miller, John Michael
2004-03-23
A method is provided for optimizing a rotating induction machine system fuzzy logic controller. The fuzzy logic controller has at least one input and at least one output. Each input accepts a machine system operating parameter. Each output produces at least one machine system control parameter. The fuzzy logic controller generates each output based on at least one input and on fuzzy logic decision parameters. Optimization begins by obtaining a set of data relating each control parameter to at least one operating parameter for each machine operating region. A model is constructed for each machine operating region based on the machine operating region data obtained. The fuzzy logic controller is simulated with at least one created model in a feedback loop from a fuzzy logic output to a fuzzy logic input. Fuzzy logic decision parameters are optimized based on the simulation.
Lincoln, Don
2016-07-12
The Higgs boson was discovered in July of 2012 and is generally understood to be the origin of mass. While those statements are true, they are incomplete. It turns out that the Higgs boson is responsible for only about 2% of the mass of ordinary matter. In this dramatic new video, Dr. Don Lincoln of Fermilab tells us the rest of the story.
Arefinia, Zahra [Research Institute for Applied Physics and Astronomy, University of Tabriz, Tabriz 51666-14766 (Iran, Islamic Republic of); Asgari, Asghar, E-mail: asgari@tabrizu.ac.ir [Research Institute for Applied Physics and Astronomy, University of Tabriz, Tabriz 51666-14766 (Iran, Islamic Republic of); School of Electrical, Electronic, and Computer Engineering, University of Western Australia, Crawley, WA 6009 (Australia)
2014-05-21
Based on the ability of In{sub x}Ga{sub 1-x}N materials to optimally span the solar spectrum and their superior radiation resistance, solar cells based on p-type In{sub x}Ga{sub 1-x}N with low indium contents interfacing with a graphene film (G/In{sub x}Ga{sub 1-x}N) are proposed to exploit the benefit of the transparency and work-function tunability of graphene. Their solar power conversion efficiency was then modeled and optimized using a new analytical approach taking into account all recombination processes and accurate carrier mobility. Furthermore, their performance was compared with graphene-on-silicon counterparts, and G/p-In{sub x}Ga{sub 1-x}N showed relatively smaller short-circuit current (~7 mA/cm{sup 2}) and significantly higher open-circuit voltage (~4 V) and efficiency (~30%). The thickness, doping concentration, and indium content of p-In{sub x}Ga{sub 1-x}N and the graphene work function were found to substantially affect the performance of G/p-In{sub x}Ga{sub 1-x}N.
Dr. Ralph E. White; Dr. Branko N. Popov
2002-04-01
The dissolution of NiO cathodes during cell operation is a limiting factor to the successful commercialization of molten carbonate fuel cells (MCFCs). A lithium cobalt oxide coating on the porous nickel electrode has been adopted to modify the conventional MCFC cathode, which is believed to increase the stability of the cathodes in the carbonate melt. The material used for surface modification should possess thermodynamic stability in the molten carbonate and should also be electrocatalytically active for MCFC reactions. Two approaches have been adopted to obtain a stable cathode material. The first approach is the use of LiNi{sub 0.8}Co{sub 0.2}O{sub 2}, a commercially available lithium battery cathode material, and the second is the use of tape-cast electrodes prepared from cobalt-coated nickel powders. The morphology and structure of the LiNi{sub 0.8}Co{sub 0.2}O{sub 2} and tape-cast Co-coated nickel powder electrodes were studied using scanning electron microscopy and X-ray diffraction, respectively. The electrochemical performance of the two materials was investigated by electrochemical impedance spectroscopy and polarization studies. A three-phase homogeneous model was developed to simulate the performance of the molten carbonate fuel cell cathode. The homogeneous model is based on volume averaging of different variables in the three phases over a small volume element. The model gives a good fit to the experimental data and has been used to analyze MCFC cathode performance under a wide range of operating conditions.
Moro, Erik A.
2012-06-07
The optimal design of an intensity-modulated interferometric sensor depends on an appropriate performance function (e.g., desired displacement range, accuracy, robustness, etc.). In this dissertation, the performance limitations of a bundled differential intensity-modulated displacement sensor are analyzed, where the bundling configuration has been designed to optimize performance. The performance limitations of a white light Fabry-Perot displacement sensor are also analyzed. Both sensors are non-contacting, but they have access to different regions of the performance space. Further, the two sensors have different degrees of sensitivity to experimental uncertainty. Made in conjunction with careful analysis, the decision of which sensor to deploy need not be an uninformed one.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Optimization Performance and Optimization Performance Monitoring Last edited: 2012-01-09 12:31:03...
Lovley, Derek R
2012-12-28
The goal of this research was to provide computational tools to predictively model the behavior of two microbial communities of direct relevance to Department of Energy interests: 1) the microbial community responsible for in situ bioremediation of uranium in contaminated subsurface environments; and 2) the microbial community capable of harvesting electricity from waste organic matter and renewable biomass. During this project, the concept of microbial electrosynthesis, a novel form of artificial photosynthesis for the direct production of fuels and other organic commodities from carbon dioxide and water, was also developed, and research was expanded into this area as well.
Robert C. Haight; John L. Ullmann; Daniel D. Strottman; Paul E. Koehler; Franz Kaeppeler
2000-01-01
This Workshop was held on September 3-4, 1999, following the 10th International Symposium on Capture Gamma-Ray Spectroscopy. Presentations were made by 14 speakers, 6 from the US and 8 from other countries, on topics relevant to s-, r- and rp-process nucleosynthesis. Laboratory experiments, both present and planned, and astrophysical observations were represented, as were astrophysical models. Approximately 50 scientists participated in this Workshop. These Proceedings consist of copies of viewgraphs presented at the Workshop. For further information, interested readers are referred to the authors.
Kohut, Sviataslau V.; Staroverov, Viktor N.; Ryabinkin, Ilya G.
2014-05-14
We describe a method for constructing a hierarchy of model potentials approximating the functional derivative of a given orbital-dependent exchange-correlation functional with respect to electron density. Each model is derived by assuming a particular relationship between the self-consistent solutions of Kohn-Sham (KS) and generalized Kohn-Sham (GKS) equations for the same functional. In the KS scheme, the functional is differentiated with respect to density; in the GKS scheme, with respect to orbitals. The lowest-level approximation is the orbital-averaged effective potential (OAEP) built with the GKS orbitals. The second-level approximation, termed the orbital-consistent effective potential (OCEP), is based on the assumption that the KS and GKS orbitals are the same. It has the form of the OAEP plus a correction term. The highest-level approximation is the density-consistent effective potential (DCEP), derived under the assumption that the KS and GKS electron densities are equal. The analytic expression for a DCEP is the OCEP formula augmented with kinetic-energy-density-dependent terms. In the case of the exact-exchange functional, the OAEP is the Slater potential, the OCEP is roughly equivalent to the localized Hartree-Fock approximation and related models, and the DCEP is practically indistinguishable from the true optimized effective potential for exact exchange. All three levels of the proposed hierarchy require solutions of the GKS equations as input and have the same affordable computational cost.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Original Signature on File. Page 8 of 8. M. EMERGENCY PROCEDURES 1. The owner/operator must maintain an adequately trained onsite RCRA emergency coordinator to direct emergency procedures which could result from fires, explosions or releases of PCB-containing waste at the Facility. The owner/operator must submit the name and qualifications of the emergency coordinator within sixty (60) days of the effective date of this approval. 2. The owner/operator must maintain in good working order any equipment
Fuel Efficiency and Emissions Optimization of Heavy-Duty Diesel...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
and Emissions Optimization of Heavy-Duty Diesel Engines using Model-Based Transient Calibration Fuel Efficiency and Emissions Optimization of Heavy-Duty Diesel Engines using ...
QCAD simulation and optimization of semiconductor double quantum...
Office of Scientific and Technical Information (OSTI)
for modeling quantum computing devices; (iii) it couples with an optimization engine Dakota that enables optimization of gate voltages in DQDs for multiple desired targets. ...
FEMP Completes 2000th Renewable Energy Optimization Screening...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
FEMP Completes 2000th Renewable Energy Optimization Screening FEMP Completes 2000th Renewable Energy Optimization Screening July 23, 2015 - 12:03pm Addthis REopt models the complex ...
Next Generation Calibration Models with Dimensional Modeling...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Calibration Models with Dimensional Modeling Next Generation Calibration Models with ... Calibration Optimization for Next Generation Diesel Engines An Accelerated Aging ...
SIAM Conference on Optimization
Not Available
1992-05-10
Abstracts are presented of 63 papers on the following topics: large-scale optimization, interior-point methods, algorithms for optimization, problems in control, network optimization methods, and parallel algorithms for optimization problems.
An optimization framework for workplace charging strategies ...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
addressing different eligible levels of charging technology and employees' demographic distributions. The optimization model is to minimize the lifetime cost of...
Optimal lattice-structured materials
Messner, Mark C.
2016-07-09
This paper describes a method for optimizing the mesostructure of lattice-structured materials. These materials are periodic arrays of slender members resembling efficient, lightweight macroscale structures like bridges and frame buildings. Current additive manufacturing technologies can assemble lattice structures with length scales ranging from nanometers to millimeters. Previous work demonstrates that lattice materials have excellent stiffness- and strength-to-weight scaling, outperforming natural materials. However, there are currently no methods for producing optimal mesostructures that consider the full space of possible 3D lattice topologies. The inverse homogenization approach for optimizing the periodic structure of lattice materials requires a parameterized, homogenized material model describing the response of an arbitrary structure. This work develops such a model, starting with a method for describing the long-wavelength, macroscale deformation of an arbitrary lattice. The work combines the homogenized model with a parameterized description of the total design space to generate a parameterized model. Finally, the work describes an optimization method capable of producing optimal mesostructures. Several examples demonstrate the optimization method. One of these examples produces an elastically isotropic, maximally stiff structure, here called the isotruss, that arguably outperforms the anisotropic octet truss topology.
THE COSMIC ORIGINS SPECTROGRAPH
Green, James C.; Michael Shull, J.; Snow, Theodore P.; Stocke, John [Department of Astrophysical and Planetary Sciences, University of Colorado, 391-UCB, Boulder, CO 80309 (United States); Froning, Cynthia S.; Osterman, Steve; Beland, Stephane; Burgh, Eric B.; Danforth, Charles; France, Kevin [Center for Astrophysics and Space Astronomy, University of Colorado, 389-UCB, Boulder, CO 80309 (United States); Ebbets, Dennis [Ball Aerospace and Technologies Corp., 1600 Commerce Street, Boulder, CO 80301 (United States); Heap, Sara H. [NASA Goddard Space Flight Center, Code 681, Greenbelt, MD 20771 (United States); Leitherer, Claus; Sembach, Kenneth [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21218 (United States); Linsky, Jeffrey L. [JILA, University of Colorado and NIST, Boulder, CO 80309-0440 (United States); Savage, Blair D. [Department of Astronomy, University of Wisconsin-Madison, 475 North Charter Street, Madison, WI 53706 (United States); Siegmund, Oswald H. W. [Astronomy Department, University of California, Berkeley, CA 94720 (United States); Spencer, John; Alan Stern, S. [Southwest Research Institute, 1050 Walnut Street, Suite 300, Boulder, CO 80302 (United States); Welsh, Barry [Space Sciences Laboratory, University of California, 7 Gauss Way, Berkeley, CA 94720 (United States); and others
2012-01-01
The Cosmic Origins Spectrograph (COS) is a moderate-resolution spectrograph with unprecedented sensitivity that was installed into the Hubble Space Telescope (HST) in 2009 May, during HST Servicing Mission 4 (STS-125). We present the design philosophy and summarize the key characteristics of the instrument that will be of interest to potential observers. For faint targets, with flux F{sub {lambda}} {approx} 1.0 x 10{sup -14} erg cm{sup -2} s{sup -1} A{sup -1}, COS can achieve comparable signal-to-noise (when compared to Space Telescope Imaging Spectrograph echelle modes) in 1%-2% of the observing time. This has led to a significant increase in the total data volume and data quality available to the community. For example, in the first 20 months of science operation (2009 September-2011 June) the cumulative redshift pathlength of extragalactic sight lines sampled by COS is nine times that sampled at moderate resolution in 19 previous years of Hubble observations. COS programs have observed 214 distinct lines of sight suitable for study of the intergalactic medium as of 2011 June. COS has measured, for the first time with high reliability, broad Ly{alpha} absorbers and Ne VIII in the intergalactic medium, and observed the He II reionization epoch along multiple sightlines. COS has detected the first CO emission and absorption in the UV spectra of low-mass circumstellar disks at the epoch of giant planet formation, and detected multiple ionization states of metals in extra-solar planetary atmospheres. In the coming years, COS will continue its census of intergalactic gas, probe galactic and cosmic structure, and explore physics in our solar system and Galaxy.
Origin of primordial magnetic fields
Souza, Rafael S. de; Opher, Reuven
2008-02-15
Magnetic fields of intensities similar to those in our galaxy are also observed in high redshift galaxies, where a mean field dynamo would not have had time to produce them. Therefore, a primordial origin is indicated. It has been suggested that magnetic fields were created at various primordial eras: during inflation, the electroweak phase transition, the quark-hadron phase transition (QHPT), during the formation of the first objects, and during reionization. We suggest here that the large-scale fields {approx}{mu}G, observed in galaxies at both high and low redshifts by Faraday rotation measurements (FRMs), have their origin in the electromagnetic fluctuations that naturally occurred in the dense hot plasma that existed just after the QHPT. We evolve the predicted fields to the present time. The size of the region containing a coherent magnetic field increased due to the fusion of smaller regions. Magnetic fields (MFs) {approx}10 {mu}G over a comoving {approx}1 pc region are predicted at redshift z{approx}10. These fields are orders of magnitude greater than those predicted in previous scenarios for creating primordial magnetic fields. Line-of-sight average MFs {approx}10{sup -2} {mu}G, valid for FRMs, are obtained over a 1 Mpc comoving region at the redshift z{approx}10. In the collapse to a galaxy (comoving size {approx}30 kpc) at z{approx}10, the fields are amplified to {approx}10 {mu}G. This indicates that the MFs created immediately after the QHPT (10{sup -4} s), predicted by the fluctuation-dissipation theorem, could be the origin of the {approx}{mu}G fields observed by FRMs in galaxies at both high and low redshifts. Our predicted MFs are shown to be consistent with present observations. We discuss the possibility that the predicted MFs could cause non-negligible deflections of ultrahigh energy cosmic rays and help create the observed isotropic distribution of their incoming directions. We also discuss the importance of the volume-averaged magnetic field.
Putting combustion optimization to work
Spring, N.
2009-05-15
New plants and plants that are retrofitting can benefit from combustion optimization. Boiler tuning and optimization can complement each other. The continuous emissions monitoring system (CEMS) and tunable diode laser absorption spectroscopy (TDLAS) can be used for optimization. NeuCO's CombustionOpt neural network software can determine optimal fuel and air set points. Babcock and Wilcox Power Generation Group Inc's Flame Doctor can be used in conjunction with other systems to diagnose and correct coal-fired burner performance. The four units of the Colstrip power plant in Colstrip, Montana were recently fitted with combustion optimization systems based on advanced model predictive multivariable controls (MPCs), ABB's Predict & Control tool. Unit 4 of Tampa Electric's Big Bend plant in Florida is fitted with Emerson's SmartProcess fuzzy neural model-based combustion optimization system. 1 photo.
Optimal Electric Utility Expansion
Energy Science and Technology Software Center
1989-10-10
SAGE-WASP is designed to find the optimal generation expansion policy for an electrical utility system. New units can be automatically selected from a user-supplied list of expansion candidates which can include hydroelectric and pumped storage projects. The existing system is modeled. The calculational procedure takes into account user restrictions to limit generation configurations to an area of economic interest. The optimization program reports whether the restrictions acted as a constraint on the solution. All expansion configurations considered are required to pass a user-supplied reliability criterion. The discount rate and escalation rate are treated separately for each expansion candidate and for each fuel type. All expenditures are separated into local and foreign accounts, and a weighting factor can be applied to foreign expenditures.
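As a toy contrast with the kind of expansion-planning search described above, selecting candidates under a capacity requirement (a crude stand-in for a reliability criterion) can be brute-forced for tiny candidate lists. This is an illustrative sketch only, not SAGE-WASP's actual procedure; the candidate names, capacities, and costs are invented:

```python
from itertools import combinations

def cheapest_expansion(candidates, required_capacity):
    """Brute-force stand-in for an expansion-planning search.

    candidates: list of (name, capacity_MW, cost) tuples.
    Returns (cost, names) for the lowest-cost subset whose total
    capacity meets the requirement, or None if no subset qualifies.
    """
    best = None
    for r in range(len(candidates) + 1):
        for subset in combinations(candidates, r):
            cap = sum(c for _, c, _ in subset)
            cost = sum(k for _, _, k in subset)
            if cap >= required_capacity and (best is None or cost < best[0]):
                best = (cost, [n for n, _, _ in subset])
    return best
```

Real expansion planners replace this exponential enumeration with dynamic programming over system states, which is what makes multi-decade studies tractable.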
Optimal segmentation and packaging process
Kostelnik, Kevin M.; Meservey, Richard H.; Landon, Mark D.
1999-01-01
A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D&D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation and sequence of the segmentation and packaging of the contaminated items is determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded.
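The packaging-density objective above is a variant of classic bin packing. As a hypothetical illustration of the container-limitation constraint, the textbook first-fit-decreasing heuristic is sketched below; it is a stand-in for, not a description of, the patented optimization algorithms:

```python
def first_fit_decreasing(item_volumes, container_capacity):
    """Pack items into few containers via first-fit decreasing.

    Sort items largest-first, then place each into the first open
    container with enough remaining room, opening a new container
    only when none fits. Returns the container count and a sorted
    list of (item index, container index) assignments.
    """
    containers = []  # remaining capacity of each open container
    assignment = []
    order = sorted(range(len(item_volumes)), key=lambda i: -item_volumes[i])
    for i in order:
        v = item_volumes[i]
        for c, room in enumerate(containers):
            if v <= room:
                containers[c] -= v
                assignment.append((i, c))
                break
        else:  # no open container fits: open a new one
            containers.append(container_capacity - v)
            assignment.append((i, len(containers) - 1))
    return len(containers), sorted(assignment)
```

A real segmentation planner must additionally decide where to cut, which changes the item volumes themselves and couples the packing problem to the cut-minimization objective.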
Magnetic nematicity: A debated origin
Vaknin, David
2016-01-22
Different experimental studies based on nuclear magnetic resonance and inelastic neutron scattering reach opposing conclusions in regards to the origin of magnetic nematicity in iron chalcogenides.
Penser Original Contract - Hanford Site
U.S. Department of Energy (DOE) - all webpages (Extended Search)
& Procurements Home Prime Contracts Current Solicitations Other Sources DOE RL Contracting Officers DOE RL Contracting Officer Representatives Penser Original Contract Email...
Sootblowing optimization for improved boiler performance
James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J
2013-07-30
A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.
Sootblowing optimization for improved boiler performance
James, John Robert; McDermott, John; Piche, Stephen; Pickard, Fred; Parikh, Neel J.
2012-12-25
A sootblowing control system that uses predictive models to bridge the gap between sootblower operation and boiler performance goals. The system uses predictive modeling and heuristics (rules) associated with different zones in a boiler to determine an optimal sequence of sootblower operations and achieve boiler performance targets. The system performs the sootblower optimization while observing any operational constraints placed on the sootblowers.
Ayad, G.; Barriere, T.; Gelin, J. C. [Femto-ST Institute/LMA, ENSMM, 26 Rue de l'Epitaphe, 25000 Besancon (France); Song, J. [Femto-ST Institute/LMA, ENSMM, 26 Rue de l'Epitaphe, 25000 Besancon (France); Department of Applied Mechanics and Engineering, Southwest Jiaotong University, 610031 Chengdu (China); Liu, B. [Department of Applied Mechanics and Engineering, Southwest Jiaotong University, 610031 Chengdu (China)
2007-05-17
The paper is concerned with optimization and parametric identification of the Powder Injection Molding process, which consists first of the injection of a powder mixture with a polymer binder and then of the sintering of the resulting powder parts by solid-state diffusion. The first part describes an original methodology to optimize the injection stage based on the combination of Design of Experiments and adaptive Response Surface Modeling. The second part of the paper describes the identification strategy proposed for the sintering stage, using the identification of sintering parameters from dilatometer curves followed by the optimization of the sintering process. The proposed approaches are applied to the optimization of the manufacturing of a ceramic femoral implant. We demonstrate that the proposed approach gives satisfactory results.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Performance and Optimization Performance and Optimization Benchmarking Software on Hopper and Carver PURPOSE Test the performance impact of multithreading with representative...
An Optimized Swinging Door Algorithm for Wind Power Ramp Event...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
... An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas ...
CSC Original Contract - Hanford Site
U.S. Department of Energy (DOE) - all webpages (Extended Search)
CSC Original Contract (DOE-RL Prime Contracts); the contract documents are provided in Portable Document Format (PDF).
Wind Electrolysis: Hydrogen Cost Optimization
Saur, G.; Ramsden, T.
2011-05-01
This report describes a hydrogen production cost analysis of a collection of optimized central wind-based water electrolysis production facilities. The basic modeled wind electrolysis facility includes a number of low-temperature electrolyzers and a co-located wind farm encompassing a number of 3 MW wind turbines that provide electricity for the electrolyzer units.
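A back-of-the-envelope levelized hydrogen cost of the kind such analyses compute can be sketched as follows. All parameter values below are hypothetical placeholders (roughly 50 kWh/kg is a typical low-temperature electrolysis energy requirement), not figures from the report:

```python
def hydrogen_cost_per_kg(capex, crf, opex_frac, elec_price_kwh,
                         kwh_per_kg=52.0, kg_per_year=1.0e6):
    """Toy levelized hydrogen production cost in $/kg.

    capex: total capital cost ($)
    crf: capital recovery factor (1/yr), annualizing the capital
    opex_frac: annual O&M cost as a fraction of capex
    elec_price_kwh: electricity price ($/kWh)
    """
    annual_capital = capex * crf
    annual_om = capex * opex_frac
    annual_electricity = kwh_per_kg * kg_per_year * elec_price_kwh
    return (annual_capital + annual_om + annual_electricity) / kg_per_year
```

In a full analysis the electricity term is replaced by the co-located wind farm's levelized cost and capacity factor, which is where the wind/electrolyzer sizing optimization enters.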
Modal test optimization using VETO (Virtual Environment for Test Optimization)
Klenke, S.E.; Reese, G.M.; Schoof, L.A.; Shierling, C.
1996-01-01
We present a software environment integrating analysis and test-based models to support optimal modal test design through a Virtual Environment for Test Optimization (VETO). A goal in developing this software tool is to provide test and analysis organizations with a capability of mathematically simulating the complete test environment in software. Derived models of test equipment, instrumentation and hardware can be combined within the VETO to provide the user with a unique analysis and visualization capability to evaluate new and existing test methods. The VETO assists analysis and test engineers in maximizing the value of each modal test. It is particularly advantageous for structural dynamics model reconciliation applications. The VETO enables an engineer to interact with a finite element model of a test object to optimally place sensors and exciters and to investigate the selection of data acquisition parameters needed to conduct a complete modal survey. Additionally, the user can evaluate the use of different types of instrumentation such as filters, amplifiers and transducers for which models are available in the VETO. The dynamic response of most of the virtual instruments (including the device under test) is modeled in the state space domain. Design of modal excitation levels and appropriate test instrumentation are facilitated by the VETO's ability to simulate such features as unmeasured external inputs, A/D quantization effects, and electronic noise. Measures of the quality of the experimental design, including the Modal Assurance Criterion, and the Normal Mode Indicator Function are available.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
D-Wave for Optimization: the binary quadratic program (B-QP / QUBO) is an NP-hard combinatorial optimization problem. [Slide residue: the deck surveyed state-of-the-art optimization solvers and benchmarks, e.g. http://scip.zib.de/ and http://plato.asu.edu/ftp/milpc.html. Operated by Los Alamos National Security, LLC for the U.S. Department of Energy's NNSA. UNCLASSIFIED.]
Energy Science and Technology Software Center
2007-03-01
Aristos is a Trilinos package for nonlinear continuous optimization, based on full-space sequential quadratic programming (SQP) methods. Aristos is specifically designed for the solution of large-scale constrained optimization problems in which the linearized constraint equations require iterative (i.e. inexact) linear solver techniques. Aristos' unique feature is an efficient handling of inexactness in linear system solves. Aristos currently supports the solution of equality-constrained convex and nonconvex optimization problems. It has been used successfully in the area of PDE-constrained optimization, for the solution of nonlinear optimal control, optimal design, and inverse problems.
Origin of magnetic fields in galaxies
Souza, Rafael S. de; Opher, Reuven
2010-03-15
Microgauss magnetic fields are observed in all galaxies at low and high redshifts. The origin of these intense magnetic fields is a challenging question in astrophysics. We show here that the natural plasma fluctuations in the primordial Universe (assumed to be random), predicted by the fluctuation-dissipation theorem, predict {approx}0.034 {mu}G fields over {approx}0.3 kpc regions in galaxies. If the dipole magnetic fields predicted by the fluctuation-dissipation theorem are not completely random, microgauss fields over regions > or approx. 0.34 kpc are easily obtained. The model is thus a strong candidate for resolving the problem of the origin of magnetic fields in < or approx. 10{sup 9} years in high redshift galaxies.
Thermodynamic Metrics and Optimal Paths
Sivak, David; Crooks, Gavin
2012-05-08
A fundamental problem in modern thermodynamics is how a molecular-scale machine performs useful work, while operating away from thermal equilibrium without excessive dissipation. To this end, we derive a friction tensor that induces a Riemannian manifold on the space of thermodynamic states. Within the linear-response regime, this metric structure controls the dissipation of finite-time transformations, and bestows optimal protocols with many useful properties. We discuss the connection to the existing thermodynamic length formalism, and demonstrate the utility of this metric by solving for optimal control parameter protocols in a simple nonequilibrium model.
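The friction-tensor construction summarized above can be written out explicitly. This is a sketch of the standard linear-response formalism; the notation is mine, with delta X_i denoting the fluctuation of the force conjugate to control parameter lambda_i:

```latex
% Friction tensor from the time-integrated force covariance at fixed control parameters
\zeta_{ij}(\boldsymbol{\lambda})
  = \beta \int_0^{\infty} \langle \delta X_i(0)\,\delta X_j(t) \rangle_{\boldsymbol{\lambda}}\, dt ,
\qquad
P_{\mathrm{ex}}
  = \sum_{i,j} \dot{\lambda}_i\, \zeta_{ij}(\boldsymbol{\lambda})\, \dot{\lambda}_j .
```

The quadratic form defines the Riemannian line element ds^2 = sum_{ij} zeta_{ij} d lambda_i d lambda_j on the space of thermodynamic states; in this picture, low-dissipation protocols follow geodesics of the metric traversed at constant excess power.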
Optimized nanoporous materials.
Braun, Paul V.; Langham, Mary Elizabeth; Jacobs, Benjamin W.; Ong, Markus D.; Narayan, Roger J.; Pierson, Bonnie E.; Gittard, Shaun D.; Robinson, David B.; Ham, Sung-Kyoung; Chae, Weon-Sik; Gough, Dara V.; Wu, Chung-An Max; Ha, Cindy M.; Tran, Kim L.
2009-09-01
Nanoporous materials have maximum practical surface areas for electrical charge storage; every point in an electrode is within a few atoms of an interface at which charge can be stored. Metal-electrolyte interfaces make best use of surface area in porous materials. However, ion transport through long, narrow pores is slow. We seek to understand and optimize the tradeoff between capacity and transport. Modeling and measurements of nanoporous gold electrodes have allowed us to determine design principles, including the fact that these materials can deplete salt from the electrolyte, increasing resistance. We have developed fabrication techniques to demonstrate architectures inspired by these principles that may overcome identified obstacles. A key concept is that electrodes should be as close together as possible; this is likely to involve an interpenetrating pore structure. However, this may prove extremely challenging to fabricate at the finest scales; a hierarchically porous structure can be a worthy compromise.
Price, R; Veltchev, I; Cherian, G; Ma, C
2014-06-01
Purpose: Multiple publications exist concerning fixed-jaw utilization to avoid linac carriage shifts and reduce intensity modulated radiotherapy (IMRT) treatment times. The purpose of this work is to demonstrate delivery QA discrepancies and illustrate the need for improved treatment planning system (TPS) commissioning for non-routine use. Methods: A 6 cm diameter spherical target was delineated on a virtual phantom containing the Iba Matrixx linear array within the Varian Eclipse TPS. Optimization was performed for target coverage for the following 3 scenarios: a single open, zero-degree field where the X and Y jaws completely cover the target; the same field using an asymmetric, fixed-jaw technique where the upper Y jaw does not cover the superior 2 cm of the target; and both of the aforementioned directed at the target at 315 and 45 degree gantry angles, respectively. This final orientation was also irradiated on a linac for delivery analysis. A sarcoma patient case was also analyzed where the fixed-jaw technique was utilized for kidney sparing. Results: The open beam results were as predicted, but the fixed-jaw results demonstrate a pronounced fluence increase along the asymmetric upper jaw. Analysis of the delivery of the combined beam plan resulted in 83% of pixels evaluated passing gamma criteria of 3%, 3 mm DTA. Analysis for the sarcoma patient, in the plane of the shielded kidney, indicated 93% passing, although the maximum dose discrepancies in this region were approximately 23%. Conclusion: Optimization within the target is routinely performed using MLC leaf-end characteristics. The fixed-jaw technique forces optimization of target coverage to utilize the penumbra profiles of the associated beam-defining jaw. If the profiles were collected using a common 0.125 cc ionization chamber, the resolution may be insufficient, resulting in a plan-vs.-delivery mismatch. It is recommended that high-resolution beam characteristics be considered when non-routine planning
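The 3%/3 mm gamma criterion used in the delivery analysis above can be sketched in one dimension. This is a simplified, globally normalized illustration (the function and test data are hypothetical, not from the study); clinical gamma software operates on 2D/3D dose grids with interpolation:

```python
import math

def gamma_1d(ref_pos, ref_dose, eval_pos, eval_dose,
             dose_tol=0.03, dta_mm=3.0):
    """1D gamma index with global dose normalization (simplified).

    For each reference point, gamma is the minimum over evaluated
    points of sqrt((dx/DTA)^2 + (dD/tol)^2); a point passes QA when
    gamma <= 1. Positions are in mm; doses in arbitrary units.
    """
    d_max = max(ref_dose)  # global normalization dose
    gammas = []
    for xr, dr in zip(ref_pos, ref_dose):
        g = min(
            math.sqrt((xe - xr) ** 2 / dta_mm ** 2 +
                      (de - dr) ** 2 / (dose_tol * d_max) ** 2)
            for xe, de in zip(eval_pos, eval_dose)
        )
        gammas.append(g)
    return gammas

def pass_rate(gammas):
    """Fraction of reference points with gamma <= 1."""
    return sum(g <= 1.0 for g in gammas) / len(gammas)
```

A plan-vs.-delivery mismatch of the kind described would show up here as a depressed pass rate along the fixed-jaw penumbra region.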
Original Workshop Proposal and Description
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Notes for Vis Requirements » Original Workshop Proposal and Description Original Workshop Proposal and Description Visualization Requirements for Computational Science and Engineering Applications Proposal for a DoE Workshop to Be Held at the Berkeley Marina Radisson Hotel, Berkeley, California, June 5, 2002 (date and location are tenative) Workshop Co-organizers: Bernd Hamann University of California-Davis Lawrence Berkeley Nat'l Lab. E. Wes Bethel Lawrence Berkeley Nat'l Lab.
E85 Optimized Engine through Boosting, Spray Optimized GDi, VCR...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Engine through Boosting, Spray Optimized GDi, VCR and Variable Valvetrain E85 Optimized ... (1.73 MB) More Documents & Publications E85 Optimized Engine Gasoline Ultra Fuel ...
Optimization of sodium fire suppression system
1985-02-01
This report describes the major areas of revision and optimization of the design of the CRBRP Sodium Fire Suppression System (SFSS) following the confirmatory testing program. The design temperatures for the SFSS were substantially increased after the Large Scale Sodium Fire Test (LSSFT), making the original design inadequate. A redesign of the main features was performed, in which experience from the construction of the LSSFT test article was also utilized for optimization. The design criteria, loads and load combinations, and the revised design are discussed.
Constant-complexity stochastic simulation algorithm with optimal binning
Sanft, Kevin R.; Othmer, Hans G.
2015-08-21
At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
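For contrast with the constant-complexity formulation described above, a minimal sketch of the original direct-method SSA is shown below (my own illustrative code, not the authors'). The linear scan over reaction channels when selecting which reaction fires is exactly the per-step cost, proportional to the number of channels, that the table-with-binning data structure replaces with constant-time selection:

```python
import random

def direct_ssa(x, reactions, rates, t_end, seed=1):
    """Gillespie's direct SSA for a well-mixed mass-action system.

    x: dict mapping species name -> copy number
    reactions: list of (reactants, products) stoichiometry dicts
    rates: rate constant for each reaction channel
    """
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        # mass-action propensities for each channel
        a = []
        for (reac, _), k in zip(reactions, rates):
            p = k
            for s, n in reac.items():
                for i in range(n):
                    p *= max(x[s] - i, 0)
            a.append(p)
        a0 = sum(a)
        if a0 == 0:
            break  # no reaction can fire
        t += rng.expovariate(a0)  # exponential waiting time
        # linear search for the firing channel: the O(R) step
        r = rng.random() * a0
        cum = 0.0
        for j, aj in enumerate(a):
            cum += aj
            if r < cum:
                break
        reac, prod = reactions[j]
        for s, n in reac.items():
            x[s] -= n
        for s, n in prod.items():
            x[s] = x.get(s, 0) + n
    return x
```

The binned formulation groups channels by propensity magnitude in a table so that channel selection and updates no longer require touching every channel, which is what yields constant complexity for weakly coupled networks.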
Optimal segmentation and packaging process
Kostelnik, K.M.; Meservey, R.H.; Landon, M.D.
1999-08-10
A process for improving packaging efficiency uses three-dimensional, computer-simulated models with various optimization algorithms to determine the optimal segmentation process and packaging configurations based on constraints including container limitations. The present invention is applied to a process for decontaminating, decommissioning (D and D), and remediating a nuclear facility involving the segmentation and packaging of contaminated items in waste containers in order to minimize the number of cuts, maximize packaging density, and reduce worker radiation exposure. A three-dimensional, computer-simulated facility model of the contaminated items is created. The contaminated items are differentiated. The optimal location, orientation, and sequence of the segmentation and packaging of the contaminated items are determined using the simulated model, the algorithms, and various constraints including container limitations. The cut locations and orientations are transposed to the simulated model. The contaminated items are then actually segmented and packaged. The segmentation and packaging may be simulated beforehand. In addition, the contaminated items may be cataloged and recorded. 3 figs.
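At its core, the packaging step is a constrained packing problem. As a rough 1-D stand-in only (a classic heuristic, not the patented 3-D simulation-based process), first-fit decreasing packs item volumes into capacity-limited containers:

```python
def first_fit_decreasing(volumes, capacity):
    """Pack item volumes into as few containers as possible (1-D heuristic):
    sort largest-first, place each item in the first bin with room."""
    bins = []  # each bin is a list of item volumes
    for v in sorted(volumes, reverse=True):
        for b in bins:
            if sum(b) + v <= capacity:
                b.append(v)
                break
        else:
            bins.append([v])  # no existing bin fits; open a new container
    return bins

# Hypothetical segmented-item volumes against a container capacity of 10
packed = first_fit_decreasing([7, 5, 4, 3, 1], capacity=10)
```

The real problem adds geometry, cut minimization, and dose constraints, which is why the patent couples the packing search to a 3-D facility model.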
CASL - Materials and Performance Optimization (MPO)
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Materials and Performance Optimization (MPO) The Materials and Performance Optimization (MPO) focus area within CASL has recently developed and released a 3D modeling framework known as MAMBA (MPO Advanced Model for Boron Analysis) to predict CRUD deposition on nuclear fuel rods. CRUD, which refers to Chalk River Unidentified Deposit, is predominately a nickel-ferrite spinel corrosion product that deposits on hot fuel clad surfaces in nuclear reactors. CRUD has a lower thermal conductivity than
Murphy, Edward
2012-11-20
The world around us is made of atoms. Did you ever wonder where these atoms came from? How was the gold in our jewelry, the carbon in our bodies, and the iron in our cars made? In this lecture, we will trace the origin of a gold atom from the Big Bang to the present day, and beyond. You will learn how the elements were forged in the nuclear furnaces inside stars, and how, when they die, these massive stars spread the elements into space. You will learn about the origin of the building blocks of matter in the Big Bang, and we will speculate on the future of the atoms around us today.
COLLOQUIUM: Chance, Necessity, and the Origins of Life | Princeton Plasma
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Physics Lab December 2, 2015, 4:15pm to 5:30pm Colloquia MBG Auditorium COLLOQUIUM: Chance, Necessity, and the Origins of Life Professor Robert Hazen Carnegie Institute of Washington & George Mason University Earth's 4.5 billion year history is a complex tale of deterministic physical and chemical processes, as well as 'frozen accidents'. Most models of life's origins also invoke chance and necessity. Recent research adds two important insights to this discussion. First, chance versus
Microscopic origin of volume modulus inflation
Cicoli, Michele; Muia, Francesco; Pedro, Francisco Gil
2015-12-21
High-scale string inflationary models are in well-known tension with low-energy supersymmetry. A promising solution involves models where the inflaton is the volume of the extra dimensions so that the gravitino mass relaxes from large values during inflation to smaller values today. We describe a possible microscopic origin of the scalar potential of volume modulus inflation by exploiting non-perturbative effects, string loop and higher derivative perturbative corrections to the supergravity effective action together with contributions from anti-branes and charged hidden matter fields. We also analyse the relation between the size of the flux superpotential and the position of the late-time minimum and the inflection point around which inflation takes place. We perform a detailed study of the inflationary dynamics for a single modulus and a two moduli case where we also analyse the sensitivity of the cosmological observables on the choice of initial conditions.
Schmidt, Andres; Law, Beverly E.; Göckede, Mathias; Hanson, Chad; Yang, Zhenlin; Conley, Stephen
2016-09-15
Here, the vast forests and natural areas of the Pacific Northwest comprise one of the most productive ecosystems in the northern hemisphere. The heterogeneous landscape of Oregon poses a particular challenge to ecosystem models. We present a framework using a scaling factor Bayesian inversion to improve the modeled atmosphere-biosphere exchange of carbon dioxide. Observations from 5 CO/CO2 towers, eddy covariance towers, and airborne campaigns were used to constrain the Community Land Model CLM4.5 simulated terrestrial CO2 exchange at a high spatial and temporal resolution (1/24°, 3-hourly). To balance aggregation errors and the degrees of freedom in the inverse modeling system, we applied an unsupervised clustering approach for the spatial structuring of our model domain. Data from flight campaigns were used to quantify the uncertainty introduced by the Lagrangian particle dispersion model that was applied for the inversions. The average annual statewide net ecosystem productivity (NEP) was increased by 32% to 29.7 TgC per year by assimilating the tropospheric mixing ratio data. The associated uncertainty was decreased by 28.4% to 29%, on average over the entire Oregon model domain with the lowest uncertainties of 11% in western Oregon. The largest differences between posterior and prior CO2 fluxes were found for the Coast Range ecoregion of Oregon that also exhibits the highest availability of atmospheric observations and associated footprints. In this area, covered by highly productive Douglas-fir forest, the differences between the prior and posterior estimate of NEP averaged 3.84 TgC per year during the study period from 2012 through 2014.
Spearmint - Bayesian Hyperparameter Optimization
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Optimization » Spearmint Spearmint - Bayesian Hyperparameter Optimization Spearmint is a Python Bayesian optimization codebase. Using Spearmint module load spearmint spearmint -c path/to/config.json config.json must have the following form: { "language" : "PYTHON", "experiment-name" : "any name you want", "polling-time" : 1, "resources" : { "my-machine" : { "scheduler" : "local", "max-concurrent"
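For contrast with what Spearmint automates, a naive random-search baseline over the same kind of hyperparameter box looks like this (the objective and parameter names are hypothetical; Spearmint replaces the blind sampling with a Gaussian-process model of the objective that proposes promising points):

```python
import random

def random_search(objective, space, n_trials=50, seed=1):
    """Baseline hyperparameter search: sample uniformly, evaluate, keep the best."""
    rng = random.Random(seed)
    best_params, best_val = None, float("inf")
    for _ in range(n_trials):
        params = {name: rng.uniform(lo, hi) for name, (lo, hi) in space.items()}
        val = objective(params)
        if val < best_val:
            best_params, best_val = params, val
    return best_params, best_val

# Hypothetical objective: a quadratic bowl standing in for a validation loss
space = {"learning_rate": (0.001, 1.0), "momentum": (0.0, 1.0)}
best, loss = random_search(
    lambda p: (p["learning_rate"] - 0.1) ** 2 + (p["momentum"] - 0.9) ** 2, space
)
```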
Cold Climates Heat Pump Design Optimization
Abdelaziz, Omar (ORNL); Shen, Bo (ORNL)
2012-01-01
Heat pumps provide an efficient heating method; however, they suffer from severe capacity and performance degradation at low ambient conditions, which has deterred market penetration in cold climates. There is a continuing effort to find an efficient air-source cold climate heat pump that maintains acceptable capacity and performance at low ambient conditions. Systematic optimization techniques provide a reliable approach for the design of such systems. This paper presents a step-by-step approach for the design optimization of cold climate heat pumps. We first describe the optimization problem: objective function, constraints, and design space. We then illustrate how to perform this design optimization using an open-source, publicly available optimization toolbox. The response of the heat pump design was evaluated using a validated component-based vapor compression model, which was treated as a black box within the optimization framework. Optimum designs for different system configurations are presented, and these optimum results were further analyzed to understand the performance tradeoffs and selection criteria. The paper ends with a discussion of the use of systematic optimization for cold climate heat pump design.
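The black-box treatment described above can be sketched with a toy stand-in: two design variables, a performance model the optimizer never looks inside, and a low-ambient capacity constraint. The expressions and numbers below are invented for illustration and are not the validated vapor-compression model used in the paper:

```python
import itertools

def cop_model(ua_evap, disp):
    """Stand-in black-box performance model (hypothetical): returns
    (COP, heating capacity in kW) for a candidate design."""
    cop = 3.0 - 0.8 * (disp - 1.0) ** 2 - 0.5 * (ua_evap - 2.0) ** 2
    capacity = 4.0 * disp + 1.5 * ua_evap
    return cop, capacity

def grid_design_search(min_capacity=9.0):
    """Pick the design maximizing COP subject to a low-ambient capacity floor."""
    best = None
    for ua, disp in itertools.product(
        [x / 10 for x in range(10, 31)],   # evaporator UA, 1.0 .. 3.0
        [x / 10 for x in range(5, 21)],    # compressor displacement, 0.5 .. 2.0
    ):
        cop, cap = cop_model(ua, disp)
        if cap >= min_capacity and (best is None or cop > best[0]):
            best = (cop, ua, disp)
    return best
```

The capacity constraint binds here (the unconstrained COP optimum is infeasible), which is exactly the tradeoff the paper's optimization has to negotiate.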
Gursu, S.; Veziroglu, T.N. (Clean Energy Research Inst.); Sherif, S.A. (Dept. of Mechanical Engineering); Sheffield, J.W. (Dept. of Mechanical and Aerospace Engineering and Engineering Mechanics)
1993-09-01
Three models capable of predicting the phenomena of thermal stratification and self-pressurization in liquid hydrogen storage systems were presented in Part 1 of this paper. In order to evaluate the performance of the different pressure rise models, the results are compared with experimental data obtained from different tests. The set of experimental data obtained from the Plum Brook B-2 test at the NASA-Lewis Research Center represents a very accurately instrumented and closely controlled experiment performed on a liquid hydrogen storage tank. Another set of data is taken from an experimental study, also conducted at the NASA-Lewis Research Center, to obtain a correlating parameter relating the rate of pressure rise to the volume of a spherical liquid hydrogen tank. In this paper, the model results are presented and discussed and general conclusions are reached.
Hopper Performance and Optimization
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Performance and Optimization Compiler Comparisons Comparison of different compilers with different options on several benchmarks. Read More Using OpenMP Effectively...
Optimize Parallel Pumping Systems
This tip sheet describes how to optimize the performance of multiple pumps operating continuously as part of a parallel pumping system.
Oneida Tribe of Indians of Wisconsin- 2011 Energy Optimization Project
The creation of this Oneida Nation Energy Optimization (ONEO) model is the next stage in the living document known as the Oneida Energy Security Plan.
Optimal design of reverse osmosis module networks
Maskan, F.; Wiley, D.E.; Johnston, L.P.M.; Clements, D.J.
2000-05-01
The structure of individual reverse osmosis modules, the configuration of the module network, and the operating conditions were optimized for seawater and brackish water desalination. The system model included simple mathematical equations to predict the performance of the reverse osmosis modules. The optimization problem was formulated as a constrained multivariable nonlinear optimization. The objective function was the annual profit for the system, consisting of the profit obtained from the permeate, the capital cost for the process units, and the operating costs associated with energy consumption and maintenance. The optimization of several dual-stage reverse osmosis systems was investigated and the results compared. It was found that the optimal network designs are the ones that produce the most permeate. It may be possible to achieve economic improvements by refining current membrane module designs and their operating pressures.
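The shape of the objective (permeate revenue minus energy and annualized capital costs over a feasible operating window) can be illustrated with a hypothetical single-variable version; every coefficient below is invented and much simpler than the paper's module-network model:

```python
def annual_profit(pressure_bar):
    """Hypothetical single-stage stand-in for the paper's profit objective:
    permeate revenue minus pumping energy and annualized capital. All numbers
    are invented for illustration."""
    x = pressure_bar - 30.0                  # driving pressure above osmotic, bar
    permeate = 1000.0 * x - 20.0 * x ** 2    # m3/yr; flux gains taper off
    energy_cost = 2.8 * pressure_bar         # pump energy, $/yr
    return 0.5 * permeate - energy_cost - 5000.0

# Coarse scan of the feasible operating window (31..80 bar)
best_p = max(range(31, 81), key=annual_profit)
```

Even in this toy version the optimum sits where marginal permeate revenue balances marginal energy cost, the same tradeoff the dual-stage network optimization resolves.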
POET: Parameterized Optimization for Empirical Tuning
Yi, Q; Seymour, K; You, H; Vuduc, R; Quinlan, D
2007-01-29
The excessive complexity of both machine architectures and applications has made it difficult for compilers to statically model and predict application behavior. This observation motivates the recent interest in performance tuning using empirical techniques. We present a new embedded scripting language, POET (Parameterized Optimization for Empirical Tuning), for parameterizing complex code transformations so that they can be empirically tuned. The POET language aims to significantly improve the generality, flexibility, and efficiency of existing empirical tuning systems. We have used the language to parameterize and to empirically tune three loop optimizations (interchange, blocking, and unrolling) for two linear algebra kernels. We show experimentally that the time required to tune these optimizations using POET, which does not require any program analysis, is significantly shorter than when using a full compiler-based source-code optimizer that performs sophisticated program analysis and optimizations.
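The empirical-tuning loop POET supports (generate parameterized variants, run and time each, keep the fastest) can be sketched without any compiler machinery. The kernel and candidate block sizes below are illustrative, not POET syntax:

```python
import time

def blocked_sum_of_squares(n, block):
    """Toy parameterized kernel: the block size is the tunable parameter."""
    total = 0
    for start in range(0, n, block):
        for i in range(start, min(start + block, n)):
            total += i * i
    return total

def empirically_tune(candidates, n=20000):
    """Empirical tuning in miniature: time each variant and keep the fastest;
    no program analysis is needed, only end-to-end measurement."""
    reference = sum(i * i for i in range(n))
    timings = {}
    for block in candidates:
        t0 = time.perf_counter()
        result = blocked_sum_of_squares(n, block)
        timings[block] = time.perf_counter() - t0
        assert result == reference  # every variant must compute the same answer
    return min(timings, key=timings.get), timings

best_block, timings = empirically_tune([16, 64, 256, 1024])
```

Which block size wins depends on the machine, which is precisely why the choice is made empirically rather than by a static model.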
Economic and environmental optimization of waste treatment
Münster, M.; Ravn, H.; Hedegaard, K.; Juul, N.; Ljunggren Söderman, M.
2015-04-15
Highlights:
• Optimizing waste treatment by incorporating LCA methodology.
• Applying different objectives (minimizing costs or GHG emissions).
• Prioritizing multiple objectives given different weights.
• Optimum depends on objective and assumed displaced electricity production.
Abstract: This article presents the new systems engineering optimization model, OptiWaste, which incorporates a life cycle assessment (LCA) methodology and captures important characteristics of waste management systems. As part of the optimization, the model identifies the most attractive waste management options. The model makes it possible to apply different optimization objectives, such as minimizing costs or greenhouse gas emissions, or to prioritize several objectives given different weights. A simple illustrative case is analysed, covering alternative treatments of one tonne of residual household waste: incineration of the full amount, or sorting out organic waste for biogas production used either for combined heat and power generation or as fuel in vehicles. The case study illustrates that the optimal solution depends on the objective and on assumptions regarding the background system, illustrated here with different assumptions regarding displaced electricity production. The article shows that it is feasible to combine LCA methodology with optimization. Furthermore, it highlights the need for including the integrated waste and energy system in the model.
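Weighted prioritization of multiple objectives, as described above, reduces in its simplest form to minimizing a weighted sum of normalized objective values over the candidate treatments. The per-tonne figures below are hypothetical stand-ins, not OptiWaste data:

```python
def weighted_choice(options, w_cost, w_ghg):
    """Pick the treatment minimizing a weighted sum of normalized objectives."""
    max_cost = max(o["cost"] for o in options.values())
    max_ghg = max(o["ghg"] for o in options.values())

    def score(o):
        # Normalize each objective to [0, 1] so the weights are comparable
        return w_cost * o["cost"] / max_cost + w_ghg * o["ghg"] / max_ghg

    return min(options, key=lambda name: score(options[name]))

# Hypothetical per-tonne cost ($) and GHG (kg CO2-eq) for the three treatments
options = {
    "incineration":   {"cost": 60.0, "ghg": 250.0},
    "biogas_chp":     {"cost": 85.0, "ghg": 120.0},
    "biogas_vehicle": {"cost": 95.0, "ghg": 90.0},
}
```

Shifting the weights flips the winner, mirroring the paper's finding that the optimum depends on the chosen objective.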
Blasi, Pasquale (INAF/Arcetri-Italy and Fermilab)
2016-07-12
Cosmic rays reach the Earth from space with energies of up to more than 10^20 eV, carrying information on the most powerful particle accelerators that Nature has been able to assemble. Understanding where and how cosmic rays originate has required almost a century of investigation, and, although the last word is not yet written, recent observations and theory now seem to fit together to provide us with a global picture of the origin of cosmic rays of unprecedented clarity. Here we will describe what we have learned from recent observations of astrophysical sources (such as supernova remnants and active galaxies) and illustrate what these observations tell us about the physics of particle acceleration and transport. We will also discuss the "end" of the Galactic cosmic ray spectrum, which bridges our attention towards the so-called ultra-high-energy cosmic rays (UHECRs). At ~10^20 eV the gyration scale of cosmic rays in cosmic magnetic fields becomes large enough to allow us to point back to their sources, thereby allowing us to perform "cosmic ray astronomy", as confirmed by the recent results obtained with the Pierre Auger Observatory. We will discuss the implications of these observations for the understanding of UHECRs, as well as some questions which will likely remain unanswered and will be the target of the next generation of cosmic ray experiments.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Optimization Performance and Optimization Running Jobs Efficiently This page defines job efficiency and how to measure the efficiency of your jobs. PDSF IO Monitoring Plots of continuous IO monitoring for the eliza file systems and project.
Crowe, B.; Yucel, V.; Rawlinson, S.; Black, P.; Carilli, J.; DiSanza, F.
2002-02-25
The U.S. Department of Energy (DOE), National Nuclear Security Administration of the Nevada Operations Office (NNSA/NV) operates and maintains two active facilities on the Nevada Test Site (NTS) that dispose of defense-generated low-level radioactive waste (LLW), mixed radioactive waste, and "classified waste" in shallow trenches and pits. The operation and maintenance of the LLW disposal sites are self-regulated by the DOE under DOE Order 435.1. This Order requires formal review of a performance assessment (PA) and composite analysis (CA; assessment of all interacting radiological sources) for each LLW disposal system, followed by an active maintenance program that extends through and beyond the site closure program. The Nevada disposal facilities continue to receive NTS-generated LLW and defense-generated LLW from across the DOE complex. The PA/CAs for the sites have been conditionally approved, and the facilities are now under a formal maintenance program that requires testing of conceptual models, quantifying and attempting to reduce uncertainty, and implementing confirmatory and long-term background monitoring, all leading to eventual closure of the disposal sites. To streamline and reduce the cost of the maintenance program, the NNSA/NV is converting the deterministic PA/CAs to probabilistic models using GoldSim, a probabilistic simulation computer code. The output of the probabilistic models will provide expanded information supporting long-term decision objectives for the NTS disposal sites.
dynamic-origin-destination-matrix
U.S. Department of Energy (DOE) - all webpages (Extended Search)
His research interests span a wide range of topics including transportation modeling and simulation, intelligent transportation systems, artificial intelligence applications in ...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Sandia Labs releases wavelet variability model (WVM) Modeling, News, Photovoltaic, Solar ...
Energy Science and Technology Software Center
2014-05-13
ROL provides interfaces to and implementations of algorithms for gradient-based unconstrained and constrained optimization. ROL can be used to optimize the response of any client simulation code that evaluates scalar-valued response functions. If the client code can provide gradient information for the response function, ROL will take advantage of it, resulting in faster runtimes. ROL's interfaces are matrix-free; in other words, ROL only uses evaluations of scalar-valued and vector-valued functions. ROL can be used to solve optimal design problems and inverse problems based on a variety of simulation software.
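The matrix-free contract described above (the optimizer sees only function and gradient evaluations, never a Hessian matrix) can be illustrated with plain gradient descent. This is a sketch of the interface style, not one of ROL's actual algorithms, and the "client simulation response" is a made-up quadratic:

```python
def minimize_matrix_free(value, gradient, x0, step=0.1, tol=1e-8, max_iter=10000):
    """Gradient descent that touches the problem only through value(x) and
    gradient(x) callbacks, mirroring a matrix-free optimizer interface."""
    x = list(x0)
    for _ in range(max_iter):
        g = gradient(x)
        if sum(gi * gi for gi in g) < tol:   # stop when the gradient is tiny
            break
        x = [xi - step * gi for xi, gi in zip(x, g)]
    return x

# Hypothetical client "simulation response": f(x, y) = (x-1)^2 + (y+2)^2
xmin = minimize_matrix_free(
    lambda x: (x[0] - 1) ** 2 + (x[1] + 2) ** 2,
    lambda x: [2 * (x[0] - 1), 2 * (x[1] + 2)],
    [0.0, 0.0],
)
```

Because only callbacks are required, the same driver works whether the response comes from a closed-form expression or an expensive simulation.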
Nature and Origin of the Cuprate Pseudogap
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Nature and Origin of the Cuprate Pseudogap Nature and Origin of the Cuprate Pseudogap Print Wednesday, 30 May 2007 00:00 The workings of high-temperature superconductive (HTSC)...
Synthesis of optimal adsorptive carbon capture processes.
Chang, Y.; Cozad, A.; Kim, H.; Lee, A.; Vouzis, P.; Konda, M.; Simon, A.; Sahinidis, N.; Miller, D.
2011-01-01
Solid sorbent carbon capture systems have the potential to require significantly lower regeneration energy compared to aqueous monoethanolamine (MEA) systems. To date, the majority of work on solid sorbents has focused on developing the sorbent materials themselves. In order to advance these technologies, it is necessary to design systems that can exploit the full potential and unique characteristics of these materials. The Department of Energy (DOE) recently initiated the Carbon Capture Simulation Initiative (CCSI) to develop computational tools to accelerate the commercialization of carbon capture technology. Solid sorbents are the first Industry Challenge Problem considered under this initiative. An early goal of the initiative is to demonstrate a superstructure-based framework to synthesize an optimal solid sorbent carbon capture process. For a given solid sorbent, there are a number of potential reactors and reactor configurations consisting of various fluidized bed reactors, moving bed reactors, and fixed bed reactors. Detailed process models for these reactors have been modeled using Aspen Custom Modeler; however, such models are computationally intractable for large optimization-based process synthesis. Thus, in order to facilitate the use of these models for process synthesis, we have developed an approach for generating simple algebraic surrogate models that can be used in an optimization formulation. This presentation will describe the superstructure formulation which uses these surrogate models to choose among various process alternatives and will describe the resulting optimal process configuration.
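The surrogate idea (replace an intractable detailed model with a cheap algebraic fit, then optimize the fit) in its simplest possible form: interpolate three samples of an expensive response with a quadratic and minimize the quadratic analytically. The "expensive" response here is a hypothetical stand-in, far simpler than an Aspen reactor model:

```python
def expensive_response(x):
    """Stand-in for a detailed reactor simulation (hypothetical, cheap here)."""
    return (x - 0.6) ** 2 + 0.05 * x

def quadratic_surrogate_minimizer(f, xs):
    """Fit a quadratic through three samples of f (Newton divided differences)
    and return the minimizer of that algebraic surrogate."""
    (x0, x1, x2), (y0, y1, y2) = xs, [f(x) for x in xs]
    d1 = (y1 - y0) / (x1 - x0)
    d2 = ((y2 - y1) / (x2 - x1) - d1) / (x2 - x0)
    # Newton form expands to y = a*x^2 + b*x + const with:
    a, b = d2, d1 - d2 * (x0 + x1)
    return -b / (2 * a)                       # vertex of the surrogate

x_opt = quadratic_surrogate_minimizer(expensive_response, (0.0, 0.5, 1.0))
```

Real surrogate workflows add validation points and refitting; here the response happens to be exactly quadratic, so three samples recover it exactly.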
Library for Nonlinear Optimization
Energy Science and Technology Software Center
2001-10-09
OPT++ is a C++ object-oriented library for nonlinear optimization. This incorporates an improved implementation of an existing capability and two new algorithmic capabilities based on existing journal articles and freely available software.
TOOLKIT FOR ADVANCED OPTIMIZATION
Energy Science and Technology Software Center
2000-10-13
The TAO project focuses on the development of software for large-scale optimization problems. TAO uses an object-oriented design to create a flexible toolkit with a strong emphasis on the reuse of external tools where appropriate. Our design enables bi-directional connection to lower-level linear algebra support (for example, parallel sparse matrix data structures) as well as higher-level application frameworks. The Toolkit for Advanced Optimization (TAO) is aimed at the solution of large-scale optimization problems on high-performance architectures. Our main goals are portability, performance, scalable parallelism, and an interface independent of the architecture. TAO is suitable for both single-processor and massively parallel architectures. The current version of TAO has algorithms for unconstrained and bound-constrained optimization.
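Bound-constrained optimization of the kind mentioned above can be illustrated by projected gradient descent, the simplest member of that algorithm family (a sketch, not TAO's implementation, and the objective is a made-up quadratic):

```python
def projected_gradient(grad, lower, upper, x0, step=0.1, iters=2000):
    """Bound-constrained minimization: take a gradient step, then project
    (clip) each coordinate back into its [lower, upper] interval."""
    x = list(x0)
    for _ in range(iters):
        x = [
            min(max(xi - step * gi, lo), hi)   # gradient step, then clip
            for xi, gi, lo, hi in zip(x, grad(x), lower, upper)
        ]
    return x

# Minimize (x-3)^2 + (y+1)^2 subject to 0 <= x <= 2 and 0 <= y <= 2.
# The unconstrained minimum (3, -1) is infeasible, so the iterates should
# settle on the nearest point of the box, (2, 0).
sol = projected_gradient(
    lambda x: [2 * (x[0] - 3), 2 * (x[1] + 1)],
    lower=[0.0, 0.0], upper=[2.0, 2.0], x0=[1.0, 1.0],
)
```

TAO's bound-constrained solvers are far more sophisticated (active-set and Newton-type methods), but the projection step captures how the bounds enter.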
Kawase, Mitsuhiro
2009-11-22
The zipped file contains a directory of data and routines used in the NNMREC turbine depth optimization study (Kawase et al., 2011), and calculation results thereof. For further info, please contact Mitsuhiro Kawase at kawase@uw.edu. Reference: Mitsuhiro Kawase, Patricia Beba, and Brian Fabien (2011), Finding an Optimal Placement Depth for a Tidal In-Stream Conversion Device in an Energetic, Baroclinic Tidal Channel, NNMREC Technical Report.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Optimization Performance and Optimization Compiler Comparisons Comparison of different compilers with different options on several benchmarks. Using OpenMP Effectively Performance implications and case studies of codes combining MPI and OpenMP Reordering MPI Ranks Reordering MPI ranks can result in improved application performance depending on the communication patterns of the application. Application Performance Variability on Hopper How an application is
Optimized Triple-Junction Solar Cells Using Inverted Metamorphic Approach (Presentation)
Geisz, J. F.
2008-11-01
Record efficiencies were achieved with triple-junction inverted metamorphic designs; modeling is useful for optimization, and operating conditions should be considered before choosing a design.
Dalziel, I.W.D. (Inst. for Geophysics)
1992-01-01
Laurentia, the Precambrian core of the North American continent, is surrounded by late Precambrian rift systems and therefore constitutes a "suspect terrane". A geometric and geological fit can be achieved between the Atlantic margin of Laurentia and the Pacific margin of the Gondwana craton. The enigmatic Arequipa massif along the southern Peruvian coast, which yields ca. 2.0 Ga radiometric ages, is juxtaposed with the Makkovik-Ketilidian province of the same age range in Labrador and southern Greenland. The Grenville belt continues beneath the ensialic Andes of the present day to join up with the 1.3--1.0 Ga San Ignacio and Sunsas-Aguapei orogens of the Transamazonian craton. Together with the recent identification of possible continuations of the Grenville orogen in East Antarctica and of the Taconic Appalachians in southern South America, the fit supports suggestions that Laurentia originated between East Antarctica-Australia and embryonic South America prior to the opening of the Pacific Ocean basin and the amalgamation of the Gondwana Cordilleran and Appalachian margins. This implies that there may have been two supercontinents during the Neoproterozoic, before and after the opening of the Pacific Ocean. As Laurentia and Gondwana appear to have collided on at least two occasions during the Paleozoic, this scenario calls into question the existence of so-called supercontinental cycles. The Arica bight of the present day may reflect a primary reentrant in the South American continental margin that controlled subduction processes along the Andean margin and eventually led to uplift of the Altiplano.
Optimization and Control of Electric Power Systems
Lesieutre, Bernard C.; Molzahn, Daniel K.
2014-10-17
The analysis and optimization needs for planning and operation of the electric power system are challenging due to the scale and the form of model representations. The connected network spans the continent and the mathematical models are inherently nonlinear. Traditionally, computational limits have necessitated the use of very simplified models for grid analysis, and this has resulted in either less secure operation, or less efficient operation, or both. The research conducted in this project advances techniques for power system optimization problems that will enhance reliable and efficient operation. The results of this work appear in numerous publications and address different application problems including optimal power flow (OPF), unit commitment, demand response, reliability margins, planning, and transmission expansion, as well as general tools and algorithms.
NUG Single Node Optimization Presentation.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Cray Opteron systems, PGI. GNU compilers were on Franklin, but at that time GNU Fortran optimization was poor. Next came Pathscale because of superior optimization. ...
Forecourt and Gas Infrastructure Optimization | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
and Gas Infrastructure Optimization Forecourt and Gas Infrastructure Optimization Presentation by Bruce Kelly of Nexant at the Joint Meeting on Hydrogen Delivery Modeling and Analysis, May 8-9, 2007 deliv_analysis_kelly.pdf (113.91 KB) More Documents & Publications H2A Hydrogen Delivery Infrastructure Analysis Models and Conventional Pathway Options Analysis Results - Interim Report H2A Delivery Components Model and Analysis Hydrogen Delivery Analysis Models
Quasivelocities and Optimal Control for underactuated Mechanical Systems
Colombo, L.; Martin de Diego, D.
2010-07-28
This paper is concerned with the application of the theory of quasivelocities to optimal control of underactuated mechanical systems. Using this theory, we convert the original problem into a variational second-order Lagrangian system subject to constraints. The equations of motion are geometrically derived using an adaptation of the classical Skinner and Rusk formalism.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Modelers at the CRF are developing high-fidelity simulation tools for engine combustion and detailed micro-kinetic, surface chemistry modeling tools for catalyst-based exhaust ...
Roughness Optimization at High Modes for GDP CHx Microshells
Theobald, M.; Dumay, B.; Chicanne, C.; Barnouin, J.; Legaie, O.; Baclet, P.
2004-03-15
For the "Megajoule" Laser (LMJ) facility of the CEA, amorphous hydrogenated carbon (a-C:H) is the nominal ablator to be used for inertial confinement fusion (ICF) experiments. These capsules contain the fusible deuterium-tritium mixture needed to achieve ignition. Coatings are prepared by glow discharge polymerization (GDP) with trans-2-butene and hydrogen, and the film properties have been investigated. Laser fusion targets must have optimized characteristics: a diameter of about 2.4 mm for LMJ targets, a thickness up to 175 µm, a sphericity and a thickness concentricity better than 99%, and outer and inner roughnesses lower than 20 nm at high modes. The surface finish of these laser fusion targets must be extremely smooth to minimize hydrodynamic instabilities. Movchan and Demchishin, and later Thornton, introduced a structure zone model (SZM) based on both evaporated and sputtered metals. They investigated the influence of base temperature and sputtering gas pressure on the structure and properties of thick polycrystalline coatings of nickel, titanium, tungsten, and aluminum oxide. An original cross-sectional analysis by atomic force microscopy (AFM) allows characterization of amorphous materials and permits an analogy between the amorphous GDP material and the existing model (SZM). The purpose of this work is to understand the relationship between the deposition parameters, the growing structures, and the surface roughness. The coating structure as a function of deposition parameters was first studied on plane silicon substrates and then optimized on PAMS shells. By adjusting the coating parameters, the structures are modified and, in some cases, the high-mode roughness decreases dramatically.
Energy Science and Technology Software Center
1998-07-01
GenOpt is a generic optimization program for nonlinear, constrained optimization. For evaluating the objective function, any simulation program that communicates over text files can be coupled to GenOpt without code modification. No analytic properties of the objective function are used by GenOpt. Optimization algorithms and numerical methods can be implemented in a library and shared among users. GenOpt offers an interface between the optimization algorithm and its kernel to make the implementation of new algorithms fast and easy. Different algorithms for constrained and unconstrained minimization can be added to the library. Algorithms for approximating derivatives and performing line search will be implemented. The objective function is evaluated as a black-box function by an external simulation program. The kernel of GenOpt deals with data I/O, result storage and reporting, the interface to the external simulation program, and error handling. An abstract optimization class offers methods to interface the GenOpt kernel and the optimization algorithm library.
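The text-file coupling described above can be mimicked in miniature: the "simulation program" below is a hypothetical three-line script, and the driver only writes an input file, runs the program as an external process, and parses its output file. This is a sketch of the coupling pattern, not GenOpt's file format or search algorithms:

```python
import os
import subprocess
import sys
import tempfile

SIM = '''
import sys
# Toy "simulation program": reads x from the input file, writes f(x).
x = float(open(sys.argv[1]).read())
open(sys.argv[2], "w").write(str((x - 2.0) ** 2))
'''

def evaluate(x, sim_path, workdir):
    """One objective evaluation via text files: write input, run the external
    program unmodified, parse its output."""
    fin = os.path.join(workdir, "in.txt")
    fout = os.path.join(workdir, "out.txt")
    open(fin, "w").write(str(x))
    subprocess.run([sys.executable, sim_path, fin, fout], check=True)
    return float(open(fout).read())

with tempfile.TemporaryDirectory() as d:
    sim = os.path.join(d, "sim.py")
    open(sim, "w").write(SIM)
    # Coarse scan standing in for the optimizer's search algorithm
    best = min((x * 0.5 for x in range(9)), key=lambda x: evaluate(x, sim, d))
```

Because the coupling is purely through files and process invocation, the simulator needs no code changes, which is the point of the design.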
Asynchronous parallel pattern search for nonlinear optimization
P. D. Hough; T. G. Kolda; V. J. Torczon
2000-01-01
Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10--50) and by expensive objective function evaluations, such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly coupled parallel machines, is not well suited to the more heterogeneous, loosely coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
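The synchronous pattern search that the asynchronous method generalizes can be written down directly. Asynchrony and fault tolerance are omitted here, and the objective is an arbitrary smooth test function, so this is the baseline PPS idea rather than the paper's method:

```python
def compass_search(f, x0, step=1.0, tol=1e-6):
    """Synchronous compass (pattern) search: poll +/- each coordinate
    direction, move to any improving point, and halve the step size when
    no poll point improves."""
    x, fx = list(x0), f(x0)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (+step, -step):
                trial = list(x)
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step /= 2.0            # contract the pattern and poll again
    return x, fx

xmin, fmin = compass_search(lambda v: (v[0] - 1.5) ** 2 + (v[1] + 0.5) ** 2, [0.0, 0.0])
```

In the parallel variants, the poll points are evaluated concurrently; the asynchronous version additionally acts on results as they arrive instead of waiting for the slowest evaluation.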
Methodology for optimizing the development and operation of gas storage fields
Mercer, J.C.; Ammer, J.R.; Mroz, T.H.
1995-04-01
The Morgantown Energy Technology Center is pursuing the development of a methodology that uses geologic modeling and reservoir simulation for optimizing the development and operation of gas storage fields. Several Cooperative Research and Development Agreements (CRADAs) will serve as the vehicle to implement this product. CRADAs have been signed with National Fuel Gas and Equitrans, Inc. A geologic model is currently being developed for the Equitrans CRADA. Results from the CRADA with National Fuel Gas are discussed here. The first phase of the CRADA, based on original well data, was completed last year and reported at the 1993 Natural Gas RD&D Contractors Review Meeting. Phase 2 analysis was completed based on additional core and geophysical well log data obtained during a deepening/relogging program conducted by the storage operator. Good matches of wellhead pressure, within 10 percent, were obtained using a numerical simulator to history-match 2 1/2 injection/withdrawal cycles.
McMordie Stoughton, Kate; Duan, Xiaoli; Wendel, Emily M.
2013-08-26
This technology evaluation was prepared by Pacific Northwest National Laboratory on behalf of the U.S. Department of Energy’s Federal Energy Management Program (FEMP). The technology evaluation assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. This evaluation provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. The evaluation is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system optimization options, enabling them to make informed decisions during the system design process for either new projects or recommissioning of existing equipment. This evaluation is focused on commercial-sized RO systems generally treating more than 80 gallons per hour.
Laser scribe optimization study. Final report
Wannamaker, A.L.
1996-09-01
The laser scribe characterization/optimization project was initiated to better understand what factors influence response variables of the laser marking process. The laser marking system is utilized to indelibly identify weapon system components. Many components have limited field life, and traceability to production origin is critical. In many cases, the reliability of the weapon system and the safety of the users can be attributed to individual and subassembly component fabrication processes. Laser beam penetration of the substrate material may affect product function. The design agency for the DOE had requested that Federal Manufacturing and Technologies characterize the laser marking process and implement controls on critical process parameters.
Lidar arc scan uncertainty reduction through scanning geometry optimization
Wang, H.; Barthelmie, R. J.; Pryor, S. C.; Brown, G.
2015-10-07
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty scales with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30 % of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span and lowering the number of beams per arc scan. Large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation when arc scans are used for wind resource assessment.
Distributed Optimization System
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2004-11-30
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
Terascale Optimal PDE Simulations
David Keyes
2009-07-28
The Terascale Optimal PDE Solvers (TOPS) Integrated Software Infrastructure Center (ISIC) was created to develop and implement algorithms and support scientific investigations performed by DOE-sponsored researchers. These simulations often involve the solution of partial differential equations (PDEs) on terascale computers. The TOPS Center researched, developed and deployed an integrated toolkit of open-source, optimal complexity solvers for the nonlinear partial differential equations that arise in many DOE application areas, including fusion, accelerator design, global climate change and reactive chemistry. The algorithms created as part of this project were also designed to reduce current computational bottlenecks by orders of magnitude on terascale computers, enabling scientific simulation on a scale heretofore impossible.
Wasserman, H.; Lubeck, O.M.; Luo, Y.; Bassetti, F.
1997-11-01
In this paper the authors compare single processor performance of the SGI Origin and PowerChallenge and utilize a previously reported performance model for hierarchical memory systems to explain the results. Both the Origin and PowerChallenge use the same microprocessor (MIPS R10000) but have significant differences in their memory subsystems. Their memory model includes the effect of overlap between CPU and memory operations and allows them to infer the individual contributions of all three improvements in the Origin's memory architecture and relate the effectiveness of each improvement to application characteristics.
Modeling and Optimization of Superhydrophobic Condensation (Journal...
Office of Scientific and Technical Information (OSTI)
Research Org: Energy Frontier Research Centers (EFRC); Solid-State Solar-Thermal Energy Conversion Center (S3TEC) Sponsoring Org: USDOE SC Office of Basic Energy Sciences (SC-22) ...
Penser Original Contract (EM0003383) - Hanford Site
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Penser Original Contract (EM0003383) ... Operations Plan (PDF) J-6 List of Applicable DOE Directives and Contractor Requirements Documents (PDF) ...
Nature and Origin of the Cuprate Pseudogap
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Nature and Origin of the Cuprate Pseudogap Print The workings of high-temperature superconductive (HTSC) materials are a mystery wrapped in an enigma. However, a team of...
Nature and Origin of the Cuprate Pseudogap
U.S. Department of Energy (DOE) - all webpages (Extended Search)
an energy gap is already present at the Fermi surface in the normal, nonsuperconductive, state. This is known as a pseudogap, and its origin and relationship to superconductivity...
Development of a Dynamic DOE Calibration Model
A dynamic heavy-duty diesel engine model was developed. The model can be applied to calibration and control system optimization.
On the origin of porphyritic chondrules
Blander, M.; Unger, L.; Pelton, A.; Ericksson, G.
1994-05-01
A computer program for the complex equilibria in a cooling nebular gas was used to explore a possible origin of porphyritic chondrules, the major class of chondrules in chondritic meteorites. It uses a method of accurately calculating the thermodynamic properties of molten multicomponent aluminosilicates, which deduces the silicate condensates vs temperature and pressure of a nebular gas. This program is coupled with a chemical equilibrium algorithm for systems with at least 1000 chemical species; it has a data base of over 5000 solid, liquid, and gaseous species. Results are metastable subcooled liquid aluminosilicates with compositions resembling types IA and II porphyritic chondrules at two different temperatures at any pressure between 10^-2 and 1 (or possibly 10^-3 to 5) atm. The different types of chondrules (types I, II, III) could have been produced from the same gas and do not need a different gas for each apparent oxidation state; thus, the difficulty of current models for making porphyritic chondrules by reheating different solids to just below their liquidus temperatures in different locations is avoided. Initiation of a stage of crystallization just below the liquidus is part of the natural crystallization (recalescence) process from metastable subcooled liquids and does not require an improbable heating mechanism.
Optimizing multiphase aquifer remediation using ITOUGH2
Finsterle, S.; Pruess, K.
1994-06-01
The T2VOC computer model for simulating the transport of organic chemical contaminants in non-isothermal multiphase systems has been coupled to the ITOUGH2 code which solves parameter optimization problems. This allows one to use nonlinear programming and simulated annealing techniques to solve groundwater management problems, i.e. the optimization of multiphase aquifer remediation. This report contains three illustrative examples to demonstrate the optimization of remediation operations by means of simulation-minimization techniques. The code iteratively determines an optimal remediation strategy (e.g. pumping schedule) which minimizes, for instance, pumping and energy costs, the time for cleanup, and residual contamination. While minimizing the objective function is straightforward, the relative weighting of different performance measures--e.g. pumping costs versus cleanup time versus residual contaminant content--is subject to a management decision process. The intended audience of this report is someone who is familiar with numerical modeling of multiphase flow of contaminants, and who might actually use T2VOC in conjunction with ITOUGH2 to optimize the design of aquifer remediation operations.
COOPR: A COmmon Optimization Python Repository v. 1.0
Energy Science and Technology Software Center
2008-08-14
Coopr integrates Python packages for defining optimizers, modeling optimization applications, and managing computational experiments. A major driver for Coopr development is the Pyomo package that can be used to define abstract problems, create concrete problem instances, and solve these instances with standard solvers. Other Coopr packages include EXACT, a framework for managing computational experiments, SUCASA, a tool for customizing integer programming solvers, and OPT, a generic optimization interface.
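The abstract-problem/concrete-instance split that Coopr's Pyomo package formalizes can be illustrated without Pyomo itself. The sketch below is plain Python (not Pyomo's API): a data-independent knapsack "model" is bound to data and handed to a brute-force "solver", the same separation of concerns an algebraic modeling language provides.

```python
# Plain-Python illustration (not Pyomo's actual API) of the split between
# an abstract model (data-independent rules) and a concrete instance.
def abstract_knapsack():
    """Return an 'abstract model': objective and constraint rules awaiting data."""
    return {
        "objective": lambda value, x: sum(v * xi for v, xi in zip(value, x)),
        "feasible": lambda weight, cap, x: sum(w * xi for w, xi in zip(weight, x)) <= cap,
    }

def solve_instance(model, value, weight, cap):
    """Brute-force 'solver' over binary variables for one concrete instance."""
    n = len(value)
    best, best_x = 0, [0] * n
    for mask in range(1 << n):
        x = [(mask >> i) & 1 for i in range(n)]
        if model["feasible"](weight, cap, x):
            obj = model["objective"](value, x)
            if obj > best:
                best, best_x = obj, x
    return best, best_x

model = abstract_knapsack()
best, best_x = solve_instance(model, value=[6, 10, 12], weight=[1, 2, 3], cap=5)
```

In Pyomo the same separation appears as an AbstractModel plus data files producing a concrete instance that any standard solver can process.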
Quasi-Optimal Elimination Trees for 2D Grids with Singularities
Paszyńska, A.; Paszyński, M.; Jopek, K.; Woźniak, M.; Goik, D.; Gurgul, P.; AbouEisha, H.; Moshkov, M.; Calo, V. M.; Lenharth, A.; et al
2015-01-01
We construct quasi-optimal elimination trees for 2D finite element meshes with singularities. These trees minimize the complexity of the solution of the discrete system. The computational cost estimates of the elimination process model the execution of the multifrontal algorithms in serial and in parallel shared-memory executions. Since the meshes considered are a subspace of all possible mesh partitions, we call these minimizers quasi-optimal. We minimize the cost functionals using dynamic programming. Finding these minimizers is more computationally expensive than solving the original algebraic system. Nevertheless, from the insights provided by the analysis of the dynamic programming minima, we propose a heuristic construction of the elimination trees that has cost O(N_e log N_e), where N_e is the number of elements in the mesh. We show that this heuristic ordering has similar computational cost to the quasi-optimal elimination trees found with dynamic programming and outperforms state-of-the-art alternatives in our numerical experiments.
Coupled Thermal-Hydrological-Mechanical-Chemical Model And Experiments...
Energy.gov [DOE] (indexed site)
Coupled Thermal-Hydrological-Mechanical-Chemical Model and Experiments for Optimization of ...
Centralized Stochastic Optimal Control of Complex Systems
Malikopoulos, Andreas
2015-01-01
In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain using the long-run expected average cost per unit time criterion, and we show that the control policy yielding the Pareto optimal solution minimizes the average cost criterion online. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion.
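The long-run average cost criterion the paper uses can be computed for a small controlled Markov chain by relative value iteration. The two-state chain below is purely illustrative (hypothetical transition probabilities and costs, not the HEV model).

```python
# Relative value iteration for the long-run expected average cost per
# unit time criterion on a toy controlled Markov chain.
def relative_value_iteration(P, c, iters=500):
    """P[a][i][j]: transition probability under action a; c[a][i]: stage cost."""
    n = len(c[0])
    h = [0.0] * n                        # relative value function
    for _ in range(iters):
        q = [[c[a][i] + sum(P[a][i][j] * h[j] for j in range(n))
              for a in range(len(c))] for i in range(n)]
        new_h = [min(q[i]) for i in range(n)]
        gain = new_h[0]                  # normalize at a reference state
        h = [v - gain for v in new_h]
    policy = [min(range(len(c)), key=lambda a: q[i][a]) for i in range(n)]
    return gain, policy

P = [[[0.9, 0.1], [0.2, 0.8]],           # action 0: run cheap, degrade slowly
     [[0.5, 0.5], [0.9, 0.1]]]           # action 1: pay to restore the good state
c = [[1.0, 5.0], [3.0, 2.0]]
gain, policy = relative_value_iteration(P, c)
```

At the fixed point the normalized update converges to the optimal average cost (here 1.1 per step, with the policy that runs cheaply in the good state and repairs in the bad one).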
TAS: 89 0227: TAS Recovery Act - Optimization and Control of Electric Power Systems: ARRA
Chiang, Hsiao-Dong
2014-02-01
The name SuperOPF is used to refer to several projects, problem formulations and software tools intended to extend, improve and re-define some of the standard methods of optimizing electric power systems. Our work included applying primal-dual interior point methods to standard AC optimal power flow problems of large size, as well as extensions of this problem to include co-optimization of multiple scenarios. The original SuperOPF problem formulation was based on co-optimizing a base scenario along with multiple post-contingency scenarios, where all AC power flow models and constraints are enforced for each, to find optimal energy contracts, endogenously determined locational reserves and appropriate nodal energy prices for a single period optimal power flow problem with uncertainty. This led to example non-linear programming problems on the order of 1 million constraints and half a million variables. The second generation SuperOPF formulation extends this by adding multiple periods and multiple base scenarios per period. It also incorporates additional variables and constraints to model load following reserves, ramping costs, and storage resources. A third generation of the multi-period SuperOPF adds both integer variables and a receding horizon framework in which the problem type is more challenging (mixed integer), the size is even larger, and it must be solved more frequently, pushing the limits of currently available algorithms and solvers. The consideration of transient stability constraints in optimal power flow (OPF) problems has become increasingly important in modern power systems. Transient stability constrained OPF (TSCOPF) is a nonlinear optimization problem subject to a set of algebraic and differential equations. Solving a TSCOPF problem can be challenging due to (i) the differential-equation constraints in an optimization problem, (ii) the lack of a true analytical expression for transient stability in OPF. To handle the dynamics in TSCOPF, the set
GMG: A Guaranteed, Efficient Global Optimization Algorithm for Remote Sensing.
D'Helon, CD
2004-08-18
The monocular passive ranging (MPR) problem in remote sensing consists of identifying the precise range of an airborne target (missile, plane, etc.) from its observed radiance. This inverse problem may be posed as a global optimization problem (GOP) whereby the difference between the observed and model predicted radiances is minimized over the possible ranges and atmospheric conditions. Using additional information about the error function between the predicted and observed radiances of the target, we developed GMG, a new algorithm to find the Global Minimum with a Guarantee. The new algorithm transforms the original continuous GOP into a discrete search problem, thereby guaranteeing to find the position of the global minimum in a reasonably short time. The algorithm is first applied to the golf course problem, which serves as a litmus test for its performance in the presence of both complete and degraded additional information. GMG is further assessed on a set of standard benchmark functions and then applied to various realizations of the MPR problem.
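The continuous-to-discrete transformation can be illustrated with a Lipschitz-bound grid search, a textbook device rather than the actual GMG algorithm: if the error function satisfies |f'| ≤ L, a grid of spacing 2·tol/L certifies the global minimum to within tol, turning the continuous problem into a finite search with a guarantee.

```python
import math

# Hedged sketch of a certified discrete search (not GMG itself): with a
# Lipschitz constant L, spacing h = 2*tol/L guarantees the best grid value
# is within tol of the true global minimum.
def certified_grid_min(f, lo, hi, L, tol):
    h = 2.0 * tol / L
    xs = [lo + i * h for i in range(int((hi - lo) / h) + 1)] + [hi]
    best_x = min(xs, key=f)
    return best_x, f(best_x)             # f(best_x) - tol <= true minimum

# |d/dx sin(3x)| <= 3, so L = 3 on [0, 2]; the true minimum is -1 at x = pi/2.
best_x, best_f = certified_grid_min(lambda x: math.sin(3 * x), 0.0, 2.0, L=3.0, tol=1e-3)
```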
Parallel performance optimizations on unstructured mesh-based simulations
Sarje, Abhinav; Song, Sukhyun; Jacobsen, Douglas; Huck, Kevin; Hollingsworth, Jeffrey; Malony, Allen; Williams, Samuel; Oliker, Leonid
2015-06-01
This paper addresses two key parallelization challenges in the unstructured mesh-based ocean modeling code MPAS-Ocean, which uses a mesh based on Voronoi tessellations: (1) load imbalance across processes, and (2) unstructured data access patterns that inhibit intra- and inter-node performance. Our work analyzes the load imbalance due to naive partitioning of the mesh, and develops methods to generate mesh partitionings with better load balance and reduced communication. Furthermore, we present methods that minimize both inter- and intra-node data movement and maximize data reuse. Our techniques include predictive ordering of data elements for higher cache efficiency, as well as communication reduction approaches. We present detailed performance data when running on thousands of cores using the Cray XC30 supercomputer and show that our optimization strategies can exceed the original performance by over 2×. Additionally, many of these solutions can be broadly applied to a wide variety of unstructured grid-based computations.
OriginOil Inc | Open Energy Information
OpenEI (Open Energy Information) [EERE & EIA]
Inc Place: Los Angeles, California Zip: 90016 Product: California-based OTC-quoted algae-to-oil technology developer.
origins.indd | Department of Energy
Fehner and Gosling, Origins of the Nevada Test Site Fehner and Gosling, Atmospheric Nuclear Weapons Testing, 1951-1963. Battlefield of the Cold War: The Nevada Test Site, Volume I ...
An Optimization Framework for Dynamic Hybrid Energy Systems
Wenbo Du; Humberto E Garcia; Christiaan J.J. Paredis
2014-03-01
A computational framework for the efficient analysis and optimization of dynamic hybrid energy systems (HES) is developed. A microgrid system with multiple inputs and multiple outputs (MIMO) is modeled using the Modelica language in the Dymola environment. The optimization loop is implemented in MATLAB, with the FMI Toolbox serving as the interface between the computational platforms. Two characteristic optimization problems are selected to demonstrate the methodology and gain insight into the system performance. The first is an unconstrained optimization problem that optimizes the dynamic properties of the battery, reactor and generator to minimize variability in the HES. The second problem takes operating and capital costs into consideration by imposing linear and nonlinear constraints on the design variables. The preliminary optimization results obtained in this study provide an essential step towards the development of a comprehensive framework for designing HES.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Optimization Strategies for Cori, NERSC User Services, Wednesday Feb 25, 2015. Introduction to Cori: what is different about Cori? Edison (Ivy Bridge): 12 cores per CPU; 24 virtual cores per CPU; 2.4-3.2 GHz; 4 double-precision operations per cycle (plus multiply/add); 2.5 GB of memory per core; ~100 GB/s memory bandwidth. Cori (Knights Landing): 60+ physical cores per CPU; 240+ virtual cores per CPU; much lower clock speed; 8 double-precision ...
Optimal recovery sequencing for critical infrastructure resilience assessment.
Vugrin, Eric D.; Brown, Nathanael J. K.; Turnquist, Mark Alan
2010-09-01
Critical infrastructure resilience has become a national priority for the U. S. Department of Homeland Security. System resilience has been studied for several decades in many different disciplines, but no standards or unifying methods exist for critical infrastructure resilience analysis. This report documents the results of a late-start Laboratory Directed Research and Development (LDRD) project that investigated the identification of optimal recovery strategies that maximize resilience. Toward this goal, we formulate a bi-level optimization problem for infrastructure network models. In the 'inner' problem, we solve for network flows, and we use the 'outer' problem to identify the optimal recovery modes and sequences. We draw from the literature of multi-mode project scheduling problems to create an effective solution strategy for the resilience optimization model. We demonstrate the application of this approach to a set of network models, including a national railroad model and a supply chain for Army munitions production.
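A toy version of the outer sequencing problem can be written by enumerating repair orders, with the inner network-flow solve reduced to a per-component capacity lookup (the component names and costs below are hypothetical, not from the report).

```python
from itertools import permutations

# Toy outer problem: choose the repair order minimizing capacity-weighted
# downtime; each component contributes (capacity restored) x (completion time).
def best_recovery_sequence(repairs):
    """repairs: component -> (repair_time, capacity_restored)"""
    def weighted_downtime(seq):
        t, cost = 0.0, 0.0
        for comp in seq:
            dur, cap = repairs[comp]
            t += dur                     # completion time of this repair
            cost += cap * t              # capacity-weighted downtime
        return cost
    return min(permutations(repairs), key=weighted_downtime)

seq = best_recovery_sequence({"substation": (1.0, 8.0),
                              "bridge": (5.0, 11.0),
                              "rail": (2.0, 4.0)})
```

This objective reduces to weighted-completion-time scheduling (repair high capacity-per-day components first); the report's formulation replaces the lookup with a full network-flow solve per recovery state.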
Accelerating PDE-Constrained Optimization Problems using Adaptive...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Accelerating PDE-Constrained Optimization Problems using Adaptive Reduced-Order Models January 15, 2016 10:30AM to 11:30AM Presenter Matthew Zahr, Wilkinson Interviewee Location...
Optimal design of distributed wastewater treatment networks
Galan, B.; Grossmann, I.E.
1998-10-01
This paper deals with the optimum design of a distributed wastewater network where multicomponent streams are considered that are to be processed by units for reducing the concentration of several contaminants. The proposed model gives rise to a nonconvex nonlinear problem which often exhibits local minima and causes convergence difficulties. A search procedure is proposed in this paper that is based on the successive solution of a relaxed linear model and the original nonconvex nonlinear problem. Several examples are presented to illustrate that the proposed method often yields global or near global optimum solutions. The model is also extended for selecting different treatment technologies and for handling membrane separation modules.
Detachment faults: Evidence for a low-angle origin
Scott, R.J.; Lister, G.S.
1992-09-01
The origin of low-angle normal faults or detachment faults mantling metamorphic core complexes in the southwestern United States remains controversial. If σ₁ is vertical during extension, the formation of, or even slip along, such low-angle normal faults is mechanically implausible. No records exist of earthquakes on low-angle normal faults in areas currently undergoing continental extension, except from an area of actively forming core complexes in the Solomon Sea, Papua New Guinea. In light of such geophysical and mechanical arguments, W.R. Buck and B. Wernicke and G.J. Axen proposed models in which detachment faults originate as high-angle normal faults, but rotate to low angles and become inactive as extension proceeds. These models are inconsistent with critical field relations in several core complexes. The Rawhide fault, an areally extensive detachment fault in western Arizona, propagated at close to its present subhorizontal orientation late in the Tertiary extension of the region. Neither the Wernicke and Axen nor Buck models predict such behavior; in fact, both models preclude the operation of low-angle normal faults. The authors recommend that alternative explanations or modifications of existing models are needed to explain the evidence that detachment faults form and operate with gentle dips.
Bower, Stanley
2011-12-31
A 5.0L V8 twin-turbocharged direct injection engine was designed, built, and tested for the purpose of assessing, in the F-Series pickup, the fuel economy and performance of the Dual Fuel engine concept and of an E85-optimized FFV engine. Additionally, production 3.5L gasoline turbocharged direct injection (GTDI) EcoBoost engines were converted to Dual Fuel capability and used to evaluate the cold start emissions and fuel system robustness of the Dual Fuel engine concept. Project objectives were: to develop a roadmap to demonstrate a minimized fuel economy penalty for an F-Series FFV truck with a highly boosted, high compression ratio spark ignition engine optimized to run with ethanol fuel blends up to E85; to reduce FTP 75 energy consumption by 15% - 20% compared to an equally powered vehicle with a current production gasoline engine; and to meet ULEV emissions, with a stretch target of ULEV II / Tier II Bin 4. All project objectives were met or exceeded.
Fast optimization and dose calculation in scanned ion beam therapy
Hild, S.; Graeff, C.; Trautmann, J.; Kraemer, M.; Zink, K.; Durante, M.; Bert, C.
2014-07-15
Purpose: Particle therapy (PT) has advantages over photon irradiation on static tumors. An increased biological effectiveness and active target conformal dose shaping are strong arguments for PT. However, the sensitivity to changes of internal geometry complicates the use of PT for moving organs. In case of interfractionally moving objects adaptive radiotherapy (ART) concepts known from intensity modulated radiotherapy (IMRT) can be adopted for PT treatments. One ART strategy is to optimize a new treatment plan based on daily image data directly before a radiation fraction is delivered [treatment replanning (TRP)]. Optimizing treatment plans for PT using a scanned beam is a time-consuming problem, especially for particles other than protons where the biological effective dose has to be calculated. For the purpose of TRP, fast optimization and fast dose calculation have been implemented into the GSI in-house treatment planning system (TPS) TRiP98. Methods: This work reports about the outcome of a code analysis that resulted in optimization of the calculation processes as well as implementation of routines supporting parallel execution of the code. To benchmark the new features, the calculation time for therapy treatment planning has been studied. Results: Compared to the original version of the TPS, calculation times for treatment planning (optimization and dose calculation) have been improved by a factor of 10 with code optimization. The parallelization of the TPS resulted in a speedup factor of 12 and 5.5 for the original version and the code optimized version, respectively. Hence the total speedup of the new implementation of the authors' TPS yielded speedup factors up to 55. Conclusions: The improved TPS is capable of completing treatment planning for ion beam therapy of a prostate irradiation, considering organs at risk, in under 6 min.
Advanced Modeling for Particle Accelerators
U.S. Department of Energy (DOE) - all webpages (Extended Search)
multiphysics, multi-bunch modeling of injectors, boosters, and debunchers for performance optimization. These applications include large-scale electromagnetic modeling of...
Stochastic Optimal Scheduling of Residential Appliances with Renewable Energy Sources
Wu, Hongyu; Pratt, Annabelle; Chakraborty, Sudipta
2015-07-03
This paper proposes a stochastic, multi-objective optimization model within a Model Predictive Control (MPC) framework, to determine the optimal operational schedules of residential appliances operating in the presence of renewable energy source (RES). The objective function minimizes the weighted sum of discomfort, energy cost, total and peak electricity consumption, and carbon footprint. A heuristic method is developed for combining different objective components. The proposed stochastic model utilizes Monte Carlo simulation (MCS) for representing uncertainties in electricity price, outdoor temperature, RES generation, water usage, and non-controllable loads. The proposed model is solved using a mixed integer linear programming (MILP) solver and numerical results show the validity of the model. Case studies show the benefit of using the proposed optimization model.
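The Monte Carlo simulation idea can be sketched by choosing an appliance start hour that minimizes expected cost over sampled price scenarios. The load profile and price peak below are illustrative stand-ins, not the paper's MILP formulation.

```python
import random

# Sketch: evaluate each candidate start hour against Monte Carlo price
# scenarios and pick the one with the lowest expected energy cost.
def best_start_hour(load, price_scenarios, hours=24):
    def expected_cost(start):
        total = 0.0
        for prices in price_scenarios:
            total += sum(load[i] * prices[(start + i) % hours] for i in range(len(load)))
        return total / len(price_scenarios)
    return min(range(hours), key=expected_cost)

random.seed(0)
base = [0.1] * 24
for h in range(17, 21):
    base[h] = 0.4                        # evening price peak ($/kWh, hypothetical)
scenarios = [[p * random.uniform(0.9, 1.1) for p in base] for _ in range(200)]
start = best_start_hour([2.0, 2.0, 1.0], scenarios)   # 3-hour appliance run
```

The schedule avoids the evening peak; the paper's model additionally weighs discomfort, peak consumption and carbon footprint and re-solves in a receding MPC horizon.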
Adiabatic quantum optimization for associative memory recall
Seddiqi, Hadayat; Humble, Travis S.
2014-12-22
Hopfield networks are a variant of associative memory that recall patterns stored in the couplings of an Ising model. Stored memories are conventionally accessed as fixed points in the network dynamics that correspond to energetic minima of the spin state. We show that memories stored in a Hopfield network may also be recalled by energy minimization using adiabatic quantum optimization (AQO). Numerical simulations of the underlying quantum dynamics allow us to quantify AQO recall accuracy with respect to the number of stored memories and noise in the input key. We investigate AQO performance with respect to how memories are stored in the Ising model according to different learning rules. Our results demonstrate that AQO recall accuracy varies strongly with learning rule, a behavior that is attributed to differences in energy landscapes. Consequently, learning rules offer a family of methods for programming adiabatic quantum optimization that we expect to be useful for characterizing AQO performance.
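Classical recall by greedy energy minimization over Hebbian couplings illustrates the Ising landscape that AQO anneals over. The six-spin network below is a toy (one stored pattern, Hebbian learning rule), not the paper's simulations.

```python
# Hopfield recall as energy minimization: Hebbian couplings store the
# pattern, and greedy spin updates descend to the nearest energy minimum.
def hebbian(patterns):
    n = len(patterns[0])
    J = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    J[i][j] += p[i] * p[j] / len(patterns)
    return J

def recall(J, state, sweeps=10):
    s = list(state)
    n = len(s)
    for _ in range(sweeps):
        for i in range(n):               # flip spin i to align with its local field
            field = sum(J[i][j] * s[j] for j in range(n))
            s[i] = 1 if field >= 0 else -1
    return s

memory = [1, 1, -1, -1, 1, -1]
J = hebbian([memory])
noisy = [1, -1, -1, -1, 1, -1]           # input key with one corrupted spin
restored = recall(J, noisy)
```

AQO replaces the greedy descent with adiabatic evolution toward the ground state, which is why the choice of learning rule (and hence energy landscape) governs recall accuracy.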
Control strategy optimization of HVAC plants
Facci, Andrea Luigi; Zanfardino, Antonella; Martini, Fabrizio; Pirozzi, Salvatore; Ubertini, Stefano
2015-03-10
In this paper we present a methodology to optimize the operating conditions of heating, ventilation and air conditioning (HVAC) plants to achieve a higher energy efficiency in use. Semi-empiric numerical models of the plant components are used to predict their performances as a function of their set-point and the environmental and occupied space conditions. The optimization is performed through a graph-based algorithm that finds the set-points of the system components that minimize energy consumption and/or energy costs, while matching the user energy demands. The resulting model can be used with systems of almost any complexity, featuring both HVAC components and energy systems, and is sufficiently fast to make it applicable to real-time setting.
Origin and dynamics of vortex rings in drop splashing
Lee, Ji San; Park, Su Ji; Lee, Jun Ho; Weon, Byung Mook; Fezzaa, Kamel; Je, Jung Ho
2015-09-04
A vortex is a flow phenomenon that is very commonly observed in nature. For more than a century, the vortex ring that forms during drop splashing has caught the attention of many scientists due to its importance in understanding fluid mixing and mass transport processes. However, the origin of the vortices and their dynamics remain unclear, mostly due to the lack of appropriate visualization methods. Here, with ultrafast X-ray phase-contrast imaging, we show that the formation of vortex rings originates from the energy transfer by capillary waves generated at the moment of the drop impact. Interestingly, we find a row of vortex rings along the drop wall, as demonstrated by a phase diagram established here, with different power-law dependencies of the angular velocities on the Reynolds number. These results provide important insight that allows understanding and modelling any type of vortex rings in nature, beyond just vortex rings during drop splashing.
Optimal recovery of linear operators in non-Euclidean metrics
Osipenko, K Yu
2014-10-31
The paper looks at problems concerning the recovery of operators from noisy information in non-Euclidean metrics. A number of general theorems are proved and applied to recovery problems for functions and their derivatives from the noisy Fourier transform. In some cases, a family of optimal methods is found, from which the methods requiring the least amount of original information are singled out. Bibliography: 25 titles.
Desalination Plant Optimization
Energy Science and Technology Software Center
1992-10-01
MSF21 and VTE21 perform design and costing calculations for multistage flash evaporator (MSF) and multieffect vertical tube evaporator (VTE) desalination plants. An optimization capability is available, if desired. The MSF plant consists of a recovery section, reject section, brine heater, and associated buildings and equipment. Operating costs and direct and indirect capital costs for plant, buildings, site, and intakes are calculated. Computations are based on the first and last stages of each section and a typical middle recovery stage. As a result, the program runs rapidly but does not give stage by stage parameters. The VTE plant consists of vertical tube effects, multistage flash preheater, condenser, and brine heater and associated buildings and equipment. Design computations are done for each vertical tube effect, but preheater computations are based on the first and last stages and a typical middle stage.
Hierarchical optimization for neutron scattering problems
Bao, Feng; Archibald, Rick; Bansal, Dipanshu; Delaire, Olivier
2016-03-14
In this study, we present a scalable optimization method for neutron scattering problems that determines confidence regions of simulation parameters in lattice dynamics models used to fit neutron scattering data for crystalline solids. The method uses physics-based hierarchical dimension reduction in both the computational simulation domain and the parameter space. We demonstrate for silicon that after a few iterations the method converges to parameter values (interatomic force constants) computed with density functional theory simulations.
Lidar arc scan uncertainty reduction through scanning geometry optimization
Wang, Hui; Barthelmie, Rebecca J.; Pryor, Sara C.; Brown, Gareth.
2016-04-13
Doppler lidars are frequently operated in a mode referred to as arc scans, wherein the lidar beam scans across a sector with a fixed elevation angle and the resulting measurements are used to derive an estimate of the n minute horizontal mean wind velocity (speed and direction). Previous studies have shown that the uncertainty in the measured wind speed originates from turbulent wind fluctuations and depends on the scan geometry (the arc span and the arc orientation). This paper is designed to provide guidance on optimal scan geometries for two key applications in the wind energy industry: wind turbine power performance analysis and annual energy production prediction. We present a quantitative analysis of the retrieved wind speed uncertainty derived using a theoretical model with the assumption of isotropic and frozen turbulence, and observations from three sites that are onshore with flat terrain, onshore with complex terrain and offshore, respectively. The results from both the theoretical model and observations show that the uncertainty scales with the turbulence intensity such that the relative standard error on the 10 min mean wind speed is about 30% of the turbulence intensity. The uncertainty in both retrieved wind speeds and derived wind energy production estimates can be reduced by aligning lidar beams with the dominant wind direction, increasing the arc span and lowering the number of beams per arc scan. As a result, large arc spans should be used at sites with high turbulence intensity and/or large wind direction variation.
Getting Started and Optimization Strategy
U.S. Department of Energy (DOE) - all webpages (Extended Search)
The purpose of this page is to get you started thinking about how to optimize your application for the Knights Landing (KNL) architecture that will be on Cori. This page will walk you through the high-level steps and give an example using a real application that runs at NERSC. How Cori Differs From Edison: There are several important differences between the Cori (Knights Landing) node architecture and the Edison node architecture.
HMX Cooling Core Optimization Software
Energy Science and Technology Software Center
2006-08-31
The software determines the optimal configuration of an HMX cooling core in a heat exchanger.
Optimizing PDFs for Search Engines
For search engine optimization (SEO), follow the Office of Energy Efficiency and Renewable Energy (EERE) best practices for adding metadata to PDFs.
Loth, E.; Tryggvason, G.; Tsuji, Y.; Elghobashi, S. E.; Crowe, Clayton T.; Berlemont, A.; Reeks, M.; Simonin, O.; Frank, Th; Onishi, Yasuo; Van Wachem, B.
2005-09-01
Slurry flows occur in many circumstances, including chemical manufacturing processes; pipeline transfer of coal, sand, and minerals; mud flows; and disposal of dredged materials. In this section we discuss slurry flow applications related to radioactive waste management. The Hanford tank waste solids and interstitial liquids will be mixed to form a slurry so it can be pumped out for retrieval and treatment. The waste is very complex chemically and physically. The ARIEL code is used to model the chemical interactions and fluid dynamics of the waste.
Malikopoulos, Andreas
2015-01-01
The increasing urgency to extract additional efficiency from hybrid propulsion systems has led to the development of advanced power management control algorithms. In this paper we address the problem of online optimization of the supervisory power management control in parallel hybrid electric vehicles (HEVs). We model HEV operation as a controlled Markov chain and we show that the control policy yielding the Pareto optimal solution minimizes online the long-run expected average cost per unit time criterion. The effectiveness of the proposed solution is validated through simulation and compared to the solution derived with dynamic programming using the average cost criterion. Both solutions achieved the same cumulative fuel consumption, demonstrating that the online Pareto control policy is an optimal control policy.
Eslick, John C.; Ng, Brenda; Gao, Qianwen; Tong, Charles H.; Sahinidis, Nikolaos V.; Miller, David C.
2014-12-31
Under the auspices of the U.S. Department of Energy's Carbon Capture Simulation Initiative (CCSI), a Framework for Optimization and Quantification of Uncertainty and Sensitivity (FOQUS) has been developed. This tool enables carbon capture systems to be rapidly synthesized and rigorously optimized, in an environment that accounts for and propagates uncertainties in parameters and models. FOQUS currently enables (1) the development of surrogate algebraic models utilizing the ALAMO algorithm, which can be used for superstructure optimization to identify optimal process configurations, (2) simulation-based optimization utilizing derivative free optimization (DFO) algorithms with detailed black-box process models, and (3) rigorous uncertainty quantification through PSUADE. FOQUS utilizes another CCSI technology, the Turbine Science Gateway, to manage the thousands of simulated runs necessary for optimization and UQ. Thus, this computational framework has been demonstrated for the design and analysis of a solid sorbent based carbon capture system.
Transmission network expansion planning with simulation optimization
Bent, Russell W; Berscheid, Alan; Toole, G. Loren
2010-01-01
Within the electric power literature, the transmission expansion planning problem (TNEP) refers to the problem of how to upgrade an electric power network to meet future demands. As this problem is a complex, non-linear, and non-convex optimization problem, researchers have traditionally focused on approximate models. Often, their approaches are tightly coupled to the approximation choice. Until recently, these approximations have produced results that are straightforward to adapt to the more complex (real) problem. However, the power grid is evolving towards a state where the adaptations are no longer easy (i.e., large amounts of limited-control renewable generation), which necessitates new optimization techniques. In this paper, we propose a generalization of the powerful Limited Discrepancy Search (LDS) that encapsulates the complexity in a black box that may be queried for information about the quality of a proposed expansion. This allows the development of a new optimization algorithm that is independent of the underlying power model.
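For intuition, standard Limited Discrepancy Search (Harvey and Ginsberg) explores candidate decision vectors in order of how far they deviate from a heuristic's preferred choice, querying an evaluator for each. This is a simplified sketch of that baseline idea, not the authors' generalization; the black-box evaluator and all names here are hypothetical:

```python
from itertools import combinations

def lds_candidates(n, max_disc):
    """Enumerate length-n binary decision vectors in order of increasing
    discrepancy from the heuristic's preferred (all-zero) path."""
    for k in range(max_disc + 1):
        for flips in combinations(range(n), k):
            cand = [0] * n
            for i in flips:
                cand[i] = 1
            yield cand

def lds_optimize(n, evaluate, max_disc):
    """Query a black-box evaluator for each candidate (as in the paper's
    setup, where the box hides the power-model complexity) and keep the
    best expansion plan found."""
    best, best_val = None, float("inf")
    for cand in lds_candidates(n, max_disc):
        v = evaluate(cand)  # black box: e.g. a power-flow simulation
        if v < best_val:
            best, best_val = cand, v
    return best, best_val
```

Because the evaluator is only queried, never differentiated or linearized, the search itself stays independent of the underlying power model.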
Optimized Swinging Door Algorithm for Wind Power Ramp Event Detection: Preprint
Cui, Mingjian; Zhang, Jie; Florita, Anthony R.; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang
2015-08-06
Significant wind power ramp events (WPREs) are those that influence the integration of wind power, and they are a concern to the continued reliable operation of the power grid. As wind power penetration has increased in recent years, so has the importance of wind power ramps. In this paper, an optimized swinging door algorithm (SDA) is developed to improve ramp detection performance. Wind power time series data are segmented by the original SDA, and then all significant ramps are detected and merged through a dynamic programming algorithm. An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas (ERCOT) are used to evaluate the proposed optimized SDA.
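The original SDA that the paper tunes is a standard data-compression technique: a point is archived only when the "doors" pivoting around the last archived point can no longer enclose the incoming signal within a tolerance. Below is a minimal sketch of that baseline algorithm under the usual formulation; the paper's optimized parameter selection and dynamic-programming ramp merging are not shown:

```python
def swinging_door(times, values, eps):
    """Basic swinging-door compression: archive a point only when no straight
    segment from the last archived point can stay within +/- eps of all
    intermediate samples."""
    kept = [0]
    a = 0                                   # index of last archived point
    s_up, s_low = float("-inf"), float("inf")
    for i in range(1, len(values)):
        dt = times[i] - times[a]
        s_up = max(s_up, (values[i] - values[a] - eps) / dt)
        s_low = min(s_low, (values[i] - values[a] + eps) / dt)
        if s_up > s_low:                    # doors have swung past parallel
            a = i - 1                       # archive previous point, restart doors
            kept.append(a)
            dt = times[i] - times[a]
            s_up = (values[i] - values[a] - eps) / dt
            s_low = (values[i] - values[a] + eps) / dt
    kept.append(len(values) - 1)
    return kept
```

On wind power series, the segments between archived points are the candidate ramps; the tolerance eps is the parameter the paper's optimized SDA selects.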
Hansborough, L.; Hamm, R.; Stovall, J.; Swenson, D.
1980-01-01
PIGMI (Pion Generator for Medical Irradiations) is a compact linear proton accelerator design, optimized for pion production and cancer treatment use in a hospital environment. Technology developed during a four-year PIGMI Prototype experimental program allows the design of smaller, less expensive, and more reliable proton linacs. A new type of low-energy accelerating structure, the radio-frequency quadrupole (RFQ), has been tested; it produces an exceptionally good-quality beam and allows the use of a simple 30-kV injector. Average axial electric-field gradients of over 9 MV/m have been demonstrated in a drift-tube linac (DTL) structure. Experimental work is underway to test the disk-and-washer (DAW) structure, another new type of accelerating structure for use in the high-energy coupled-cavity linac (CCL). Sufficient experimental and developmental progress has been made to closely define an actual PIGMI. It will consist of a 30-kV injector, an RFQ linac to a proton energy of 2.5 MeV, a DTL linac to 125 MeV, and a CCL linac to the final energy of 650 MeV. The total length of the accelerator is 133 meters. The RFQ and DTL will be driven by a single 440-MHz klystron; the CCL will be driven by six 1320-MHz klystrons. The peak beam current is 28 mA. The beam pulse length is 60 μs at a 60-Hz repetition rate, resulting in a 100-μA average beam current. The total cost of the accelerator is estimated to be approximately $10 million.
Nondimensional Schmidt analysis for optimal design of Stirling engines
Impero Abenavoli, R.I.; Sciaboni, A.; Carlini, M.; Kormanski, H.; Rudzinska, K.
1996-01-01
General directions for rough optimal calibration of Stirling machines can be given by a non-dimensional Schmidt model (nDSM). Since different relative parameters and performance indices have been analyzed by nDSM models, there is a lack of uniform conclusions in the literature. This paper describes a new nDSM of six parameters and compares four performance indices as functions of relative parameters. Two optimization tasks, of two and five parameters, are formulated and solved using the nDSM. The maximized criterion is cycle work per unit of mean pressure and total swept volume. An optimization code based on the algorithm of conjugate gradients with projection on linear constraints is described. The optimal values of volume phase angle, nondimensional swept volume, and dead volume are presented for different constraints imposed on temperature ratio and relative dead volumes.
Theoretical evaluation of the optimal performance of a thermoacoustic refrigerator
Minner, B.L.; Braun, J.E.; Mongeau, L.G.
1997-12-31
Theoretical models were integrated with a design optimization tool to allow estimates of the maximum coefficient of performance for thermoacoustic cooling systems. The system model was validated using experimental results for a well-documented prototype. The optimization tool was then applied to this prototype to demonstrate the benefits of systematic optimization. A twofold increase in performance was predicted through the variation of component dimensions alone, while a threefold improvement was estimated when the working fluid parameters were also considered. Devices with a similar configuration were optimized for operating requirements representative of a home refrigerator. The results indicate that the coefficients of performance are comparable to those of existing vapor-compression equipment for this application. In addition to the choice of working fluid, the heat exchanger configuration was found to be a critical design factor affecting performance. Further experimental work is needed to confirm the theoretical predictions presented in this paper.
Optimal Portfolio Selection Under Concave Price Impact
Ma Jin; Song Qingshuo; Xu Jing; Zhang Jianfeng
2013-06-15
In this paper we study an optimal portfolio selection problem under instantaneous price impact. Based on some empirical analysis in the literature, we model such impact as a concave function of the trading size when the trading size is small. The price impact can be thought of as either a liquidity cost or a transaction cost, but the concavity nature of the cost leads to some fundamental difference from those in the existing literature. We show that the problem can be reduced to an impulse control problem, but without fixed cost, and that the value function is a viscosity solution to a special type of Quasi-Variational Inequality (QVI). We also prove directly (without using the solution to the QVI) that the optimal strategy exists and more importantly, despite the absence of a fixed cost, it is still in a 'piecewise constant' form, reflecting a more practical perspective.
Optimization of Regenerators for AMRR Systems
Nellis, Gregory; Klein, Sanford; Brey, William; Moine, Alexandra; Nielson, Kaspar
2015-06-18
Active Magnetic Regenerative Refrigeration (AMRR) systems have no direct global warming potential or ozone depletion potential and hold the potential for providing refrigeration with efficiencies that are equal to or greater than the vapor compression systems used today. The work carried out in this project has developed and improved modeling tools that can be used to optimize and evaluate the magnetocaloric materials and geometric structure of the regenerator beds required for AMRR systems. There has been an explosion in the development of magnetocaloric materials for AMRR systems over the past few decades. The most attractive materials, based on the magnitude of the measured magnetocaloric effect, tend to also have large amounts of hysteresis. This project has provided for the first time a thermodynamically consistent method for evaluating these hysteretic materials in the context of an AMRR cycle. An additional, practical challenge that has been identified for AMRR systems is related to the participation of the regenerator wall in the cyclic process. The impact of housing heat capacity on both passive and active regenerative systems has been studied and clarified within this project. This report is divided into two parts corresponding to these two efforts. Part 1 describes the work related to modeling magnetic hysteresis, while Part 2 discusses the modeling of the heat capacity of the housing. A key outcome of this project is the development of a publicly available modeling tool that allows researchers to identify a truly optimal magnetocaloric refrigerant. Typically, the refrigeration potential of a magnetocaloric material is judged entirely on the magnitude of the magnetocaloric effect, while other properties of the material are deemed unimportant. This project has shown that a material with a large magnetocaloric effect (as evidenced, for example, by a large adiabatic temperature change) may not be optimal when it is accompanied by a large hysteresis.
Development and Optimization of Modular Hybrid Plasma Reactor (Technical Report)
Office of Scientific and Technical Information (OSTI)
INL developed a bench-scale, modular hybrid plasma system for gas-phase nanomaterials synthesis. The system was optimized for WO₃ nanoparticle production and scale-model projection to a 300 kW pilot system. During the course of technology development, many modifications were made to the system
Barus, R. P. P.; Tjokronegoro, H. A.; Leksono, E.; Ismunandar
2014-09-25
Fuel cells are promising new energy conversion devices that are friendly to the environment. A set of control systems is required in order to operate a fuel cell based power plant system optimally. For the purpose of control system design, an accurate fuel cell stack model that describes the dynamics of the real system is needed. Currently, linear models are widely used for fuel cell stack control purposes, but they are limited to a narrow operating range, while nonlinear models lead to nonlinear control implementations that are more complex and computationally demanding. In this research, a nonlinear cancellation technique is used to transform a nonlinear model into a linear form while maintaining the nonlinear characteristics. The transformation is done by replacing the input of the original model with a virtual input that has a nonlinear relationship with the original input. The equivalence of the two models is then tested by running a series of simulations. Input variations of H2, O2, and H2O, as well as the disturbance input I (current load), are studied by simulation. The error between the proposed model and the original nonlinear model is less than 1%. Thus we can conclude that the nonlinear cancellation technique can represent a fuel cell's nonlinear model in a simple linear form while maintaining the nonlinear characteristics and therefore retaining the wide operation range.
Building Energy Optimization (BEopt) Software | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Residential Buildings » Building America » Building Energy Optimization (BEopt) Software Building Energy Optimization (BEopt) Software BEopt 2.4 Now Available! With the release of BEopt Version 2.4 Beta, users can now perform modeling analysis on multifamily buildings! Other new options for input include: heat pump clothes dryers; electric/gas clothes dryers; condensing tank water heaters; door construction and area; window areas defined by façade-specific WWRs; and 2013 ASHRAE 62.2
Optimizing Blast Furnace Operation to Increase Efficiency and Lower Costs
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
State-of-the-Art Computational Fluid Dynamics Model Optimizes Fuel Rate in Blast Furnaces. The blast furnace (BF) is the most widely used ironmaking process in the U.S. A major advance in BF ironmaking has been the use of pulverized coal, which partially replaces metallurgical coke. This results in substantial improvements in furnace efficiency and thus reductions in energy consumption and greenhouse gas emissions.
Successful Selection of LED Streetlight Luminaires: Optimizing Illumination and Economic Performance
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
This March 6, 2013 webcast reviewed the factors involved in the successful selection of LED streetlight luminaires. Presenters Eric Haugaard of Cree Lighting and Chad Stalker of Philips Lumileds guided participants through the modeling of
ORIGIN OF DUST AROUND V1309 SCO
Zhu, Chunhua; Lü, Guoliang; Wang, Zhaojun
2013-11-01
The origin of dust grains in the interstellar medium is still an unanswered problem. Nicholls et al. found the presence of a significant amount of dust around V1309 Sco, which may originate from the merger of a contact binary. We investigate the origin of dust around V1309 Sco and suggest that these dust grains are produced in the binary-merger ejecta. By means of the AGBDUST code, we estimate that ∼5.2 × 10⁻⁴ M☉ of dust grains are produced, with radii of ∼10⁻⁵ cm. These dust grains are mainly composed of silicate and iron grains. Because the mass of the binary-merger ejecta is very small, the contribution of dust produced by binary-merger ejecta to the overall dust production in the interstellar medium is negligible. However, it is important to note that the discovery of a significant amount of dust around V1309 Sco offers direct support for the idea that common-envelope ejecta provides an ideal environment for dust formation and growth. Therefore, we confirm that common-envelope ejecta can be an important source of cosmic dust.
Origin of the narrow, single peak in the fission-fragment mass distribution for ²⁵⁸Fm (Journal Article)
Office of Scientific and Technical Information (OSTI)
We discuss the origin of the narrowness of the single peak at mass-symmetric division in the fragment mass-yield curve for spontaneous fission of ²⁵⁸Fm. For this purpose, we employ the macroscopic-microscopic model and calculate
Inversion of seismic reflection traveltimes using a nonlinear optimization scheme
Pullammanappallil, S.K.; Louie, J.N. (Univ. of Nevada, Reno, NV (United States). Mackay School of Mines)
1993-11-01
The authors present the use of a nonlinear optimization scheme called generalized simulated annealing to invert seismic reflection times for velocities, reflector depths, and lengths. A finite-difference solution of the eikonal equation computes reflection traveltimes through the velocity model and avoids ray tracing. They test the optimization scheme on synthetic models and compare it with results from a linearized inversion. The synthetic tests illustrate that, unlike linear inversion schemes, the results obtained by the optimization scheme are independent of the initial model. The annealing method has the ability to produce a suite of models that satisfy the data equally well. They make use of this property to determine the uncertainties associated with the model parameters obtained. Synthetic examples demonstrate that allowing the reflector length to vary, along with its position, helps the optimization process obtain a better solution. The authors put this to use in imaging the Garlock fault, whose geometry at depth is poorly known. They use reflection times picked from shot gathers recorded along COCORP Mojave Line 5 to invert for the Garlock fault and velocities within the Cantil Basin below Fremont Valley, California. The velocities within the basin obtained by their optimization scheme are consistent with earlier studies, though their results suggest that the basin might extend 1--2 km further south. The reconstructed reflector seems to suggest shallowing of the dip of the Garlock fault at depth.
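The abstract's key claim, that the result is independent of the initial model, comes from the annealing acceptance rule, which allows uphill moves while the "temperature" is high. As a minimal sketch of that mechanism (a generic scalar simulated-annealing loop on an illustrative multimodal function, not a traveltime misfit; all parameter values are assumptions):

```python
import math
import random

def simulated_annealing(f, x0, step=0.5, t0=2.0, cooling=0.999, iters=20000, seed=1):
    """Minimize f(x) for scalar x. Downhill moves are always accepted;
    uphill moves are accepted with probability exp(-delta/T), which lets
    the search escape local minima while the temperature T is still high."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    temp = t0
    for _ in range(iters):
        cand = x + rng.uniform(-step, step)
        fc = f(cand)
        if fc < fx or rng.random() < math.exp((fx - fc) / temp):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x, fx
        temp *= cooling  # geometric cooling schedule
    return best_x, best_f

# Multimodal stand-in misfit with its global minimum at x = 0.
misfit = lambda x: x * x + 10.0 * (1.0 - math.cos(x))
x_a, f_a = simulated_annealing(misfit, x0=8.0)   # two very different
x_b, f_b = simulated_annealing(misfit, x0=-6.0)  # starting models
```

Both runs end up in the global basin despite starting on opposite sides of it, which is the property the inversion exploits; collecting the accepted models along the way is what yields the suite of equally good models used for uncertainty estimates.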
Mays, Gary T; Belles, Randy; Cetiner, Sacit M; Howard, Rob L; Liu, Cheng; Mueller, Don; Omitaomu, Olufemi A; Peterson, Steven K; Scaglione, John M
2012-06-01
The objective of this siting study work is to support DOE in evaluating integrated advanced nuclear plant and ISFSI deployment options in the future. This study looks at several nuclear power plant growth scenarios that consider the locations of existing and planned commercial nuclear power plants integrated with the establishment of consolidated interim spent fuel storage installations (ISFSIs). This research project is aimed at providing methodologies, information, and insights that inform the process for determining and optimizing candidate areas for new advanced nuclear power generation plants and consolidated ISFSIs to meet projected US electric power demands for the future.
Optimization of cable terminations
Nikolajevic, S.V.; Pekaric-Nad, N.M.; Dimitrijevic, R.M.
1997-04-01
This paper describes a study of various termination constructions for medium voltage cross-linked polyethylene (XLPE) cables. A special device was used for electrical field measurements around the cable termination which made it possible to monitor how stress relief materials with different permittivity and placement of isolated or grounded embedded electrodes (EE) affected electrical stress grading. The results of measurements for each configuration were examined by mathematical modeling based on the finite element method (FEM). Finally, the selected constructions of cable termination have passed severe test conditions with load cycling.
Optimization of a CNG series hybrid concept vehicle
Aceves, S.M.; Smith, J.R.; Perkins, L.J.; Haney, S.W.; Flowers, D.L.
1995-09-22
Compressed Natural Gas (CNG) has favorable characteristics as a vehicular fuel, in terms of fuel economy as well as emissions. Using CNG as a fuel in a series hybrid vehicle has the potential to yield very high fuel economy (between 26 and 30 km/liter, 60 to 70 mpg) and very low emissions (substantially lower than Federal Tier II or CARB ULEV). This paper uses a vehicle evaluation code and an optimizer to find a set of vehicle parameters that results in optimum vehicle fuel economy. The vehicle evaluation code used in this analysis estimates vehicle power performance, including engine efficiency and power, generator efficiency, energy storage device efficiency and state-of-charge, and motor and transmission efficiencies. Eight vehicle parameters are selected as free variables for the optimization. The optimum vehicle must also meet two performance requirements: accelerate to 97 km/h in less than 10 s, and climb an infinitely long hill with a 6% slope at 97 km/h with a 272 kg (600 lb.) payload. The optimizer used in this work was originally developed in the magnetic fusion energy program, and has been used to optimize complex systems such as magnetic and inertial fusion devices, neutron sources, and rail guns. The optimizer consists of two parts: an optimization package for minimizing nonlinear functions of many variables subject to several nonlinear equality and/or inequality constraints, and a programmable shell that allows interactive configuration and execution of the optimizer. The results of the analysis indicate that the CNG series hybrid vehicle has high efficiency and low emissions. These results emphasize the advantages of CNG as a near-term alternative fuel for vehicles.
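The optimizer described above minimizes nonlinear functions subject to nonlinear inequality constraints. A minimal sketch of one classic approach to that problem class is a quadratic-penalty method with an increasing penalty weight; the objective and constraint below are illustrative stand-ins, not the paper's vehicle model:

```python
def penalized_min(mu, x, y, iters=20000):
    """Gradient descent on f(x, y) + mu * max(0, 4 - x - y)**2, where
    f(x, y) = (x - 1)**2 + (y - 2)**2 is a stand-in objective and
    x + y >= 4 a stand-in performance requirement."""
    step = 1.0 / (2.0 + 4.0 * mu)  # 1/L step for this quadratic penalty
    for _ in range(iters):
        s = max(0.0, 4.0 - x - y)              # constraint violation
        gx = 2.0 * (x - 1.0) - 2.0 * mu * s    # gradient of penalized objective
        gy = 2.0 * (y - 2.0) - 2.0 * mu * s
        x, y = x - step * gx, y - step * gy
    return x, y

x, y = 0.0, 0.0
for mu in (1.0, 10.0, 100.0, 1000.0):  # tighten the penalty, warm-starting
    x, y = penalized_min(mu, x, y)
# (x, y) approaches (1.5, 2.5), the constrained minimizer.
```

Driving such a loop from a scriptable shell, with the simulation code supplying the objective and constraint values, mirrors the two-part optimizer-plus-shell architecture the abstract describes.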
Zhang, P; Hu, J; Tyagi, N; Mageras, G; Lee, N; Hunt, M
2014-06-01
Purpose: To develop a robust planning paradigm which incorporates a tumor regression model into the optimization process to ensure tumor coverage in head and neck radiotherapy. Methods: Simulation and weekly MR images were acquired for a group of head and neck patients to characterize tumor regression during radiotherapy. For each patient, the tumor and parotid glands were segmented on the MR images and the weekly changes were formulated with an affine transformation, where morphological shrinkage and positional changes are modeled by a scaling factor and centroid shifts, respectively. The tumor and parotid contours were also transferred to the planning CT via rigid registration. To perform the robust planning, weekly predicted PTV and parotid structures were created by transforming the corresponding simulation structures according to the weekly affine transformation matrix averaged over all patients other than the one being planned. Next, robust PTV and parotid structures were generated as the union of the simulation and weekly prediction contours. In the subsequent robust optimization process, attainment of the clinical dose objectives was required for the robust PTV and parotids, as well as other organs at risk (OAR). The resulting robust plans were evaluated by examining the weekly and total accumulated dose to the actual weekly PTV and parotid structures. The robust plan was compared with the original plan based on the planning CT to determine its potential clinical benefit. Results: For four patients, the average weekly change in tumor volume and position was −4% and 1.2 mm laterally-posteriorly. Due to these temporal changes, the robust plans resulted in an accumulated PTV D95 that was, on average, 2.7 Gy higher than the plan created from the planning CT. OAR doses were similar. Conclusion: Integration of a tumor regression model into target delineation and plan robust optimization is feasible and may yield improved tumor coverage. Part of this research is supported by
Central Plateau Remediation Optimization Study
BERGMAN, T. B.; STEFANSKI, L. D.; SEELEY, P. N.; ZINSLI, L. C.; CUSACK, L. J.
2012-09-19
The Central Plateau remediation optimization study was conducted to develop an optimal sequence of remediation activities implementing the CERCLA decision on the Central Plateau. The study defines a sequence of activities that result in an effective use of resources from a strategic perspective when considering equipment procurement and staging, workforce mobilization/demobilization, workforce leveling, workforce skill-mix, and other remediation/disposition project execution parameters.
Optimized Algorithms Boost Combustion Research
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Optimized Algorithms Boost Combustion Research Methane Flame Simulations Run 6x Faster on NERSC's Hopper Supercomputer November 25, 2014 Contact: Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov Turbulent combustion simulations, which provide input to the design of more fuel-efficient combustion systems, have gotten their own efficiency boost, thanks to researchers from the Computational Research Division (CRD) at Lawrence Berkeley National
Paap, Scott M.; West, Todd H.; Manley, Dawn Kataoka; Dibble, Dean C.; Simmons, Blake Alexander; Steen, Eric J.; Beller, Harry R.; Keasling, Jay D.; Chang, Shiyan
2013-01-01
In the current study, processes to produce either ethanol or a representative fatty acid ethyl ester (FAEE) via the fermentation of sugars liberated from lignocellulosic materials pretreated in acid or alkaline environments are analyzed in terms of economic and environmental metrics. Simplified process models are introduced and employed to estimate process performance, and Monte Carlo analyses were carried out to identify key sources of uncertainty and variability. We find that the near-term performance of processes to produce FAEE is significantly worse than that of ethanol production processes for all metrics considered, primarily due to poor fermentation yields and higher electricity demands for aerobic fermentation. In the longer term, the reduced cost and energy requirements of FAEE separation processes will be at least partially offset by inherent limitations in the relevant metabolic pathways that constrain the maximum yield potential of FAEE from biomass-derived sugars.
Portman, J.; Zhang, H.; Makino, K.; Ruan, C. Y.; Berz, M.; Duxbury, P. M.
2014-11-07
Using our model for the simulation of photoemission of high brightness electron beams, we investigate the virtual cathode physics and the limits to spatio-temporal and spectroscopic resolution originating from the image charge on the surface and from the profile of the exciting laser pulse. By contrasting the effect of varying surface properties (leading to expanding or pinned image charge), laser profiles (Gaussian, uniform, and elliptical), and aspect ratios (pancake- and cigar-like) under different extraction field strengths and numbers of generated electrons, we quantify the effect of these experimental parameters on macroscopic pulse properties such as emittance, brightness (4D and 6D), coherence length, and energy spread. Based on our results, we outline optimal conditions of pulse generation for ultrafast electron microscope systems that take into account constraints on the number of generated electrons and on the required time resolution.
DAKOTA Design Analysis Kit for Optimization and Terascale Applications
Energy Science and Technology Software Center
2010-02-24
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes (computational models) and iterative analysis methods. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and analysis of computational models on high performance computers. A user provides a set of DAKOTA commands in an input file and launches DAKOTA. DAKOTA invokes instances of the computational models, collects their results, and performs systems analyses. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, polynomial chaos, stochastic collocation, and epistemic methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as hybrid optimization, surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. Services for parallel computing, simulation interfacing, approximation modeling, fault tolerance, restart, and graphics are also included.
SOAR Sandia Optimization and Analysis Routines
Energy Science and Technology Software Center
2003-08-20
SOAR is a suite of problem-specific desktop welding software applications to develop optimal automatic weld procedures. Applications provide interactive displays of fusion zone dimensions versus input parameter levels in a weldment. SOAR also displays heat-affected zones, temperature contours, process efficiencies, and sensitivity parameters by computing solutions to analytical and/or empirical heat transfer models. SOAR provides the knowledgeable user valuable analysis tools to investigate the impact of changes in weld procedures on weld characteristics. SOAR operates via graphical user system input and returns both numerical and graphical output in both electronic and hardcopy form.
Optimal bolt preload for dynamic loading
Duffey, T.A.
1992-08-01
A simple spring-mass model is developed for closure bolting systems, including the effects of bolt prestress. An analytical solution is developed for the case of an initially peaked, exponentially decaying internal pressure pulse acting on the closure. The dependence of peak bolt stresses and deflections on bolt prestress level is investigated and an optimal prestress that minimizes peak bolt stress is found in certain cases. Vulnerability curves are developed for bolted-closure systems to provide rapid evaluation of the dynamic capacity of designs for a range in bolt prestress.
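The paper's closure model includes bolt prestress and vulnerability curves; as a hedged sketch of just the forced spring-mass part (ignoring prestress, contact, and damping; all parameter values are illustrative), one can integrate a single-degree-of-freedom oscillator under an initially peaked, exponentially decaying pulse and compare the peak response to the static deflection:

```python
import math

def peak_response(m, k, p0, tau, t_end, dt):
    """Peak displacement of m*x'' + k*x = p0*exp(-t/tau), x(0) = x'(0) = 0,
    integrated with semi-implicit (symplectic) Euler time stepping."""
    x, v, t, peak = 0.0, 0.0, 0.0, 0.0
    while t < t_end:
        a = (p0 * math.exp(-t / tau) - k * x) / m
        v += a * dt        # update velocity first (symplectic)
        x += v * dt        # then position with the new velocity
        peak = max(peak, abs(x))
        t += dt
    return peak

m, k = 1.0, 400.0                               # natural period ~0.314 s
p0 = 1000.0
x_static = p0 / k                               # deflection under the peak load
tau = 5.0 * 2.0 * math.pi / math.sqrt(k / m)    # pulse decays slowly vs T_n
dlf = peak_response(m, k, p0, tau, t_end=1.0, dt=1e-5) / x_static
```

For a pulse that decays slowly relative to the natural period, the dynamic load factor `dlf` approaches the classic factor of 2 for a suddenly applied load; it is this dynamic amplification, combined with the preload level, that sets the peak bolt stress the paper minimizes.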
Nonlinear simulations to optimize magnetic nanoparticle hyperthermia
Reeves, Daniel B.; Weaver, John B.
2014-03-10
Magnetic nanoparticle hyperthermia is an attractive emerging cancer treatment, but the acting microscopic energy deposition mechanisms are not well understood and optimization suffers. We describe several approximate forms for the characteristic time of Néel rotations with varying properties and external influences. We then present stochastic simulations that show agreement between the approximate expressions and the micromagnetic model. The simulations show nonlinear imaginary responses and associated relaxational hysteresis due to the field and frequency dependencies of the magnetization. This suggests that efficient heating is possible by matching fields to particles instead of resorting to maximizing the power of the applied magnetic fields.
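One widely used approximate form for the characteristic time of Néel rotations is the zero-field Arrhenius expression tau = tau0 * exp(K*V / (kB*T)). The sketch below evaluates it for generic magnetite-like parameter values; both the values and the choice of this simplest form are assumptions for illustration, not taken from the paper:

```python
import math

K_B = 1.380649e-23  # Boltzmann constant, J/K

def neel_relaxation_time(radius_m, anisotropy_j_per_m3, temp_k, tau0=1e-9):
    """Zero-field Arrhenius estimate of the Neel relaxation time:
    tau = tau0 * exp(K*V / (kB*T)) for a single-domain spherical particle."""
    volume = (4.0 / 3.0) * math.pi * radius_m ** 3
    return tau0 * math.exp(anisotropy_j_per_m3 * volume / (K_B * temp_k))

# Assumed magnetite-like anisotropy K ~ 1.1e4 J/m^3 at T = 300 K.
t6 = neel_relaxation_time(6e-9, 1.1e4, 300.0)    # 6 nm core radius
t10 = neel_relaxation_time(10e-9, 1.1e4, 300.0)  # 10 nm
t14 = neel_relaxation_time(14e-9, 1.1e4, 300.0)  # 14 nm
```

The steep exponential dependence on particle volume is the reason matching the applied-field frequency to the particle size distribution, rather than simply maximizing field power, matters for heating efficiency.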
Optimal charging profiles for mechanically constrained lithium-ion batteries
Suthar, B; Ramadesigan, V; De, S; Braatz, RD; Subramanian, VR
2014-01-01
The cost and safety related issues of lithium-ion batteries require intelligent charging profiles that can efficiently utilize the battery. This paper illustrates the application of dynamic optimization in obtaining the optimal current profile for charging a lithium-ion battery using a single-particle model while incorporating intercalation-induced stress generation. In this paper, we focus on the problem of maximizing the charge stored in a given time while restricting the development of stresses inside the particle. Conventional charging profiles for lithium-ion batteries (e.g., constant current followed by constant voltage) were not derived by considering capacity fade mechanisms. These charging profiles are not only inefficient in terms of lifetime usage of the batteries but are also slower since they do not exploit the changing dynamics of the system. Dynamic optimization based approaches have been used to derive optimal charging and discharging profiles with different objective functions. The progress made in understanding the capacity fade mechanisms has paved the way for inclusion of that knowledge in deriving optimal controls. While past efforts included thermal constraints, this paper for the first time presents strategies for optimally charging batteries by guaranteeing minimal mechanical damage to the electrode particles during intercalation. In addition, an executable form of the code has been developed and provided. This code can be used to identify optimal charging profiles for any material and design parameters.
Optimizing Installation, Operation, and Maintenance at Offshore...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Optimizing Installation, Operation, and Maintenance at Offshore Wind Projects in the United States
Design Optimization of Piezoceramic Multilayer Actuators for...
Energy.gov [DOE] (indexed site)
More Documents & Publications Design Optimization of Piezoceramic Multilayer Actuators for Heavy Duty Diesel Engine Fuel Injectors Design Optimization of Piezoceramic Multilayer ...
Design Optimization of Piezoceramic Multilayer Actuators for...
Energy.gov [DOE] (indexed site)
Publications Design Optimization of Piezoceramic Multilayer Actuators for Heavy Duty Diesel Engine Fuel Injectors Vehicle Technologies Office Merit Review 2014: Design Optimization ...
Course Overview Pump Systems Matter Optimization | Department...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Attendees of the "Pump Systems Optimization" one-day course will gain valuable new skills to help them improve...
Optimize carbon dioxide sequestration, enhance oil recovery
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Optimize carbon dioxide sequestration, enhance oil recovery The simulation provides an important approach to estimate ...
Data Center Optimization Plan | Department of Energy
Data Center Optimization Plan The Department of Energy (DOE) is committed to the overall reduction in the number of its data centers, consolidation of ...
Energy Optimizers USA | Open Energy Information
OpenEI (Open Energy Information) [EERE & EIA]
Name: Energy Optimizers USA Address: 6 S. 3rd Street Place: Tipp City, Ohio Zip: 45371 Sector: Biomass, Carbon, Geothermal energy,...
Optimization of Advanced Diesel Engine Combustion Strategies...
Energy.gov [DOE] (indexed site)
Optimization of Advanced Diesel Engine Combustion Strategies Use of Low Cetane Fuel to Enable Low Temperature ...
Reservoir-Stimulation Optimization with Operational Monitoring...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Reservoir-Stimulation Optimization with Operational Monitoring for Creation of Enhanced Geothermal Systems
Intel compiler performance optimization and characterization
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Intel compiler performance optimization and characterization May 13, 2015 NERSC will host an in-depth training presentation...
Parallel Programming and Optimization for Intel Architecture
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Parallel Programming and Optimization for Intel Architecture August 14, 2015 by Richard Gerber Intel is sponsoring a ...
Michigan: General Motors Optimizes Engine Valve Technology |...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Michigan: General Motors Optimizes Engine Valve Technology November 8, 2013 - 12:00am An EERE-supported effort to ...
Computationally Optimized Homogenization Heat Treatment of Metal...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Computationally Optimized Homogenization Heat Treatment of Metal Alloys ... PDF Document Publication
Optimization of Water to Fuel Ratios in Cladded Cylinder Arrays
Huffer, J
2007-03-14
Often in criticality safety problems, the analyst is concerned about two conditions: Loss of Mass Control and Loss of Moderation Control. Determining and modeling the maximum amount of fuel that can fit in a given container is usually trivial. Determining and modeling the maximum amount of water (or other potential moderator) is usually more difficult. Optimization of the pitch has been shown to provide an increase in system reactivity. Both MOX and LEU systems have been shown to be sensitive to moderator intrusion in varying pitched configurations. The analysis will have to determine the effect of optimizing the pitch for each array.
Stochastic Optimal Control for Series Hybrid Electric Vehicles
Malikopoulos, Andreas
2013-01-01
Increasing demand for improving fuel economy and reducing emissions has stimulated significant research and investment in hybrid propulsion systems. In this paper, we address the problem of optimizing the supervisory control in a series hybrid configuration online by modeling its operation as a controlled Markov chain using the average cost criterion. We treat the stochastic optimal control problem as a dual constrained optimization problem. We show that the control policy that yields higher probability distribution to the states with low cost and lower probability distribution to the states with high cost is an optimal control policy, defined as an equilibrium control policy. We demonstrate the effectiveness and efficiency of the proposed controller in a series hybrid configuration and compare it with a thermostat-type controller.
An Optimization-based Atomistic-to-Continuum Coupling Method
Olson, Derek; Bochev, Pavel B.; Luskin, Mitchell; Shapeev, Alexander V.
2014-08-21
In this paper, we present a new optimization-based method for atomistic-to-continuum (AtC) coupling. The main idea is to cast the latter as a constrained optimization problem with virtual Dirichlet controls on the interfaces between the atomistic and continuum subdomains. The optimization objective is to minimize the error between the atomistic and continuum solutions on the overlap between the two subdomains, while the atomistic and continuum force balance equations provide the constraints. Separation, rather than blending, of the atomistic and continuum problems, and their subsequent use as constraints in the optimization problem, distinguishes our approach from the existing AtC formulations. Finally, we present and analyze the method in the context of a one-dimensional chain of atoms modeled using a linearized two-body potential with next-nearest neighbor interactions.
On combining Laplacian and optimization-based mesh smoothing techniques
Freitag, L.A.
1997-07-01
Local mesh smoothing algorithms have been shown to be effective in repairing distorted elements in automatically generated meshes. The simplest such algorithm is Laplacian smoothing, which moves grid points to the geometric center of incident vertices. Unfortunately, this method operates heuristically and can create invalid meshes or elements of worse quality than those contained in the original mesh. In contrast, optimization-based methods are designed to maximize some measure of mesh quality and are very effective at eliminating extremal angles in the mesh. These improvements come at a higher computational cost, however. In this article the author proposes three smoothing techniques that combine a smart variant of Laplacian smoothing with an optimization-based approach. Several numerical experiments are performed that compare the mesh quality and computational cost for each of the methods in two and three dimensions. The author finds that the combined approaches are very cost effective and yield high-quality meshes.
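The "smart" Laplacian variant mentioned above can be sketched as: propose the centroid of the neighboring vertices, but accept the move only if the worst incident element quality improves. The four-triangle fan mesh and the normalized area-to-edge-length quality measure below are illustrative choices, not necessarily those of the article:

```python
import math

def tri_quality(a, b, c):
    """Normalized quality 4*sqrt(3)*area / (sum of squared edge lengths):
    1 for an equilateral triangle, <= 0 for a degenerate or inverted one."""
    area2 = (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])
    lsq = sum((p[0] - q[0]) ** 2 + (p[1] - q[1]) ** 2
              for p, q in ((a, b), (b, c), (c, a)))
    return 2.0 * math.sqrt(3.0) * area2 / lsq   # 4*sqrt(3)*(area2/2)/lsq

def smart_laplacian(center, ring):
    """Move center to the centroid of its neighbor ring only if the
    minimum quality of the incident (fan) triangles improves."""
    cand = (sum(p[0] for p in ring) / len(ring),
            sum(p[1] for p in ring) / len(ring))
    fan = lambda v: min(tri_quality(v, ring[i], ring[(i + 1) % len(ring)])
                        for i in range(len(ring)))
    return cand if fan(cand) > fan(center) else center

ring = [(0.0, 0.0), (2.0, 0.0), (2.0, 2.0), (0.0, 2.0)]  # CCW boundary vertices
moved = smart_laplacian((1.8, 0.2), ring)                # distorted interior node
```

Here the distorted node moves to (1, 1) because the worst fan-triangle quality rises; plain Laplacian smoothing would make the same move unconditionally, which is exactly how it can create inverted or worse-quality elements.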
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexandre E. (Chappaqua, NY); Gschwind, Michael K. (Chappaqua, NY); Gunnels, John A. (Yorktown Heights, NY)
2012-08-28
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
Optimized scalar promotion with load and splat SIMD instructions
Eichenberger, Alexander E; Gschwind, Michael K; Gunnels, John A
2013-10-29
Mechanisms for optimizing scalar code executed on a single instruction multiple data (SIMD) engine are provided. Placement of vector operation-splat operations may be determined based on an identification of scalar and SIMD operations in an original code representation. The original code representation may be modified to insert the vector operation-splat operations based on the determined placement of vector operation-splat operations to generate a first modified code representation. Placement of separate splat operations may be determined based on identification of scalar and SIMD operations in the first modified code representation. The first modified code representation may be modified to insert or delete separate splat operations based on the determined placement of the separate splat operations to generate a second modified code representation. SIMD code may be output based on the second modified code representation for execution by the SIMD engine.
Distributed optimization system and method
Hurtado, John E.; Dohrmann, Clark R.; Robinett, III, Rush D.
2003-06-10
A search system and method for controlling multiple agents to optimize an objective using distributed sensing and cooperative control. The search agent can be one or more physical agents, such as a robot, and can be software agents for searching cyberspace. The objective can be: chemical sources, temperature sources, radiation sources, light sources, evaders, trespassers, explosive sources, time dependent sources, time independent sources, function surfaces, maximization points, minimization points, and optimal control of a system such as a communication system, an economy, a crane, and a multi-processor computer.
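A toy sketch of such cooperative search, assuming a hypothetical scalar signal field and simple update rules (the patent's actual control laws are not specified here): the current best agent hill-climbs locally while the other agents contract toward it.

```python
def intensity(p, source=(3.0, 4.0)):
    """Hypothetical sensed signal: strongest at the (unknown) source."""
    return -((p[0] - source[0]) ** 2 + (p[1] - source[1]) ** 2)

def cooperative_search(agents, steps=200, probe=0.25, pull=0.2):
    """Best agent probes 8 compass directions and keeps any improvement;
    the rest move a fraction of the way toward the best agent."""
    dirs = [(1, 0), (-1, 0), (0, 1), (0, -1),
            (1, 1), (1, -1), (-1, 1), (-1, -1)]
    agents = [tuple(a) for a in agents]
    for _ in range(steps):
        best = max(range(len(agents)), key=lambda i: intensity(agents[i]))
        bx, by = agents[best]
        cands = [(bx + probe * dx, by + probe * dy) for dx, dy in dirs]
        top = max(cands, key=intensity)
        if intensity(top) > intensity((bx, by)):
            agents[best] = top                 # local hill-climbing step
        agents = [a if i == best else          # others contract toward best
                  (a[0] + pull * (agents[best][0] - a[0]),
                   a[1] + pull * (agents[best][1] - a[1]))
                  for i, a in enumerate(agents)]
    return agents

swarm = cooperative_search([(0.0, 0.0), (9.0, 1.0), (8.0, 8.0), (1.0, 9.0)])
```

The same skeleton applies whether the "agents" are robots sensing a chemical gradient or software agents scoring candidate solutions; only `intensity` changes.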
CHP Installed Capacity Optimizer Software
Energy Science and Technology Software Center
2004-11-30
The CHP Installed Capacity Optimizer is a Microsoft Excel spreadsheet application that determines the most economic amount of capacity of distributed generation and thermal utilization equipment (e.g., absorption chillers) to install for any user-defined set of load and cost data. Installing the optimum amount of capacity is critical to the life-cycle economic viability of a distributed generation/cooling heat and power (CHP) application. Using advanced optimization algorithms, the software accesses the loads, utility tariffs, equipment costs, etc., and provides to the user the most economic amount of system capacity to install.
Zarepisheh, M; Li, R; Xing, L; Ye, Y; Boyd, S
2014-06-01
Purpose: Station Parameter Optimized Radiation Therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital LINACs, in which the station parameters of a delivery system (such as aperture shape and weight, couch position/angle, gantry/collimator angle) are optimized altogether. SPORT promises to deliver unprecedented radiation dose distributions efficiently, yet no optimization algorithm exists to implement it. The purpose of this work is to propose an optimization algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: We build a mathematical model whose variables are beam angles (including non-coplanar and/or even non-isocentric beams) and aperture shapes. To solve the resulting large-scale optimization problem, we devise an exact, convergent and fast optimization algorithm by integrating three advanced optimization techniques: column generation, the gradient method, and pattern search. Column generation is used to find a good set of aperture shapes as an initial solution by adding apertures sequentially. Then we apply the gradient method to iteratively improve the current solution by reshaping the aperture shapes and updating the beam angles toward the gradient. The algorithm then continues with a pattern search method to explore the part of the search space that cannot be reached by the gradient method. Results: The proposed technique is applied to a series of patient cases and significantly improves the plan quality. In a head-and-neck case, for example, the left parotid gland mean-dose, brainstem max-dose, spinal cord max-dose, and mandible mean-dose are reduced by 10%, 7%, 24% and 12% respectively, compared to the conventional VMAT plan while maintaining the same PTV coverage. Conclusion: Combined use of column generation, gradient search and pattern search algorithms provide an effective way to optimize simultaneously the large collection of station parameters and significantly improves
The Origin of Mass (Conference) | SciTech Connect
Office of Scientific and Technical Information (OSTI)
The Origin of Mass Citation Details In-Document Search Title: The Origin of Mass You are accessing a document from the Department of Energy's (DOE) SciTech Connect. This site is ...
OpenEI:No original research | Open Energy Information
OpenEI (Open Energy Information) [EERE & EIA]
No original research Jump to: navigation, search OpenEI is a platform for bringing together the world's energy information. It is not a platform for original research. This means...
Domestic Coal Distribution 2009 Q1 by Origin State: Alabama
Energy Information Administration (EIA) (indexed site)
Q1 by Origin State: Alabama (1000 Short Tons) 1 58 Domestic Coal Distribution 2009 Q1 by Origin State: Alabama (1000 Short Tons) Destination State Transportation Mode Electricity...
Domestic Coal Distribution 2009 Q2 by Origin State: Alabama
Energy Information Administration (EIA) (indexed site)
Q2 by Origin State: Alabama (1000 Short Tons) 1 58 Domestic Coal Distribution 2009 Q2 by Origin State: Alabama (1000 Short Tons) Destination State Transportation Mode Electricity...
MULTIOBJECTIVE OPTIMIZATION POWER GENERATION SYSTEMS INVOLVING CHEMICAL LOOPING COMBUSTION
Juan M. Salazar; Urmila M. Diwekar; Stephen E. Zitney
2009-01-01
Integrated Gasification Combined Cycle (IGCC) systems using coal gasification are an important approach for future energy options. This work focuses on understanding the system operation and optimizing it in the presence of uncertain operating conditions using ASPEN Plus and the CAPE-OPEN-compliant stochastic simulation and multiobjective optimization capabilities developed by the Vishwamitra Research Institute. The feasible operating surface for the IGCC system is generated and deterministic multiobjective optimization is performed. Since the feasible operating space is highly non-convex, heuristics-based techniques that do not require gradient information are used to generate the Pareto surface. Accurate CFD models are simultaneously developed for the gasifier and chemical looping combustion system to characterize and quantify the process uncertainty in the ASPEN model.
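The core operation behind generating a Pareto surface with gradient-free heuristics is the non-domination test. A generic brute-force filter is sketched below (illustrative only, with made-up candidate points; not the Vishwamitra Research Institute code):

```python
# Brute-force Pareto filter for minimization objectives: keep each
# candidate operating point unless some other point is at least as good
# in every objective and strictly better in at least one.

def pareto_front(points):
    """Return the points not dominated by any other (all objectives minimized)."""
    def dominates(a, b):
        return (all(x <= y for x, y in zip(a, b))
                and any(x < y for x, y in zip(a, b)))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# hypothetical (cost, emissions) trade-off for five candidate operating points
candidates = [(1.0, 9.0), (2.0, 7.0), (3.0, 8.0), (4.0, 3.0), (5.0, 4.0)]
front = pareto_front(candidates)
```

Here (3.0, 8.0) and (5.0, 4.0) are dominated and drop out; the survivors trace the trade-off curve that a multiobjective heuristic reports.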
Optimal design of a pilot OTEC power plant in Taiwan
Tseng, C.H.; Kao, K.Y.; Yang, J.C.
1991-12-01
In this paper, an optimal design concept has been utilized to find the best designs for a complex, large-scale ocean thermal energy conversion (OTEC) plant. The OTEC power plant under study is divided into three major subsystems: a power subsystem, a seawater pipe subsystem, and a containment subsystem. The design optimization model for the entire OTEC plant is integrated from these subsystems under their respective design criteria and constraints. The mathematical formulations of this optimization model for the entire plant are described. The design variables, objective function, and constraints for a pilot plant, subject to the technologies currently feasible in Taiwan, have been carefully examined and selected.
Orbital-optimized density cumulant functional theory
Sokolov, Alexander Yu.; Schaefer, Henry F.
2013-11-28
In density cumulant functional theory (DCFT) the electronic energy is evaluated from the one-particle density matrix and two-particle density cumulant, circumventing the computation of the wavefunction. To achieve this, the one-particle density matrix is decomposed exactly into the mean-field (idempotent) and correlation components. While the latter can be entirely derived from the density cumulant, the former must be obtained by choosing a specific set of orbitals. In the original DCFT formulation [W. Kutzelnigg, J. Chem. Phys. 125, 171101 (2006)] the orbitals were determined by diagonalizing the effective Fock operator, which introduces partial orbital relaxation. Here we present a new orbital-optimized formulation of DCFT where the energy is variationally minimized with respect to orbital rotations. This introduces important energy contributions and significantly improves the description of the dynamic correlation. In addition, it greatly simplifies the computation of analytic gradients, for which expressions are also presented. We offer a perturbative analysis of the new orbital stationarity conditions and benchmark their performance for a variety of chemical systems.
Equivalent Relaxations of Optimal Power Flow
Bose, S; Low, SH; Teeraratkul, T; Hassibi, B
2015-03-01
Several convex relaxations of the optimal power flow (OPF) problem have recently been developed using both bus injection models and branch flow models. In this paper, we prove relations among three convex relaxations: a semidefinite relaxation that computes a full matrix, a chordal relaxation based on a chordal extension of the network graph, and a second-order cone relaxation that computes the smallest partial matrix. We prove a bijection between the feasible sets of the OPF in the bus injection model and the branch flow model, establishing the equivalence of these two models and their second-order cone relaxations. Our results imply that, for radial networks, all these relaxations are equivalent and one should always solve the second-order cone relaxation. For mesh networks, the semidefinite relaxation and the chordal relaxation are equally tight and both are strictly tighter than the second-order cone relaxation. Therefore, for mesh networks, one should either solve the chordal relaxation or the SOCP relaxation, trading off tightness and the required computational effort. Simulations are used to illustrate these results.
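The per-edge condition behind the second-order cone relaxation discussed above can be sketched schematically: each 2x2 principal submatrix of the partial matrix W must be positive semidefinite. This is an illustrative check using placeholder bus values, not code from the paper:

```python
# Schematic feasibility check for the SOCP relaxation's partial matrix:
# for each network edge (i, j), the submatrix [[W_ii, W_ij], [conj(W_ij), W_jj]]
# must be PSD, i.e. W_ii >= 0, W_jj >= 0, and W_ii * W_jj >= |W_ij|**2.

def socp_feasible(W_diag, W_edges):
    """W_diag: {bus: W_ii} (real); W_edges: {(i, j): W_ij} (complex), per edge."""
    if any(w < 0 for w in W_diag.values()):
        return False
    return all(W_diag[i] * W_diag[j] >= abs(w) ** 2
               for (i, j), w in W_edges.items())

# hypothetical 3-bus radial network values
diag = {1: 1.0, 2: 1.1, 3: 0.9}
ok = socp_feasible(diag, {(1, 2): 1.0 + 0.2j, (2, 3): 0.9})
bad = socp_feasible(diag, {(1, 2): 1.2 + 0.5j})
```

The paper's result is that for radial networks this cheap per-edge condition is exactly as tight as the full semidefinite constraint, which is why the SOCP relaxation is preferred there.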
Storage Viability and Optimization Web Service
Stadler, Michael; Marnay, Chris; Lai, Judy; Siddiqui, Afzal; Limpaitoon, Tanachai; Phan, Trucy; Megel, Olivier; Chang, Jessica; DeForest, Nicholas
2010-10-11
Non-residential sectors offer many promising applications for electrical storage (batteries) and photovoltaics (PV). However, choosing and operating storage under complex tariff structures poses a daunting technical and economic problem that may discourage potential customers and result in lost carbon and economic savings. Equipment vendors are unlikely to provide adequate environmental analysis or unbiased economic results to potential clients, and are even less likely to completely describe the robustness of choices in the face of changing fuel prices and tariffs. Given these considerations, researchers at Lawrence Berkeley National Laboratory (LBNL) have designed the Storage Viability and Optimization Web Service (SVOW): a tool that helps building owners, operators, and managers decide whether storage technologies and PV merit deeper analysis. SVOW is an open-access, web-based energy storage and PV analysis calculator, accessible by secure remote login. Upon first login, the user sees an overview of the parameters: load profile, tariff, technologies, and solar radiation location. Each parameter has a pull-down list of possible predefined inputs, and users may upload their own as necessary. Since the non-residential sectors encompass a broad range of facilities with fundamentally different characteristics, the tool starts by asking the user to select a load profile from a limited cohort of example facilities, categorized according to their North American Industry Classification System (NAICS) code. After the load profile selection, users select a predefined tariff or use the widget to create their own. The technologies and solar radiation menus operate in a similar fashion. After these four parameters have been entered, the user selects an optimization setting and an optimization objective. The analytic engine of SVOW is LBNL's Distributed Energy Resources Customer Adoption Model (DER-CAM), which is a mixed-integer linear program.
2015-09-01
The Biomass Scenario Model (BSM) is a unique, carefully validated, state-of-the-art dynamic model of the domestic biofuels supply chain that explicitly focuses on policy issues, their feasibility, and potential side effects. It integrates resource availability; physical, technological, and economic constraints; behavior; and policy. The model uses system dynamics simulation (not optimization) to model dynamic interactions across the supply chain.
Soliton molecules: Experiments and optimization
Mitschke, Fedor
2014-10-06
Stable compound states of several fiber-optic solitons have recently been demonstrated. In the first experiment their shape was approximated, for want of a better description, by a sum of Gaussians. Here we discuss an optimization strategy which helps to find preferable shapes so that the generation of radiative background is reduced.
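The "sum of Gaussians" approximation mentioned above can be illustrated with a generic least-squares fit. The pulse parameters below are hypothetical, chosen only to show the ansatz; this is not the experimental data or the authors' optimization strategy:

```python
# Fit a two-Gaussian ansatz to a sampled pulse shape with nonlinear
# least squares (scipy.optimize.curve_fit).

import numpy as np
from scipy.optimize import curve_fit

def two_gaussians(t, a1, t1, w1, a2, t2, w2):
    return (a1 * np.exp(-((t - t1) / w1) ** 2)
            + a2 * np.exp(-((t - t2) / w2) ** 2))

t = np.linspace(-10, 10, 400)
# synthetic "measurement": two humps, mimicking a bound soliton pair
truth = two_gaussians(t, 1.0, -2.0, 1.5, 0.8, 2.5, 1.2)
p0 = [1.0, -1.0, 1.0, 1.0, 2.0, 1.0]      # rough initial guess
popt, _ = curve_fit(two_gaussians, t, truth, p0=p0)
```

The optimization strategy in the paper goes further, searching for preferable (non-Gaussian) shapes that reduce the radiative background; the fit above only reproduces the original descriptive ansatz.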
Optimal deployment of solar index
Croucher, Matt
2010-11-15
There is a growing trend, generally driven by state-specific renewable portfolio standards (RPS), toward increasing the importance of renewable electricity generation within generation portfolios. While RPS assist in determining the composition of generation, they do not, for the most part, dictate its location. Using data from various public sources, the authors create an optimal index for solar deployment. (author)
Multicriteria optimization informed VMAT planning
Chen, Huixiao; Craft, David L.; Gierga, David P.
2014-04-01
We developed a patient-specific volumetric-modulated arc therapy (VMAT) optimization procedure using dose-volume histogram (DVH) information from multicriteria optimization (MCO) of intensity-modulated radiotherapy (IMRT) plans. The study included 10 patients with prostate cancer undergoing standard-fractionation treatment, 10 patients with prostate cancer undergoing hypofractionation treatment, and 5 patients with head/neck cancer. MCO-IMRT plans using 20 and 7 treatment fields were generated for each patient on the RayStation treatment planning system (clinical version 2.5, RaySearch Laboratories, Stockholm, Sweden). The resulting DVH of the 20-field MCO-IMRT plan for each patient was used as the reference DVH, and point values extracted from the resulting DVH of the MCO-IMRT plan were used as objectives and constraints for VMAT optimization. The weights of the objectives and/or constraints of the VMAT optimization were further tuned to generate the best match with the reference DVH of the MCO-IMRT plan. The final optimal VMAT plan quality was evaluated by comparison with the MCO-IMRT plans based on homogeneity index, conformity number of the planning target volume, and organ-at-risk sparing. The influence of gantry spacing, arc number, and delivery time on VMAT plan quality for different tumor sites was also evaluated. The resulting VMAT plan quality essentially matched the 20-field MCO-IMRT plan but with a shorter delivery time and fewer monitor units. VMAT plan quality of the head/neck cancer cases improved using dual arcs, whereas the prostate cases did not. VMAT plan quality was improved by a fine gantry spacing of 2° for the head/neck cancer cases and the hypofractionation-treated prostate cancer cases, but not for the standard-fractionation-treated prostate cancer cases. MCO-informed VMAT optimization is a useful and valuable way to generate patient-specific optimal VMAT plans, though modification of the weights of objectives or constraints extracted from the resulting DVH of MCO-IMRT plans may be required.
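The cumulative DVH from which the point objectives and constraints described here are extracted can be computed as in this minimal sketch (illustrative voxel doses and dose bins, not RayStation code):

```python
# Cumulative dose-volume histogram: for each dose level, the fraction of
# the structure's volume receiving at least that dose. Point values read
# off this curve become the objectives/constraints for plan optimization.

import numpy as np

def cumulative_dvh(dose, bins):
    """Fraction of structure volume receiving at least each dose level."""
    dose = np.asarray(dose, dtype=float)
    return np.array([(dose >= b).mean() for b in bins])

voxel_doses = [10, 20, 30, 40, 50, 60, 70, 80]   # Gy, one value per voxel
dvh = cumulative_dvh(voxel_doses, bins=[0, 25, 45, 65, 85])
# dvh = [1.0, 0.75, 0.5, 0.25, 0.0]
```

Matching a reference DVH then amounts to penalizing the gap between such curves at the extracted dose points.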
Role of Design Standards in Wind Plant Optimization (Presentation)
Veers, P.; Churchfield, M.; Lee, S.; Moon, J.; Larsen, G.
2013-10-01
When a turbine is optimized, it is done within the design constraints established by the objective criteria in the international design standards used to certify a design. Since these criteria are multifaceted, conducting the optimization is a challenging task, but it can be done. The optimization is facilitated by the fact that a standard turbine model is subjected to standard inflow conditions that are well characterized in the standard. Examples of applying these conditions to rotor optimization are examined. In other cases, an innovation may provide substantial improvement in one area but be challenged to impact all of the myriad design load cases. When a turbine is placed in a wind plant, the challenge is magnified. Typical design practice optimizes the turbine for stand-alone operation and then runs a check on the actual site conditions, including wakes from all nearby turbines. Thus, each turbine in a plant has unique inflow conditions. The possibility of creating objective and consistent inflow conditions for turbines within a plant, for use in optimization of both the turbine and the plant, is examined with examples taken from large-eddy simulation (LES).
Phase II Final Report Computer Optimization of Electron Guns
R. Lawrence Ives; Thuc Bui; Hien Tran; Michael Read; Adam Attarian; William Tallis
2011-04-15
This program implemented advanced computer optimization into an adaptive-mesh, finite-element, 3D charged-particle code. The routines can optimize electron gun performance to achieve a specified current, beam size, and perveance. They can also minimize beam ripple and electric field gradients. The magnetics optimization capability allows design of coil geometries and magnetic material configurations to achieve a specified axial magnetic field profile. The optimization control program, built into the charged-particle code Beam Optics Analyzer (BOA), utilizes a 3D solid modeling package to modify geometry using design tables. Parameters within the graphical user interface (currents, voltages, etc.) can be directly modified within BOA. The program implemented advanced post-processing capability for the optimization routines as well as the user. A graphical user interface allows the user to set up goal functions, select variables, establish ranges of variation, and define performance criteria. The optimization capability allowed development of a doubly convergent multiple-beam gun that could not be designed using previous techniques.
Spamology: A Study of Spam Origins
Shue, Craig A; Gupta, Prof. Minaxi; Kong, Chin Hua; Lubia, John T.; Yuksel, Asim S.
2009-01-01
The rise of spam in the last decade has been staggering, with the rate of spam exceeding that of legitimate email. While conjectures exist on how spammers gain access to email addresses to spam, most work in the area of spam containment has either focused on better spam filtering methodologies or on understanding the botnets commonly used to send spam. In this paper, we aim to understand the origins of spam. We post dedicated email addresses to record how and where spammers go to obtain email addresses. We find that posting an email address on public Web pages yields immediate and high-volume spam. Surprisingly, even simple email obfuscation approaches are still sufficient today to prevent spammers from harvesting emails. We also find that attempts to find open relays continue to be popular among spammers. The insights we gain on the use of Web crawlers used to harvest email addresses and the commonalities of techniques used by spammers open the door for radically different follow-up work on spam containment and even systematic enforcement of spam legislation at a large scale.
Tectonic origin of Crowley's Ridge, northeastern Arkansas
VanArsdale, R.B. (Univ. of Arkansas, Fayetteville, AR, Geology Dept.); Williams, R.A.; Shedlock, K.M.; King, K.W.; Odum, J.K. (Geological Survey, Denver, CO, Denver Federal Center); Schweig, E.S. III; Kanter, L.R. (Memphis State Univ., TN)
1992-01-01
Crowley's Ridge is a 320 km long topographic ridge that extends from Thebes, Illinois to Helena, Arkansas. The ridge has been interpreted as an erosional remnant formed during Quaternary incision of the ancestral Mississippi and Ohio rivers; however, the Reelfoot Rift COCORP line identified a down-to-the-west fault bounding the western margin of Crowley's Ridge south of Jonesboro, Arkansas. Subsequent Mini-Sosie seismic reflection profiles confirmed the COCORP data and identified additional faults beneath other margins of the ridge. In each case the faults lie beneath the base of the ridge scarp. The Mini-Sosie data did not resolve the uppermost 150 m and so it was not possible to determine if the faults displace the near-surface Claiborne Group (middle Eocene). A shotgun source seismic reflection survey was subsequently conducted to image the uppermost 250 m across the faulted margins. The shotgun survey across the western margin of the ridge south of Jonesboro reveals displaced reflectors as shallow as 30 m depth. Claiborne Group strata are displaced approximately 6 m and it appears that some of the topographic relief of Crowley's Ridge at this location is due to post middle Eocene fault displacement. Based on the reflection data, the authors suggest that Crowley's Ridge is tectonic in origin.
Understanding the origins of human cancer
Alexandrov, L. B.
2015-12-04
All cancers originate from a single cell that starts to behave abnormally, to divide uncontrollably, and, eventually, to invade adjacent tissues (1). The aberrant behavior of this single cell is due to somatic mutations—changes in the genomic DNA produced by the activity of different mutational processes (1). These various mutational processes include exposure to exogenous or endogenous mutagens, abnormal DNA editing, the incomplete fidelity of DNA polymerases, and failure of DNA repair mechanisms (2). Early studies that sequenced TP53, the most commonly mutated gene in human cancer, provided evidence that mutational processes leave distinct imprints of somatic mutations on the genome of a cancer cell (3). For example, C:G>A:T transversions predominate in smoking-associated lung cancer, whereas C:G>T:A transitions occurring mainly at dipyrimidines and CC:GG>TT:AA double-nucleotide substitutions are common in ultraviolet light–associated skin cancers. Moreover, these patterns of mutations matched the ones induced experimentally by tobacco mutagens and ultraviolet light, respectively, the major, known, exogenous carcinogenic influences in these cancer types, and demonstrated that examining patterns of mutations in cancer genomes can yield information about the mutational processes that cause human cancer (4).
Peloids: a bacterially-induced origin
Chafetz, H.S.
1985-01-01
The origin of peloids within modern reef accumulations has been a controversial subject for almost 20 years. Freshly broken and slabbed splits of samples from Holocene-Pleistocene reef tracts from Jamaica, Belize, and Florida were observed with an SEM; the majority of the specimens had been etched in dilute HCl prior to coating. Peloids commonly occur within borings in corals and other reef constituents. The peloids are spherical bodies, generally 20-60 µm in diameter, composed of high-magnesian calcite. They have a fine-grained center of anhedral grains and a dentate exterior of clear euhedral spar. In thin section, the centers commonly are light brown, indicating the presence of organic matter. Spherical to elliptical bacterial clumps, approximately 15 µm in diameter, are evident in SEM views of etched samples from all three locales, whereas no bacteria were observed in non-etched samples. Their apparent absence in non-etched samples is because they occur encased in calcite. The reefal peloids are similar to bacterially-induced precipitates that occur in some travertine deposits. The similarities include diameter (20-60 µm), structure, composition, and occurrence in a restricted or harsh environment (borings within corals or hot H₂S-rich waters). Laboratory experiments have demonstrated that bacteria can induce carbonate precipitation. Therefore, it is my contention that peloids in modern reefs are bacterially-induced precipitated grains.
Stillwater Hybrid Geo-Solar Power Plant Optimization Analyses
Wendt, Daniel S.; Mines, Gregory L.; Turchi, Craig S.; Zhu, Guangdong; Cohan, Sander; Angelini, Lorenzo; Bizzarri, Fabrizio; Consoli, Daniele; De Marzo, Alessio
2015-09-02
The Stillwater Power Plant is the first hybrid plant in the world able to bring together a medium-enthalpy geothermal unit with solar thermal and solar photovoltaic systems. Solar field and power plant models have been developed to predict the performance of the Stillwater geothermal / solar-thermal hybrid power plant. The models have been validated using operational data from the Stillwater plant. A preliminary effort to optimize performance of the Stillwater hybrid plant using optical characterization of the solar field has been completed. The Stillwater solar field optical characterization involved measurement of mirror reflectance, mirror slope error, and receiver position error. The measurements indicate that the solar field may generate 9% less energy than the design value if an appropriate tracking offset is not employed. A perfect tracking offset algorithm may be able to boost the solar field performance by about 15%. The validated Stillwater hybrid plant models were used to evaluate hybrid plant operating strategies including turbine IGV position optimization, ACC fan speed and turbine IGV position optimization, turbine inlet entropy control using optimization of multiple process variables, and mixed working fluid substitution. The hybrid plant models predict that each of these operating strategies could increase net power generation relative to the baseline Stillwater hybrid plant operations.
Dandina N. Rao; Subhash C. Ayirala; Madhav M. Kulkarni; Wagirin Ruiz Paidin; Thaer N. N. Mahmoud; Daryl S. Sequeira; Amit P. Sharma
2006-09-30
This is the final report describing the evolution of the project ''Development and Optimization of Gas-Assisted Gravity Drainage (GAGD) Process for Improved Light Oil Recovery'' from its conceptual stage in 2002 to the field implementation of the developed technology in 2006. This comprehensive report includes all the experimental research, model developments, analyses of results, salient conclusions, and technology transfer efforts. As planned in the original proposal, the project was conducted in three separate and concurrent tasks: Task 1 involved a physical model study of the new GAGD process, Task 2 was aimed at further developing the vanishing interfacial tension (VIT) technique for gas-oil miscibility determination, and Task 3 was directed at determining multiphase gas-oil drainage and displacement characteristics in reservoir rocks at realistic pressures and temperatures. The project started with the task of recruiting well-qualified graduate research assistants. After collecting and reviewing the literature on different aspects of the project, such as gas injection EOR, gravity drainage, miscibility characterization, and gas-oil displacement characteristics in porous media, research plans were developed for the experimental work to be conducted under each of the three tasks. Based on the literature review and dimensional analysis, preliminary criteria were developed for the design of the partially-scaled physical model. Additionally, the need for a separate transparent model for visual observation and verification of the displacement and drainage behavior under gas-assisted gravity drainage was identified. Various materials and methods (ceramic porous material, stucco, Portland cement, sintered glass beads) were attempted in order to fabricate a satisfactory visual model. In addition to proving the effectiveness of the GAGD process (through measured oil recoveries in the range of 65 to 87% IOIP), the visual models demonstrated three possible
Nash, Stephen G.
2013-11-11
The research focuses on the modeling and optimization of nanoporous materials. In systems with the hierarchical structure that we consider, the physics changes as the scale of the problem is reduced, and it can be important to account for physics at the fine level to obtain accurate approximations at coarser levels. For example, nanoporous materials hold promise for energy production and storage. A significant issue is the fabrication of channels within these materials to allow rapid diffusion through the material. One goal of our research is to apply optimization methods to the design of nanoporous materials. Such problems are large and challenging, with hierarchical structure that we believe can be exploited, and with a large range of important scales, down to atomistic. This requires research on large-scale optimization for systems that exhibit different physics at different scales, and the development of algorithms applicable to designing nanoporous materials for many important applications in energy production, storage, distribution, and use. Our research has two major thrusts. The first is hierarchical modeling: we develop and study hierarchical optimization models for nanoporous materials. The models have hierarchical structure and attempt to balance the conflicting aims of model fidelity and computational tractability. In addition, we analyze the general hierarchical model, as well as the specific application models, to determine their properties, particularly those relevant to the hierarchical optimization algorithms. The second thrust is to develop, analyze, and implement a class of hierarchical optimization algorithms and apply them to the hierarchical models we have developed. We adapted and extended the optimization-based multigrid algorithms of Lewis and Nash to the optimization models exemplified by the hierarchical optimization model. This class of multigrid algorithms has been shown to be a powerful tool for
Optimized flashlamp pumping of disc amplifiers
Murray, J.E.; Powell, H.T.; Woods, B.W.
1986-01-17
Disk amplifier design for inertial fusion lasers has evolved with changing fusion-driver requirements from a primary emphasis on gain to a primary emphasis on efficiency. In this paper we compare Shiva and Nova amplifiers to a developmental amplifier (SSA) and show greater than a two-fold improvement in efficiency over past designs under all operating conditions. Experiments to optimize the efficiency of the SSA show that preionization of the flashlamps produces significant benefits and that the packing fraction of lamps is more important than the flashlamp reflector shape. They also show that the optimized flashlamp pulselength and reflector geometry depend on the desired stored energy in the laser medium. We have demonstrated a 7% storage efficiency at a stored fluence per disk of 0.5 J/cm² (stored energy density of 0.06 J/cm³) and 4% at 2.0 J/cm² (0.25 J/cm³). Comparison of SSA measurements with storage-efficiency calculations shows that our flashlamp model accurately predicts the single-pass pumping of disk amplifiers. 24 refs., 22 figs.
Optimization of Nd: YAG Laser Marking of Alumina Ceramic Using RSM And ANN
Peter, Josephine; Doloi, B.; Bhattacharyya, B.
2011-01-17
The present research paper deals with artificial neural network (ANN)- and response surface methodology (RSM)-based mathematical modeling and an optimization analysis of marking characteristics on alumina ceramic. The experiments have been planned and carried out based on design of experiments (DOE). The paper also analyses the influence of the major laser marking process parameters, and the optimal combination of laser marking process parameter settings has been obtained. The RSM optimal output is validated through experimentation and the ANN predictive model. Good agreement is observed between the results based on the ANN predictive model and the actual experimental observations.
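The RSM side of such a study amounts to fitting a second-order polynomial response surface to the DOE runs. Below is a generic sketch with synthetic data and placeholder factors x1, x2 (not the paper's actual process parameters or measurements):

```python
# Ordinary-least-squares fit of a second-order response surface
#   y = b0 + b1*x1 + b2*x2 + b3*x1^2 + b4*x2^2 + b5*x1*x2
# to a set of experimental runs.

import numpy as np

def fit_quadratic_surface(X, y):
    x1, x2 = X[:, 0], X[:, 1]
    A = np.column_stack([np.ones_like(x1), x1, x2, x1**2, x2**2, x1 * x2])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return coef

# synthetic DOE data generated from a known surface, to check the fit
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(20, 2))
y = 2 + 0.5 * X[:, 0] - X[:, 1] + 3 * X[:, 0]**2 + 0.2 * X[:, 0] * X[:, 1]
coef = fit_quadratic_surface(X, y)
```

Optimizing the fitted polynomial (analytically or by grid search over the factor ranges) then yields the RSM-optimal parameter setting that the experiment and ANN model validate.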
Optimization of Waste Disposal - 13338
Shephard, E.; Walter, N.; Downey, H.; Collopy, P.; Conant, J.
2013-07-01
From 2009 through 2011, remediation of areas of a former fuel cycle facility used for government contract work was conducted. Remediation efforts were focused on building demolition, underground pipeline removal, contaminated soil removal, and removal of contaminated sediments from portions of an on-site stream. Prior to the remediation field effort, planning and preparation (including strategic planning for waste characterization and disposal) were conducted during the design phase. During the remediation field effort, waste characterization and disposal practices were continuously reviewed and refined to optimize waste disposal. This paper discusses the strategic planning for waste characterization and disposal that was employed in the design phase and continuously reviewed and refined during the field effort to optimize efficiency. (authors)
Integrated Energy System Dispatch Optimization
Firestone, Ryan; Stadler, Michael; Marnay, Chris
2006-06-16
On-site cogeneration of heat and electricity, thermal and electrical storage, and options for curtailing or rescheduling demand are often cost-effective for commercial and industrial sites. This collection of equipment and responsive consumption can be viewed as an integrated energy system (IES). The IES can best meet the site's cost or environmental objectives when controlled in a coordinated manner. However, continuously determining this optimal IES dispatch is beyond the expectations for operators of smaller systems. A new algorithm is proposed in this paper to approximately solve the real-time dispatch optimization problem for a generic IES containing an on-site cogeneration system subject to random outages, limited curtailment opportunities, an intermittent renewable electricity source, and thermal storage. An example demonstrates how this algorithm can be used in simulation to estimate the value of IES components.
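The kind of dispatch decision described here can be illustrated with a toy linear program: split each hour's electric load between on-site generation and utility purchase at minimum cost, subject to a generator capacity cap. The loads, costs, and capacity below are hypothetical, and this sketch omits the outages, curtailment, and storage that the paper's algorithm handles:

```python
# Toy IES dispatch as a linear program (scipy.optimize.linprog).

from scipy.optimize import linprog

load = [40.0, 80.0, 60.0]           # kW demand in three hours
gen_cost, grid_cost = 0.06, 0.10    # $/kWh for on-site generation vs. purchase
gen_cap = 50.0                      # kW on-site generator limit

# decision variables: [gen_1, gen_2, gen_3, grid_1, grid_2, grid_3]
c = [gen_cost] * 3 + [grid_cost] * 3
A_eq = [[1, 0, 0, 1, 0, 0],
        [0, 1, 0, 0, 1, 0],
        [0, 0, 1, 0, 0, 1]]         # energy balance: gen_h + grid_h = load_h
bounds = [(0, gen_cap)] * 3 + [(0, None)] * 3
res = linprog(c, A_eq=A_eq, b_eq=load, bounds=bounds)
```

With the cheaper generator capped at 50 kW, the optimum runs it at min(load, cap) each hour and buys the remainder, for a total cost of $12.40 over the three hours.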
Optimal design of AC filter circuits in HVDC converter stations
Saied, M.M.; Khader, S.A.
1995-12-31
This paper investigates the reactive power as well as the harmonic conditions on both the valve and the AC-network sides of a HVDC converter station. The effect of the AC filter circuits is accurately modeled. The program is then augmented by adding an optimization routine. It can identify the optimal filter configuration, yielding the minimum current distortion factor at the AC network terminals for a prespecified fundamental reactive power to be provided by the filter. Several parameter studies were also conducted to illustrate the effect of accidental or intentional deletion of one of the filter branches.
Forecourt and Gas Infrastructure Optimization
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Forecourt and Gas Infrastructure Optimization. Bruce Kelly, Nexant, Inc. Hydrogen Delivery Analysis Meeting, May 8-9, 2007, Columbia, Maryland. Analysis of market demand and supply variations. Supply-side variations: central production plant outages, including scheduled yearly maintenance (typically 5 to 10 consecutive days each year), unscheduled maintenance outages (indeterminate time and length), and natural disasters (a few days). Demand-side variations: hourly at refueling sites, and day to day at refueling
Optimization of Cylindrical Hall Thrusters
Yevgeny Raitses, Artem Smirnov, Erik Granstedt, and Nathaniel J. Fisch
2007-07-24
The cylindrical Hall thruster features high ionization efficiency, quiet operation, and ion acceleration in a large volume-to-surface ratio channel with performance comparable with the state-of-the-art annular Hall thrusters. These characteristics were demonstrated in low and medium power ranges. Optimization of miniaturized cylindrical thrusters led to performance improvements in the 50-200W input power range, including plume narrowing, increased thruster efficiency, reliable discharge initiation, and stable operation.
Optimization of Cylindrical Hall Thrusters
Yevgeny Raitses, Artem Smirnov, Erik Granstedt, and Nathaniel J. Fisch
2007-11-27
The cylindrical Hall thruster features high ionization efficiency, quiet operation, and ion acceleration in a large volume-to-surface ratio channel with performance comparable with the state-of-the-art annular Hall thrusters. These characteristics were demonstrated in low and medium power ranges. Optimization of miniaturized cylindrical thrusters led to performance improvements in the 50-200W input power range, including plume narrowing, increased thruster efficiency, reliable discharge initiation, and stable operation.
Gauged B - x_i L origin of R parity and its implications
Lee, Hye-Sung; Ma, Ernest
2010-05-01
Gauged B - L is a popular candidate for the origin of the conservation of R parity, i.e. R = (-1)^(3B+L+2j), in supersymmetry, but it fails to forbid the effective dimension-five terms arising from the superfield combinations QQQL, u^c u^c d^c e^c, and u^c d^c d^c N^c, which allow the proton to decay. Changing it to B - x_i L, where x_e + x_μ + x_τ = 3 (with x_i ≠ 1) for the three families, would forbid these terms while still serving as a gauge origin of R parity. We show how this is achieved in two minimal models with realistic neutrino mass matrices, and discuss their phenomenological implications.
Optimal response to attacks on the open science grids.
Altunay, M.; Leyffer, S.; Linderoth, J. T.; Xie, Z.
2011-01-01
Cybersecurity is a growing concern, especially in open grids, where attack propagation is easy because of prevalent collaborations among thousands of users and hundreds of institutions. The collaboration rules that typically govern large science experiments, as well as social networks of scientists, span institutional security boundaries. A common concern is that this increased openness may allow malicious attackers to spread more readily around the grid. We consider how to respond optimally to attacks in open grid environments. To show how and why attacks spread more readily around the grid, we first discuss how collaborations manifest themselves in the grids, how they form the collaboration network graph, and how this graph affects the security threat levels of grid participants. We present two mixed-integer program (MIP) models that find the optimal response to attacks in open grid environments and also calculate the threat level associated with each grid participant. Given an attack scenario, our optimal response model aims to minimize the threat levels at unaffected participants while maximizing uninterrupted scientific production (continuing collaborations). By adjusting some of the collaboration rules (e.g., suspending a collaboration or shutting down a site), the model finds the optimal response to subvert an attack scenario.
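The containment trade-off those MIP models encode can be shown in miniature by brute force over which collaborations to suspend: first contain the attack, then preserve as many collaborations as possible. The four-site graph is hypothetical, and the paper's actual models are mixed-integer programs, not enumeration.

```python
from itertools import product

# Hypothetical collaboration graph; site "A" is the attacked participant.
edges = [("A", "B"), ("B", "C"), ("A", "D"), ("D", "C")]
attacked = "A"

def reachable(active_edges, start):
    """Sites reachable from `start` over undirected active collaborations."""
    seen, frontier = {start}, [start]
    while frontier:
        u = frontier.pop()
        for a, b in active_edges:
            for x, y in ((a, b), (b, a)):
                if x == u and y not in seen:
                    seen.add(y)
                    frontier.append(y)
    return seen

best = None
for keep in product([True, False], repeat=len(edges)):
    active = [e for e, k in zip(edges, keep) if k]
    exposed = reachable(active, attacked) - {attacked}
    # Objective: first minimize exposed sites, then keep as many
    # collaborations active as possible.
    score = (len(exposed), -len(active))
    if best is None or score < best[0]:
        best = (score, active)
print(best[1])  # surviving collaborations with the attack fully contained
```

Here the optimum cuts only the two edges touching the attacked site, keeping the B-C and D-C collaborations running.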
EUD-based biological optimization for carbon ion therapy
Brüningk, Sarah C.; Kamp, Florian; Wilkens, Jan J.
2015-11-15
Purpose: Treatment planning for carbon ion therapy requires an accurate modeling of the biological response of each tissue to estimate the clinical outcome of a treatment. The relative biological effectiveness (RBE) accounts for this biological response on a cellular level but does not refer to the actual impact on the organ as a whole. For photon therapy, the concept of equivalent uniform dose (EUD) represents a simple model to take the organ response into account, yet so far no formulation of EUD has been reported that is suitable to carbon ion therapy. The authors introduce the concept of an equivalent uniform effect (EUE) that is directly applicable to both ion and photon therapies and implement it, as an example, as a basis for biological treatment plan optimization for carbon ion therapy. Methods: In addition to a classical EUD concept, which calculates a generalized mean over the RBE-weighted dose distribution, the authors propose the EUE to simplify the optimization process of carbon ion therapy plans. The EUE is defined as the biologically equivalent uniform effect that yields the same probability of injury as the inhomogeneous effect distribution in an organ. Its mathematical formulation is based on the generalized mean effect using an effect-volume parameter to account for different organ architectures and is thus independent of a reference radiation. For both EUD concepts, quadratic and logistic objective functions are implemented into a research treatment planning system. A flexible implementation allows choosing for each structure between biological effect constraints per voxel and EUD constraints per structure. Exemplary treatment plans are calculated for a head-and-neck patient for multiple combinations of objective functions and optimization parameters. Results: Treatment plans optimized using an EUE-based objective function were comparable to those optimized with an RBE-weighted EUD-based approach. In agreement with previous results from photon
CBERD: Building Energy Simulation and Modeling | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Energy Simulation and Modeling CBERD: Building Energy Simulation and Modeling Figure 1: Screenshot of the alpha version of CBERD eDOT (early design optimization tool), an online tool that enables multi-parameter optimization. Source: LBNL. Figure 2: CBERD Model Predictive Control: Model identification and closed loop predictive control
Optim Energy Marketing LLC | Open Energy Information
OpenEI (Open Energy Information) [EERE & EIA]
Optim Energy Marketing LLC Jump to: navigation, search Name: Optim Energy Marketing LLC Place: Texas References: EIA Form EIA-861 Final Data File for 2010 - File1a1 EIA Form 861...
Optimal stomatal behaviour around the world
Lin, Yan-Shih; Medlyn, Belinda E.; Duursma, Remko A.; Prentice, I. Colin; Wang, Han; Baig, Sofia; Eamus, Derek; de Dios, Victor Resco; Mitchell, Patrick; Ellsworth, David S.; de Beeck, Maarten Op; Wallin, Göran; Uddling, Johan; Tarvainen, Lasse; Linderson, Maj-Lena; Cernusak, Lucas A.; Nippert, Jesse B.; Ocheltree, Troy W.; Tissue, David T.; Martin-StPaul, Nicolas K.; Rogers, Alistair; Warren, Jeff M.; De Angelis, Paolo; Hikosaka, Kouki; Han, Qingmin; Onoda, Yusuke; Gimeno, Teresa E.; Barton, Craig V. M.; Bennie, Jonathan; Bonal, Damien; Bosc, Alexandre; Löw, Markus; Macinins-Ng, Cate; Rey, Ana; Rowland, Lucy; Setterfield, Samantha A.; Tausz-Posch, Sabine; Zaragoza-Castells, Joana; Broadmeadow, Mark S. J.; Drake, John E.; Freeman, Michael; Ghannoum, Oula; Hutley, Lindsay B.; Kelly, Jeff W.; Kikuzawa, Kihachiro; Kolari, Pasi; Koyama, Kohei; Limousin, Jean-Marc; Meir, Patrick; Lola da Costa, Antonio C.; Mikkelsen, Teis N.; Salinas, Norma; Sun, Wei; Wingate, Lisa
2015-03-02
Stomatal conductance (g_{s}) is a key land-surface attribute as it links transpiration, the dominant component of global land evapotranspiration, and photosynthesis, the driving force of the global carbon cycle. Despite the pivotal role of g_{s} in predictions of global water and carbon cycle changes, a global-scale database and an associated globally applicable model of g_{s} that allow predictions of stomatal behaviour are lacking. Here, we present a database of globally distributed g_{s} obtained in the field for a wide range of plant functional types (PFTs) and biomes. We find that stomatal behaviour differs among PFTs according to their marginal carbon cost of water use, as predicted by the theory underpinning the optimal stomatal model^{1} and the leaf and wood economics spectrum^{2,3}. We also demonstrate a global relationship with climate. In conclusion, these findings provide a robust theoretical framework for understanding and predicting the behaviour of g_{s} across biomes and across PFTs that can be applied to regional, continental and global-scale modelling of ecosystem productivity, energy balance and ecohydrological processes in a future changing climate.
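The optimal stomatal model referenced above has a compact closed form. A sketch, assuming the unified model of Medlyn and colleagues, gs = g0 + 1.6 (1 + g1/√D) A/Ca, with invented leaf-level values; g1 is the fitted, PFT-specific slope whose variation the paper links to the marginal carbon cost of water:

```python
import math

def stomatal_conductance(A, D, Ca, g1, g0=0.0):
    """Unified optimal stomatal model:
    gs = g0 + 1.6 * (1 + g1 / sqrt(D)) * A / Ca
    A  : net photosynthesis (umol m-2 s-1)
    D  : vapour pressure deficit (kPa)
    Ca : atmospheric CO2 concentration (umol mol-1)
    g1 : fitted slope parameter (kPa^0.5), PFT-specific
    """
    return g0 + 1.6 * (1.0 + g1 / math.sqrt(D)) * A / Ca

# Same leaf conditions, two hypothetical PFTs differing only in g1:
print(stomatal_conductance(A=15.0, D=1.5, Ca=400.0, g1=4.0))  # higher g1 -> higher gs
print(stomatal_conductance(A=15.0, D=1.5, Ca=400.0, g1=2.0))
```

A PFT with a larger g1 (a cheaper marginal water cost) keeps its stomata more open for the same assimilation rate and humidity deficit.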
Optimizing near real time accountability for reprocessing.
Cipiti, Benjamin B.
2010-06-01
Near Real Time Accountability (NRTA) of actinides at high precision in reprocessing plants has been a long sought-after goal in the safeguards community. Achieving this goal is hampered by the difficulty of making precision measurements in the reprocessing environment, equipment cost, and impact to plant operations. Thus the design of future reprocessing plants requires an optimization of different approaches. The Separations and Safeguards Performance Model, developed at Sandia National Laboratories, was used to evaluate a number of NRTA strategies in a UREX+ reprocessing plant. Strategies examined include the incorporation of additional actinide measurements of internal plant vessels, more use of process monitoring data, and the option of periodic draining of inventory to key tanks. Preliminary results show that the addition of measurement technologies can increase the overall measurement uncertainty due to additional error propagation, so care must be taken when designing an advanced system. Initial results also show that relying on a combination of different NRTA techniques will likely be the best option. The model provides a platform for integrating all the data. The modeling results for the different NRTA options under various material loss conditions will be presented.
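The abstract's counter-intuitive finding, that adding measurement points can increase overall uncertainty, follows from quadrature error propagation: each independent measurement folded into a material balance contributes its own variance. A one-line illustration with invented per-stream uncertainties:

```python
import math

def balance_sigma(stream_sigmas):
    """Absolute 1-sigma uncertainty of a material balance formed by summing
    independent measurements: independent errors add in quadrature."""
    return math.sqrt(sum(s**2 for s in stream_sigmas))

print(balance_sigma([0.5, 0.5]))        # two measurement points
print(balance_sigma([0.5, 0.5, 0.5, 0.5]))  # four points: uncertainty grows by sqrt(2)
```

Doubling the number of equally precise measurement points inflates the balance uncertainty by √2, which is why the abstract cautions that adding instruments must be weighed against the extra error propagation.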
Reverse Osmosis Optimization | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Reverse Osmosis Optimization Reverse Osmosis Optimization Report assesses techniques for optimizing reverse osmosis (RO) systems to increase RO system performance and water efficiency. It provides a general description of RO systems, the influence of RO systems on water use, and key areas where RO systems can be optimized to reduce water and energy consumption. This report is intended to help facility managers at Federal sites understand the basic concepts of the RO process and system
Cori Phase 1 Training: Programming and Optimization
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Optimization Cori Phase 1 Training: Programming and Optimization NERSC will host a four-day training event for Cori Phase 1 users on Programming Environment, Debugging and Optimization from Monday June 13 to Thursday June 16. The presenters will be Cray instructor Rick Slick and NERSC staff. Cray XC Series Programming and Optimization Description This course is intended for people who work in applications support or development of Cray XC Series computer systems. It familiarizes students with
Optimization of radiation protection: a bibliography
Tang, G.R.; Khan, T.A.; Sullivan, S.G.
1996-10-01
This document provides a bibliography of radiation protection optimization documents. Abstracts, an author index, and a subject index are provided.
Simultaneous beam sampling and aperture shape optimization for SPORT
Zarepisheh, Masoud; Li, Ruijiang; Xing, Lei; Ye, Yinyu
2015-02-15
Purpose: Station parameter optimized radiation therapy (SPORT) was recently proposed to fully utilize the technical capability of emerging digital linear accelerators, in which the station parameters of a delivery system, such as aperture shape and weight, couch position/angle, gantry/collimator angle, can be optimized simultaneously. SPORT promises to deliver remarkable radiation dose distributions in an efficient manner, yet there exists no optimization algorithm for its implementation. The purpose of this work is to develop an algorithm to simultaneously optimize the beam sampling and aperture shapes. Methods: The authors build a mathematical model with the fundamental station point parameters as the decision variables. To solve the resulting large-scale optimization problem, the authors devise an effective algorithm by integrating three advanced optimization techniques: column generation, subgradient method, and pattern search. Column generation adds the most beneficial stations sequentially until the plan quality improvement saturates and provides a good starting point for the subsequent optimization. It also adds new stations during the algorithm if beneficial. For each update resulting from column generation, the subgradient method improves the selected stations locally by reshaping the apertures and updating the beam angles toward a descent subgradient direction. The algorithm continues to improve the selected stations locally and globally by a pattern search algorithm to explore the part of search space not reachable by the subgradient method. By combining these three techniques together, all plausible combinations of station parameters are searched efficiently to yield the optimal solution. Results: A SPORT optimization framework with seamless integration of three complementary algorithms, column generation, subgradient method, and pattern search, was established. The proposed technique was applied to two previously treated clinical cases: a head and
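Of the three techniques combined, pattern search is the simplest to sketch: try coordinate steps, move to any improving neighbour, and halve the mesh when none improves. A minimal compass search on a toy 2-D objective (not a treatment-planning objective):

```python
def pattern_search(f, x, step=1.0, tol=1e-6):
    """Compass-style pattern search: probe +/- step along each axis,
    move to the first improving point, shrink the mesh on failure."""
    while step > tol:
        improved = False
        for d in ((step, 0), (-step, 0), (0, step), (0, -step)):
            trial = (x[0] + d[0], x[1] + d[1])
            if f(trial) < f(x):      # accept the first improving neighbour
                x, improved = trial, True
                break
        if not improved:
            step *= 0.5              # no improving direction: refine the mesh
    return x

f = lambda p: (p[0] - 3.0) ** 2 + (p[1] + 1.0) ** 2
x = pattern_search(f, (0.0, 0.0))
print(x)  # converges to (3.0, -1.0)
```

Its derivative-free probing is what lets the SPORT algorithm escape regions the subgradient method cannot reach.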
The Institutional Origins of the Department of Energy | Department of
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Energy Operational Management » History » DOE History Timeline » The Institutional Origins of the Department of Energy The Institutional Origins of the Department of Energy Origins-of-the-Department-of-Energy.pdf (194.83 KB) More Documents & Publications National Offshore Wind Energy Grid Interconnection Study (NOWEGIS) Response to several FOIA requests - Renewable Energy. EIS-0002: Final Environmental Impact Statement Aviation Management Green Leases Executive Secretariat Energy
Biorefinery Optimization Workshop Presentations | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Biorefinery Optimization Workshop Presentations Biorefinery Optimization Workshop Presentations Presentations from the Biorefinery Optimization Workshop , hosted by the U.S. Department of Energy's Bioenergy Technologies Office on October 5-6, 2016. Speaker Last Name Affiliation Title Hartford Jenike & Johanson, Inc. Biomass Material Handling Considerations Kenney Idaho National Laboratory Industrial Feed Handling of Lingocellulosic Feedstocks Webb Oak Ridge National Laboratory Addressing
Origins of weak lensing systematics, and requirements on future...
Office of Scientific and Technical Information (OSTI)
Journal Article: Origins of weak lensing systematics, and requirements on future instrumentation (or knowledge of instrumentation) Citation Details In-Document Search Title:...
The Gadonanotubes: Structural Origin of their High-Performance...
Office of Scientific and Technical Information (OSTI)
Title: The Gadonanotubes: Structural Origin of their High-Performance MRI Contrast Agent Behavior Authors: Ma, Qing ; Jebb, Meghan ; Tweedle, Michael F. ; Wilson, Lon J. 1 ; NWU) ...
The Institutional Origins of the Department of Energy | Department...
PDF icon Origins-of-the-Department-of-Energy.pdf More Documents & Publications National Offshore Wind Energy Grid Interconnection Study (NOWEGIS) CX-007131: Categorical Exclusion...
Space Dust Analysis Could Provide Clues to Solar System Origins
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Space Dust Analysis Could Provide Clues to Solar System Origins Print New studies of space dust captured by NASA's Stardust Interstellar Dust Collector have shown that interstellar ...
Origin invariance in vibrational resonance Raman optical activity
Vidal, Luciano N.; Cappelli, Chiara; Egidi, Franco; Barone, Vincenzo
2015-05-07
A theoretical investigation of the origin dependence of the vibronic polarizabilities, isotropic and anisotropic rotational invariants, and scattering cross sections in Resonance Raman Optical Activity (RROA) spectroscopy is presented. Expressions showing the origin dependence of these polarizabilities were written in the resonance regime using the Franck-Condon (FC) and Herzberg-Teller (HT) approximations for the electronic transition moments. Differently from the far-from-resonance scattering regime, where the origin-dependent terms cancel out when the rotational invariants are calculated, the RROA spectrum can exhibit some origin dependence even for eigenfunctions of the electronic Hamiltonian. At the FC level, the RROA spectrum is completely origin invariant if the polarizabilities are calculated using a single excited state or for a set of degenerate states. Otherwise, some origin effects can be observed in the spectrum. At the HT level, the RROA spectrum is origin dependent even when the polarizabilities are evaluated from a single excited state, but the origin effect is expected to be small in this case. Numerical calculations performed for (S)-methyloxirane, (2R,3R)-dimethyloxirane, and (R)-4-F-2-azetidinone at both FC and HT levels using the velocity representation of the electric dipole and quadrupole transition moments confirm the predictions of the theory and show the extent of origin effects and the effectiveness of suggested ways to remove them.
Los Alamos researchers uncover new origins of radiation-tolerant...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
new origins of radiation-tolerant materials A new report this week in the journal Nature Communications provides new insight into what, exactly, makes some complex materials...
The origins of growth stresses in amorphous semiconductor thin...
Office of Scientific and Technical Information (OSTI)
Journal Article: The origins of growth stresses in amorphous semiconductor thin films. Citation Details In-Document ... Publication Date: 2003-03-01 OSTI Identifier: 917484 Report ...
Space Dust Analysis Could Provide Clues to Solar System Origins
U.S. Department of Energy (DOE) - all webpages (Extended Search)
chemical clues about the origins of our solar system. ... The aerogel panels were essentially photographed in tiny ... project called Stardust@home, volunteer space ...
Caltech researchers make discovery that hints at origin of phenomenon...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Caltech researchers make discovery that hints at origin of phenomenon like solar flares American Fusion News Category: U.S. Universities Link: Caltech researchers make discovery...
Structural Origins of DNA Target Selection and Nucleobase Extrusion...
Office of Scientific and Technical Information (OSTI)
of DNA Target Selection and Nucleobase Extrusion by a DNA Cytosine Methyltransferase Citation Details In-Document Search Title: Structural Origins of DNA Target Selection ...
Origins | U.S. DOE Office of Science (SC)
Origins Fusion Energy Sciences (FES) FES Home About Research Fusion Institutions Fusion ... Facilities Science Highlights Benefits of FES Funding Opportunities Fusion Energy Sciences ...
Pair breaking versus symmetry breaking: Origin of the Raman modes...
Office of Scientific and Technical Information (OSTI)
Pair breaking versus symmetry breaking: Origin of the Raman modes in superconducting cuprates Citation Details In-Document Search Title: Pair breaking versus symmetry breaking:...
EIA-Voluntary Reporting of Greenhouse Gases Program - Original...
Energy Information Administration (EIA) (indexed site)
Program Voluntary Reporting of Greenhouse Gases Program Original 1605(b) Program Section 1605(b) of the Energy Policy Act of 1992 established the Voluntary Reporting of Greenhouse ...
REopt: A Platform for Energy System Integration and Optimization: Preprint
Simpkins, T.; Cutler, D.; Anderson, K.; Olis, D.; Elgqvist, E.; Callahan, M.; Walker, A.
2014-08-01
REopt is NREL's energy planning platform offering concurrent, multi-technology integration and optimization capabilities to help clients meet their cost savings and energy performance goals. The REopt platform provides techno-economic decision-support analysis throughout the energy planning process, from agency-level screening and macro planning to project development to energy asset operation. REopt employs an integrated approach to optimizing a site's energy costs by considering electricity and thermal consumption, resource availability, complex tariff structures including time-of-use, demand and sell-back rates, incentives, net-metering, and interconnection limits. Formulated as a mixed integer linear program, REopt recommends an optimally-sized mix of conventional and renewable energy, and energy storage technologies; estimates the net present value associated with implementing those technologies; and provides the cost-optimal dispatch strategy for operating them at maximum economic efficiency. The REopt platform can be customized to address a variety of energy optimization scenarios including policy, microgrid, and operational energy applications. This paper presents the REopt techno-economic model along with two examples of recently completed analysis projects.
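The kind of sizing trade-off REopt resolves can be caricatured by brute force over candidate sizes: capital cost versus the net present value of avoided grid purchases. REopt itself solves a mixed-integer linear program over many technologies and a full dispatch; every number below is invented for the illustration.

```python
def total_cost(pv_kw, load_kwh=100_000, cap_cost=1500.0, tariff=0.12,
               kwh_per_kw=1600, years=20, discount=0.06):
    """NPV of owning pv_kw of PV: capital outlay plus the discounted
    stream of residual grid purchases (all inputs hypothetical)."""
    pv_gen = min(load_kwh, pv_kw * kwh_per_kw)       # usable PV energy per year
    annual_bill = (load_kwh - pv_gen) * tariff       # remaining grid purchases
    pv_factor = sum(1 / (1 + discount) ** t for t in range(1, years + 1))
    return pv_kw * cap_cost + annual_bill * pv_factor

sizes = range(0, 81, 10)            # candidate PV sizes in kW
best = min(sizes, key=total_cost)
print(best, round(total_cost(best)))
```

The minimum lands just below the size that covers the whole load, because the last increments of capacity buy energy the site cannot use; REopt captures the same effect through its interconnection and net-metering constraints.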
Optimal linear reconstruction of dark matter from halo catalogues
Cai, Yan -Chuan; Bernstein, Gary; Sheth, Ravi K.
2011-04-01
The dark matter lumps (or "halos") that contain galaxies have locations in the Universe that are to some extent random with respect to the overall matter distributions. We investigate how best to estimate the total matter distribution from the locations of the halos. We derive the weight function w(M) to apply to dark-matter haloes that minimizes the stochasticity between the weighted halo distribution and its underlying mass density field. The optimal w(M) depends on the range of masses of halos being used. While the standard biased-Poisson model of the halo distribution predicts that bias weighting is optimal, the simple fact that the mass is comprised of haloes implies that the optimal w(M) will be a mixture of mass-weighting and bias-weighting. In N-body simulations, the Poisson estimator is up to 15× noisier than the optimal. Optimal weighting could make cosmological tests based on the matter power spectrum or cross-correlations much more powerful and/or cost effective.
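The optimal-weighting idea reduces to a small linear solve: minimizing the variance of the weighted halo field minus the matter field, w^T C w - 2 w^T b, over per-bin weights w gives C w = b, with C the halo covariance (including shot noise) and b the halo-matter cross-covariance. A two-mass-bin sketch with invented covariances:

```python
def solve2(C, b):
    """Solve the 2x2 linear system C w = b by Cramer's rule."""
    det = C[0][0] * C[1][1] - C[0][1] * C[1][0]
    return [(b[0] * C[1][1] - b[1] * C[0][1]) / det,
            (C[0][0] * b[1] - C[1][0] * b[0]) / det]

C = [[2.0, 0.5],   # covariance of two halo mass bins (with shot noise)
     [0.5, 1.0]]
b = [1.5, 0.8]     # cross-covariance of each bin with the matter field
w = solve2(C, b)   # minimum-stochasticity weights
print(w)
```

In the paper the same minimization is carried out as a continuous function of halo mass, yielding the mass-plus-bias weighting described in the abstract.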
Coupled Thermal-Hydrological-Mechanical-Chemical Model and Experiments for
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Optimization of Enhanced Geothermal System Development and Production | Department of Energy Coupled Thermal-Hydrological-Mechanical-Chemical Model and Experiments for Optimization of Enhanced Geothermal System Development and Production Coupled Thermal-Hydrological-Mechanical-Chemical Model and Experiments for Optimization of Enhanced Geothermal System Development and Production Project objective: Develop a novel Thermal-Hydrological-Mechanical-Chemical (THMC) modeling tool.
Optimizing High Level Waste Disposal
Dirk Gombert
2005-09-01
If society is ever to reap the potential benefits of nuclear energy, technologists must close the fuel cycle completely. A closed cycle equates to a continued supply of fuel and safe reactors, but also reliable and comprehensive closure of waste issues. High level waste (HLW) disposal in borosilicate glass (BSG) is based on 1970s-era evaluations. This host matrix is very adaptable to sequestering a wide variety of radionuclides found in raffinates from spent fuel reprocessing. However, it is now known that the current system is far from optimal for disposal of the diverse HLW streams, and proven alternatives are available to reduce costs by billions of dollars. The basis for HLW disposal should be reassessed to consider extensive waste form and process technology research and development efforts, which have been conducted by the United States Department of Energy (USDOE), international agencies, and the private sector. Matching the waste form to the waste chemistry and using currently available technology could increase the waste content in waste forms to 50% or more and double processing rates. Optimization of the HLW disposal system would accelerate HLW disposition and increase repository capacity. This does not necessarily require developing new waste forms; the emphasis should be on qualifying existing matrices to demonstrate protection equal to or better than the baseline glass performance. Nor does this proposed effort necessarily require developing new technology concepts; the emphasis is on demonstrating existing technology that is clearly better (reliability, productivity, cost) than current technology, and justifying its use in future or retrofitted facilities. Higher waste processing and disposal efficiency can be realized by performing the engineering analyses and trade studies necessary to select the most efficient methods for processing the full spectrum of wastes across the nuclear complex. This paper will describe technologies being
Eldred, M.S.; Hart, W.E.; Bohnhoff, W.J.; Romero, V.J.; Hutchinson, S.A.; Salinger, A.G.
1996-08-01
The benefits of applying optimization to computational models are well known, but the range of disciplines in which it is routinely applied has so far been limited. This effort attempts to extend the disciplinary areas to which optimization algorithms may be readily applied through the development and application of advanced optimization strategies capable of handling the computational difficulties associated with complex simulation codes. Toward this goal, a flexible software framework is under continued development for the application of optimization techniques to broad classes of engineering applications, including those with high computational expense and nonsmooth, nonconvex design space features. Object-oriented software design with C++ has been employed to provide a flexible, extensible, and robust multidisciplinary toolkit for computationally intensive simulations. In this paper, demonstrations of advanced optimization strategies using the software are presented in the hybridization and parallel processing research areas. Performance of the advanced strategies is compared with a benchmark nonlinear programming optimization.
Optimized microsystems-enabled photovoltaics
Cruz-Campa, Jose Luis; Nielson, Gregory N.; Young, Ralph W.; Resnick, Paul J.; Okandan, Murat; Gupta, Vipin P.
2015-09-22
Technologies pertaining to designing microsystems-enabled photovoltaic (MEPV) cells are described herein. A first restriction for a first parameter of an MEPV cell is received. Subsequently, a selection of a second parameter of the MEPV cell is received. Values for a plurality of parameters of the MEPV cell are computed such that the MEPV cell is optimized with respect to the second parameter, wherein the values for the plurality of parameters are computed based at least in part upon the restriction for the first parameter.
Trajectory Analysis and Optimization System
Energy Science and Technology Software Center
1996-06-04
TAOS is a general-purpose software tool capable of analyzing nearly any type of three degree-of-freedom point-mass, high-speed trajectory. Input files contain aerodynamic coefficients, propulsion data, and a trajectory description. The trajectory description divides the trajectory into segments, and within each segment, guidance rules provided by the user describe how the trajectory is computed. Output files contain tabulated trajectory information such as position, velocity, and acceleration. Parametric optimization provides a powerful method for satisfying mission-planning constraints, and trajectories involving more than one vehicle can be computed within a single problem.
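The tabulated position/velocity output such a tool produces can be illustrated with a minimal three degree-of-freedom point-mass integrator. This is not TAOS's algorithm, just a generic semi-implicit Euler step under gravity and quadratic drag with invented vehicle numbers:

```python
import math

def fly(pos, vel, mass=100.0, cd_area=0.05, rho=1.225, g=9.81, dt=0.01, steps=2000):
    """Integrate a 3-DOF point mass under gravity and quadratic drag,
    tabulating (time, position, velocity) until ground impact."""
    traj = [(0.0, pos[:], vel[:])]
    for i in range(1, steps + 1):
        speed = math.sqrt(sum(v * v for v in vel))
        k = 0.5 * rho * cd_area * speed      # drag force magnitude / speed
        acc = [-k * v / mass for v in vel]   # drag opposes velocity
        acc[2] -= g                          # gravity acts on the z axis
        vel = [v + a * dt for v, a in zip(vel, acc)]
        pos = [p + v * dt for p, v in zip(pos, vel)]
        if pos[2] < 0.0:                     # ground impact ends the trajectory
            break
        traj.append((i * dt, pos[:], vel[:]))
    return traj

traj = fly(pos=[0.0, 0.0, 0.0], vel=[50.0, 0.0, 50.0])
t, pos, vel = traj[-1]
print(t, pos[0])   # time aloft and downrange distance
```

A TAOS-style segment structure would swap in different guidance rules (thrust, bank angle, pull-up) per segment of the same integration loop.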
GAUSSIAN RANDOM FIELD: PHYSICAL ORIGIN OF SERSIC PROFILES
Cen, Renyue
2014-08-01
While the Sersic profile family provides adequate fits for the surface brightness profiles of observed galaxies, its physical origin is unknown. We show that if the cosmological density field is seeded by random Gaussian fluctuations, as in the standard cold dark matter model, galaxies with steep central profiles have simultaneously extended envelopes of shallow profiles in the outskirts, whereas galaxies with shallow central profiles are accompanied by steep density profiles in the outskirts. These properties are in accord with those of the Sersic profile family. Moreover, galaxies with steep central profiles form their central regions in smaller denser subunits that possibly merge subsequently, which naturally leads to the formation of bulges. In contrast, galaxies with shallow central profiles form their central regions in a coherent fashion without significant substructure, a necessary condition for disk galaxy formation. Thus, the scenario is self-consistent with respect to the correlation between observed galaxy morphology and the Sersic index. We further predict that clusters of galaxies should display a similar trend, which should be verifiable observationally.
Optimal planning for the sustainable utilization of municipal solid waste
Santibañez-Aguilar, José Ezequiel; Ponce-Ortega, José María; Betzabe González-Campos, J.; Serna-González, Medardo; El-Halwagi, Mahmoud M.
2013-12-15
Highlights: • An optimization approach for the sustainable management of municipal solid waste is proposed. • The proposed model optimizes the entire supply chain network of a distributed system. • A case study of sustainable waste management in the central-west part of Mexico is presented. • Results show several interesting solutions for the case study. - Abstract: The increasing generation of municipal solid waste (MSW) is a major problem, particularly for large urban areas with insufficient landfill capacity and inefficient waste management systems. Several options for implementing a MSW management supply chain are available; however, determining the optimal solution requires weighing technical, economic, environmental and social aspects. This paper therefore proposes a mathematical programming model for the optimal planning of the supply chain associated with a MSW management system, maximizing the economic benefit while accounting for technical and environmental issues. The optimization model simultaneously selects the processing technologies and their locations, the distribution of wastes from cities, and the distribution of products to markets. The problem is formulated as a multi-objective mixed-integer linear programming problem that maximizes the profit of the supply chain and the amount of recycled waste; the results are shown as Pareto curves that trade off the economic and environmental objectives. The proposed approach is applied to a case study for the west-central part of Mexico that integrates MSW from several cities to yield useful products. The results show that integrated utilization of MSW can provide economic, environmental and social benefits.
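The Pareto trade-off between profit and recycling that the abstract describes can be illustrated with a toy enumeration (all plan names and numbers below are invented for illustration; the paper's actual model is a mixed-integer linear program over a full supply chain):

```python
# Toy illustration, not the paper's model: trace the non-dominated set for a
# two-objective waste-management choice. Each hypothetical plan maps to
# (profit, tonnes_recycled); both objectives are maximized.
plans = {
    "landfill_only":  (100.0,  0.0),
    "small_recycler": ( 80.0, 40.0),
    "large_recycler": ( 60.0, 90.0),
    "incinerate":     ( 70.0, 10.0),  # dominated by small_recycler
}

def pareto_front(options):
    """Keep plans that no other plan beats (weakly) in both objectives."""
    front = []
    for name, (p, r) in options.items():
        dominated = any(p2 >= p and r2 >= r and (p2, r2) != (p, r)
                        for p2, r2 in options.values())
        if not dominated:
            front.append(name)
    return sorted(front)

print(pareto_front(plans))  # ['landfill_only', 'large_recycler', 'small_recycler']
```

Sweeping a weight between the two objectives and solving the weighted-sum problem for each weight recovers points of this front in the continuous MILP setting.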
On the robust optimization to the uncertain vaccination strategy problem
Chaerani, D.; Anggriani, N.; Firdaniza
2014-02-21
In order to prevent an epidemic of infectious disease, the vaccination coverage needs to be minimized while the basic reproduction number is kept below 1; that is, we seek the smallest vaccination coverage that still confines the disease to the small number of people already infected. In this paper, we discuss vaccination strategy in terms of minimizing vaccination coverage when the basic reproduction number is treated as an uncertain parameter lying between 0 and 1. We refer to the linear optimization model for vaccination strategy proposed by Becker and Starrzak (see [2]). For the case where parameter uncertainty is involved, Tanner et al. (see [9]) propose an optimal solution of the problem using stochastic programming. In this paper we discuss an alternative way of optimizing the uncertain vaccination strategy using Robust Optimization (see [3]). In this approach we assume that the parameter uncertainty lies within an ellipsoidal uncertainty set, so that the resulting robust counterpart is guaranteed by the RO methodology to be solvable in polynomial time. The robust counterpart model is presented.
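The ellipsoidal robust counterpart idea can be sketched numerically. For a single linear constraint a·x ≤ b whose coefficient vector varies over an ellipsoid {ā + Pu : ||u|| ≤ 1}, the standard RO recipe replaces it with the deterministic conic constraint ā·x + ||Pᵀx|| ≤ b. The check below uses invented numbers and is not the paper's vaccination model:

```python
import math

# Minimal sketch of an ellipsoidal robust counterpart (invented data).
# Uncertain constraint:  a·x <= b  with  a in {a_bar + P u : ||u|| <= 1}.
# Worst case over the ellipsoid gives:  a_bar·x + ||P^T x|| <= b.
def robust_feasible(x, a_bar, P, b):
    nominal = sum(ai * xi for ai, xi in zip(a_bar, x))
    # P^T x, whose norm is the worst-case extra term over the ellipsoid
    Ptx = [sum(P[i][j] * x[i] for i in range(len(x))) for j in range(len(P[0]))]
    return nominal + math.sqrt(sum(v * v for v in Ptx)) <= b

a_bar = [1.0, 1.0]
P = [[0.5, 0.0], [0.0, 0.5]]   # uncertainty "radius" 0.5 per coefficient
print(robust_feasible([1.0, 1.0], a_bar, P, 3.0))  # 2 + sqrt(0.5) <= 3 -> True
print(robust_feasible([1.5, 1.5], a_bar, P, 3.0))  # 3 + ~1.06 > 3   -> False
```

A point that is nominally feasible can thus fail the robust test, which is exactly the conservatism the robust counterpart buys.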
Fracture optimization on every well
Ely, J.W.; Tiner, R.L.
1998-01-01
Since hydraulic fracturing was introduced in 1947, significant advances have been made in the area of fracture diagnostics, particularly in the last 20 years. Common diagnostic procedures used today to quantify fracture geometry and fracture fluid efficiency are listed in a table. During the past several years, the most popular procedure was to conduct most or all of the diagnostics on one well in a field, and apply the results to subsequent wells. However, experience has shown that critical factors can change drastically, even in fields with minimal well spacing. Although some variations in relative rock stresses have been seen, rock properties typically remain fairly consistent within a designated area. However, the factor that changes drastically from well to well--even in spacing as small as 10 acres--is fracture fluid efficiency. As much as a 60% change in fluid efficiencies has been noted for offset wells. Because of these variations, a new procedure has been developed in which fracture treatments on individual wells can be optimized on the day of the fracture treatment. The paper describes this fracture optimization procedure.
The origin of the most iron-poor star
Marassi, S.; Schneider, R.; Limongi, M. [INAF/Osservatorio Astronomico di Roma, Via di Frascati 33, I-00040 Monteporzio (Italy); Chiaki, G.; Yoshida, N. [Department of Physics, Graduate School of Science, The University of Tokyo, 7-3-1 Hongo, Bunkyo, Tokyo 113-0033 (Japan); Omukai, K. [Astronomical Institute, Tohoku University, Sendai 980-8578 (Japan); Nozawa, T. [National Astronomical Observatory of Japan, Mitaka, Tokyo 181-8588 (Japan); Chieffi, A., E-mail: stefania.marassi@oa-roma.inaf.it [INAF/IASF, Via Fosso del Cavaliere 100, I-00133 Roma (Italy)
2014-10-20
We investigate the origin of carbon-enhanced metal-poor (CEMP) stars starting from the recently discovered [Fe/H] < -7.1 star SMSS J031300. We show that the elemental abundances observed on the surface of SMSS J031300 can be well fit by the yields of faint, metal-free supernovae (SNe). Using properly calibrated faint SN explosion models, we study, for the first time, the formation of dust grains in such carbon-rich, iron-poor SN ejecta. Calculations are performed assuming both unmixed and uniformly mixed ejecta and taking into account the partial destruction by the SN reverse shock. We find that, due to the paucity of refractory elements besides carbon, amorphous carbon is the only grain species to form, with carbon condensation efficiencies that range between 0.15 and 0.84, resulting in dust yields in the range (0.025-2.25) M{sub ☉}. We follow the collapse and fragmentation of a star-forming cloud enriched by the products of these faint SN explosions, and we explore the role played by fine-structure line cooling and dust cooling. We show that even if grain growth during the collapse has a minor effect on the dust-to-gas ratio, due to C depletion into CO molecules at an early stage of the collapse, the formation of CEMP low-mass stars such as SMSS J031300 could be triggered by dust cooling and fragmentation. A comparison between model predictions and observations of a sample of C-normal and C-rich metal-poor stars supports the idea that a single common pathway may be responsible for the formation of the first low-mass stars.
Origin of texture development in orthorhombic uranium
Zecevic, Miroslav; Knezevic, Marko; Beyerlein, Irene Jane; McCabe, Rodney James
2016-04-09
We study texture evolution of alpha-uranium (α-U) during plane strain compression and uniaxial compression to high strains at different temperatures. We combine a multiscale polycrystal constitutive model and detailed analysis of texture data to uncover the slip and twinning modes responsible for the formation of individual texture components. The analysis indicates that during plane strain compression, floor slip (001)[100] results in the formation of two pronounced {001} texture peaks tilted 10–15° away from the normal toward the rolling direction. During both high-temperature (573 K) through-thickness compression and plane strain compression, the active slip modes are floor slip (001)[100] and chimney slip 1/2{110}<11¯0> with slightly different ratios. {130}<31¯0> deformation twinning is profuse during rolling and in-plane compression and decreases with increasing temperature, but is not as active for through-thickness compression. Lastly, we comment on some similarities between rolling textures of α-U, which has a c/a ratio of 1.734, and those that develop in hexagonal close-packed metals with similarly high c/a ratios like Zn (1.856) and Cd (1.885), which are dominated by basal slip.
Modeling Mathematical Programs with Equilibrium Constraints in Pyomo
Hart, William E.; Siirola, John Daniel
2015-07-01
We describe new capabilities for modeling MPEC problems within the Pyomo modeling software. These capabilities include new modeling components that represent complementarity conditions, modeling transformations for re-expressing models with complementarity conditions in other forms, and meta-solvers that apply transformations and numeric optimization solvers to optimize MPEC problems. We illustrate the breadth of Pyomo's modeling capabilities for MPEC problems, and we describe how Pyomo's meta-solvers can perform local and global optimization of MPEC problems.
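One well-known transformation of the kind the abstract mentions re-expresses a complementarity condition as a single nonlinear equation via the Fischer-Burmeister function. This is a generic textbook device, sketched here outside of any modeling framework; it is not claimed to be the specific transformation Pyomo applies:

```python
import math

# A complementarity condition  0 <= a ⟂ b >= 0  requires
# a >= 0, b >= 0, and a*b == 0. The Fischer-Burmeister function
# phi(a, b) = sqrt(a^2 + b^2) - a - b
# is zero exactly when the condition holds, so the condition can be
# re-expressed as the smooth-ish equation phi(a, b) = 0.
def fischer_burmeister(a, b):
    return math.sqrt(a * a + b * b) - a - b

print(abs(fischer_burmeister(0.0, 5.0)) < 1e-12)  # holds: a = 0, b >= 0
print(abs(fischer_burmeister(3.0, 0.0)) < 1e-12)  # holds: b = 0, a >= 0
print(abs(fischer_burmeister(2.0, 2.0)) < 1e-12)  # violated: a*b != 0
```

Reformulations like this let an ordinary NLP solver attack a model whose raw form contains complementarity constraints.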
Performing aggressive code optimization with an ability to rollback...
Office of Scientific and Technical Information (OSTI)
Title: Performing aggressive code optimization with an ability to rollback changes made by the aggressive optimizations Mechanisms for aggressively optimizing computer code are ...
The Use of Exhaust Gas Recirculation to Optimize Fuel Economy...
Energy.gov [DOE] (indexed site)
More Documents & Publications DoE Optimally Controlled Flexible Fuel Powertrain System E85 Optimized Engine through Boosting, Spray Optimized GDi, VCR and Variable Valvetrain ...
Maximize, minimize or target - optimization for a fitted response from a designed experiment
Anderson-Cook, Christine M.; Cao, Yongtao; Michaela, Christine
2016-04-01
One of the common goals of running and analyzing a designed experiment is to find a location in the design space that optimizes the response of interest. Depending on the goal of the experiment, we may seek to maximize or minimize the response, or set the process to hit a particular target value. After the designed experiment, a response model is fitted and the optimal settings of the input factors are obtained based on the estimated response model. Furthermore, the suggested optimal settings of the input factors are then used in the production environment.
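The fit-then-optimize step described above can be sketched for one factor: fit a quadratic response model through the experimental runs, then set the factor at the stationary point. The run data below are invented for illustration:

```python
# Hedged sketch: after a one-factor designed experiment, fit
# y = c0 + c1*x + c2*x^2 through three runs (exact interpolation) and
# place the factor at the stationary point x* = -c1 / (2*c2).
def fit_quadratic(pts):
    (x1, y1), (x2, y2), (x3, y3) = pts
    # Newton divided differences give the quadratic's coefficients
    c2 = ((y3 - y1) / (x3 - x1) - (y2 - y1) / (x2 - x1)) / (x3 - x2)
    c1 = (y2 - y1) / (x2 - x1) - c2 * (x1 + x2)
    c0 = y1 - c1 * x1 - c2 * x1 * x1
    return c0, c1, c2

runs = [(0.0, 1.0), (1.0, 4.0), (2.0, 3.0)]   # (factor setting, response)
c0, c1, c2 = fit_quadratic(runs)
x_star = -c1 / (2 * c2)                        # maximizer here, since c2 < 0
print(x_star)  # 1.25
```

With more runs than coefficients one would use least squares instead of exact interpolation, and with several factors the stationary point comes from solving the gradient system of the fitted quadratic surface.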
Domestic Distribution of U.S. Coal by Origin State, Consumer...
Energy Information Administration (EIA) (indexed site)
Origin State, Consumer, Destination and Method of Transportation Home > Coal > Annual Coal Distribution > Coal Origin Map > Domestic Distribution by Origin: Alaska Data For: 2002...
Evaluation of Generic EBS Design Concepts and Process Models...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Generic EBS Design Concepts and Process Models Implications to EBS Design Optimization Evaluation of Generic EBS Design Concepts and Process Models Implications to EBS Design...
Optimization of the main parameters of miniature split Stirling cooler
Tsesarsky, J.
1995-12-01
Unlike other modern industrial products, Stirling refrigerator development is based mainly on experimental methods. A newly developed high-accuracy numerical model for Stirling refrigerator analysis provides a good approximation of the gas stream process, assured by a large number of nodes placed in the regenerator (300) and a large number of time steps (240 per machine revolution). Confidence in the accuracy of the equation solutions makes optimization of Stirling coolers possible. In addition to information about the refrigerator temperature field, the model provides information about the driving force of the split-cooler displacer for computer-aided design of the displacer driver. In this paper, four parameters of a split Stirling refrigerator are optimized: compressor-expander swept volume ratio, phase angle, regenerator length, and regenerator diameter. In each program run, the power delivered to the gas was kept constant by continuous correction of the compressor and expander strokes without changing their ratio. Collecting the results produces the optimum cooler structure. The driving displacer force as a function of crank angle is also available.
About an Optimal Visiting Problem
Bagagiolo, Fabio; Benetton, Michela
2012-02-15
In this paper we are concerned with the optimal control problem of minimizing the time needed to reach (visit) a fixed number of target sets, in particular more than one target. Such a problem is of course reminiscent of the famous 'Traveling Salesman Problem' and inherits all of its computational difficulties. Our aim is to apply the dynamic programming technique in order to characterize the value function of the problem as the unique viscosity solution of a suitable Hamilton-Jacobi equation. We introduce some 'external' variables, one per target, which keep in memory whether the corresponding target has already been visited, and we transform the visiting problem into a suitable Mayer problem. This allows us to overcome the lack of a Dynamic Programming Principle for the original problem. The external variables evolve with a hysteresis law, and the Hamilton-Jacobi equation turns out to be discontinuous.
Hybrid Optimization Parallel Search PACKage
Energy Science and Technology Software Center
2009-11-10
HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
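The flavor of generating set search can be conveyed by a minimal compass-search loop: poll along the coordinate directions, accept improving points, and contract the step when no poll point improves. This sketch omits everything that makes HOPSPACK practical (caching, constraints, parallel evaluation):

```python
# Minimal compass/pattern search in the spirit of generating set search.
def compass_search(f, x, step=1.0, tol=1e-6):
    fx = f(x)
    while step > tol:
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                y = list(x)
                y[i] += d
                fy = f(y)
                if fy < fx:            # accept an improving poll point
                    x, fx, improved = y, fy, True
        if not improved:
            step *= 0.5                # contract the pattern and re-poll
    return x, fx

def sphere(v):
    return sum(t * t for t in v)

x_opt, f_opt = compass_search(sphere, [3.0, -2.0])
print(x_opt, f_opt)  # converges to the minimizer [0.0, 0.0] with value 0.0
```

No derivatives are evaluated anywhere, which is why methods of this family apply to black-box simulations.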
HCCI Engine Optimization and Control
Rolf D. Reitz
2005-09-30
The goal of this project was to develop methods to optimize and control Homogeneous-Charge Compression Ignition (HCCI) engines, with emphasis on diesel-fueled engines. HCCI offers the potential of nearly eliminating IC engine NOx and particulate emissions at reduced cost over Compression Ignition Direct Injection engines (CIDI) by controlling pollutant emissions in-cylinder. The project was initiated in January, 2002, and the present report is the final report for work conducted on the project through December 31, 2004. Periodic progress has also been reported at bi-annual working group meetings held at USCAR, Detroit, MI, and at the Sandia National Laboratories. Copies of these presentation materials are available on CD-ROM, as distributed by the Sandia National Labs. In addition, progress has been documented in DOE Advanced Combustion Engine R&D Annual Progress Reports for FY 2002, 2003 and 2004. These reports are included as the Appendices in this Final report.
Demonstration of integrated optimization software
2008-01-01
NeuCO has designed and demonstrated the integration of five system control modules using its proprietary ProcessLink® technology of neural networks, advanced algorithms and fuzzy logic to maximize performance of coal-fired plants. The separate modules control cyclone combustion, sootblowing, SCR operations, performance and equipment maintenance. ProcessLink® provides overall plant-level integration of controls responsive to plant operator and corporate criteria. Benefits of an integrated approach include NOx reduction; improvements in heat rate, availability, efficiency and reliability; extension of SCR catalyst life; and reduced consumption of ammonia. All translate into cost savings. As plant complexity increases through retrofit, repowering or other plant modifications, this integrated process optimization approach will be an important tool for plant operators. 1 fig., 1 photo.
Moisture Research - Optimizing Wall Assemblies
Arena, L.; Mantha, P.
2013-05-01
The Consortium for Advanced Residential Buildings (CARB) evaluated several different configurations of wall assemblies to determine the accuracy of moisture modeling and make recommendations to ensure durable, efficient assemblies. WUFI and THERM were used to model the hygrothermal and heat transfer characteristics of these walls.
Calorimeter/absorber optimization for a RHIC dimuon experiment
Aronson, S.H.; Murtagh, M.J.; Starks, M.; Liu, X.T.; Petitt, G.A.; Zhang, Z.; Ewell, L.A.; Hill, J.C.; Wohn, F.K.; Costales, J.B.; Namboodiri, M.N.; Sangster, T.C.; Thomas, J.H.; Gavron, A.; Waters, L.; Kehoe, W.L.; Steadman, S.G.; Awes, T.C.; Obenshain, F.E.; Saini, S.; Young, G.R.; Chang, J.; Fung, S.Y.; Kang, J.H.; Kreke, J.; He, Xiaochun; Sorensen, S.P.; Cornell, E.C.; Maguire, C.F.
1991-12-31
The RD-10 R&D effort on calorimeter/absorber optimization for a RHIC experiment had an extended run in 1991 using the A2 test beam at the AGS. Measurements were made of the leakage of particles behind various model hadron calorimeters. Behavior of the calorimeter/absorber as a muon identifier was studied. First comparisons of results from test measurements to calculated results using the GHEISHA code were made.
Gschwind, Michael K
2013-07-23
Mechanisms for aggressively optimizing computer code are provided. With these mechanisms, a compiler determines an optimization to apply to a portion of source code and determines if the optimization as applied to the portion of source code will result in unsafe optimized code that introduces a new source of exceptions being generated by the optimized code. In response to a determination that the optimization is an unsafe optimization, the compiler generates an aggressively compiled code version, in which the unsafe optimization is applied, and a conservatively compiled code version in which the unsafe optimization is not applied. The compiler stores both versions and provides them for execution. Mechanisms are provided for switching between these versions during execution in the event of a failure of the aggressively compiled code version. Moreover, predictive mechanisms are provided for predicting whether such a failure is likely.
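The two-version-with-fallback scheme can be illustrated at a much smaller scale than a compiler. The example below is an invented Python analogue, not the patented mechanism: keep an "aggressive" and a "conservative" version of the same routine and roll back when the aggressive one raises an exception at run time:

```python
# Invented illustration of the aggressive/conservative dual-version idea.
def aggressive_mean(xs):
    return sum(xs) / len(xs)          # "optimized": assumes xs is non-empty

def conservative_mean(xs):
    return sum(xs) / len(xs) if xs else 0.0   # "safe": handles the edge case

def run_with_rollback(fast, safe, *args):
    try:
        return fast(*args)            # try the aggressively optimized version
    except ZeroDivisionError:         # new exception introduced by optimization
        return safe(*args)            # roll back to the conservative version

print(run_with_rollback(aggressive_mean, conservative_mean, [1.0, 2.0, 3.0]))  # 2.0
print(run_with_rollback(aggressive_mean, conservative_mean, []))               # 0.0
```

The compiler-level mechanism additionally predicts ahead of time which version to dispatch, rather than always paying for a failed attempt.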
Origin of the spin reorientation transitions in (Fe1–xCox)2B alloys
Belashchenko, Kirill D.; Ke, Liqin; Däne, Markus; Benedict, Lorin X.; Lamichhane, Tej Nath; Taufour, Valentin; Jesche, Anton; Bud'ko, Sergey L.; Canfield, Paul C.; Antropov, Vladimir P.
2015-02-13
Low-temperature measurements of the magnetocrystalline anisotropy energy K in (Fe1–xCox)2B alloys are reported, and the origin of this anisotropy is elucidated using a first-principles electronic structure analysis. The calculated concentration dependence K(x) with a maximum near x = 0.3 and a minimum near x = 0.8 is in excellent agreement with experiment. This dependence is traced down to spin-orbital selection rules and the filling of electronic bands with increasing electronic concentration. In conclusion, at the optimal Co concentration, K depends strongly on the tetragonality and doubles under a modest 3% increase of the c/a ratio, suggesting that the magnetocrystalline anisotropy can be further enhanced using epitaxial or chemical strain.
From the Building to the Grid: An Energy Revolution and Modeling...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
important. * If we model a whole country, no need for detailed response, but need to know dynamic response. Sophisticated models are not required. * Models should be optimized for...
C++ LIBRARY OF ALGORITHMS FOR STOCHASTIC GLOBAL OPTIMIZATION
Energy Science and Technology Software Center
2001-10-25
SGOPT is a C++ library that includes implementations of several algorithms for stochastic global optimization and derivative free optimization.
Intel compiler performance optimization and characterization
U.S. Department of Energy (DOE) - all webpages (Extended Search)
compiler performance optimization and characterization Intel compiler performance optimization and characterization May 13, 2015 NERSC will host an in-depth training presentation on using the Intel compiler as a performance optimization and characterization tool. The presentation will be May 13th from 10am to 12pm Pacific time. The speaker will be Rakesh Krishnaiyer of Intel. Abstract For identified hotspots/analysis done using performance profiling tools (such as VTune), we will discuss how to
Energy Optimized Desalination Technology Development Workshop - November
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
5-6, 2015 | Department of Energy Workshops » Energy Optimized Desalination Technology Development Workshop - November 5-6, 2015 Energy Optimized Desalination Technology Development Workshop - November 5-6, 2015 The Department of Energy Office of Energy Efficiency and Renewable Energy and Office of Fossil Energy hosted a workshop on Energy Optimized Desalination Technology Development on November 5-6, 2015 at the Hilton San Francisco Union Square, in San Francisco, CA. This 2-day workshop
Plant Optimization Technologies | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Science & Innovation » Clean Coal » Crosscutting Research » Plant Optimization Technologies Plant Optimization Technologies The Plant Optimization Technologies Program is a diverse, scientifically oriented research and development program that addresses issues affecting the way coal is used. The program's primary emphasis is to support the development of advanced technologies that use coal with near-zero emissions. To provide this support, the program identifies scientific and
Parallel Programming and Optimization for Intel Architecture
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Parallel Programming and Optimization for Intel Architecture Parallel Programming and Optimization for Intel Architecture August 14, 2015 by Richard Gerber Intel is sponsoring a series of webinars entitled "Parallel Programming and Optimization for Intel Architecture." Here's the schedule for August (Registration link is: https://attendee.gotowebinar.com/register/6325131222429932289) Mon, August 17 - "Hello world from Intel Xeon Phi coprocessors". Overview of architecture,
Large-Scale Optimization for Bayesian Inference in Complex Systems
Willcox, Karen; Marzouk, Youssef
2013-11-12
The SAGUARO (Scalable Algorithms for Groundwater Uncertainty Analysis and Robust Optimization) Project focused on the development of scalable numerical algorithms for large-scale Bayesian inversion in complex systems that capitalize on advances in large-scale simulation-based optimization and inversion methods. The project was a collaborative effort among MIT, the University of Texas at Austin, Georgia Institute of Technology, and Sandia National Laboratories. The research was directed in three complementary areas: efficient approximations of the Hessian operator, reductions in complexity of forward simulations via stochastic spectral approximations and model reduction, and employing large-scale optimization concepts to accelerate sampling. The MIT--Sandia component of the SAGUARO Project addressed the intractability of conventional sampling methods for large-scale statistical inverse problems by devising reduced-order models that are faithful to the full-order model over a wide range of parameter values; sampling then employs the reduced model rather than the full model, resulting in very large computational savings. Results indicate little effect on the computed posterior distribution. On the other hand, in the Texas--Georgia Tech component of the project, we retain the full-order model, but exploit inverse problem structure (adjoint-based gradients and partial Hessian information of the parameter-to-observation map) to implicitly extract lower dimensional information on the posterior distribution; this greatly speeds up sampling methods, so that fewer sampling points are needed. We can think of these two approaches as ``reduce then sample'' and ``sample then reduce.'' In fact, these two approaches are complementary, and can be used in conjunction with each other. Moreover, they both exploit deterministic inverse problem structure, in the form of adjoint-based gradient and Hessian information of the underlying parameter-to-observation map, to achieve their
Getting Started and Optimization Strategy
U.S. Department of Energy (DOE) - all webpages (Extended Search)
of differences is summarized in the table below: Key differences are highlighted in red. ... Below is the roofline model for Edison and the marker for the code block above: The red ...
Financing Tribal Energy Infrastructure & Energy Optimization...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Optimization Infrastructure (EOI) www.projectseastar.org WHERE WHAT Tribe's role? * Entrepreneur * Investor * Government WHO Want's the money: * Private Entity * Public Entity * ...
Fuzzy logic control and optimization system
Lou, Xinsheng
2012-04-17
A control system (300) for optimizing a power plant includes a chemical loop having an input for receiving an input signal (369) and an output for outputting an output signal (367), and a hierarchical fuzzy control system (400) operably connected to the chemical loop. The hierarchical fuzzy control system (400) includes a plurality of fuzzy controllers (330). The hierarchical fuzzy control system (400) receives the output signal (367), optimizes the input signal (369) based on the received output signal (367), and outputs an optimized input signal (369) to the input of the chemical loop to control a process of the chemical loop in an optimized manner.
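A single fuzzy controller of the kind composed in such a hierarchy can be sketched in a few lines. The membership functions and rule base below are invented for illustration; the patent's hierarchy of controllers for a chemical-loop power plant is far more elaborate:

```python
# Toy single-input fuzzy controller: fuzzify an error signal with triangular
# membership functions, apply three rules, defuzzify by weighted average.
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_control(error):
    # Rule base (invented): negative error -> raise input signal;
    # zero error -> hold; positive error -> lower input signal.
    mu_neg = tri(error, -2.0, -1.0, 0.0)
    mu_zero = tri(error, -1.0, 0.0, 1.0)
    mu_pos = tri(error, 0.0, 1.0, 2.0)
    actions = {+1.0: mu_neg, 0.0: mu_zero, -1.0: mu_pos}
    total = sum(actions.values())
    # Weighted-average (centroid-style) defuzzification
    return sum(a * m for a, m in actions.items()) / total if total else 0.0

print(fuzzy_control(-0.5))  # halfway between "raise" and "hold" -> 0.5
print(fuzzy_control(0.0))   # no correction -> 0.0
```

A hierarchical scheme then feeds such controllers' outputs into higher-level controllers that arbitrate between competing objectives.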
Optimizing Asset Utilization and Operating Efficiency Efficiently...
Energy.gov [DOE] (indexed site)
Optimizing Asset Utilization and Operating Efficiency Efficiently (55.31 KB) More Documents & Publications Metrics for Measuring Progress Toward Implementation of the Smart Grid (June ...
Stochastic Optimization of Complex Systems (Technical Report...
Office of Scientific and Technical Information (OSTI)
DOE Contract Number: SC0002587 Resource Type: Technical Report Research Org: University of ... Subject: 97 MATHEMATICS AND COMPUTING optimization, stochastic methods, complex systems ...
Optimizing areal capacities through understanding the limitations...
Office of Scientific and Technical Information (OSTI)
Title: Optimizing areal capacities through understanding the limitations of lithium-ion electrodes Increasing the areal capacity or electrode thickness in lithium ion batteries is ...
Renewable Energy Optimization (REopt) (Fact Sheet)
Not Available
2014-06-01
REopt is an energy planning platform offering concurrent, multiple technology integration and optimization capabilities to help clients meet their cost savings and energy performance goals.
Infrared Mapping Helps Optimize Catalytic Reactions
U.S. Department of Energy (DOE) - all webpages (Extended Search)
evolution of reactants into a desired product could be an invaluable tool for optimizing pharmaceutical-related synthetic processes that take place in flow reactors. Schematic of...
Optimizing Web Pages for Search Engines
For search engine optimization (SEO), follow these best practices when writing content for Office of Energy Efficiency and Renewable Energy (EERE) websites and applications.
Optimizing parameters for predicting the geochemical behavior...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
of discrete fracture networks in geothermal systems Optimizing parameters for predicting the geochemical behavior and performance of discrete fracture networks in geothermal ...
ENERGY SIGNATURES ENERGY SIGNATURES Optimizing Production and...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
SIGNATURES ENERGY SIGNATURES Optimizing Production and Mitigating Impacts Los Alamos National Laboratory's Science of Signatures 2 Advanced Science for Energy Signatures Energy in ...
THE ORIGIN OF COMPLEX ORGANIC MOLECULES IN PRESTELLAR CORES
Vastel, C.; Ceccarelli, C.; Lefloch, B.; Bachiller, R.
2014-11-01
Complex organic molecules (COMs) have been detected in a variety of environments, including cold prestellar cores. Given the low temperatures of these objects, these detections challenge existing models. We report here new observations toward the prestellar core L1544. They are based on an unbiased spectral survey of the 3 mm band at the IRAM 30 m telescope as part of the Large Program ASAI. The observations allow us to provide a full census of the oxygen-bearing COMs in this source. We detected tricarbon monoxide, methanol, acetaldehyde, formic acid, ketene, and propyne with abundances varying from 5 × 10^-11 to 6 × 10^-9. The non-LTE analysis of the methanol lines shows that they are likely emitted at the border of the core at a radius of ∼8000 AU, where T ∼ 10 K and n_H2 ∼ 2 × 10^4 cm^-3. Previous works have shown that water vapor is enhanced in the same region because of the photodesorption of water ices. We propose that a non-thermal desorption mechanism is also responsible for the observed emission of methanol and COMs from the same layer. The desorbed oxygen and a small amount of desorbed methanol and ethene are enough to reproduce the abundances of tricarbon monoxide, methanol, acetaldehyde, and ketene measured in L1544. These new findings open the possibility that COMs in prestellar cores originate in a similar outer layer rather than in the dense inner cores, as previously assumed, and that their formation is driven by the non-thermally desorbed species.
New Models Help Optimize Development of Bakken Shale Resources...
Office of Environmental Management (EM)
School of Mines (CSM), through research funded by FE's Oil and Natural Gas Program. A "play" is a shale formation containing significant accumulations of natural gas or oil. ...
Modeling and Optimization of Hybrid Solar Thermoelectric Systems...
Office of Scientific and Technical Information (OSTI)
DOE Contract Number: SC0001299; FG02-09ER46577 Resource Type: Journal Article Resource Relation: Journal Name: Solar Energy; Journal Volume: 85; Related Information: S3TEC partners ...
Oneida Tribe of Indians Energy Optimization Model Development...
Office of Environmental Management (EM)
... thermal Bioenergy Biomass Thermal Electric Fuels CNG, biodiesel, electric Conventionals Strategy Energy history Energy forecast ...
optimal initial conditions for coupling ice sheet models to earth...
Office of Scientific and Technical Information (OSTI)
Authors: Perego, Mauro 1 ; Price, Stephen F. Dr 2 ; Stadler, Georg 3 + Show Author Affiliations Sandia National Laboratories Los Alamos National Laboratory Institute for ...
Modeling and Optimization of Direct Chill Casting to Reduce Ingot...
Office of Scientific and Technical Information (OSTI)
Web site http://www.osti.gov/bridge Reports produced before January 1, 1996, may be ... 703-605-6900 E-mail info@ntis.fedworld.gov Web site http://www.ntis.gov/support...
Applying the Battery Ownership Model in Pursuit of Optimal Battery...
Energy.gov [DOE] (indexed site)
Vehicle Technologies Office: 2013 Energy Storage R&D Progress Report, Sections 4-6 Analysis of Electric Vehicle Battery Performance Targets Building America Whole-House Solutions ...
Optimized boundary driven flows for dynamos in a sphere
Khalzov, I. V.; Brown, B. P.; Cooper, C. M.; Weisberg, D. B.; Forest, C. B. [Center for Magnetic Self Organization in Laboratory and Astrophysical Plasmas, University of Wisconsin-Madison, 1150 University Avenue, Madison, Wisconsin 53706 (United States)
2012-11-15
We perform numerical optimization of the axisymmetric flows in a sphere to minimize the critical magnetic Reynolds number Rm_cr required for dynamo onset. The optimization is done for the class of laminar incompressible flows of von Karman type satisfying the steady-state Navier-Stokes equation. Such flows are determined by equatorially antisymmetric profiles of driving azimuthal (toroidal) velocity specified at the spherical boundary. The model is relevant to the Madison plasma dynamo experiment, whose spherical boundary is capable of differential driving of plasma in the azimuthal direction. We show that the dynamo onset in this system depends strongly on details of the driving velocity profile and the fluid Reynolds number Re. It is found that the overall lowest Rm_cr ≈ 200 is achieved at Re ≈ 240 for the flow, which is hydrodynamically marginally stable. We also show that the optimized flows can sustain dynamos only in the range Rm_cr
Optimizing potential energy functions for maximal intrinsic hyperpolarizability
Zhou Juefei; Szafruga, Urszula B.; Kuzyk, Mark G.; Watkins, David S.
2007-11-15
We use numerical optimization to study the properties of (1) the class of one-dimensional potential energy functions and (2) systems of point nuclei in two dimensions that yield the largest intrinsic hyperpolarizabilities, which we find to be within 30% of the fundamental limit. In all cases, we use a one-electron model. It is found that a broad range of optimized potentials, each of very different character, yield the same intrinsic hyperpolarizability ceiling of 0.709. Furthermore, all optimized potential energy functions share common features such as (1) the value of the normalized transition dipole moment to the dominant state, which forces the hyperpolarizability to be dominated by only two excited states and (2) the energy ratio between the two dominant states. All optimized potentials are found to obey the three-level ansatz to within about 1%. Many of these potential energy functions may be implementable in multiple quantum well structures. The subset of potentials with undulations reaffirm that modulation of conjugation may be an approach for making better organic molecules, though there appear to be many others. Additionally, our results suggest that one-dimensional molecules may have larger diagonal intrinsic hyperpolarizability β_xxx^int than higher-dimensional systems.
Energy Optimization (Electric)- Commercial Efficiency Program
The "Michigan Public Clean, Renewable, and Efficient Energy Act" (Public Act 295 passed in 2008) provided original authorization to create utility energy efficiency programs across the state. Com...
The Origin of Mass (Conference) | SciTech Connect
Office of Scientific and Technical Information (OSTI)
Authors: Boyle, P; Buchoff, M; Christ, N; Izubuchi, T; Jung, C; Luu, T; Mawhinney, R; Schroeder, C; Soltz, R; ...
Origin of banded iron formations: oceanic crust leaching & self...
Office of Scientific and Technical Information (OSTI)
Subject: 58 GEOSCIENCES; IRON; LEACHING; OCEANIC CRUST; ORIGIN
ORIGIN OF MAGNETIC FIELD IN THE INTRACLUSTER MEDIUM: PRIMORDIAL OR ASTROPHYSICAL?
Cho, Jungyeon
2014-12-20
The origin of magnetic fields in galaxy clusters is still an unsolved problem, largely due to our poor understanding of initial seed magnetic fields. If the seed magnetic fields have primordial origins, it is likely that large-scale pervasive magnetic fields were present before the formation of the large-scale structure. On the other hand, if they were ejected from astrophysical bodies, then they were highly localized in space at the time of injection. In this paper, using turbulence dynamo models for high magnetic Prandtl number fluids, we find constraints on the seed magnetic fields. The hydrodynamic Reynolds number based on the Spitzer viscosity in the intracluster medium (ICM) is believed to be less than O(10^2), while the magnetic Reynolds number can be much larger. In this case, if the seed magnetic fields have primordial origins, they should be stronger than O(10^-11) G, which is very close to the upper limit of O(10^-9) G set by the cosmic microwave background observations. On the other hand, if the seed magnetic fields were ejected from astrophysical bodies, any seed magnetic fields stronger than O(10^-9) G can safely magnetize the ICM. Therefore, it is less likely that primordial magnetic fields are the direct origin of present-day magnetic fields in the ICM.
Ferroelectric-like hysteresis loop originated from non-ferroelectric effects
Office of Scientific and Technical Information (OSTI)
(Journal Article) | SciTech Connect. This content will become publicly available on September 6, 2017. Piezoresponse force microscopy (PFM) has provided advanced nanoscale understanding and analysis of ferroelectric and piezoelectric properties. In PFM-based studies, electromechanical strain
Origin of the Magnetoresistance in Oxide Tunnel Junctions Determined through Electric Polarization Control of the Interface
Office of Scientific and Technical Information (OSTI)
(Journal Article) | SciTech Connect. Authors: Inoue, Hisashi; Swartz, Adrian G.; Harmon, Nicholas J.; Tachikawa, Takashi; Hikita, Yasuyuki; ...
Optimizing Interacting Potentials to Form Targeted Materials Structures
Torquato, Salvatore
2015-09-28
Conventional applications of the principles of statistical mechanics (the "forward" problems) start with particle interaction potentials and proceed to deduce local structure and macroscopic properties. Other applications (that may be classified as "inverse" problems) begin with targeted configurational information, such as low-order correlation functions that characterize local particle order, and attempt to back out full-system configurations and/or interaction potentials. To supplement these successful experimental and numerical "forward" approaches, we have focused on inverse approaches that make use of analytical and computational tools to optimize interactions for targeted self-assembly of nanosystems. The most original aspect of our work is its inherently inverse approach: instead of predicting structures that result from given interaction potentials among particles, we determine the optimal potential that most robustly stabilizes a given target structure subject to certain constraints. Our inverse approach could revolutionize the manner in which materials are designed and fabricated. There are a number of very tangible target properties, e.g. zero thermal expansion behavior, elastic constants, optical properties for photonic applications, and transport properties.
Strategic plan for infrastructure optimization
Donley, C.D.
1998-05-27
This document represents Fluor Daniel Hanford's and DynCorp's Tri-Cities Strategic Plan for Fiscal Years 1998--2002, the road map that will guide them into the next century and their sixth year of providing safe and cost-effective infrastructure services and support to the Department of Energy (DOE) and the Hanford Site. The Plan responds directly to the issues raised in the FDH/DOE Critical Self Assessment, specifically: (1) a strategy in place to give DOE the management (systems) and physical infrastructure for the future; (2) dealing with the barriers that exist to making change; and (3) a plan to right-size the infrastructure and services, and reduce the cost of providing services. The Plan incorporates initiatives from several studies conducted in Fiscal Year 1997, including: the Systems Functional Analysis, 200 Area Water Commercial Practices Plan, $ million Originated Cost Budget Achievement Plan, the 100 Area Vacate Plan, the Railroad Shutdown Plan, as well as recommendations from the recently completed Review of Hanford Electrical Utility. These and other initiatives identified over the next five years will result in significant improvements in efficiency, allowing a greater portion of the infrastructure budget to be applied to Site cleanup. The Plan outlines a planning and management process that defines infrastructure services and structure by linking site technical baseline data and customer requirements to work scope and resources. The Plan also provides a vision of where Site infrastructure is going and specific initiatives to get there.
Automated Cache Performance Analysis And Optimization
Mohror, Kathryn
2013-12-23
While there is no lack of performance counter tools for coarse-grained measurement of cache activity, there is a critical lack of tools for relating data layout to cache behavior to application performance. Generally, any nontrivial optimizations are either not done at all, or are done "by hand", requiring significant time and expertise. To the best of our knowledge, no tool available to users measures the latency of memory reference instructions for particular addresses and makes this information available to users in an easy-to-use and intuitive way. In this project, we worked to enable the Open|SpeedShop performance analysis tool to gather memory reference latency information for specific instructions and memory addresses, and to gather and display this information in an easy-to-use and intuitive way to aid performance analysts in identifying problematic data structures in their codes. This tool was primarily designed for use in the supercomputer domain as well as grid, cluster, cloud-based parallel e-commerce, and engineering systems and middleware. Ultimately, we envision a tool to automate optimization of application cache layout and utilization in the Open|SpeedShop performance analysis tool. To commercialize this software, we worked to develop core capabilities for gathering enhanced memory usage performance data from applications and to create and apply novel methods for automatic data structure layout optimizations, tailoring the overall approach to support existing supercomputer and cluster programming models and constraints. In this Phase I project, we focused on infrastructure necessary to gather performance data and present it in an intuitive way to users. With the advent of enhanced Precise Event-Based Sampling (PEBS) counters on recent Intel processor architectures and equivalent technology on AMD processors, we are now in a position to access memory reference information for particular addresses. Prior to the introduction of PEBS counters
Optimizing Monitoring Designs under Alternative Objectives
Gastelum, Jason A. (Richland, Washington, USA); Porter, Ellen A. (Richland, Washington, USA)
2014-12-31
This paper describes an approach to identify monitoring designs that optimize detection of CO2 leakage from a carbon capture and sequestration (CCS) reservoir and compares the results generated under two alternative objective functions. The first objective function minimizes the expected time to first detection of CO2 leakage; the second, more conservative objective function minimizes the maximum time to leakage detection across the set of realizations. The approach applies a simulated annealing algorithm that searches the solution space by iteratively mutating the incumbent monitoring design. The approach takes into account uncertainty by evaluating the performance of potential monitoring designs across a set of simulated leakage realizations. The approach relies on a flexible two-tiered signature to infer that CO2 leakage has occurred. This research is part of the National Risk Assessment Partnership, a U.S. Department of Energy (DOE) project tasked with conducting risk and uncertainty analysis in the areas of reservoir performance, natural leakage pathways, wellbore integrity, groundwater protection, monitoring, and systems level modeling.
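The simulated-annealing search described above — iteratively mutate the incumbent design and score it across leakage realizations under either objective — can be sketched in a few lines of Python. This is a toy illustration, not the NRAP code: the detection-time table, the design size, and both objective definitions are assumptions made up for the sketch.

```python
import math
import random

random.seed(0)

# Hypothetical setup (not from the paper): for each of 20 candidate monitoring
# locations and 50 simulated leakage realizations, the time at which that
# location would first detect the leak.
N_LOCS, N_REAL, BUDGET = 20, 50, 3
detect = [[random.expovariate(1 / 30) for _ in range(N_REAL)]
          for _ in range(N_LOCS)]

def objective(design, worst_case=False):
    """Time to first detection for a design, aggregated over realizations.

    worst_case=False -> expected time to first detection (first objective);
    worst_case=True  -> maximum time over realizations (the more conservative
    second objective).
    """
    times = [min(detect[i][r] for i in design) for r in range(N_REAL)]
    return max(times) if worst_case else sum(times) / len(times)

def anneal(worst_case=False, steps=2000, t0=10.0):
    """Simulated annealing that iteratively mutates the incumbent design."""
    design = random.sample(range(N_LOCS), BUDGET)
    best, best_val = design[:], objective(design, worst_case)
    for k in range(steps):
        temp = t0 * (1 - k / steps) + 1e-9
        cand = design[:]  # mutate: swap one location for an unused one
        cand[random.randrange(BUDGET)] = random.choice(
            [i for i in range(N_LOCS) if i not in design])
        delta = objective(cand, worst_case) - objective(design, worst_case)
        if delta < 0 or random.random() < math.exp(-delta / temp):
            design = cand
            if objective(design, worst_case) < best_val:
                best, best_val = design[:], objective(design, worst_case)
    return best, best_val

d_mean, v_mean = anneal(worst_case=False)
d_max, v_max = anneal(worst_case=True)
print("min expected detection time:", sorted(d_mean), round(v_mean, 2))
print("min worst-case detection time:", sorted(d_max), round(v_max, 2))
```

Running both objectives on the same realizations shows the trade-off the paper studies: the min-max objective typically picks a more spread-out design than the min-expected one.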
Optimizing Hydronic System Performance in Residential Applications
Arena, L.; Faakye, O.
2013-10-01
Even though new homes constructed with hydronic heat comprise only 3% of the market (US Census Bureau 2009), of the 115 million existing homes in the United States, almost 14 million of those homes (11%) are heated with steam or hot water systems according to 2009 US Census data. Therefore, improvements in hydronic system performance could result in significant energy savings in the US. When operating properly, the combination of a gas-fired condensing boiler with baseboard convectors and an indirect water heater is a viable option for high-efficiency residential space heating in cold climates. Based on previous research efforts, however, it is apparent that these types of systems are typically not designed and installed to achieve maximum efficiency. Furthermore, guidance on proper design and commissioning for heating contractors and energy consultants is hard to find and is not comprehensive. Through modeling and monitoring, CARB sought to determine the optimal combination(s) of components - pumps, high efficiency heat sources, plumbing configurations and controls - that result in the highest overall efficiency for a hydronic system when baseboard convectors are used as the heat emitter. The impact of variable-speed pumps on energy use and system performance was also investigated along with the effects of various control strategies and the introduction of thermal mass.
Moisture Research - Optimizing Wall Assemblies
Arena, Lois; Mantha, Pallavi
2013-05-01
In this project, the Consortium for Advanced Residential Buildings (CARB) team evaluated several different configurations of wall assemblies to determine the accuracy of moisture modeling and make recommendations to ensure durable, efficient assemblies. WUFI and THERM were used to model the hygrothermal and heat transfer characteristics of these walls. Wall assemblies evaluated included code minimum walls using spray foam insulation and fiberglass batts, high R-value walls at least 12 in. thick (R-40 and R-60 assemblies), and brick walls with interior insulation.
Hudson, C.
1995-12-01
Pollution prevention managers need to select the best environmental projects for an installation within a constrained budget but have no standard way of selecting the optimal mix of projects. This thesis proposes a decision tool to aid decision makers in choosing this optimal mix. The model was built using decision analysis theory, which provides a framework to aid the decision maker. Criteria used in the model for selection were determined using a questionnaire sent to base-level pollution prevention managers. The model uses DPL(TM), a software package designed to build, analyze, and conduct sensitivity analysis of decision problems, to perform the quantitative analysis. Built-in functions of DPL(TM) allow the decision maker to see the optimal decision policy based on the values entered into the model and to run sensitivity analysis to determine which values are the most critical to the outcome of the model. Decision analysis can be used to create a dominance curve that shows all optimal strategies based on the willingness of the decision maker to make tradeoffs between attributes. This model provides analytical data that can be used to justify decisions made by the pollution prevention manager when selecting the optimal mix of pollution prevention projects for implementation.
Optimizing legacy molecular dynamics software with directive-based offload
Michael Brown, W.; Carrillo, Jan-Michael Y.; Gavhane, Nitin; Thakkar, Foram M.; Plimpton, Steven J.
2015-05-14
The directive-based programming models are one solution for exploiting many-core coprocessors to increase simulation rates in molecular dynamics. They offer the potential to reduce code complexity with offload models that can selectively target computations to run on the CPU, the coprocessor, or both. In our paper, we describe modifications to the LAMMPS molecular dynamics code to enable concurrent calculations on a CPU and coprocessor. We also demonstrate that standard molecular dynamics algorithms can run efficiently on both the CPU and an x86-based coprocessor using the same subroutines. As a consequence, we demonstrate that code optimizations for the coprocessor also result in speedups on the CPU; in extreme cases up to 4.7X. We provide results for LAMMPS benchmarks and for production molecular dynamics simulations using the Stampede hybrid supercomputer with both Intel(R) Xeon Phi(TM) coprocessors and NVIDIA GPUs. The optimizations presented have increased simulation rates by over 2X for organic molecules and over 7X for liquid crystals on Stampede. The optimizations are available as part of the "Intel package" supplied with LAMMPS. (C) 2015 Elsevier B.V. All rights reserved.
Weather forecast-based optimization of integrated energy systems.
Zavala, V. M.; Constantinescu, E. M.; Krause, T.; Anitescu, M.
2009-03-01
In this work, we establish an on-line optimization framework to exploit detailed weather forecast information in the operation of integrated energy systems, such as buildings and photovoltaic/wind hybrid systems. We first discuss how the use of traditional reactive operation strategies that neglect the future evolution of the ambient conditions can translate into high operating costs. To overcome this problem, we propose the use of a supervisory dynamic optimization strategy that can lead to more proactive and cost-effective operations. The strategy is based on the solution of a receding-horizon stochastic dynamic optimization problem. This permits the direct incorporation of economic objectives, statistical forecast information, and operational constraints. To obtain the weather forecast information, we employ a state-of-the-art forecasting model initialized with real meteorological data. The statistical ambient information is obtained from a set of realizations generated by the weather model executed in an operational setting. We present proof-of-concept simulation studies to demonstrate that the proposed framework can lead to significant savings (more than 18% reduction) in operating costs.
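The receding-horizon idea the abstract describes — re-optimize over a forecast window, apply only the first control move, then shift the window as the forecast refreshes — can be illustrated with a deliberately simplified sketch. Everything here (the one-zone thermal model, the discrete heater levels, the price and comfort weights, the random stand-in for a weather forecast) is an assumption made for illustration, not the authors' building or weather model.

```python
import random

random.seed(1)

HORIZON, SETPOINT, PRICE = 6, 21.0, 0.15  # hours, degC, $/kWh (assumed)

def forecast(t0):
    """Stand-in for the weather model: ambient temps over the next HORIZON hours."""
    return [5 + 8 * random.random() for _ in range(HORIZON)]

def step(temp, u, ambient):
    """Toy first-order thermal model of one zone (illustrative only)."""
    return temp + 0.3 * (ambient - temp) + 0.5 * u

def plan(temp, amb):
    """Pick the heater level minimizing energy cost + comfort penalty over the horizon."""
    best_u, best_cost = 0.0, float("inf")
    for u in (0.0, 1.0, 2.0, 3.0):
        T, cost = temp, 0.0
        for a in amb:
            T = step(T, u, a)
            cost += PRICE * u + 2.0 * abs(T - SETPOINT)
        if cost < best_cost:
            best_u, best_cost = u, cost
    return best_u

temp = 18.0
for t in range(24):                       # 24-hour closed loop
    u = plan(temp, forecast(t))           # optimize over the forecast window...
    temp = step(temp, u, forecast(t)[0])  # ...apply only the first move, then shift
print("room temperature after 24 h:", round(temp, 1))
```

The key design point is in the loop: the whole horizon is optimized each hour, but only the first move is applied, so refreshed forecast information (and forecast error, here mimicked by re-drawing the ambient sequence) is absorbed at every step.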
Production optimization in the Provincia field, Colombia
Blann, J.; Jacobson, L.; Faber, C.
1989-02-01
Designing or redesigning production facilities for optimum operation usually results in the generation of maximum profit from an installation. But in older fields, or fields where a short life is expected, design changes may not be a viable option. In such cases, obtaining maximum production within the limits of existing facilities, thereby minimizing new investments, may be an attractive option. This paper discusses application of the latter technique in the Provincia field, Colombia, to optimize oil and gas production within constraints imposed by periodic temporary gas-compression-capacity restrictions and by the configuration of existing oil and gas facilities. The multistep optimization program used at Provincia included improvement of individual well performance, optimization of individual well facilities, fieldwide optimization of surface facilities, and optimization of the field production scheme.
ORIGINS OF NON-MASS-DEPENDENT FRACTIONATION OF EXTRA-TERRESTRIAL OXYGEN
Barcena, Homar; Connolly, Harold C.
2012-08-01
The distribution of oxygen isotopes in meteorites and within the earliest solids that formed in the solar system hints that the precursors of these materials must have undergone a mass-independent process. The mass-independent process is specifically one that fractionates ¹⁶O from ¹⁷O and ¹⁸O. This chemical signature is indicative of non-equilibrium processing, which bears resemblance to unusual terrestrial phenomena such as the fractionation of ozone in the upper Earth atmosphere. That the mass-independent fractionation of oxygen isotopes is preserved within petrological records presents planetary scientists with interesting clues to the events that may have occurred during the formation of the solar system. Currently, there are several hypotheses on the origins of the oxygen isotope distribution within primitive planetary materials, which include both thermal and photochemical models. We present a new model based on a physico-chemical hypothesis for the origin of the non-mass-dependent O-isotope distribution in oxygen-bearing extra-terrestrial materials, which originated from the disproportionation of CO in dark molecular clouds to create CO₂ reservoirs. The disproportionation created a reservoir of heavy oxygen isotopes and could have occurred throughout the evolution of the disk. The CO₂ was a carrier of the isotope anomaly in the solar nebula, and we propose that non-steady-state mixing of these reservoirs with the early rock-forming materials during their formation corresponds with the birth and evolution of the solar system.
Characterization of objective-function spaces in optimal design of experiments
Crary, S.
1994-12-31
I present a systematic study of the objective-function spaces for finding exact I-optimal designs of experiments on continuous cuboidal spaces for multivariate second-order models. The study looked at the number of equally good "global" minima and the aggregate hyper-volume of their basins of attraction, the number of local minima and the hyper-volumes of their basins, and the frequency with which the putatively optimal designs possessed a continuous symmetry or a reflection symmetry. This information can be used to speed search algorithms for optimal designs. For example, reflection symmetries can be identified early in a search, and the dimension of the search space significantly reduced. Our software program I-OPT, which is available via anonymous ftp, uses a variety of application-specific speed-ups to find I-, D-, A-, and composite-optimal designs for a wide class of linear statistical model functions.
DPSS Laser Beam Quality Optimization Through Pump Current Tuning
Omohundro, Rob; Callen, Alice; Sukuta, Sydney; /San Jose City Coll.
2012-03-30
The goal of this study is to demonstrate how a DPSS laser beam's quality parameters can be simultaneously optimized through pump current tuning. Two DPSS lasers of the same make and model were used, where the laser diode pump current was first varied to ascertain the lowest RMS noise region. The lowest noise was found to be 0.13% in this region, and the best M² value of 1.0 and highest laser output power were simultaneously attained at the same current point. The laser manufacturer reported an M² value of 1.3 and an RMS noise value of 0.14% for these lasers. This study therefore demonstrates that pump current tuning a DPSS laser can simultaneously optimize RMS noise, power, and M² values. Future studies will strive to broaden the scope of the beam quality parameters impacted by current tuning.
NOVA-NREL Optimal Vehicle Acquisition Analysis (Brochure)
Blakley, H.
2011-03-01
Federal fleet managers face unique challenges in accomplishing their mission - meeting agency transportation needs while complying with Federal goals and mandates. Included in these challenges are a variety of statutory requirements, executive orders, and internal goals and objectives that typically focus on petroleum consumption and greenhouse gas (GHG) emissions reductions, alternative fuel vehicle (AFV) acquisitions, and alternative fuel use increases. Given the large number of mandates affecting Federal fleets and the challenges faced by all fleet managers in executing day-to-day operations, a primary challenge for agencies and other organizations is ensuring that they are as efficient as possible in using constrained fleet budgets. An NREL Optimal Vehicle Acquisition (NOVA) analysis makes use of a mathematical model with a variety of fleet-related data to create an optimal vehicle acquisition strategy for a given goal, such as petroleum or GHG reduction. The analysis can help fleets develop a vehicle acquisition strategy that maximizes petroleum and greenhouse gas reductions.
The behavior and origin of the excess wing in DEET (N,N-diethyl-3-methylbenzamide)
Hensel-Bielowka, S; Sangoro, Joshua R; Wojnarowska, S; Hawelek, L; Paluch, Marian
2013-01-01
Broadband dielectric spectroscopy, along with a high-pressure technique and quantum-mechanical calculations, is employed to study in detail the behavior and to reveal the origin of the excess wing (EW) in neat N,N-diethyl-3-methylbenzamide (DEET). Our analysis of dielectric spectra again corroborates the idea that the EW is a hidden β-relaxation peak. Moreover, we found that the frequency position of the β peak corresponds to the position of the primitive relaxation of the Coupling Model. We also studied the possible intramolecular rotations in DEET by means of DFT calculations. On that basis we were able to describe the EW as the JG β-relaxation and find the possible origin of the γ-relaxation visible in DEET dielectric spectra at very low temperatures.
Zitney, S.E.
2007-06-01
Emerging fossil energy power generation systems must operate with unprecedented efficiency and near-zero emissions, while optimizing profitably amid cost fluctuations for raw materials, finished products, and energy. To help address these challenges, the fossil energy industry will have to rely increasingly on the use of advanced computational tools for modeling and simulating complex process systems. In this paper, we present the computational research challenges and opportunities for the optimization of fossil energy power generation systems across the plant lifecycle, from process synthesis and design to plant operations. We also look beyond the plant gates to discuss research challenges and opportunities for enterprise-wide optimization, including planning, scheduling, and supply chain technologies.
Tahvili, Sahar; Österberg, Jonas; Silvestrov, Sergei; Biteus, Jonas
2014-12-10
One of the most important factors in the operations of many corporations today is to maximize profit, and one important tool to that effect is the optimization of maintenance activities. Maintenance activities are, at the highest level, divided into two major areas: corrective maintenance (CM) and preventive maintenance (PM). When optimizing maintenance activities, by a maintenance plan or policy, we seek to find the best activities to perform at each point in time, be it PM or CM. We explore the use of stochastic simulation, genetic algorithms, and other tools for solving complex maintenance planning optimization problems in terms of a suggested framework model based on discrete event simulation.
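A minimal version of the stochastic-simulation side of such a framework can be sketched as follows: simulate a machine whose lifetimes show wear-out (Weibull-distributed), estimate the cost per hour of each candidate PM interval by Monte Carlo, and pick the cheapest policy. The failure distribution, costs, and candidate intervals are invented for the sketch and are not from the paper.

```python
import random

random.seed(2)

# Assumed parameters: Weibull wear-out lifetimes, expensive corrective
# maintenance (CM), cheap preventive maintenance (PM).
SCALE, SHAPE = 100.0, 3.0          # characteristic life (h), wear-out shape > 1
CM_COST, PM_COST = 50.0, 5.0
HORIZON, RUNS = 2000.0, 100

def simulate(pm_interval):
    """Monte Carlo estimate of maintenance cost per hour under a PM policy."""
    total = 0.0
    for _ in range(RUNS):
        t = cost = 0.0
        while t < HORIZON:
            ttf = random.weibullvariate(SCALE, SHAPE)  # time to next failure
            if ttf < pm_interval:      # failure happens first -> corrective repair
                t, cost = t + ttf, cost + CM_COST
            else:                      # PM happens first -> renew the machine
                t, cost = t + pm_interval, cost + PM_COST
        total += cost / HORIZON
    return total / RUNS

best_cost, best_T = min((simulate(T), T) for T in (10, 25, 50, 100, 200))
print("cheapest PM interval:", best_T, "h, cost/hour:", round(best_cost, 3))
```

Note the design choice of a wear-out (shape > 1) lifetime distribution: with memoryless exponential failures, preventive maintenance cannot help, so the interesting PM-versus-CM trade-off only appears once aging is modeled.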
Albertsson, T.; Semenov, D.; Henning, Th.
2014-03-20
Formation and evolution of water in the solar system and the origin of water on Earth constitute one of the most interesting questions in astronomy. The prevailing hypothesis for the origin of water on Earth is by delivery through water-rich small solar system bodies. In this paper, the isotopic and chemical evolution of water during the early history of the solar nebula, before the onset of planetesimal formation, is studied. A gas-grain chemical model that includes multiply deuterated species and nuclear spin-states is combined with a steady-state solar nebula model. To calculate initial abundances, we simulated 1 Myr of evolution of a cold and dark TMC-1-like prestellar core. Two time-dependent chemical models of the solar nebula are calculated over 1 Myr: (1) a laminar model and (2) a model with two-dimensional (2D) turbulent mixing. We find that the radial outward increase of the H₂O D/H ratio is shallower in the chemodynamical nebular model than in the laminar model. This is related to more efficient defractionation of HDO via rapid gas-phase processes because the 2D mixing model allows the water ice to be transported either inward and thermally evaporated or upward and photodesorbed. The laminar model shows the Earth water D/H ratio at r ≲ 2.5 AU, whereas for the 2D chemodynamical model this zone is larger, r ≲ 9 AU. Similarly, the water D/H ratios representative of the Oort-family comets, ≈2.5-10 × 10⁻⁴, are achieved within ≈2-6 AU and ≈2-20 AU in the laminar and the 2D model, respectively. We find that with regards to the water isotopic composition and the origin of the comets, the mixing model seems to be favored over the laminar model.
Optimization of Post Combustion in Steelmaking (TRP 9925)
Dr. Richard J. Fruehan; Dr. R. J. Matway
2004-03-31
In the electric arc furnace (EAF) and the basic oxygen furnace (BOF) for producing steel, the major off-gas is carbon monoxide (CO). If the CO can be combusted to CO₂ and the energy transferred to the metal, this reaction will reduce the energy consumed in the EAF and allow for more scrap melting in the BOF, which would significantly lower the energy required to produce steel. This reaction is referred to as post combustion. In order to optimize the post combustion process, computational fluid dynamic (CFD) models of the two steelmaking processes were developed. Before the models could be fully developed, information on reactions affecting post combustion had to be obtained. The role of the reaction of CO₂ with scrap (iron) was measured at the temperatures relevant to post combustion in laboratory experiments. The experiments were done to separate the effects of gas-phase mass transfer, chemical kinetics, and solid-state mass transfer through the iron oxide formed by the reaction. The first CFD model was for the EAF using the FIDAP-CFD(TM) code. Whereas this model gave some useful results, it was incomplete due to problems with the FIDAP program. In the second EAF model, the CFX(TM) code was used and was much more successful. This full 3-D model included all forms of heat transfer and the back reactions of CO₂ with the metal and scrap, and consisted of a primary oxygen lance with side-wall injectors for post combustion. The model could predict the degree of post combustion and heat transfer. The BOF model was a slice of the BOF for which there was symmetry. The model could predict post combustion, heat transfer, temperature profiles, and the effect of operating variables such as oxygen flow rates and distribution. The present research developed several new models, such as limited combustion and depostcombustion. These were all documented by MSA Pass as a sub-contract. Instruction manuals were
A Framework to Design and Optimize Chemical Flooding Processes
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2006-08-31
The goal of this proposed research is to provide an efficient and user friendly simulation framework for screening and optimizing chemical/microbial enhanced oil recovery processes. The framework will include (1) a user friendly interface to identify the variables that have the most impact on oil recovery using the concept of experimental design and response surface maps, (2) UTCHEM reservoir simulator to perform the numerical simulations, and (3) an economic model that automatically imports the simulation production data to evaluate the profitability of a particular design. Such a reservoir simulation framework is not currently available to the oil industry. The objectives of Task 1 are to develop three primary modules representing reservoir, chemical, and well data. The modules will be interfaced with an already available experimental design model. The objective of the Task 2 is to incorporate UTCHEM reservoir simulator and the modules with the strategic variables and developing the response surface maps to identify the significant variables from each module. The objective of the Task 3 is to develop the economic model designed specifically for the chemical processes targeted in this proposal and interface the economic model with UTCHEM production output. Task 4 is on the validation of the framework and performing simulations of oil reservoirs to screen, design and optimize the chemical processes.
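The experimental-design screening described in Task 2 can be sketched in a few lines. The variable names and the recovery function below are hypothetical stand-ins for UTCHEM simulation runs; only the workflow (factorial design, least-squares fit, effect ranking) reflects the abstract:

```python
# Sketch of two-level factorial screening with a main-effects fit.
# "recovery" is a hypothetical stand-in for a UTCHEM simulation run.
import itertools
import numpy as np

# Three strategic variables, coded to -1/+1 (two-level full factorial).
names = ["surfactant_conc", "polymer_conc", "slug_size"]
design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))

def recovery(x):
    # Hypothetical response surface standing in for simulated oil recovery.
    return 50.0 + 8.0 * x[0] + 3.0 * x[1] + 0.5 * x[2] + 2.0 * x[0] * x[1]

y = np.array([recovery(x) for x in design])

# Fit main effects by least squares: y ~ b0 + b1*x1 + b2*x2 + b3*x3.
X = np.hstack([np.ones((len(design), 1)), design])
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Rank variables by absolute main effect to pick the significant ones.
ranked = sorted(zip(names, coef[1:]), key=lambda t: -abs(t[1]))
for name, effect in ranked:
    print(f"{name}: {effect:+.2f}")
```

Because the interaction term is orthogonal to the main effects over a full factorial, the fitted coefficients recover the main effects exactly; this ranking step is what would feed the response-surface maps.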
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2005-07-01
A FRAMEWORK TO DESIGN AND OPTIMIZE CHEMICAL FLOODING PROCESSES
Mojdeh Delshad; Gary A. Pope; Kamy Sepehrnoori
2004-11-01
Technology Solutions for New and Existing Homes: Optimized Slab-on-Grade
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
This project used a recently developed, three-dimensional, below-grade heat transfer simulation (BUilding Foundation Energy Transport Simulation-BCVTB, BUFETS-B) that operates as a subroutine of EnergyPlus to model 10 insulation upgrade options against a base (uninsulated)
Energy optimization in flash smelting
Partelpoeg, E.H.
1985-01-01
The copper smelting industry has been replacing old reverberatory furnaces with energy-efficient flash furnaces. While this in itself has been a significant move towards reduced energy costs, there is as yet no industry consensus as to which mode of flash smelting is optimal. It is possible to model copper smelting, the ensuing converting step, and acid production with linear equations and inequalities. These equations include mass and heat balances, and energy and cost equations. The matrix of equations and inequalities can be entered into a linear programming routine to determine minimum costs. Such a model was developed, and the results indicate that optimum smelting parameters include the following. (1) The grade of matte is 65% Cu. (2) The flash furnace operates autogenously with no air preheat. The flash furnace air is oxygen-enriched to approximately 40 volume % O2. (3) Total energy cost (1985 dollars and prices) for smelting, converting, and acid production is approximately $10 per tonne of concentrate. The general model employed to obtain these optimum conditions can be modified to represent unique smelting conditions.
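The shape of such a cost-minimizing model can be illustrated with a toy linear program. The coefficients below (input costs, heat yields, a single heat-balance constraint, an oxygen cap) are placeholders, not data from the study, and SciPy's `linprog` stands in for whatever LP routine the author used:

```python
# Toy linear program in the spirit of the smelting cost model: meet a heat
# requirement at minimum cost from two inputs. All numbers are illustrative.
from scipy.optimize import linprog

# Decision variables: x = [fuel (GJ), oxygen (t)] per tonne of concentrate.
cost = [4.0, 3.0]            # $/GJ fuel, $/t oxygen (hypothetical prices)
heat_yield = [1.0, 0.8]      # GJ of usable heat delivered per unit input

# Heat balance (>= 10 GJ) written in linprog's "A_ub @ x <= b_ub" form.
A_ub = [[-heat_yield[0], -heat_yield[1]]]
b_ub = [-10.0]
bounds = [(0, None), (0, 5.0)]  # oxygen capped at 5 t (enrichment limit)

res = linprog(cost, A_ub=A_ub, b_ub=b_ub, bounds=bounds)
fuel, oxygen = res.x
print(f"fuel={fuel:.2f} GJ, oxygen={oxygen:.2f} t, cost=${res.fun:.2f}")
```

Oxygen delivers heat at $3.75/GJ versus $4.00/GJ for fuel, so the optimum uses oxygen up to its cap and fuel for the remainder; in the real model, dozens of such balance rows cover mass, heat, and acid production simultaneously.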
The Limits of Porous Materials in the Topology Optimization of Stokes Flows
Evgrafov, Anton
2005-10-15
We consider a problem concerning the distribution of a solid material in a given bounded control volume with the goal of minimizing the potential power of the Stokes flow, with given velocities at the boundary, through the material-free part of the domain. We also study the relaxed problem of the optimal distribution of a porous material with a spatially varying Darcy permeability tensor, where the governing equations are known as the Darcy-Stokes, or Brinkman, equations. We show that introducing the requirement of zero power dissipation due to flow through the porous material into the relaxed problem makes it a well-posed mathematical problem, which admits optimal solutions that have extreme permeability properties (i.e., assume only zero or infinite permeability); thus, they are also optimal in the original (non-relaxed) problem. Two numerical techniques are presented for the solution of the constrained problem. One is based on a sequence of optimal Brinkman flows with increasing viscosities, which from the mathematical point of view is nothing but the exterior penalty approach applied to the problem. The other technique is more specialized, and is based on a 'sizing' approximation of the problem using a mix of two different porous materials with high and low permeabilities, respectively. This paper thus complements the study of Borrvall and Petersson (Internat. J. Numer. Methods Fluids, vol. 41, no. 1, pp. 77-107, 2003), where only sizing optimization problems are treated.
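For orientation, the relaxed (Brinkman) problem has a standard form in the Borrvall-Petersson line of work that this paper builds on; the sketch below uses generic notation (inverse permeability $\alpha(\rho)$, volume fraction bound $\gamma$) rather than the paper's own:

```latex
% Relaxed topology optimization of Stokes flow (Borrvall--Petersson form);
% notation here is generic, not necessarily the paper's.
\min_{\rho,\,u}\ \int_{\Omega}\Big(\frac{\mu}{2}\,|\nabla u|^{2}
    + \frac{\alpha(\rho)}{2}\,|u|^{2} - f\cdot u\Big)\,\mathrm{d}x
\quad\text{s.t.}\quad
-\nabla\cdot(\mu\nabla u) + \alpha(\rho)\,u + \nabla p = f,
\qquad \nabla\cdot u = 0 \ \text{in } \Omega,
```

with $u = g$ on $\partial\Omega$, $0 \le \rho \le 1$, and $\int_{\Omega}\rho\,\mathrm{d}x \le \gamma\,|\Omega|$. The zero-dissipation requirement discussed in the abstract adds the constraint $\int_{\Omega}\alpha(\rho)\,|u|^{2}\,\mathrm{d}x = 0$, which forces $\alpha(\rho)\,|u|^{2} = 0$ almost everywhere: wherever the fluid moves the material must be fully permeable, and wherever material is present the velocity must vanish, yielding the extreme (zero-or-infinite) permeabilities the authors prove optimal.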
iTOUGH2 Universal Optimization Using the PEST Protocol
Finsterle, S.A.
2010-07-01
iTOUGH2 (http://www-esd.lbl.gov/iTOUGH2) is a computer program for parameter estimation, sensitivity analysis, and uncertainty propagation analysis [Finsterle, 2007a, b, c]. iTOUGH2 contains a number of local and global minimization algorithms for automatic calibration of a model against measured data, or for the solution of other, more general optimization problems (see, for example, Finsterle [2005]). A detailed residual and estimation uncertainty analysis is conducted to assess the inversion results. Moreover, iTOUGH2 can be used to perform a formal sensitivity analysis, or to conduct Monte Carlo simulations for the examination of prediction uncertainties. iTOUGH2's capabilities are continually enhanced. As the name implies, iTOUGH2 was developed for use in conjunction with the TOUGH2 forward simulator for nonisothermal multiphase flow in porous and fractured media [Pruess, 1991]. However, iTOUGH2 provides FORTRAN interfaces for the estimation of user-specified parameters (see subroutine USERPAR) based on user-specified observations (see subroutine USEROBS). These user interfaces can be invoked to add new parameter or observation types to the standard set provided in iTOUGH2. They can also be linked to non-TOUGH2 models, i.e., iTOUGH2 can be used as a universal optimization code, similar to other model-independent, nonlinear parameter estimation packages such as PEST [Doherty, 2008] or UCODE [Poeter and Hill, 1998]. However, to make iTOUGH2's optimization capabilities available for use with an external code, the user is required to write some FORTRAN code that provides the link between the iTOUGH2 parameter vector and the input parameters of the external code, and between the output variables of the external code and the iTOUGH2 observation vector. While allowing for maximum flexibility, the coding requirement of this approach limits its applicability to users with FORTRAN coding knowledge. To make iTOUGH2 capabilities accessible to many application models
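The model-independent coupling described above (parameter vector in, observation vector out) can be sketched without any FORTRAN. In this sketch the "external code" is a stand-in Python function with synthetic data, and a damped Gauss-Newton loop with finite-difference Jacobians plays the role of the minimization algorithm:

```python
# Minimal sketch of model-independent calibration in the iTOUGH2/PEST
# spirit: the forward model is treated as a black box of a parameter
# vector. Model, data, and algorithm choices here are illustrative.
import numpy as np

def external_model(p, t):
    # Black-box forward model: exponential drawdown with rate p[1].
    return p[0] * np.exp(-p[1] * t)

t_obs = np.linspace(0.0, 5.0, 20)
p_true = np.array([2.0, 0.7])
y_obs = external_model(p_true, t_obs)  # noise-free synthetic observations

def jacobian(p, t, h=1e-6):
    # Central finite differences, as used when no analytic derivatives exist.
    J = np.empty((t.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (external_model(p + dp, t) - external_model(p - dp, t)) / (2 * h)
    return J

p = np.array([1.0, 1.0])  # initial guess
for _ in range(50):
    r = y_obs - external_model(p, t_obs)   # residual vector
    J = jacobian(p, t_obs)
    step, *_ = np.linalg.lstsq(J, r, rcond=None)
    p = p + 0.5 * step                     # damped Gauss-Newton update

print(p)
```

In iTOUGH2 itself the forward model is an external simulator invoked through the USERPAR/USEROBS interfaces, and the minimization uses more robust algorithms (e.g., Levenberg-Marquardt), but the data flow is the same.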
Optimization strategies for complex engineering applications
Eldred, M.S.
1998-02-01
LDRD research activities have focused on increasing the robustness and efficiency of optimization studies for computationally complex engineering problems. Engineering applications can be characterized by extreme computational expense, lack of gradient information, discrete parameters, non-converging simulations, and nonsmooth, multimodal, and discontinuous response variations. Guided by these challenges, the LDRD research activities have developed application-specific techniques, fundamental optimization algorithms, multilevel hybrid and sequential approximate optimization strategies, parallel processing approaches, and automatic differentiation and adjoint augmentation methods. This report surveys these activities and summarizes the key findings and recommendations.
TRACKING CODE DEVELOPMENT FOR BEAM DYNAMICS OPTIMIZATION
Yang, L.
2011-03-28
Dynamic aperture (DA) optimization with direct particle tracking is a straightforward approach when computing power permits. It can include various realistic errors and is closer to reality than theoretical estimates. In this approach, a fast, parallel tracking code can be very helpful. In this presentation, we describe an implementation of the storage ring particle tracking code TESLA for beam dynamics optimization. It supports MPI-based parallel computing and is robust as a DA calculation engine. This code has been used in the NSLS-II dynamics optimizations and has shown promising performance.