Meliopoulos, Sakis; Cokkinides, George; Fardanesh, Bruce; Hedrington, Clinton
2013-12-31
This is the final report for this project, which was performed from October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, and two very important applications were demonstrated: (a) high-fidelity generating-unit parameter estimation and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. Also, the dynamic distributed state estimation results are stored (the storage scheme includes data and the coincidental model), enabling automatic reconstruction and "play back" of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of "playing back" at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Authority system and the New York Power Authority's Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the "grid visibility" question. The generator parameter identification method fills an important and practical need of the industry. The "energy function" based
Estimation of the Alpha Factor Parameters for the Emergency Diesel Generators of Ulchin Unit 3
Dae Il Kang; Sang Hoon Han
2006-07-01
Up to the present, generic values of the common cause failure (CCF) event parameters have been used in most PSA projects for the Korean NPPs. However, the CCF analysis should be performed with plant-specific information to meet Category II of the ASME PRA Standard. Therefore, we estimated the Alpha factor parameters of the emergency diesel generator (EDG) for Ulchin Unit 3 by using the International Common-Cause Failure Data Exchange (ICDE) database. The ICDE database provides the member countries with only the information needed for an estimation of the CCF parameters. Ulchin Unit 3, a pressurized water reactor, has two onsite EDGs and one alternate AC (AAC) diesel generator. The onsite EDGs of Units 3 and 4 and the AAC are manufactured by the same company, but they are designed differently. The estimation procedure for the Alpha factor used in this study follows the approach of NUREG/CR-5485. Since we did not find any qualitative difference between the target systems (two EDGs of Ulchin Unit 3) and the original systems (ICDE database), the applicability factor of each CCF event in the ICDE database was assumed to be 1. For the case of three EDGs including the AAC, five CCF events for the EDGs in the ICDE database were identified to be screened out. However, the detailed information for the independent events in the ICDE database is not presented. Thus, we assumed that the applicability factors for the CCF events to be screened out were, to be conservative, 0.5, and those of the other CCF events were 1. The study results show that the Alpha factor parameters estimated using the ICDE database are lower than the generic values of NUREG/CR-5497. The EDG system unavailability for the 1-out-of-3 success criterion, excluding the supporting systems, was calculated as 2.76E-3. Compared with the system unavailability estimated using the data of NUREG/CR-5497, it is decreased by 31.2%. (authors)
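The alpha-factor point estimate this abstract refers to (event counts weighted by applicability factors, then normalized) can be sketched in a few lines. The counts and weights below are made up for illustration; they are not the Ulchin or ICDE data.

```python
# Hedged sketch: alpha-factor point estimates from weighted CCF event counts.
# Event counts n_k = events involving exactly k of m components; applicability
# factors down-weight less-relevant events, as done for the screened-out ICDE
# events (weight 0.5 in the study). All numbers here are illustrative.

def alpha_factors(weighted_counts):
    """alpha_k = n_k / sum_j n_j for k = 1..m (maximum-likelihood point estimate)."""
    total = sum(weighted_counts)
    return [n / total for n in weighted_counts]

# Example: events involving 1, 2, or 3 components, as (k, applicability weight).
events = [(1, 1.0)] * 95 + [(2, 0.5)] * 3 + [(3, 0.5)] * 2
m = 3
counts = [sum(w for k, w in events if k == kk) for kk in range(1, m + 1)]
alphas = alpha_factors(counts)   # alpha_1, alpha_2, alpha_3
```

Down-weighting the multi-component events lowers the higher-order alpha factors, which is the qualitative effect the study reports relative to the generic values.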
Reionization history and CMB parameter estimation
Dizgah, Azadeh Moradinezhad; Kinney, William H.; Gnedin, Nickolay Y. E-mail: gnedin@fnal.edu
2013-05-01
We study how uncertainty in the reionization history of the universe affects estimates of other cosmological parameters from the Cosmic Microwave Background. We analyze WMAP7 data and synthetic Planck-quality data generated using a realistic scenario for the reionization history of the universe obtained from high-resolution numerical simulation. We perform parameter estimation using a simple sudden reionization approximation, and using the Principal Component Analysis (PCA) technique proposed by Mortonson and Hu. We reach two main conclusions: (1) Adopting a simple sudden reionization model does not introduce measurable bias into values for other parameters, indicating that detailed modeling of reionization is not necessary for the purpose of parameter estimation from future CMB data sets such as Planck. (2) PCA analysis does not allow accurate reconstruction of the actual reionization history of the universe in a realistic case.
Lensed CMB simulation and parameter estimation
Lewis, Antony
2005-04-15
Modelling of the weak lensing of the CMB will be crucial to obtain correct cosmological parameter constraints from forthcoming precision CMB anisotropy observations. The lensing affects the power spectrum as well as inducing non-Gaussianities. We discuss the simulation of full-sky CMB maps in the weak lensing approximation and describe a fast numerical code. The series expansion in the deflection angle cannot be used to simulate accurate CMB maps, so a pixel remapping must be used. For parameter estimation, accounting for the change in the power spectrum but assuming Gaussianity is sufficient to obtain accurate results up to Planck sensitivity using current tools. A fuller analysis may be required to obtain accurate error estimates and for more sensitive observations. We demonstrate a simple full-sky simulation and subsequent parameter estimation at Planck-like sensitivity. The lensed CMB simulation and parameter estimation codes are publicly available.
CosmoSIS: Modular cosmological parameter estimation
Zuntz, J.; Paterno, M.; Jennings, E.; Rudd, D.; Manzotti, A.; Dodelson, S.; Bridle, S.; Sehrish, S.; Kowalkowski, J.
2015-06-09
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. Here we present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. Lastly, we illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis
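The modular design argued for here can be caricatured in a few lines. This is a hedged sketch of the idea of interchangeable modules reading from and writing to a shared datablock with clearly defined inputs and outputs; it is not the actual CosmoSIS API, and the module names and quantities are invented.

```python
# Hedged sketch of the modular idea (not the real CosmoSIS interface): each
# module reads named quantities from a shared datablock and writes its outputs
# back, so physics calculations and samplers stay interchangeable.

def module_background(block):
    # toy "theory" module: map a parameter to a predicted observable
    block["h0_expansion"] = block["omega_m"] * 0.5 + 0.35

def module_likelihood(block):
    # toy likelihood module: compare the prediction with a datum
    resid = block["h0_expansion"] - 0.50
    block["loglike"] = -0.5 * (resid / 0.01) ** 2

def run_pipeline(params, modules):
    block = dict(params)   # the datablock: inputs and outputs clearly defined
    for mod in modules:
        mod(block)
    return block

result = run_pipeline({"omega_m": 0.30}, [module_background, module_likelihood])
```

A sampler then only needs `run_pipeline` and the returned log-likelihood, which is the decoupling the paper advocates.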
Generalized REGression Package for Nonlinear Parameter Estimation
Energy Science and Technology Software Center (OSTI)
1995-05-15
GREG computes modal (maximum-posterior-density) and interval estimates of the parameters in a user-provided Fortran subroutine MODEL, using a user-provided vector OBS of single-response observations or matrix OBS of multiresponse observations. GREG can also select the optimal next experiment from a menu of simulated candidates, so as to minimize the volume of the parametric inference region based on the resulting augmented data set.
Estimation of economic parameters of U.S. hydropower resources
Hall, Douglas G.; Hunt, Richard T.; Reeves, Kelly S.; Carroll, Greg R.
2003-06-01
Tools for estimating the cost of developing, operating, and maintaining hydropower resources, in the form of regression curves, were developed based on historical plant data. Development costs that were addressed included licensing, construction, and five types of environmental mitigation. It was found that the data for each type of cost correlated well with plant capacity. A tool for estimating the annual and monthly electric generation of hydropower resources was also developed, along with additional tools to estimate the cost of upgrading a turbine or a generator. The cost-estimating tools and the generation-estimating tool were applied to 2,155 U.S. hydropower sites representing a total potential capacity of 43,036 MW. The sites included totally undeveloped sites, dams without a hydroelectric plant, and hydroelectric plants that could be expanded to achieve greater capacity. Site characteristics and estimated costs and generation for each site were assembled in a database in Excel format that is also included within the EERE Library under the title, "Estimation of Economic Parameters of U.S. Hydropower Resources - INL Hydropower Resource Economics Database."
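Regression curves of the kind described, correlating cost with plant capacity, are commonly fit as power laws in log-log space. A minimal sketch with invented data points (not INL values):

```python
import math

# Hedged sketch: fit a power-law cost curve cost = a * capacity**b by ordinary
# least squares in log-log space. The (capacity MW, cost M$) points below are
# made up for illustration only.

def fit_power_law(capacity_mw, cost_musd):
    xs = [math.log(c) for c in capacity_mw]
    ys = [math.log(k) for k in cost_musd]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    b = sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
        sum((x - xbar) ** 2 for x in xs)
    a = math.exp(ybar - b * xbar)
    return a, b

a, b = fit_power_law([10, 50, 100, 500], [8, 30, 52, 210])
estimate = a * 200 ** b   # predicted cost for a hypothetical 200 MW site
```

An exponent b below 1 captures the economy of scale such curves typically show.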
FUZZY SUPERNOVA TEMPLATES. II. PARAMETER ESTIMATION
Rodney, Steven A.; Tonry, John L. E-mail: jt@ifa.hawaii.ed
2010-05-20
Wide-field surveys will soon be discovering Type Ia supernovae (SNe) at rates of several thousand per year. Spectroscopic follow-up can only scratch the surface for such enormous samples, so these extensive data sets will only be useful to the extent that they can be characterized by the survey photometry alone. In a companion paper we introduced the Supernova Ontology with Fuzzy Templates (SOFT) method for analyzing SNe using direct comparison to template light curves, and demonstrated its application for photometric SN classification. In this work we extend the SOFT method to derive estimates of redshift and luminosity distance for Type Ia SNe, using light curves from the Sloan Digital Sky Survey (SDSS) and Supernova Legacy Survey (SNLS) as a validation set. Redshifts determined by SOFT using light curves alone are consistent with spectroscopic redshifts, showing an rms scatter in the residuals of rms_z = 0.051. SOFT can also derive simultaneous redshift and distance estimates, yielding results that are consistent with the currently favored ΛCDM cosmological model. When SOFT is given spectroscopic information for SN classification and redshift priors, the rms scatter in Hubble diagram residuals is 0.18 mag for the SDSS data and 0.28 mag for the SNLS objects. Without access to any spectroscopic information, and even without any redshift priors from host galaxy photometry, SOFT can still measure reliable redshifts and distances, with an increase in the Hubble residuals to 0.37 mag for the combined SDSS and SNLS data set. Using Monte Carlo simulations, we predict that SOFT will be able to improve constraints on time-variable dark energy models by a factor of 2-3 with each new generation of large-scale SN surveys.
System and method for motor parameter estimation
Luhrs, Bin; Yan, Ting
2014-03-18
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receive a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
Parameter estimation with Sandage-Loeb test
Geng, Jia-Jia; Zhang, Jing-Fei; Zhang, Xin E-mail: jfzhang@mail.neu.edu.cn
2014-12-01
The Sandage-Loeb (SL) test directly measures the expansion rate of the universe in the redshift range 2 ≲ z ≲ 5 by detecting redshift drift in the spectra of the Lyman-α forest of distant quasars. We discuss the impact of future SL test data on parameter estimation for the ΛCDM, the wCDM, and the w0waCDM models. To avoid potential inconsistency with other observational data, we take the best-fitting dark energy model constrained by the current observations as the fiducial model to produce 30 mock SL test data. The SL test data provide an important supplement to the other dark energy probes, since they are extremely helpful in breaking the existing parameter degeneracies. We show that the strong degeneracy between Ω_m and H_0 in all three dark energy models is well broken by the SL test. Compared to the current combined data of type Ia supernovae, baryon acoustic oscillation, cosmic microwave background, and Hubble constant, the 30-yr observation of the SL test could improve the constraints on Ω_m and H_0 by more than 60% for all three models. But the SL test can only moderately improve the constraint on the equation of state of dark energy. We show that a 30-yr observation of the SL test could help improve the constraint on constant w by about 25%, and improve the constraints on w_0 and w_a by about 20% and 15%, respectively. We also quantify the constraining power of the SL test in the future high-precision joint geometric constraints on dark energy. The mock future supernova and baryon acoustic oscillation data are simulated based on the space-based project JDEM. We find that the 30-yr observation of the SL test would help improve the measurement precision of Ω_m, H_0, and w_a by more than 70%, 20%, and 60%, respectively, for the w0waCDM model.
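The redshift drift underlying the SL test follows from a simple relation; a sketch for flat ΛCDM with illustrative parameter values:

```python
import math

# Hedged sketch of the Sandage-Loeb signal for flat LCDM: over an observing
# span dt, dz = H0*dt*[(1+z) - E(z)], with E(z) = H(z)/H0
# = sqrt(Om*(1+z)**3 + 1 - Om). H0 in yr^-1 and Om are illustrative values.

def redshift_drift(z, omega_m=0.3, h0_per_yr=7.2e-11, years=30.0):
    e_z = math.sqrt(omega_m * (1 + z) ** 3 + 1 - omega_m)
    return h0_per_yr * years * ((1 + z) - e_z)

drift = redshift_drift(3.0)   # dimensionless dz accumulated over 30 yr at z = 3
```

For these values the drift is positive at low redshift and negative in the quasar range probed by the test, which is why it constrains the expansion history so directly.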
ARM - Evaluation Product - Radiatively Important Parameters Best Estimate
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
The Radiatively Important Parameters Best Estimate (RIPBE) VAP combines multiple input datastreams, each with their own temporal
Derivative-free optimization for parameter estimation in computational...
Office of Scientific and Technical Information (OSTI)
Title: Derivative-free optimization for parameter estimation in computational nuclear physics. Authors: Wild, S.; ...
Derivative-free optimization for parameter estimation in computational...
Office of Scientific and Technical Information (OSTI)
Journal Article: Derivative-free optimization for parameter estimation in computational nuclear physics. ... RADIATION PHYSICS; 97 MATHEMATICS, COMPUTING, AND ...
Kalman filter data assimilation: Targeting observations and parameter estimation
Bellsky, Thomas; Kostelich, Eric J.; Mahalov, Alex
2014-06-15
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
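The analytical point about targeted observations can be illustrated with a single scalar Kalman update; the numbers below are illustrative:

```python
# Hedged sketch of the paper's first result for a scalar linear model:
# a better-placed (here, lower-noise) observation shrinks the posterior
# variance more. One Kalman update with observation operator H = 1:
#   K = P-/(P- + R),  x+ = x- + K*(y - x-),  P+ = (1 - K)*P-.

def kf_update(x_prior, p_prior, y, r):
    k = p_prior / (p_prior + r)          # Kalman gain
    x_post = x_prior + k * (y - x_prior)
    p_post = (1 - k) * p_prior
    return x_post, p_post

# Same prior, two observation choices: "targeted" (accurate, r = 0.1)
# versus "untargeted" (noisy, r = 1.0).
_, p_targeted = kf_update(0.0, 1.0, 0.2, 0.1)
_, p_random = kf_update(0.0, 1.0, 0.2, 1.0)
```

The targeted update leaves far less posterior variance, the scalar analogue of the ensemble-variance targeting criterion used with the LETKF.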
Estimating atmospheric parameters and reducing noise for multispectral imaging
Conger, James Lynn
2014-02-25
A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.
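Assuming the standard per-band radiative-transfer form obs = transmittance × surface + path radiance (a common model, not quoted from the patent), the first-phase correction amounts to inverting that relation:

```python
# Hedged sketch of the first-phase "corrected" image: given estimates of each
# band's atmospheric path radiance and transmittance, recover the surface
# radiance as (observed - path_radiance) / transmittance. Values illustrative.

def correct_band(observed, transmittance, path_radiance):
    return [(v - path_radiance) / transmittance for v in observed]

# One spectral band of a toy observed image, with assumed atmospheric terms.
surface = correct_band([1.30, 1.50, 1.10], transmittance=0.8, path_radiance=0.5)
```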
Iterative methods for distributed parameter estimation in parabolic PDE
Vogel, C.R.; Wade, J.G.
1994-12-31
The goal of the work presented is the development of effective iterative techniques for large-scale inverse or parameter estimation problems. In this extended abstract, a detailed description of the mathematical framework in which the authors view these problems is presented, followed by an outline of the ideas and algorithms developed. Distributed parameter estimation problems often arise in mathematical modeling with partial differential equations. They can be viewed as inverse problems; the 'forward problem' is that of using the fully specified model to predict the behavior of the system. The inverse or parameter estimation problem is: given the form of the model and some observed data from the system being modeled, determine the unknown parameters of the model. These problems are of great practical and mathematical interest, and the development of efficient computational algorithms is an active area of study.
ADVANTG An Automated Variance Reduction Parameter Generator, Rev. 1
Mosher, Scott W.; Johnson, Seth R.; Bevill, Aaron M.; Ibrahim, Ahmad M.; Daily, Charles R.; Evans, Thomas M.; Wagner, John C.; Johnson, Jeffrey O.; Grove, Robert E.
2015-08-01
The primary objective of ADVANTG is to reduce both the user effort and the computational time required to obtain accurate and precise tally estimates across a broad range of challenging transport applications. ADVANTG has been applied to simulations of real-world radiation shielding, detection, and neutron activation problems. Examples of shielding applications include material damage and dose rate analyses of the Oak Ridge National Laboratory (ORNL) Spallation Neutron Source and High Flux Isotope Reactor (Risner and Blakeman 2013) and the ITER Tokamak (Ibrahim et al. 2011). ADVANTG has been applied to a suite of radiation detection, safeguards, and special nuclear material movement detection test problems (Shaver et al. 2011). ADVANTG has also been used in the prediction of activation rates within light water reactor facilities (Pantelias and Mosher 2013). In these projects, ADVANTG was demonstrated to significantly increase the tally figure of merit (FOM) relative to an analog MCNP simulation. The ADVANTG-generated parameters were also shown to be more effective than manually generated geometry splitting parameters.
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Hansen, Clifford
2015-03-01
Many popular models for photovoltaic system performance employ a single diode model to compute the I-V curve for a module or string of modules at given irradiance and temperature conditions. A single diode model requires a number of parameters to be estimated from measured I-V curves. Many available parameter estimation methods use only short circuit, open circuit and maximum power points for a single I-V curve at standard test conditions together with temperature coefficients determined separately for individual cells. In contrast, module testing frequently records I-V curves over a wide range of irradiance and temperature conditions which, when available, should also be used to parameterize the performance model. We present a parameter estimation method that makes use of a full range of available I-V curves. We verify the accuracy of the method by recovering known parameter values from simulated I-V curves. We validate the method by estimating model parameters for a module using outdoor test data and predicting the outdoor performance of the module.
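The single diode equation referred to here is standard; a minimal sketch solving it for current at a given voltage with Newton's method, using placeholder parameter values rather than any estimated module parameters:

```python
import math

# Hedged sketch of the single diode model:
#   i = il - i0*(exp((v + i*rs)/n_vt) - 1) - (v + i*rs)/rsh
# solved for i by Newton iteration. il (photocurrent), i0 (saturation current),
# rs/rsh (series/shunt resistance), and n_vt (modified ideality factor) are
# typical-looking placeholders, not Sandia-estimated values for any module.

def diode_current(v, il=8.0, i0=1e-9, rs=0.3, rsh=300.0, n_vt=1.9):
    i = il  # the short-circuit current is a good starting guess
    for _ in range(50):
        e = math.exp((v + i * rs) / n_vt)
        f = il - i0 * (e - 1) - (v + i * rs) / rsh - i
        df = -i0 * e * rs / n_vt - rs / rsh - 1
        i -= f / df      # Newton step on f(i) = 0
    return i

i_sc = diode_current(0.0)   # current at short circuit, slightly below il
```

Sweeping v and repeating this solve traces out the full I-V curve that the fitting method matches against measured curves.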
Anisotropic parameter estimation using velocity variation with offset analysis
Herawati, I.; Saladin, M.; Pranowo, W.; Winardhie, S.; Priyono, A.
2013-09-09
Seismic anisotropy is defined as velocity dependence upon angle or offset. Knowledge about the anisotropy effect on seismic data is important in amplitude analysis, the stacking process, and time-to-depth conversion. Due to this anisotropic effect, a reflector cannot be flattened using a single velocity based on the hyperbolic moveout equation. Therefore, after normal moveout correction, there will still be residual moveout that relates to velocity information. This research aims to obtain the anisotropic parameters δ and ε using two proposed methods. The first method is called velocity variation with offset (VVO), which is based on a simplification of the weak anisotropy equation. In the VVO method, velocity at each offset is calculated and plotted to obtain the vertical velocity and the parameter δ. The second method is an inversion method using a linear approach where the vertical velocity, δ, and ε are estimated simultaneously. Both methods are tested on synthetic models using ray-tracing forward modelling. Results show that the δ value can be estimated appropriately using both methods. Meanwhile, the inversion-based method gives a better estimate of the ε value. This study shows that estimation of anisotropic parameters relies on the accuracy of normal moveout velocity, residual moveout, and offset-to-angle transformation.
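The weak-anisotropy phase-velocity relation that the VVO simplification starts from can be written down directly; a sketch with illustrative values for the vertical velocity and the Thomsen parameters:

```python
import math

# Hedged sketch of the weak-anisotropy (Thomsen) phase-velocity relation:
#   V(theta) ~ Vp0 * (1 + delta*sin^2(theta)*cos^2(theta) + eps*sin^4(theta))
# The values of Vp0 (m/s), delta, and eps are illustrative, not from the paper.

def phase_velocity(theta_deg, vp0=3000.0, delta=0.1, eps=0.2):
    s2 = math.sin(math.radians(theta_deg)) ** 2
    return vp0 * (1 + delta * s2 * (1 - s2) + eps * s2 * s2)

v_vertical = phase_velocity(0.0)     # vertical velocity -> vp0
v_horizontal = phase_velocity(90.0)  # horizontal velocity -> vp0*(1 + eps)
```

Because velocity varies with angle (and hence offset), plotting apparent velocity against offset, as in VVO, carries information about these parameters.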
Force Field Parameter Estimation of Functional Perfluoropolyether Lubricants
Smith, R.; Chung, P.S.; Steckel, J; Jhon, M.S.; Biegler, L.T.
2011-01-01
The head-disk interface in a hard disk drive can be considered a hierarchical multiscale system, which requires the hybridization of multiscale modeling methods with a coarse-graining procedure. However, fundamental force field parameters are required to enable the coarse-graining procedure from atomistic/molecular scale to mesoscale models. In this paper, we investigate beyond the molecular level and perform ab initio calculations to obtain the force field parameters. Intramolecular force field parameters for Zdol and Ztetraol were evaluated with truncated PFPE molecules to allow for feasible quantum calculations while still maintaining the characteristic chemical structure of the end groups. Using the harmonic approximation to the bond and angle potentials, the parameters were derived from the Hessian matrix, and the dihedral force constants were fit to the torsional energy profiles generated by a series of constrained molecular geometry optimizations.
Dynamic State Estimation and Parameter Calibration of DFIG based on Ensemble Kalman Filter
Fan, Rui; Huang, Zhenyu; Wang, Shaobu; Diao, Ruisheng; Meng, Da
2015-07-30
With the growing interest in the application of wind energy, the doubly fed induction generator (DFIG) plays an essential role in the industry nowadays. To deal with the increasing stochastic variations introduced by intermittent wind resources and responsive loads, dynamic state estimation (DSE) is introduced in power systems associated with DFIGs. However, this dynamic analysis sometimes fails because the parameters of the DFIGs are not known accurately enough. To solve this problem, an ensemble Kalman filter (EnKF) method is proposed for the state estimation and parameter calibration tasks. In this paper, a DFIG is modeled and implemented with the EnKF method. Sensitivity analysis is demonstrated regarding measurement noise, initial state errors, and parameter errors. The results indicate that this EnKF method has robust performance on the state estimation and parameter calibration of DFIGs.
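The EnKF parameter-calibration idea can be reduced to a scalar toy: treat the uncertain parameter as part of what the ensemble estimates, and let the ensemble cross-covariance with the predicted measurement drive the update. This is a hedged illustration, not the DFIG implementation; the model y = a*u and all numbers are invented.

```python
import random

# Hedged scalar toy of EnKF parameter calibration with perturbed observations:
# each member holds a parameter guess a; the Kalman gain is built from ensemble
# statistics, k = cov(a, y_pred) / (var(y_pred) + r).

def enkf_parameter_step(ensemble, u, y_obs, r):
    preds = [a * u for a in ensemble]                  # forecast measurements
    a_bar = sum(ensemble) / len(ensemble)
    p_bar = sum(preds) / len(preds)
    n1 = len(ensemble) - 1
    cov_ap = sum((a - a_bar) * (p - p_bar)
                 for a, p in zip(ensemble, preds)) / n1
    var_p = sum((p - p_bar) ** 2 for p in preds) / n1
    k = cov_ap / (var_p + r)
    return [a + k * (y_obs + random.gauss(0, r ** 0.5) - a * u)
            for a in ensemble]

random.seed(1)
true_a = 2.0
ens = [random.gauss(1.0, 0.5) for _ in range(200)]     # poor initial guess
for _ in range(50):                                    # assimilate 50 noisy obs
    ens = enkf_parameter_step(ens, 1.0, true_a + random.gauss(0, 0.1), r=0.01)
a_hat = sum(ens) / len(ens)                            # calibrated parameter
```

The ensemble mean is pulled from the poor prior toward the true parameter value as observations accumulate, which is the calibration behavior the abstract describes.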
CosmoSIS: A System for MC Parameter Estimation
Zuntz, Joe; Paterno, Marc; Jennings, Elise; Rudd, Douglas; Manzotti, Alessandro; Dodelson, Scott; Bridle, Sarah; Sehrish, Saba; Kowalkowski, James
2015-01-01
Cosmological parameter estimation is entering a new era. Large collaborations need to coordinate high-stakes analyses using multiple methods; furthermore such analyses have grown in complexity due to sophisticated models of cosmology and systematic uncertainties. In this paper we argue that modularity is the key to addressing these challenges: calculations should be broken up into interchangeable modular units with inputs and outputs clearly defined. We present a new framework for cosmological parameter estimation, CosmoSIS, designed to connect together, share, and advance development of inference tools across the community. We describe the modules already available in CosmoSIS, including CAMB, Planck, cosmic shear calculations, and a suite of samplers. We illustrate it using demonstration code that you can run out-of-the-box with the installer available at http://bitbucket.org/joezuntz/cosmosis.
Parameter Estimation for Single Diode Models of Photovoltaic Modules
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
2065, Unlimited Release, Printed March 2015. Parameter Estimation for Single Diode Models of Photovoltaic Modules. Clifford W. Hansen. Prepared by Sandia National Laboratories, Albuquerque, New Mexico 87185 and Livermore, California 94550. Sandia National Laboratories is a multi-program laboratory managed and operated by Sandia Corporation, a wholly owned subsidiary of Lockheed Martin Corporation, for the U.S. Department of Energy's National Nuclear Security Administration under contract
CosmoSIS: A system for MC parameter estimation
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Bridle, S.; Dodelson, S.; Jennings, E.; Kowalkowski, J.; Manzotti, A.; Paterno, M.; Rudd, D.; Sehrish, S.; Zuntz, J.
2015-01-01
CosmoSIS is a modular system for cosmological parameter estimation, based on Markov Chain Monte Carlo and related techniques. It provides a series of samplers, which drive the exploration of the parameter space, and a series of modules, which calculate the likelihood of the observed data for a given physical model, determined by the location of a sample in the parameter space. While CosmoSIS ships with a set of modules that calculate quantities of interest to cosmologists, there is nothing about the framework itself, nor in the Markov Chain Monte Carlo technique, that is specific to cosmology. Thus CosmoSIS could be used for parameter estimation problems in other fields, including HEP. This paper describes the features of CosmoSIS and shows an example of its use outside of cosmology. It also discusses how collaborative development strategies differ between two different communities: that of HEP physicists, accustomed to working in large collaborations, and that of cosmologists, who have traditionally not worked in large groups.
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. These four
The generation of shared cryptographic keys through channel impulse response estimation at 60 GHz.
Young, Derek P.; Forman, Michael A.; Dowdle, Donald Ryan
2010-09-01
Methods to generate private keys based on wireless channel characteristics have been proposed as an alternative to standard key-management schemes. In this work, we discuss past work in the field and offer a generalized scheme for the generation of private keys using uncorrelated channels in multiple domains. Proposed cognitive enhancements measure channel characteristics to dynamically change transmission and reception parameters, as well as to estimate private-key randomness and expiration times. Finally, results are presented on the implementation of a system for the generation of private keys for cryptographic communications using channel impulse-response estimation at 60 GHz. The testbed is composed of commercial millimeter-wave VubIQ transceivers, laboratory equipment, and software implemented in MATLAB. Novel cognitive enhancements are demonstrated, using channel estimation to dynamically change system parameters and estimate cryptographic key strength. We show for a complex channel that secret key generation can be accomplished on the order of 100 kb/s.
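The key-generation idea above can be sketched in a few lines: both ends quantize their reciprocal (but independently noisy) channel-gain estimates into bits and keep only indices that quantize unambiguously. The names, the median threshold, and the guard band here are illustrative assumptions, not the paper's protocol; in a real system the two parties cannot compare bits directly and would instead reconcile disagreements with an error-correcting code before privacy amplification.

```python
import hashlib
import random

def quantize_bits(samples, guard=0.1):
    """Threshold each channel-gain sample at the sample median; samples
    within +/-guard of the threshold are marked None (discarded), since
    noise could quantize them differently at the two ends."""
    ordered = sorted(samples)
    median = ordered[len(ordered) // 2]
    bits = []
    for s in samples:
        if abs(s - median) < guard:
            bits.append(None)
        else:
            bits.append(1 if s > median else 0)
    return bits

def shared_key(bits_a, bits_b):
    """Keep indices where both ends quantized unambiguously and agree.
    (Direct comparison is for illustration only; a real protocol would
    reconcile disagreements with an error-correcting code.)"""
    return [a for a, b in zip(bits_a, bits_b)
            if a is not None and b is not None and a == b]

rng = random.Random(1)
channel = [rng.gauss(0.0, 1.0) for _ in range(256)]   # reciprocal gains
alice = [c + rng.gauss(0.0, 0.02) for c in channel]   # Alice's noisy estimate
bob = [c + rng.gauss(0.0, 0.02) for c in channel]     # Bob's noisy estimate
key_bits = shared_key(quantize_bits(alice), quantize_bits(bob))
key = hashlib.sha256(bytes(key_bits)).hexdigest()     # privacy amplification
```

The hash at the end stands in for privacy amplification, which removes any partial information an eavesdropper may hold about individual bits.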
Parameter estimation for models of ligninolytic and cellulolytic enzyme kinetics
Wang, Gangsheng; Post, Wilfred M; Mayes, Melanie; Frerichs, Joshua T; Jagadamma, Sindhu
2012-01-01
While soil enzymes have been explicitly included in soil organic carbon (SOC) decomposition models, there is a serious lack of suitable data for model parameterization. This study provides well-documented enzymatic parameters for application in enzyme-driven SOC decomposition models from a compilation and analysis of published measurements. In particular, we developed appropriate kinetic parameters for five typical ligninolytic and cellulolytic enzymes (β-glucosidase, cellobiohydrolase, endo-glucanase, peroxidase, and phenol oxidase). The kinetic parameters included the maximum specific enzyme activity (Vmax) and half-saturation constant (Km) in the Michaelis-Menten equation. The activation energy (Ea) and the pH optimum and sensitivity (pHopt and pHsen) were also analyzed. pHsen was estimated by fitting an exponential-quadratic function. The Vmax values, often presented in different units under various measurement conditions, were converted into the same units at a reference temperature (20 °C) and pHopt. Major conclusions are: (i) Both Vmax and Km were log-normally distributed, with no significant difference in Vmax exhibited between enzymes originating from bacteria or fungi. (ii) No significant difference in Vmax was found between cellulases and ligninases; however, there was a significant difference in Km between them. (iii) Ligninases had higher Ea values and lower pHopt than cellulases; the average ratio of pHsen to pHopt ranged from 0.3 to 0.4 for the five enzymes, which means that an increase or decrease of 1.1 to 1.7 pH units from pHopt would reduce Vmax by 50%. (iv) Our analysis indicated that the Vmax values from lab measurements with purified enzymes were 1 to 2 orders of magnitude higher than those for use in SOC decomposition models under field conditions.
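The kinetic quantities named above can be combined in a small rate function. The Michaelis-Menten form is standard; the Arrhenius temperature factor and the Gaussian (exponential-quadratic) pH modifier are plausible but assumed forms, not necessarily the exact functions fitted in the study.

```python
import math

def enzyme_rate(S, Vmax_ref, Km, Ea, T, pH, pH_opt, pH_sen,
                T_ref=293.15, R=8.314):
    """Michaelis-Menten rate V = Vmax * S / (Km + S), with Vmax scaled by
    an Arrhenius factor away from the 20 °C reference temperature and by
    a Gaussian pH modifier (an assumed exponential-quadratic form)."""
    arrhenius = math.exp(-(Ea / R) * (1.0 / T - 1.0 / T_ref))
    ph_mod = math.exp(-((pH - pH_opt) / pH_sen) ** 2)
    Vmax = Vmax_ref * arrhenius * ph_mod
    return Vmax * S / (Km + S)

# at reference temperature and optimal pH, substrate far above Km:
v_sat = enzyme_rate(S=1e6, Vmax_ref=10.0, Km=0.5, Ea=5.0e4,
                    T=293.15, pH=5.0, pH_opt=5.0, pH_sen=1.5)
# at S = Km the rate is exactly half of Vmax:
v_half = enzyme_rate(S=0.5, Vmax_ref=10.0, Km=0.5, Ea=5.0e4,
                     T=293.15, pH=5.0, pH_opt=5.0, pH_sen=1.5)
```

With this Gaussian form, Vmax halves at |pH − pHopt| = pHsen·√(ln 2) ≈ 0.83·pHsen, which is broadly consistent with the 1.1 to 1.7 pH-unit figure quoted above when pHsen/pHopt is 0.3 to 0.4 and pHopt is near 4 to 5.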
Biomass Power Generation Market Capacity is Estimated to Reach 122,331.6 MW by 2022
Morrison, J.L.
1992-12-01
The objective of this research is to develop a simple, yet accurate, lumped parameter mathematical model for an explosively driven magnetohydrodynamic generator that can predict the pulse power variables of voltage and current from startup through regenerative operation. The inputs to the model will be the plasma properties entering the generator as predicted by the explosive shock model of Reference [1]. The strategy used was to simplify electromagnetic and thermodynamic three dimensional effects into a zero dimensional model. The model will provide a convenient tool for researchers to optimize designs to be used in pulse power applications. The model is validated using experimental data of Reference [1]. An overview of the operation of the explosively driven generator is first presented. Then a simplified electrical circuit model that describes basic performance of the device is developed. Then a lumped parameter model that incorporates the coupled electromagnetic and thermodynamic effects that govern generator performance is described and developed. The model is based on fundamental physical principles and parameters that were either obtained directly from design data or estimated from experimental data. The model was used to obtain parameter sensitivities and predict beyond the limits observed in the experiments to the levels desired by the potential Department of Defense sponsors. The model identifies process limitations that provide direction for future research.
UWB channel estimation using new-generation TR transceivers
Nekoogar, Faranak; Dowla, Farid U.; Spiridon, Alex; Haugen, Peter C.; Benzel, Dave M.
2011-06-28
The present invention presents a simple and novel channel estimation scheme for UWB communication systems. As disclosed herein, the present invention maximizes the extraction of information by incorporating a new generation of transmitted-reference (TR) transceivers that utilize a single reference pulse or a preamble of reference pulses to provide improved channel estimation while offering improved bit error rate (BER) performance and higher data rates without diluting the transmitter power.
Estimation of k-ε parameters using surrogate models and jet-in-crossflow data.
Lefantzi, Sophia; Ray, Jaideep; Arunajatesan, Srinivasan; Dechant, Lawrence
2015-02-01
We demonstrate a Bayesian method that can be used to calibrate computationally expensive 3D RANS (Reynolds-Averaged Navier-Stokes) models with complex response surfaces. Such calibrations, conditioned on experimental data, can yield turbulence model parameters as probability density functions (PDFs), concisely capturing the uncertainty in the parameter estimates. Methods such as Markov chain Monte Carlo (MCMC) estimate the PDF by sampling, with each sample requiring a run of the RANS model. Consequently, a quick-running surrogate is used in place of the RANS simulator. The surrogate can be very difficult to design if the model's response, i.e., the dependence of the calibration variable (the observable) on the parameter being estimated, is complex. We show how the training data used to construct the surrogate can be employed to isolate a promising and physically realistic part of the parameter space, within which the response is well-behaved and easily modeled. We design a classifier, based on treed linear models, to model the "well-behaved region". This classifier serves as a prior in a Bayesian calibration study aimed at estimating three k-ε parameters (Cμ, Cε2, Cε1) from experimental data of a transonic jet-in-crossflow interaction. The robustness of the calibration is investigated by checking its predictions of variables not included in the calibration data. We also check the limit of applicability of the calibration by testing at off-calibration flow regimes. We find that calibration yields turbulence model parameters which predict the flowfield far better than when the nominal values of the parameters are used. Substantial improvements are still obtained when we use the calibrated RANS model to predict jet-in-crossflow at Mach numbers and jet strengths quite different from those used to generate the experimental (calibration) data. Thus the primary reason for poor predictive skill of RANS, when using nominal values of the turbulence model
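The calibration loop described above, MCMC sampling against a cheap surrogate with a prior that confines sampling to a well-behaved parameter region, can be sketched for a single scalar parameter. The quadratic "surrogate", the synthetic data, and the box prior are all stand-ins for the paper's trained response surface and treed-linear-model classifier.

```python
import math
import random

def surrogate(theta):
    """Stand-in for the trained response surface: parameter -> observable."""
    return 2.0 * theta + 0.5 * theta ** 2

def log_posterior(theta, data, sigma=0.1, lo=0.0, hi=2.0):
    """Gaussian log-likelihood plus a box prior playing the role of the
    'well-behaved region' classifier: zero probability outside it."""
    if not lo <= theta <= hi:
        return -math.inf
    pred = surrogate(theta)
    return -sum((d - pred) ** 2 for d in data) / (2.0 * sigma ** 2)

def metropolis(data, n=20000, step=0.05, seed=0):
    """Random-walk Metropolis sampler; every evaluation hits only the
    cheap surrogate, never an expensive RANS run."""
    rng = random.Random(seed)
    theta, lp = 1.0, log_posterior(1.0, data)
    chain = []
    for _ in range(n):
        prop = theta + rng.gauss(0.0, step)
        lp_prop = log_posterior(prop, data)
        if math.log(rng.random() + 1e-300) < lp_prop - lp:
            theta, lp = prop, lp_prop
        chain.append(theta)
    return chain

true_theta = 0.8
rng = random.Random(42)
data = [surrogate(true_theta) + rng.gauss(0.0, 0.1) for _ in range(50)]
chain = metropolis(data)
posterior_mean = sum(chain[5000:]) / len(chain[5000:])  # discard burn-in
```

The retained chain approximates the posterior PDF of the parameter; its mean lands close to the value used to generate the data, and its spread quantifies the estimate's uncertainty.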
NREL Estimates Economically Viable U.S. Renewable Generation - News
NREL Estimates Economically Viable U.S. Renewable Generation (November 19, 2015): Analysts at the Energy Department's National Renewable Energy Laboratory (NREL) are providing, for the first time, a method for measuring the economic potential of renewable energy across the United States. A study applying this new method found that renewable energy generation is economically viable in many parts of the United States, largely due to rapidly declining technology costs. The report,
Tonn, B.; Hwang, Ho-Ling; Elliot, S.; Peretz, J.; Bohm, R.; Hendrucko, B.
1994-04-01
This report contains descriptions of methodologies to be used to estimate the one-time generation of hazardous waste associated with five different types of remediation programs: Superfund sites, RCRA Corrective Actions, Federal Facilities, Underground Storage Tanks, and State and Private Programs. Estimates of the amount of hazardous wastes generated from these sources to be shipped off-site to commercial hazardous waste treatment and disposal facilities will be made on a state by state basis for the years 1993, 1999, and 2013. In most cases, estimates will be made for the intervening years, also.
Akrami, Yashar; Savage, Christopher; Scott, Pat; Conrad, Jan; Edsjö, Joakim
2011-07-01
Models of weak-scale supersymmetry offer viable dark matter (DM) candidates. Their parameter spaces are however rather large and complex, such that pinning down the actual parameter values from experimental data can depend strongly on the employed statistical framework and scanning algorithm. In frequentist parameter estimation, a central requirement for properly constructed confidence intervals is that they cover true parameter values, preferably at exactly the stated confidence level when experiments are repeated infinitely many times. Since most widely-used scanning techniques are optimised for Bayesian statistics, one needs to assess their abilities in providing correct confidence intervals in terms of the statistical coverage. Here we investigate this for the Constrained Minimal Supersymmetric Standard Model (CMSSM) when only constrained by data from direct searches for dark matter. We construct confidence intervals from one-dimensional profile likelihoods and study the coverage by generating several pseudo-experiments for a few benchmark sets of pseudo-true parameters. We use nested sampling to scan the parameter space and evaluate the coverage for the benchmarks when either flat or logarithmic priors are imposed on gaugino and scalar mass parameters. The sampling algorithm has been used in the configuration usually adopted for exploration of the Bayesian posterior. We observe both under- and over-coverage, which in some cases vary quite dramatically when benchmarks or priors are modified. We show how most of the variation can be explained as the impact of explicit priors as well as sampling effects, where the latter are indirectly imposed by physicality conditions. For comparison, we also evaluate the coverage for Bayesian credible intervals, and observe significant under-coverage in those cases.
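The notion of statistical coverage used above can be illustrated with the simplest possible case: repeated pseudo-experiments on a Gaussian mean, counting how often the stated confidence interval contains the true value. This toy has exact coverage by construction and only illustrates the definition; the paper's CMSSM profile-likelihood intervals carry no such guarantee, which is precisely why their coverage must be checked numerically.

```python
import random
import statistics

def coverage(true_mu=0.0, sigma=1.0, n_obs=20, n_exp=2000, z=1.96, seed=7):
    """Fraction of pseudo-experiments whose 95% confidence interval for a
    Gaussian mean (known sigma) contains the true value; by construction
    this should approach the stated 0.95 confidence level."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n_exp):
        sample = [rng.gauss(true_mu, sigma) for _ in range(n_obs)]
        mu_hat = statistics.fmean(sample)
        half = z * sigma / n_obs ** 0.5
        if mu_hat - half <= true_mu <= mu_hat + half:
            hits += 1
    return hits / n_exp

cov = coverage()  # empirical coverage over 2000 pseudo-experiments
```

Under- or over-coverage in a real analysis shows up as this fraction falling measurably below or above the nominal confidence level.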
A Three-Parameter Model for Estimating Atmospheric Tritium Dose at the Savannah River Site
Simpkins, A.A.; Hamby, D.M.
1997-12-31
The models used in the NRC approach to assess chronic atmospheric release of radioactivity generate deterministic dose estimates by using assumptions about exposure conditions and environmental transport mechanisms.
Zhang, Z. F.; Ward, Andy L.; Gee, Glendon W.
2002-12-10
As the Hanford Site transitions into remediation of contaminated soil waste sites and tank farm closure, more information is needed about the transport of contaminants as they move through the vadose zone to the underlying water table. The hydraulic properties must be characterized for accurate simulation of flow and transport. This characterization includes the determination of soil texture types, their three-dimensional distribution, and the parameterization of each soil texture. This document describes a method to estimate soil hydraulic parameters using the parameter scaling concept (Zhang et al. 2002) and inverse techniques. To this end, the Groundwater Protection Program Science and Technology Project funded vadose zone transport field studies, including analysis of the results to estimate field-scale hydraulic parameters for modeling. Parameter scaling is a new method to scale hydraulic parameters. The method relates the hydraulic-parameter values measured at different spatial scales for different soil textures. Parameter scaling factors relative to a reference texture are determined using these local-scale parameter values, e.g., those measured in the lab using small soil cores. After parameter scaling is applied, the total number of unknown variables in hydraulic parameters is reduced by a factor equal to the number of soil textures. The field-scale values of the unknown variables can then be estimated using inverse techniques and a well-designed field experiment. Finally, parameters for individual textures are obtained through inverse scaling of the reference values using an a priori relationship between reference parameter values and the specific values for each texture. Inverse methods have the benefits of 1) calculating parameter values that produce the best fit between observed and simulated values, 2) quantifying the confidence limits in parameter estimates and the predictions, and 3) providing diagnostic statistics that quantify the quality of
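The bookkeeping behind parameter scaling can be shown in a few lines: each texture's parameters are expressed as texture-specific scale factors times one reference parameter set, so inversion only has to estimate the reference values. The parameter names and numbers below are purely illustrative, not values from the study.

```python
def scale_parameters(reference, scale_factors):
    """Expand one reference parameter set into per-texture parameters via
    multiplicative scale factors; inverse modeling then only has to
    estimate the reference values, cutting the number of unknowns by a
    factor equal to the number of textures."""
    return {texture: {name: factor * reference[name]
                      for name, factor in factors.items()}
            for texture, factors in scale_factors.items()}

# hypothetical scale factors derived from local-scale (core) measurements
factors = {
    "sand": {"log10_K": 1.00, "alpha": 1.00},   # reference texture
    "silt": {"log10_K": 0.65, "alpha": 0.80},
}
# reference values as a field-scale inversion might return them (illustrative)
reference = {"log10_K": -4.0, "alpha": 0.05}
params = scale_parameters(reference, factors)
```

The final step of the method, inverse scaling, is exactly this expansion applied to the inverted reference values.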
Cold-Crucible Design Parameters for Next Generation HLW Melters
Gombert, D.; Richardson, J.; Aloy, A.; Day, D.
2002-02-26
The cold-crucible induction melter (CCIM) design eliminates many materials and operating constraints inherent in joule-heated melter (JHM) technology, which is the standard for vitrification of high-activity wastes worldwide. The cold-crucible design is smaller, less expensive, and generates much less waste for ultimate disposal. It should also allow a much more flexible operating envelope, which will be crucial if the heterogeneous wastes at the DOE reprocessing sites are to be vitrified. A joule-heated melter operates by passing current between water-cooled electrodes through a molten pool in a refractory-lined chamber. This design is inherently limited by the susceptibility of materials to corrosion and melting. In addition, redox conditions and free metal content have exacerbated materials problems or led to electrical short-circuiting, causing failures in DOE melters. In contrast, the CCIM design is based on inductive coupling of a water-cooled high-frequency electrical coil with the glass, causing eddy currents that produce heat and mixing. A critical difference is that inductive coupling transfers energy through a nonconductive solid layer of slag coating the metal container inside the coil, whereas the joule-heated design relies on passing current through conductive molten glass in direct contact with the metal electrodes and ceramic refractories. The frozen slag in the CCIM design protects the containment and eliminates the need for refractory, while the corrosive molten glass can be the limiting factor in the JHM design. The CCIM design also eliminates the need for electrodes, which typically limit operating temperature to below 1200 degrees C. While significant marketing claims have been made by French and Russian technology suppliers and developers, little data is available for engineering and economic evaluation of the technology, and no facilities are available in the US to support testing. A currently funded project at the Idaho National Engineering
Madankan, R.; Pouget, S.; Singla, P.; Bursik, M.; Dehn, J.; Jones, M.; Patra, A.; Pavolonis, M.; Pitman, E.B.; Singh, T.; Webley, P.
2014-08-15
Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes, for aviation, health and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions (height, profile of particle location, volcanic vent parameters) are known only approximately at best, and other features of the governing system, such as the windfield, are stochastic. These uncertainties make forecasting plume motion difficult. As a result of these uncertainties, ash advisories based on a deterministic approach tend to be conservative, and many times over- or underestimate the extent of a plume. This paper presents an end-to-end framework for generating a probabilistic approach to ash plume forecasting. This framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply, to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition, to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14–16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence, are computed.
Wang, Chao Yang; Luo, Gang; Jiang, Fangming; Carnes, Brian; Chen, Ken Shuang
2010-05-01
Current computational models for proton exchange membrane fuel cells (PEMFCs) include a large number of parameters such as boundary conditions, material properties, and numerous parameters used in sub-models for membrane transport, two-phase flow and electrochemistry. In order to successfully use a computational PEMFC model in design and optimization, it is important to identify critical parameters under a wide variety of operating conditions, such as relative humidity, current load, temperature, etc. Moreover, when experimental data is available in the form of polarization curves or local distribution of current and reactant/product species (e.g., O2, H2O concentrations), critical parameters can be estimated in order to enable the model to better fit the data. Sensitivity analysis and parameter estimation are typically performed using manual adjustment of parameters, which is also common in parameter studies. We present work to demonstrate a systematic approach based on using a widely available toolkit developed at Sandia called DAKOTA that supports many kinds of design studies, such as sensitivity analysis as well as optimization and uncertainty quantification. In the present work, we couple a multidimensional PEMFC model (which is being developed, tested and later validated in a joint effort by a team from Penn State Univ. and Sandia National Laboratories) with DAKOTA through the mapping of model parameters to system responses. Using this interface, we demonstrate the efficiency of performing simple parameter studies as well as identifying critical parameters using sensitivity analysis. Finally, we show examples of optimization and parameter estimation using the automated capability in DAKOTA.
Estimation of fracture flow parameters through numerical analysis of hydromechanical pressure pulses
Cappa, F.; Guglielmi, Y.; Rutqvist, J.; Tsang, C.-F.; Thoraval, A.
2008-03-16
The flow parameters of a natural fracture were estimated by modeling in situ pressure pulses. The pulses were generated in two horizontal boreholes spaced 1 m apart vertically and intersecting a near-vertical highly permeable fracture located within a shallow fractured carbonate reservoir. Fracture hydromechanical response was monitored using specialized fiber-optic borehole equipment that could simultaneously measure fluid pressure and fracture displacements. Measurements indicated a significant time lag between the pressure peak at the injection point and the one at the second measuring point, located 1 m away. The pressure pulse dilated and contracted the fracture. Field data were analyzed through hydraulic and coupled hydromechanical simulations using different governing flow laws. In matching the time lag between the pressure peaks at the two measuring points, our hydraulic models indicated that (1) flow was channeled in the fracture, (2) the hydraulic conductivity tensor was highly anisotropic, and (3) the radius of pulse influence was asymmetric, in that the pulse travelled faster vertically than horizontally. Moreover, our parametric study demonstrated that the fluid pressure diffusion through the fracture was quite sensitive to the spacing and orientation of channels, hydraulic aperture, storativity and hydraulic conductivity. Comparison between hydraulic and hydromechanical models showed that the deformation significantly affected fracture permeability and storativity, and consequently, the fluid pressure propagation, suggesting that the simultaneous measurements of pressure and mechanical displacement signals could substantially improve the interpretation of pulse tests during reservoir characterization.
Mukhopadhyay, S.; Tsang, Y.; Finsterle, S.
2009-01-15
A simple conceptual model has been recently developed for analyzing pressure and temperature data from flowing fluid temperature logging (FFTL) in unsaturated fractured rock. Using this conceptual model, we developed an analytical solution for FFTL pressure response, and a semianalytical solution for FFTL temperature response. We also proposed a method for estimating fracture permeability from FFTL temperature data. The conceptual model was based on some simplifying assumptions; in particular, a single-phase airflow model was used. In this paper, we develop a more comprehensive numerical model of multiphase flow and heat transfer associated with FFTL. Using this numerical model, we perform a number of forward simulations to determine the parameters that have the strongest influence on the pressure and temperature response from FFTL. We then use the iTOUGH2 optimization code to estimate these most sensitive parameters through inverse modeling and to quantify the uncertainties associated with these estimated parameters. We conclude that FFTL can be utilized to determine permeability, porosity, and thermal conductivity of the fractured rock. Two other parameters, which are not properties of the fractured rock, have strong influence on FFTL response. These are the pressure and temperature in the borehole that were at equilibrium with the fractured rock formation at the beginning of FFTL. We illustrate how these parameters can also be estimated from FFTL data.
Estimating Parameters for the PVsyst Version 6 Photovoltaic Module Performance Model
Hansen, Clifford
2015-10-01
We present an algorithm to determine parameters for the photovoltaic module performance model encoded in the software package PVsyst(TM) version 6. Our method operates on current-voltage (I-V) curves measured over a range of irradiance and temperature conditions. We describe the method and illustrate its steps using data for a 36-cell crystalline silicon module. We qualitatively compare our method with one other technique for estimating parameters for the PVsyst(TM) version 6 model.
Shonder, J.A.; Beck, J.V.
1998-11-01
A one-dimensional thermal model is derived to describe the temperature field around a vertical borehole heat exchanger (BHEx) for a geothermal heat pump. The inlet and outlet pipe flows are modeled as one, and an effective heat capacity is added to model the heat storage in the fluid and pipes. Parameter estimation techniques are then used to estimate various parameters associated with the model, including the thermal conductivity of the soil and of the grout which fills the borehole and surrounds the u-tube. The model is validated using test data from an experimental rig containing sand with known thermal conductivity. The estimates of the sand thermal conductivity derived from the model are found to be in good agreement with independent measurements.
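A minimal version of this kind of parameter estimation can be sketched with the classical infinite line-source approximation, in which the late-time borehole temperature rise grows linearly in ln(t) with a slope set by the soil conductivity. This is a simplification of the report's model, which also accounts for grout properties and an effective fluid/pipe heat capacity; the numbers below are synthetic.

```python
import math

def soil_conductivity(times, temps, q_per_len):
    """Estimate soil thermal conductivity k from late-time borehole
    temperature rise via the infinite line-source approximation
    dT(t) ~ (q' / (4*pi*k)) * ln(t) + C,
    using the least-squares slope of temperature versus ln(t)."""
    x = [math.log(t) for t in times]
    n = len(x)
    xb, yb = sum(x) / n, sum(temps) / n
    slope = sum((xi - xb) * (yi - yb) for xi, yi in zip(x, temps)) \
        / sum((xi - xb) ** 2 for xi in x)
    return q_per_len / (4.0 * math.pi * slope)

# synthetic test data generated with k = 2.0 W/(m*K), q' = 50 W/m
k_true, q = 2.0, 50.0
times = [3600.0 * h for h in range(10, 60)]            # 10 to 59 hours, in s
temps = [q / (4.0 * math.pi * k_true) * math.log(t) + 12.0 for t in times]
k_est = soil_conductivity(times, temps, q)
```

Because the synthetic data follow the line-source law exactly, the estimator recovers the generating conductivity; with real data the residual misfit is what parameter-estimation machinery like the report's is built to handle.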
SDSS/SEGUE spectral feature analysis for stellar atmospheric parameter estimation
Li, Xiangru; Lu, Yu; Yang, Tan; Wang, Yongjun; Wu, Q. M. Jonathan; Luo, Ali; Zhao, Yongheng; Zuo, Fang
2014-08-01
Large-scale and deep sky survey missions are rapidly collecting large numbers of stellar spectra, which necessitate the estimation of atmospheric parameters directly from spectra and make it feasible to statistically investigate latent principles in a large data set. We present a technique for estimating the parameters T{sub eff}, log g, and [Fe/H] from stellar spectra. With this technique, we first extract features from stellar spectra using the LASSO algorithm; then, the parameters are estimated from the extracted features using support vector regression. On a subsample of 20,000 stellar spectra from the Sloan Digital Sky Survey (SDSS) with reference parameters provided by the SDSS/SEGUE Spectroscopic Parameter Pipeline, the estimation consistencies are 0.007458 dex for log T{sub eff} (101.609921 K for T{sub eff}), 0.189557 dex for log g, and 0.182060 dex for [Fe/H], where consistency is evaluated by mean absolute error. Prominent characteristics of the proposed scheme are sparseness, locality, and physical interpretability. In this work, each spectrum consists of 3821 fluxes, and 10, 19, and 14 typical wavelength positions are detected, respectively, for estimating T{sub eff}, log g, and [Fe/H]. It is shown that the positions are related to typical lines of stellar spectra. This characteristic is important in investigating physical indications from analysis results. Then, stellar spectra can be described by the individual fluxes on the detected positions (PD) or local integration of fluxes near them (LI). The aforementioned consistency is the result based on features described by LI. If features are described by PD, consistency is 0.009092 dex for log T{sub eff} (124.545075 K for T{sub eff}), 0.198928 dex for log g, and 0.206814 dex for [Fe/H].
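The first stage of the scheme above, sparse feature selection, can be sketched with a pure-Python LASSO solved by proximal gradient descent (ISTA). The synthetic "spectra" and the solver are illustrative stand-ins for the paper's pipeline, which pairs LASSO feature extraction with support vector regression on real SDSS fluxes.

```python
import random

def soft_threshold(v, lam):
    """Proximal operator of the L1 norm: shrink toward zero by lam."""
    return v - lam if v > lam else v + lam if v < -lam else 0.0

def lasso_ista(X, y, lam=0.2, lr=0.05, iters=800):
    """Minimize (1/2n)||y - Xw||^2 + lam*||w||_1 by proximal gradient
    descent (ISTA); the nonzero entries of w mark informative features."""
    n, p = len(X), len(X[0])
    w = [0.0] * p
    for _ in range(iters):
        resid = [sum(X[i][j] * w[j] for j in range(p)) - y[i]
                 for i in range(n)]
        grad = [sum(X[i][j] * resid[i] for i in range(n)) / n
                for j in range(p)]
        w = [soft_threshold(w[j] - lr * grad[j], lr * lam) for j in range(p)]
    return w

# synthetic "spectra": only fluxes at positions 2 and 7 carry information
rng = random.Random(0)
n, p = 120, 10
X = [[rng.gauss(0.0, 1.0) for _ in range(p)] for _ in range(n)]
y = [3.0 * row[2] - 2.0 * row[7] + rng.gauss(0.0, 0.05) for row in X]
w = lasso_ista(X, y)
selected = [j for j, wj in enumerate(w) if abs(wj) > 0.1]
```

The L1 penalty drives uninformative weights to exactly zero, which is what gives the method its sparseness and the physical interpretability of the surviving wavelength positions; a regressor (SVR in the paper) is then trained on the selected features only.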
Updated Capital Cost Estimates for Utility Scale Electricity Generating Plants
Reports and Publications (EIA)
2013-01-01
The current and future projected cost and performance characteristics of new electric generating capacity are a critical input into the development of energy projections and analyses.
Estimation of anisotropy parameters in organic-rich shale: Rock physics forward modeling approach
Herawati, Ida; Winardhi, Sonny; Priyono, Awali
2015-09-30
Anisotropy analysis becomes an important step in processing and interpretation of seismic data. One of the most important things in anisotropy analysis is anisotropy parameter estimation; the parameters can be estimated using well data, core data, or seismic data. In seismic data, anisotropy parameter calculation is generally based on velocity moveout analysis. However, the accuracy depends on data quality, available offset, and velocity moveout picking. Anisotropy estimation using seismic data is needed to obtain wide coverage of a particular layer's anisotropy. In an anisotropic reservoir, analysis of anisotropy parameters also helps us to better understand the reservoir characteristics. Anisotropy parameters, especially ε, are related to rock property and lithology determination. The current research aims to estimate anisotropy parameters from seismic data and integrate them with well data in a case study of a potential shale gas reservoir. Due to the complexity of organic-rich shale reservoirs, extensive study from different disciplines is needed to understand the reservoir. Shale itself has intrinsic anisotropy caused by the lamination of its constituent minerals. In order to link rock physics with seismic response, it is necessary to build a forward model of organic-rich shale. This paper focuses on studying the relationship between reservoir properties such as clay content, porosity, and total organic content with anisotropy. Organic content, which defines the prospectivity of shale gas, can be considered as solid background or solid inclusion or both. From the forward modeling result, it is shown that the presence of organic matter increases anisotropy in shale. The relationships between total organic content and other seismic properties such as acoustic impedance and Vp/Vs are also presented.
Wullschleger, Stan D; Gu, Lianhong; Pallardy, Stephen G.; Tu, Kevin; Law, Beverly E.
2010-01-01
The Farquhar-von Caemmerer-Berry (FvCB) model of photosynthesis is a change-point model and structurally overparameterized for interpreting the response of leaf net assimilation (A) to intercellular CO{sub 2} concentration (Ci). The use of conventional fitting methods may lead not only to incorrect parameters but also several previously unrecognized consequences. For example, the relationships between key parameters may be fixed computationally and certain fits may be produced in which the estimated parameters result in contradictory identification of the limitation states of the data. Here we describe a new approach that is better suited to the FvCB model characteristics. It consists of four main steps: (1) enumeration of all possible distributions of limitation states; (2) fitting the FvCB model to each limitation state distribution by minimizing a distribution-wise cost function that has desirable properties for parameter estimation; (3) identification and correction of inadmissible fits; and (4) selection of the best fit from all possible limitation state distributions. The new approach implemented theoretical parameter resolvability with numerical procedures that maximally use the information content of the data. It was tested with model simulations, sampled A/Ci curves, and chlorophyll fluorescence measurements of different tree species. The new approach is accessible through the automated website leafweb.ornl.gov.
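The enumerate-fit-check-select logic of steps (1)-(4) can be sketched with a deliberately simplified change-point model in which each limitation state is a straight line in Ci. The real FvCB states are saturating, nonlinear functions of Ci; straight lines are used here only to keep the logic visible. For each candidate changeover point we fit both states, discard inadmissible fits (where a point assigned to one state is not actually the limiting, i.e. lower, prediction), and keep the admissible split with the smallest error.

```python
def fit_line(xs, ys):
    """Least-squares straight line through (xs, ys): (slope, intercept)."""
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    slope = sum((x - xb) * (y - yb) for x, y in zip(xs, ys)) \
        / sum((x - xb) ** 2 for x in xs)
    return slope, yb - slope * xb

def fit_changepoint(ci, a):
    """Enumerate candidate changeover indices between two limitation
    states, fit each state by least squares, discard inadmissible fits,
    and return (sse, k, state1, state2) for the admissible split with
    the smallest total squared error."""
    best = None
    for k in range(2, len(ci) - 1):          # each state needs >= 2 points
        s1, b1 = fit_line(ci[:k], a[:k])
        s2, b2 = fit_line(ci[k:], a[k:])
        # admissibility: the fitted state must be the limiting (lower) one
        ok = all(s1 * x + b1 <= s2 * x + b2 + 1e-9 for x in ci[:k]) and \
             all(s2 * x + b2 <= s1 * x + b1 + 1e-9 for x in ci[k:])
        if not ok:
            continue
        sse = sum((s1 * x + b1 - y) ** 2 for x, y in zip(ci[:k], a[:k])) \
            + sum((s2 * x + b2 - y) ** 2 for x, y in zip(ci[k:], a[k:]))
        if best is None or sse < best[0]:
            best = (sse, k, (s1, b1), (s2, b2))
    return best

# synthetic A/Ci data: a rising state that hands over to a flat state
ci = list(range(50, 1001, 50))
a = [min(0.05 * c - 1.0, 15.0) for c in ci]
sse, k, state1, state2 = fit_changepoint(ci, a)
```

The admissibility check is the toy analogue of step (3): without it, a low-error split can still assign data points to a limitation state that the fitted model says is not the limiting one.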
Radiatively Important Parameters Best Estimate (RIPBE): An ARM Value-Added Product
McFarlane, S; Shippert, T; Mather, J
2011-06-30
The Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to create a complete set of clearly identified parameters on a uniform vertical and temporal grid for use as input to a radiative transfer model. One of the main drivers for RIPBE was as input to the Broadband Heating Rate Profile (BBHRP) VAP, but we also envision using RIPBE files for user-run radiative transfer codes, as part of cloud/aerosol retrieval testbeds, and as input to averaged datastreams for model evaluation.
Kowalsky, Michael; Finsterle, Stefan; Rubin, Yoram
2003-05-12
Methods for determining the parameters necessary for modeling fluid flow and contaminant transport in the shallow subsurface are in great demand. Soil properties such as permeability, porosity, and water retention are typically estimated through the inversion of hydrological data (e.g., measurements of capillary pressure and water saturation). However, ill-posedness and non-uniqueness commonly arise in such inverse problems making their solutions elusive. Incorporating additional types of data, such as from geophysical methods, may greatly improve the success of inverse modeling. In particular, ground-penetrating radar (GPR) has proven sensitive to subsurface fluid flow processes. In the present work, an inverse technique is presented in which permeability distributions are generated conditional to time-lapsed GPR measurements and hydrological data collected during a transient flow experiment. Specifically, a modified pilot point framework has been implemented in iTOUGH2 allowing for the generation of permeability distributions that preserve point measurements and spatial correlation patterns while reproducing geophysical and hydrological measurements. Through a numerical example, we examine the performance of this method and the benefit of including synthetic GPR data while inverting for fluid flow parameters in the vadose zone. Our hypothesis is that within the inversion framework that we describe, our ability to predict flow across control planes greatly improves with the use of both transient hydrological measurements and geophysical measurements (GPR-derived estimates of water saturation, in particular).
DOE/SC-ARM/TR-097 Radiatively Important Parameters Best Estimate
Radiatively Important Parameters Best Estimate (RIPBE): An ARM Value-Added Product. S McFarlane, T Shippert, J Mather. June 2011.
Araujo, Marcelo Guimaraes; Magrini, Alessandra; Mahler, Claudio Fernando; Bilitewski, Bernd
2012-02-15
Highlights: • Literature on WEEE generation in developing countries is reviewed. • We analyse existing estimates of WEEE generation for Brazil. • We present a model for estimating WEEE generation. • WEEE generation of 3.77 kg/capita/year for 2008 is estimated. • Use of a constant lifetime should be avoided for non-mature market products. - Abstract: Sales of electrical and electronic equipment are increasing dramatically in developing countries. Usually, there are no reliable data about quantities of the waste generated. A new law for solid waste management was enacted in Brazil in 2010, and the infrastructure to treat this waste must be planned, considering the volumes of the different types of electrical and electronic equipment generated. This paper reviews the literature regarding estimation of waste electrical and electronic equipment (WEEE), focusing on developing countries, particularly in Latin America. It briefly describes the current WEEE system in Brazil and presents an updated estimate of generation of WEEE. Considering the limited available data in Brazil, a model for WEEE generation estimation is proposed in which different methods are used for mature and non-mature market products. The results showed that the most important variable is the equipment lifetime, which requires a thorough understanding of consumer behavior to estimate. Since Brazil is a rapidly expanding market, the 'boom' in waste generation is still to come. In the near future, better data will provide more reliable estimation of waste generation and a clearer interpretation of the lifetime variable throughout the years.
Impact of mergers on LISA parameter estimation for nonspinning black hole binaries
McWilliams, Sean T.; Thorpe, James Ira; Baker, John G.; Kelly, Bernard J.
2010-03-15
We investigate the precision with which the parameters describing the characteristics and location of nonspinning black hole binaries can be measured with the Laser Interferometer Space Antenna (LISA). By using complete waveforms including the inspiral, merger, and ringdown portions of the signals, we find that LISA will have far greater precision than previous estimates for nonspinning mergers that ignored the merger and ringdown. Our analysis covers nonspinning waveforms with moderate mass ratios, q ≥ 1/10, and total masses 10^{5} ≲ M/M_{⊙} ≲ 10^{7}. We compare the parameter uncertainties using the Fisher-matrix formalism, and establish the significance of mass asymmetry and higher-order content to the predicted parameter uncertainties resulting from inclusion of the merger. In real-time observations, the later parts of the signal lead to significant improvements in sky-position precision in the last hours and even the final minutes of observation. For comparable-mass systems with total mass M/M_{⊙} ≈ 10^{6}, we find that the increased precision resulting from including the merger is comparable to the increase in signal-to-noise ratio. For the most precise systems under investigation, half can be localized to within O(10 arcmin), and 10% can be localized to within O(1 arcmin).
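The Fisher-matrix machinery behind such forecasts can be sketched in a few lines. This is a generic toy (a sinusoid in white noise, with an invented amplitude, frequency, noise level, and sampling), not the LISA response model used in the paper:

```python
import math

# Fisher-matrix forecast for h(t; A, f) = A*sin(2*pi*f*t) in white
# noise of standard deviation sigma:
#   F_ij = sum_t (dh/dtheta_i)(dh/dtheta_j) / sigma^2
# and the forecast 1-sigma uncertainty on theta_i is sqrt((F^-1)_ii).

def fisher(A, f, times, sigma):
    dA = [math.sin(2 * math.pi * f * t) for t in times]               # dh/dA
    df = [A * 2 * math.pi * t * math.cos(2 * math.pi * f * t)
          for t in times]                                              # dh/df
    Faa = sum(x * x for x in dA) / sigma ** 2
    Fff = sum(x * x for x in df) / sigma ** 2
    Faf = sum(x * y for x, y in zip(dA, df)) / sigma ** 2
    return [[Faa, Faf], [Faf, Fff]]

def uncertainties(F):
    # invert the 2x2 Fisher matrix and read off the diagonal
    det = F[0][0] * F[1][1] - F[0][1] ** 2
    return math.sqrt(F[1][1] / det), math.sqrt(F[0][0] / det)

times = [i * 0.01 for i in range(1000)]    # 10 s sampled at 100 Hz
sA, sf = uncertainties(fisher(1.0, 3.0, times, sigma=0.1))
# the frequency is constrained far more tightly than the amplitude,
# because dh/df grows with observation time
```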
Heath, G.
2012-06-01
This PowerPoint presentation, to be presented at the World Renewable Energy Forum on May 14, 2012, in Denver, CO, discusses the systematic review and harmonization of life cycle GHG emission estimates for electricity generation technologies.
Thermal hydraulic simulations, error estimation and parameter sensitivity studies in Drekar::CFD
Smith, Thomas Michael; Shadid, John N.; Pawlowski, Roger P.; Cyr, Eric C.; Wildey, Timothy Michael
2014-01-01
This report describes work directed towards completion of the Thermal Hydraulics Methods (THM) CFD Level 3 Milestone THM.CFD.P7.05 for the Consortium for Advanced Simulation of Light Water Reactors (CASL) Nuclear Hub effort. The focus of this milestone was to demonstrate the thermal hydraulics and adjoint-based error estimation and parameter sensitivity capabilities in the CFD code called Drekar::CFD. This milestone builds upon the capabilities demonstrated in three earlier milestones: THM.CFD.P4.02 [12], completed March 31, 2012; THM.CFD.P5.01 [15], completed June 30, 2012; and THM.CFD.P5.01 [11], completed October 31, 2012.
Impact of spurious shear on cosmological parameter estimates from weak lensing observables
Petri, Andrea; May, Morgan; Haiman, Zoltán; Kratochvil, Jan M.
2014-12-30
We investigate how residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ω_{m},w,σ_{8}) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ_{sys}^{2} ≈ 10^{-7}, biases from the PS and LM would be unimportant even for a survey with the statistical power of the Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈ 100 deg^{2}, non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MF, PK) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ω_{m},w,σ_{8}) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates.
Rabl, A.; Leide, B.; Carvalho, M.J.; Collares-Pereira, M.; Bourges, B.
1991-01-01
The Collector and System Testing Group (CSTG) of the European Community has developed a procedure for testing the performance of solar water heaters. This procedure treats a solar water heater as a black box with input-output parameters that are determined by all-day tests. In the present study the authors carry out a systematic analysis of the accuracy of this procedure, in order to answer the question: what tolerances should one impose for the measurements, and how many days of testing should one demand under what meteorological conditions, in order to be able to guarantee a specified maximum error for the long-term performance? The methodology is applicable to other test procedures as well. The present paper (Part 1) examines the measurement tolerances of the current version of the procedure and derives a priori estimates of the errors of the parameters; these errors are then compared with the regression results of the Round Robin test series. The companion paper (Part 2) evaluates the consequences for the accuracy of the long-term performance prediction. The authors conclude that the CSTG test procedure makes it possible to predict the long-term performance with standard errors around 5% for sunny climates (10% for cloudy climates). The apparent precision of individual test sequences is deceptive because of large systematic discrepancies between different sequences. Better results could be obtained by imposing tighter control on the constancy of the cold water supply temperature and on the environment of the test, the latter by enforcing the recommendation for the ventilation of the collector.
Estimating Monthly 1989-2000 Data for Generation, Consumption, and Stocks
U.S. Energy Information Administration (EIA) Indexed Site
Monthly Energy Review, Section 7: Estimating Monthly 1989-2000 Data for Generation, Consumption, and Stocks For 1989-2000, monthly and annual data were collected for electric utilities; however, during this time period, only annual data were collected for independent power producers, commercial plants, and industrial plants. To obtain 1989-2000 monthly estimates for the Electric Power, Commercial, and Industrial Sectors, electric utility patterns were used for each energy source (MonthX =
Ru-Shan Wu; Xiao-Bi Xie
2008-06-08
Our proposed work on high resolution/high fidelity seismic imaging focused on three general areas: (1) development of new, more efficient, wave-equation-based propagators and imaging conditions, (2) developments towards amplitude-preserving imaging in the local angle domain, in particular, imaging methods that allow us to estimate the reflection as a function of angle at a layer boundary, and (3) studies of wave inversion for local parameter estimation. In this report we summarize the results and progress we made during the project period. The report is divided into three parts, totaling 10 chapters. The first part is on resolution analysis and its relation to directional illumination analysis. The second part, which is composed of 6 chapters, is on the main theme of our work, true-reflection imaging. True-reflection imaging is an advanced imaging technology which aims at keeping the image amplitude proportional to the reflection strength of the local reflectors, or at obtaining the reflection coefficient as a function of reflection angle. There are many factors which may influence the image amplitude, such as geometrical spreading, transmission loss, path absorption, and the acquisition-aperture effect. However, we can group these into two categories: one is the propagator effect (geometric spreading, path losses); the other is the acquisition-aperture effect. We have made significant progress in both categories. We studied the effects of different terms in the true-amplitude one-way propagators, especially the terms including lateral velocity variation of the medium. We also demonstrate the improvements by optimizing the expansion coefficients in different terms. Our research also includes directional illumination analysis for both the one-way propagators and full-wave propagators. We developed a fast acquisition-aperture correction method in the local angle domain, which is an important element of true-reflection imaging. Other developments include the super
Generating human reliability estimates using expert judgment. Volume 1. Main report
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessment (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 1 of this report provides a brief overview of the background of the project, the procedure for using psychological scaling techniques to generate HEP estimates, and conclusions from evaluation of the techniques. Results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. In addition, HEP estimates for 35 tasks related to boiling water reactors (BWRs) were obtained as part of the evaluation. These HEP estimates are also included in the report.
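One common way direct numerical estimates from several experts are pooled is a geometric mean with lognormal uncertainty bounds. This is a convention from the PRA literature, not necessarily the report's exact aggregation procedure, and the expert values below are invented:

```python
import math
import statistics

# Pool expert HEP judgments with a geometric mean and derive 5th/95th
# percentile bounds from the spread of the log10 estimates, under a
# lognormal spread assumption.

def aggregate_hep(estimates):
    logs = [math.log10(p) for p in estimates]
    mean_log = statistics.mean(logs)
    sd_log = statistics.stdev(logs)
    hep = 10 ** mean_log                       # geometric mean
    lower = 10 ** (mean_log - 1.645 * sd_log)  # ~5th percentile
    upper = 10 ** (mean_log + 1.645 * sd_log)  # ~95th percentile
    return hep, lower, upper

# four experts judge the probability of the same operator error
hep, lo, hi = aggregate_hep([3e-3, 1e-2, 5e-3, 2e-3])
```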
Generating human reliability estimates using expert judgment. Volume 2. Appendices. [PWR; BWR]
Comer, M.K.; Seaver, D.A.; Stillwell, W.G.; Gaddy, C.D.
1984-11-01
The US Nuclear Regulatory Commission is conducting a research program to determine the practicality, acceptability, and usefulness of several different methods for obtaining human reliability data and estimates that can be used in nuclear power plant probabilistic risk assessments (PRA). One method, investigated as part of this overall research program, uses expert judgment to generate human error probability (HEP) estimates and associated uncertainty bounds. The project described in this document evaluated two techniques for using expert judgment: paired comparisons and direct numerical estimation. Volume 2 provides detailed procedures for using the techniques, detailed descriptions of the analyses performed to evaluate the techniques, and HEP estimates generated as part of this project. The results of the evaluation indicate that techniques using expert judgment should be given strong consideration for use in developing HEP estimates. Judgments were shown to be consistent and to provide HEP estimates with a good degree of convergent validity. Of the two techniques tested, direct numerical estimation appears to be preferable in terms of ease of application and quality of results.
Discretization error estimation and exact solution generation using the method of nearby problems.
Sinclair, Andrew J.; Raju, Anil; Kurzen, Matthew J.; Roy, Christopher John; Phillips, Tyrone S.
2011-10-01
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
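For contrast with MNP's single-grid estimate, the two-grid Richardson error estimate used as the comparison baseline works as follows for a scheme of observed order p (a generic sketch on a toy problem, not tied to the report's solvers):

```python
import math

# Richardson extrapolation: given solutions f_h and f_2h from two
# systematically refined grids and observed order p, the estimated
# discretization error of the fine-grid solution (f_h - f_exact) is
#   (f_2h - f_h) / (2^p - 1)
# since f_h ~ f + C*h^p and f_2h ~ f + C*(2h)^p.

def richardson_error(f_h, f_2h, p=2):
    return (f_2h - f_h) / (2 ** p - 1)

# toy 2nd-order scheme: central difference of sin at x = 1
def central_diff(h, x=1.0):
    return (math.sin(x + h) - math.sin(x - h)) / (2 * h)

f_h, f_2h = central_diff(0.1), central_diff(0.2)
est_err = richardson_error(f_h, f_2h)
true_err = f_h - math.cos(1.0)
# est_err tracks true_err closely for this smooth problem
```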
Boe, Timothy; Lemieux, Paul; Schultheisz, Daniel; Peake, Tom; Hayes, Colin
2013-07-01
Management of debris and waste from a wide-area radiological incident would probably constitute a significant percentage of the total remediation cost and effort. The U.S. Environmental Protection Agency's (EPA's) Waste Estimation Support Tool (WEST) is a unique planning tool for estimating the potential volume and radioactivity levels of waste generated by a radiological incident and subsequent decontamination efforts. The WEST was developed to support planners and decision makers by generating a first-order estimate of the quantity and characteristics of waste resulting from a radiological incident. The tool then allows the user to evaluate the impact of various decontamination/demolition strategies on the waste types and volumes generated. WEST consists of a suite of standalone applications and Esri® ArcGIS® scripts for rapidly estimating waste inventories and levels of radioactivity generated from a radiological contamination incident as a function of user-defined decontamination and demolition approaches. WEST accepts Geographic Information System (GIS) shapefiles defining contaminated areas and extent of contamination. Building stock information, including square footage, building counts, and building composition estimates are then generated using the Federal Emergency Management Agency's (FEMA's) Hazus®-MH software. WEST then identifies outdoor surfaces based on the application of pattern recognition to overhead aerial imagery. The results from the GIS calculations are then fed into a Microsoft Excel® 2007 spreadsheet with a custom graphical user interface where the user can examine the impact of various decontamination/demolition scenarios on the quantity, characteristics, and residual radioactivity of the resulting waste streams. (authors)
Estimation of retired mobile phones generation in China: A comparative study on methodology
Li, Bo; Yang, Jianxin; Lu, Bin; Song, Xiaolong
2015-01-15
Highlights: • The sales data of mobile phones in China were revised by considering the amount of smuggled and counterfeit mobile phones. • The estimation of retired mobile phones in China was made by comparing relevant methods. • The improved estimate can help improve policy-making. • The method suggested in this paper can also be used in other countries. • Some discussions on methodology are also conducted with a view to improvement. - Abstract: Due to the rapid development of economy and technology, China has the biggest production and possession of mobile phones in the world. In general, mobile phones have a relatively short lifetime because the majority of users replace their mobile phones frequently. Retired mobile phones represent the most valuable electrical and electronic equipment (EEE) in the main waste stream because of such characteristics as large quantity, high reuse/recovery value and fast replacement frequency. Consequently, the huge amount of retired mobile phones in China calls for a sustainable management system. Generation estimation can provide fundamental information for constructing a sustainable management system for retired mobile phones and other waste electrical and electronic equipment (WEEE). However, a reliable estimation result is difficult to obtain and verify. The primary aim of this paper is to identify a proper estimation approach for the generation of retired mobile phones in China, by comparing relevant methods. The results show that the sales and new method has the highest priority for estimating retired mobile phones. By this method, 47.92 million mobile phones were retired in 2002, rising to 739.98 million in China in 2012, a clearly increasing tendency with some fluctuations. Furthermore, some discussions on methodology, such as the selection of improper approach and error in the input data, are also conducted in order to
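The "sales and new" approach favored above amounts to convolving past sales with a lifetime distribution. The sketch below uses the generic textbook form; the sales figures and lifetime probabilities are made up, not the paper's data:

```python
# Units retired in year t = sum over lifetimes k of
#   sales(t - k) * P(lifetime = k)

def retired_units(sales_by_year, lifetime_dist, year):
    # lifetime_dist[k] = probability a phone retires after k years
    return sum(sales_by_year.get(year - k, 0.0) * p
               for k, p in lifetime_dist.items())

sales = {2008: 100.0, 2009: 130.0, 2010: 170.0}   # million units/year
lifetime = {2: 0.3, 3: 0.5, 4: 0.2}               # must sum to 1
retired_2012 = retired_units(sales, lifetime, 2012)
# = 170*0.3 (sold 2010) + 130*0.5 (2009) + 100*0.2 (2008)
# = 136 million units
```

This also shows why a constant-lifetime assumption is risky for non-mature markets: the result is sensitive to the whole shape of `lifetime`, not just its mean.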
Tzvi Galchen; Mei Xu; Eberhard, W.L.
1992-11-30
This work is part of the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), an international land-surface-atmosphere experiment aimed at improving the way climate models represent energy, water, heat, and carbon exchanges, and improving the utilization of satellite-based remote sensing to monitor such parameters. Here the authors present results on Doppler lidar measurements used to measure a range of turbulence parameters in the region of the unstable planetary boundary layer (PBL). The parameters include averaged velocities, Cartesian velocities, variances in velocities, parts of the covariance associated with vertical fluxes of horizontal momentum, and third moments of the vertical velocity. They explain their analysis technique, especially as it relates to error reduction of the averaged turbulence parameters from individual measurements with relatively large errors. The scales studied range from 150 m to 12 km. With this new diagnostic they address questions about the behavior of the convectively unstable PBL, as well as the stable layer which overlies it.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Burr, Tom; Hamada, Michael S.; Howell, John; Skurikhin, Misha; Ticknor, Larry; Weaver, Brian
2013-01-01
Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
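A baseline version of the threshold-estimation task is the empirical quantile of in-control residuals. The paper's model-selection options are richer than this; the sketch below just fixes ideas, using synthetic residuals:

```python
import random

# Choose the alarm threshold as a high quantile of residuals
# (residual = data - prediction) observed under normal operation,
# targeting a given false alarm rate.

def empirical_threshold(residuals, false_alarm_rate):
    ordered = sorted(residuals)
    k = int((1 - false_alarm_rate) * len(ordered))
    return ordered[min(k, len(ordered) - 1)]

# synthetic in-control residuals as a stand-in for monitoring data
random.seed(0)
resid = [random.gauss(0.0, 1.0) for _ in range(10000)]
thr = empirical_threshold(resid, 0.01)
# for N(0,1) residuals this lands near the 99th percentile (~2.33)
```

The hard part the paper addresses is that *small* false alarm rates put the threshold in the distribution's tail, where the empirical quantile is noisy and model assumptions (mixtures, serial structure) dominate.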
McKenna, Sean Andrew; Yoon, Hongkyu; Hart, David Blaine
2010-12-01
Heterogeneity plays an important role in groundwater flow and contaminant transport in natural systems. Since it is impossible to directly measure spatial variability of hydraulic conductivity, predictions of solute transport based on mathematical models are always uncertain. While in most cases groundwater flow and tracer transport problems are investigated in two-dimensional (2D) systems, it is important to study more realistic and well-controlled 3D systems to fully evaluate inverse parameter estimation techniques and evaluate uncertainty in the resulting estimates. We used tracer concentration breakthrough curves (BTCs) obtained from a magnetic resonance imaging (MRI) technique in a small flow cell (14 x 8 x 8 cm) that was packed with a known pattern of five different sands (i.e., zones) having cm-scale variability. In contrast to typical inversion systems with head, conductivity and concentration measurements at limited points, the MRI data included BTCs measured at a voxel scale (~0.2 cm in each dimension) over 13 x 8 x 8 cm with a well controlled boundary condition, but did not have direct measurements of head and conductivity. Hydraulic conductivity and porosity were conceptualized as spatial random fields and estimated using pilot points along layers of the 3D medium. The steady state water flow and solute transport were solved using MODFLOW and MODPATH. The inversion problem was solved with a nonlinear parameter estimation package - PEST. Two approaches to parameterization of the spatial fields are evaluated: (1) The detailed zone information was used as prior information to constrain the spatial impact of the pilot points and reduce the number of parameters; and (2) highly parameterized inversion at cm scale (e.g., 1664 parameters) using singular value decomposition (SVD) methodology to significantly reduce the run-time demands. Both results will be compared to measured BTCs. With MRI, it is easy to change the averaging scale of the observed
Power plant capital investment cost estimates: current trends and sensitivity to economic parameters
Not Available
1980-06-01
This report describes power plant capital investment cost studies that were carried out as part of the activities of the Plans and Analysis Division, Office of Nuclear Energy Programs, US Department of Energy. The activities include investment cost studies prepared by an architect-engineer, including trends, effects of environmental and safety requirements, and construction schedules. A computer code used to prepare capital investment cost estimates under varying economic conditions is described, and application of this code is demonstrated by sensitivity studies.
Wang, Yan; Mohanty, Soumya D.; Jenet, Fredrick A. [Department of Physics and Astronomy, University of Texas at Brownsville, 1 West University Boulevard, Brownsville, TX 78520 (United States)
2014-11-01
The use of a high precision pulsar timing array is a promising approach to detecting gravitational waves in the very low frequency regime (10^{-6}-10^{-9} Hz) that is complementary to ground-based efforts (e.g., LIGO, Virgo) at high frequencies (~10-10^{3} Hz) and space-based ones (e.g., LISA) at low frequencies (10^{-4}-10^{-1} Hz). Among the target sources for pulsar timing arrays are individual supermassive black hole binaries, which are expected to form in galactic mergers. In this paper, a likelihood-based method for detection and parameter estimation is presented for a monochromatic continuous gravitational wave signal emitted by such a source. The so-called pulsar terms in the signal that arise due to the breakdown of the long-wavelength approximation are explicitly taken into account in this method. In addition, the method accounts for equality and inequality constraints involved in the semi-analytical maximization of the likelihood over a subset of the parameters. The remaining parameters are maximized over numerically using Particle Swarm Optimization. Thus, the method presented here solves the monochromatic continuous wave detection and parameter estimation problem without invoking some of the approximations that have been used in earlier studies.
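The role Particle Swarm Optimization plays in the final numerical maximization can be illustrated with a deliberately simple, unimodal log-likelihood (a Gaussian-mean problem, not a pulsar-timing model; the swarm coefficients are conventional textbook choices, not the paper's):

```python
import random

# Maximize a log-likelihood over one remaining parameter with a
# basic global-best PSO. Each particle is pulled toward its own best
# position and the swarm's best position.

def log_like(mu, data):
    return -sum((d - mu) ** 2 for d in data)

def pso_maximize(objective, lo, hi, n=20, iters=60, seed=1):
    rng = random.Random(seed)
    pos = [rng.uniform(lo, hi) for _ in range(n)]
    vel = [0.0] * n
    pbest = list(pos)                       # per-particle best
    gbest = max(pos, key=objective)         # swarm best
    for _ in range(iters):
        for i in range(n):
            vel[i] = (0.7 * vel[i]
                      + 1.5 * rng.random() * (pbest[i] - pos[i])
                      + 1.5 * rng.random() * (gbest - pos[i]))
            pos[i] = min(max(pos[i] + vel[i], lo), hi)
            if objective(pos[i]) > objective(pbest[i]):
                pbest[i] = pos[i]
                if objective(pbest[i]) > objective(gbest):
                    gbest = pbest[i]
    return gbest

data = [1.8, 2.2, 2.4, 1.6, 2.0]            # sample mean = 2.0
mu_hat = pso_maximize(lambda m: log_like(m, data), 0.0, 5.0)
```

In the paper's setting the objective is the (semi-analytically reduced) likelihood over the hard-to-maximize source parameters, which is highly multimodal; PSO is attractive there precisely because it needs no gradients or good starting guess.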
Can we estimate plasma density in ICP driver through electrical parameters in RF circuit?
Bandyopadhyay, M.; Sudhir, Dass; Chakraborty, A.
2015-04-08
To avoid regular maintenance, invasive plasma diagnostics with probes are not included in the inductively coupled plasma (ICP) based ITER Neutral Beam (NB) source design. Even non-invasive probes like optical emission spectroscopic diagnostics are not included in the present ITER NB design due to overall system design and interface issues. As a result, the negative ion beam current through the extraction system in the ITER NB negative ion source is the only measurement that indicates the plasma condition inside the ion source. However, the beam current depends not only on the plasma condition near the extraction region but also on the perveance condition of the ion extractor system and negative ion stripping. Moreover, the inductively coupled plasma production region (RF driver region) is placed at a distance (~30 cm) from the extraction region, so some uncertainty is expected if one tries to link beam current with plasma properties inside the RF driver. Plasma characterization in the source RF driver region is essential to maintain the optimum condition for source operation. In this paper, a method of plasma density estimation is described, based on a density-dependent plasma load calculation.
Wu, M.; Peng, J.
2011-02-24
Freshwater consumption for electricity generation is projected to increase dramatically in the next couple of decades in the United States. The increased demand is likely to further strain freshwater resources in regions where water has already become scarce. Meanwhile, the automotive industry has stepped up its research, development, and deployment efforts on electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs). Large-scale, escalated production of EVs and PHEVs nationwide would require increased electricity production, and so meeting the water demand becomes an even greater challenge. The goal of this study is to provide a baseline assessment of freshwater use in electricity generation in the United States and at the state level. Freshwater withdrawal and consumption requirements for power generated from fossil, nonfossil, and renewable sources via various technologies and by use of different cooling systems are examined. A data inventory has been developed that compiles data from government statistics, reports, and literature issued by major research institutes. A spreadsheet-based model has been developed to conduct the estimates by means of a transparent and interactive process. The model further allows us to project future water withdrawal and consumption in electricity production under the forecasted increases in demand. This tool is intended to provide decision makers with the means to make a quick comparison among various fuel, technology, and cooling system options. The model output can be used to address water resource sustainability when considering new projects or expansion of existing plants.
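The spreadsheet model's core accounting can be sketched as generation-weighted water factors. The factor values below are placeholders for illustration, not the report's inventory data:

```python
# Water use = sum over (fuel, cooling system) of generation times a
# withdrawal or consumption factor (units here: gal/MWh, invented).

FACTORS = {  # (fuel, cooling): (withdrawal, consumption)
    ("coal", "once-through"): (27000.0, 140.0),
    ("coal", "recirculating"): (600.0, 480.0),
    ("ngcc", "recirculating"): (250.0, 200.0),
    ("wind", "none"): (0.0, 0.0),
}

def water_use(generation_mwh):
    withdrawn = sum(FACTORS[k][0] * g for k, g in generation_mwh.items())
    consumed = sum(FACTORS[k][1] * g for k, g in generation_mwh.items())
    return withdrawn, consumed  # gallons withdrawn, gallons consumed

mix = {("coal", "recirculating"): 1000.0,   # MWh by source/cooling
       ("ngcc", "recirculating"): 500.0,
       ("wind", "none"): 300.0}
withdrawn, consumed = water_use(mix)
```

Note the once-through/recirculating split the abstract alludes to: once-through cooling withdraws far more water but consumes less of it, so the two metrics must be tracked separately.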
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane
2014-05-01
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia; Hough, Patricia Diane; Eddy, John P.
2011-12-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
Shao Tianjiao [State Key Laboratory of Molecular Reaction Dynamics, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China); School of Materials Science and Engineering, Dalian University of Technology, Dalian 116024 (China); Zhao Guangjiu; Yang Huan [State Key Laboratory of Molecular Reaction Dynamics, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China); School of Physics, Shandong University, Jinan 250100 (China); Wen Bin [School of Materials Science and Engineering, Dalian University of Technology, Dalian 116024 (China)
2010-12-15
In the present work, laser-parameter effects on isolated attosecond pulse generation from the two-color high-order harmonic generation (HHG) process are theoretically investigated by use of a wave-packet dynamics method. A 6-fs, 800-nm, 6×10¹⁴ W/cm², linearly polarized laser pulse serves as the fundamental driving pulse, and parallel linearly polarized control pulses at 400 nm (second harmonic) and 1600 nm (half harmonic) are superimposed to create a two-color field. Of the two techniques, we demonstrate that using a half-harmonic control pulse with a large relative strength and zero phase shift relative to the fundamental pulse is a more promising way to generate the shortest attosecond pulses. As a consequence, an isolated 12-as pulse is obtained by Fourier transforming an ultrabroad xuv continuum of 300 eV in the HHG spectrum under the half-harmonic control scheme when the relative strength √R = 0.6 and the relative phase is 0.
Blais, AR; Dekaban, M; Lee, T-Y
2014-08-15
Quantitative analysis of dynamic positron emission tomography (PET) data usually involves minimizing a cost function with nonlinear regression, wherein the choice of starting parameter values and the presence of local minima affect the bias and variability of the estimated kinetic parameters. These nonlinear methods can also require lengthy computation time, making them unsuitable for use in clinical settings. Kinetic modeling of PET aims to estimate the rate parameter k₃, which is the binding affinity of the tracer to a biological process of interest and is highly susceptible to noise inherent in PET image acquisition. We have developed linearized kinetic models for kinetic analysis of dynamic contrast-enhanced computed tomography (DCE-CT)/PET imaging, including a 2-compartment model for DCE-CT and a 3-compartment model for PET. Use of kinetic parameters estimated from DCE-CT can stabilize the kinetic analysis of dynamic PET data, allowing for more robust estimation of k₃. Furthermore, these linearized models are solved with a non-negative least squares algorithm, and together they provide several other advantages: 1) they have a single solution and do not require a choice of starting parameter values; 2) their parameter estimates are comparable in accuracy to those from nonlinear models; and 3) they significantly reduce computation time. Our simulated data show that when blood volume and permeability are estimated with DCE-CT, the bias of k₃ estimation with our linearized model is 1.97 ± 38.5% for 1,000 runs with a signal-to-noise ratio of 10. In summary, we have developed a computationally efficient technique for accurate estimation of k₃ from noisy dynamic PET data.
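The advantage of a linearized model is that the kinetic parameters enter the fit linearly, so a single (non-negative) least-squares solve replaces iterative nonlinear regression. The toy sketch below illustrates that idea with a hypothetical two-term linear design matrix, not the paper's actual compartment models; in practice a true NNLS solver (e.g. scipy.optimize.nnls) would enforce non-negativity, while this noise-free example already has a non-negative solution.

```python
import numpy as np

# Toy linearized kinetic fit: y = A @ p with p >= 0.
# The design matrix columns (integrated and instantaneous input) and the
# parameter values are illustrative assumptions, not the paper's models.
t = np.linspace(0.1, 10.0, 50)            # frame mid-times (min)
cp = np.exp(-0.3 * t)                     # toy plasma input function
dt = t[1] - t[0]
A = np.column_stack([
    np.cumsum(cp) * dt,                   # running integral of the input
    cp,                                   # instantaneous input
])
p_true = np.array([0.15, 0.04])           # hypothetical rate-like parameters
y = A @ p_true                            # noise-free synthetic measurements

# One linear solve, no starting guess, no local minima.
p_est, *_ = np.linalg.lstsq(A, y, rcond=None)
```

Because the problem is linear, the solution is unique and the computation is a single matrix factorization, which is the source of the speedup the abstract describes.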
Appendix M - GPRA06 estimate of penetration of generating technologies into green power markets
None, None
2009-01-18
The Green Power Market Model (GPMM or the model) identifies and analyzes the potential electric-generating capacity additions that will result from green power programs, which are not captured in the least-cost analyses performed by the National Energy Modeling System (NEMS) and the Market Allocation (MARKAL) model. The term "green power" is used to define power generated from renewable energy sources, such as wind, solar, geothermal, and various forms of biomass. The Green Power market is an increasingly important element of the national renewable energy contribution, with changes in the regulatory and legislative environment and the recent dramatic changes in natural gas prices slowly altering the size of this opportunity.
Ferraioli, Luigi; Hueller, Mauro; Vitale, Stefano; Heinzel, Gerhard; Hewitson, Martin; Monsky, Anneke; Nofrarias, Miquel
2010-08-15
The scientific objectives of the LISA Technology Package experiment on board of the LISA Pathfinder mission demand accurate calibration and validation of the data analysis tools in advance of the mission launch. The level of confidence required in the mission outcomes can be reached only by intensively testing the tools on synthetically generated data. A flexible procedure allowing the generation of a cross-correlated stationary noise time series was set up. A multichannel time series with the desired cross-correlation behavior can be generated once a model for a multichannel cross-spectral matrix is provided. The core of the procedure comprises a noise coloring, multichannel filter designed via a frequency-by-frequency eigendecomposition of the model cross-spectral matrix and a subsequent fit in the Z domain. The common problem of initial transients in a filtered time series is solved with a proper initialization of the filter recursion equations. The noise generator performance was tested in a two-dimensional case study of the closed-loop LISA Technology Package dynamics along the two principal degrees of freedom.
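The eigendecomposition step at the core of the procedure can be illustrated in a simplified static form: factor a target cross-covariance matrix as C = V Λ Vᵀ and color white noise with V √Λ so the output channels carry the desired cross-correlation. The paper's method does this frequency by frequency on a cross-spectral matrix and then fits a Z-domain filter; the sketch below shows only the coloring idea, with an illustrative two-channel covariance.

```python
import numpy as np

# Target 2-channel covariance (illustrative, not LISA Pathfinder data).
C = np.array([[2.0, 0.8],
              [0.8, 1.0]])

# Eigendecompose and build the coloring matrix: C = coloring @ coloring.T
evals, evecs = np.linalg.eigh(C)
coloring = evecs @ np.diag(np.sqrt(evals))

# Color independent white noise to get cross-correlated channels.
rng = np.random.default_rng(0)
white = rng.standard_normal((2, 200_000))
colored = coloring @ white

C_est = np.cov(colored)   # sample covariance should approach C
```

The frequency-domain version applies the same factorization to the model cross-spectral matrix at each frequency, which is what allows an arbitrary cross-correlation structure rather than a single static covariance.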
Wang, Ruofan; Wang, Jiang; Deng, Bin; Liu, Chen; Wei, Xile; Tsang, K. M.; Chan, W. L.
2014-03-15
A combined method composed of the unscented Kalman filter (UKF) and a synchronization-based method is proposed for estimating electrophysiological variables and parameters of a thalamocortical (TC) neuron model, which is commonly used in studies of Parkinson's disease for its relay role connecting the basal ganglia and the cortex. In this work, we consider the condition in which only the time series of the action potential, with heavy noise, is available. Numerical results demonstrate that not only can this method successfully estimate model parameters from the extracted action potential time series, but it also performs much better than the UKF or the synchronization-based method alone, with higher accuracy and better robustness against noise, especially under severe noise conditions. Considering the important role of the TC neuron in normal and pathological brain function, this estimation method could have important implications for the study of the neuron's nonlinear dynamics and, further, for the treatment of Parkinson's disease.
Nelson, C.
1995-08-01
Under Title III, Section 112 of the 1990 Clean Air Act Amendments, Congress directed the U.S. Environmental Protection Agency (EPA) to perform a study of the hazards to the public resulting from pollutants emitted by electric utility generating units. Radionuclides are among the groups of pollutants listed in the amendment. This report updates previously published data and estimates with more recently available information regarding the radionuclide contents of fossil fuels, associated emissions from steam-electric power plants, and potential health effects to exposed population groups.
Estimate of the Sources of Plutonium-Containing Wastes Generated from MOX Fuel Production in Russia
Kudinov, K.G.; Tretyakov, A.A.; Sorokin, Y.P.; Bondin, V.V.; Manakova, L.F.; Jardine, L.J.
2001-12-01
In Russia, mixed oxide (MOX) fuel is produced in a pilot facility ''Paket'' at ''MAYAK'' Production Association. The Mining-Chemical Combine (MCC) has developed plans to design and build a dedicated industrial-scale plant to produce MOX fuel and fuel assemblies (FA) for VVER-1000 water reactors and the BN-600 fast-breeder reactor, which is pending an official Russian Federation (RF) site-selection decision. The design output of the plant is based on a production capacity of 2.75 tons of weapons plutonium per year to produce the resulting fuel assemblies: 1.25 tons for the BN-600 reactor FAs and the remaining 1.5 tons for VVER-1000 FAs. It is likely that the quantity of BN-600 FAs will be reduced in actual practice. The process of nuclear disarmament frees a significant amount of weapons plutonium for other uses, which, if unutilized, represents a constant general threat. In France, Great Britain, Belgium, Russia, and Japan, reactor-grade plutonium is used in MOX-fuel production. Making MOX fuel for CANDU (Canada) and pressurized water reactors (PWR) (Europe) is under consideration in Russia. If this latter production is added, as many as 5 tons of Pu per year might be processed into new FAs in Russia. Many years of work and experience are represented in the estimates of MOX fuel production wastes derived in this report. Prior engineering studies and sludge treatment investigations and comparisons have determined how best to treat Pu sludges and MOX fuel wastes. Based upon analyses of the production processes established by these efforts, we estimate that there will be approximately 1200 kg of residual wastes subject to immobilization per MT of plutonium processed, of which approximately 6 to 7 kg is Pu in the residuals per MT of Pu processed. The wastes are varied and complex in composition. Because organic wastes constitute both the major portion of total waste and of the Pu to be immobilized, the recommended treatment of MOX-fuel production waste is incineration.
Estimate of the Sources of Plutonium-Containing Wastes Generated from MOX Fuel Production in Russia
Kudinov, K. G.; Tretyakov, A. A.; Sorokin, Yu. P.; Bondin, V. V.; Manakova, L. F.; Jardine, L. J.
2002-02-26
In Russia, mixed oxide (MOX) fuel is produced in a pilot facility ''Paket'' at ''MAYAK'' Production Association. The Mining-Chemical Combine (MCC) has developed plans to design and build a dedicated industrial-scale plant to produce MOX fuel and fuel assemblies (FA) for VVER-1000 water reactors and the BN-600 fast-breeder reactor, which is pending an official Russian Federation (RF) site-selection decision. The design output of the plant is based on a production capacity of 2.75 tons of weapons plutonium per year to produce the resulting fuel assemblies: 1.25 tons for the BN-600 reactor FAs and the remaining 1.5 tons for VVER-1000 FAs. It is likely that the quantity of BN-600 FAs will be reduced in actual practice. The process of nuclear disarmament frees a significant amount of weapons plutonium for other uses, which, if unutilized, represents a constant general threat. In France, Great Britain, Belgium, Russia, and Japan, reactor-grade plutonium is used in MOX-fuel production. Making MOX fuel for CANDU (Canada) and pressurized water reactors (PWR) (Europe) is under consideration in Russia. If this latter production is added, as many as 5 tons of Pu per year might be processed into new FAs in Russia. Many years of work and experience are represented in the estimates of MOX fuel production wastes derived in this report. Prior engineering studies and sludge treatment investigations and comparisons have determined how best to treat Pu sludges and MOX fuel wastes. Based upon analyses of the production processes established by these efforts, we estimate that there will be approximately 1200 kg of residual wastes subject to immobilization per MT of plutonium processed, of which approximately 6 to 7 kg is Pu in the residuals per MT of Pu processed. The wastes are varied and complex in composition. Because organic wastes constitute both the major portion of total waste and of the Pu to be immobilized, the recommended treatment of MOX-fuel production waste is
Smith, T.R.
1997-03-01
Three different solar domestic hot water systems are being tested at the Colorado State University Solar Energy Applications Laboratory: an unpressurized drain-back system with a load-side heat exchanger, an integral collector storage system, and an ultra-low-flow natural convection heat exchanger system. The systems are fully instrumented to yield data appropriate for in-depth analyses of performance. The level of detail allows observation of the performance of the total system and of the individual components. This report evaluates the systems based on in-situ experimental data and compares their performance with simulated performance. The verification of the simulations aids in the rating procedure. The whole-system performance measurements are also used to analyze the performance of individual components of a solar hot water system and to develop improved component models. The data are analyzed extensively, and the parameters needed to fully characterize the systems are developed. Also resulting from this in-depth analysis are suggested design improvements, either to the systems or to the system components.
Meyer, Philip D.; Ye, Ming; Rockhold, Mark L.; Neuman, Shlomo P.; Cantrell, Kirk J.
2007-07-30
This report to the Nuclear Regulatory Commission (NRC) describes the development and application of a methodology to systematically and quantitatively assess predictive uncertainty in groundwater flow and transport modeling that considers the combined impact of hydrogeologic uncertainties associated with the conceptual-mathematical basis of a model, model parameters, and the scenario to which the model is applied. The methodology is based on an extension of a Maximum Likelihood implementation of Bayesian Model Averaging. Model uncertainty is represented by postulating a discrete set of alternative conceptual models for a site with associated prior model probabilities that reflect a belief about the relative plausibility of each model based on its apparent consistency with available knowledge and data. Posterior model probabilities are computed and parameter uncertainty is estimated by calibrating each model to observed system behavior; prior parameter estimates are optionally included. Scenario uncertainty is represented as a discrete set of alternative future conditions affecting boundary conditions, source/sink terms, or other aspects of the models, with associated prior scenario probabilities. A joint assessment of uncertainty results from combining model predictions computed under each scenario using as weights the posterior model and prior scenario probabilities. The uncertainty methodology was applied to modeling of groundwater flow and uranium transport at the Hanford Site 300 Area. Eight alternative models representing uncertainty in the hydrogeologic and geochemical properties as well as the temporal variability were considered. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. The scenario alternatives were implemented in the models through the boundary conditions. Results demonstrate the feasibility of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow
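The model-averaging step described above reduces to a weighted combination: posterior model probabilities come from each model's calibration likelihood and prior probability, and predictions are combined across models and scenarios. The toy sketch below shows that arithmetic with three hypothetical models and two hypothetical scenarios; all likelihood and prediction values are illustrative, not from the Hanford application.

```python
import math

def posterior_model_probs(log_likelihoods, priors):
    """Posterior model probabilities from calibration log-likelihoods and priors."""
    m = max(log_likelihoods)  # subtract max for numerical stability
    unnorm = [p * math.exp(ll - m) for ll, p in zip(log_likelihoods, priors)]
    z = sum(unnorm)
    return [u / z for u in unnorm]

def averaged_prediction(preds, model_probs, scenario_probs):
    """preds[s][m]: prediction of model m under scenario s."""
    return sum(sp * sum(mp * p for mp, p in zip(model_probs, row))
               for sp, row in zip(scenario_probs, preds))

# Three models with equal priors; illustrative log-likelihoods.
probs = posterior_model_probs([-10.0, -12.0, -11.0], [1/3, 1/3, 1/3])

# Two scenarios with prior probabilities 0.7 and 0.3; illustrative predictions.
pred = averaged_prediction([[1.0, 2.0, 1.5], [2.0, 3.0, 2.5]],
                           probs, [0.7, 0.3])
```

The resulting joint uncertainty is carried by the spread of predictions around this weighted mean, with weights given by the posterior model and prior scenario probabilities.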
Life Estimation of PWR Steam Generator U-Tubes Subjected to Foreign Object-Induced Fretting Wear
Jo, Jong Chull; Jhung, Myung Jo; Kim, Woong Sik; Kim, Hho Jung
2005-10-15
This paper presents an approach to the remaining life prediction of steam generator (SG) U-tubes, which are intact initially, subjected to fretting-wear degradation due to the interaction between a vibrating tube and a foreign object in operating nuclear power plants. The operating SG shell-side flow field conditions are obtained from a three-dimensional SG flow calculation using the ATHOS3 code. Modal analyses are performed for the finite element models of U-tubes to get the natural frequency, corresponding mode shape, and participation factor. The wear rate of a U-tube caused by a foreign object is calculated using the Archard formula, and the remaining life of the tube is predicted. Also discussed in this study are the effects of the tube modal characteristics, external flow velocity, and tube internal pressure on the estimated results of the remaining life of the tube.
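As a hedged sketch of the life-estimation step (leaving aside the ATHOS3 flow calculation and the finite-element modal analysis), the Archard relation gives wear volume V = K·F·s/H for wear coefficient K, normal force F, sliding distance s, and hardness H; dividing an allowable wall-loss volume by the wear-volume rate then yields a remaining life. All numeric values below are illustrative, not plant data.

```python
def archard_wear_rate(K, force_n, sliding_speed_m_s, hardness_pa):
    """Archard wear-volume rate (m^3/s) for continuous sliding contact."""
    return K * force_n * sliding_speed_m_s / hardness_pa

def remaining_life_s(allowable_volume_m3, wear_rate_m3_s):
    """Seconds until the allowable wall-loss volume is consumed."""
    return allowable_volume_m3 / wear_rate_m3_s

# Illustrative numbers only: dimensionless K, 2 N contact force,
# 0.05 m/s effective sliding speed, 1.6 GPa tube hardness.
rate = archard_wear_rate(K=1e-4, force_n=2.0, sliding_speed_m_s=0.05,
                         hardness_pa=1.6e9)
life_seconds = remaining_life_s(1e-7, rate)
```

In the paper's workflow the force and sliding speed come from the tube's flow-induced vibration (modal characteristics and external flow velocity), which is why those inputs dominate the life estimate.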
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Giunta, Anthony A.; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Dehoff, Ryan R.; Sridharan, Niyanth; Dinwiddie, Ralph; Robson, Alan; Jordan, Brian; Chaudhary, Anil; Babu, Sudarsanam Suresh
2015-09-01
Researchers from the Manufacturing Demonstration Facility (MDF) at Oak Ridge National Laboratory (ORNL) worked with Applied Optimization (AO) to understand and evaluate the propensity for defect formation in builds manufactured using DM3D-POM laser direct metal deposition. The main aim of this collaboration was to understand the character of the powder jet behavior as a function of nozzle parameters such as cover gas, carrier gas, and shaping gas. In order to evaluate the sensitivities of the parameters used in the model, various experiments were performed with in-situ monitoring of the powder stream characteristics using a high-speed camera. A wide variety of conditions, including laser power and travel speed, were explored while keeping the hopper motor rpm constant. The cross sections of the deposits were characterized using optical microscopy.
Williams, W.R.; Anderson, J.C.
1995-12-31
The transportation of UF6 is subject to regulations requiring the evaluation of packaging under a sequence of hypothetical accident conditions, including exposure to a 30-min 800°C (1475°F) fire [10 CFR 71.73(c)(3)]. An issue of continuing interest is whether bare cylinders can withstand such a fire without rupturing. To address this issue, a lumped-parameter heat transfer/stress analysis model (6FIRE) has been developed to simulate the heating to the point of rupture of a cylinder containing UF6 when it is exposed to a fire. The model is described, then estimates of time to rupture are presented for various cylinder types, fire temperatures, and fill conditions. An assessment of the quantity of UF6 released from containment after rupture is also presented. Further documentation of the model is referenced.
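The essence of a lumped-parameter heating model is a single thermal mass gaining heat from the fire until a limit temperature is reached. The minimal sketch below integrates Newton heating with explicit Euler steps to produce a time-to-temperature estimate; the real 6FIRE model also couples UF6 phase behavior and shell stress, and every parameter value here is illustrative.

```python
def time_to_temperature(T0, T_fire, T_limit, hA_over_mc, dt=1.0, t_max=1e6):
    """Seconds for a lumped mass to heat from T0 to T_limit, or None if never.

    hA_over_mc: heat-transfer coefficient times area over thermal mass (1/s),
    an assumed lumped parameter, not a 6FIRE input.
    """
    T, t = T0, 0.0
    while T < T_limit:
        if t >= t_max:
            return None
        T += hA_over_mc * (T_fire - T) * dt   # Newton heating, explicit Euler
        t += dt
    return t

# Illustrative case: 38 C start, 800 C fire, hypothetical 400 C limit.
t_limit = time_to_temperature(T0=38.0, T_fire=800.0, T_limit=400.0,
                              hA_over_mc=1e-3)
```

Sweeping fire temperature and the lumped heat-transfer parameter in such a model is what produces the time-to-rupture tables the abstract refers to, once the limit condition is replaced by a pressure/stress criterion.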
Griffin, Joshua D.; Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson; Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane; Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane; Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L.; Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J.; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Memmi, F.; Falconi, L.; Cappelli, M.; Palomba, M.; Santoro, E.; Bove, R.; Sepielli, M.
2012-07-01
Improved awareness of system status is an essential requirement for achieving safety in every kind of plant. In the case of Nuclear Power Plants (NPPs) in particular, enhancing the Human Machine Interface (HMI) is crucial in order to optimize the monitoring and analysis of NPP operational states. First, where older plants are concerned, upgrading the entire console instrumentation is desirable in order to replace analog visualization with a fully digital system. In this work, we present a novel instrument able to interface with the control console of a nuclear reactor, developed using CompactRIO, a National Instruments embedded architecture, and its dedicated programming language. This real-time industrial controller, composed of a real-time processor and FPGA modules, has been programmed to visualize the parameters coming from the reactor and to store and reproduce significant conditions at any time. This choice was made on the basis of the FPGA's properties: high reliability, determinism, true parallelism, and re-configurability, achieved through a simple programming method based on the LabVIEW real-time environment. The system architecture exploits the FPGA's capabilities for custom timing and triggering, hardware-based analysis and co-processing, and high-performance control algorithms. Data stored during the supervisory phase can be reproduced by loading data from a measurement file, re-enacting operations or conditions of interest. The system is designed to be used in three different modes, namely Log File Mode, Supervisory Mode, and Simulation Mode. The proposed system can be considered a first step toward a more complete Decision Support System (DSS): indeed, this work is part of a wider project that includes the elaboration of intelligent agents and meta-theory approaches. A synoptic display has been created to monitor every action on the plant through an intuitive view. Furthermore, another important
Zhang, Yunpeng; Li, En; Guo, Gaofeng; Xu, Jiadi; Wang, Chao
2014-09-15
A pair of spot-focusing horn lens antennas is the key component in a free-space measurement system. The electromagnetic constitutive parameters of a planar sample are determined using transmitted and reflected electromagnetic beams. These parameters are obtained from the scattering parameters measured by a microwave network analyzer, the thickness of the sample, and the wavelength of the beam focused on the sample. Free-space techniques introduced in most papers take the focused wavelength to be the free-space wavelength. In fact, however, the incident wave projected by a lens onto the sample approximates a Gaussian beam; thus, there is an elongation of the wavelength in the focused beam, and this elongation should be taken into consideration in dielectric and magnetic measurements. In this paper, the elongation of the wavelength is analyzed and measured. Measurement results show that the focused wavelength in the vicinity of the focus is elongated by 1%–5% relative to the free-space wavelength. The elongation's influence on the measured permittivity and permeability has been investigated. Numerical analyses show that the elongation of the focused wavelength increases the measured value of the permeability relative to the traditionally measured value; the permittivity, however, is affected by several parameters and may increase or decrease relative to the traditionally measured value.
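The physical origin of the elongation is the Gouy phase of a Gaussian beam: near the focus it reduces the effective axial wavevector, so the on-axis wavelength exceeds the free-space wavelength. Using the standard Gaussian-beam result (not the paper's measured data), the effective wavelength at the focus is λ_eff = λ / (1 − λ/(2π z_R)) with Rayleigh range z_R = π w₀²/λ. The beam parameters below are illustrative assumptions.

```python
import math

def focused_wavelength(lam, w0):
    """On-axis effective wavelength at the focus of a Gaussian beam.

    Follows from the Gouy phase: k_eff = k - 1/z_R at z = 0,
    with Rayleigh range z_R = pi * w0**2 / lam.
    """
    z_r = math.pi * w0**2 / lam
    return lam / (1.0 - lam / (2.0 * math.pi * z_r))

lam = 0.03    # free-space wavelength at 10 GHz, m
w0 = 0.045    # hypothetical focused beam-waist radius, m

elongation = focused_wavelength(lam, w0) / lam - 1.0  # fractional elongation
```

For these illustrative parameters the fractional elongation falls in the few-percent range, consistent in magnitude with the 1%–5% the measurements report.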
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Bhagavatula, Abhijit; Huffman, Gerald; Shah, Naresh; Honaker, Rick
2014-01-01
The thermal evolution profiles and kinetic parameters for the pyrolysis of two Montana coals (DECS-38 subbituminous coal and DECS-25 lignite coal), one biomass sample (corn stover), and their blends (10%, 20%, and 30% by weight of corn stover) have been investigated at a heating rate of 5°C/min in an inert nitrogen atmosphere, using thermogravimetric analysis. The thermal evolution profiles of the subbituminous coal and lignite coal display only one major peak over a wide temperature distribution, ~152–814°C and ~175–818°C, respectively, whereas the thermal decomposition profile for corn stover falls in a much narrower band than that of the coals, ~226–608°C. The nonlinearity in the evolution of volatile matter with increasing percentage of corn stover in the blends verifies the possibility of synergistic behavior in the blends with subbituminous coal, where deviations from the predicted yield ranging between 2% and 7% were observed, whereas very small deviations (1%–3%) from the predicted yield were observed in blends with lignite, indicating no significant interactions with corn stover. In addition, a single first-order reaction model using the Coats-Redfern approximation was utilized to predict the kinetic parameters of the pyrolysis reaction. The kinetic analysis indicated that each thermal evolution profile may be represented as a single first-order reaction. Three temperature regimes were identified for each of the coals, while corn stover and the blends were analyzed using two and four temperature regimes, respectively.
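The Coats-Redfern treatment mentioned above linearizes the integrated first-order rate law: plotting ln[−ln(1−α)/T²] against 1/T gives a line whose slope is −E_a/R (neglecting the small 2RT/E_a correction). A self-check sketch with synthetic conversions generated from a hypothetical activation energy (the values of E_a, A, and the temperature grid are illustrative, not the paper's data):

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def coats_redfern_ea(temps_K, alphas):
    """Slope of ln[-ln(1-a)/T^2] vs 1/T gives -Ea/R for a first-order model."""
    xs = [1.0 / T for T in temps_K]
    ys = [math.log(-math.log(1.0 - a) / T**2) for T, a in zip(temps_K, alphas)]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx)**2 for x in xs))
    return -slope * R  # activation energy, J/mol

# Synthetic conversions from a known (hypothetical) Ea at 5 C/min
Ea_true, A, beta = 80e3, 1e3, 5.0 / 60.0     # J/mol, 1/s, K/s
temps = [500.0 + 10.0 * i for i in range(20)]  # K
alphas = [1.0 - math.exp(-(A * R / (beta * Ea_true)) * T**2
                         * math.exp(-Ea_true / (R * T))) for T in temps]
Ea_fit = coats_redfern_ea(temps, alphas)       # recovers ~80 kJ/mol
```

Because the synthetic conversions are generated from the same simplified Coats-Redfern form, the fit recovers E_a essentially exactly; real thermogravimetric data would scatter about the line within each temperature regime.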
McFarquhar, Greg
2015-12-28
We proposed to analyze in-situ cloud data collected during ARM/ASR field campaigns to create databases of cloud microphysical properties and their uncertainties, as needed for the development of improved cloud parameterizations for models and remote-sensing retrievals and for the evaluation of model simulations and retrievals. In particular, we proposed to analyze data collected over the Southern Great Plains (SGP) during the Mid-latitude Continental Convective Clouds Experiment (MC3E), the Storm Peak Laboratory Cloud Property Validation Experiment (STORMVEX), the Small Particles in Cirrus (SPARTICUS) experiment, and the Routine AAF Clouds with Low Optical Water Depths (CLOWD) Optical Radiative Observations (RACORO) field campaign; over the North Slope of Alaska during the Indirect and Semi-Direct Aerosol Campaign (ISDAC) and the Mixed-Phase Arctic Cloud Experiment (M-PACE); and over the Tropical Western Pacific (TWP) during the Tropical Warm Pool International Cloud Experiment (TWP-ICE), to meet the following three objectives: derive statistical databases of single ice particle properties (aspect ratio AR, dominant habit, mass, projected area) and distributions of ice crystals (size distributions SDs, mass-dimension m-D and area-dimension A-D relations, mass-weighted fall speeds, single-scattering properties, total concentrations N, ice mass contents IWC), complete with uncertainty estimates; assess the processes by which aerosols modulate cloud properties in arctic stratus and mid-latitude cumuli, and quantify aerosols' influence in the context of varying meteorological and surface conditions; and determine how ice cloud microphysical, single-scattering, and fall-out properties, and the contributions of small ice crystals to such properties, vary with location, environment, surface, meteorological, and aerosol conditions, and develop parameterizations of such effects. In this report we describe the accomplishments that we made on all three research objectives.
Parameter Estimation with Partial Information. (Conference) ...
Office of Scientific and Technical Information (OSTI)
Resource Relation: Conference: Workshop on Numerical Methods for Uncertainty Quantification, Hausdorff Research ...
Thermal Hydraulic Simulations, Error Estimation and Parameter
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Studies in Drekar::CFD Thomas M. Smith, John N. Shadid, Roger P. Pawlowski, Eric ...
Reservoir Temperature Estimator
Energy Science and Technology Software Center (OSTI)
2014-12-08
The Reservoir Temperature Estimator (RTEst) is a program that can be used to estimate deep geothermal reservoir temperature and chemical parameters such as CO2 fugacity based on the water chemistry of shallower, cooler reservoir fluids. This code uses the plugin features provided in The Geochemist's Workbench (Bethke and Yeakel, 2011) and interfaces with the model-independent parameter estimation code PEST (Doherty, 2005) to optimize the estimated parameters by minimizing the weighted sum of squares of a set of saturation indices from a user-provided mineral assemblage.
Levelized Power Generation Cost Codes
Energy Science and Technology Software Center (OSTI)
1996-04-30
LPGC is a set of nine microcomputer programs for estimating power generation costs for large steam-electric power plants. These programs permit rapid evaluation using various sets of economic and technical ground rules. The levelized power generation costs calculated may be used to compare the relative economics of nuclear and coal-fired plants based on life-cycle costs. Cost calculations include capital investment cost, operation and maintenance cost, fuel cycle cost, decommissioning cost, and total levelized power generation cost. These programs can be used for quick analyses of power generation costs using alternative economic parameters, such as interest rate, escalation rate, inflation rate, plant lead times, capacity factor, fuel prices, etc. The two major types of electric generating plants considered are pressurized water reactor (PWR) and pulverized coal-fired plants. Data are also provided for the Large Scale Prototype Breeder (LSPB) type liquid metal reactor.
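The levelized-cost arithmetic such programs perform can be sketched in its simplest fixed-charge-rate form. All input values below are hypothetical round numbers, not LPGC defaults or ground rules:

```python
def levelized_cost(capital, fcr, om_annual, fuel_per_mwh,
                   capacity_mw, capacity_factor):
    """Levelized power generation cost in $/MWh (fixed-charge-rate form).

    capital      -- overnight capital investment, $
    fcr          -- fixed charge rate annualizing the capital, 1/yr
    om_annual    -- annual operation and maintenance cost, $/yr
    fuel_per_mwh -- fuel-cycle cost per unit of generation, $/MWh
    """
    mwh_per_year = capacity_mw * capacity_factor * 8760.0  # 8760 h/yr
    return (capital * fcr + om_annual) / mwh_per_year + fuel_per_mwh

# Hypothetical 1000-MW plant at an 85% capacity factor
lcoe = levelized_cost(capital=4.0e9, fcr=0.10, om_annual=9.0e7,
                      fuel_per_mwh=7.0, capacity_mw=1000.0,
                      capacity_factor=0.85)
```

Comparing a nuclear and a coal plant then amounts to calling this with each plant's capital, O&M, and fuel inputs under a common set of economic parameters, which is essentially what the LPGC programs automate (with escalation, inflation, and decommissioning handled explicitly rather than folded into a single fixed charge rate).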
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28
Based on the project's scope, the purpose of the estimate, and the availability of estimating resources, the estimator can choose one or a combination of techniques when estimating an activity or project. Estimating methods, estimating indirect and direct costs, and other estimating considerations are discussed in this chapter.
A simple method to estimate interwell autocorrelation
Pizarro, J.O.S.; Lake, L.W.
1997-08-01
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
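Two of the three semivariogram models named above can be written down directly (the truncated fractal model is omitted; the exponential form below uses the common "practical range" convention, which may differ from the paper's parameterization):

```python
import math

def spherical(h, rng, sill):
    """Spherical semivariogram: rises to the sill exactly at the range."""
    if h >= rng:
        return sill
    r = h / rng
    return sill * (1.5 * r - 0.5 * r**3)

def exponential(h, rng, sill):
    """Exponential semivariogram, 'practical range' convention
    (reaches about 95% of the sill at h = rng)."""
    return sill * (1.0 - math.exp(-3.0 * h / rng))
```

Charts like those in the paper are built by evaluating such models on synthetic fields and relating the areal-to-vertical variance ratio to the interwell range.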
Paul, Sabyasachi; Sahoo, G. S.; Tripathy, S. P. E-mail: tripathy@barc.gov.in; Sunil, C.; Bandyopadhyay, T.; Sharma, S. C.; Ramjilal,; Ninawe, N. G.; Gupta, A. K.
2014-06-15
A systematic study of the measurement of neutron spectra emitted from the interaction of protons of various energies with a thick beryllium target has been carried out. The measurements were carried out in the forward direction (at 0° with respect to the direction of the protons) using CR-39 detectors. The doses were estimated using the in-house image-analyzing program autoTRAK-n, which works on the principle of luminosity variation in and around the track boundaries. A total of six proton energies, from 4 MeV to 24 MeV in steps of 4 MeV, were chosen for the study of the neutron yields and the estimation of doses. Nearly 92% of the recoil tracks developed after chemical etching were circular, but the size distributions of the recoil tracks were not found to depend linearly on the projectile energy. The neutron yield and dose values were found to increase linearly with increasing projectile energy. The response of the CR-39 detector was also investigated at different beam currents at two different proton energies. A linear increase of neutron yield with beam current was observed.
Energy Science and Technology Software Center (OSTI)
2014-10-09
This software code is designed to track generator state variables in real time using the Ensemble Kalman Filter method with the aid of PMU measurements. This code can also be used to calibrate dynamic model parameters by augmenting parameters in the state variable vector.
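The measurement-update step of such an Ensemble Kalman Filter can be sketched for a scalar state. This is a generic illustration of the EnKF with perturbed observations, not the actual code described above; the ensemble values and noise variance are made up:

```python
import random

def enkf_update(ensemble, z, h, r_var, rng=random.Random(0)):
    """One EnKF measurement update (perturbed-observation form, scalar case).

    ensemble -- list of scalar state samples (the forecast ensemble)
    z        -- observed measurement (e.g. one PMU sample)
    h        -- observation function mapping state -> predicted measurement
    r_var    -- measurement-noise variance
    """
    n = len(ensemble)
    hx = [h(x) for x in ensemble]
    xm, hm = sum(ensemble) / n, sum(hx) / n
    p_xh = sum((x - xm) * (y - hm) for x, y in zip(ensemble, hx)) / (n - 1)
    p_hh = sum((y - hm)**2 for y in hx) / (n - 1)
    k = p_xh / (p_hh + r_var)      # Kalman gain from ensemble statistics
    # each member assimilates its own noisy copy of the observation
    return [x + k * (z + rng.gauss(0.0, r_var**0.5) - y)
            for x, y in zip(ensemble, hx)]

# Toy check: the updated ensemble tightens around the measurement
ens = [0.0, 0.5, 1.0, 1.5, 2.0]
updated = enkf_update(ens, z=2.0, h=lambda x: x, r_var=0.01)
```

Augmenting uncertain model parameters into the state vector, as the abstract describes for calibration, means each ensemble member carries (state, parameter) pairs and the same update pulls both toward values consistent with the PMU data.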
Electricity Generating Portfolios with Small Modular Reactors...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Electricity Generating Portfolios with Small Modular Reactors Electricity Generating Portfolios with Small Modular Reactors This paper provides a method for estimating the ...
Estimate Radiological Dose for Animals
Energy Science and Technology Software Center (OSTI)
1997-12-18
Estimates radiological dose for animals in an ecological environment using open-literature values for parameters such as body weight, plant and soil ingestion rates, radiological half-life, absorbed energy, biological half-life, gamma energy per decay, soil-to-plant transfer factor, etc.
Parametric Hazard Function Estimation.
Energy Science and Technology Software Center (OSTI)
1999-09-13
Version 00 Phaze performs statistical inference calculations on a hazard function (also called a failure rate or intensity function) based on reported failure times of components that are repaired and restored to service. Three parametric models are allowed: the exponential, linear, and Weibull hazard models. The inference includes estimation (maximum likelihood estimators and confidence regions) of the parameters and of the hazard function itself, testing of hypotheses such as increasing failure rate, and checking of the model assumptions.
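For the Weibull (power-law) hazard model named above, the maximum likelihood estimators have a closed form when n failures of a repairable component are observed on (0, T]. The sketch below mirrors the standard power-law-process MLE, not necessarily Phaze's exact implementation, and the failure times are made up:

```python
import math

def power_law_mle(times, T):
    """Closed-form MLEs for a power-law repair process observed on (0, T].

    Intensity u(t) = a * b * t**(b - 1); `times` are the reported failure
    times of the repaired-and-restored component. Returns (a_hat, b_hat).
    """
    n = len(times)
    b = n / sum(math.log(T / t) for t in times)  # shape (Weibull hazard)
    a = n / T**b                                 # scale: expected count a*T^b = n
    return a, b

# Illustrative failure times on a 16-hour observation window
a_hat, b_hat = power_law_mle([1.0, 2.0, 4.0, 8.0], T=16.0)
```

A fitted b_hat below 1 indicates a decreasing failure rate (reliability growth), above 1 an increasing one; Phaze's hypothesis tests formalize exactly that distinction.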
Robust and intelligent bearing estimation
Claassen, John P.
2000-01-01
A method of bearing estimation comprising quadrature digital filtering of event observations, constructing a plurality of observation matrices each centered on a time-frequency interval, determining for each observation matrix a parameter such as degree of polarization, linearity of particle motion, degree of dyadicy, or signal-to-noise ratio, choosing observation matrices most likely to produce a set of best available bearing estimates, and estimating a bearing for each observation matrix of the chosen set.
Waist parameter determination from measured spot sizes
Hajek, M.
1989-12-15
A novel, simple method for determining the waist parameters of a Gaussian laser beam, based on a geometric treatment of the problem, is introduced. The method does not require any least-squares process, ordering of experimental data, or estimates of the waist parameters.
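One geometric route consistent with the abstract's claim (no least squares needed) uses the fact that the squared spot size w(z)² of a Gaussian beam is exactly a parabola in z, so three exact measurements determine it, and with it the waist size and location. This is our reconstruction of such a treatment rather than the paper's specific procedure, and the wavelength and sample points are illustrative:

```python
import math

def waist_from_three_spots(pts, lam):
    """Recover waist radius w0 and waist location z0 of a Gaussian beam from
    three exact (z, w) spot-size pairs, using
    w(z)^2 = w0^2 + (lam/(pi*w0))^2 * (z - z0)^2   (a parabola in z)."""
    (z1, s1), (z2, s2), (z3, s3) = [(z, w * w) for z, w in pts]
    # Lagrange interpolation gives the parabola coefficients A (z^2) and B (z)
    A = (s1 / ((z1 - z2) * (z1 - z3))
         + s2 / ((z2 - z1) * (z2 - z3))
         + s3 / ((z3 - z1) * (z3 - z2)))
    B = -(s1 * (z2 + z3) / ((z1 - z2) * (z1 - z3))
          + s2 * (z1 + z3) / ((z2 - z1) * (z2 - z3))
          + s3 * (z1 + z2) / ((z3 - z1) * (z3 - z2)))
    z0 = -B / (2.0 * A)                    # waist location (parabola vertex)
    w0 = lam / (math.pi * math.sqrt(A))    # since A = (lam/(pi*w0))^2
    return w0, z0

# Illustrative HeNe beam: w0 = 1 mm at z0 = 0, "measured" at z = 1, 2, 3 m
lam, w0_true = 633e-9, 1e-3
z_r = math.pi * w0_true**2 / lam
pts = [(z, w0_true * math.sqrt(1.0 + (z / z_r)**2)) for z in (1.0, 2.0, 3.0)]
w0_est, z0_est = waist_from_three_spots(pts, lam)
```

Because the parabola is solved exactly, no ordering of the measurement positions or initial estimate of the waist is required, matching the properties the abstract claims.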
Identification of synchronous machine parameters
Shaban, A.O.
1985-01-01
The synchronous machine is an essential component of a power system, and accurate determination of its parameters is an important task in securing adequate modes of operation through certain control strategies. An estimation technique based on the Powell algorithm was evaluated for the identification of these parameters on the basis of small-signal input-output data. A fifth-order Park-domain flux linkage model of a salient-pole machine was used for the identification of the parameters. The stator terminal voltages transformed into the Park domain, the field voltage, and the rotor frequency were used as input signals to the model. The input signals to the actual machine are the stator terminal voltages and the field voltage. The Park-domain stator terminal current and the field current were used as output signals. Due to the lack of access to real data, digital simulation of an actual machine was used to establish the machine's time-domain responses to small changes in the input signals. These responses were compared with those obtained from the model with the unknown parameters and utilized in the identification process. The sensitivity of a least-squares loss function with respect to each parameter was tested. The proposed parameter identification method was evaluated with data from two different machines. Careful observation of the results indicates that convergence can only be secured if nonsimultaneous perturbation of the direct- and quadrature-axis components of the terminal voltages is applied.
Shafer, John M
2012-11-05
The three major components of this research were: 1. Application of minimally invasive, cost effective hydrogeophysical techniques (surface and borehole), to generate fine scale (~1m or less) 3D estimates of subsurface heterogeneity. Heterogeneity is defined as spatial variability in hydraulic conductivity and/or hydrolithologic zones. 2. Integration of the fine scale characterization of hydrogeologic parameters with the hydrogeologic facies to upscale the finer scale assessment of heterogeneity to field scale. 3. Determination of the relationship between dual-domain parameters and practical characterization data.
Magnetic nanoparticle temperature estimation
Weaver, John B.; Rauwerdink, Adam M.; Hansen, Eric W.
2009-05-15
The authors present a method of measuring the temperature of magnetic nanoparticles that can be adapted to provide in vivo temperature maps. Many of the minimally invasive therapies that promise to reduce health care costs and improve patient outcomes heat tissue to very specific temperatures to be effective. Measurements are required because physiological cooling, primarily blood flow, makes the temperature difficult to predict a priori. The ratio of the fifth and third harmonics of the magnetization generated by magnetic nanoparticles in a sinusoidal field is used to generate a calibration curve and to subsequently estimate the temperature. The calibration curve is obtained by varying the amplitude of the sinusoidal field. The temperature can then be estimated from any subsequent measurement of the ratio. The accuracy was 0.3 K between 20 and 50 °C using the current apparatus and half-second measurements. The method is independent of nanoparticle concentration and nanoparticle size distribution.
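The inversion step can be sketched by modeling the nanoparticle response with a Langevin function: the fifth/third harmonic ratio then depends only on ξ₀ = μH/(k_B T), so a measured ratio maps back to temperature by a monotonic root search. The paper builds its calibration empirically by sweeping the field amplitude; here a Langevin model stands in for that curve, and the value μH/k_B = 300 K is purely illustrative:

```python
import math

def langevin(x):
    """Langevin function L(x) = coth(x) - 1/x (series limit near zero)."""
    if abs(x) < 1e-6:
        return x / 3.0
    return 1.0 / math.tanh(x) - 1.0 / x

def harmonic_ratio(xi0, n=2048):
    """|5th|/|3rd| harmonic of the Langevin response to a sinusoidal field."""
    a3 = a5 = 0.0
    for i in range(n):
        th = 2.0 * math.pi * i / n
        m = langevin(xi0 * math.sin(th))
        a3 += m * math.sin(3.0 * th)
        a5 += m * math.sin(5.0 * th)
    return abs(a5) / abs(a3)

def temperature_from_ratio(r, mu_h_over_kb, lo=250.0, hi=400.0):
    """Bisection on T: the harmonic ratio falls monotonically as T rises."""
    for _ in range(50):
        mid = 0.5 * (lo + hi)
        if harmonic_ratio(mu_h_over_kb / mid) > r:
            lo = mid   # ratio too high -> trial T too low
        else:
            hi = mid
    return 0.5 * (lo + hi)

# Round-trip check at a hypothetical 310 K
mu_h_over_kb = 300.0                                # K, illustrative mu*H/k_B
r_meas = harmonic_ratio(mu_h_over_kb / 310.0)
T_est = temperature_from_ratio(r_meas, mu_h_over_kb)
```

Because the ratio depends only on ξ₀, it cancels nanoparticle concentration, which is consistent with the concentration independence the abstract reports.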
The Efficacy of Galaxy Shape Parameters in Photometric Redshift...
Office of Scientific and Technical Information (OSTI)
Journal Article: The Efficacy of Galaxy Shape Parameters in Photometric Redshift Estimation: A Neural Network Approach Citation Details In-Document Search Title: The Efficacy of ...
Parameter Estimation with Missing Data. (Conference) | SciTech...
Office of Scientific and Technical Information (OSTI)
Publication Date: 2013-04-01 OSTI Identifier: 1078649 Report Number(s): SAND2013-2963C 447662 DOE Contract Number: AC04-94AL85000 Resource Type: Conference Resource Relation: ...
Parameter Estimation with Missing Data. (Conference) | SciTech...
Office of Scientific and Technical Information (OSTI)
Publication Date: 2013-04-01 OSTI Identifier: 1073843 Report Number(s): SAND2013-2963C DOE Contract Number: AC04-94AL85000 Resource Type: Conference Resource Relation: Conference: ...
Parameter Estimation and Model Validation of Nonlinear Dynamical Networks
Abarbanel, Henry; Gill, Philip
2015-03-31
In the performance period of this work under a DOE contract, the co-PIs, Philip Gill and Henry Abarbanel, developed new methods of statistical data assimilation for problems of DOE interest, including geophysical and biological problems. This included numerical optimization algorithms for variational principles and new parallel-processing Monte Carlo routines for performing the path integrals of statistical data assimilation. These results have been summarized in the monograph "Predicting the Future: Completing Models of Observed Complex Systems" by Henry Abarbanel, published by Springer-Verlag in June 2013. Additional results and details have appeared in the peer-reviewed literature.
Estimating the spatio-temporal distribution of geochemical parameters...
Office of Scientific and Technical Information (OSTI)
biostimulation using spectral induced polarization data and hierarchical Bayesian models ...
Estimating the spatio-temporal distribution of geochemical parameters...
Office of Scientific and Technical Information (OSTI)
Authors: Chen, J. ; Hubbard, S. S. ; Williams, K. H. ; Orozco, A. Flores ; Kemna, A. Publication Date: 2012-06-20 OSTI Identifier: 1212432 Report Number(s): LBNL-5663E Journal ID: ...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
APS Storage Ring Parameters M. Borland, G. Decker, L. Emery, W. Guo, K. Harkay, V. Sajaev, C.-Y. Yao, Advanced Photon Source, September 8, 2010...
Heath, G.; O'Donoughue, P.; Whitaker, M.
2012-12-01
This research provides a systematic review and harmonization of the life cycle assessment (LCA) literature of electricity generated from conventionally produced natural gas. We focus on estimates of greenhouse gases (GHGs) emitted in the life cycle of electricity generation from conventionally produced natural gas in combustion turbines (NGCT) and combined-cycle (NGCC) systems. A process we term "harmonization" was employed to align several common system performance parameters and assumptions to better allow for cross-study comparisons, with the goal of clarifying central tendency and reducing variability in estimates of life cycle GHG emissions. This presentation summarizes preliminary results.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Storage Ring Parameters Print General Parameters Parameter Value Beam particle electron Beam energy 1.9 GeV (1.0-1.9 GeV possible) Injection energy 1.9 GeV (1.0-1.9 GeV possible)...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Photon Source Parameters Photon Source Parameters Print Summary Graph of Brightness Curves for All Insertion Devices Insertion Device and Bend Magnet Parameters Bend Magnet Superbend Magnet U30 Undulator U50 Undulator U80 Undulator U100 Undulator W114 Wiggler The ALS has six elliptically polarizing undulators, two in straight 4, two in straight 11, and one each in straights 6 and 7. All are arranged with chicanes so that two such devices can be installed to feed two independent beamlines. They
Generation Planning (pbl/generation)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Generation Hydro Power Wind Power Monthly GSP BPA White Book Dry Year Tools Firstgov Generation Planning Thumbnail image of BPA White Book BPA White Book (1998-2014) Draft Dry...
Measuring neutrino oscillation parameters using $\
Backhouse, Christopher James; /Oxford U.
2011-02-01
MINOS is a long-baseline neutrino oscillation experiment. It consists of two large steel-scintillator tracking calorimeters. The near detector is situated at Fermilab, close to the production point of the NuMI muon-neutrino beam. The far detector is 735 km away, 716 m underground in the Soudan mine in northern Minnesota. The primary purpose of the MINOS experiment is to make precise measurements of the "atmospheric" neutrino oscillation parameters (Δm²_atm and sin² 2θ_atm). The oscillation signal consists of an energy-dependent deficit of ν_μ interactions in the far detector. The near detector is used to characterize the properties of the beam before oscillations develop. The two-detector design allows many potential sources of systematic error in the far detector to be mitigated by the near detector observations. This thesis describes the details of the ν_μ-disappearance analysis and presents a new technique to estimate the hadronic energy of neutrino interactions. This estimator achieves a significant improvement in the energy resolution of the neutrino spectrum and in the sensitivity of the neutrino oscillation fit. The systematic uncertainty on the hadronic energy scale was re-evaluated and found to be comparable to that of the energy estimator previously in use. The best-fit oscillation parameters of the ν_μ-disappearance analysis incorporating this new estimator were: Δm² = 2.32 (+0.12/−0.08) × 10⁻³ eV², sin² 2θ > 0.90 (90% C.L.). A similar analysis, using data from a period of running where the NuMI beam was operated in a configuration producing a predominantly ν̄_μ beam, yielded somewhat different best-fit parameters: Δm̄² = (3.36 (+0.46/−0.40) (stat.) ± 0.06 (syst.)) × 10⁻³ eV², sin² 2θ̄ = 0.86 (+0.11/−0.12) (stat.) ± 0.01 (syst.). The tension between these results is
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Untapped Value of Backup Generation While new guidelines and regulations such as IEEE (Institute of Electrical and Electronics Engineers) 1547 have come a long way in addressing interconnection standards for distributed generation, utilities have largely overlooked the untapped potential of these resources. Under certain conditions, these units (primarily backup generators) represent a significant source of power that can deliver utility services at lower costs than traditional centralized
Vector generator scan converter
Moore, J.M.; Leighton, J.F.
1988-02-05
High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold. 7 figs.
Vector generator scan converter
Moore, James M.; Leighton, James F.
1990-01-01
High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O (input/output) channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Storage Ring Parameters — General Parameters:
Beam particle: electron
Beam energy: 1.9 GeV (1.0–1.9 GeV possible)
Injection energy: 1.9 GeV (1.0–1.9 GeV possible)
Beam current (all operation is in top-off with ΔI/I ≤ 0.3%): 500 mA in multibunch mode; 2 x 17.5 mA in two-bunch mode
Filling pattern (multibunch mode): 256–320 bunches; possibility of one or two 5- to 6-mA "camshaft" bunches in filling gaps
Bunch spacing (multibunch mode): 2 ns
Bunch spacing (two-bunch mode): 328
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
There are small sector-to-sector variations in the parameters for a given source angle because of the distortion in the lattice functions of the superbends and the...
Fischer, Noah A.
2012-08-14
The reactor core input generator allows for MCNP input files to be tailored to design specifications and generated in seconds. Full reactor models can now easily be created by specifying a small set of parameters and generating an MCNP input for a full reactor core. Axial zoning of the core will allow for density variation in the fuel and moderator, with pin-by-pin fidelity, so that BWR cores can more accurately be modeled. LWR core work in progress: (1) Reflectivity option for specifying 1/4, 1/2, or full core simulation; (2) Axial zoning for moderator densities that vary with height; (3) Generating multiple types of assemblies for different fuel enrichments; and (4) Parameters for specifying BWR box walls. Fuel pin work in progress: (1) Radial and azimuthal zoning for generating further unique materials in fuel rods; (2) Options for specifying different types of fuel for MOX or multiple burn assemblies; (3) Additional options for replacing fuel rods with burnable poison rods; and (4) Control rod/blade modeling.
Roeschke, C.W.
1957-09-24
An improvement in pulse generators is described by which there are produced pulses of a duration from about 1 to 10 microseconds with a truly flat top and extremely rapid rise and fall. The pulses are produced by triggering from a separate input or by modifying the current to operate as a free-running pulse generator. In its broad aspect, the disclosed pulse generator comprises a first tube with an anode capacitor and grid circuit which controls the firing; a second tube series connected in the cathode circuit of the first tube such that discharge of the first tube places a voltage across it as the leading edge of the desired pulse; and an integrator circuit from the plate across the grid of the second tube to control the discharge time of the second tube, determining the pulse length.
Kwan, T.J.T.; Snell, C.M.
1987-03-31
A microwave generator is provided for generating microwaves substantially from virtual cathode oscillation. Electrons are emitted from a cathode and accelerated to an anode which is spaced apart from the cathode. The anode has an annular slit therethrough effective to form the virtual cathode. The anode is at least one range thickness relative to electrons reflecting from the virtual cathode. A magnet is provided to produce an optimum magnetic field having the field strength effective to form an annular beam from the emitted electrons in substantial alignment with the annular anode slit. The magnetic field, however, does permit the reflected electrons to axially diverge from the annular beam. The reflected electrons are absorbed by the anode while returning to the real cathode, such that substantially no reflexing electrons occur. The resulting microwaves are produced with a single dominant mode and are substantially monochromatic relative to conventional virtual cathode microwave generators. 6 figs.
Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis
Wang, Feng; Huisman, Jaco; Stevels, Ab; Baldé, Cornelis Peter
2013-11-15
Highlights: • A multivariate Input–Output Analysis method for e-waste estimates is proposed. • Applying multivariate analysis to consolidate data can enhance e-waste estimates. • We examine the influence of model selection and data quality on e-waste estimates. • Datasets of all e-waste related variables in a Dutch case study have been provided. • Accurate modeling of time-variant lifespan distributions is critical for estimates. - Abstract: Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, which encompasses a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to a lack of high-quality data on market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input–Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from the available data pool. This can consequently increase the reliability of e-waste estimates compared to the approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time ever, complete datasets of all three variables for estimating all types of e-waste have been obtained. The result of this study also demonstrates significant disparity between various estimation models, arising from the use of data under different conditions. It shows the importance of applying a multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e
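The "three pillars" linkage above can be made concrete with the standard sales-lifespan convolution that underlies most IOA e-waste models: units generated as waste in year t are past sales weighted by the probability that a unit sold in year s reaches end-of-life in year t. The sketch below is generic, not the paper's model; the Weibull shape and scale values and the sales figures are illustrative assumptions.

```python
import math

def weibull_pdf_discrete(age, shape, scale):
    """P(unit fails at integer age), as a Weibull CDF difference."""
    cdf = lambda a: 1.0 - math.exp(-((a / scale) ** shape)) if a > 0 else 0.0
    return cdf(age + 1) - cdf(age)

def ewaste_generated(sales, year_index, shape=2.0, scale=8.0):
    """E-waste arising in the given year: past sales convolved with lifespan."""
    return sum(
        sales[s] * weibull_pdf_discrete(year_index - s, shape, scale)
        for s in range(year_index + 1)
    )

sales = [100, 120, 140, 160, 180, 200]   # illustrative annual unit sales
waste_y5 = ewaste_generated(sales, 5)    # waste arising in year index 5
```

The multivariate step in the paper then uses the same relationship in reverse, consolidating whichever of sales, stock, or lifespan data is most reliable.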
Resonance parameter analysis with SAMMY
Larson, N.M.; Perey, F.G.
1988-01-01
The multilevel R-matrix computer code SAMMY has evolved over the past decade to become an important analysis tool for neutron data. SAMMY uses the Reich-Moore approximation to the multilevel R-matrix and includes an optional logarithmic parameterization of the external R-function. Doppler broadening is simulated either by numerical integration using the Gaussian approximation to the free gas model or by a more rigorous solution of the partial differential equation equivalent to the exact free gas model. Resolution broadening of cross sections and derivatives also has new options that more accurately represent the experimental situation. SAMMY treats constant normalization and some types of backgrounds directly and treats other normalizations and/or backgrounds with the introduction of user-generated partial derivatives. The code uses Bayes' method as an efficient alternative to least squares for fitting experimental data. SAMMY allows virtually any parameter to be varied and outputs values, uncertainties, and covariance matrix for all varied parameters. Versions of SAMMY exist for VAX, FPS, and IBM computers.
Pettibone, J.S.; Wheeler, P.C.
1981-06-08
An improved magnetocumulative generator is described that is useful for producing magnetic fields of very high energy content over large spatial volumes. The polar directed pleated magnetocumulative generator has a housing providing a housing chamber with an electrically conducting surface. The chamber forms a coaxial system having a small radius portion and a large radius portion. When a magnetic field is injected into the chamber, from an external source, most of the magnetic flux associated therewith positions itself in the small radius portion. The propagation of an explosive detonation through high-explosive layers disposed adjacent to the housing causes a phased closure of the chamber which sweeps most of the magnetic flux into the large radius portion of the coaxial system. The energy content of the magnetic field is greatly increased by flux stretching as well as by flux compression. The energy enhanced magnetic field is utilized within the housing chamber itself.
Foster, J.S. Jr.
1958-03-11
This patent describes apparatus for producing an electrically neutral ionized gas discharge, termed a plasma, substantially free from contamination with neutral gas particles. The plasma generator of the present invention comprises a plasma chamber wherein gas introduced into the chamber is ionized by a radiofrequency source. A magnetic field is used to focus the plasma in line with an exit. This magnetic field cooperates with a differential pressure created across the exit to draw a uniform and uncontaminated plasma from the plasma chamber.
Pryslak, N.E.
1974-02-26
A thermoelectric generator having a rigid coupling or "stack" between the heat source and the hot strap joining the thermoelements is described. The stack includes a member of an insulating material, such as ceramic, for electrically isolating the thermoelements from the heat source, and a pair of members of a ductile material, such as gold, one on each side of the insulating member, to absorb thermal differential expansion stresses in the stack. (Official Gazette)
Donchev, Todor I.; Petrov, Ivan G.
2011-05-31
Described herein is an apparatus and a method for producing atom clusters based on a gas discharge within a hollow cathode. The hollow cathode includes one or more walls. The one or more walls define a sputtering chamber within the hollow cathode and include a material to be sputtered. A hollow anode is positioned at an end of the sputtering chamber, and atom clusters are formed when a gas discharge is generated between the hollow anode and the hollow cathode.
Srinivasan-Rao, Triveni
2002-01-01
A photon generator includes an electron gun for emitting an electron beam, a laser for emitting a laser beam, and an interaction ring wherein the laser beam repetitively collides with the electron beam for emitting a high energy photon beam therefrom in the exemplary form of x-rays. The interaction ring is a closed loop, sized and configured for circulating the electron beam with a period substantially equal to the period of the laser beam pulses for effecting repetitive collisions.
Foster, Jr., John S.; Wilson, James R.; McDonald, Jr., Charles A.
1983-01-01
1. In an electrical energy generator, the combination comprising a first elongated annular electrical current conductor having at least one bare surface extending longitudinally and facing radially inwards therein, a second elongated annular electrical current conductor disposed coaxially within said first conductor and having an outer bare surface area extending longitudinally and facing said bare surface of said first conductor, the contiguous coaxial areas of said first and second conductors defining an inductive element, means for applying an electrical current to at least one of said conductors for generating a magnetic field encompassing said inductive element, and explosive charge means disposed concentrically with respect to said conductors including at least the area of said inductive element, said explosive charge means including means disposed to initiate an explosive wave front in said explosive advancing longitudinally along said inductive element, said wave front being effective to progressively deform at least one of said conductors to bring said bare surfaces thereof into electrically conductive contact to progressively reduce the inductance of the inductive element defined by said conductors and transferring explosive energy to said magnetic field effective to generate an electrical potential between undeformed portions of said conductors ahead of said explosive wave front.
Whitaker, M.; Heath, G. A.; O'Donoughue, P.; Vorum, M.
2012-04-01
This systematic review and harmonization of life cycle assessments (LCAs) of utility-scale coal-fired electricity generation systems focuses on reducing variability and clarifying central tendencies in estimates of life cycle greenhouse gas (GHG) emissions. Screening 270 references for quality LCA methods, transparency, and completeness yielded 53 that reported 164 estimates of life cycle GHG emissions. These estimates for subcritical pulverized, integrated gasification combined cycle, fluidized bed, and supercritical pulverized coal combustion technologies vary from 675 to 1,689 grams CO{sub 2}-equivalent per kilowatt-hour (g CO{sub 2}-eq/kWh) (interquartile range [IQR]= 890-1,130 g CO{sub 2}-eq/kWh; median = 1,001) leading to confusion over reasonable estimates of life cycle GHG emissions from coal-fired electricity generation. By adjusting published estimates to common gross system boundaries and consistent values for key operational input parameters (most importantly, combustion carbon dioxide emission factor [CEF]), the meta-analytical process called harmonization clarifies the existing literature in ways useful for decision makers and analysts by significantly reducing the variability of estimates ({approx}53% in IQR magnitude) while maintaining a nearly constant central tendency ({approx}2.2% in median). Life cycle GHG emissions of a specific power plant depend on many factors and can differ from the generic estimates generated by the harmonization approach, but the tightness of distribution of harmonized estimates across several key coal combustion technologies implies, for some purposes, first-order estimates of life cycle GHG emissions could be based on knowledge of the technology type, coal mine emissions, thermal efficiency, and CEF alone without requiring full LCAs. Areas where new research is necessary to ensure accuracy are also discussed.
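The harmonization step described above can be sketched in a few lines: each study's combustion term is replaced by one computed from a common carbon emission factor (CEF) and thermal efficiency, while the upstream (non-combustion) portion of the published estimate is left unchanged. This is an assumed simplified form, not the paper's full procedure, and the CEF of 94.6 g CO2/MJ used in the example is an illustrative coal value, not a figure from the study.

```python
MJ_PER_KWH = 3.6

def combustion_gco2_per_kwh(cef_g_per_mj_fuel, efficiency):
    """Stack emissions per kWh from fuel CEF and net thermal efficiency."""
    return cef_g_per_mj_fuel * MJ_PER_KWH / efficiency

def harmonize(published_total, published_combustion, common_cef, common_eff):
    """Swap a study's combustion term for one on common assumptions."""
    upstream = published_total - published_combustion
    return upstream + combustion_gco2_per_kwh(common_cef, common_eff)
```

Applying the same (CEF, efficiency) pair to every study is what collapses the interquartile range while leaving the median nearly unchanged.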
Cost and Performance Assumptions for Modeling Electricity Generation Technologies
Tidball, Rick; Bluestein, Joel; Rodriguez, Nick; Knoke, Stu
2010-11-01
The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
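The cost and performance parameters listed above feed a standard levelized-cost calculation; a common textbook form is sketched below. The parameter values in the example are placeholders, not figures from any of the six data sets.

```python
def lcoe(capex_per_kw, fixed_om, var_om, heat_rate, fuel_price,
         capacity_factor, discount_rate, lifetime_yr):
    """Levelized cost of energy in $/MWh.

    capex_per_kw: $/kW installed; fixed_om: $/kW-yr; var_om: $/MWh;
    heat_rate: Btu/kWh; fuel_price: $/MMBtu.
    """
    # Capital recovery factor annualizes the overnight capital cost.
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr /
           ((1 + discount_rate) ** lifetime_yr - 1))
    mwh_per_kw = 8.76 * capacity_factor          # MWh generated per kW-year
    capital = capex_per_kw * crf / mwh_per_kw    # $/MWh
    fixed = fixed_om / mwh_per_kw                # $/kW-yr -> $/MWh
    fuel = heat_rate * fuel_price / 1000.0       # Btu/kWh * $/MMBtu -> $/MWh
    return capital + fixed + var_om + fuel
```

With this form, differences in assumed capacity factor and discount rate alone can move LCOE substantially, which is why the report evaluates LCOE under each data set's own assumptions.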
Parameter 4 | Open Energy Information
Name: parameter 4 Place: Dortmund, North Rhine-Westphalia, Germany Zip: 44328 Sector: Buildings Product: Start-up consultants with a focus...
Santos, Mario G.; Cooray, Asantha
2006-10-15
We study the prospects for extracting cosmological and astrophysical parameters from the low radio frequency 21-cm background due to the spin-flip transition of neutral hydrogen during and prior to the reionization of the Universe. We make use of the angular power spectrum of 21-cm anisotropies, which exists due to inhomogeneities in the neutral hydrogen density field, the gas temperature field, the gas velocity field, and the spatial distribution of the Lyman-{alpha} intensity field associated with first luminous sources that emit UV photons. We extract parameters that describe both the underlying mass power spectrum and the global cosmology, as well as a set of simplified astrophysical parameters that connect fluctuations in the dark matter to those that govern 21-cm fluctuations. We also marginalize over a model for the foregrounds at low radio frequencies. In this general description, we find large degeneracies between cosmological parameters and the astrophysical parameters, though such degeneracies are reduced when strong assumptions are made with respect to the spin temperature relative to the cosmic microwave background (CMB) temperature or when complicated sources of anisotropy in the brightness temperature are ignored. Some of the degeneracies between cosmological and astrophysical parameters are broken when 21-cm anisotropy measurements are combined with information from the CMB, such as the temperature and the polarization measurements with Planck. While the overall improvement on the cosmological parameter estimates is not significant when measurements from first-generation interferometers are combined with Planck, such a combination can measure astrophysical parameters such as the ionization fraction in several redshift bins with reasonable accuracy.
Monthly Generation System Peak (pbl/generation)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Monthly Generation System Peak (GSP). This site is no longer maintained.
Parameters Covariance in Neutron Time of Flight Analysis Explicit Formulae
Odyniec, M.; Blair, J.
2014-12-01
We present here a method that estimates the parameter variances in a parametric model for neutron time of flight (NToF). The analytical formulae for parameter variances, obtained independently of the calculation of parameter values from measured data, express the variances in terms of the choice, settings, and placement of the detector and the oscilloscope. Consequently, the method can serve as a tool in planning a measurement setup.
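The abstract's idea of evaluating parameter variances from the setup alone has a generic counterpart (not the paper's explicit formulae): for a least-squares fit of a model f(t, theta) to a waveform with noise variance sigma^2, Cov(theta) is approximately sigma^2 (J^T J)^{-1}, where J is the model Jacobian at the fit point. The Gaussian pulse below is a stand-in model, not the NToF model of the paper.

```python
import numpy as np

def parameter_covariance(model, theta, t, sigma, eps=1e-6):
    """Approximate Cov(theta) via a forward-difference Jacobian."""
    theta = np.asarray(theta, dtype=float)
    f0 = model(t, theta)
    J = np.empty((t.size, theta.size))
    for j in range(theta.size):
        step = np.zeros_like(theta)
        step[j] = eps
        J[:, j] = (model(t, theta + step) - f0) / eps
    return sigma ** 2 * np.linalg.inv(J.T @ J)

# Stand-in pulse model: amplitude, center, width.
gauss = lambda t, th: th[0] * np.exp(-((t - th[1]) / th[2]) ** 2)
cov = parameter_covariance(gauss, [1.0, 5.0, 1.0], np.linspace(0, 10, 200), 0.01)
```

Because J depends only on the model, the sample times, and the noise level, the variances can indeed be computed before any data are taken.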
Pettibone, Joseph S.; Wheeler, Paul C.
1983-01-01
An improved magnetocumulative generator is described that is useful for producing magnetic fields of very high energy content over large spatial volumes. The polar directed pleated magnetocumulative generator has a housing (100, 101, 102, 103, 104, 105) providing a housing chamber (106) with an electrically conducting surface. The chamber (106) forms a coaxial system having a small radius portion and a large radius portion. When a magnetic field is injected into the chamber (106), from an external source, most of the magnetic flux associated therewith positions itself in the small radius portion. The propagation of an explosive detonation through high-explosive layers (107, 108) disposed adjacent to the housing causes a phased closure of the chamber (106) which sweeps most of the magnetic flux into the large radius portion of the coaxial system. The energy content of the magnetic field is greatly increased by flux stretching as well as by flux compression. The energy enhanced magnetic field is utilized within the housing chamber itself.
Estimation of stochastic volatility with long memory for index prices of FTSE Bursa Malaysia KLCI
Chen, Kho Chia; Kane, Ibrahim Lawal; Rahman, Haliza Abd; Bahar, Arifah; Ting, Chee-Ming
2015-02-03
In recent years, modeling of long memory properties or fractionally integrated processes in stochastic volatility has been applied to financial time series. A time series with structural breaks can generate a strong persistence in the autocorrelation function, which is an observed behaviour of a long memory process. This paper considers structural breaks in the data in order to identify true long memory time series data. Unlike the usual short memory models for log volatility, the fractional Ornstein-Uhlenbeck process is neither a Markovian process nor can it be easily transformed into one. This makes likelihood evaluation and parameter estimation for the long memory stochastic volatility (LMSV) model challenging tasks. The drift and volatility parameters of the fractional Ornstein-Uhlenbeck model are estimated separately using the least squares estimator (LSE) and the quadratic generalized variations (QGV) method, respectively. Finally, the empirical distribution of unobserved volatility is estimated using particle filtering with the sequential importance sampling-resampling (SIR) method. The mean square error (MSE) between the estimated and empirical volatility indicates that the model performs fairly well on the index prices of FTSE Bursa Malaysia KLCI.
Method for estimating processability of a hydrocarbon-containing feedstock for hydroprocessing
Schabron, John F; Rovani, Jr., Joseph F
2014-01-14
Disclosed herein is a method involving the steps of (a) precipitating an amount of asphaltenes from a liquid sample of a first hydrocarbon-containing feedstock having solvated asphaltenes therein with one or more first solvents in a column; (b) determining one or more solubility characteristics of the precipitated asphaltenes; (c) analyzing the one or more solubility characteristics of the precipitated asphaltenes; and (d) correlating a measurement of feedstock reactivity for the first hydrocarbon-containing feedstock sample with a mathematical parameter derived from the results of analyzing the one or more solubility characteristics of the precipitated asphaltenes. Determined parameters and processabilities for a plurality of feedstocks can be used to generate a mathematical relationship between parameter and processability; this relationship can be used to estimate the processability for hydroprocessing for a feedstock of unknown processability.
Wilcox, J.M.; Baker, W.R.
1963-09-17
This invention is a magnetohydrodynamic device for generating a highly ionized ion-electron plasma at a region remote from electrodes and structural members, thus avoiding contamination of the plasma. The apparatus utilizes a closed, gas-filled, cylindrical housing in which an axially directed magnetic field is provided. At one end of the housing, a short cylindrical electrode is disposed coaxially around a short axial inner electrode. A radial electrical discharge is caused to occur between the inner and outer electrodes, creating a rotating hydromagnetic ionization wave that propagates along the magnetic field lines toward the opposite end of the housing. A shorting switch connected between the electrodes prevents the wave from striking the opposite end of the housing. (AEC)
Wang, Zhong L; Fan, Fengru; Lin, Long; Zhu, Guang; Pan, Caofeng; Zhou, Yusheng
2015-11-03
A generator includes a thin first contact charging layer and a thin second contact charging layer. The thin first contact charging layer includes a first material that has a first rating on a triboelectric series. The thin first contact charging layer has a first side with a first conductive electrode applied thereto and an opposite second side. The thin second contact charging layer includes a second material that has a second rating on a triboelectric series that is more negative than the first rating. The thin second contact charging layer has a first side with a second conductive electrode applied thereto and an opposite second side. The thin second contact charging layer is disposed adjacent to the first contact charging layer so that the second side of the second contact charging layer is in contact with the second side of the first contact charging layer.
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28
This chapter focuses on the components (or elements) of the cost estimation package and their documentation.
How EIA Estimates Natural Gas Production
Reports and Publications (EIA)
2004-01-01
The Energy Information Administration (EIA) publishes estimates monthly and annually of the production of natural gas in the United States. The estimates are based on data EIA collects from gas producing states and data collected by the U. S. Minerals Management Service (MMS) in the Department of Interior. The states and MMS collect this information from producers of natural gas for various reasons, most often for revenue purposes. Because the information is not sufficiently complete or timely for inclusion in EIA's Natural Gas Monthly (NGM), EIA has developed estimation methodologies to generate monthly production estimates that are described in this document.
Robust estimation procedure in panel data model
Shariff, Nurul Sima Mohamad; Hamzah, Nor Aishah
2014-06-19
Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
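The motivation for robust estimation can be illustrated with a much simpler device than the authors' panel estimator: a Huber-type M-estimate of a location parameter via iteratively reweighted averaging, which bounds the influence of outlying observations where the ordinary mean does not. The tuning constant and data below are illustrative.

```python
def huber_mean(x, k=1.345, iters=50):
    """Huber M-estimate of location by iteratively reweighted averaging."""
    mu = sum(x) / len(x)                 # start from the ordinary mean
    for _ in range(iters):
        # Observations within k of the current estimate get full weight;
        # outliers are downweighted in proportion to their distance.
        w = [1.0 if abs(v - mu) <= k else k / abs(v - mu) for v in x]
        mu = sum(wi * vi for wi, vi in zip(w, x)) / sum(w)
    return mu

data = [9.8, 10.1, 10.0, 9.9, 10.2, 55.0]   # one gross outlier
```

The ordinary mean of this sample is 17.5, pulled far from the bulk of the data; the Huber estimate stays near 10, which is the behavior robust panel procedures generalize to regression with cross-sectional dependence.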
Determining Best Estimates and Uncertainties in Cloud Microphysical Parameters from ARM Field Data: Implications for Models, Retrieval Schemes and Aerosol-Cloud-Radiation Interactions (Technical Report)
Office of Scientific and Technical Information (OSTI)
Freeze, G.A.; Larson, K.W.; Davies, P.B.
1995-10-01
A long-term assessment of the Waste Isolation Pilot Plant (WIPP) repository performance must consider the impact of gas generation resulting from the corrosion and microbial degradation of the emplaced waste. A multiphase fluid flow code, TOUGH2/EOS8, was adapted to model the processes of gas generation, disposal room creep closure, and multiphase (brine and gas) fluid flow, as well as the coupling between the three processes. System response to gas generation was simulated with a single, isolated disposal room surrounded by homogeneous halite containing two anhydrite interbeds, one above and one below the room. The interbeds were assumed to have flow connections to the room through high-permeability, excavation-induced fractures. System behavior was evaluated by tracking four performance measures: (1) peak room pressure; (2) maximum brine volume in the room; (3) total mass of gas expelled from the room; and (4) the maximum gas migration distance in an interbed. Baseline simulations used current best estimates of system parameters, selected through an evaluation of available data, to predict system response to gas generation under best-estimate conditions. Sensitivity simulations quantified the effects of parameter uncertainty by evaluating the change in the performance measures in response to parameter variations. In the sensitivity simulations, a single parameter value was varied to its minimum and maximum values, representative of the extreme expected values, with all other parameters held at best-estimate values. Sensitivity simulations identified the following parameters as important to gas expulsion and migration away from a disposal room: interbed porosity; interbed permeability; gas-generation potential; halite permeability; and interbed threshold pressure. Simulations also showed that the inclusion of interbed fracturing and a disturbed rock zone had a significant impact on system performance.
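The one-at-a-time sensitivity scheme described above (vary a single parameter to its minimum and maximum with all others held at best-estimate values) can be sketched generically; the `peak_pressure` model and the parameter names below are placeholders, not the WIPP TOUGH2/EOS8 model.

```python
def one_at_a_time(model, best, ranges):
    """ranges: {name: (min, max)}; returns {name: [out_at_min, out_at_max]}."""
    results = {}
    for name, (lo, hi) in ranges.items():
        for bound, slot in ((lo, 0), (hi, 1)):
            params = dict(best)          # all other parameters at best estimate
            params[name] = bound         # one parameter at its extreme
            results.setdefault(name, [None, None])[slot] = model(params)
    return results

# Stand-in performance measure, purely illustrative.
peak_pressure = lambda p: p["gas_rate"] * p["porosity"] ** -0.5
out = one_at_a_time(peak_pressure,
                    {"gas_rate": 1.0, "porosity": 0.05},
                    {"porosity": (0.01, 0.10)})
```

The spread between the two outputs for each parameter is the sensitivity measure the study uses to rank parameters such as interbed porosity and permeability.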
Simplified Approach for Estimating Impacts of Electricity Generation...
Approach: Provides a first-order assessment of the external costs of operating fossil fuel-based, nuclear, hydro, or other power plants, taking into account emissions,...
Check Estimates and Independent Costs
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28
Check estimates and independent cost estimates (ICEs) are tools that can be used to validate a cost estimate. Estimate validation entails an objective review of the estimate to ensure that estimate criteria and requirements have been met and a well-documented, defensible estimate has been developed. This chapter describes check estimates and their procedures and various types of independent cost estimates.
Subsurface Geotechnical Parameters Report
D. Rigby; M. Mrugala; G. Shideler; T. Davidsavor; J. Leem; D. Buesch; Y. Sun; D. Potyondy; M. Christianson
2003-12-17
The Yucca Mountain Project is entering the license application (LA) stage in its mission to develop the nation's first underground nuclear waste repository. After a number of years of gathering data related to site characterization, including activities ranging from laboratory and site investigations, to numerical modeling of processes associated with conditions to be encountered in the future repository, the Project is realigning its activities towards the License Application preparation. At the current stage, the major efforts are directed at translating the results of scientific investigations into sets of data needed to support the design, and to fulfill the licensing requirements and the repository design activities. This document addresses the program need to answer specific technical questions so that an assessment can be made about the suitability and adequacy of data to license and construct a repository at the Yucca Mountain Site. In July 2002, the U.S. Nuclear Regulatory Commission (NRC) published an Integrated Issue Resolution Status Report (NRC 2002). Included in this report were the Repository Design and Thermal-Mechanical Effects (RDTME) Key Technical Issues (KTI). Geotechnical agreements were formulated to resolve a number of KTI subissues; in particular, RDTME KTIs 3.04, 3.05, 3.07, and 3.19 relate to the physical, thermal and mechanical properties of the host rock (NRC 2002, pp. 2.1.1-28, 2.1.7-10 to 2.1.7-21, A-17, A-18, and A-20). The purpose of the Subsurface Geotechnical Parameters Report is to present an accounting of current geotechnical information that will help resolve KTI subissues and some other project needs. The report analyzes and summarizes available qualified geotechnical data. It evaluates the sufficiency and quality of existing data to support engineering design and performance assessment. In addition, the corroborative data obtained from tests performed by a number of research organizations is presented to reinforce
Heat engine generator control system
Rajashekara, K.; Gorti, B.V.; McMullen, S.R.; Raibert, R.J.
1998-05-12
An electrical power generation system includes a heat engine having an output member operatively coupled to the rotor of a dynamoelectric machine. System output power is controlled by varying an electrical parameter of the dynamoelectric machine. A power request signal is related to an engine speed and the electrical parameter is varied in accordance with a speed control loop. Initially, the sense of change in the electrical parameter in response to a change in the power request signal is opposite that required to effectuate a steady state output power consistent with the power request signal. Thereafter, the electrical parameter is varied to converge the output member speed to the speed known to be associated with the desired electrical output power. 8 figs.
Heat engine generator control system
Rajashekara, Kaushik (Carmel, IN); Gorti, Bhanuprasad Venkata (Towson, MD); McMullen, Steven Robert (Anderson, IN); Raibert, Robert Joseph (Fishers, IN)
1998-01-01
An electrical power generation system includes a heat engine having an output member operatively coupled to the rotor of a dynamoelectric machine. System output power is controlled by varying an electrical parameter of the dynamoelectric machine. A power request signal is related to an engine speed and the electrical parameter is varied in accordance with a speed control loop. Initially, the sense of change in the electrical parameter in response to a change in the power request signal is opposite that required to effectuate a steady state output power consistent with the power request signal. Thereafter, the electrical parameter is varied to converge the output member speed to the speed known to be associated with the desired electrical output power.
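The speed control loop described in the two records above can be illustrated with a generic sketch (not the patent's circuit): the power request is mapped to the engine speed known to yield that power, and a PI controller then adjusts the electrical parameter of the dynamoelectric machine until the measured speed converges to that setpoint. All names and gains below are illustrative assumptions.

```python
def speed_to_power_setpoint(power_request, speed_map):
    """Look up the engine speed known to be associated with the requested power."""
    return speed_map(power_request)

def pi_step(speed_error, integral, kp=0.5, ki=0.1, dt=0.01):
    """One PI controller step.

    Returns (delta applied to the electrical parameter, updated integral).
    """
    integral += speed_error * dt
    return kp * speed_error + ki * integral, integral
```

This structure reproduces the behavior the abstract describes: the initial change in the electrical parameter can oppose the steady-state direction, but the loop drives the speed, and hence the output power, to the requested value.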
State Energy Production Estimates
U.S. Energy Information Administration (EIA) Indexed Site
Energy Production Estimates, 1960 Through 2012. 2012 Summary Tables: Table P1, Energy Production Estimates in Physical Units, 2012.
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28
The chapter describes the estimates required on government-managed projects for both general construction and environmental management.
Preliminary relative permeability estimates of methanehydrate-bearing sand
Seol, Yongkoo; Kneafsey, Timothy J.; Tomutsa, Liviu; Moridis,George J.
2006-05-08
The relative permeability to fluids in hydrate-bearing sediments is an important parameter for predicting natural gas production from gas hydrate reservoirs. We estimated the relative permeability parameters (van Genuchten alpha and m) in a hydrate-bearing sand by means of inverse modeling, which involved matching water saturation predictions with observations from a controlled waterflood experiment. We used x-ray computed tomography (CT) scanning to determine both the porosity and the hydrate and aqueous phase saturation distributions in the samples. X-ray CT images showed that hydrate and aqueous phase saturations are non-uniform, and that water flow focuses in regions of lower hydrate saturation. The relative permeability parameters were estimated at two locations in each sample. Differences between the estimated parameter sets at the two locations were attributed to heterogeneity in the hydrate saturation. Better estimates of the relative permeability parameters require further refinement of the experimental design, and better description of heterogeneity in the numerical inversions.
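The (alpha, m) parameters estimated in the study belong to the van Genuchten model family; alpha parameterizes the capillary pressure curve, while m appears in the standard van Genuchten-Mualem relative permeability expression shown below. This is the textbook form, with illustrative values rather than the paper's estimates.

```python
def van_genuchten_krw(se, m):
    """Relative permeability to water as a function of effective saturation.

    se: effective water saturation in (0, 1]; m: van Genuchten parameter.
    """
    return se ** 0.5 * (1.0 - (1.0 - se ** (1.0 / m)) ** m) ** 2
```

The strong nonlinearity of this curve at low saturation is consistent with the CT observation that water flow focuses in regions of lower hydrate saturation, where the effective water saturation is higher.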
Property:PotentialOffshoreWindGeneration | Open Energy Information
Property Type Quantity Description The estimated potential energy generation from Offshore Wind for a particular place. Use this type to express a quantity of energy. The...
Property:PotentialEGSGeothermalGeneration | Open Energy Information
Property Type Quantity Description The estimated potential energy generation from EGS Geothermal for a particular place. Use this type to express a quantity of energy. The...
Electricity generator cost data from survey form EIA-860
Gasoline and Diesel Fuel Update (EIA)
Nuclear & Uranium Uranium fuel, nuclear reactors, generation, spent fuel. Total Energy Comprehensive data ... capacity estimates that use direct current (DC) ratings of PV panels. ...
Guidelines for Estimating Unmetered Industrial Water Use
Boyd, Brian K.
2010-08-01
The document provides a methodology to estimate unmetered industrial water use for evaporative cooling systems, steam generating boiler systems, batch process applications, and wash systems. For each category standard mathematical relationships are summarized and provided in a single resource to assist Federal agencies in developing an initial estimate of their industrial water use. The approach incorporates industry norms, general rules of thumb, and industry survey information to provide methodologies for each section.
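For the evaporative cooling category, the kind of rule-of-thumb water balance the document describes can be sketched as below. The evaporation factor (roughly 0.1% of recirculating flow per degree Fahrenheit of cooling range) is a common engineering rule of thumb, not a value taken from this document:

```python
def cooling_tower_makeup(recirc_gpm, range_degF, cycles):
    """Rule-of-thumb water balance for an evaporative cooling system.

    Evaporation is taken as ~0.1% of recirculating flow per degree F of
    cooling range (a common rule of thumb); blowdown follows from the
    cycles of concentration C as B = E/(C-1), and makeup = E + B.
    """
    evap = 0.001 * recirc_gpm * range_degF   # gpm lost to evaporation
    blowdown = evap / (cycles - 1.0)         # gpm drained to limit scaling
    makeup = evap + blowdown                 # gpm of fresh water required
    return evap, blowdown, makeup

# 1000 gpm loop with a 10 F range at 5 cycles of concentration
e, b, m = cooling_tower_makeup(1000.0, 10.0, 5.0)
```

Drift losses are neglected here; an actual estimate following the document would add terms for each loss mechanism it identifies.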
Calibrated Hydrothermal Parameters, Barrow, Alaska, 2013
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Atchley, Adam; Painter, Scott; Harp, Dylan; Coon, Ethan; Wilson, Cathy; Liljedahl, Anna; Romanovsky, Vladimir
2015-01-29
A model-observation-experiment process (ModEx) is used to generate three 1D models of characteristic micro-topographical land-formations, which are capable of simulating the present active thaw layer (ALT) from current climate conditions. Each column was used in a coupled calibration to identify moss, peat, and mineral soil hydrothermal properties to be used in up-scaled simulations. Observational soil temperature data from a tundra site located near Barrow, AK (Area C) are used to calibrate thermal properties of moss, peat, and sandy loam soil to be used in the multiphysics Advanced Terrestrial Simulator (ATS) models. Simulation results are a list of calibrated hydrothermal parameters for moss, peat, and mineral soil.
Analysis of Modeling Parameters on Threaded Screws.
Vigil, Miquela S.; Brake, Matthew Robert; Vangoethem, Douglas
2015-06-01
Assembled mechanical systems often contain a large number of bolted connections. These bolted connections (joints) are integral aspects of the load path for structural dynamics and, consequently, are paramount for calculating a structure's stiffness and energy dissipation properties. However, analysts have not found the optimal method to appropriately model these bolted joints. The complexity of the screw geometry causes issues when generating a mesh of the model. This paper explores different approaches to model a screw-substrate connection. Model parameters such as mesh continuity, node alignment, wedge angles, and thread-to-body element size ratios are examined. The results of this study will give analysts a better understanding of the influence of these parameters and will aid in finding the optimal method to model bolted connections.
WIPP Compliance Certification Application calculations parameters. Part 2: Parameter documentation
Howarth, S.M.
1997-11-14
The Waste Isolation Pilot Plant (WIPP) in southeast New Mexico has been studied as a transuranic waste repository for the past 23 years. During this time, an extensive site characterization, design, construction, and experimental program was completed, which provided in depth understanding of the dominant processes that are most likely to influence the containment of radionuclides for 10,000 years. Nearly 1,500 parameters were developed using information gathered from this program and were input to numerical models for WIPP Compliance Certification Application (CCA) Performance Assessment (PA) calculations. The CCA probability models require input parameters that are defined by a statistical distribution. Developing parameters begins with the assignment of an appropriate distribution type, which is dependent on the type, magnitude, and volume of data or information available. Parameter development may require interpretation or statistical analysis of raw data, combining raw data with literature values, scaling laboratory or field data to fit code grid mesh sizes, or other transformations. Documentation of parameter development is designed to answer two questions: What source information was used to develop this parameter? and Why was this particular data set/information used? Therefore, complete documentation requires integrating information from code sponsors, parameter task leaders, performance assessment analysts, and experimental principal investigators. This paper, Part 2 of 2 parts, contains a discussion of the WIPP CCA PA Parameter Tracking System, document traceability and retrievability, and lessons learned from related audits and reviews.
Quality Assurance Project Plan for the Gas Generation Testing Program at the INEL
NONE
1994-10-01
The data quality objectives (DQOs) for the Program are to evaluate compliance with the limits on total gas generation rates, establish the concentrations of hydrogen and methane in the total gas flow, determine the headspace concentration of VOCs in each drum prior to the start of the test, and obtain estimates of the concentrations of several compounds for mass balance purposes. Criteria for the selection of waste containers at the INEL and the parameters that must be characterized prior to and during the tests are described. Collection of gaseous samples from 55-gallon drums of contact-handled transuranic waste for the gas generation testing is discussed. Analytical methods and calibrations are summarized. Administrative quality control measures described in this QAPjP include the generation, review, and approval of project documentation; control and retention of records; measures to ensure that personnel, subcontractors or vendors, and equipment meet the specifications necessary to achieve the required data quality for the project.
Energy Science and Technology Software Center (OSTI)
2015-05-27
ParFit is a flexible and extendable framework and library of classes for fitting force-field parameters to data from high-level ab-initio calculations on the basis of deterministic and stochastic algorithms. Currently, the code is fitting MM3 and Merck force-field parameters but could easily extend to other force-field types.
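The core idea of fitting force-field parameters to ab initio energies can be illustrated with a toy example (this is not ParFit's code, and the harmonic bond-stretch term and its parameter values are hypothetical). Because the model below is linear in its parameters, ordinary least squares suffices:

```python
import numpy as np

# Hypothetical example: fit a harmonic bond-stretch term
#   E(r) = E0 + 0.5 * k * (r - r0)**2   (r0 assumed known)
# to target "ab initio" energies. The model is linear in (E0, k).
r0 = 1.09                                    # reference bond length (assumed)
r = np.linspace(1.0, 1.2, 21)                # sampled geometries
E_ai = 10.0 + 0.5 * 350.0 * (r - r0) ** 2    # synthetic target energies

A = np.column_stack([np.ones_like(r), 0.5 * (r - r0) ** 2])
(E0, k), *_ = np.linalg.lstsq(A, E_ai, rcond=None)
print(E0, k)
```

Real force-field terms (torsions, nonbonded interactions) are nonlinear in their parameters, which is why a framework like ParFit resorts to deterministic and stochastic optimizers rather than a single linear solve.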
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28
Specialty costs are those nonstandard, unusual costs that are not typically estimated. Costs for research and development (R&D) projects involving new technologies, costs associated with future regulations, and specialty equipment costs are examples of specialty costs. This chapter discusses those factors that are significant contributors to project specialty costs and methods of estimating costs for specialty projects.
COSMOLOGICAL PARAMETERS FROM SUPERNOVAE ASSOCIATED WITH GAMMA-RAY BURSTS
Li, Xue; Hjorth, Jens; Wojtak, Radosław, E-mail: lixue@dark-cosmology.dk [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark)
2014-11-20
We report estimates of the cosmological parameters Ω_m and Ω_Λ obtained using supernovae (SNe) associated with gamma-ray bursts (GRBs) at redshifts up to 0.606. Eight high-fidelity GRB-SNe with well-sampled light curves across the peak are used. We correct their peak magnitudes for a luminosity-decline rate relation to turn them into accurate standard candles with dispersion σ = 0.18 mag. We also estimate the peculiar velocity of the low-redshift host galaxy of SN 1998bw using constrained cosmological simulations. In a flat universe, the resulting Hubble diagram leads to best-fit cosmological parameters of (Ω_m, Ω_Λ) = (0.58^{+0.22}_{-0.25}, 0.42^{+0.25}_{-0.22}). This exploratory study suggests that GRB-SNe can potentially be used as standardizable candles to high redshifts to measure distances in the universe and constrain cosmological parameters.
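The Hubble-diagram fit described above compares observed peak magnitudes with the distance modulus predicted by a cosmological model. A minimal sketch of that prediction in a flat universe (numerical quadrature, illustrative default parameters) is:

```python
import numpy as np

C_KM_S = 299792.458  # speed of light, km/s

def distance_modulus(z, om=0.3, h0=70.0, n=2000):
    """Distance modulus mu(z) in a flat LCDM universe (Omega_L = 1 - Omega_m).

    d_L = (1+z) * (c/H0) * Integral_0^z dz'/E(z'),
    with E(z) = sqrt(om*(1+z)^3 + 1 - om).
    """
    zp = np.linspace(0.0, z, n)
    f = 1.0 / np.sqrt(om * (1.0 + zp) ** 3 + (1.0 - om))
    integral = np.sum((f[:-1] + f[1:]) * np.diff(zp)) / 2.0  # trapezoid rule
    dl = (1.0 + z) * (C_KM_S / h0) * integral                # Mpc
    return 5.0 * np.log10(dl) + 25.0                         # mu = 5 log10(d_L/10pc)
```

Fitting then amounts to minimizing the scatter between standardized GRB-SN peak magnitudes and mu(z) over the model parameters.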
Runchal, A.K.; Merkhofer, M.W.; Olmsted, E.; Davis, J.D.
1984-11-01
The present study implemented a probability encoding method to estimate the probability distributions of selected hydrologic variables for the Cohassett basalt flow top and flow interior, and the anisotropy ratio of the interior of the Cohassett basalt flow beneath the Hanford Site. Site-specific data for these hydrologic parameters are currently inadequate for the purpose of preliminary assessment of candidate repository performance. However, this information is required to complete preliminary performance assessment studies. Rockwell chose a probability encoding method developed by SRI International to generate credible and auditable estimates of the probability distributions of effective porosity and hydraulic conductivity anisotropy. The results indicate significant differences of opinion among the experts. This was especially true of the values of the effective porosity of the Cohassett basalt flow interior, for which estimates differ by more than five orders of magnitude. The experts are in greater agreement about the values of effective porosity of the Cohassett basalt flow top; their estimates for this variable are generally within one to two orders of magnitude of each other. For anisotropy ratio, the expert estimates are generally within two or three orders of magnitude of each other. Based on this study, the Rockwell hydrologists estimate the effective porosity of the Cohassett basalt flow top to be generally higher than do the independent experts. For the effective porosity of the Cohassett basalt flow top, the estimates of the Rockwell hydrologists indicate a smaller uncertainty than do the estimates of the independent experts. On the other hand, for the effective porosity and anisotropy ratio of the Cohassett basalt flow interior, the estimates of the Rockwell hydrologists indicate a larger uncertainty than do the estimates of the independent experts.
Sensitivity of health risk estimates to air quality adjustment procedure
Whitfield, R.G.
1997-06-30
This letter is a summary of risk results associated with exposure estimates using two-parameter Weibull and quadratic air quality adjustment procedures (AQAPs). New exposure estimates were developed for children and child-occurrences, six urban areas, and five alternative air quality scenarios. In all cases, the Weibull and quadratic results are compared to previous results, which are based on a proportional AQAP.
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2011-05-09
This Guide provides uniform guidance and best practices that describe the methods and procedures that could be used in all programs and projects at DOE for preparing cost estimates. No cancellations.
U.S. Energy Information Administration (EIA) Indexed Site
74-1988 For Methodology Concerning the Derived Estimates Total Consumption of Offsite-Produced Energy for Heat and Power by Industry Group, 1974-1988 Total Energy *** Electricity...
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2011-05-09
This Guide provides uniform guidance and best practices that describe the methods and procedures that could be used in all programs and projects at DOE for preparing cost estimates.
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28
The objective of this Guide is to improve the quality of cost estimates and further strengthen the DOE program/project management system. The original 25 separate chapters and three appendices have been combined to create a single document.
Independent Cost Estimate (ICE)
Broader source: Energy.gov [DOE]
Independent Cost Estimate (ICE). On August 8-12, the Office of Project Management Oversight and Assessments (PM) will conduct an ICE on the NNSA Albuquerque Complex Project (NACP) at Albuquerque, NM. This estimate will support the Critical Decision (CD) for establishing the performance baseline and approval to start construction (CD-2/3). This project is at CD-1, with a total project cost range of $183M to $251M.
Performance Bounds on Micro-Doppler Estimation and Adaptive Waveform Design Using OFDM Signals
Sen, Satyabrata; Barhen, Jacob; Glover, Charles Wayne
2014-01-01
We analyze the performance of a wideband orthogonal frequency division multiplexing (OFDM) signal in estimating the micro-Doppler frequency of a target having multiple rotating scatterers (e.g., rotor blades of a helicopter, propellers of a submarine). The presence of rotating scatterers introduces Doppler frequency modulation in the received signal by generating sidebands about the transmitted frequencies. This is called the micro-Doppler effect. The use of a frequency-diverse OFDM signal in this context enables us to independently analyze the micro-Doppler characteristics with respect to a set of orthogonal subcarrier frequencies. Therefore, to characterize the accuracy of micro-Doppler frequency estimation, we compute the Cramér-Rao Bound (CRB) on the angular-velocity estimate of the target while considering the scatterer responses as deterministic but unknown nuisance parameters. Additionally, to improve the accuracy of the estimation procedure, we formulate and solve an optimization problem by minimizing the CRB on the angular-velocity estimate with respect to the transmitted OFDM spectral coefficients. We present several numerical examples to demonstrate the CRB variations at different values of the signal-to-noise ratio (SNR) and the number of OFDM subcarriers. The CRB values not only decrease with the increase in the SNR values, but also reduce as we increase the number of subcarriers, implying the significance of frequency-diverse OFDM waveforms. The improvement in estimation accuracy due to the adaptive waveform design is also numerically analyzed. Interestingly, we find that the relative decrease in the CRBs on the angular-velocity estimate is more pronounced for a larger number of OFDM subcarriers.
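The CRB trends the abstract reports (improvement with SNR and with the number of samples) also appear in the classical single-tone case. As an illustrative stand-in for the paper's multi-scatterer bound, the standard Cramér-Rao bound for the frequency of a single real sinusoid in white Gaussian noise (Kay's textbook result) is:

```python
import math

def crb_freq(snr, n):
    """Cramer-Rao bound on the frequency (cycles/sample) of a single real
    sinusoid in white Gaussian noise:

        var(f_hat) >= 12 / ((2*pi)^2 * snr * n * (n^2 - 1))

    snr = A^2 / (2*sigma^2); n = number of samples. This is the classical
    single-tone bound, used here only to illustrate the scaling.
    """
    return 12.0 / ((2.0 * math.pi) ** 2 * snr * n * (n * n - 1.0))
```

The bound falls roughly as 1/n^3, so doubling the observation length improves frequency accuracy by about a factor of eight, mirroring the subcarrier-count behavior discussed above.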
Barnhill, R.E.; Farin, G.; Hamann, B.
1995-12-31
This paper provides a basic overview of NURBS and their application to numerical grid generation. Curve/surface smoothing, accelerated grid generation, and the use of NURBS in a practical grid generation system are discussed.
On a framework for generating PoD curves assisted by numerical simulations
Subair, S. Mohamed; Agrawal, Shweta; Balasubramaniam, Krishnan; Rajagopal, Prabhu; Kumar, Anish; Rao, Purnachandra B.; Tamanna, Jayakumar
2015-03-31
The Probability of Detection (PoD) curve method has emerged as an important tool for the assessment of the performance of NDE techniques, a topic of particular interest to the nuclear industry, where inspection qualification is very important. The conventional experimental means of generating PoD curves, though, can be expensive, requiring large data sets (covering defects and test conditions), and equipment and operator time. Several methods of achieving faster estimates for PoD curves using physics-based modelling have been developed to address this problem. Numerical modelling techniques are also attractive, especially given the ever-increasing computational power available to scientists today. Here we develop procedures for obtaining PoD curves, assisted by numerical simulation and based on Bayesian statistics. Numerical simulations are performed using Finite Element analysis for factors that are assumed to be independent, random, and normally distributed. PoD curves so generated are compared with experiments on austenitic stainless steel (SS) plates with artificially created notches. We examine issues affecting the PoD curve generation process, including codes, standards, distribution of defect parameters, and the choice of the noise threshold. We also study the assumption of normal distribution for signal response parameters and consider strategies for dealing with data that may be too complex or sparse to justify this assumption. These topics are addressed and illustrated through the example case of generation of PoD curves for pulse-echo ultrasonic inspection of vertical surface-breaking cracks in SS plates.
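One common way to turn signal-response data into a PoD curve is the Berens "a-hat versus a" model; the sketch below is a generic version of that approach, not the authors' Bayesian procedure, and all parameter values are illustrative:

```python
import math
import numpy as np

def pod_curve(a, ahat, a_dec):
    """Berens-style "a-hat versus a" PoD model (generic sketch).

    Fit ln(ahat) = b0 + b1*ln(a) + N(0, s^2) by least squares; a defect is
    "detected" when ahat exceeds the decision threshold a_dec, giving
    PoD(a) = Phi((b0 + b1*ln(a) - ln(a_dec)) / s).
    """
    X = np.column_stack([np.ones_like(a), np.log(a)])
    y = np.log(ahat)
    (b0, b1), *_ = np.linalg.lstsq(X, y, rcond=None)
    s = max(float(np.std(y - X @ np.array([b0, b1]))), 1e-9)
    z = (b0 + b1 * np.log(a) - math.log(a_dec)) / s
    return 0.5 * (1.0 + np.vectorize(math.erf)(z / math.sqrt(2.0)))
```

Simulation-assisted approaches like the one described replace some of the experimental (a, ahat) pairs with finite-element predictions, then update the fitted distribution with Bayesian statistics.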
Berryman, J G
2004-10-07
The most commonly discussed measures of microstructure in composite materials are the spatial correlation functions, which in a porous medium measure either the grain-to-grain correlations, or the pore-to-pore correlations in space. Improved bounds based on this information such as the Beran-Molyneux bounds for bulk modulus and the Beran bounds for conductivity are well-known. It is first shown here how to make direct use of this information to provide estimates that always lie between these upper and lower bounds for any microstructure whenever the microgeometry parameters are known. Then comparisons are made between these estimates, the bounds, and two new types of estimates. One new estimate for elastic constants makes use of the Peselnick-Meister bounds (based on Hashin-Shtrikman methods) for random polycrystals of laminates to generate self-consistent values that always lie between the bounds. A second new type of estimate for conductivity assumes that measurements of formation factors (of which there are at least two distinct types in porous media, associated respectively with pores and grains) are available, and computes new bounds based on this information. The paper compares and contrasts these various methods in order to clarify just what microstructural information and how precisely that information needs to be known in order to be useful for estimating material constants in random and heterogeneous media.
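The Hashin-Shtrikman-type bounds referred to above have a compact closed form for an isotropic two-phase composite. The sketch below gives the standard bulk-modulus bounds for well-ordered phases (it is a textbook formula, not the paper's new estimates):

```python
def hs_bulk_bounds(f1, k1, mu1, k2, mu2):
    """Hashin-Shtrikman bounds on the effective bulk modulus of an
    isotropic two-phase composite (volume fractions f1 and f2 = 1 - f1).

    Expanding about phase i gives
        K_i + f_j / ( 1/(K_j - K_i) + 3*f_i/(3*K_i + 4*mu_i) );
    for well-ordered phases the stiffer expansion is the upper bound and
    the softer one the lower bound.
    """
    f2 = 1.0 - f1
    def hs(ki, mui, fi, kj, fj):
        return ki + fj / (1.0 / (kj - ki) + 3.0 * fi / (3.0 * ki + 4.0 * mui))
    b1 = hs(k1, mu1, f1, k2, f2)   # expand about phase 1
    b2 = hs(k2, mu2, f2, k1, f1)   # expand about phase 2
    return min(b1, b2), max(b1, b2)
```

Any admissible estimate for the effective modulus, such as the self-consistent values discussed in the paper, must fall between these two numbers, which in turn lie inside the Reuss and Voigt averages.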
2007 Estimated International Energy Flows
Smith, C A; Belles, R D; Simon, A J
2011-03-10
An energy flow chart or 'atlas' for 136 countries has been constructed from data maintained by the International Energy Agency (IEA) and estimates of energy use patterns for the year 2007. Approximately 490 exajoules (460 quadrillion BTU) of primary energy are used in aggregate by these countries each year. While the basic structure of the energy system is consistent from country to country, patterns of resource use and consumption vary. Energy can be visualized as it flows from resources (i.e. coal, petroleum, natural gas) through transformations such as electricity generation to end uses (i.e. residential, commercial, industrial, transportation). These flow patterns are visualized in this atlas of 136 country-level energy flow charts.
Utility Static Generation Reliability
Energy Science and Technology Software Center (OSTI)
1993-03-05
PICES (Probabilistic Investigation of Capacity and Energy Shortages) was developed for estimating an electric utility's expected frequency and duration of capacity deficiencies on a daily on- and off-peak basis. In addition to the system loss-of-load probability (LOLP) and loss-of-load expectation (LOLE) indices, PICES calculates the expected frequency and duration of system capacity deficiencies and the probability, expectation, and expected frequency and duration of a range of system reserve margin states. Results are aggregated and printed on a weekly, monthly, or annual basis. The program employs hourly load data and either the two-state (on/off) or a more sophisticated three-state (on/partially on/fully off) generating unit representation. Unit maintenance schedules are determined on a weekly, levelized reserve margin basis. In addition to the 8760-hour annual load record, the user provides the following information for each unit: plant capacity, annual maintenance requirement, two- or three-state unit failure and repair rates, and, for three-state models, the partial state capacity deficiency. PICES can also supply default failure and repair rate values, based on the Edison Electric Institute's 1979 Report on Equipment Availability for the Ten-Year Period 1968 Through 1977, for many common plant types. Multi-year analysis can be performed by specifying as input data the annual peak load growth rates and plant addition and retirement schedules for each year in the study.
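The LOLP index PICES reports is conventionally computed from a capacity outage probability table built by convolving two-state units one at a time. A minimal sketch of that calculation (not the PICES code itself) is:

```python
from collections import defaultdict

def lolp(units, peak_load):
    """Loss-of-load probability from a capacity outage probability table.

    units: list of (capacity_mw, forced_outage_rate) two-state units.
    The table is built by convolving units one at a time:
        P_new(x) = (1-q)*P(x) + q*P(x - c)
    """
    table = {0.0: 1.0}                     # P(total capacity on outage = x)
    for cap, q in units:
        new = defaultdict(float)
        for out, p in table.items():
            new[out] += (1.0 - q) * p      # unit available
            new[out + cap] += q * p        # unit forced out
        table = dict(new)
    total_cap = sum(c for c, _ in units)
    # load is lost whenever available capacity falls below the peak load
    return sum(p for out, p in table.items() if total_cap - out < peak_load)
```

With two 100 MW units at a 0.1 forced outage rate serving a 150 MW peak, load is lost whenever either unit is out, so the LOLP is 1 - 0.9^2 = 0.19. A three-state unit, as PICES supports, would simply add a third branch to the convolution.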
GEOTHERMAL POWER GENERATION PLANT
Boyd, Tonya
2013-12-01
Oregon Institute of Technology (OIT) drilled a deep geothermal well on campus (to 5,300 feet deep) which produced a 196°F resource as part of the 2008 OIT Congressionally Directed Project. OIT will construct a geothermal power plant (estimated at 1.75 MWe gross output). The plant would provide 50 to 75 percent of the electricity demand on campus. Technical support for construction and operations will be provided by OIT's Geo-Heat Center. The power plant will be housed adjacent to the existing heat exchange building on the southeast corner of campus near the existing geothermal production wells used for heating campus. Cooling water will be supplied from the nearby cold water wells to a cooling tower, or air cooling may be used, depending upon the type of plant selected. Using the flow obtained from the deep well, not only can energy be generated from the power plant, but the "waste" water will also be used to supplement space heating on campus. A pipeline will be constructed from the well to the heat exchanger building, and then a discharge line will be constructed around the east and north sides of campus for anticipated use of the "waste" water by facilities in an adjacent sustainable energy park. An injection well will need to be drilled to handle the flow, as the campus's existing injection wells are limited in capacity.
Transparency parameters from relativistically expanding outflows
Bégué, D. [University of Roma "Sapienza," I-00185, p.le A. Moro 5, Rome (Italy); Iyyani, S. [Department of Physics, KTH Royal Institute of Technology, AlbaNova University Center, SE-106 91 Stockholm (Sweden)
2014-09-01
In many gamma-ray bursts a distinct blackbody spectral component is present, which is attributed to the emission from the photosphere of a relativistically expanding plasma. The properties of this component (temperature and flux) can be linked to the properties of the outflow and have been presented in the case where there is no sub-photospheric dissipation and the photosphere is in coasting phase. First, we present the derivation of the properties of the outflow for finite winds, including when the photosphere is in the accelerating phase. Second, we study the effect of localized sub-photospheric dissipation on the estimation of the parameters. Finally, we apply our results to GRB 090902B. We find that during the first epoch of this burst the photosphere is most likely to be in the accelerating phase, leading to smaller values of the Lorentz factor than the ones previously estimated. For the second epoch, we find that the photosphere is likely to be in the coasting phase.
Building unbiased estimators from non-gaussian likelihoods with application to shear estimation
Madhavacheril, Mathew S.; McDonald, Patrick; Sehgal, Neelima; Slosar, Anze
2015-01-15
We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors Δg/g for shears up to |g| = 0.2.
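The "expand around a fiducial value and take a Fisher-weighted step" idea can be seen in a toy Gaussian case (this is an illustration of the general construction, not the shear estimator of the paper):

```python
import numpy as np

# For data x_i ~ N(g, sigma^2), the score at a fiducial g0 is
# sum(x - g0)/sigma^2 and the Fisher information is n/sigma^2, so the
# first-order estimator g0 + score/Fisher is exactly the sample mean --
# unbiased regardless of where the fiducial point is placed.
def one_step_estimate(x, g0, sigma):
    score = np.sum(x - g0) / sigma ** 2
    fisher = len(x) / sigma ** 2
    return g0 + score / fisher

rng = np.random.default_rng(1)
x = rng.normal(0.15, 0.3, size=1000)
print(one_step_estimate(x, g0=0.0, sigma=0.3))
```

In the non-Gaussian shear problem the score and Fisher matrix no longer collapse to a sample mean, which is where the higher-order corrections described above become necessary.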
Communications circuit including a linear quadratic estimator
Ferguson, Dennis D.
2015-07-07
A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements of a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
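The uncertainty-based weighting described in the claim is inverse-variance weighting, the scalar core of a linear quadratic estimator. A minimal sketch (illustrative values, not the patented circuit):

```python
def weighted_average(values, sigmas):
    """Inverse-variance weighting: each measurement is weighted by
    1/sigma^2, so noisier measurements contribute less to the combined
    estimate. Returns the estimate and its variance."""
    weights = [1.0 / s ** 2 for s in sigmas]
    est = sum(w * v for w, v in zip(weights, values)) / sum(weights)
    var = 1.0 / sum(weights)          # variance of the combined estimate
    return est, var

# a precise (sigma=1) and a noisy (sigma=2) measurement of the same signal
est, var = weighted_average([10.0, 12.0], [1.0, 2.0])
```

The combined variance is always smaller than the smallest input variance, which is what makes the weighted average useful for driving a data-link control decision.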
Estimating vehicle height using homographic projections
Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter
2013-07-16
Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
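The selection rule in this claim (pick the homography that keeps inter-point distances most nearly constant over time) can be sketched as follows; the function and variable names are illustrative, not taken from the patent:

```python
import numpy as np
from itertools import combinations

def apply_h(H, pts):
    """Map 2D points through a 3x3 homography (homogeneous divide)."""
    q = (H @ np.c_[pts, np.ones(len(pts))].T).T
    return q[:, :2] / q[:, 2:3]

def best_height(frames, homographies):
    """Return the index of the candidate-height homography that minimizes
    the variance of inter-salient-point distances across the frames."""
    scores = []
    for H in homographies:
        dists = []
        for pts in frames:
            w = apply_h(H, pts)
            dists.append([np.linalg.norm(w[i] - w[j])
                          for i, j in combinations(range(len(w)), 2)])
        scores.append(np.var(np.asarray(dists), axis=0).sum())
    return int(np.argmin(scores))
```

For the correct height the salient points are metrically rectified, so their mutual distances stay fixed as the object moves; a wrong-height homography warps them differently at different image positions, inflating the variance.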
Process Equipment Cost Estimation, Final Report
H.P. Loh; Jennifer Lyons; Charles W. White, III
2002-01-01
This report presents generic cost curves for several equipment types generated using ICARUS Process Evaluator. The curves give Purchased Equipment Cost as a function of a capacity variable. This work was performed to assist NETL engineers and scientists in performing rapid, order of magnitude level cost estimates or as an aid in evaluating the reasonableness of cost estimates submitted with proposed systems studies or proposals for new processes. The specific equipment types contained in this report were selected to represent a relatively comprehensive set of conventional chemical process equipment types.
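Between points on such cost curves, engineers commonly interpolate with capacity-exponent scaling (the "six-tenths rule"); this is a generic estimating convention, not a formula taken from this report:

```python
def scaled_cost(base_cost, base_size, new_size, exponent=0.6):
    """Capacity-exponent scaling between cost-curve points:
        C2 = C1 * (S2 / S1) ** n
    The exponent n varies by equipment type; 0.6 is a common default for
    order-of-magnitude estimates."""
    return base_cost * (new_size / base_size) ** exponent

# doubling capacity raises cost by about 2**0.6, roughly 1.52x, not 2x
print(scaled_cost(100000.0, 50.0, 100.0))
```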
Parameters of cosmological models and recent astronomical observations
Sharov, G.S.; Vorontsova, E.G., E-mail: german.sharov@mail.ru, E-mail: elenavor@inbox.ru [Tver state university, 170002, Sadovyj per. 35, Tver (Russian Federation)
2014-10-01
For different gravitational models we consider limitations on their parameters coming from recent observational data for type Ia supernovae, baryon acoustic oscillations, and from 34 data points for the Hubble parameter H(z) depending on redshift. We calculate parameters of 3 models describing accelerated expansion of the universe: the ΛCDM model, the model with generalized Chaplygin gas (GCG) and the multidimensional model of I. Pahwa, D. Choudhury and T.R. Seshadri. In particular, for the ΛCDM model the 1σ estimates of parameters are: H0 = 70.262 ± 0.319 km s^-1 Mpc^-1, Ωm = 0.276 (+0.009/-0.008), ΩΛ = 0.769 ± 0.029, Ωk = -0.045 ± 0.032. The GCG model under the restriction 0 ≤ α is reduced to the ΛCDM model. Predictions of the multidimensional model essentially depend on 3 data points for H(z) with z ≥ 2.3.
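For the ΛCDM case, fitting the H(z) points amounts to minimizing a chi-square between the model Hubble rate and the measured values. A small illustrative sketch (function names are assumptions; the parameter values merely echo those quoted in the abstract):

```python
import math

def hubble_lcdm(z, H0, Om, Ok=0.0):
    """LCDM expansion rate: H(z) = H0*sqrt(Om(1+z)^3 + Ok(1+z)^2 + OL),
    with the dark-energy fraction OL = 1 - Om - Ok (flat when Ok = 0)."""
    OL = 1.0 - Om - Ok
    return H0 * math.sqrt(Om * (1.0 + z) ** 3 + Ok * (1.0 + z) ** 2 + OL)

def chi2_hz(data, H0, Om, Ok=0.0):
    """Chi-square of the model against (z, H_obs, sigma) data points."""
    return sum(((hubble_lcdm(z, H0, Om, Ok) - H) / s) ** 2
               for z, H, s in data)

# At z = 0 the model returns H0 by construction.
print(round(hubble_lcdm(0.0, 70.262, 0.276, -0.045), 3))  # 70.262
```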
Using Utility Load Data to Estimate Demand for Space Cooling and Potential for Shiftable Loads
Denholm, P.; Ong, S.; Booten, C.
2012-05-01
This paper describes a simple method to estimate hourly cooling demand from historical utility load data. It compares total hourly demand to demand on cool days and compares these estimates of total cooling demand to previous regional and national estimates. Load profiles generated from this method may be used to estimate the potential for aggregated demand response or load shifting via cold storage.
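The method described can be sketched as a baseline subtraction: average the load at each hour over cool days (when cooling is assumed off) and attribute any excess on other days to cooling. Names and data layout here are hypothetical, not from the paper:

```python
def cooling_demand(hourly_load, is_cool_day):
    """Estimate hourly cooling demand as each day's excess load over the
    average load at the same hour on cool days (cooling assumed off).
    hourly_load: {day: [24 hourly values]}; is_cool_day: {day: bool}."""
    cool_days = [d for d in hourly_load if is_cool_day[d]]
    baseline = [sum(hourly_load[d][h] for d in cool_days) / len(cool_days)
                for h in range(24)]
    return {d: [max(0.0, hourly_load[d][h] - baseline[h]) for h in range(24)]
            for d in hourly_load}
```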
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Slide presentation: Hydrogen Generator Appliance, Gus Block, Nuvera Fuel Cells
Estimated recharge rates at the Hanford Site
Fayer, M.J.; Walters, T.B.
1995-02-01
The Ground-Water Surveillance Project monitors the distribution of contaminants in ground water at the Hanford Site for the U.S. Department of Energy. A subtask called "Water Budget at Hanford" was initiated in FY 1994. The objective of this subtask was to produce a defensible map of estimated recharge rates across the Hanford Site. Methods that have been used to estimate recharge rates at the Hanford Site include measurements (of drainage, water contents, and tracers) and computer modeling. For the simulations of 12 soil-vegetation combinations, the annual rates varied from 0.05 mm/yr for the Ephrata sandy loam with bunchgrass to 85.2 mm/yr for the same soil without vegetation. Water content data from the Grass Site in the 300 Area indicated that annual rates varied from 3.0 to 143.5 mm/yr during an 8-year period. The annual volume of estimated recharge was calculated to be 8.47 × 10^9 L for the potential future Hanford Site (i.e., the portion of the current Site bounded by Highway 240 and the Columbia River). This total volume is similar to earlier estimates of natural recharge and is 2 to 10 times higher than estimates of runoff and ground-water flow from higher elevations. Not only is the volume of natural recharge significant in comparison to other ground-water inputs, the distribution of estimated recharge is highly skewed to the disturbed sandy soils (i.e., the 200 Areas, where most contaminants originate). The lack of good estimates of the means and variances of the supporting data (i.e., the soil map, the vegetation/land use map, the model parameters) translates into large uncertainties in the recharge estimates. When combined, the significant quantity of estimated recharge, its high sensitivity to disturbance, and the unquantified uncertainty of the data and model parameters suggest that the defensibility of the recharge estimates should be improved.
NREL Raises Rooftop Photovoltaic Technical Potential Estimate - News
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
New analysis nearly doubles previous estimates and shows U.S. building rooftops could generate close to 40 percent of national electricity sales. March 24, 2016. Analysts at the Energy Department's National Renewable Energy Laboratory (NREL) have used detailed light detection and ranging (LiDAR) data for 128 cities nationwide, along with improved data analysis methods and simulation tools, to update its estimate of
Estimating exposure of terrestrial wildlife to contaminants
Sample, B.E.; Suter, G.W. II
1994-09-01
This report describes generalized models for the estimation of contaminant exposure experienced by wildlife on the Oak Ridge Reservation. The primary exposure pathway considered is oral ingestion, e.g. the consumption of contaminated food, water, or soil. Exposure through dermal absorption and inhalation are special cases and are not considered herein. Because wildlife are mobile and generally consume diverse diets, and because environmental contamination is not spatially homogeneous, factors to account for variation in diet, movement, and contaminant distribution have been incorporated into the models. To facilitate the use and application of the models, life history parameters necessary to estimate exposure are summarized for 15 common wildlife species. Finally, to display the application of the models, exposure estimates were calculated for four species using data from a source operable unit on the Oak Ridge Reservation.
Firestone, Richard B; Reijonen, Jani
2014-05-27
An embodiment of a gamma ray generator includes a neutron generator and a moderator. The moderator is coupled to the neutron generator. The moderator includes a neutron capture material. In operation, the neutron generator produces neutrons and the neutron capture material captures at least some of the neutrons to produce gamma rays. An application of the gamma ray generator is as a source of gamma rays for calibration of gamma ray detectors.
REQUESTS FOR RETIREMENT ESTIMATE
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
REQUEST FOR RETIREMENT ANNUITY ESTIMATE. Instructions: Please read and answer the following questions thoroughly, including checking all applicable boxes. Unanswered questions may delay processing. Print and fax your request form to 202.586.6395 or drop the request at GM-169. The request will be assigned to your servicing retirement specialist, who will confirm receipt of your request.
Weekly Coal Production Estimation Methodology
Gasoline and Diesel Fuel Update (EIA)
Weekly Coal Production Estimation Methodology Step 1 (Estimate total amount of weekly U.S. coal production) U.S. coal production for the current week is estimated using a ratio ...
The Smart Grid: An Estimation of the Energy and Carbon Dioxide...
by which the smart grid can reduce energy use and carbon impacts associated with electricity generation and delivery in the United States. The quantitative estimates of...
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M. (Cedar Crest, NM); Ma, Tian J. (Albuquerque, NM)
2012-01-24
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
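One way to sketch the jitter-aware thresholding the patent describes: estimate each pixel's error from sensor noise plus a term scaled by the local spatial gradient (jitter shifts edges, so steep gradients inflate the expected difference), then flag only differences that exceed a multiple of that error. Parameter names and defaults below are illustrative, not from the patent:

```python
def detect_changes(ref, cur, jitter=0.5, noise=2.0, k=3.0):
    """Flag changed interior pixels between a reference and current frame.
    The per-pixel error combines sensor noise with a jitter term scaled by
    the local spatial gradient, so edges need a larger difference to fire."""
    rows, cols = len(ref), len(ref[0])
    changed = [[False] * cols for _ in range(rows)]
    for i in range(1, rows - 1):
        for j in range(1, cols - 1):
            gx = (ref[i][j + 1] - ref[i][j - 1]) / 2.0  # central differences
            gy = (ref[i + 1][j] - ref[i - 1][j]) / 2.0
            err = (noise ** 2 + (jitter * gx) ** 2 + (jitter * gy) ** 2) ** 0.5
            changed[i][j] = abs(cur[i][j] - ref[i][j]) > k * err
    return changed
```

In a flat region the threshold reduces to k times the sensor noise; near an edge it grows with the gradient, which is what suppresses false alarms from camera jitter.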
Leung, Ka-Ngo
2005-06-14
A cylindrical neutron generator is formed with a coaxial RF-driven plasma ion source and target. A deuterium (or deuterium and tritium) plasma is produced by RF excitation in a cylindrical plasma ion generator using an RF antenna. A cylindrical neutron generating target is coaxial with the ion generator, separated by plasma and extraction electrodes which contain many slots. The plasma generator emanates ions radially over 360° and the cylindrical target is thus irradiated by ions over its entire circumference. The plasma generator and target may be as long as desired. The plasma generator may be in the center and the neutron target on the outside, or the plasma generator may be on the outside and the target on the inside. In a nested configuration, several concentric targets and plasma generating regions are nested to increase the neutron flux.
Using Backup Generators: Choosing the Right Backup Generator...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Using Backup Generators: Choosing the Right Backup Generator - Homeowners. Determine the amount ...
State energy data report 1996: Consumption estimates
1999-02-01
The State Energy Data Report (SEDR) provides annual time series estimates of State-level energy consumption by major economic sectors. The estimates are developed in the Combined State Energy Data System (CSEDS), which is maintained and operated by the Energy Information Administration (EIA). The goal in maintaining CSEDS is to create historical time series of energy consumption by State that are defined as consistently as possible over time and across sectors. CSEDS exists for two principal reasons: (1) to provide State energy consumption estimates to Members of Congress, Federal and State agencies, and the general public and (2) to provide the historical series necessary for EIA's energy models. To the degree possible, energy consumption has been assigned to five sectors: residential, commercial, industrial, transportation, and electric utility sectors. Fuels covered are coal, natural gas, petroleum, nuclear electric power, hydroelectric power, biomass, and other, defined as electric power generated from geothermal, wind, photovoltaic, and solar thermal energy. 322 tabs.
Power Generation for River and Tidal Generators
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Power Generation for River and Tidal Generators Eduard Muljadi, Alan Wright, and Vahan Gevorgian National Renewable Energy Laboratory James Donegan, Cian Marnagh, and Jarlath McEntee Ocean Renewable Power Company Technical Report NREL/TP-5D00-66097 June 2016 NREL is a national laboratory of the U.S. Department of Energy Office of Energy Efficiency & Renewable Energy Operated by the Alliance for Sustainable Energy, LLC This report is available at no cost from the National Renewable Energy
State Energy Production Estimates
Gasoline and Diesel Fuel Update (EIA)
Production Estimates 1960 Through 2014. 2014 Summary Tables. U.S. Energy Information Administration | State Energy Data 2014: Production. Table P1. Energy Production Estimates in Physical Units, 2014 (state-by-state figures for Alabama through Georgia).
Bowley, W.W.
1983-05-10
Apparatus and method for generating electrical power by disposing a plurality of power producing modules in a substantially constant velocity ocean current and mechanically coupling the output of the modules to drive a single electrical generator is disclosed.
Quantum random number generator
Pooser, Raphael C.
2016-05-10
A quantum random number generator (QRNG) and a photon generator for a QRNG are provided. The photon generator may be operated in a spontaneous mode below a lasing threshold to emit photons. Photons emitted from the photon generator may have at least one random characteristic, which may be monitored by the QRNG to generate a random number. In one embodiment, the photon generator may include a photon emitter and an amplifier coupled to the photon emitter. The amplifier may enable the photon generator to be used in the QRNG without introducing significant bias in the random number and may enable multiplexing of multiple random numbers. The amplifier may also desensitize the photon generator to fluctuations in power supplied thereto while operating in the spontaneous mode. In one embodiment, the photon emitter and amplifier may be a tapered diode amplifier.
Use of Cost Estimating Relationships
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28
Cost Estimating Relationships (CERs) are an important tool in an estimator's kit, and in many cases, they are the only tool. Thus, it is important to understand their limitations and characteristics. This chapter discusses considerations of which the estimator must be aware so the Cost Estimating Relationships can be properly used.
Sundararaman, P.; Moldowan, J.M.
1993-03-01
Correlations are demonstrated between steroid maturity parameters and the porphyrin maturity parameter (PMP), which is based on the ratio of specific vanadyl porphyrins C28E/(C28E + C32D) measured by HPLC. Measurements from a global selection of >100 rock extracts and oils show that PMP parallels changes in the C29-sterane 20S/(20S + 20R) and tri/(tri + mono)aromatic steroid ratios, and that all three parameters appear to attain their maximum values at similar maturity levels. The triaromatic steroid side chain cracking parameter, TA I/(I + II), reaches approximately 20% of its maximum value when PMP has reached 100%. These results suggest that PMP is effective in the early to peak portion of the oil window. A new parameter, PMP-2, based on changes in the relative concentrations of two peaks in the HPLC fingerprint (vanadyl "etio" porphyrins), appears effective in assessing the maturity of source rocks beyond peak oil generation. In combination with PMP this parameter extends the effective range of vanadyl porphyrin parameters to higher maturities, as demonstrated by a suite of oils from the Oriente Basin, Ecuador, South America. 22 refs., 6 figs., 1 tab.
Neutron Resonance Parameters and Covariance Matrix of 239Pu
Derrien, Herve; Leal, Luiz C; Larson, Nancy M
2008-08-01
In order to obtain the resonance parameters in a single energy range and the corresponding covariance matrix, a reevaluation of 239Pu was performed with the code SAMMY. The most recent experimental data were analyzed or reanalyzed in the energy range thermal to 2.5 keV. The normalization of the fission cross section data was reconsidered by taking into account the most recent measurements of Weston et al. and Wagemans et al. A full resonance parameter covariance matrix was generated. The method used to obtain realistic uncertainties on the average cross section calculated by SAMMY or other processing codes was examined.
Hickam, Christopher Dale
2008-05-13
A motor/generator is provided for connecting between a transmission input shaft and an output shaft of a prime mover. The motor/generator may include a motor/generator housing, a stator mounted to the motor/generator housing, a rotor mounted at least partially within the motor/generator housing and rotatable about a rotor rotation axis, and a transmission-shaft coupler drivingly coupled to the rotor. The transmission-shaft coupler may include a clamp, which may include a base attached to the rotor and a plurality of adjustable jaws.
AN OVERVIEW OF TOOL FOR RESPONSE ACTION COST ESTIMATING (TRACE)
FERRIES SR; KLINK KL; OSTAPKOWICZ B
2012-01-30
Tools and techniques that provide improved performance and reduced costs are important to government programs, particularly in current times. An opportunity for improvement was identified for preparation of cost estimates used to support the evaluation of response action alternatives. As a result, CH2M HILL Plateau Remediation Company has developed Tool for Response Action Cost Estimating (TRACE). TRACE is a multi-page Microsoft Excel® workbook developed to introduce efficiencies into the timely and consistent production of cost estimates for response action alternatives. This tool combines costs derived from extensive site-specific runs of commercially available remediation cost models with site-specific and estimator-researched and derived costs, providing the best estimating sources available. TRACE also provides for common quantity and key parameter links across multiple alternatives, maximizing ease of updating estimates and performing sensitivity analyses, and ensuring consistency.
Solar thermoelectric generator
Toberer, Eric S.; Baranowski, Lauryn L.; Warren, Emily L.
2016-05-03
Solar thermoelectric generators (STEGs) are solid state heat engines that generate electricity from concentrated sunlight. A novel detailed balance model for STEGs is provided and applied to both state-of-the-art and idealized materials. STEGs can produce electricity by using sunlight to heat one side of a thermoelectric generator. While concentrated sunlight can be used to achieve extremely high temperatures (and thus improved generator efficiency), the solar absorber also emits a significant amount of black body radiation. This emitted light is the dominant loss mechanism in these generators. In this invention, we propose a solution to this problem that eliminates virtually all of the emitted black body radiation. This enables solar thermoelectric generators to operate at higher efficiency and achieve that efficiency with lower levels of optical concentration. The solution is suitable for both single and dual axis solar thermoelectric generators.
Cordaro, Joseph Gabriel; Jones, Reese E.; Neel, Wiley Christopher
2015-09-01
In this report we explore the sensitivities of the insulation resistance between two loops of wire embedded in insulating materials with a simple, approximate model. We discuss limitations of the model and ideas for improvements.
Estimation of constitutive parameters for the Belridge Diatomite, South Belridge Diatomite Field
Fossum, A.F.; Fredrich, J.T.
1998-06-01
A cooperative national laboratory/industry research program was initiated in 1994 that improved understanding of the geomechanical processes causing well casing damage during oil production from weak, compactible formations. The program focused on the shallow diatomaceous oil reservoirs located in California's San Joaquin Valley, and combined analyses of historical field data, experimental determination of rock mechanical behavior, and geomechanical simulation of the reservoir and overburden response to production and injection. Sandia National Laboratories' quasi-static, large-deformation structural mechanics finite element code JAS3D was used to perform the three-dimensional geomechanical simulations. One of the material models implemented in JAS3D to simulate the time-independent inelastic (non-linear) deformation of geomaterials is a generalized version of the Sandler and Rubin cap plasticity model (Sandler and Rubin, 1979). This report documents the experimental rock mechanics data and material cap plasticity models that were derived to describe the Belridge Diatomite reservoir rock at the South Belridge Diatomite Field, Section 33.
Method to estimate the vertical dispersion parameter in a 10 Km range
Xiaoen, L.; Xinyuan, J.; Jinte, Y.
1983-12-01
Based on the Monin-Batchelor Similarity Theory and the concept of effective roughness length, this paper presented an empirical vertical dispersion model for a 10 kilometer range. It could be used under flat and homogeneous as well as complex topographical conditions.
Estimates of HE-LHC beam parameters at different injection energies
Sen, Tanaji; /Fermilab
2010-11-01
A future upgrade to the LHC envisions increasing the top energy to 16.5 TeV and upgrading the injectors. There are two proposals to replace the SPS as the injector to the LHC. One calls for a superconducting ring in the SPS tunnel while the other calls for an injector (LER) in the LHC tunnel. In both scenarios, the injection energy to the LHC will increase. In this note we look at some of the consequences of increased injection energy to the beam dynamics in the LHC.
PHYSICAL PARAMETERS OF STANDARD AND BLOWOUT JETS
Pucci, Stefano; Romoli, Marco; Poletto, Giannina; Sterling, Alphonse C.
2013-10-10
The X-ray Telescope on board the Hinode mission revealed the occurrence, in polar coronal holes, of much more numerous jets than previously indicated by the Yohkoh/Soft X-ray Telescope. These plasma ejections can be of two types, depending on whether they fit the standard reconnection scenario for coronal jets or if they include a blowout-like eruption. In this work, we analyze two jets, one standard and one blowout, that have been observed by the Hinode and STEREO experiments. We aim to infer differences in the physical parameters that correspond to the different morphologies of the events. To this end, we adopt spectroscopic techniques and determine the profiles of the plasma temperature, density, and outflow speed versus time and position along the jets. The blowout jet has a higher outflow speed, a marginally higher temperature, and is rooted in a stronger magnetic field region than the standard event. Our data provide evidence for recursively occurring reconnection episodes within both the standard and the blowout jet, pointing either to bursty reconnection or to reconnection occurring at different locations over the jet lifetimes. We make a crude estimate of the energy budget of the two jets and show how energy is partitioned among different forms. Also, we show that the magnetic energy that feeds the blowout jet is a factor of 10 higher than the magnetic energy that fuels the standard event.
Preliminary relative permeability estimates of methane hydrate-bearing sand
Seol, Yongkoo; Kneafsey, Timothy J.; Tomutsa, Liviu; Moridis, George J.
2006-05-08
The relative permeability to fluids in hydrate-bearing sediments is an important parameter for predicting natural gas production from gas hydrate reservoirs. We estimated the relative permeability parameters (van Genuchten alpha and m) in a hydrate-bearing sand by means of inverse modeling, which involved matching water saturation predictions with observations from a controlled waterflood experiment. We used x-ray computed tomography (CT) scanning to determine both the porosity and the hydrate and aqueous phase saturation distributions in the samples. X-ray CT images showed that hydrate and aqueous phase saturations are non-uniform, and that water flow focuses in regions of lower hydrate saturation. The relative permeability parameters were estimated at two locations in each sample. Differences between the estimated parameter sets at the two locations were attributed to heterogeneity in the hydrate saturation. Better estimates of the relative permeability parameters require further refinement of the experimental design, and better description of heterogeneity in the numerical inversions.
Stanley, B.J.; Guiochon, G.
1993-08-01
The expectation-maximization (EM) method of parameter estimation is used to calculate adsorption energy distributions of molecular probes from their adsorption isotherms. EM does not require prior knowledge of the distribution function or the isotherm, requires no smoothing of the isotherm data, and converges with high stability towards the maximum-likelihood estimate. The method is therefore robust and accurate at high iteration numbers. The EM algorithm is tested with simulated energy distributions corresponding to unimodal Gaussian, bimodal Gaussian, Poisson distributions, and the distributions resulting from Misra isotherms. Theoretical isotherms are generated from these distributions using the Langmuir model, and then chromatographic band profiles are computed using the ideal model of chromatography. Noise is then introduced in the theoretical band profiles comparable to those observed experimentally. The isotherm is then calculated using the elution-by-characteristic points method. The energy distribution given by the EM method is compared to the original one. Results are contrasted to those obtained with the House and Jaycock algorithm HILDA, and shown to be superior in terms of robustness, accuracy, and information theory. The effect of undersampling of the high-pressure/low-energy region of the adsorption is reported and discussed for the EM algorithm, as well as the effect of signal-to-noise ratio on the degree of heterogeneity that may be estimated experimentally.
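The multiplicative EM update described here can be sketched for a discretized distribution with a Langmuir local isotherm: each iteration rescales the weight of every site (represented below by its affinity constant K rather than its energy) by how well it explains the observed isotherm. This is a generic sketch of that style of update, not Stanley and Guiochon's exact formulation; all names are hypothetical:

```python
def em_energy_distribution(pressures, q_obs, K, iters=1000):
    """Multiplicative EM update for a discretized adsorption energy
    distribution f over sites with Langmuir affinities K. Each iteration
    rescales f_j by how well affinity K_j explains the observed isotherm;
    f stays nonnegative and needs no smoothing of the data."""
    theta = [[Kj * p / (1.0 + Kj * p) for Kj in K] for p in pressures]
    f = [1.0 / len(K)] * len(K)  # flat (uninformative) initial guess
    for _ in range(iters):
        q_calc = [sum(fj * t for fj, t in zip(f, row)) for row in theta]
        f = [fj * sum(theta[i][j] * q_obs[i] / q_calc[i]
                      for i in range(len(pressures)))
                / sum(theta[i][j] for i in range(len(pressures)))
             for j, fj in enumerate(f)]
    return f
```

Because the update is multiplicative, a weight that starts positive can shrink toward zero but never turn negative, which is the stability property the abstract highlights.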
Energy Science and Technology Software Center (OSTI)
1996-04-15
AES6.1 is a PC software package developed to aid in the preparation and reporting of cost estimates. AES6.1 provides an easy means for entering and updating the detailed cost, schedule information, project work breakdown structure, and escalation information contained in a typical project cost estimate through the use of menus and formatted input screens. AES6.1 combines this information to calculate both unescalated and escalated cost for a project, which can be reported at varying levels of detail. Following are the major modifications to AES6.0f: Contingency update was modified to provide greater flexibility for user updates; Schedule Update was modified to give the user the ability to schedule Bills of Material at the WBS/Participant/Cost Code level; Schedule Plot was modified to graphically show schedule by WBS/Participant/Cost Code; all Fiscal Year reporting has been modified to use the new schedule format; the Schedule 1-B-7, Cost Schedule, and WBS/Participant reports were modified to determine Phase of Work from the B/M Cost Code; the Utility program was modified to allow selection by cost code and update of the cost code in the Global Schedule update; generic summary and line item download were added to the utility program; and an option was added to all reports which allows the user to indicate where overhead is to be reported (bottom line or in body of report).
Performance of internal covariance estimators for cosmic shear correlation functions
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.
2015-12-31
Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
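The delete-one jackknife discussed here has a compact generic form: recompute the statistic n times, each time leaving out one sample (or sky sub-region), then scale the scatter of the leave-one-out estimates by (n - 1)/n. A minimal sketch with hypothetical names:

```python
def jackknife_covariance(samples, statistic):
    """Delete-one jackknife covariance of a vector-valued statistic:
    recompute the statistic leaving out each sample in turn, then scale
    the scatter of the leave-one-out estimates by (n - 1) / n."""
    n = len(samples)
    stats = [statistic(samples[:i] + samples[i + 1:]) for i in range(n)]
    p = len(stats[0])
    mean = [sum(s[a] for s in stats) / n for a in range(p)]
    factor = (n - 1) / n
    return [[factor * sum((s[a] - mean[a]) * (s[b] - mean[b]) for s in stats)
             for b in range(p)] for a in range(p)]
```

For the sample mean this reproduces the textbook variance-of-the-mean s²/n, a quick sanity check on the (n - 1)/n scaling.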
Quantum random number generation
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Ma, Xiongfeng; Yuan, Xiao; Cao, Zhu; Zhang, Zhen; Qi, Bing
2016-06-28
Here, quantum physics can be exploited to generate true random numbers, which play important roles in many applications, especially in cryptography. Genuine randomness from the measurement of a quantum system reveals the inherent nature of quantumness -- coherence, an important feature that differentiates quantum mechanics from classical physics. The generation of genuine randomness is generally considered impossible with only classical means. Based on the degree of trustworthiness on devices, quantum random number generators (QRNGs) can be grouped into three categories. The first category, practical QRNG, is built on fully trusted and calibrated devices and typically can generate randomness at a high speed by properly modeling the devices. The second category is self-testing QRNG, where verifiable randomness can be generated without trusting the actual implementation. The third category, semi-self-testing QRNG, is an intermediate category which provides a tradeoff between the trustworthiness on the device and the random number generation speed.