INNOVATIVE CONCEPTS FOR ON-LINE SYNCHRONOUS GENERATOR PARAMETER ESTIMATION
… and other routine power engineering studies. These studies are critical for the operation of the power … temperature, magnetic saturation, and coupling between the generator and external systems. The method proposed … on a simplified synchronous generator model. The method is developed to be used with a Visual C++ engine …
Meliopoulos, Sakis; Cokkinides, George; Fardanesh, Bruce; Hedrington, Clinton
2013-12-31T23:59:59.000Z
This is the final report for this project, which was performed in the period October 1, 2009 to June 30, 2013. In this project, a fully distributed high-fidelity dynamic state estimator (DSE) that continuously tracks the real-time dynamic model of a wide-area system with update rates better than 60 times per second was achieved. The proposed technology is based on GPS-synchronized measurements but also utilizes data from all available Intelligent Electronic Devices in the system (numerical relays, digital fault recorders, digital meters, etc.). The distributed state estimator provides the real-time model of the system, not only the voltage phasors. The proposed system provides the infrastructure for a variety of applications, and two very important applications were developed: (a) high-fidelity estimation of generating-unit parameters and (b) energy-function-based transient stability monitoring of a wide-area electric power system with predictive capability. Also, the dynamic distributed state estimation results are stored (the storage scheme includes data and the coincident model), enabling automatic reconstruction and “play back” of a system-wide disturbance. This approach enables complete play-back capability with fidelity equal to that of real time, with the advantage of “playing back” at a user-selected speed. The proposed technologies were developed and tested in the lab during the first 18 months of the project and then demonstrated on two actual systems, the USVI Water and Power Administration system and the New York Power Authority’s Blenheim-Gilboa pumped hydro plant, in the last 18 months of the project. The four main thrusts of this project, mentioned above, are extremely important to the industry. The DSE with the achieved update rates (more than 60 times per second) provides a superior solution to the “grid visibility” question. The generator parameter identification method fills an important and practical need of the industry.
The “energy function” based transient stability monitoring opens up new ways to protect the power grid, better manage disturbances, confine their impact, and in general improve the reliability and security of the system. Finally, as a by-product of the proposed research project, the developed system is able to “play back” disturbances with a click of a mouse. The importance of this by-product is evident when one considers the tremendous effort exerted after the August 2003 blackout to piece together all the disturbance recordings, align them, and recreate the sequence of events. This project has moved the state of the art from fault recording by individual devices to system-wide disturbance recording with “play back” capability.
the LIGO Scientific Collaboration; the Virgo Collaboration; J. Aasi; J. Abadie; B. P. Abbott; R. Abbott; T. D. Abbott; M. Abernathy; T. Accadia; F. Acernese; C. Adams; T. Adams; P. Addesso; R. Adhikari; C. Affeldt; M. Agathos; K. Agatsuma; P. Ajith; B. Allen; A. Allocca; E. Amador Ceron; D. Amariutei; S. B. Anderson; W. G. Anderson; K. Arai; M. C. Araya; S. Ast; S. M. Aston; P. Astone; D. Atkinson; P. Aufmuth; C. Aulbert; B. E. Aylott; S. Babak; P. Baker; G. Ballardin; S. Ballmer; Y. Bao; J. C. B. Barayoga; D. Barker; F. Barone; B. Barr; L. Barsotti; M. Barsuglia; M. A. Barton; I. Bartos; R. Bassiri; M. Bastarrika; A. Basti; J. Batch; J. Bauchrowitz; Th. S. Bauer; M. Bebronne; D. Beck; B. Behnke; M. Bejger; M. G. Beker; A. S. Bell; C. Bell; I. Belopolski; M. Benacquista; J. M. Berliner; A. Bertolini; J. Betzwieser; N. Beveridge; P. T. Beyersdorf; T. Bhadbade; I. A. Bilenko; G. Billingsley; J. Birch; R. Biswas; M. Bitossi; M. A. Bizouard; E. Black; J. K. Blackburn; L. Blackburn; D. Blair; B. Bland; M. Blom; O. Bock; T. P. Bodiya; C. Bogan; C. Bond; R. Bondarescu; F. Bondu; L. Bonelli; R. Bonnand; R. Bork; M. Born; V. Boschi; S. Bose; L. Bosi; B. Bouhou; S. Braccini; C. Bradaschia; P. R. Brady; V. B. Braginsky; M. Branchesi; J. E. Brau; J. Breyer; T. Briant; D. O. Bridges; A. Brillet; M. Brinkmann; V. Brisson; M. Britzger; A. F. Brooks; D. A. Brown; T. Bulik; H. J. Bulten; A. Buonanno; J. Burguet--Castell; D. Buskulic; C. Buy; R. L. Byer; L. Cadonati; G. Cagnoli; E. Calloni; J. B. Camp; P. Campsie; K. Cannon; B. Canuel; J. Cao; C. D. Capano; F. Carbognani; L. Carbone; S. Caride; S. Caudill; M. Cavaglià; F. Cavalier; R. Cavalieri; G. Cella; C. Cepeda; E. Cesarini; T. Chalermsongsak; P. Charlton; E. Chassande-Mottin; W. Chen; X. Chen; Y. Chen; A. Chincarini; A. Chiummo; H. S. Cho; J. Chow; N. Christensen; S. S. Y. Chua; C. T. Y. Chung; S. Chung; G. Ciani; F. Clara; D. E. Clark; J. A. Clark; J. H. Clayton; F. Cleva; E. Coccia; P. -F. Cohadon; C. N. Colacino; A. 
Colla; M. Colombini; A. Conte; R. Conte; D. Cook; T. R. Corbitt; M. Cordier; N. Cornish; A. Corsi; C. A. Costa; M. Coughlin; J. -P. Coulon; P. Couvares; D. M. Coward; M. Cowart; D. C. Coyne; J. D. E. Creighton; T. D. Creighton; A. M. Cruise; A. Cumming; L. Cunningham; E. Cuoco; R. M. Cutler; K. Dahl; M. Damjanic; S. L. Danilishin; S. D'Antonio; K. Danzmann; V. Dattilo; B. Daudert; H. Daveloza; M. Davier; E. J. Daw; T. Dayanga; R. De Rosa; D. DeBra; G. Debreczeni; J. Degallaix; W. Del Pozzo; T. Dent; V. Dergachev; R. DeRosa; S. Dhurandhar; L. Di Fiore; A. Di Lieto; I. Di Palma; M. Di Paolo Emilio; A. Di Virgilio; M. Díaz; A. Dietz; F. Donovan; K. L. Dooley; S. Doravari; S. Dorsher; M. Drago; R. W. P. Drever; J. C. Driggers; Z. Du; J. -C. Dumas; S. Dwyer; T. Eberle; M. Edgar; M. Edwards; A. Effler; P. Ehrens; G. Endröczi; R. Engel; T. Etzel; K. Evans; M. Evans; T. Evans; M. Factourovich; V. Fafone; S. Fairhurst; B. F. Farr; W. M. Farr; M. Favata; D. Fazi; H. Fehrmann; D. Feldbaum; F. Feroz; I. Ferrante; F. Ferrini; F. Fidecaro; L. S. Finn; I. Fiori; R. P. Fisher; R. Flaminio; S. Foley; E. Forsi; L. A. Forte; N. Fotopoulos; J. -D. Fournier; J. Franc; S. Franco; S. Frasca; F. Frasconi; M. Frede; M. A. Frei; Z. Frei; A. Freise; R. Frey; T. T. Fricke; D. Friedrich; P. Fritschel; V. V. Frolov; M. -K. Fujimoto; P. J. Fulda; M. Fyffe; J. Gair; M. Galimberti; L. Gammaitoni; J. Garcia; F. Garufi; M. E. Gáspár; G. Gelencser; G. Gemme; E. Genin; A. Gennai; L. Á. Gergely; S. Ghosh; J. A. Giaime; S. Giampanis; K. D. Giardina; A. Giazotto; S. Gil-Casanova; C. Gill; J. Gleason; E. Goetz; G. González; M. L. Gorodetsky; S. Goßler; R. Gouaty; C. Graef; P. B. Graff; M. Granata; A. Grant; C. Gray; R. J. S. Greenhalgh; A. M. Gretarsson; C. Griffo; H. Grote; K. Grover; S. Grunewald; G. M. Guidi; C. Guido; R. Gupta; E. K. Gustafson; R. Gustafson; J. M. Hallam; D. Hammer; G. Hammond; J. Hanks; C. Hanna; J. Hanson; J. Harms; G. M. Harry; I. W. Harry; E. D. Harstad; M. T. Hartman; C. -J. 
Haster; K. Haughian; K. Hayama; J. -F. Hayau; J. Heefner; A. Heidmann; M. C. Heintze; H. Heitmann; P. Hello; G. Hemming; M. A. Hendry; I. S. Heng; A. W. Heptonstall; V. Herrera; M. Heurs; M. Hewitson; S. Hild; D. Hoak; K. A. Hodge; K. Holt; M. Holtrop; T. Hong; S. Hooper; J. Hough; E. J. Howell; B. Hughey; S. Husa; S. H. Huttner; T. Huynh-Dinh; D. R. Ingram; R. Inta; T. Isogai; A. Ivanov; K. Izumi; M. Jacobson; E. James; Y. J. Jang; P. Jaranowski; E. Jesse; W. W. Johnson; D. I. Jones; R. Jones; R. J. G. Jonker; L. Ju; P. Kalmus; V. Kalogera; S. Kandhasamy; G. Kang; J. B. Kanner; M. Kasprzack; R. Kasturi; E. Katsavounidis; W. Katzman; H. Kaufer; K. Kaufman; K. Kawabe; S. Kawamura; F. Kawazoe; D. Keitel; D. Kelley; W. Kells; D. G. Keppel; Z. Keresztes; A. Khalaidovski; F. Y. Khalili; E. A. Khazanov; B. K. Kim; C. Kim; H. Kim; K. Kim; N. Kim; Y. M. Kim; P. J. King
2013-10-22T23:59:59.000Z
Compact binary systems with neutron stars or black holes are among the most promising sources for ground-based gravitational wave detectors. Gravitational radiation encodes rich information about source physics; thus parameter estimation and model selection are crucial analysis steps for any detection candidate event. Detailed models of the anticipated waveforms enable inference on several parameters, such as component masses, spins, sky location and distance, that are essential for new astrophysical studies of these sources. However, accurate measurements of these parameters and discrimination of models describing the underlying physics are complicated by artifacts in the data, uncertainties in the waveform models and in the calibration of the detectors. Here we report such measurements on a selection of simulated signals added either in hardware or software to the data collected by the two LIGO instruments and the Virgo detector during their most recent joint science run, including a "blind injection" where the signal was not initially revealed to the collaboration. We demonstrate the ability to extract information about the source physics from signals that cover the neutron star and black hole parameter space over the individual mass range 1 Msun - 25 Msun and the full range of spin parameters. The cases reported in this study provide a snapshot of the status of parameter estimation in preparation for the operation of advanced detectors.
Design of Optimal Experiments for Parameter Estimation of Microalgae
Paris-Sud XI, Université de
INTRODUCTION: Microalgae have received specific attention in the framework of renewable energy generation … Design of Optimal Experiments for Parameter Estimation of Microalgae Growth Models (Rafael Mu…) … microalgal production towards a profitable process of renewable energy generation. To render models …
How to fool CMB parameter estimation
William H. Kinney
2000-05-19T23:59:59.000Z
With the release of the data from the Boomerang and MAXIMA-1 balloon flights, estimates of cosmological parameters based on the Cosmic Microwave Background (CMB) have reached unprecedented precision. In this paper I show that it is possible for these estimates to be substantially biased by features in the primordial density power spectrum. I construct primordial power spectra which mimic to within cosmic variance errors the effect of changing parameters such as the baryon density and neutrino mass, meaning that even an ideal measurement would be unable to resolve the degeneracy. Complementary measurements are necessary to resolve this ambiguity in parameter estimation efforts based on CMB temperature fluctuations alone.
Language model parameter estimation using user transcriptions
Hsu, Bo-June
In limited data domains, many effective language modeling techniques construct models with parameters to be estimated on an in-domain development set. However, in some domains, no such data exist beyond the unlabeled test ...
Compressing measurements in quantum dynamic parameter estimation
Magesan, Easwar
We present methods that can provide an exponential savings in the resources required to perform dynamic parameter estimation using quantum systems. The key idea is to merge classical compressive sensing techniques with ...
Frequency tracking and parameter estimation for robust quantum state estimation
Ralph, Jason F. [Department of Electrical Engineering and Electronics, University of Liverpool, Brownlow Hill, Liverpool L69 3GJ (United Kingdom); Jacobs, Kurt [Department of Physics, University of Massachusetts at Boston, 100 Morrissey Blvd, Boston, Massachusetts 02125 (United States); Hill, Charles D. [Centre for Quantum Computation and Communication Technology, School of Physics, University of Melbourne, Victoria 3010 (Australia)
2011-11-15T23:59:59.000Z
In this paper we consider the problem of tracking the state of a quantum system via a continuous weak measurement. If the system Hamiltonian is known precisely, this merely requires integrating the appropriate stochastic master equation. However, even a small error in the assumed Hamiltonian can render this approach useless. The natural answer to this problem is to include the parameters of the Hamiltonian as part of the estimation problem, and the full Bayesian solution to this task provides a state estimate that is robust against uncertainties. However, this approach requires considerable computational overhead. Here we consider a single qubit in which the Hamiltonian contains a single unknown parameter. We show that classical frequency estimation techniques greatly reduce the computational overhead associated with Bayesian estimation and provide accurate estimates for the qubit frequency.
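The classical frequency-estimation shortcut described above can be illustrated with a toy sketch: instead of running a full Bayesian filter over the unknown Hamiltonian parameter, one can locate the spectral peak of a noisy measurement record. All signal parameters below (5 Hz tone, noise level, record length) are invented for illustration and are not taken from the paper:

```python
import numpy as np

def estimate_frequency(signal, dt):
    """Estimate the dominant frequency of a noisy record via the FFT peak.

    A classical stand-in for the frequency-tracking step: locate the
    spectral maximum rather than integrating a stochastic master equation.
    """
    n = len(signal)
    spectrum = np.abs(np.fft.rfft(signal - np.mean(signal)))
    freqs = np.fft.rfftfreq(n, d=dt)
    return freqs[np.argmax(spectrum)]

# Simulated weak-measurement record: precession at 5 Hz plus noise.
rng = np.random.default_rng(0)
dt = 1e-3
t = np.arange(0, 2.0, dt)
record = np.cos(2 * np.pi * 5.0 * t) + 0.5 * rng.standard_normal(t.size)

f_hat = estimate_frequency(record, dt)
```

The frequency resolution of this sketch is 1/(n·dt); a Bayesian update could then be run over a narrow band around `f_hat`, which is the computational saving the abstract alludes to.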
Synchronous Machine Parameter Estimation Using Orthogonal Series Expansion
… such as analysis of linear time-invariant and time-varying systems, model reduction, optimal control and system … an alternative to estimate armature circuit parameters of large utility generators using real-time operating data … and current measurements) and/or synthetic input-output data. This allows writing a set of linear algebraic …
PARAMETER ESTIMATION IN PETROLEUM AND GROUNDWATER MODELING
Ewing, Richard E.
PARAMETER ESTIMATION IN PETROLEUM AND GROUNDWATER MODELING, R.E. Ewing, M.S. Pilant, J.G. Wade … the location and subsequent remediation of contaminants in groundwater to the optimization of production … on grand challenge problems. In today's petroleum industry, reservoir simulators are routinely used …
Modeling and Parameter Estimation of Interpenetrating Polymer Network Process
Grossmann, Ignacio E.
Modeling and Parameter Estimation of Interpenetrating Polymer Network Process … PA 15213 … Interpenetrating polymer network process: polymerization reactor, seed particle, monomer droplet, aqueous media, seed polymer A, monomer B (Fig. 1: Seed …)
SWOT Satellite Mission: Combined State Parameter Estimation
Washington at Seattle, University of
… parameter estimation problem; data assimilation experiments; water depth, discharge, channel width, roughness coefficient … Need for a surface water mission: importance to hydrology; gauge measurements insufficient … hydraulics: Amazon, Siberia, Ohio … Global gauge measurements … SWOT technology: these surface water …
Cosmological parameter estimation: impact of CMB aberration
Catena, Riccardo [Institut für Theoretische Physik, Friedrich-Hund-Platz 1, 37077 Göttingen (Germany); Notari, Alessio, E-mail: riccardo.catena@theorie.physik.uni-goettingen.de, E-mail: notari@ffn.ub.es [Departament de Física Fondamental i Institut de Ciéncies del Cosmos, Universitat de Barcelona, Martí i Franqués 1, 08028 Barcelona (Spain)
2013-04-01T23:59:59.000Z
The peculiar motion of an observer with respect to the CMB rest frame induces an apparent deflection of the observed CMB photons, i.e. aberration, and a shift in their frequency, i.e. Doppler effect. Both effects distort the temperature multipoles a_lm via a mixing matrix at any l. The common practice when performing a CMB-based cosmological parameter estimation is to assume that Doppler affects only the l = 1 multipole, and to neglect any other corrections. In this paper we reconsider the validity of this assumption, showing that it is actually not robust when sky cuts are included to model CMB foreground contamination. Assuming a simple fiducial cosmological model with five parameters, we simulated CMB temperature maps of the sky in a WMAP-like and in a Planck-like experiment and added aberration and Doppler effects to the maps. We then analyzed the maps with and without aberration and Doppler effects with an MCMC in a Bayesian framework in order to assess the ability to reconstruct the parameters of the fiducial model. We find that, depending on the specific realization of the simulated data, the parameters can be biased by up to one standard deviation for WMAP and almost two standard deviations for Planck. Therefore we conclude that in general it is not a solid assumption to neglect aberration in a CMB-based cosmological parameter estimation.
Estimating Wind Turbine Parameters and Quantifying Their Effects on Dynamic Behavior
Hiskens, Ian A.
Estimating Wind Turbine Parameters and Quantifying Their Effects on Dynamic Behavior, Jonathan … variable-speed wind turbines in grid stability studies. Often the values for model parameters are poorly … parameters on the dynamic behavior of wind turbine generators. A parameter estimation process is then used …
Adaptive Distributed Parameter and Input Estimation in Plasma Tokamak Heat
Boyer, Edmond
Keywords: Thermonuclear fusion, distributed parameter systems, input state and parameter estimation, adaptive infinite-dimensional estimation, Galerkin method. … INTRODUCTION: In a controlled thermonuclear fusion reactor, the plasma thermal diffusivity and heating energy play an important role …
FUZZY SUPERNOVA TEMPLATES. II. PARAMETER ESTIMATION
Rodney, Steven A. [Department of Physics and Astronomy, Johns Hopkins University, Baltimore, MD 21218 (United States); Tonry, John L., E-mail: rodney@jhu.ed, E-mail: jt@ifa.hawaii.ed [Institute for Astronomy, University of Hawaii, Honolulu, HI 96822 (United States)
2010-05-20T23:59:59.000Z
Wide-field surveys will soon be discovering Type Ia supernovae (SNe) at rates of several thousand per year. Spectroscopic follow-up can only scratch the surface for such enormous samples, so these extensive data sets will only be useful to the extent that they can be characterized by the survey photometry alone. In a companion paper we introduced the Supernova Ontology with Fuzzy Templates (SOFT) method for analyzing SNe using direct comparison to template light curves, and demonstrated its application for photometric SN classification. In this work we extend the SOFT method to derive estimates of redshift and luminosity distance for Type Ia SNe, using light curves from the Sloan Digital Sky Survey (SDSS) and Supernova Legacy Survey (SNLS) as a validation set. Redshifts determined by SOFT using light curves alone are consistent with spectroscopic redshifts, showing an rms scatter in the residuals of rms_z = 0.051. SOFT can also derive simultaneous redshift and distance estimates, yielding results that are consistent with the currently favored ΛCDM cosmological model. When SOFT is given spectroscopic information for SN classification and redshift priors, the rms scatter in Hubble diagram residuals is 0.18 mag for the SDSS data and 0.28 mag for the SNLS objects. Without access to any spectroscopic information, and even without any redshift priors from host galaxy photometry, SOFT can still measure reliable redshifts and distances, with an increase in the Hubble residuals to 0.37 mag for the combined SDSS and SNLS data set. Using Monte Carlo simulations, we predict that SOFT will be able to improve constraints on time-variable dark energy models by a factor of 2-3 with each new generation of large-scale SN surveys.
Neural Network Based Modeling of a Large Steam Turbine-Generator Rotor Body Parameters from On… … technique to estimate and model rotor-body parameters of a large steam turbine-generator from real-time …
System and method for motor parameter estimation
Luhrs, Bin; Yan, Ting
2014-03-18T23:59:59.000Z
A system and method for determining unknown values of certain motor parameters includes a motor input device connectable to an electric motor having associated therewith values for known motor parameters and an unknown value of at least one motor parameter. The motor input device includes a processing unit that receives a first input from the electric motor comprising values for the known motor parameters for the electric motor and receives a second input comprising motor data on a plurality of reference motors, including values for motor parameters corresponding to the known motor parameters of the electric motor and values for motor parameters corresponding to the at least one unknown motor parameter value of the electric motor. The processor determines the unknown value of the at least one motor parameter from the first input and the second input and determines a motor management strategy for the electric motor based thereon.
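A minimal sketch of the idea in this abstract: look up the reference motors whose known parameters best match the target motor, and infer the unknown parameter from them. The parameter names, table values, and the nearest-neighbor rule below are illustrative assumptions, not the patented method:

```python
import numpy as np

# Hypothetical reference data: rows of (rated_voltage, rated_current,
# rotor_resistance); the last column is the parameter to be estimated.
reference_motors = np.array([
    [230.0, 10.0, 0.45],
    [230.0, 12.0, 0.40],
    [460.0, 10.0, 0.90],
    [460.0, 12.0, 0.85],
])

def estimate_unknown_parameter(known, references, k=2):
    """Estimate an unknown motor parameter by averaging it over the k
    reference motors whose known parameters are closest to the target's."""
    known_cols = references[:, :-1]
    # Normalize each column so voltage and current are comparable.
    scale = known_cols.max(axis=0)
    dist = np.linalg.norm(known_cols / scale - np.asarray(known) / scale, axis=1)
    nearest = np.argsort(dist)[:k]
    return references[nearest, -1].mean()

r_hat = estimate_unknown_parameter([230.0, 11.0], reference_motors)
```

Here a 230 V, 11 A motor is matched to the two 230 V reference motors, so the estimate is the mean of their rotor resistances.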
Analysis of neutron scattering data: Visualization and parameter estimation
Beauchamp, J.J.; Fedorov, V.; Hamilton, W.A.; Yethiraj, M.
1998-09-01T23:59:59.000Z
Traditionally, small-angle neutron and x-ray scattering (SANS and SAXS) data analysis requires measurements of the signal and corrections due to the empty sample container, detector efficiency and time-dependent background. These corrections are then made on a pixel-by-pixel basis and estimates of relevant parameters (e.g., the radius of gyration) are made using the corrected data. This study was carried out in order to determine whether treatment of the detector efficiency and empty sample cell in a more statistically sound way would significantly reduce the uncertainties in the parameter estimators. Elements of experiment design are briefly discussed in this paper. For instance, we studied how the time for a measurement should be optimally divided between the counting for signal, background and detector efficiency. In Section 2 we introduce the commonly accepted models for small-angle neutron and x-ray scattering and confine ourselves to the Guinier and Rayleigh models and their minor generalizations. The traditional approaches to data analysis are discussed only to the extent necessary to allow their comparison with the proposed techniques. Section 3 describes the main stages of the proposed method: visual data exploration, fitting the detector sensitivity function, and fitting a compound model. This model includes three additive terms describing scattering by the sample, scattering by an empty container, and background noise. We compare a few alternatives for the first term by applying various scatter plots and computing sums of standardized squared residuals. Possible corrections due to smearing effects and the randomness of estimated parameters are also briefly discussed. In Section 4 the robustness of the estimators with respect to lower and upper bounds imposed on the momentum value is discussed.
We show that for the available data set the most accurate and stable estimates are generated by models containing two terms of either Guinier's or Rayleigh's type. The optimal partitioning of the total experimental time between measuring the various signals is discussed in Section 5. We applied a straightforward optimization instead of special experimental techniques because of the numerical simplicity of the corresponding problem. As the criterion of optimality we selected the variance of the maximum likelihood estimator of the gyration radius. The statistical background of the proposed approach is given in the appendix. The properties of the maximum likelihood estimators and the corresponding iterated estimator, together with its possible numerical realization, are presented in subsection A.1. In subsection A.2 we prove that the use of a compound model leads to more efficient estimators than a stage-wise analysis of the different components entering that model.
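As a concrete illustration of the Guinier model mentioned in this abstract, here is a hedged sketch that fits I(q) = I0·exp(-(q·Rg)²/3) to synthetic data by linear least squares on ln I versus q². The data values (Rg = 20, 1% multiplicative noise) are invented, and this is ordinary least squares rather than the paper's full maximum-likelihood treatment:

```python
import numpy as np

def fit_guinier(q, intensity):
    """Fit the Guinier model I(q) = I0 * exp(-(q*Rg)**2 / 3) by linear
    least squares on ln(I) versus q**2, returning (I0, Rg)."""
    slope, intercept = np.polyfit(q**2, np.log(intensity), 1)
    return np.exp(intercept), np.sqrt(-3.0 * slope)

# Synthetic small-angle scattering curve with Rg = 20 (arbitrary units).
rng = np.random.default_rng(1)
q = np.linspace(0.005, 0.05, 40)
I_true = 100.0 * np.exp(-(q * 20.0) ** 2 / 3.0)
I_obs = I_true * (1.0 + 0.01 * rng.standard_normal(q.size))

I0_hat, Rg_hat = fit_guinier(q, I_obs)
```

Taking the logarithm turns the exponential model into a straight line in q², which is why the radius of gyration can be read off a simple linear fit in the Guinier regime.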
Parameter estimation for energy balance models with memory
Parameter estimation for energy balance models with memory, by Lionel Roques, Mickaël D… … parameter estimation for one-dimensional energy balance models with memory (EBMMs) given localized … estimate is still possible in certain cases. Keywords: age dating; Bayesian inference; energy balance model
High-Speed Parameter Estimation Algorithms For Nonlinear Smart Materials
High-Speed Parameter Estimation Algorithms for Nonlinear Smart Materials, Jon M. Ernstberger … for ferroelectric, ferromagnetic, and ferroelastic materials is the estimation or identification of material … alters the position of the cutting head. The nonlinear material behavior creates difficulty when …
Parameter Estimation from an Optimal Projection in a Local Environment
A. Bijaoui; A. Recio-Blanco; P. de Laverny
2008-11-03T23:59:59.000Z
The parameter fit from a model grid is limited by our capability to reduce the number of models, taking into account the number of parameters and the nonlinear variation of the models with the parameters. The Local MultiLinear Regression (LMLR) algorithms allow one to fit the data linearly in a local environment. The MATISSE algorithm, developed in the context of the estimation of stellar parameters from the Gaia RVS spectra, is connected to this class of estimators. A two-step procedure was introduced. A raw parameter estimation is first done in order to localize the parameter environment. The parameters are then estimated by projection on specific vectors computed for an optimal estimation. The MATISSE method is compared to estimation using objective analysis. In this framework, the kernel choice plays an important role, as the environment needed for the parameter estimation can result from it. The determination of a first parameter set can also be avoided in this analysis. These procedures based on a local projection can be fruitfully applied to nonlinear parameter estimation if the number of data sets to be fitted is greater than the number of models.
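The two-step procedure described above (coarse localization on a model grid, then a local linear fit) can be sketched on a toy one-parameter problem. The model function, grid spacing, and neighborhood size below are all invented; this is a generic local-regression sketch, not the MATISSE algorithm itself:

```python
import numpy as np

def model(theta):
    """Toy nonlinear 'spectrum' depending on a single parameter."""
    return np.array([theta, theta**2, np.sin(theta)])

# Precomputed model grid (the library of templates).
grid = np.linspace(0.0, 3.0, 31)
library = np.array([model(t) for t in grid])

def estimate_parameter(obs, k=5):
    # Step 1: raw estimate -- locate the observation's neighborhood
    # by nearest models in data space.
    dist = np.linalg.norm(library - obs, axis=1)
    nearest = np.argsort(dist)[:k]
    # Step 2: local multilinear regression -- fit theta as a linear
    # function of the model vectors in that neighborhood, then project
    # the observation onto the fitted relation.
    X = np.column_stack([np.ones(k), library[nearest]])
    coef, *_ = np.linalg.lstsq(X, grid[nearest], rcond=None)
    return coef[0] + obs @ coef[1:]

theta_hat = estimate_parameter(model(1.33))
```

Because the regression is refit only in the local environment, the linear approximation stays accurate even though the model varies nonlinearly over the full grid.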
Calibration as Parameter Estimation in Sensor Networks Kamin Whitehouse
Whitehouse, Kamin
Calibration as Parameter Estimation in Sensor Networks, Kamin Whitehouse, UC Berkeley, Berkeley, CA … an ad-hoc localization system for sensor networks and explain why traditional calibration methods are inadequate for this system. Building upon previous work, we frame calibration as a parameter estimation …
Estimation of Parameters in Carbon Sequestration Models from Net Ecosystem
White, Luther
Estimation of Parameters in Carbon Sequestration Models from Net Ecosystem Exchange Data, Luther … in the context of a deterministic compartmental carbon sequestration system. Sensitivity and approximation … usefulness in the estimation of parameters within a compartmental carbon sequestration model. Previously we …
Seismic shape parameters estimation and ground-roll suppression using
Spagnolini, Umberto
Seismic shape parameters estimation and ground-roll suppression using vector-sensor beamforming … the problem of estimating the shape parameters of seismic wavefields in linear arrays. The purpose … of the subsurface layers from the seismic wavefields registered by surface sensors. However, only the waves …
Parameter estimation for agenda-based user simulation
Simon Keizer; Filip Jurčíček; François Mairesse; Blaise Thomson; Kai Yu; Steve Young
This paper presents an agenda-based user simulator which has been extended to be trainable on real data with the aim of more closely modelling the complex rational behaviour exhibited by real users. The trainable part is formed by a set of random decision points that may be encountered during the process of receiving a system act and responding with a user act. A sample-based method is presented for using real user data to estimate the parameters that control these decisions. Evaluation results are given both in terms of statistics of generated user behaviour and the quality of policies trained with different simulators. Compared to a handcrafted simulator, the trained system provides a much better fit to corpus data, and evaluations suggest that this better fit should result in improved dialogue performance.
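The sample-based estimation of the parameters controlling a random decision point can be illustrated with a minimal sketch: count the choices real users made at one decision point and turn the counts into smoothed multinomial probabilities. The choice labels and smoothing scheme are illustrative assumptions, not the paper's exact estimator:

```python
from collections import Counter

# Hypothetical log of choices observed at one random decision point of
# the simulator (labels are invented for illustration).
observed_choices = ["confirm", "confirm", "repeat", "confirm", "rephrase",
                    "confirm", "repeat", "confirm"]

def estimate_decision_probs(choices, smoothing=1.0):
    """Smoothed maximum-likelihood estimates of the multinomial
    parameters governing one random decision point."""
    counts = Counter(choices)
    total = len(choices) + smoothing * len(counts)
    return {c: (n + smoothing) / total for c, n in counts.items()}

probs = estimate_decision_probs(observed_choices)
```

At simulation time the user act would be sampled from `probs` whenever this decision point is reached, which is how corpus statistics propagate into generated behaviour.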
Estimating atmospheric parameters and reducing noise for multispectral imaging
Conger, James Lynn
2014-02-25T23:59:59.000Z
A method and system for estimating atmospheric radiance and transmittance. An atmospheric estimation system is divided into a first phase and a second phase. The first phase inputs an observed multispectral image and an initial estimate of the atmospheric radiance and transmittance for each spectral band and calculates the atmospheric radiance and transmittance for each spectral band, which can be used to generate a "corrected" multispectral image that is an estimate of the surface multispectral image. The second phase inputs the observed multispectral image and the surface multispectral image that was generated by the first phase and removes noise from the surface multispectral image by smoothing out change in average deviations of temperatures.
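The first-phase correction described in this abstract amounts to inverting a per-band radiative transfer relation. A hedged sketch of that single step, with invented radiance and transmittance numbers (the patent's iterative estimation of the atmospheric terms themselves is not shown):

```python
def correct_band(observed, path_radiance, transmittance):
    """Invert the single-band relation
        observed = path_radiance + transmittance * surface
    to recover the surface-leaving radiance for one spectral band."""
    return (observed - path_radiance) / transmittance

# Illustrative per-band values (not from the patent).
observed = [120.0, 95.0, 60.0]
path = [20.0, 15.0, 10.0]
trans = [0.8, 0.85, 0.9]

surface = [correct_band(o, p, t) for o, p, t in zip(observed, path, trans)]
```

Repeating this for every band yields the "corrected" multispectral image that the second phase then denoises.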
Kalman filter data assimilation: Targeting observations and parameter estimation
Bellsky, Thomas, E-mail: bellskyt@asu.edu; Kostelich, Eric J.; Mahalov, Alex [School of Mathematical and Statistical Sciences, Arizona State University, Tempe, Arizona 85287 (United States)]
2014-06-15T23:59:59.000Z
This paper studies the effect of targeted observations on state and parameter estimates determined with Kalman filter data assimilation (DA) techniques. We first provide an analytical result demonstrating that targeting observations within the Kalman filter for a linear model can significantly reduce state estimation error as opposed to fixed or randomly located observations. We next conduct observing system simulation experiments for a chaotic model of meteorological interest, where we demonstrate that the local ensemble transform Kalman filter (LETKF) with targeted observations based on largest ensemble variance is skillful in providing more accurate state estimates than the LETKF with randomly located observations. Additionally, we find that a hybrid ensemble Kalman filter parameter estimation method accurately updates model parameters within the targeted observation context to further improve state estimation.
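The parameter estimation idea above, augmenting the state vector so the filter estimates model parameters alongside the state, can be shown in a minimal linear sketch. This is a scalar random walk with an unknown constant drift tracked by an ordinary Kalman filter, not the LETKF or targeted-observation machinery of the paper; all noise levels are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
b_true, q, r = 0.5, 0.01, 0.25          # true drift, process/obs noise

F = np.array([[1.0, 1.0], [0.0, 1.0]])  # x_{k+1} = x_k + b; b constant
H = np.array([[1.0, 0.0]])              # only x is observed
Q = np.diag([q, 0.0])
R = np.array([[r]])

z = np.zeros(2)                         # estimate of the augmented state [x, b]
P = np.eye(2) * 10.0                    # initial covariance (diffuse)

x = 0.0
for _ in range(300):
    x = x + b_true + rng.normal(0.0, np.sqrt(q))  # truth evolves
    y = x + rng.normal(0.0, np.sqrt(r))           # noisy observation
    # Predict step.
    z = F @ z
    P = F @ P @ F.T + Q
    # Update step.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    z = z + (K @ (y - H @ z)).ravel()
    P = (np.eye(2) - K @ H) @ P

b_hat = z[1]  # estimated drift parameter
```

Because the parameter enters the augmented transition matrix, every observation of x also updates b, which is the essence of joint state/parameter estimation.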
Parameter Estimation of Gravitational Waves from Precessing BH-NS Inspirals with higher harmonics
R. O'Shaughnessy; B. Farr; E. Ochsner; H. S. Cho; V. Raymond; C. Kim; C. H. Lee
2014-04-11T23:59:59.000Z
Precessing black hole-neutron star (BH-NS) binaries produce a rich gravitational wave signal, encoding the binary's nature and inspiral kinematics. Using the lalinference_mcmc Markov-chain Monte Carlo parameter estimation code, we use two fiducial examples to illustrate how the geometry and kinematics are encoded into the modulated gravitational wave signal, using coordinates well-adapted to precession. Even for precessing binaries, we show that the performance of detailed parameter estimation can be anticipated by "effective" estimates: comparisons of a prototype signal with its nearest neighbors, adopting a fixed sky location and an idealized two-detector network. We use the detailed and effective approaches to show that higher harmonics provide a nonzero but small local improvement when estimating the parameters of precessing BH-NS binaries. That said, we show that higher harmonics can improve parameter estimation accuracy for precessing binaries by ruling out approximately degenerate source orientations. Our work illustrates the quantities gravitational wave measurements can provide, such as reliable component masses and the precise orientation of a precessing short gamma-ray burst progenitor relative to the line of sight. "Effective" estimates may provide a simple way to estimate trends in the performance of parameter estimation for generic precessing BH-NS binaries in next-generation detectors. For example, our results suggest that the orbital chirp rate, precession rate, and precession geometry are roughly independent observables, defining natural variables to organize correlations in the high-dimensional BH-NS binary parameter space.
Adaptive Online Battery Parameters/SOC/Capacity Co-estimation
Chow, Mo-Yuen
Adaptive Online Battery Parameters/SOC/Capacity Co-estimation, Habiballah Rahimi-Eichi and Mo-Yuen Chow … and even storage ageing of the battery. Following our previous publications, in which we developed an online … parameters to characterize the performance and application of a battery. Although the nominal capacity …
Sequential estimation of intramuscular EMG model parameters for prosthesis control
Paris-Sud XI, Université de
A system model is presented for sequentially estimating intramuscular EMG model parameters, which can lead to an active drive of an upper limb prosthesis using signals that express motoneuron activity as command signals.
On Parameter Estimation of Urban Storm-Water Runoff Model
Pedro Avellaneda; Thomas P. Ballestero. Runoff models are commonly used for urban storm-water quality applications (DeCoursey 1985; Tsihrintzis). Estimates of the model parameters are provided for modeling purposes and other urban storm-water quality applications.
Robust quantum parameter estimation: Coherent magnetometry with feedback
Stockton, John K.; Geremia, J.M.; Doherty, Andrew C.; Mabuchi, Hideo [Norman Bridge Laboratory of Physics, Mail Code 12-33, California Institute of Technology, Pasadena, California 91125 (United States)
2004-03-01T23:59:59.000Z
We describe the formalism for optimally estimating and controlling both the state of a spin ensemble and a scalar magnetic field with information obtained from a continuous quantum limited measurement of the spin precession due to the field. The full quantum parameter estimation model is reduced to a simplified equivalent representation to which classical estimation and control theory is applied. We consider both the tracking of static and fluctuating fields in the transient and steady-state regimes. By using feedback control, the field estimation can be made robust to uncertainty about the total spin number.
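The reduction described above, from a quantum parameter estimation model to classical estimation and control, can be illustrated with a minimal scalar Kalman filter that tracks a fluctuating field from noisy measurements. This is a generic sketch, not the paper's full spin-ensemble model; the noise variances `q` and `r` are assumed values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical discrete-time model (not from the paper): the field b follows
# a random walk, and each measurement is y = b + noise. A scalar Kalman
# filter tracks b, analogous to reducing the quantum model to a classical
# estimation problem.
q, r = 1e-4, 1e-2            # process and measurement noise variances (assumed)
b_true, b_hat, p = 0.5, 0.0, 1.0
errors = []
for _ in range(500):
    b_true += rng.normal(0.0, np.sqrt(q))     # field fluctuates
    y = b_true + rng.normal(0.0, np.sqrt(r))  # noisy measurement
    p += q                                     # predict: variance grows
    k = p / (p + r)                            # Kalman gain
    b_hat += k * (y - b_hat)                   # update estimate
    p *= (1.0 - k)                             # update posterior variance
    errors.append(abs(b_true - b_hat))

print(p)        # steady-state posterior variance, roughly sqrt(q * r)
```

In steady state the posterior variance settles near the positive root of p^2 + q p - q r = 0, mirroring the steady-state regimes discussed in the abstract.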
Optimal quantum multi-parameter estimation and application to dipole- and exchange-coupled qubits
Kevin C. Young; Mohan Sarovar; Robert Kosut; K. Birgitta Whaley
2009-06-04T23:59:59.000Z
We consider the problem of quantum multi-parameter estimation with experimental constraints and formulate the solution in terms of a convex optimization. Specifically, we outline an efficient method to identify the optimal strategy for estimating multiple unknown parameters of a quantum process and apply this method to a realistic example. The example is two electron spin qubits coupled through the dipole and exchange interactions with unknown coupling parameters -- explicitly, the position vector relating the two qubits and the magnitude of the exchange interaction are unknown. This coupling Hamiltonian generates a unitary evolution which, when combined with arbitrary single-qubit operations, produces a universal set of quantum gates. However, the unknown parameters must be known precisely to generate high-fidelity gates. We use the Cramér-Rao bound on the variance of a point estimator to construct the optimal series of experiments to estimate these free parameters, and present a complete analysis of the optimal experimental configuration. Our method of transforming the constrained optimal parameter estimation problem into a convex optimization is powerful and widely applicable to other systems.
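The Cramér-Rao reasoning above can be seen in a toy single-parameter example (not the paper's two-qubit system): estimating a rotation angle theta from projective measurements with outcome probability p(theta) = cos^2(theta/2). The model and sample sizes here are illustrative assumptions.

```python
import numpy as np

# Per-shot Fisher information of a Bernoulli outcome with p(theta):
#   F = (dp/dtheta)^2 / (p * (1 - p))
# For p = cos^2(theta/2) this evaluates to exactly 1, so the Cramer-Rao
# bound gives Var(theta_hat) >= 1 / N for N repeated shots.
theta = 0.7
p = np.cos(theta / 2) ** 2
dp = -0.5 * np.sin(theta)
fisher = dp**2 / (p * (1 - p))
N = 10_000
crb = 1.0 / (N * fisher)

# Check the bound against a simulated maximum-likelihood estimate.
rng = np.random.default_rng(1)
trials = 2000
ones = rng.binomial(N, p, size=trials)          # counts of "+" outcomes
theta_hat = 2 * np.arccos(np.sqrt(ones / N))    # invert p(theta)
print(fisher, theta_hat.var(), crb)
```

The empirical estimator variance lands close to the bound, illustrating why the Fisher information matrix is the natural objective for designing optimal experiment sequences.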
Camera Parameters Estimation from Hand-labelled Sun Positions
Treuille, Adrien
In this document, we show that if the sun is visible in an image sequence, camera parameters can be estimated from hand-labelled sun positions. The technique requires a user to label the position of the sun in the images.
UNSCENTED KALMAN FILTERING FOR SPACECRAFT ATTITUDE STATE AND PARAMETER ESTIMATION
Hall, Christopher D.
AAS-04-115. Matthew C. VanDyke, Jana L. Schwartz, Christopher D. Hall. An Unscented Kalman Filter (UKF) is derived and compared with an Extended Kalman Filter (EKF). The EKF is an extension of the linear Kalman Filter for nonlinear systems.
Estimating the Parameters of the Marshall Olkin Bivariate Weibull
Kundu, Debasis
Estimating the Parameters of the Marshall Olkin Bivariate Weibull Distribution by EM Algorithm. Debasis Kundu & Arabin Kumar Dey. Abstract: In this paper we consider the Marshall-Olkin bivariate Weibull distribution. The Marshall-Olkin bivariate Weibull distribution is a singular distribution, both of whose marginals are univariate Weibull distributions.
Estimation of steady-state basic parameters of stars
B. V. Vasiliev
2000-03-30T23:59:59.000Z
From a minimum of the total energy of celestial bodies, their basic parameters are obtained. The steady-state values of the mass, radius, and temperature of stars and white dwarfs, as well as the masses of pulsars, are calculated. The luminosity and gyromagnetic ratio of celestial bodies are estimated. All the obtained values are in satisfactory agreement with observational data.
Parameter estimation of permanent magnet stepper motors without mechanical sensors
Paris-Sud XI, UniversitÃ© de
doi:10.1016/j.conengprac.2014.01.015. Permanent Magnet Stepper Motors (PMSMs) are widely used in industry for position control, especially in manufacturing applications. Estimating their parameters without mechanical sensors supports real-time adaptation and fault detection. The estimation of PMSM parameters was studied in (Blauch et al., 1993).
Dresser, George Brayton
1969-01-01T23:59:59.000Z
The operating ratio is defined and its use for regulatory purposes discussed. Procedures for the estimation of operating ratios from CTS and cost data are explained. An approximate formula for the variance of the ratio of two random variables is presented. Tables cover Texas intrastate traffic, 1967, giving operating ratios by actual weight of shipment and by number of miles the shipment moved.
Parameter estimation of quantum processes using convex optimization
Gábor Balló; Katalin M. Hangos
2010-04-29T23:59:59.000Z
A convex optimization based method is proposed for quantum process tomography in the case of known channel model structure but unknown channel parameters. The main idea is to select an affine parametrization of the Choi matrix as a set of optimization variables, and to formulate a semidefinite programming problem with a least squares objective function. Possible convex relations between the optimization variables are also taken into account to improve the estimation. Simulation case studies show that the proposed method can significantly increase the accuracy of the parameter estimation if the channel model structure is known. Besides the convex part, the determination of the channel parameters from the optimization variables is in general a nonconvex step. In the case of Pauli channels, however, the method reduces to a purely convex optimization problem, allowing a globally optimal solution to be obtained.
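The core idea above, exploiting a known channel model structure so that only a few parameters remain to be fit by least squares, can be sketched with a toy depolarizing (Pauli) channel. This is a simplified stand-in for the paper's Choi-matrix SDP: the channel shrinks the Bloch vector by an unknown factor, and that single parameter is recovered from noisy tomography data. All numbers are illustrative assumptions.

```python
import numpy as np

# Toy version of structure-aware tomography: a depolarizing channel maps
# Bloch vector r -> lam * r. With the structure known, the unknown lam is
# obtained by a (here purely convex) least-squares fit of measured output
# Bloch vectors to the model.
rng = np.random.default_rng(2)
lam_true = 0.8
inputs = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1],
                   [-1, 0, 0], [0, -1, 0], [0, 0, -1]], dtype=float)
# Simulated noisy tomography data for the six cardinal input states.
outputs = lam_true * inputs + rng.normal(0, 0.01, inputs.shape)

# Minimize ||outputs - lam * inputs||^2 over lam (closed-form solution).
lam_hat = (inputs.ravel() @ outputs.ravel()) / (inputs.ravel() @ inputs.ravel())
print(lam_hat)
```

For a general channel the map from optimization variables back to physical parameters would be nonconvex, which is exactly the caveat the abstract raises; the Pauli case stays convex.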
Force Field Parameter Estimation of Functional Perfluoropolyether Lubricants
Smith, R.; Chung, P.S.; Steckel, J; Jhon, M.S.; Biegler, L.T.
2011-01-01T23:59:59.000Z
The head disk interface in a hard disk drive can be considered a hierarchical multiscale system, which requires the hybridization of multiscale modeling methods with a coarse-graining procedure. However, fundamental force field parameters are required to enable the coarse-graining procedure from atomistic/molecular-scale to mesoscale models. In this paper, we investigate beyond the molecular level and perform ab initio calculations to obtain the force field parameters. Intramolecular force field parameters for Zdol and Ztetraol were evaluated with truncated PFPE molecules to allow for feasible quantum calculations while still maintaining the characteristic chemical structure of the end groups. Using the harmonic approximation to the bond and angle potentials, the parameters were derived from the Hessian matrix, and the dihedral force constants were fit to the torsional energy profiles generated by a series of constrained molecular geometry optimizations.
Bounds on Quantum Multiple-Parameter Estimation with Gaussian State
Yang Gao; Hwang Lee
2014-07-28T23:59:59.000Z
We investigate the quantum Cramer-Rao bounds on the joint multiple-parameter estimation with the Gaussian state as a probe. We derive the explicit right logarithmic derivative and symmetric logarithmic derivative operators in such a situation. We compute the corresponding quantum Fisher information matrices, and find that they can be fully expressed in terms of the mean displacement and covariance matrix of the Gaussian state. Finally, we give some examples to show the utility of our analytical results.
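The structural point above, that the Fisher information of a Gaussian state is fully determined by its mean and covariance, has a well-known classical counterpart. As a hedged illustration (this is the standard Fisher information matrix of a classical Gaussian distribution, not the quantum Fisher information of the paper), the matrix can be computed directly from derivatives of the mean and covariance:

```python
import numpy as np

# For a classical Gaussian N(mu(theta), Sigma(theta)) the Fisher matrix is
#   F_ij = dmu_i^T Sigma^{-1} dmu_j
#          + 0.5 * tr(Sigma^{-1} dSigma_i Sigma^{-1} dSigma_j),
# i.e. it depends only on the mean displacement and covariance, mirroring
# the Gaussian-state structure described in the abstract.
def gaussian_fisher(dmu, dsigma, sigma):
    inv = np.linalg.inv(sigma)
    n = len(dmu)
    F = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            F[i, j] = (dmu[i] @ inv @ dmu[j]
                       + 0.5 * np.trace(inv @ dsigma[i] @ inv @ dsigma[j]))
    return F

# Two parameters: theta1 shifts the mean, theta2 scales the covariance.
sigma = np.eye(2)
dmu = [np.array([1.0, 0.0]), np.array([0.0, 0.0])]
dsigma = [np.zeros((2, 2)), np.eye(2)]
F = gaussian_fisher(dmu, dsigma, sigma)
print(F)
```

Here the two parameters decouple, so F is diagonal; in the quantum setting the analogous matrices are built from the symmetric/right logarithmic derivative operators.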
Combined Estimation of Hydrogeologic Conceptual Model and Parameter Uncertainty
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Cantrell, Kirk J.
2004-03-01T23:59:59.000Z
The objective of the research described in this report is the development and application of a methodology for comprehensively assessing the hydrogeologic uncertainties involved in dose assessment, including uncertainties associated with conceptual models, parameters, and scenarios. This report describes and applies a statistical method to quantitatively estimate the combined uncertainty in model predictions arising from conceptual model and parameter uncertainties. The method relies on model averaging to combine the predictions of a set of alternative models. Implementation is driven by the available data. When there is minimal site-specific data the method can be carried out with prior parameter estimates based on generic data and subjective prior model probabilities. For sites with observations of system behavior (and optionally data characterizing model parameters), the method uses model calibration to update the prior parameter estimates and model probabilities based on the correspondence between model predictions and site observations. The set of model alternatives can contain both simplified and complex models, with the requirement that all models be based on the same set of data. The method was applied to the geostatistical modeling of air permeability at a fractured rock site. Seven alternative variogram models of log air permeability were considered to represent data from single-hole pneumatic injection tests in six boreholes at the site. Unbiased maximum likelihood estimates of variogram and drift parameters were obtained for each model. Standard information criteria provided an ambiguous ranking of the models, which would not justify selecting one of them and discarding all others as is commonly done in practice. Instead, some of the models were eliminated based on their negligibly small updated probabilities and the rest were used to project the measured log permeabilities by kriging onto a rock volume containing the six boreholes. 
These four projections, and associated kriging variances, were averaged using the posterior model probabilities as weights. Finally, cross-validation was conducted by eliminating from consideration all data from one borehole at a time, repeating the above process, and comparing the predictive capability of the model-averaged result with that of each individual model. Using two quantitative measures of comparison, the model-averaged result was superior to any individual geostatistical model of log permeability considered.
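The averaging step described above, weighting each model's prediction and variance by its posterior model probability, can be sketched generically. The information-criterion values, predictions, and variances below are illustrative placeholders, not the site data.

```python
import numpy as np

# Posterior-probability-weighted model averaging: weights from
# information-criterion values via w_k ∝ exp(-0.5 * ΔIC_k), then average
# predictions and add between-model spread to the within-model variance.
ic = np.array([10.0, 11.5, 14.0, 30.0])   # assumed IC value per model
pred = np.array([2.0, 2.3, 1.8, 5.0])     # each model's prediction
var = np.array([0.10, 0.12, 0.15, 0.20])  # each model's (kriging) variance

delta = ic - ic.min()
w = np.exp(-0.5 * delta)
w /= w.sum()                               # posterior model probabilities

mean_avg = w @ pred
# total variance = averaged within-model variance + between-model spread
var_avg = w @ var + w @ (pred - mean_avg) ** 2

print(w)          # the last model's weight is negligibly small
print(mean_avg, var_avg)
```

The last model's negligible weight shows how poorly supported alternatives are effectively eliminated, as in the report, without ever making a hard single-model selection.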
The generation of shared cryptographic keys through channel impulse response estimation at 60 GHz.
Young, Derek P.; Forman, Michael A.; Dowdle, Donald Ryan
2010-09-01T23:59:59.000Z
Methods to generate private keys based on wireless channel characteristics have been proposed as an alternative to standard key-management schemes. In this work, we discuss past work in the field and offer a generalized scheme for the generation of private keys using uncorrelated channels in multiple domains. Proposed cognitive enhancements measure channel characteristics, to dynamically change transmission and reception parameters as well as estimate private key randomness and expiration times. Finally, results are presented on the implementation of a system for the generation of private keys for cryptographic communications using channel impulse-response estimation at 60 GHz. The testbed is composed of commercial millimeter-wave VubIQ transceivers, laboratory equipment, and software implemented in MATLAB. Novel cognitive enhancements are demonstrated, using channel estimation to dynamically change system parameters and estimate cryptographic key strength. We show for a complex channel that secret key generation can be accomplished on the order of 100 kb/s.
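The reciprocity idea behind channel-based key generation can be sketched in a few lines: both ends observe (nearly) the same channel impulse response corrupted by independent noise, and quantize it into bits. This is a toy illustration with assumed tap counts and noise levels, not the report's 60 GHz testbed or its cognitive enhancements.

```python
import numpy as np

# Both nodes estimate the same channel impulse response; independent
# estimation noise causes a small key-bit mismatch that a reconciliation
# protocol would correct in a real system.
rng = np.random.default_rng(3)
taps = rng.normal(0, 1, 128)               # shared channel impulse response
alice = taps + rng.normal(0, 0.05, 128)    # noisy estimate at node A
bob = taps + rng.normal(0, 0.05, 128)      # noisy estimate at node B

# One-bit quantization of each tap against that node's own median.
key_a = (alice > np.median(alice)).astype(int)
key_b = (bob > np.median(bob)).astype(int)
mismatch = np.mean(key_a != key_b)
print(mismatch)
```

Only taps lying close to the median flip between the two nodes, so the raw key disagreement stays small relative to the key length.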
Effect of noncircularity of experimental beam on CMB parameter estimation
Das, Santanu; Paulson, Sonu Tabitha
2015-01-01T23:59:59.000Z
Measurement of Cosmic Microwave Background (CMB) anisotropies has been playing a lead role in precision cosmology by providing some of the tightest constraints on cosmological models and parameters. However, precision can only be meaningful when all major systematic effects are taken into account. Non-circular beams in CMB experiments can cause large systematic deviations in the angular power spectrum, not only by modifying the measurement at a given multipole, but also by introducing coupling between different multipoles through a deterministic bias matrix. Here we add a mechanism for emulating the effect of a full bias matrix to the Planck likelihood code through the parameter estimation code SCoPE. We show that if the angular power spectrum was measured with a non-circular beam, the assumption of a circular Gaussian beam, or considering only the diagonal part of the bias matrix, can lead to large errors in parameter estimation. We demonstrate that, at least for elliptical Gaussian beams, use of scalar beam window functions...
Error estimates and specification parameters for functional renormalization
Schnoerr, David; Boettcher, Igor (E-mail: I.Boettcher@thphys.uni-heidelberg.de); Pawlowski, Jan M.; Wetterich, Christof [Institute for Theoretical Physics, University of Heidelberg, D-69120 Heidelberg (Germany); Pawlowski also: ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung mbH, D-64291 Darmstadt (Germany)]
2013-07-15T23:59:59.000Z
We present a strategy for estimating the error of truncated functional flow equations. While the basic functional renormalization group equation is exact, approximated solutions by means of truncations do not only depend on the choice of the retained information, but also on the precise definition of the truncation. Therefore, results depend on specification parameters that can be used to quantify the error of a given truncation. We demonstrate this for the BCS–BEC crossover in ultracold atoms. Within a simple truncation the precise definition of the frequency dependence of the truncated propagator affects the results, indicating a shortcoming of the choice of a frequency independent cutoff function.
Multi-parameter estimating photometric redshifts with artificial neural networks
Lili Li; Yanxia Zhang; Yongheng Zhao; Dawei Yang
2007-04-17T23:59:59.000Z
We calculate photometric redshifts from the Sloan Digital Sky Survey Data Release 2 Galaxy Sample using artificial neural networks (ANNs). Different input patterns based on various parameters (e.g. magnitude, color index, flux information) are explored and their performances for redshift prediction are compared. For the ANN technique, any parameter may be easily incorporated as input, but our results indicate that using dereddening magnitudes produces photometric redshift accuracies often better than the Petrosian magnitude or model magnitude. Similarly, the model magnitude is also superior to the Petrosian magnitude. In addition, ANNs show better performance as more effective parameters are added to the training set. Finally, the method is tested on a sample of 79,346 galaxies from the SDSS DR2. When using 19 parameters based on the dereddening magnitude, the rms error in redshift estimation is sigma(z)=0.020184. The ANN is a highly competitive tool when compared with traditional template-fitting methods where a large and representative training set is available.
Chauhan, Sanjay S.
Defining breach parameters is central to estimating consequences of dam failure for use in Dam Safety Risk Assessments and emergency preparedness applications. The scale of estimated consequences associated with dam failure depends on inundation characteristics, the materials making up the dam, and the reservoir head and volume at the time of failure.
Optimal Bayesian experimental design for contaminant transport parameter estimation
Tsilifis, Panagiotis; Hajali, Paris
2015-01-01T23:59:59.000Z
Experimental design is crucial for inference where limitations in the data collection procedure are present due to cost or other restrictions. Optimal experimental designs determine parameters that in some appropriate sense make the data the most informative possible. In a Bayesian setting this is translated to updating to the best possible posterior. Information theoretic arguments have led to the formation of the expected information gain as a design criterion. This can be evaluated mainly by Monte Carlo sampling and maximized by using stochastic approximation methods, both known for being computationally expensive tasks. We propose an alternative framework where a lower bound of the expected information gain is used as the design criterion. In addition to alleviating the computational burden, this also addresses issues concerning estimation bias. The problem of permeability inference in a large contaminated area is used to demonstrate the validity of our approach where we employ the massively parallel vers...
Cosmological parameter estimation and Bayesian model comparison using VSA data
Anze Slosar; Pedro Carreira; Kieran Cleary; Rod D. Davies; Richard J. Davis; Clive Dickinson; Ricardo Genova-Santos; Keith Grainge; Carlos M. Gutierrez; Yaser A. Hafez; Michael P. Hobson; Michael E. Jones; Rudiger Kneissl; Katy Lancaster; Anthony Lasenby; J. P. Leahy; Klaus Maisinger; Phil J. Marshall; Guy G. Pooley; Rafael Rebolo; Jose Alberto Rubino-Martin; Ben Rusholme; Richard D. E. Saunders; Richard Savage; Paul F. Scott; Pedro J. Sosa Molina; Angela C. Taylor; David Titterington; Elizabeth Waldram; Robert A. Watson; Althea Wilkinson
2003-02-28T23:59:59.000Z
We constrain the basic cosmological parameters using the first observations by the Very Small Array (VSA) in its extended configuration, together with existing cosmic microwave background data and other cosmological observations. We estimate cosmological parameters for four different models of increasing complexity. In each case, careful consideration is given to implied priors and the Bayesian evidence is calculated in order to perform model selection. We find that the data are most convincingly explained by a simple flat Lambda-CDM cosmology without tensor modes. In this case, combining just the VSA and COBE data sets yields the 68 per cent confidence intervals Omega_b h^2=0.034 (+0.007, -0.007), Omega_dm h^2 = 0.18 (+0.06, -0.04), h=0.72 (+0.15, -0.13), n_s=1.07 (+0.06, -0.06) and sigma_8=1.17 (+0.25, -0.20). The most general model considered includes spatial curvature, tensor modes, massive neutrinos and a parameterised equation of state for the dark energy. In this case, by combining all recent cosmological data, we find, in particular, the 95 per cent limit on the tensor-to-scalar ratio R < 0.63 and on the fraction of massive neutrinos f_nu < 0.11; we also obtain the 68 per cent confidence interval w=-1.06 (+0.20, -0.25) on the equation of state of dark energy.
Direct Reservoir Parameter Estimation Using Joint Inversion of Marine Seismic AVA & CSEM Data
2005-01-01T23:59:59.000Z
Direct estimation of reservoir parameters from geophysical data by joint inversion of marine seismic AVA and CSEM data; the inversion improves the seismic data fit at times below the reservoir.
C. Pankow; P. Brady; E. Ochsner; R. O'Shaughnessy
2015-02-15T23:59:59.000Z
We introduce a highly-parallelizable architecture for estimating parameters of compact binary coalescence using gravitational-wave data and waveform models. Using a spherical harmonic mode decomposition, the waveform is expressed as a sum over modes that depend on the intrinsic parameters (e.g. masses) with coefficients that depend on the observer-dependent extrinsic parameters (e.g. distance, sky position). The data is then prefiltered against those modes, at fixed intrinsic parameters, enabling efficient evaluation of the likelihood for generic source positions and orientations, independent of waveform length or generation time. We efficiently parallelize our intrinsic space calculation by integrating over all extrinsic parameters using a Monte Carlo integration strategy. Since the waveform generation and prefiltering happen only once, the cost of integration dominates the procedure. Also, we operate hierarchically, using information from existing gravitational-wave searches to identify the regions of parameter space to emphasize in our sampling. As proof of concept and verification of the result, we have implemented this algorithm using standard time-domain waveforms, processing each event in less than one hour on recent computing hardware. For most events we evaluate the marginalized likelihood (evidence) with statistical errors of less than about 5%, and even smaller in many cases. With a bounded runtime independent of the waveform model starting frequency, a nearly-unchanged strategy could estimate NS-NS parameters in the 2018 advanced LIGO era. Our algorithm is usable with any noise curve and existing time-domain model at any mass, including some waveforms which are computationally costly to evolve.
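The marginalization step described above, integrating the likelihood over extrinsic parameters by Monte Carlo at fixed intrinsic parameters, can be sketched with a one-dimensional stand-in. The "distance" likelihood and prior range below are assumed for illustration only.

```python
import numpy as np

# Monte Carlo marginalization over one extrinsic parameter:
#   L_marg = (1/M) * sum_k L(d_k),  d_k ~ prior.
rng = np.random.default_rng(4)

def likelihood(d, d_true=100.0, sigma=10.0):
    # Toy Gaussian likelihood in a stand-in "distance" parameter.
    return np.exp(-0.5 * ((d - d_true) / sigma) ** 2)

M = 200_000
d_samples = rng.uniform(0.0, 200.0, M)    # flat prior on [0, 200]
L_marg = likelihood(d_samples).mean()

# Analytic check: Gaussian integral divided by the prior width.
exact = np.sqrt(2 * np.pi) * 10.0 / 200.0
print(L_marg, exact)
```

Because the likelihood evaluations are independent, the sum parallelizes trivially, which is the property the architecture exploits across the extrinsic space.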
UWB channel estimation using new generating TR transceivers
Nekoogar, Faranak (San Ramon, CA); Dowla, Farid U. (Castro Valley, CA); Spiridon, Alex (Palo Alto, CA); Haugen, Peter C. (Livermore, CA); Benzel, Dave M. (Livermore, CA)
2011-06-28T23:59:59.000Z
The present invention presents a simple and novel channel estimation scheme for UWB communication systems. As disclosed herein, the present invention maximizes the extraction of information by incorporating a new generation of transmitted-reference (TR) transceivers that utilize a single reference pulse (or a preamble of reference pulses) to provide improved channel estimation while offering better Bit Error Rate (BER) performance and higher data rates without diluting the transmitter power.
Estimating crop net primary production using inventory data and MODIS-derived parameters
Bandaru, Varaprasad; West, Tristram O.; Ricciuto, Daniel M.; Izaurralde, Roberto C.
2013-06-03T23:59:59.000Z
National estimates of spatially-resolved cropland net primary production (NPP) are needed for diagnostic and prognostic modeling of carbon sources, sinks, and net carbon flux. Cropland NPP estimates that correspond with existing cropland cover maps are needed to drive biogeochemical models at the local scale and over national and continental extents. Existing satellite-based NPP products tend to underestimate NPP on croplands. A new Agricultural Inventory-based Light Use Efficiency (AgI-LUE) framework was developed to estimate individual crop biophysical parameters for use in estimating crop-specific NPP. The method is documented here and evaluated for corn and soybean crops in Iowa and Illinois in years 2006 and 2007. The method includes a crop-specific enhanced vegetation index (EVI) from the Moderate Resolution Imaging Spectroradiometer (MODIS), shortwave radiation data estimated using Mountain Climate Simulator (MTCLIM) algorithm and crop-specific LUE per county. The combined aforementioned variables were used to generate spatially-resolved, crop-specific NPP that correspond to the Cropland Data Layer (CDL) land cover product. The modeling framework represented well the gradient of NPP across Iowa and Illinois, and also well represented the difference in NPP between years 2006 and 2007. Average corn and soybean NPP from AgI-LUE was 980 g C m-2 yr-1 and 420 g C m-2 yr-1, respectively. This was 2.4 and 1.1 times higher, respectively, for corn and soybean compared to the MOD17A3 NPP product. Estimated gross primary productivity (GPP) derived from AgI-LUE were in close agreement with eddy flux tower estimates. The combination of new inputs and improved datasets enabled the development of spatially explicit and reliable NPP estimates for individual crops over large regional extents.
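The light-use-efficiency logic behind the framework above, accumulating NPP as efficiency times absorbed radiation, can be shown with a minimal numeric sketch. The LUE value, the linear EVI-to-fAPAR mapping, and the daily inputs are illustrative assumptions, not the AgI-LUE coefficients.

```python
# Minimal light-use-efficiency NPP sketch:
#   NPP ≈ sum over days of LUE * fAPAR * PAR, with fAPAR approximated
#   from EVI by an assumed linear mapping.
lue = 1.5            # g C per MJ APAR (assumed crop-specific value)
days = [(0.60, 10.0), (0.70, 11.0), (0.65, 9.5)]  # (EVI, PAR in MJ m^-2 day^-1)

npp = 0.0
for evi, par in days:
    fapar = 1.24 * evi - 0.17    # illustrative EVI-to-fAPAR mapping
    npp += lue * fapar * par     # g C m^-2 accumulated this day

print(npp)
```

Summed over a full growing season per county, with crop-specific LUE and MODIS-derived EVI, this accumulation yields the spatially resolved crop NPP described above.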
On the empirical statistics of parameter estimates in parametric modeling
Zhu, Yao
1988-01-01T23:59:59.000Z
The PARCOR parametrization has received considerable attention because of its attractive features. One of these features is that the magnitudes of the k_i's obtained from the autocorrelation method are guaranteed to be less than one [6]. The estimates are asymptotically unbiased with covariance matrix C_A, because A are the maximum likelihood estimates. Similarly, the asymptotic probability density function of the PARCOR coefficient estimates K = (k_1, k_2, ..., k_p) is Gaussian.
Xu, Wen, 1967-
2001-01-01T23:59:59.000Z
Matched-field methods concern estimation of source location and/or ocean environmental parameters by exploiting full wave modeling of acoustic waveguide propagation. Typical estimation performance demonstrates two fundamental ...
Chaudhari, Qasim Mahmood
2009-05-15T23:59:59.000Z
This dissertation focuses on deriving efficient estimators for the clock parameters of the network nodes for synchronization with the reference node, and variance thresholds are obtained to lower bound the maximum achievable performance of the estimators. For any general...
Szilagyi, Jozsef
Jozsef Szilagyi, Conservation and Survey Division, University of Nebraska. Citation: Szilagyi, J., Sensitivity analysis of aquifer parameter estimations based on the Laplace...
Tonn, B.; Hwang, Ho-Ling; Elliot, S. [Oak Ridge National Lab., TN (United States); Peretz, J.; Bohm, R.; Hendrucko, B. [Univ. of Tennessee, Knoxville, TN (United States)
1994-04-01T23:59:59.000Z
This report contains descriptions of methodologies to be used to estimate the one-time generation of hazardous waste associated with five different types of remediation programs: Superfund sites, RCRA Corrective Actions, Federal Facilities, Underground Storage Tanks, and State and Private Programs. Estimates of the amount of hazardous wastes generated from these sources to be shipped off-site to commercial hazardous waste treatment and disposal facilities will be made on a state by state basis for the years 1993, 1999, and 2013. In most cases, estimates will be made for the intervening years, also.
Genetic parameter estimation of mohair production traits in Angora goats
Podisi, Baitsi
1998-01-01T23:59:59.000Z
Traits analyzed included fiber diameter (FD; n = 4329), grease fleece weight (FW; n = 7073), body weight (BW; n = 4171) and fertility (FERT; n = 2118). Heritability estimates were obtained for all the traits using REML procedures with a multivariate animal model.
Global neutrino parameter estimation using Markov Chain Monte Carlo
Steen Hannestad
2007-10-10T23:59:59.000Z
We present a Markov Chain Monte Carlo global analysis of neutrino parameters using both cosmological and experimental data. Results are presented for the combination of all presently available data from oscillation experiments, cosmology, and neutrinoless double beta decay. In addition we explicitly study the interplay between cosmological, tritium decay and neutrinoless double beta decay data in determining the neutrino mass parameters. We furthermore discuss how the inference of non-neutrino cosmological parameters can benefit from future neutrino mass experiments such as the KATRIN tritium decay experiment or neutrinoless double beta decay experiments.
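A minimal Metropolis-Hastings sampler conveys the MCMC machinery used above. This is a generic one-dimensional sketch with a toy Gaussian pseudo-likelihood and a flat non-negativity prior standing in for the combined oscillation/cosmology/double-beta-decay likelihood; all numbers are assumptions.

```python
import numpy as np

# Metropolis-Hastings with a symmetric random-walk proposal: accept a
# proposed point with probability min(1, posterior ratio).
rng = np.random.default_rng(5)

def log_post(m, m0=0.2, sigma=0.1):
    if m < 0:
        return -np.inf                 # prior: the mass parameter is >= 0
    return -0.5 * ((m - m0) / sigma) ** 2

m, chain = 0.5, []
for _ in range(50_000):
    prop = m + rng.normal(0.0, 0.05)   # symmetric proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(m):
        m = prop                       # accept
    chain.append(m)

burned = np.array(chain[5000:])        # discard burn-in
print(burned.mean(), burned.std())
```

Replacing `log_post` with the sum of log-likelihoods from each data set is what turns this sketch into a global analysis, since independent constraints simply add in log space.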
Estimating Building Simulation Parameters via Bayesian Structure Learning
Edwards, Richard E [ORNL; New, Joshua Ryan [ORNL; Parker, Lynne Edwards [ORNL
2013-01-01T23:59:59.000Z
Many key building design policies are made using sophisticated computer simulations such as EnergyPlus (E+), the DOE flagship whole-building energy simulation engine. E+ and other sophisticated computer simulations have several major problems. The two main issues are 1) gaps between the simulation model and the actual structure, and 2) limitations of the modeling engine's capabilities. Currently, these problems are addressed by having an engineer manually calibrate simulation parameters to real world data or by using algorithmic optimization methods to adjust the building parameters. However, some simulation engines, like E+, are computationally expensive, which makes repeatedly evaluating the simulation engine costly. This work explores addressing this issue by automatically discovering the simulation's internal input and output dependencies from 20 Gigabytes of E+ simulation data; future extensions will use 200 Terabytes of E+ simulation data. The model is validated by inferring building parameters for E+ simulations with ground truth building parameters. Our results indicate that the model accurately represents parameter means with some deviation from the means, but does not support inferring parameter values that exist on the distribution's tail.
A novel method for improving the accuracy of parameter estimates
Otter, Russell William
1985-01-01T23:59:59.000Z
one fluid, Darcy's law can be written for each fluid. In the specific case of oil and water, designated by the subscripts o and w respectively, the equations are $v_o = -\frac{K k_{ro}}{\mu_o}\frac{\partial P_o}{\partial x}$ (2) and $v_w = -\frac{K k_{rw}}{\mu_w}\frac{\partial P_w}{\partial x}$ (3). Here $k_{ro}$ and $k_{rw}$ are the relative... efficient oil production schemes. In order for the simulations to be accurate, the mathematical models used must be appropriate and the parameters in the model equations must be correct. The parameters of importance to petroleum reservoir simulation...
Parameter Estimation and Tracking in Physical Layer Network Coding
Jain, Manish
2012-07-16T23:59:59.000Z
to the receiver at the relay node. Our approach will first jointly estimate the timing offsets and fading gains of both signals using a known pilot sequence sent by both transmitters at the beginning of the packet, and then perform Maximum Likelihood detection...
History matching via Ensemble Kalman Filtering for a synthetic test case
Mosegaard, Klaus
History matching via Ensemble Kalman Filtering for a synthetic test case. Jan Frydendall, IMM and CERE, jf@imm.dtu.dk. Outline: The Ensemble Kalman Filter; Parameter estimation; Test example; Conclusion.
Wavelet-Based Parameter Estimation for Trend Contaminated Fractionally Differenced Processes
Washington at Seattle, University of
Wavelet-Based Parameter Estimation for Trend Contaminated Fractionally Differenced Processes. Peter ... applied to scientific problems in the environmental and ecological sciences. ... a model of polynomial trend plus FD noise; the discrete wavelet transform (DWT) is applied to separate a time series
Integrated Estimation and Tracking of Performance Model Parameters with Autoregressive Trends
Woodside, C. Murray
Integrated Estimation and Tracking of Performance Model Parameters with Autoregressive Trends. Tao ... the model parameters can be tracked by an estimator such as a Kalman Filter, so that decisions can ... excessive cost (as is usually the case for the CPU time of a service). Because there may be significant
Cohen, Israel
Simultaneous parameter estimation and state smoothing of complex GARCH processes in the presence of noise (2010). Keywords: GARCH; parameter estimation; noisy data; maximum likelihood; recursive maximum likelihood. ARCH and GARCH models have been used recently in model-based signal processing
Data-driven Techniques to Estimate Parameters in the Homogenized Energy Model for Shape Memory Alloys. In this paper, we focus on the homogenized energy model for shape memory alloys (SMA). Specifically, we develop ... parameters are compared to the initial estimates. 1 Introduction: Shape memory alloys (SMA) are novel
Parameter Estimation in Groundwater Flow Models with Distributed and Pointwise Observations*
Parameter Estimation in Groundwater Flow Models with Distributed and Pointwise Observations. Ben G. ... concerning the least squares estimation of parameters in a groundwater flow model. As is typically the case ... 1 Introduction: Understanding the flow of groundwater is an important scientific and engineering
On the estimation of galaxy structural parameters: the Sersic Model
Ignacio Trujillo; Alister W. Graham; Nicola Caon
2001-02-22T23:59:59.000Z
This paper addresses some questions which have arisen from the use of the Sérsic $r^{1/n}$ law in modelling the luminosity profiles of early-type galaxies. The first issue deals with the trend between the half-light radius and the structural parameter n. We show that the correlation between these two parameters is not only real, but is a natural consequence of the relations previously found to exist between the model-independent parameters: total luminosity, effective radius and effective surface brightness. We also define a new galaxy concentration index which is largely independent of the image exposure depth, and monotonically related to n. The second question concerns the curious coincidence between the form of the Fundamental Plane and the coupling between $\langle I\rangle_e$ and $r_e$ when modelling a light profile. We explain, through a mathematical analysis of the Sérsic law, why the quantity $r_e\langle I\rangle_e^{0.7}$ appears almost constant for an individual galaxy, regardless of the value of n (over a large range) adopted in the fit to the light profile. Consequently, Fundamental Planes of the form $r_e\langle I\rangle_e^{0.7} \propto \sigma_0^x$ (for any x, and where $\sigma_0$ is the central galaxy velocity dispersion) are insensitive to galaxy structure. Finally, we address the problematic issue of the use of model-dependent galaxy light profile parameters versus model-independent quantities for the half-light radii, mean surface brightness and total galaxy magnitude. The former implicitly assume that the light profile model can be extrapolated to infinity, while the latter quantities, in general, are derived from a signal-to-noise truncated profile. We quantify (mathematically) how these parameters change as one reduces the outer radius of an $r^{1/n}$ profile, and reveal how these can vary substantially when n>4.
Acquaviva, Viviana; Gawiser, Eric
2015-01-01T23:59:59.000Z
We seek to improve the accuracy of joint galaxy photometric redshift estimation and spectral energy distribution (SED) fitting. By simulating different sources of uncorrected systematic errors, we demonstrate that if the uncertainties on the photometric redshifts are estimated correctly, so are those on the other SED fitting parameters, such as stellar mass, stellar age, and dust reddening. Furthermore, we find that if the redshift uncertainties are over(under)-estimated, the uncertainties in SED parameters tend to be over(under)-estimated by similar amounts. These results hold even in the presence of severe systematics and provide, for the first time, a mechanism to validate the uncertainties on these parameters via comparison with spectroscopic redshifts. We propose a new technique (annealing) to re-calibrate the joint uncertainties in the photo-z and SED fitting parameters without compromising the performance of the SED fitting + photo-z estimation. This procedure provides a consistent estimation of the mu...
Chen, Jinsong
Joint stochastic inversion of geophysical data for reservoir parameter estimation. Jinsong Chen ... within the stochastic framework, both reservoir parameters and geophysical attributes at unsampled locations. Introduction: Conventional methods for reservoir parameter estimation using multiple sources of geophysical data
Estimating home energy decision parameters for a hybrid energy-economy policy model
Estimating home energy decision parameters for a hybrid energy-economy policy model. Mark Jaccard ... Canada. E-mail: jaccard@sfu.ca. Hybrid energy-economy models combine the advantages of a technologically ... parameters translate into the behavioral parameters of a hybrid model. We then simulate household energy
ASYMPTOTIC DISTRIBUTION OF ESTIMATES FOR A TIME-VARYING PARAMETER IN A HARMONIC MODEL
Irizarry, Rafael A.
ASYMPTOTIC DISTRIBUTION OF ESTIMATES FOR A TIME-VARYING PARAMETER IN A HARMONIC MODEL WITH MULTIPLE ... Harmonic regression models are useful for cases where harmonic parameters appear to be time-varying. Keywords: least squares, harmonic regression, signal processing, sound analysis, time-varying parameters, weighted least squares
Madankan, R. [Department of Mechanical and Aerospace Engineering, University at Buffalo (United States); Pouget, S. [Department of Geology, University at Buffalo (United States); Singla, P., E-mail: psingla@buffalo.edu [Department of Mechanical and Aerospace Engineering, University at Buffalo (United States); Bursik, M. [Department of Geology, University at Buffalo (United States); Dehn, J. [Geophysical Institute, University of Alaska, Fairbanks (United States); Jones, M. [Center for Computational Research, University at Buffalo (United States); Patra, A. [Department of Mechanical and Aerospace Engineering, University at Buffalo (United States); Pavolonis, M. [NOAA-NESDIS, Center for Satellite Applications and Research (United States); Pitman, E.B. [Department of Mathematics, University at Buffalo (United States); Singh, T. [Department of Mechanical and Aerospace Engineering, University at Buffalo (United States); Webley, P. [Geophysical Institute, University of Alaska, Fairbanks (United States)
2014-08-15T23:59:59.000Z
Volcanic ash advisory centers are charged with forecasting the movement of volcanic ash plumes for aviation, health, and safety preparation. Deterministic mathematical equations model the advection and dispersion of these plumes. However, initial plume conditions – height, profile of particle location, volcanic vent parameters – are known only approximately at best, and other features of the governing system, such as the windfield, are stochastic. These uncertainties make forecasting plume motion difficult. As a result, ash advisories based on a deterministic approach tend to be conservative and many times over- or under-estimate the extent of a plume. This paper presents an end-to-end framework for a probabilistic approach to ash plume forecasting. The framework uses an ensemble of solutions, guided by the Conjugate Unscented Transform (CUT) method for evaluating expectation integrals. This ensemble is used to construct a polynomial chaos expansion that can be sampled cheaply to provide a probabilistic model forecast. The CUT method is then combined with a minimum variance condition to provide a full posterior pdf of the uncertain source parameters, based on observed satellite imagery. The April 2010 eruption of the Eyjafjallajökull volcano in Iceland is employed as a test example. The puff advection/dispersion model is used to hindcast the motion of the ash plume through time, concentrating on the period 14–16 April 2010. Variability in the height and particle loading of that eruption is introduced through a volcano column model called bent. Output uncertainty due to the assumed uncertain input parameter probability distributions, and a probabilistic spatial-temporal estimate of ash presence, are computed.
Online parameter estimation applied to mixed conduction/radiation
Shah, Tejas Jagdish
2005-08-29T23:59:59.000Z
covariance: $P_0 = E[(x_0 - E[x_0])(x_0 - E[x_0])^T]$ (3.21). Time update equations: state estimate propagation $\hat{x}_k^- = f(k; \hat{x}_{k-1})$ (3.22); error covariance propagation $P_k^- = F_{k,k-1} P_{k-1} F_{k,k-1}^T + Q_{k-1}$ (3.23). Measurement update equations: Kalman gain matrix $G_k = P_k^- H_k^T [H_k P_k^- \ldots$ ... $(x_0^a - \hat{x}_0^a)(x_0^a - \hat{x}_0^a)^T = \mathrm{diag}(P_0, R_v, R_n)$ (3.36). Calculate sigma points: $X_{k-1}^a = [\hat{x}_{k-1}^a,\ \hat{x}_{k-1}^a + \gamma\sqrt{P_{k-1}^a},\ \hat{x}_{k-1}^a - \gamma\sqrt{P_{k-1}^a}]$ (3.37). Time update equations: $X_{k|k-1}^x = F(X_{k-1}^x, u_{k-1}, X_{k-1}^v)$ (3.38); $\hat{x}_k^- = \sum_{i=0}^{2L} W_i^{(m)} X_{i,k|k-1}^x$ (3.39); $P_k^- = \sum_{i=0}^{2L} \ldots$
Test models for improving filtering with model errors through stochastic parameter estimation
Gershgorin, B. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States); Harlim, J. [Department of Mathematics, North Carolina State University, NC 27695 (United States)], E-mail: jharlim@ncsu.edu; Majda, A.J. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States)
2010-01-01T23:59:59.000Z
The filtering skill for turbulent signals from nature is often limited by model errors created by utilizing an imperfect model for filtering. Updating the parameters in the imperfect model through stochastic parameter estimation is one way to increase filtering skill and model performance. Here a suite of stringent test models for filtering with stochastic parameter estimation is developed based on the Stochastic Parameterization Extended Kalman Filter (SPEKF). These new SPEKF-algorithms systematically correct both multiplicative and additive biases and involve exact formulas for propagating the mean and covariance including the parameters in the test model. A comprehensive study is presented of robust parameter regimes for increasing filtering skill through stochastic parameter estimation for turbulent signals as the observation time and observation noise are varied and even when the forcing is incorrectly specified. The results here provide useful guidelines for filtering turbulent signals in more complex systems with significant model errors.
A Monte Carlo study of the distribution of parameter estimators in a dual exponential decay model
Garcia, Raul
1969-01-01T23:59:59.000Z
of an estimate of the reliability of the parameter estimates calculated. In 1965, Bell and Garcia [2] developed a computer program which permits a solution of the parameters without the time-consuming effort of manual calculations. The same year, Rossing [3] ... A MONTE CARLO STUDY OF THE DISTRIBUTION OF PARAMETER ESTIMATORS IN A DUAL EXPONENTIAL DECAY MODEL. A Thesis by RAUL GARCIA. Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree...
Boyer, Edmond
Consequences of a multi-generation exposure to uranium on Caenorhabditis elegans life parameters ... of uranium. Several generations were selected to assess growth, reproduction, survival, and dose
Estimation of the parameters of the Weibull distribution from multi-censored samples
Sprinkle, Edgar Eugene
1969-01-01T23:59:59.000Z
ESTIMATION OF THE PARAMETERS OF THE WEIBULL DISTRIBUTION FROM MULTI-CENSORED SAMPLES. A Thesis by EDGAR EUGENE SPRINKLE, III. Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, May 1969. Major Subject: Statistics. Approved as to style and content by: (Head of Department) (Member...
Failure Pressure Estimates of Steam Generator Tubes Containing Wear-type Defects
Yoon-Suk Chang; Jong-Min Kim; Nam-Su Huh; Young-Jin Kim [School of Mechanical Engineering, Sungkyunkwan University (Korea, Republic of); Seong Sik Hwang; Joung-Soo Kim [Korea Atomic Energy Research Institute (Korea, Republic of)
2006-07-01T23:59:59.000Z
It is commonly required that steam generator tubes with defects exceeding 40% of wall thickness in depth be plugged so that they can sustain all postulated loads with an appropriate margin. The critical defect dimensions have been determined based on the concept of plastic instability. This criterion, however, is known to be too conservative for some locations and types of defects. In this context, accurate failure estimation for steam generator tubes with a defect draws increasing attention. Although several guidelines have been developed and are used for assessing the integrity of defective tubes, most of these guidelines address stress corrosion cracking or wall-thinning phenomena. As some steam generator tubes also fail due to fretting and similar mechanisms, alternative failure estimation schemes for the relevant defects are required. In this paper, three-dimensional finite element (FE) analyses are carried out under internal pressure conditions to simulate the failure behavior of steam generator tubes with different defect configurations: elliptical wastage type, wear scar type and rectangular wastage type defects. Maximum pressures based on material strengths are obtained from more than a hundred FE results to predict the failure of the steam generator tube. After investigating the effect of key parameters such as wastage depth, wastage length and wrap angle, simplified failure estimation equations are proposed in relation to the equivalent stress at the deepest point in the wastage region. Comparison of failure pressures predicted by the proposed estimation scheme with corresponding burst test data shows good agreement, which provides confidence in the use of the proposed equations to assess the integrity of steam generator tubes with wear-type defects. (authors)
ESTIMATING DAMPING PARAMETERS IN MULTI-DEGREE-OF-FREEDOM VIBRATION SYSTEMS BY BALANCING ENERGY
Feeny, Brian
ESTIMATING DAMPING PARAMETERS IN MULTI-DEGREE-OF-FREEDOM VIBRATION SYSTEMS BY BALANCING ENERGY ... is outlined, involving a balance of dissipated and supplied energies over a cycle of periodic vibration ... a damping estimation method based on the balance of energy. The idea is to compute the energy input per
Estimating Canopy Fuel Parameters with In-Situ and Remote Sensing Data
Mutlu, Muge
2012-02-14T23:59:59.000Z
is to estimate the forest canopy fuel parameters including crown base height (CBH) and crown bulk density (CBD), and to investigate the potential of using airborne lidar data in east Texas. The specific objectives are to: (1) propose allometric estimators of CBD...
Convolution particle filtering for parameter estimation in general state-space models
Paris-Sud XI, Université de
of these aspects [6] [4]. The second approach takes place in a classical Bayesian framework, with a prior probability ... suited, given the context of parameter estimation. Firstly, the usual non-Bayesian statistical estimates ... results in practice but suffer from an absence of theoretical backing. The particle filters propose a good
Estimation of regional aquifer parameters using baseflow recession data
Ponce, V. Miguel
Rorabaugh's (1963) theoretical model of groundwater flow to a stream is used to estimate regional aquifer parameters ... diffusivity, hydrogeology, Mexico, Papaloapan. 1 Introduction: In groundwater hydrology ... basin. More recent studies have applied Rorabaugh's model to estimate groundwater recharge in diverse
Short communication: Real-time estimation of lead-acid battery parameters: A dynamic...
Ray, Asok
over-charged and over-discharged; similarly, reliable SOH estimates enhance preventive maintenance and life-cycle cost ... situations. © 2014 Elsevier B.V. All rights reserved. 1. Introduction: Lead-acid batteries provide low-cost
Ko, Kyungduk
2005-11-01T23:59:59.000Z
The main goal of this research is to estimate the model parameters and to detect multiple change points in the long memory parameter of Gaussian ARFIMA(p, d, q) processes. Our approach is Bayesian and inference is done on wavelet domain. Long memory...
Archisman Ghosh; Walter Del Pozzo; Parameswaran Ajith
2015-05-21T23:59:59.000Z
We characterize the expected statistical errors with which the parameters of black-hole binaries can be measured from gravitational-wave (GW) observations of their inspiral, merger and ringdown by a network of second-generation ground-based GW observatories. We simulate a population of black-hole binaries with uniform distribution of component masses in the interval $(3,80)~M_\odot$, distributed uniformly in comoving volume, with isotropic orientations. From signals producing signal-to-noise ratio $\geq 5$ in at least two detectors, we estimate the posterior distributions of the binary parameters using the Bayesian parameter estimation code LALInference. The GW signals will be redshifted due to the cosmological expansion and we measure only the "redshifted" masses. By assuming a cosmology, it is possible to estimate the gravitational masses by inferring the redshift from the measured posterior of the luminosity distance. We find that the measurement of the gravitational masses will in general be dominated by the error in measuring the luminosity distance. In spite of this, the component masses of more than $50\%$ of the population can be measured with accuracy better than $\sim 25\%$ using the Advanced LIGO-Virgo network. Additionally, the mass of the final black hole can be measured with median accuracy $\sim 18\%$. The spin of the final black hole can be measured with median accuracy $\sim 5\%~(17\%)$ for binaries with non-spinning (aligned-spin) black holes. Additional detectors in Japan and India significantly improve the accuracy of sky localization and moderately improve the estimation of luminosity distance, and hence that of all mass parameters. We discuss the implications of these results for the observational evidence of intermediate-mass black holes and the estimation of cosmological parameters using GW observations.
Singal, J.; Shmakova, M.; Gerke, B.; /KIPAC, Menlo Park /SLAC /Stanford U.; Griffith, R.L.; /Caltech, JPL; Lotz, J.; /NOAO, Tucson
2011-05-20T23:59:59.000Z
We present a determination of the effects of including galaxy morphological parameters in photometric redshift estimation with an artificial neural network method. Neural networks, which recognize patterns in the information content of data in an unbiased way, can be a useful estimator of the additional information contained in extra parameters, such as those describing morphology, if the input data are treated on an equal footing. We show that certain principal components of the morphology information are correlated with galaxy type. However, we find that for the data used the inclusion of morphological information does not have a statistically significant benefit for photometric redshift estimation with the techniques employed here. The inclusion of these parameters may result in a trade-off between extra information and additional noise, with the additional noise becoming more dominant as more parameters are added.
Ilya Mandel; Christopher P L Berry; Frank Ohme; Stephen Fairhurst; Will M Farr
2014-07-23T23:59:59.000Z
Gravitational-wave astronomy seeks to extract information about astrophysical systems from the gravitational-wave signals they emit. For coalescing compact-binary sources this requires accurate model templates for the inspiral and, potentially, the subsequent merger and ringdown. Models with frequency-domain waveforms that terminate abruptly in the sensitive band of the detector are often used for parameter-estimation studies. We show that the abrupt waveform termination contains significant information that affects parameter-estimation accuracy. If the sharp cutoff is not physically motivated, this extra information can lead to misleadingly good accuracy claims. We also show that using waveforms with a cutoff as templates to recover complete signals can lead to biases in parameter estimates. We evaluate when the information content in the cutoff is likely to be important in both cases. We also point out that the standard Fisher matrix formalism, frequently employed for approximately predicting parameter-estimation accuracy, cannot properly incorporate an abrupt cutoff that is present in both signals and templates; this observation explains some previously unexpected results found in the literature. These effects emphasize the importance of using complete waveforms with accurate merger and ringdown phases for parameter estimation.
Direct Reservoir Parameter Estimation Using Joint Inversion of Marine Seismic AVA & CSEM Data
Hoversten, G. Michael; Cassassuce, Florence; Gasperikova, Erika; Newman,Gregory A.; Rubin, Yoram; Zhangshuan, Hou; Vasco, Don
2005-01-12T23:59:59.000Z
A new joint inversion algorithm to directly estimate reservoir parameters is described. This algorithm combines seismic amplitude versus angle (AVA) and marine controlled source electromagnetic (CSEM) data. The rock-properties model needed to link the geophysical parameters to the reservoir parameters is described. Errors in the rock-properties model parameters, measured in percent, introduce errors of comparable size in the joint inversion reservoir parameter estimates. Tests of the concept on synthetic one-dimensional models demonstrate improved fluid saturation and porosity estimates for joint AVA-CSEM data inversion (compared to AVA or CSEM inversion alone). Comparing inversions of AVA, CSEM, and joint AVA-CSEM data over the North Sea Troll field, at a location with well control, shows that the joint inversion produces estimated gas saturation, oil saturation and porosity that is closest (as measured by the RMS difference, L1 norm of the difference, and net over the interval) to the logged values whereas CSEM inversion provides the closest estimates of water saturation.
Enhancing parameter precision of optimal quantum estimation by direct quantum feedback
Qiang Zheng; Li Ge; Yao Yao; Qi-jun Zhi
2015-03-01T23:59:59.000Z
Various schemes have been proposed to overcome the drawback of decoherence in quantum-enhanced parameter estimation. Here we suggest an alternative method, quantum feedback, to enhance the parameter precision of optimal quantum estimation for a dissipative qubit by investigating the dynamics of its quantum Fisher information. We find that, compared with the case without feedback, the quantum Fisher information of the dissipative qubit under feedback reaches a larger maximum value during the time evolution and exhibits a smaller decay rate at long times.
Mukhopadhyay, S.; Tsang, Y.; Finsterle, S.
2009-01-15T23:59:59.000Z
A simple conceptual model has recently been developed for analyzing pressure and temperature data from flowing fluid temperature logging (FFTL) in unsaturated fractured rock. Using this conceptual model, we developed an analytical solution for the FFTL pressure response and a semianalytical solution for the FFTL temperature response. We also proposed a method for estimating fracture permeability from FFTL temperature data. The conceptual model was based on some simplifying assumptions, in particular the use of a single-phase airflow model. In this paper, we develop a more comprehensive numerical model of the multiphase flow and heat transfer associated with FFTL. Using this numerical model, we perform a number of forward simulations to determine the parameters that have the strongest influence on the pressure and temperature response from FFTL. We then use the iTOUGH2 optimization code to estimate these most sensitive parameters through inverse modeling and to quantify the uncertainties associated with the estimated parameters. We conclude that FFTL can be utilized to determine the permeability, porosity, and thermal conductivity of the fractured rock. Two other parameters, which are not properties of the fractured rock, also have a strong influence on the FFTL response: the pressure and temperature in the borehole that were at equilibrium with the fractured rock formation at the beginning of FFTL. We illustrate how these parameters can also be estimated from FFTL data.
Gershgorin, B. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States); Harlim, J. [Department of Mathematics, North Carolina State University, NC 27695 (United States)], E-mail: jharlim@ncsu.edu; Majda, A.J. [Department of Mathematics and Center for Atmosphere and Ocean Science, Courant Institute of Mathematical Sciences, New York University, NY 10012 (United States)
2010-01-01T23:59:59.000Z
The filtering and predictive skill for turbulent signals is often limited by the lack of information about the true dynamics of the system and by our inability to resolve the assumed dynamics with sufficiently high resolution using the current computing power. The standard approach is to use a simple yet rich family of constant parameters to account for model errors through parameterization. This approach can have significant skill by fitting the parameters to some statistical feature of the true signal; however in the context of real-time prediction, such a strategy performs poorly when intermittent transitions to instability occur. Alternatively, we need a set of dynamic parameters. One strategy for estimating parameters on the fly is a stochastic parameter estimation through partial observations of the true signal. In this paper, we extend our newly developed stochastic parameter estimation strategy, the Stochastic Parameterization Extended Kalman Filter (SPEKF), to filtering sparsely observed spatially extended turbulent systems which exhibit abrupt stability transition from time to time despite a stable average behavior. For our primary numerical example, we consider a turbulent system of externally forced barotropic Rossby waves with instability introduced through intermittent negative damping. We find high filtering skill of SPEKF applied to this toy model even in the case of very sparse observations (with only 15 out of the 105 grid points observed) and with unspecified external forcing and damping. Additive and multiplicative bias corrections are used to learn the unknown features of the true dynamics from observations. We also present a comprehensive study of predictive skill in the one-mode context including the robustness toward variation of stochastic parameters, imperfect initial conditions and finite ensemble effect. 
Furthermore, the proposed stochastic parameter estimation scheme applied to the same spatially extended Rossby wave system demonstrates high predictive skill, comparable with the skill of the perfect model for a duration of many eddy turnover times especially in the unstable regime.
Oxford, University of
A forward microphysical model to predict the size-distribution parameters of laboratory-generated ... Interactions: Condensational Growth and Coagulation, submitted for Indian Aerosol Science and Technology ... Microphysical Model for the UTLS (FAMMUS) is applied to predict the size-distribution parameters of laboratory
Non-Stationary Spectral Estimation for Wind Turbine Induction Generator Faults Detection
Paris-Sud XI, Université de
direct- or indirect-drive, fixed- or variable-speed turbine generators, advanced signal processing tools are required ... on the generator stator current. The detection algorithm uses a recursive maximum likelihood estimator to track ... Keywords: induction machine, fault detection, stator current, spectral estimation, maximum likelihood estimator.
Parameter Estimation and Capacity Fade Analysis of Lithium-Ion Batteries Using Reformulated Models
Subramanian, Venkat
Parameter Estimation and Capacity Fade Analysis of Lithium-Ion Batteries Using Reformulated Models ... and characterize capacity fade in lithium-ion batteries. As a complement to approaches to mathematically model ... been made in developing lithium-ion battery models that incorporate transport phenomena
Parameter Estimation and Life Modeling of Lithium-Ion Cells Shriram Santhanagopalan,*,a
Parameter Estimation and Life Modeling of Lithium-Ion Cells. Shriram Santhanagopalan, Qi Zhang ... Carolina, Columbia, South Carolina 29208, USA. Lithium-ion pouch cells were cycled at five different ... The lithium-ion cell is among the most popular candidates considered actively as a replacement for nickel
Parameter Estimation in the Presence of Bounded Data Uncertainties (IEEE Signal Processing Letters, vol. 4, no. 7, July 1997, p. 195)
Sayed, Ali
of bounded data uncertainties. The new method is suitable when a priori bounds on the uncertain data ... The goal in estimation is to recover, to good accuracy, a set of unobservable parameters from corrupted data. Several ... the criteria having had the most applications are those based on quadratic cost functions. The most
Parameter Estimation for Semi-Solid Aluminum Alloys using Transient Experiments
Georgiou, Georgios
Parameter Estimation for Semi-Solid Aluminum Alloys using Transient Experiments. A. Alexandrou, G. ... aluminum alloys, one prepared by Magneto Hydrodynamic Stirring (MHD), and the other by the Semi ... with the inner cylinder replaced by a 4-bladed Anviloy 1150TM alloy vane. Two vanes with length of 43 mm
Luhua, Lai
Estimating Protein-Ligand Binding Free Energy: Atomic Solvation Parameters for Partition Coefficient and Solvation Free Energy Calculation. Jianfeng Pei, Qi Wang, Jiaju Zhou, and Luhua Lai ... free energy and the correct scoring in docking studies. We have developed a new solvation energy
Nicholls, Geoff
Estimating mutation parameters, population history and genealogy simultaneously from temporally spaced sequence data ... and population size that incorporates the uncertainty in the genealogy of such temporally spaced sequences ... features of this approach on a genealogy of HIV-1 envelope (env) partial sequences. 1 Introduction One
Modeling and Bayesian parameter estimation for shape memory alloy bending actuators
Modeling and Bayesian parameter estimation for shape memory alloy bending actuators John H. Crews ... a homogenized energy model (HEM) for shape memory alloy (SMA) bending actuators. Additionally, we utilize a Bayesian approach. Keywords: shape memory alloys, uncertainty quantification, Markov chain Monte Carlo 1. INTRODUCTION Shape
Statistical Simulation to Estimate Uncertain Behavioral Parameters of Hybrid Energy-Economy Models
Statistical Simulation to Estimate Uncertain Behavioral Parameters of Hybrid Energy-Economy Models 2011 # Springer Science+Business Media B.V. 2011 Abstract In energy-economy modeling, new hybrid models ... (2) backcasting a hybrid energy-economy model over a historical time period; and (3) the application of Markov
On the Parameter Estimation of Linear Models of Aggregate Power System Loads
Cañizares, Claudio A.
1 On the Parameter Estimation of Linear Models of Aggregate Power System Loads Valery Knyazkin. This paper addresses some theoretical and practical issues relevant to the problem of power system load modeling, and the corresponding results are used to validate a commonly used linear model of aggregate power system load
ACCEPTED TO IEEE TRANSACTIONS ON POWER SYSTEMS 1 On the Parameter Estimation and Modeling of
Cañizares, Claudio A.
ACCEPTED TO IEEE TRANSACTIONS ON POWER SYSTEMS 1 On the Parameter Estimation and Modeling of Aggregate Power System Loads Valery Knyazkin, Student Member, IEEE, Claudio Cañizares, Senior Member, IEEE relevant to the problem of power system load modeling and identification. Two identification techniques
PEAS: A toolbox to assess the accuracy of estimated parameters in environmental models
Checchi, Elisabetta Giusti, Stefano Marsili-Libelli* Department of Systems and Computers, University, in addition to parameter estimation, such as error function plotting, trajectory sensitivity, Monte Carlo regions are computed and a confidence test is produced. The Monte Carlo analysis is available
MODAL PARAMETER ESTIMATION FOR OPERATIONAL WIND TURBINES Emilio Di Lorenzo1, 2
Boyer, Edmond
MODAL PARAMETER ESTIMATION FOR OPERATIONAL WIND TURBINES Emilio Di Lorenzo1, 2 , Simone Manzato1 Claudio 21, 80125 Naples, Italy emilio.dilorenzo@lmsintl.com ABSTRACT Wind turbines are time-varying systems; the time-invariance assumption holds in the case of parked wind turbines, but not in the case of operating wind turbines
Parameter estimation in commodity markets: a filtering approach Robert J. Elliott
Hyndman, Cody
as crude oil) using futures price data. A one-factor model for the spot commodity price is used. A difficulty in the implementation of commodity market models is that one or more of the factors may be unobservable. In practice, the model parameters are fit to market data and the time series of the unobservable factors is estimated. The method
Nasser, Hassan
2014-01-01T23:59:59.000Z
We propose a numerical method to learn Maximum Entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This is an extension of two papers [10] and [4], which proposed the estimation of parameters where only spatial constraints were taken into account. The extension we propose allows us to properly handle memory effects in spike statistics for large neural networks.
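For the purely spatial case that [10] and [4] start from, MaxEnt fitting reduces to adjusting the model parameters until the model moments match the empirical ones. A minimal sketch for two neurons with exact enumeration (the target firing rates and pairwise correlation are made-up stand-ins for empirical spike statistics; real spatio-temporal fitting over large networks needs the approximations the paper develops):

```python
import math

# Hypothetical target statistics, standing in for empirical spike-train moments:
# mean firing probabilities m1, m2 and the pairwise moment c12 = <s1*s2>.
target = {"m1": 0.30, "m2": 0.45, "c12": 0.20}

def model_moments(h1, h2, J):
    """Exact moments of P(s1,s2) proportional to exp(h1*s1 + h2*s2 + J*s1*s2),
    with binary spikes s_i in {0, 1} (small enough to enumerate exactly)."""
    Z = m1 = m2 = c12 = 0.0
    for s1 in (0, 1):
        for s2 in (0, 1):
            w = math.exp(h1 * s1 + h2 * s2 + J * s1 * s2)
            Z += w
            m1 += s1 * w
            m2 += s2 * w
            c12 += s1 * s2 * w
    return m1 / Z, m2 / Z, c12 / Z

def fit_maxent(target, lr=0.5, steps=20000):
    """Gradient ascent on the concave log-likelihood: each parameter moves
    toward matching its own empirical constraint (moment matching)."""
    h1 = h2 = J = 0.0
    for _ in range(steps):
        m1, m2, c12 = model_moments(h1, h2, J)
        h1 += lr * (target["m1"] - m1)
        h2 += lr * (target["m2"] - m2)
        J += lr * (target["c12"] - c12)
    return h1, h2, J

params = fit_maxent(target)
```

Because the dual objective is concave, this fixed-step ascent converges for any feasible set of target moments; the spatio-temporal extension replaces the exact enumeration with sampling.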
Estimation of interval anisotropy parameters using velocity-independent layer stripping
Tsvankin, Ilya
Wang1 and Ilya Tsvankin1 ABSTRACT Moveout analysis of long-spread P-wave data is widely used; here we extend it to interval parameter estimation in orthorhombic media using wide-azimuth, long-spread data. The parameters estimated by VILS in the shale layer above the reservoir are more plausible and less influenced by noise than those
A nonlinear observer to estimate unknown parameters during the synchronization of chaotic systems
L. Torres
2014-06-20T23:59:59.000Z
This paper proposes an Extended-Kalman-Filter-like observer for parameter estimation during synchronization of chaotic systems. The exponential stability of the observer is guaranteed by a persistent excitation condition. This approach is shown to be suitable for various classical chaotic systems and several simulations are presented accordingly.
An iterative stochastic ensemble method for parameter estimation of subsurface flow models
Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Center for Subsurface Modeling (CSM), Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Dept. of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia); Dept. of Applied Mathematics and Computational Sciences, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)]; Wheeler, Mary F. [Center for Subsurface Modeling (CSM), Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States)]; Hoteit, Ibrahim [Dept. of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia); Dept. of Applied Mathematics and Computational Sciences, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)]
2013-06-01T23:59:59.000Z
Parameter estimation for subsurface flow models is an essential step for maximizing the value of numerical simulations for future prediction and the development of effective control strategies. We propose the iterative stochastic ensemble method (ISEM) as a general method for parameter estimation based on stochastic estimation of gradients using an ensemble of directional derivatives. ISEM eliminates the need for adjoint coding and deals with the numerical simulator as a blackbox. The proposed method employs directional derivatives within a Gauss–Newton iteration. The update equation in ISEM resembles the update step in ensemble Kalman filter, however the inverse of the output covariance matrix in ISEM is regularized using standard truncated singular value decomposition or Tikhonov regularization. We also investigate the performance of a set of shrinkage based covariance estimators within ISEM. The proposed method is successfully applied on several nonlinear parameter estimation problems for subsurface flow models. The efficiency of the proposed algorithm is demonstrated by the small size of utilized ensembles and in terms of error convergence rates.
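The ingredients named in the abstract can be sketched on a toy problem: a black-box forward model, a stochastic Jacobian estimate built from an ensemble of random directional derivatives, and a regularized Gauss-Newton update (Tikhonov here, standing in for the truncated-SVD option). The exponential forward model, the parameter names, and all tuning constants are illustrative assumptions, not the subsurface simulator:

```python
import math, random

random.seed(0)

# Toy black-box forward model standing in for the flow simulator:
# y(t) = p0 * exp(-p1 * t), observed at 20 times.
ts = [0.1 * i for i in range(1, 21)]
def forward(p):
    return [p[0] * math.exp(-p[1] * t) for t in ts]

p_true = [2.0, 1.0]
y_obs = forward(p_true)            # noiseless synthetic observations

def inv2(a, b, c, d):
    """Inverse of the 2x2 matrix [[a, b], [c, d]], returned row-major."""
    det = a * d - b * c
    return d / det, -b / det, -c / det, a / det

def ensemble_jacobian(p, n=20, eps=1e-3):
    """Stochastic Jacobian from an ensemble of directional derivatives:
    J ~ dY dP^T (dP dP^T)^{-1}, dP holding the random perturbations."""
    base = forward(p)
    dP = [[random.gauss(0, eps) for _ in p] for _ in range(n)]
    dY = []
    for dp in dP:
        out = forward([p[k] + dp[k] for k in range(len(p))])
        dY.append([out[i] - base[i] for i in range(len(out))])
    A = [[sum(dp[i] * dp[j] for dp in dP) for j in range(2)] for i in range(2)]
    ia, ib, ic, id_ = inv2(A[0][0], A[0][1], A[1][0], A[1][1])
    J = []
    for i in range(len(base)):
        b0 = sum(dY[j][i] * dP[j][0] for j in range(n))
        b1 = sum(dY[j][i] * dP[j][1] for j in range(n))
        J.append([b0 * ia + b1 * ic, b0 * ib + b1 * id_])
    return J, base

def isem_step(p, lam=1e-8):
    """One Gauss-Newton update with Tikhonov regularization of the normal
    equations (a simple stand-in for the paper's truncated-SVD variant)."""
    J, base = ensemble_jacobian(p)
    r = [y_obs[i] - base[i] for i in range(len(y_obs))]
    M00 = sum(row[0] * row[0] for row in J) + lam
    M01 = sum(row[0] * row[1] for row in J)
    M11 = sum(row[1] * row[1] for row in J) + lam
    g0 = sum(J[i][0] * r[i] for i in range(len(r)))
    g1 = sum(J[i][1] * r[i] for i in range(len(r)))
    ia, ib, ic, id_ = inv2(M00, M01, M01, M11)
    return [p[0] + ia * g0 + ib * g1, p[1] + ic * g0 + id_ * g1]

p = [1.8, 0.8]                     # initial guess
for _ in range(8):
    p = isem_step(p)
```

The key property the abstract emphasizes survives in the sketch: only forward evaluations of the simulator are used, so no adjoint code is needed.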
Yellapantula, Sudha
2010-01-16T23:59:59.000Z
for each realization of X. Given an N-point data set {X[0], X[1], ..., X[n]}, parameter estimation is the process of defining an estimator which best estimates the underlying unknown parameter [2], [3]:
$$\hat{\theta} = g(X[0], X[1], \ldots, X[n])$$
The journal model..., which is defined in Matlab as twice the integral of the Gaussian distribution function with zero mean and variance 0.5:
$$\operatorname{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2}\, dt \qquad (2.37)$$
Hence, erf(∞) = 1 and erf(−∞) = −1.
$$f(X) = \frac{1}{\sqrt{2\pi}}\, e^{-\frac{(X-\mu)^2}{2}} \qquad (2.38)$$
$$E[g(X_1)] = \int_{-\infty}^{L} g(X_1) f(X_1)\, dX_1$$
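The erf identity quoted above (erf as twice the integral of a zero-mean Gaussian with variance 0.5) can be checked numerically; the quadrature step count below is an arbitrary choice:

```python
import math

def gaussian_pdf(x, var=0.5):
    """Density of N(0, var); with var = 0.5 the exponent becomes -x^2,
    which is exactly the integrand of erf up to normalization."""
    return math.exp(-x * x / (2 * var)) / math.sqrt(2 * math.pi * var)

def twice_gaussian_integral(x, n=10000):
    """2 * integral from 0 to x of the N(0, 0.5) density, trapezoid rule."""
    h = x / n
    s = 0.5 * (gaussian_pdf(0.0) + gaussian_pdf(x))
    s += sum(gaussian_pdf(i * h) for i in range(1, n))
    return 2 * h * s
```

For any x the result agrees with `math.erf(x)` to quadrature accuracy, confirming that the variance-0.5 Gaussian is the right normalization.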
EFFECT OF UNCERTAINTIES IN STELLAR MODEL PARAMETERS ON ESTIMATED MASSES AND RADII OF SINGLE STARS
Basu, Sarbani [Department of Astronomy, Yale University, P.O. Box 208101, New Haven, CT 06520-8101 (United States); Verner, Graham A.; Chaplin, William J.; Elsworth, Yvonne, E-mail: sarbani.basu@yale.edu, E-mail: gav@bison.ph.bham.ac.uk, E-mail: w.j.chaplin@bham.ac.uk, E-mail: y.p.elsworth@bham.ac.uk [School of Physics and Astronomy, University of Birmingham, Edgbaston, Birmingham B15 2TT (United Kingdom)
2012-02-10T23:59:59.000Z
Accurate and precise values of radii and masses of stars are needed to correctly estimate properties of extrasolar planets. We examine the effect of uncertainties in stellar model parameters on estimates of the masses, radii, and average densities of solar-type stars. We find that in the absence of seismic data on solar-like oscillations, stellar masses can be determined to a greater accuracy than either stellar radii or densities; but to get reasonably accurate results the effective temperature, log g, and metallicity must be measured to high precision. When seismic data are available, stellar density is the most well-determined property, followed by radius, with mass the least well-determined property. Uncertainties in stellar convection, quantified in terms of uncertainties in the value of the mixing length parameter, cause the most significant errors in the estimates of stellar properties.
Kao, Jim [Los Alamos National Laboratory, Applied Physics Division, P.O. Box 1663, MS T086, Los Alamos, NM 87545 (United States)]. E-mail: kao@lanl.gov; Flicker, Dawn [Los Alamos National Laboratory, Applied Physics Division, P.O. Box 1663, MS T086, Los Alamos, NM 87545 (United States); Ide, Kayo [University of California at Los Angeles (United States); Ghil, Michael [University of California at Los Angeles (United States)
2006-05-20T23:59:59.000Z
This paper builds upon our recent data assimilation work with the extended Kalman filter (EKF) method [J. Kao, D. Flicker, R. Henninger, S. Frey, M. Ghil, K. Ide, Data assimilation with an extended Kalman filter for an impact-produced shock-wave study, J. Comp. Phys. 196 (2004) 705-723.]. The purpose is to test the capability of EKF in optimizing a model's physical parameters. The problem is to simulate the evolution of a shock produced through a high-speed flyer plate. In the earlier work, we showed that the EKF allows one to estimate the evolving state of the shock wave from a single pressure measurement, assuming that all model parameters are known. In the present paper, we show that imperfectly known model parameters can also be estimated accordingly, along with the evolving model state, from the same single measurement. The model parameter optimization using the EKF can be achieved through a simple modification of the original EKF formalism by including the model parameters into an augmented state variable vector. While the regular state variables are governed by both deterministic and stochastic forcing mechanisms, the parameters are only subject to the latter. The optimally estimated model parameters are thus obtained through a unified assimilation operation. We show that improving the accuracy of the model parameters also improves the state estimate. The time variation of the optimized model parameters results from blending the data and the corresponding values generated from the model and lies within a small range, of less than 2%, from the parameter values of the original model. The solution computed with the optimized parameters performs considerably better and has a smaller total variance than its counterpart using the original time-constant parameters. These results indicate that the model parameters play a dominant role in the performance of the shock-wave hydrodynamic code at hand.
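The augmented-state trick described above can be sketched on a toy scalar system (the dynamics, noise levels, and tuning below are illustrative assumptions, not the shock-wave model of the paper): the unknown parameter is appended to the state vector, receives only a small stochastic forcing, and is updated by the same EKF gain as the state.

```python
import math, random

random.seed(1)

# Scalar system with unknown parameter a: x_{k+1} = a*x_k + w_k, y_k = x_k + v_k.
# Augment the state to z = [x, a]; the EKF then estimates both jointly.
a_true, q, r = 0.9, 0.2 ** 2, 0.1 ** 2

x = 1.0
z = [0.0, 0.5]                     # initial guesses for [x, a]
P = [[1.0, 0.0], [0.0, 1.0]]       # initial covariance

for _ in range(3000):
    # simulate the truth and a noisy measurement
    x = a_true * x + random.gauss(0, 0.2)
    y = x + random.gauss(0, 0.1)

    # predict: z -> [a*x_est, a]; Jacobian F = [[a, x_est], [0, 1]]
    xe, ae = z
    z = [ae * xe, ae]
    F = [[ae, xe], [0.0, 1.0]]
    FP = [[F[i][0] * P[0][j] + F[i][1] * P[1][j] for j in range(2)]
          for i in range(2)]
    P = [[FP[i][0] * F[j][0] + FP[i][1] * F[j][1] for j in range(2)]
         for i in range(2)]
    P[0][0] += q                   # state process noise
    P[1][1] += 1e-5                # small drift keeps the parameter adaptive

    # update with H = [1, 0] (only the state is measured)
    S = P[0][0] + r
    K = [P[0][0] / S, P[1][0] / S]
    innov = y - z[0]
    z = [z[0] + K[0] * innov, z[1] + K[1] * innov]
    P = [[P[0][0] - K[0] * P[0][0], P[0][1] - K[0] * P[0][1]],
         [P[1][0] - K[1] * P[0][0], P[1][1] - K[1] * P[0][1]]]
```

Because the parameter enters the dynamics multiplicatively, the Jacobian picks up the cross term d(a*x)/da = x, which is what couples the measurement innovations into the parameter estimate, mirroring the unified assimilation operation described in the abstract.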
Available Alarms in CDE for Next-Day Generation Estimates - March...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
submit Generation Estimates through the Customer Data Entry (CDE) in web Trans or the WECC Electronic Industrial Data Exchange (EIDE) as described in BPAT's Business Practices....
Massive Black Hole Binary Inspirals: Results from the LISA Parameter Estimation Taskforce
K. G. Arun; Stas Babak; Emanuele Berti; Neil Cornish; Curt Cutler; Jonathan Gair; Scott A. Hughes; Bala R. Iyer; Ryan N. Lang; Ilya Mandel; Edward K. Porter; Bangalore S. Sathyaprakash; Siddhartha Sinha; Alicia M. Sintes; Miquel Trias; Chris Van Den Broeck; Marta Volonteri
2009-03-30T23:59:59.000Z
The LISA Parameter Estimation (LISAPE) Taskforce was formed in September 2007 to provide the LISA Project with vetted codes, source distribution models, and results related to parameter estimation. The Taskforce's goal is to be able to quickly calculate the impact of any mission design changes on LISA's science capabilities, based on reasonable estimates of the distribution of astrophysical sources in the universe. This paper describes our Taskforce's work on massive black-hole binaries (MBHBs). Given present uncertainties in the formation history of MBHBs, we adopt four different population models, based on (i) whether the initial black-hole seeds are small or large, and (ii) whether accretion is efficient or inefficient at spinning up the holes. We compare four largely independent codes for calculating LISA's parameter-estimation capabilities. All codes are based on the Fisher-matrix approximation, but in the past they used somewhat different signal models, source parametrizations and noise curves. We show that once these differences are removed, the four codes give results in extremely close agreement with each other. Using a code that includes both spin precession and higher harmonics in the gravitational-wave signal, we carry out Monte Carlo simulations and determine the number of events that can be detected and accurately localized in our four population models.
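All four Taskforce codes rest on the Fisher-matrix approximation, in which the parameter covariance is approximated by the inverse of Gamma_ij = sum_k d_i h(t_k) d_j h(t_k) / sigma^2. A minimal sketch of that calculation for a toy monochromatic signal in white noise (the signal model, sampling, and noise level are illustrative assumptions, not the Taskforce's waveform models):

```python
import math

# Hypothetical signal h(t; A, f) = A*sin(2*pi*f*t) sampled in white noise of
# standard deviation sigma; parameters theta = (A, f).
A, f, sigma = 1.0, 5.0, 0.5
ts = [i / 1000.0 for i in range(1000)]    # 1 s at 1 kHz

def partials(t):
    """Analytic derivatives of h with respect to (A, f) at time t."""
    dA = math.sin(2 * math.pi * f * t)
    df = A * 2 * math.pi * t * math.cos(2 * math.pi * f * t)
    return dA, df

# Fisher matrix: Gamma_ij = sum_k d_i h(t_k) * d_j h(t_k) / sigma^2
G = [[0.0, 0.0], [0.0, 0.0]]
for t in ts:
    d = partials(t)
    for i in range(2):
        for j in range(2):
            G[i][j] += d[i] * d[j] / sigma ** 2

# Cramer-Rao bounds: sigma_theta_i = sqrt((Gamma^{-1})_ii)
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
sigma_A = math.sqrt(G[1][1] / det)
sigma_f = math.sqrt(G[0][0] / det)
```

For this toy model the amplitude bound reduces to the familiar sigma*sqrt(2/N), which gives a quick sanity check on the numerically assembled matrix; the Taskforce codes do the same linear algebra with far richer waveform derivatives.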
Optimal Estimation of Several Linear Parameters in the Presence of Lorentzian Thermal Noise
Steffen, Jason H; Boynton, Paul E
2008-01-01T23:59:59.000Z
We expand on the results of a previous article to derive the optimal estimation of several linear parameters for a continuous time series. We show that working in the basis of the thermal driving force both simplifies the calculations and provides additional insight into the efficacy of different estimation techniques. To illustrate this point, we compare the variances in the optimal estimators for thermal noise with those of two approximate methods which, like the optimal estimators, suppress the contribution to the variance that comes from the unwanted, resonant motion of the oscillator. We discuss how these methods fare when the dominant noise process is either white displacement noise or noise where the noise power is inversely proportional to the frequency (1/f noise), which is common in modern torsion pendulum experiments. A method to transform a parameter estimating function between the displacement basis and the basis of the thermal driving force is shown for the case of a high-Q oscillator. To find t...
Yang, Chao; Jiang, Wen; Chen, Dong-Hua; Adiga, Umesh; Ng, Esmond G.; Chiu, Wah
2008-07-28T23:59:59.000Z
The three-dimensional reconstruction of macromolecules from two-dimensional single-particle electron images requires determination and correction of the contrast transfer function (CTF) and envelope function. A computational algorithm based on constrained non-linear optimization is developed to estimate the essential parameters in the CTF and envelope function model simultaneously and automatically. The application of this estimation method is demonstrated with focal series images of amorphous carbon film as well as images of ice-embedded icosahedral virus particles suspended across holes.
Generator parameter uncertainties in the frequency-and-duration of cumulative margin events
Tram, Nhat-Hanh
1977-01-01T23:59:59.000Z
ABSTRACT: Generator Parameter Uncertainties in the Frequency-and-Duration of Cumulative Margin Events. (May 1977) Nhat-Hanh Tram, B.S., Texas A&M University. LIST OF TABLES: 1. Generating Unit Parameters (Example for Sensitivity Studies); 2. K Constants for Sensitivity Studies; 3. Frequency-and-Duration Sensitivities to λ; 4. Frequency-and-Duration Sensitivities to µ.
Radiatively Important Parameters Best Estimate (RIPBE): An ARM Value-Added Product
McFarlane, S; Shippert, T; Mather, J
2011-06-30T23:59:59.000Z
The Radiatively Important Parameters Best Estimate (RIPBE) VAP was developed to create a complete, clearly identified set of parameters on a uniform vertical and temporal grid to use as input to a radiative transfer model. One of the main drivers for RIPBE was to provide input to the Broadband Heating Rate Profile (BBHRP) VAP, but we also envision using RIPBE files for user-run radiative transfer codes, as part of cloud/aerosol retrieval testbeds, and as input to averaged datastreams for model evaluation.
Derivative-free optimization for parameter estimation in computational nuclear physics
Stefan M. Wild; Jason Sarich; Nicolas Schunck
2014-09-17T23:59:59.000Z
We consider optimization problems that arise when estimating a set of unknown parameters from experimental data, particularly in the context of nuclear density functional theory. We examine the cost of not having derivatives of these functionals with respect to the parameters. We show that the POUNDERS code for local derivative-free optimization obtains consistent solutions on a variety of computationally expensive energy density functional calibration problems. We also provide a primer on the operation of the POUNDERS software in the Toolkit for Advanced Optimization.
Nair, Remya; Tanaka, Takahiro
2015-01-01T23:59:59.000Z
We study the advantage of the co-existence of future ground and space based gravitational wave detectors, in estimating the parameters of a binary coalescence. Using the post-Newtonian waveform for the inspiral of non-spinning neutron star-black hole pairs in circular orbits, we study how the estimates for chirp mass, symmetric mass ratio, and time and phase at coalescence are improved by combining the data from different space-ground detector pairs. Since the gravitational waves produced by binary coalescence also provide a suitable domain where we can study strong field gravity, we also study the deviations from general relativity using the parameterized post-Einsteinian framework. As an example, focusing on the Einstein telescope and DECIGO pair, we demonstrate that there exists a sweet spot range of sensitivity in the pre-DECIGO phase where the best enhancement due to the synergy effect can be obtained for the estimates of the post-Newtonian waveform parameters as well as the modification parameters to ge...
Biomass Power Generation Market Capacity is Estimated to Reach...
Energy Concerns to Push Global Market to Grow at 8.1% CAGR from 2013 to 2019. Oil Shale Market is Estimated to Reach USD 7,400.70 Million by 2022.
Switching Mode Generation and Optimal Estimation with Application to Skid-Steering
Hartmann, Mitra J. Z.
Switching Mode Generation and Optimal Estimation with Application to Skid-Steering T. M. Caldwell. To treat the skid-steered vehicle as a switched system, the vehicle's ground interaction is modeled using ... Keywords: optimal estimation; optimal control; estimation algorithms. 1 Introduction The skid-steered vehicle (SSV)
Subramanian, Venkat
Parameter Estimation and Capacity Fade Analysis of Lithium-Ion Batteries Using First-Principles Models ... parameters of lithium-ion batteries are estimated using a first-principles electrochemical engineering model and understanding of lithium-ion batteries using physics-based first-principles models. These models are based
Estimation of scalar moments from explosion-generated surface waves
Stevens, J.L.
1985-04-01T23:59:59.000Z
Rayleigh waves from underground nuclear explosions are used to estimate scalar moments for 40 Nevada Test Site (NTS) explosions and 18 explosions at the Soviet East Kazakh test site. The Rayleigh wave spectrum is written as a product of functions that depend on the elastic structure of the travel path, the elastic structure of the source region and the Q structure of the path. Results are used to examine the worldwide variability of each factor and the resulting variability of surface wave amplitudes. The path elastic structure and Q structure are found by inversion of Rayleigh wave phase and group velocities and spectral amplitudes. The Green's function derived from this structure is used to estimate the moments of explosions observed along the same path. This procedure produces more consistent amplitude estimates than conventional magnitude measurements. Network scatter in log moment is typically 0.1. In contrast with time-domain amplitudes, the elastic structure of the travel path causes little variability in spectral amplitudes. When the mantle Q is constrained to a value of approximately 100 at depths greater than 120 km, the inversion for Q and moment produces moments that remain constant with distance. Based on the best models available, surface waves from NTS explosions should be larger than surface waves from East Kazakh explosions with the same moment. Estimated scalar moments for the largest East Kazakh explosions since 1976 are smaller than the estimated moments for the largest NTS explosions for the same time period.
Chen, H.W. (Los Alamos National Lab., NM (United States). Biophysics Group M715)
1995-01-01T23:59:59.000Z
Structural classification and parameter estimation (SCPE) methods are used for studying single-input single-output (SISO) parallel linear-nonlinear-linear (LNL), linear-nonlinear (LN), and nonlinear-linear (NL) system models from input-output (I-O) measurements. The uniqueness of the I-O mappings (see the definition of the I-O mapping in Section 3-A) of some model structures is discussed. The uniqueness of the I-O mappings of different models tells us under what conditions different model structures can be differentiated from one another. Parameter uniqueness of the I-O mapping of a given structural model is also discussed, which tells us under what conditions a given model's parameters can be uniquely estimated from I-O measurements. These methods are then generalized so that they can be used to study single-input multi-output (SIMO), multi-input single-output (MISO), as well as multi-input multi-output (MIMO) nonlinear system models. Parameter estimation of the two-input single-output nonlinear system model (denoted as the 2f-structure in 2 cited references), which was left unsolved previously, can now be obtained using the newly derived algorithms. Applications of SCPE methods for modeling visual cortical neurons, system fault detection, modeling and identification of communication networks, biological systems, and natural and artificial neural networks are also discussed. The feasibility of these methods is demonstrated using simulated examples. SCPE methods presented in this paper can be further developed to study more complicated block-structure models, and will therefore have future potential for modeling and identifying highly complex multi-input multi-output nonlinear systems.
Effect of squeezing on parameter estimation of gravitational waves emitted by compact binary systems
Ryan Lynch; Salvatore Vitale; Lisa Barsotti; Matthew Evans; Sheila Dwyer
2014-11-06T23:59:59.000Z
The LIGO gravitational wave (GW) detectors will begin collecting data in 2015, with Virgo following shortly after. The use of squeezing has been proposed as a way to reduce the quantum noise without increasing the laser power, and has been successfully tested at one of the LIGO sites and at GEO in Germany. When used in Advanced LIGO without a filter cavity, the squeezer improves the performance of detectors above about 100 Hz, at the cost of a higher noise floor in the low frequency regime. Frequency-dependent squeezing, on the other hand, will lower the noise floor throughout the entire band. Squeezing technology will have a twofold impact: it will change the number of expected detections and it will impact the quality of parameter estimation for the detected signals. In this work we consider three different GW detector networks, each utilizing a different type of squeezer, all corresponding to plausible implementations. Using LALInference, a powerful Monte Carlo parameter estimation algorithm, we study how each of these networks estimates the parameters of GW signals emitted by compact binary systems, and compare the results with a baseline Advanced LIGO-Virgo network. We find that, even in its simplest implementation, squeezing has a large positive impact: the sky error area of detected signals will shrink by about 30% on average, increasing the chances of finding an electromagnetic counterpart to the GW detection. Similarly, we find that the measurability of tidal deformability parameters for neutron stars in binaries increases by about 30%, which could aid in determining the equation of state of neutron stars. The degradation in the measurement of the chirp mass, as a result of the higher low-frequency noise, is shown to be negligible when compared to systematic errors.
A Bayesian Approach for Parameter Estimation and Prediction using a Computationally Intensive Model
Dave Higdon; Jordan D. McDonnell; Nicolas Schunck; Jason Sarich; Stefan M. Wild
2014-09-17T23:59:59.000Z
Bayesian methods have been very successful in quantifying uncertainty in physics-based problems in parameter estimation and prediction. In these cases, physical measurements y are modeled as the best fit of a physics-based model $\eta(\theta)$, where $\theta$ denotes the uncertain, best input setting. Hence the statistical model is of the form $y = \eta(\theta) + \epsilon$, where $\epsilon$ accounts for measurement, and possibly other, error sources. When non-linearity is present in $\eta(\cdot)$, the resulting posterior distribution for the unknown parameters in the Bayesian formulation is typically complex and non-standard, requiring computationally demanding approaches such as Markov chain Monte Carlo (MCMC) to produce multivariate draws from the posterior. While quite generally applicable, MCMC requires thousands, or even millions, of evaluations of the physics model $\eta(\cdot)$. This is problematic if the model takes hours or days to evaluate. To overcome this computational bottleneck, we present an approach adapted from Bayesian model calibration. This approach combines output from an ensemble of computational model runs with physical measurements, within a statistical formulation, to carry out inference. A key component of this approach is a statistical response surface, or emulator, estimated from the ensemble of model runs. We demonstrate this approach with a case study in estimating parameters for a density functional theory (DFT) model, using experimental mass/binding energy measurements from a collection of atomic nuclei. We also demonstrate how this approach produces uncertainties in predictions for recent mass measurements obtained at Argonne National Laboratory (ANL).
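The calibration loop described above — a small ensemble of expensive model runs, an emulator fit to them, and a posterior evaluated against the emulator instead of the model — can be sketched end to end. Everything here is an illustrative stand-in: the "physics model" is a cheap analytic function rather than a DFT code, and a piecewise-linear interpolant replaces the Gaussian-process response surface:

```python
import math, random

random.seed(2)

# Stand-in for the expensive physics model eta(theta); monotonic on [0, 3]
# so the toy posterior has a single mode.
def eta(theta):
    return theta + 0.5 * math.sin(theta)

theta_true = 1.2
y_data = [eta(theta_true) + random.gauss(0, 0.05) for _ in range(10)]

# Step 1: a small design of model runs (the only "expensive" evaluations).
design = [i * 0.25 for i in range(13)]            # theta in [0, 3]
runs = [(th, eta(th)) for th in design]

# Step 2: an emulator built from the ensemble of runs. The paper uses a
# Gaussian-process response surface; linear interpolation stands in here.
def emulator(theta):
    for (t0, y0), (t1, y1) in zip(runs, runs[1:]):
        if t0 <= theta <= t1:
            w = (theta - t0) / (t1 - t0)
            return (1 - w) * y0 + w * y1
    raise ValueError("theta outside design range")

# Step 3: posterior on a grid for y = eta(theta) + eps, eps ~ N(0, 0.05^2),
# with a flat prior over the design range; only the emulator is called.
grid = [i * 0.01 for i in range(1, 300)]
def log_post(theta):
    m = emulator(theta)
    return -sum((y - m) ** 2 for y in y_data) / (2 * 0.05 ** 2)

theta_map = max(grid, key=log_post)
```

The point of the pattern survives even in this sketch: after 13 model runs, the posterior scan makes hundreds of emulator calls but zero further calls to the expensive model.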
Quantum estimation of the Schwarzschild space-time parameters of the Earth
David Edward Bruschi; Animesh Datta; Rupert Ursin; Timothy C. Ralph; Ivette Fuentes
2014-08-31T23:59:59.000Z
We propose a quantum experiment to measure with high precision the Schwarzschild space-time parameters of the Earth. The scheme can also be applied to measure distances by taking into account the curvature of the Earth's space-time. As a wave-packet of (entangled) light is sent from the Earth to a satellite it is red-shifted and deformed due to the curvature of space-time. Measurements after the propagation enable the estimation of the space-time parameters. We compare our results with the state of the art, which involves classical measurement methods, and discuss what developments are required in space-based quantum experiments to improve on the current measurement of the Schwarzschild radius of the Earth.
Exceptional points for parameter estimation in open quantum systems: Analysis of the Bloch equations
Morag Am-Shallem; Ronnie Kosloff; Nimrod Moiseyev
2014-12-14T23:59:59.000Z
The dynamics of open quantum systems is typically described by a quantum dynamical semigroup generator $\mathcal{L}$. The eigenvalues of $\mathcal{L}$ are complex, reflecting unitary as well as dissipative dynamics. For certain values of the parameters defining $\mathcal{L}$, non-Hermitian degeneracies emerge, i.e. exceptional points (EP). We study the implications of such points in the open-system dynamics of a two-level system described by the Bloch equation. This open system has become the paradigm of diverse fields in physics, from NMR to quantum information and elementary particles. We find as a function of detuning and driving amplitude a continuous line of exceptional points merging into two cusps of triple degeneracy. The dynamical signature of these EP points is a unique time evolution. This unique feature can be employed experimentally to locate the EP points and thereby to determine the intrinsic system parameters to any desired accuracy.
Heath, G.
2012-06-01T23:59:59.000Z
This PowerPoint presentation, to be presented at the World Renewable Energy Forum on May 14, 2012, in Denver, CO, discusses systematic review and harmonization of life cycle GHG emission estimates for electricity generation technologies.
Novel Method for Incorporating Model Uncertainties into Gravitational Wave Parameter Estimates
Christopher J. Moore; Jonathan R. Gair
2014-12-11T23:59:59.000Z
Posterior distributions on parameters computed from experimental data using Bayesian techniques are only as accurate as the models used to construct them. In many applications these models are incomplete, which both reduces the prospects of detection and leads to a systematic error in the parameter estimates. In the analysis of data from gravitational wave detectors, for example, accurate waveform templates can be computed using numerical methods, but the prohibitive cost of these simulations means this can only be done for a small handful of parameters. In this work a novel method to fold model uncertainties into data analysis is proposed; the waveform uncertainty is analytically marginalised over with a prior distribution constructed by using Gaussian process regression to interpolate the waveform difference from a small training set of accurate templates. The method is well motivated, easy to implement, and no more computationally expensive than standard techniques. The new method is shown to perform extremely well when applied to a toy problem. While we use the application to gravitational wave data analysis to motivate and illustrate the technique, it can be applied in any context where model uncertainties exist.
Sophie Schirmer; Frank Langbein
2014-12-13T23:59:59.000Z
We compare the accuracy, precision and reliability of different methods for estimating key system parameters for two-level systems subject to Hamiltonian evolution and decoherence. It is demonstrated that the use of Bayesian modelling and maximum likelihood estimation is superior to common techniques based on Fourier analysis. Even for simple two-parameter estimation problems, the Bayesian approach yields higher accuracy and precision for the parameter estimates obtained. It requires less data, is more flexible in dealing with different model systems, can deal better with uncertainty in initial conditions and measurements, and enables adaptive refinement of the estimates. The comparison results show that this holds for measurements of large ensembles of spins and atoms limited by Gaussian noise as well as projection-noise-limited data from repeated single-shot measurements of a single quantum device.
Chen, Yong
Can a more realistic model error structure improve the parameter estimation in modelling the dynamics of fish populations? Y. Chen (Fisheries Conservation Chair Program, Fisheries), J.E. Paloheimo. ... or applying an estimation method that is robust to the error structure assumption in modelling the dynamics
Study of turbine-generator shaft parameters from the viewpoint of subsynchronous resonance
de Mello, F.P.; Chang, K.Q.; Hannett, L.N; Feltes, J.W.; Undrill, J.M.
1982-09-01T23:59:59.000Z
An investigation of susceptibility to subsynchronous resonance (SSR) problems in turbine generators as a function of shaft, network and load characteristics is the subject of this report. The evaluation of susceptibility was done through calculations of electrical damping to rotor speed perturbations and relating this to the value of modal damping required to cancel the estimated natural damping due to mechanical effects. The methodology of calculation is documented and results are presented for a number of network scenarios, machine and load characteristics. From these results some insight into the incidence of the SSR phenomenon as a function of shaft, machine, network and load characteristics has been derived.
Yao, Shuang
2014-07-30T23:59:59.000Z
λ̂_LL(x) − λ(x) = h²B(x) + √(V(x)/(nh³)) Z_n + o_p(h² + (nh³)^(−1/2)),  (2.4)  where B(x) = ((μ₄ − μ₂²)/(2μ₂)) g″(x) f′(x)/f(x) + μ₄ g‴(x)/(6μ₂), V(x) = λ²σ²(x)/[μ₂² f(x)], and Z_n is a mean-zero, unit-variance random variable (Z_n →d N(0, 1) under some standard regularity conditions) ... of λ̂_LL(x), and choose the smoothing parameter h to minimize a weighted version of the integrated (leading) squared bias and variance of λ̂_LL(x). This approach requires one to obtain initial estimates of g(x) and f(x) and their derivative functions up...
The impact of spurious shear on cosmological parameter estimates from weak lensing observables
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Petri, Andrea [Brookhaven National Laboratory (BNL), Upton, NY (United States); Columbia Univ., New York, NY (United States); May, Morgan [Brookhaven National Laboratory (BNL), Upton, NY (United States); Haiman, Zoltan [Columbia Univ., New York, NY (United States); Kratochvil, Jan M. [Univ. of KwaZulu-Natal, Durban (South Africa)
2014-12-01T23:59:59.000Z
Residual errors in shear measurements, after corrections for instrument systematics and atmospheric effects, can impact cosmological parameters derived from weak lensing observations. Here we combine convergence maps from our suite of ray-tracing simulations with random realizations of spurious shear. This allows us to quantify the errors and biases of the triplet (Ω_m, w, σ_8) derived from the power spectrum (PS), as well as from three different sets of non-Gaussian statistics of the lensing convergence field: Minkowski functionals (MFs), low-order moments (LMs), and peak counts (PKs). Our main results are as follows: (i) We find an order of magnitude smaller biases from the PS than in previous work. (ii) The PS and LM yield biases much smaller than the morphological statistics (MF, PK). (iii) For strictly Gaussian spurious shear with integrated amplitude as low as its current estimate of σ_sys² ≈ 10⁻⁷, biases from the PS and LM would be unimportant even for a survey with the statistical power of the Large Synoptic Survey Telescope. However, we find that for surveys larger than ≈ 100 deg², non-Gaussianity in the noise (not included in our analysis) will likely be important and must be quantified to assess the biases. (iv) The morphological statistics (MF, PK) introduce important biases even for Gaussian noise, which must be corrected in large surveys. The biases are in different directions in (Ω_m, w, σ_8) parameter space, allowing self-calibration by combining multiple statistics. Our results warrant follow-up studies with more extensive lensing simulations and more accurate spurious shear estimates.
Thorsten Stahn; Laurent Gizon
2008-03-14T23:59:59.000Z
Quantitative helio- and asteroseismology require very precise measurements of the frequencies, amplitudes, and lifetimes of the global modes of stellar oscillation. It is common knowledge that the precision of these measurements depends on the total length (T), quality, and completeness of the observations. Except in a few simple cases, the effect of gaps in the data on measurement precision is poorly understood, in particular in Fourier space where the convolution of the observable with the observation window introduces correlations between different frequencies. Here we describe and implement a rather general method to retrieve maximum likelihood estimates of the oscillation parameters, taking into account the proper statistics of the observations. Our fitting method applies in complex Fourier space and exploits the phase information. We consider both solar-like stochastic oscillations and long-lived harmonic oscillations, plus random noise. Using numerical simulations, we demonstrate the existence of cases for which our improved fitting method is less biased and has a greater precision than when the frequency correlations are ignored. This is especially true of low signal-to-noise solar-like oscillations. For example, we discuss a case where the precision on the mode frequency estimate is increased by a factor of five, for a duty cycle of 15%. In the case of long-lived sinusoidal oscillations, a proper treatment of the frequency correlations does not provide any significant improvement; nevertheless we confirm that the mode frequency can be measured from gapped data at a much better precision than the 1/T Rayleigh resolution.
7.6 MLE for Transformed Parameters: Given PDF p(x; θ) but want an estimate of α = g(θ)
Fowler, Mark
7.6 MLE for Transformed Parameters. Given PDF p(x; θ) but want an estimate of α = g(θ). When g is one-to-one the MLE of α is g(θ̂); otherwise one can "argue" that maximization over θ inside the definition of the modified likelihood function ensures the result. Ex. 7.9: Estimate the Power of a DC Level in AWGN: x[n] = A + w[n], where the noise is N(0, σ²) and white; want to estimate the power A².
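The invariance property in Ex. 7.9 can be checked numerically; the true level, noise level, and sample size below are arbitrary assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
A, sigma = 2.0, 1.0                              # true DC level and noise std
x = A + sigma * rng.standard_normal(10_000)      # x[n] = A + w[n], white N(0, sigma^2)

A_hat = x.mean()            # MLE of the DC level A is the sample mean
power_hat = A_hat ** 2      # invariance: MLE of the power g(A) = A^2 is g(A_hat)
```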
Discretization error estimation and exact solution generation using the method of nearby problems.
Sinclair, Andrew J. (Auburn University, Auburn, AL); Raju, Anil (Auburn University, Auburn, AL); Kurzen, Matthew J. (Virginia Tech, Blacksburg, VA); Roy, Christopher John (Virginia Tech, Blacksburg, VA); Phillips, Tyrone S. (Virginia Tech, Blacksburg, VA)
2011-10-01T23:59:59.000Z
The Method of Nearby Problems (MNP), a form of defect correction, is examined as a method for generating exact solutions to partial differential equations and as a discretization error estimator. For generating exact solutions, four-dimensional spline fitting procedures were developed and implemented into a MATLAB code for generating spline fits on structured domains with arbitrary levels of continuity between spline zones. For discretization error estimation, MNP/defect correction only requires a single additional numerical solution on the same grid (as compared to Richardson extrapolation which requires additional numerical solutions on systematically-refined grids). When used for error estimation, it was found that continuity between spline zones was not required. A number of cases were examined including 1D and 2D Burgers equation, the 2D compressible Euler equations, and the 2D incompressible Navier-Stokes equations. The discretization error estimation results compared favorably to Richardson extrapolation and had the advantage of only requiring a single grid to be generated.
Boe, Timothy [Oak Ridge Institute for Science and Education, Research Triangle Park, NC 27711 (United States)]; Lemieux, Paul [U.S. Environmental Protection Agency, Research Triangle Park, NC 27711 (United States)]; Schultheisz, Daniel; Peake, Tom [U.S. Environmental Protection Agency, Washington, DC 20460 (United States)]; Hayes, Colin [Eastern Research Group, Inc., Morrisville, NC 26560 (United States)]
2013-07-01T23:59:59.000Z
Management of debris and waste from a wide-area radiological incident would probably constitute a significant percentage of the total remediation cost and effort. The U.S. Environmental Protection Agency's (EPA's) Waste Estimation Support Tool (WEST) is a unique planning tool for estimating the potential volume and radioactivity levels of waste generated by a radiological incident and subsequent decontamination efforts. The WEST was developed to support planners and decision makers by generating a first-order estimate of the quantity and characteristics of waste resulting from a radiological incident. The tool then allows the user to evaluate the impact of various decontamination/demolition strategies on the waste types and volumes generated. WEST consists of a suite of standalone applications and Esri® ArcGIS® scripts for rapidly estimating waste inventories and levels of radioactivity generated from a radiological contamination incident as a function of user-defined decontamination and demolition approaches. WEST accepts Geographic Information System (GIS) shapefiles defining contaminated areas and extent of contamination. Building stock information, including square footage, building counts, and building composition estimates, are then generated using the Federal Emergency Management Agency's (FEMA's) Hazus®-MH software. WEST then identifies outdoor surfaces based on the application of pattern recognition to overhead aerial imagery. The results from the GIS calculations are then fed into a Microsoft Excel® 2007 spreadsheet with a custom graphical user interface where the user can examine the impact of various decontamination/demolition scenarios on the quantity, characteristics, and residual radioactivity of the resulting waste streams. (authors)
Paris-Sud XI, Université de
Information bounds and MCMC parameter estimation for the pile-up model. Tabea Rebafka, François ... Abstract: This paper is concerned with the pile-up model, defined as a nonlinear transformation of a distribution of interest. An observation of the pile-up model is the minimum of a random number of independent
Yao, Bin
Integrated Direct/Indirect Adaptive Robust Precision Control of Linear Motor Drive Systems. The focus of the paper is on the synthesis of nonlinear adaptive robust controllers for precision control of linear motor drive systems, but with an improved estimation model, in which accurate parameter
Walter, M.Todd
Estimating basin-wide hydraulic parameters of a semi-arid mountainous watershed by recession ... 2002; accepted 23 April 2003. Abstract: Insufficient sub-surface hydraulic data from watersheds often ... and in watersheds with low population densities, because well-drilling to obtain the hydraulic data is expensive
Estimation of Volatility. The values of the parameters r, t, St, T, and K used to price a call option
Privault, Nicolas
Chapter 7: Estimation of Volatility. The values of the parameters r, t, St, T, and K used to price ... the historical, implied, and local volatility models, and refer to [26] for stochastic volatility models. ... is the price of light sweet crude oil futures traded on the New York Mercantile Exchange (NYMEX), based ...
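A sketch of recovering the volatility parameter from an observed call price under the standard Black-Scholes formula; the bisection solver and the numerical values are illustrative, not taken from the chapter:

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    # Standard normal CDF via the error function.
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    # Black-Scholes price of a European call: spot S, strike K,
    # maturity T (years), risk-free rate r, volatility sigma.
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

def implied_vol(price, S, K, T, r, lo=1e-4, hi=5.0, tol=1e-8):
    # Bisection works because the call price is strictly increasing in sigma.
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if bs_call(S, K, T, r, mid) < price:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)
```

Because the price is monotone in sigma, the bisection always converges to the unique implied volatility.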
Rozo, Eduardo; /U. Chicago /Chicago U., KICP; Wu, Hao-Yi; /KIPAC, Menlo Park; Schmidt, Fabian; /Caltech
2011-11-04T23:59:59.000Z
When extracting the weak lensing shear signal, one may employ either locally normalized or globally normalized shear estimators. The former is the standard approach when estimating cluster masses, while the latter is the more common method among peak finding efforts. While both approaches have identical signal-to-noise in the weak lensing limit, it is possible that higher order corrections or systematic considerations make one estimator preferable over the other. In this paper, we consider the efficacy of both estimators within the context of stacked weak lensing mass estimation in the Dark Energy Survey (DES). We find that the two estimators have nearly identical statistical precision, even after including higher order corrections, but that these corrections must be incorporated into the analysis to avoid observationally relevant biases in the recovered masses. We also demonstrate that finite bin-width effects may be significant if not properly accounted for, and that the two estimators exhibit different systematics, particularly with respect to contamination of the source catalog by foreground galaxies. Thus, the two estimators may be employed as a systematic cross-check of each other. Stacked weak lensing in the DES should allow for the mean mass of galaxy clusters to be calibrated to ≈2% precision (statistical only), which can improve the figure of merit of the DES cluster abundance experiment by a factor of ≈3 relative to the self-calibration expectation. A companion paper investigates how the two types of estimators considered here impact weak lensing peak finding efforts.
Assessing Invariance of Factor Structures and Polytomous Item Response Model Parameter Estimates
Reyes, Jennifer McGee
2012-02-14T23:59:59.000Z
(i.e., identical items, different people) for the homogenous graded response model (Samejima, 1969) and the partial credit model (Masters, 1982)? To evaluate measurement invariance using IRT methods, the item discrimination and item difficulty parameters... obtained from the GRM need to be equivalent across datasets. The YFCY02 and YFCY03 GRM item discrimination parameter (slope) correlation was 0.828. The YFCY02 and YFCY03 GRM item difficulty parameter (location) correlation was 0...
will comprise three stator windings, one field winding and two damper windings, as shown in Fig. 1. Magnetic coupling is a function of the rotor position and, therefore, the flux linking each winding is also
Barsotti, Lisa
Compact binary systems with neutron stars or black holes are one of the most promising sources for ground-based gravitational-wave detectors. Gravitational radiation encodes rich information about source physics; thus ...
Aaron Rogan; Sukanta Bose
2006-05-01T23:59:59.000Z
We study the limits on how accurately LISA will be able to estimate the parameters of low-mass compact binaries, comprising white dwarfs (WDs), neutron stars (NSs) or black holes (BHs), while battling the amplitude, frequency, and phase modulations of their signals. We show that Doppler-phase modulation aids sky-position resolution in every direction, improving it especially for sources near the poles of the ecliptic coordinate system. However, it increases the frequency estimation error by a factor of over 1.5 at any sky position, and at f = 3 mHz. Since accounting for Doppler-phase modulation is absolutely essential at all LISA frequencies and for all chirp masses in order to avoid a fractional loss of signal-to-noise ratio (SNR) of more than 30%, LISA science will be simultaneously aided and limited by it. For a source with f > 2.5 mHz, searching for its frequency evolution for 1 year worsens the error in the frequency estimation by a factor of over 3.5 relative to that of sources with f < 1 mHz. Increasing the integration time to 2 years reduces this relative error factor to about 2, which still adversely affects the resolvability of the galactic binary confusion noise. Thus, unless the mission lifetime is increased severalfold, the only other recourse available for reducing the errors is to exclude the chirp parameter from one's search templates. Doing so improves the SNR-normalized parameter estimates. This works for the lightest binaries since their SNR itself does not suffer from that exclusion. However, for binaries involving a neutron star, a black hole, or both, the SNR and, therefore, the parameter estimation can take a significant hit, thus severely affecting the ability to resolve such members in LISA's confusion noise.
An equivalent circuit for the Brushless Doubly Fed Machine (BDFM) including parameter estimation
Cambridge, University of
are presented. The machine is intended for use as a variable speed generator, or drive. A per-phase equivalent ... generator for wind turbines, although the benefits of the BDFM for variable speed drives have also been ... of operation in a doubly-fed mode, in which the shaft speed has a fixed relationship to the two excitation ...
Estimation of Inflation parameters for Perturbed Power Law model using recent CMB measurements
Suvodip Mukherjee; Santanu Das; Minu Joy; Tarun Souradeep
2015-01-31T23:59:59.000Z
Cosmic Microwave Background (CMB) observations are an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass $m_{\rm eff}$ for the inflaton field. The higher order derivatives of the Hubble parameter at leading order source a constant difference in the spectral index for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two observable independent parameters, namely the spectral index for tensor perturbation $\
Estimation of Inflation parameters for Perturbed Power Law model using recent CMB measurements
Mukherjee, Suvodip; Joy, Minu; Souradeep, Tarun
2014-01-01T23:59:59.000Z
Cosmic Microwave Background (CMB) observations are an important probe for understanding the inflationary era of the Universe. We consider the Perturbed Power Law (PPL) model of inflation, which is a soft deviation from the Power Law (PL) inflationary model. This model captures the effect of higher order derivatives of the Hubble parameter during inflation, which in turn lead to a non-zero effective mass $m_{\rm eff}$ for the inflaton field. The higher order derivatives of the Hubble parameter at leading order source a constant difference in the spectral index for scalar and tensor perturbations, going beyond the PL model of inflation. The PPL model has two observable independent parameters, namely the spectral index for tensor perturbation $\
Using Markov chain Monte Carlo methods for estimating parameters with gravitational radiation data
Nelson Christensen; Renate Meyer
2001-02-05T23:59:59.000Z
We present a Bayesian approach to the problem of determining parameters for coalescing binary systems observed with laser interferometric detectors. By applying a Markov chain Monte Carlo (MCMC) algorithm, specifically the Gibbs sampler, we demonstrate the potential that MCMC techniques may hold for the computation of posterior distributions of the parameters of the binary system that created the gravitational radiation signal. We describe the use of the Gibbs sampler method, and present examples whereby signals are detected and analyzed from within noisy data.
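The flavor of MCMC posterior sampling can be sketched on a toy problem; a random-walk Metropolis step stands in here for the Gibbs sampler, and the Gaussian likelihood and data are invented:

```python
import numpy as np

rng = np.random.default_rng(2)
data = rng.normal(1.5, 1.0, size=200)   # toy "signal parameter" mu = 1.5

def log_post(mu):
    # Flat prior; Gaussian likelihood with known unit variance.
    return -0.5 * np.sum((data - mu) ** 2)

mu, chain = 0.0, []
for _ in range(20_000):
    prop = mu + 0.2 * rng.standard_normal()      # random-walk proposal
    if np.log(rng.uniform()) < log_post(prop) - log_post(mu):
        mu = prop                                # Metropolis accept
    chain.append(mu)
post = np.array(chain[5_000:])                   # discard burn-in
```

The histogram of `post` approximates the posterior; its mean tracks the sample mean of the data and its spread the posterior standard deviation 1/sqrt(n).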
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Burr, Tom; Hamada, Michael S.; Howell, John; Skurikhin, Misha; Ticknor, Larry; Weaver, Brian
2013-01-01T23:59:59.000Z
Process monitoring (PM) for nuclear safeguards sometimes requires estimation of thresholds corresponding to small false alarm rates. Threshold estimation dates to the 1920s with the Shewhart control chart; however, because possible new roles for PM are being evaluated in nuclear safeguards, it is timely to consider modern model selection options in the context of threshold estimation. One of the possible new PM roles involves PM residuals, where a residual is defined as residual = data − prediction. This paper reviews alarm threshold estimation, introduces model selection options, and considers a range of assumptions regarding the data-generating mechanism for PM residuals. Two PM examples from nuclear safeguards are included to motivate the need for alarm threshold estimation. The first example involves mixtures of probability distributions that arise in solution monitoring, which is a common type of PM. The second example involves periodic partial cleanout of in-process inventory, leading to challenging structure in the time series of PM residuals.
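Under the simplest data-generating assumption, i.i.d. Gaussian in-control residuals, a threshold for a small false alarm rate is just an upper quantile; a sketch with invented residuals:

```python
import numpy as np

rng = np.random.default_rng(3)
residuals = rng.standard_normal(100_000)    # in-control PM residuals (toy)

alpha = 1e-3                                # target false-alarm rate
threshold = np.quantile(residuals, 1 - alpha)

# Check the empirical false-alarm rate on fresh in-control data.
fresh = rng.standard_normal(100_000)
rate = np.mean(fresh > threshold)
```

The harder cases the paper describes (mixtures, structured time series) break the i.i.d. Gaussian assumption, which is exactly where model selection for the residual distribution becomes necessary.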
Sharon Qi, X; Yang, Q; Lee, SP; Allen Li, X; Wang, D
2012-01-01T23:59:59.000Z
plan quality. For example, a Siemens Primus step-and-shoot photon beam generated by a Siemens Primus accelerator at our ... According to [1], for a Siemens step-and-shoot machine, the
Robust estimation of the parameters of a disturbed non-stationary Gaussian process
Sergio Frasca; Pia Astone
2009-05-15T23:59:59.000Z
A typical problem in the detection of gravitational waves in the data of gravitational antennas is the non-stationarity of the Gaussian noise (and hence the varying sensitivity) and the presence of big impulsive disturbances. In such conditions the estimation of the standard deviation of the Gaussian process with a classical estimator, applied after a "rough" cleaning of the big pulses, often gives poor results. We propose a method based on a matched filter applied to an AR histogram of the absolute value of the data.
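The failure mode described, a classical scale estimator inflated by impulsive disturbances, can be seen in a toy comparison with a median-based robust estimator (this stands in for, and is not, the authors' matched-filter method; the contamination model is invented):

```python
import numpy as np

rng = np.random.default_rng(4)
x = rng.standard_normal(10_000)              # Gaussian noise, sigma = 1
spikes = rng.choice(x.size, 200, replace=False)
x[spikes] += 50 * rng.standard_normal(200)   # big impulsive disturbances

sigma_classical = x.std()                    # badly inflated by the outliers
# Median absolute deviation, scaled to be consistent for a Gaussian.
sigma_robust = 1.4826 * np.median(np.abs(x - np.median(x)))
```

Even 2% contamination drives the classical estimate far from 1, while the MAD-based estimate stays close.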
Greulich, Francis E.
YARDING PRODUCTION ESTIMATION. Francis E. Greulich. ABSTRACT: The concept of the effective external ... in the more efficient use of existing capital and labor resources. Efficiency in timber harvesting starts ... -growth timber, very competitive bidding for logging contracts, high regional labor costs, shorter contract ...
Dual attitude and parameter estimation of passively magnetically stabilized nano satellites
panels. By using the existing solar panels, no additional components are being added to the spacecraft, and no additional mass, volume or power budget is being used. From differential solar panel currents, an estimate ... (CA 94305, United States; University of Michigan, 1320 Beal Ave, Ann Arbor, MI 48109, United States)
Estimation of thermo-hydrodynamic parameters in energy production systems using non-stationary
Paris-Sud XI, Université de
The construction of such systems, as well as the operating conditions, impose the use of non-intrusive techniques ... of the propagation times. Keywords: non-intrusive measurement, ultrasounds, temperature and flow rate estimation ... by the fact that it must work while the system is on, and without affecting its functioning. A non-intrusive ...
Accuracy Analysis on the Estimation of Camera Parameters for Active Vision Systems
Chen, Sheng-Wei
and orientation of the camera. They may include the effective focal length, the width and height of a photo sensor and resolution of the image sensor, the average object distance, the relative object depth, the 2D observation ... (of the optical axis and the image sensor plane). Extrinsic camera parameters are essentially the position
Parameter estimation of permanent magnet stepper motors without position or velocity sensors
Paris-Sud XI, Université de
theory. I. INTRODUCTION: Permanent Magnet Stepper Motors (PMSMs) are widely used in industry for position control, especially in manufacturing applications. PMSMs are more robust than brush DC motors ... the question of parameter identification without position or velocity sensors. The estimation of PMSM
Matthew G. Walker; Mario Mateo; Edward W. Olszewski; Bodhisattva Sen; Michael Woodroofe
2008-11-12T23:59:59.000Z
(abridged) We develop an algorithm for estimating parameters of a distribution sampled with contamination, employing a statistical technique known as "expectation maximization" (EM). Given models for both member and contaminant populations, the EM algorithm iteratively evaluates the membership probability of each discrete data point, then uses those probabilities to update parameter estimates for member and contaminant distributions. The EM approach has wide applicability to the analysis of astronomical data. Here we tailor an EM algorithm to operate on spectroscopic samples obtained with the Michigan-MIKE Fiber System (MMFS) as part of our Magellan survey of stellar radial velocities in nearby dwarf spheroidal (dSph) galaxies. These samples are presented in a companion paper and contain discrete measurements of line-of-sight velocity, projected position, and Mg index for ~1000-2500 stars per dSph, including some fraction of contamination by foreground Milky Way stars. The EM algorithm quantifies both dSph and contaminant distributions, returning maximum-likelihood estimates of the means and variances, as well as the probability that each star is a dSph member. Applied to our MMFS data, the EM algorithm identifies more than 5000 probable dSph members. We test the performance of the EM algorithm on simulated data sets that represent a range of sample size, level of contamination, and amount of overlap between dSph and contaminant velocity distributions. The simulations establish that for samples ranging from large (N ~ 3000) to small (N ~ 30), the EM algorithm distinguishes members from contaminants and returns accurate parameter estimates much more reliably than conventional methods of contaminant removal (e.g., sigma clipping).
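The E- and M-steps can be sketched for a one-dimensional two-component case; the velocity values and component settings below are invented, not the MMFS data:

```python
import numpy as np

rng = np.random.default_rng(5)
# Toy line-of-sight velocities: "members" around 0, "contaminants" around 40.
v = np.concatenate([rng.normal(0, 5, 800), rng.normal(40, 20, 200)])

w = np.array([0.5, 0.5])            # mixing weights
mu = np.array([-10.0, 50.0])        # component means (deliberately poor start)
s = np.array([10.0, 10.0])          # component dispersions
for _ in range(100):
    # E-step: membership probability of each star for each component.
    pdf = w * np.exp(-0.5 * ((v[:, None] - mu) / s) ** 2) / (s * np.sqrt(2 * np.pi))
    resp = pdf / pdf.sum(axis=1, keepdims=True)
    # M-step: update weights, means, dispersions from the responsibilities.
    n = resp.sum(axis=0)
    w = n / v.size
    mu = (resp * v[:, None]).sum(axis=0) / n
    s = np.sqrt((resp * (v[:, None] - mu) ** 2).sum(axis=0) / n)
members = resp[:, 0] > 0.5          # probabilistic membership, thresholded
```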
Parameters of the prompt gamma-ray burst emission estimated with the opening angle of jets
B. -B. Zhang; Y. -P. Qin
2006-02-04T23:59:59.000Z
We present in this paper an approach to estimate the initial Lorentz factor of gamma-ray bursts (GRBs) without referring to the delayed emission of the early afterglow. Under the assumption that the afterglow of the bursts concerned occurs well before the prompt emission dies away, the Lorentz factor measured at the time when the duration of the prompt emission has ended can be estimated by applying the well-known relations of GRB jets. With the concept of the efficiency for converting the explosion energy to radiation, this Lorentz factor can be related to the initial Lorentz factor of the source. The corresponding rest frame peak energy can accordingly be calculated. Applying this method, we estimate the initial Lorentz factor of the bulk motion and the corresponding rest frame spectral peak energy of GRBs for a new sample where the redshift and the break time in the afterglow are known. Our analysis shows that, in the circumstances, the initial Lorentz factor of the sample would peak at 200 and would be distributed mainly within (100, 400), and the peak of the distribution of the corresponding rest frame peak energy would be 0.8 keV, with its main region being (0.3 keV, 3 keV).
Estimation of end of life mobile phones generation: The case study of the Czech Republic
Polak, Milos, E-mail: mpolak@remasystem.cz; Drapalova, Lenka
2012-08-15T23:59:59.000Z
Highlights: ► In this paper, we define the lifespan of mobile phones and estimate their average total lifespan. ► The estimation of the lifespan distribution is based on a large sample of EoL mobile phones. ► The total lifespan of Czech mobile phones is surprisingly long, exactly 7.99 years. ► In the years 2010-20, about 26.3 million pieces of EoL mobile phones will be generated in the Czech Republic. - Abstract: The volume of waste electrical and electronic equipment (WEEE) has been rapidly growing in recent years. In the European Union (EU), legislation promoting the collection and recycling of WEEE has been in force since the year 2003. Yet, both current and recently suggested collection targets for WEEE are completely ineffective when it comes to collection and recycling of small WEEE (s-WEEE), with mobile phones as a typical example. Mobile phones are the most sold EEE and at the same time one of the appliances with the lowest collection rate. To improve this situation, it is necessary to assess the amount of generated end of life (EoL) mobile phones as precisely as possible. This paper presents a method of assessment of EoL mobile phone generation based on a delay model. Within the scope of this paper, the method has been applied to Czech Republic data. However, this method can also be applied to other EoL appliances in or outside the Czech Republic. Our results show that the average total lifespan of Czech mobile phones is surprisingly long, exactly 7.99 years. We attribute the long lifespan particularly to the storage time of EoL mobile phones in households, estimated to be 4.35 years. In the years 1990-2000, only 45 thousand EoL mobile phones were generated in the Czech Republic, while in the years 2000-2010 the number grew to 6.5 million pieces, and it is estimated that in the years 2010-2020 about 26.3 million pieces will be generated.
Current European legislation sets targets for the collection and recycling of WEEE in general, but no specific collection target for EoL mobile phones exists. In 2010, only about 3-6% of Czech EoL mobile phones were collected for recovery and recycling. If we make a similar estimate using an estimated average EU value, then within the next 10 years about 1.3 billion EoL mobile phones would be available for recycling in the EU. This amount contains about 31 tonnes of gold and 325 tonnes of silver. Since Europe depends on imports of many raw materials, efficient recycling of EoL products could help reduce this dependence. To establish a working collection system, it will be necessary to set new and realistic collection targets.
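The delay-model idea described above (units reaching end of life are past sales weighted by a lifespan distribution) can be sketched as follows. The sales figures and the simple discretized lifespan distribution are illustrative placeholders, not the paper's Czech data or its fitted distribution.

```python
# Sketch of a delay model for end-of-life (EoL) generation: units
# reaching EoL in a given year are past sales weighted by the
# probability that a unit's total lifespan equals the elapsed time.
# The sales figures and lifespan distribution are illustrative only.

def lifespan_pmf(mean_years, max_years=40):
    # Discretized geometric-style lifespan distribution with roughly
    # the requested mean (a stand-in for the paper's fitted distribution).
    p = 1.0 / mean_years
    pmf = [(1.0 - p) ** k * p for k in range(max_years)]
    total = sum(pmf)
    return [w / total for w in pmf]

def eol_generation(sales_by_year, pmf):
    # sales_by_year: {year: units sold} -> {year: units reaching EoL}
    out = {}
    for year, sold in sales_by_year.items():
        for delay, prob in enumerate(pmf):
            out[year + delay] = out.get(year + delay, 0.0) + sold * prob
    return out

sales = {2000: 1.0e6, 2001: 1.2e6, 2002: 1.5e6}   # hypothetical sales
pmf = lifespan_pmf(mean_years=8.0)
eol = eol_generation(sales, pmf)                   # EoL units per year
```

Because the lifespan distribution sums to one, every unit sold eventually appears in some EoL year, which is the conservation property such models rely on.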
Huang, Zhenyu; Du, Pengwei; Kosterev, Dmitry; Yang, Steve
2013-05-01T23:59:59.000Z
Disturbance data recorded by phasor measurement units (PMUs) offer opportunities to improve the integrity of dynamic models. However, manually tuning parameters through play-back events demands significant effort and engineering experience. In this paper, a calibration method using the extended Kalman filter (EKF) technique is proposed. The formulation of the EKF with parameter calibration is discussed, and case studies are presented to demonstrate its validity. The proposed calibration method is cost-effective and complementary to traditional equipment testing for improving dynamic model quality.
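The core of such EKF-based calibration is to augment the state vector with the unknown parameters, which the filter then tracks as slowly varying states. A minimal sketch on a toy scalar system (not the paper's power-system model; the dynamics, excitation input, and noise levels are assumed for illustration):

```python
import numpy as np

# Sketch of EKF-based parameter calibration via state augmentation:
# the unknown model parameter `a` is appended to the state with
# random-walk dynamics, so the filter estimates it alongside the state.
# The scalar system, input, and noise levels below are illustrative.
rng = np.random.default_rng(0)
a_true, T = 0.85, 400
u = np.sin(np.arange(T) / 10.0)          # known excitation ("disturbance")

# Simulate noisy measurements of x_{k+1} = a*x_k + u_k
x, zs = 0.0, np.empty(T)
for k in range(T):
    x = a_true * x + u[k]
    zs[k] = x + rng.normal(0.0, 0.05)

s = np.array([0.0, 0.5])                 # augmented state [x, a]; a guessed
P = np.diag([1.0, 1.0])
Q = np.diag([1e-6, 1e-6])                # small random walk keeps `a` adaptable
R = 0.05 ** 2
H = np.array([[1.0, 0.0]])               # we measure x only

for k in range(T):
    # Predict: x' = a*x + u_k, a' = a; F is the Jacobian of that map.
    F = np.array([[s[1], s[0]], [0.0, 1.0]])
    s = np.array([s[1] * s[0] + u[k], s[1]])
    P = F @ P @ F.T + Q
    # Update with the measurement z_k.
    y = zs[k] - s[0]
    S = H @ P @ H.T + R
    K = (P @ H.T) / S
    s = s + K[:, 0] * y
    P = (np.eye(2) - K @ H) @ P

a_hat = s[1]                             # calibrated parameter estimate
```

With persistent excitation the augmented filter pulls the parameter estimate toward the value that best explains the measurements, which is the same mechanism used at power-system scale.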
DISLOCATION GENERATION IN Si: A THERMO-MECHANICAL MODEL BASED ON MEASURABLE PARAMETERS*
Balzar, Davor
cause severe degradation of the cell performance. Indeed, it is now known that dislocation clusters dislocations in Si can be generated in any high-temperature processing step, the dislocations in photovoltaic
Al-Nasir, Abdul Majid Hamza
1968-01-01T23:59:59.000Z
ORDER RELATIONS AND PRIOR DISTRIBUTIONS IN THE ESTIMATION OF MULTIVARIATE NORMAL PARAMETERS WITH PARTIAL DATA. A Thesis by ABDUL MAJID HAMZA AL-NASIR. Submitted to the Graduate College of Texas A&M University in partial fulfillment... as to style and content by: Chairman of Committee; Head of Department. August 1968. ABSTRACT: Order Relations and Prior Distributions in the Estimation of Multivariate Normal Parameters with Partial Data. (August 1968) Abdul Majid Hamza Al-Nasir, B...
Bayesian parameter estimation of core collapse supernovae using gravitational wave simulations
Matthew C. Edwards; Renate Meyer; Nelson Christensen
2014-07-28T23:59:59.000Z
Using the latest numerical simulations of rotating stellar core collapse, we present a Bayesian framework to extract the physical information encoded in noisy gravitational wave signals. We fit Bayesian principal component regression models with known and unknown signal arrival times to reconstruct gravitational wave signals, and subsequently fit known astrophysical parameters on the posterior means of the principal component coefficients using a linear model. We predict the ratio of rotational kinetic energy to gravitational energy of the inner core at bounce by sampling from the posterior predictive distribution, and find that these predictions are generally very close to the true parameter values, with $90\%$ credible intervals $\sim 0.04$ and $\sim 0.06$ wide for the known and unknown arrival time models respectively. Two supervised machine learning methods are implemented to classify precollapse differential rotation, and we find that these methods discriminate rapidly rotating progenitors particularly well. We also introduce a constrained optimization approach to model selection to find an optimal number of principal components in the signal reconstruction step. Using this approach, we select 14 principal components as the most parsimonious model.
Wu, M.; Peng, J. (Energy Systems); (NE)
2011-02-24T23:59:59.000Z
Freshwater consumption for electricity generation is projected to increase dramatically in the next couple of decades in the United States. The increased demand is likely to further strain freshwater resources in regions where water has already become scarce. Meanwhile, the automotive industry has stepped up its research, development, and deployment efforts on electric vehicles (EVs) and plug-in hybrid electric vehicles (PHEVs). Large-scale, escalated production of EVs and PHEVs nationwide would require increased electricity production, and so meeting the water demand becomes an even greater challenge. The goal of this study is to provide a baseline assessment of freshwater use in electricity generation in the United States and at the state level. Freshwater withdrawal and consumption requirements for power generated from fossil, nonfossil, and renewable sources via various technologies and by use of different cooling systems are examined. A data inventory has been developed that compiles data from government statistics, reports, and literature issued by major research institutes. A spreadsheet-based model has been developed to conduct the estimates by means of a transparent and interactive process. The model further allows us to project future water withdrawal and consumption in electricity production under the forecasted increases in demand. This tool is intended to provide decision makers with the means to make a quick comparison among various fuel, technology, and cooling system options. The model output can be used to address water resource sustainability when considering new projects or expansion of existing plants.
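The spreadsheet-style estimate described above reduces, at its core, to multiplying generation by technology-specific water-use factors. A toy version follows; the factor values, technology labels, and generation mix are placeholders, not the report's inventory data.

```python
# Toy version of the fuel-mix water-use estimate: generation (MWh)
# multiplied by technology-specific withdrawal/consumption factors
# (gal/MWh). The factor values and technology labels are placeholders,
# not the report's inventory data.
FACTORS = {  # technology: (withdrawal, consumption) in gal/MWh
    "coal_recirculating": (1000.0, 700.0),
    "ngcc_recirculating": (250.0, 180.0),
    "wind": (0.0, 0.0),
}

def water_use(generation_mwh):
    # generation_mwh: {technology: MWh} -> (withdrawal_gal, consumption_gal)
    withdrawal = sum(mwh * FACTORS[t][0] for t, mwh in generation_mwh.items())
    consumption = sum(mwh * FACTORS[t][1] for t, mwh in generation_mwh.items())
    return withdrawal, consumption

w, c = water_use({"coal_recirculating": 1.0e6, "wind": 5.0e5})
```

Projecting future demand then amounts to rerunning the same calculation with a forecast generation mix, which is what makes the spreadsheet form convenient for scenario comparison.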
HF beam parameters in ELF/VLF wave generation via modulated heating of the ionosphere
Program (HAARP) facility near Gakona, AK, we investigate the effect of HF frequency and beam size-ionosphere waveguide generally decreases with increasing HF frequency between 2.75 and 9.50 MHz. HAARP is also capable is then applied to also predict the effect of HF beam parameters on magnetospheric injection with HAARP. Citation
John Veitch; Vivien Raymond; Benjamin Farr; Will M. Farr; Philip Graff; Salvatore Vitale; Ben Aylott; Kent Blackburn; Nelson Christensen; Michael Coughlin; Walter Del Pozzo; Farhan Feroz; Jonathan Gair; Carl-Johan Haster; Vicky Kalogera; Tyson Littenberg; Ilya Mandel; Richard O'Shaughnessy; Matthew Pitkin; Carl Rodriguez; Christian Röver; Trevor Sidery; Rory Smith; Marc Van Der Sluys; Alberto Vecchio; Will Vousden; Leslie Wade
2015-02-16T23:59:59.000Z
The Advanced LIGO and Advanced Virgo gravitational wave (GW) detectors will begin operation in the coming years, with compact binary coalescence events a likely source for the first detections. The gravitational waveforms emitted directly encode information about the sources, including the masses and spins of the compact objects. Recovering the physical parameters of the sources from the GW observations is a key analysis task. This work describes the LALInference software library for Bayesian parameter estimation of compact binary signals, which builds on several previous methods to provide a well-tested toolkit which has already been used for several studies. We show that our implementation is able to correctly recover the parameters of compact binary signals from simulated data from the advanced GW detectors. We demonstrate this with a detailed comparison on three compact binary systems: a binary neutron star, a neutron star black hole binary and a binary black hole, where we show a cross-comparison of results obtained using three independent sampling algorithms. These systems were analysed with non-spinning, aligned spin and generic spin configurations respectively, showing that consistent results can be obtained even with the full 15-dimensional parameter space of the generic spin configurations. We also demonstrate statistically that the Bayesian credible intervals we recover correspond to frequentist confidence intervals under correct prior assumptions by analysing a set of 100 signals drawn from the prior. We discuss the computational cost of these algorithms, and describe the general and problem-specific sampling techniques we have used to improve the efficiency of sampling the compact binary coalescence parameter space.
Estimating the Parameters of Sgr A*'s Accretion Flow Via Millimeter VLBI
Avery E. Broderick; Vincent L. Fish; Sheperd S. Doeleman; Abraham Loeb
2009-03-03T23:59:59.000Z
Recent millimeter-VLBI observations of Sagittarius A* (Sgr A*) have, for the first time, directly probed distances comparable to the horizon scale of a black hole. This provides unprecedented access to the environment immediately around the horizon of an accreting black hole. We leverage both existing spectral and polarization measurements and our present understanding of accretion theory to produce a suite of generic radiatively inefficient accretion flow (RIAF) models of Sgr A*, which we then fit to these recent millimeter-VLBI observations. We find that if the accretion flow onto Sgr A* is well described by a RIAF model, the orientation and magnitude of the black hole's spin are constrained to a two-dimensional surface in the spin, inclination, position angle parameter space. For each of these we find the likeliest values and their 1-sigma and 2-sigma errors to be a=0(+0.4+0.7), inclination=50(+10+30)(-10-10) degrees, and position angle=-20(+31+107)(-16-29) degrees, when the resulting probability distribution is marginalized over the others. The most probable combination is a=0(+0.2+0.4), inclination=90(-40-50) degrees and position angle=-14(+7+11)(-7-11) degrees, though the uncertainties on these are very strongly correlated, and high-probability configurations exist for a variety of inclination angles above 30 degrees and spins below 0.99. Nevertheless, this demonstrates the ability of millimeter-VLBI observations, even with only a few stations, to significantly constrain the properties of Sgr A*.
Hideyuki Tagoshi; Chandra Kant Mishra; Archana Pai; K. G. Arun
2014-12-12T23:59:59.000Z
We investigate the effects of using the {\it full} waveform (FWF) over the conventional {\it restricted} waveform (RWF) of the inspiral signal from a coalescing compact binary (CCB) system in extracting the parameters of the source, using a global network of second-generation interferometric detectors. We study a hypothetical population of (1.4-10)$M_\odot$ NS-BH binaries (uniformly distributed and oriented in the sky) by employing the full post-Newtonian waveforms, which not only include contributions from various harmonics other than the dominant one (quadrupolar mode) but also the post-Newtonian amplitude corrections associated with each harmonic of the inspiral signal expected from this system. It is expected that the GW detector network consisting of the two LIGO detectors and a Virgo detector will be joined by KAGRA and by the proposed LIGO-India. We study the problem of parameter estimation with all 16 possible detector configurations. Comparing medians of error distributions obtained using FWFs with those obtained using RWFs (which only include contributions from the dominant harmonic with Newtonian amplitude), we find that the measurement accuracies for luminosity distance and the cosine of the inclination angle improve by almost a factor of 1.5-2 depending upon the network under consideration. Although the use of the FWF does not improve the source localization accuracy much, the global network consisting of five detectors will improve the source localization accuracy by a factor of 4 as compared to the estimates using a 3-detector LIGO-Virgo network for the same waveform model.
Husain, A.; Lewis, Brent J.
2003-02-27T23:59:59.000Z
Radioactive waste packages containing water and/or organic substances have the potential to radiolytically generate hydrogen and other combustible gases. Typically, the radiolytic gas generation rate is estimated from the energy deposition rate and the radiolytic gas yield. Estimation of the energy deposition rate must take into account the contributions from all radionuclides. While the contributions from non-gamma emitting radionuclides are relatively easy to estimate, an average geometry factor must be computed to determine the contribution from gamma emitters. Hitherto, no satisfactory method existed for estimating the geometry factors for a cylindrical package. In the present study, a formulation was developed taking into account the effect of photon buildup. A prototype code, called PC-CAGE, was developed to numerically solve the integrals involved. Based on the selected dimensions for a cylinder, the specified waste material, the photon energy of interest and a value for either the absorption or attenuation coefficient, the code outputs values for point and average geometry factors. These can then be used to estimate the internal dose rate to the material in the cylinder and hence to calculate the radiolytic gas generation rate. Besides the ability to estimate the rates of radiolytic gas generation, PC-CAGE can also estimate the dose received by the container material. This is based on values for the point geometry factors at the surface of the cylinder. PC-CAGE was used to calculate geometry factors for a number of cylindrical geometries. Estimates for the absorbed dose rate in container material were also obtained. The results for Ontario Power Generation's 3 m3 resin containers indicate that about 80% of the source gamma energy is deposited internally. 
In general, the fraction of gamma energy deposited internally depends on the dimensions of the cylinder, the material within it and the photon energy; the fraction deposited increases with increasing dimensions of the cylinder and decreases with increasing photon energy.
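The average geometry factor described above can be approximated by simple Monte Carlo rather than by evaluating the integrals directly: sample emission points uniformly in the cylinder, sample isotropic directions, and average the probability of absorption along the escape path. This sketch ignores photon buildup (which PC-CAGE includes), and the absorption coefficient and cylinder dimensions are illustrative values, not the report's cases.

```python
import math, random

# Monte Carlo sketch of an average geometry factor: the fraction of
# gamma energy emitted uniformly inside a cylinder that is absorbed
# before escaping. Photon buildup is ignored here (PC-CAGE includes it),
# and mu (1/cm) and the cylinder dimensions are illustrative values.

def path_to_surface(x, y, z, dx, dy, dz, R, H):
    # Distance along direction (dx, dy, dz) from an interior point
    # (x, y, z) to the cylinder surface (radius R, height H).
    ts = []
    a = dx * dx + dy * dy
    if a > 1e-12:                        # intersection with the side wall
        b = x * dx + y * dy
        c = x * x + y * y - R * R        # negative for interior points
        ts.append((-b + math.sqrt(b * b - a * c)) / a)
    if dz > 1e-12:                       # top cap
        ts.append((H / 2.0 - z) / dz)
    elif dz < -1e-12:                    # bottom cap
        ts.append((-H / 2.0 - z) / dz)
    return min(ts)

def average_absorbed_fraction(R=0.5, H=1.0, mu=0.1, n=20000, seed=1):
    rng = random.Random(seed)
    acc = 0.0
    for _ in range(n):
        # Uniform emission point inside the cylinder.
        r = R * math.sqrt(rng.random())
        phi = 2.0 * math.pi * rng.random()
        x, y = r * math.cos(phi), r * math.sin(phi)
        z = (rng.random() - 0.5) * H
        # Isotropic emission direction.
        cz = 2.0 * rng.random() - 1.0
        s = math.sqrt(1.0 - cz * cz)
        psi = 2.0 * math.pi * rng.random()
        L = path_to_surface(x, y, z, s * math.cos(psi), s * math.sin(psi),
                            cz, R, H)
        acc += 1.0 - math.exp(-mu * L)   # probability of absorption en route
    return acc / n
```

As the abstract notes, the absorbed fraction grows with the cylinder dimensions and with the absorption coefficient; the sketch reproduces that monotonic behavior.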
Kearns, Michael
Does Beta React to Market Conditions?: Estimates of Bull and Bear Betas using a Nonlinear Market Model with Endogenous Threshold Parameter by George Woodward and Heather Anderson Department transition between bull and bear states and allows the data to determine the threshold value. The estimated
Shao Tianjiao [State Key Laboratory of Molecular Reaction Dynamics, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China); School of Materials Science and Engineering, Dalian University of Technology, Dalian 116024 (China); Zhao Guangjiu; Yang Huan [State Key Laboratory of Molecular Reaction Dynamics, Dalian Institute of Chemical Physics, Chinese Academy of Sciences, Dalian 116023 (China); School of Physics, Shandong University, Jinan 250100 (China); Wen Bin [School of Materials Science and Engineering, Dalian University of Technology, Dalian 116024 (China)
2010-12-15T23:59:59.000Z
In the present work, laser-parameter effects on isolated attosecond pulse generation from the two-color high-order harmonic generation (HHG) process are theoretically investigated by use of a wave-packet dynamics method. A 6-fs, 800-nm, 6×10^14 W/cm^2, linearly polarized laser pulse serves as the fundamental driving pulse, and parallel linearly polarized control pulses at 400 nm (second harmonic) and 1600 nm (half harmonic) are superimposed to create a two-color field. Of the two techniques, we demonstrate that using a half-harmonic control pulse with a large relative strength and zero phase shift relative to the fundamental pulse is a more promising way to generate the shortest attosecond pulses. As a consequence, an isolated 12-as pulse is obtained by Fourier transforming an ultrabroad XUV continuum of 300 eV in the HHG spectrum under the half-harmonic control scheme when the relative strength √R = 0.6 and the relative phase = 0.
Estimations of Mo X-pinch plasma parameters on QiangGuang-1 facility by L-shell spectral analyses
Wu, Jian; Qiu, Aici [State Key Laboratory of Electrical Insulation and Power Equipment, Xi'an Jiaotong University, Shaanxi 710049 (China); State Key Laboratory of Intense Pulsed Radiation Simulation and Effect, Northwest Institute of Nuclear Technology, Xi'an 710024 (China)]; Li, Mo; Wang, Liangping; Wu, Gang; Ning, Guo; Qiu, Mengtong [State Key Laboratory of Intense Pulsed Radiation Simulation and Effect, Northwest Institute of Nuclear Technology, Xi'an 710024 (China)]; Li, Xingwen [State Key Laboratory of Electrical Insulation and Power Equipment, Xi'an Jiaotong University, Shaanxi 710049 (China)]
2013-08-15T23:59:59.000Z
Plasma parameters of molybdenum (Mo) X-pinches on the 1-MA QiangGuang-1 facility were estimated by L-shell spectral analysis. X-ray radiation from the X-pinches had a pulse width of 1 ns, and its spectra in the 2-3 keV range were measured with a time-integrated X-ray spectrometer. Relative intensities of spectral features were derived by correcting for the spectral sensitivity of the spectrometer. With the open-source atomic code FAC (Flexible Atomic Code), ion structures and various atomic radiative-collisional rates for O-, F-, Ne-, Na-, Mg-, and Al-like ionization stages were calculated, and synthetic spectra were constructed at given plasma parameters. By fitting the measured spectra with the modeled ones, the Mo X-pinch plasmas on the QiangGuang-1 facility were found to have an electron density of about 10^21 cm^-3 and an electron temperature of about 1.2 keV.
Eldred, Michael Scott; Vigil, Dena M.; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Lefantzi, Sophia (Sandia National Laboratories, Livermore, CA); Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Eddy, John P.
2011-12-01T23:59:59.000Z
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a theoretical manual for selected algorithms implemented within the DAKOTA software. It is not intended as a comprehensive theoretical treatment, since a number of existing texts cover general optimization theory, statistical analysis, and other introductory topics. Rather, this manual is intended to summarize a set of DAKOTA-related research publications in the areas of surrogate-based optimization, uncertainty quantification, and optimization under uncertainty that provide the foundation for many of DAKOTA's iterative analysis capabilities.
Adams, Brian M.; Ebeida, Mohamed Salah; Eldred, Michael S.; Jakeman, John Davis; Swiler, Laura Painton; Stephens, John Adam; Vigil, Dena M.; Wildey, Timothy Michael; Bohnhoff, William J.; Eddy, John P.; Hu, Kenneth T.; Dalbey, Keith R.; Bauman, Lara E; Hough, Patricia Diane
2014-05-01T23:59:59.000Z
The Dakota (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. Dakota contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic expansion methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the Dakota toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the Dakota software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
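Parameter estimation with nonlinear least squares, one of the capability classes listed above, reduces in its simplest form to Gauss-Newton iteration. A self-contained one-parameter sketch; the exponential-decay model and synthetic data are illustrative and unrelated to Dakota's internals.

```python
import math

# Minimal Gauss-Newton sketch of nonlinear least-squares parameter
# estimation. The exponential-decay model y(t) = exp(-k*t) and the
# noise-free synthetic data are assumed purely for illustration.
ts = [0.0, 1.0, 2.0, 3.0, 4.0]
k_true = 0.7
ys = [math.exp(-k_true * t) for t in ts]          # synthetic observations

k = 0.2                                           # initial guess
for _ in range(20):
    r = [ys[i] - math.exp(-k * ts[i]) for i in range(len(ts))]  # residuals
    J = [ts[i] * math.exp(-k * ts[i]) for i in range(len(ts))]  # dr/dk
    # Gauss-Newton step for one parameter: dk = -(J^T r) / (J^T J)
    k -= sum(J[i] * r[i] for i in range(len(ts))) / sum(j * j for j in J)
```

Production tools generalize the same step to many parameters with trust regions and bound constraints, but the residual/Jacobian structure is identical.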
Columbia University
analyses the global waste market, with particular reference to municipal solid waste (MSW). Key Note: global MSW generation in 2007 estimated at two billion tons. Global Waste Management Market... between growth in wealth and increase in waste -- the more affluent a society becomes, the more waste
Chen, Jinsong; Dickens, Thomas
2008-01-01T23:59:59.000Z
of reservoir parameters from geophysical data. TraditionalCSEM data, which are functions of reservoir resistivity rreservoir parameters from seismic AVA and CSEM data. In
Lo, Min-Hui; Famiglietti, James S; Yeh, P. J.-F.; Syed, T. H
2010-01-01T23:59:59.000Z
2007), Estimating ground water storage changes in thestorage (i.e. , all of the snow, ice, surface water, soil moisture, and ground-
StCaire, Lorri; Olynick, Deirdre L.; Chao, Weilun L.; Lewis, Mark D.; Lu, Haoren; Dhuey, Scott D.; Liddle, J. Alexander
2008-07-01T23:59:59.000Z
We have implemented a technique to identify candidate polymer solvents for spinning, developing, and rinsing a high-resolution, negative electron-beam resist, hexa-methyl acetoxy calix(6)arene, to elicit the optimum pattern development performance. Using the three-dimensional Hansen solubility parameters for over 40 solvents, we have constructed a Hansen solubility sphere. From this sphere, we have estimated the Flory-Huggins interaction parameter for solvents with hexa-methyl acetoxy calix(6)arene and found a correlation between resist development contrast and the Flory-Huggins parameter. This provides new insights into the development behavior of resist materials, which are necessary for obtaining the ultimate lithographic resolution.
Deng, Song Jiu
1997-01-01T23:59:59.000Z
This thesis proposes and validates a simplified model appropriate for parameter identification and evaluates several different inverse parameter identification schemes suitable for use when heating and cooling data from a commercial building...
Kim, D. S. [Department of Physics, Ulsan National Institute of Science and Technology (UNIST), Ulsan 689-798 (Korea, Republic of); Lee, W. S.; So, J. H. [Agency for Defence Development (ADD), Daejeon 305-152 (Korea, Republic of); Choi, E. M. [Department of Physics, Ulsan National Institute of Science and Technology (UNIST), Ulsan 689-798 (Korea, Republic of); School of Electrical and Computer Engineering, Ulsan National Institute of Science and Technology (UNIST), Ulsan 689-798 (Korea, Republic of)
2013-06-15T23:59:59.000Z
We report simulation results on the generation of free electrons due to the presence of radioactive materials under controlled pressure and gases, using a general Monte Carlo transport code (MCNPX). A radioactive material decays to a lower atomic number, simultaneously producing high-energy gamma rays that can generate free electrons via various scattering mechanisms. This paper presents detailed simulation work answering how many free electrons can be generated in the presence of shielded radioactive materials as a function of pressure and gas type.
Copyright © 2008 IEEE. Reprinted from J. Rose and I. Hiskens, Estimating Wind Turbine Parameters, July 2008. This material is posted here with permission of the IEEE. Internal or personal use of this material is permitted; however, permission to reprint/republish this material must be obtained from the IEEE.
Giuseppe Palmiotti; Massimo Salvatores
2015-01-01T23:59:59.000Z
This paper aims to show the main differences between the COMMARA-2.0 and COMMARA-2.1 evaluated covariance data in the uncertainty estimation of integral parameters of interest for a large number of typical innovative fast neutron systems.
Estimation of fracture compliance from tubewaves generated at a fracture intersecting a borehole
Bakku, Sudhish Kumar
2011-01-01T23:59:59.000Z
Understanding fracture compliance is important for characterizing fracture networks and for inferring fluid flow in the subsurface. In an attempt to estimate fracture compliance in the field, we developed a new model to ...
Feeny, Brian
and dry-friction damping estimates obtained from the experimental system are compared to those obtained in the large- amplitude responses, and that Coulomb friction dominates in the small-amplitude oscillations. An exact formulation for the simultaneous estimation of Coulomb and viscous friction in oscillators has
Wang, Ruofan; Wang, Jiang; Deng, Bin, E-mail: dengbin@tju.edu.cn; Liu, Chen; Wei, Xile [Department of Electrical and Automation Engineering, Tianjin University, Tianjin (China)]; Tsang, K. M.; Chan, W. L. [Department of Electrical Engineering, The Hong Kong Polytechnic University, Kowloon (Hong Kong)]
2014-03-15T23:59:59.000Z
A combined method comprising the unscented Kalman filter (UKF) and a synchronization-based method is proposed for estimating electrophysiological variables and parameters of a thalamocortical (TC) neuron model, which is commonly used for studying Parkinson's disease because of its relay role connecting the basal ganglia and the cortex. In this work, we consider the condition in which only a heavily noisy time series of the action potential is available. Numerical results demonstrate not only that this method can successfully estimate model parameters from the extracted action-potential time series, but also that it performs much better than the UKF or the synchronization-based method alone, with higher accuracy and better robustness against noise, especially under severe noise conditions. Considering the rather important role of the TC neuron in normal and pathological brain function, this estimation method could have important implications for the study of its nonlinear dynamics and, further, for the treatment of Parkinson's disease.
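The synchronization-based half of such a scheme amounts to driving a model copy with the measured signal and adapting a parameter until the model output tracks the data. A deliberately tiny stand-in follows: a one-dimensional map with an LMS-style adaptation rule, not the TC neuron model, and the gain and "true" parameter are assumed for illustration.

```python
import math

# Tiny stand-in for the synchronization-based idea: a model copy is
# driven by the measured signal and an LMS-style rule adapts the
# parameter until the model prediction tracks the data. The map
# x_{k+1} = a*sin(x_k), the gain eta, and a_true are assumed values.
a_true, eta = 1.7, 0.05
x, a_hat = 0.4, 0.5                     # state and initial parameter guess
for _ in range(2000):
    x_next = a_true * math.sin(x)       # "measured" next sample
    pred = a_hat * math.sin(x)          # model prediction from same input
    a_hat += eta * (x_next - pred) * math.sin(x)   # LMS parameter update
    x = x_next
```

Each step shrinks the parameter error by a factor of roughly 1 - eta*sin^2(x), so the driven model converges to the data-generating parameter as long as the drive keeps the regressor nonzero.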
Natesan, Prathiba
2009-05-15T23:59:59.000Z
Disciplines .... 18 2 BIAS, RMSD, and Pearson's Correlation (r) for Discrimination and Threshold Parameters for Simulated Conditions .... 44 3 χ² Values from Factorial ANOVAs... for the Simulation Design Features Explaining the Variabilities in Discrimination and Threshold Parameters .... 49 4 Ethnic Composition...
Nelson, C.
1995-08-01T23:59:59.000Z
Under Title III, Section 112 of the 1990 Clean Air Act Amendments, Congress directed the U.S. Environmental Protection Agency (EPA) to perform a study of the hazards to the public resulting from pollutants emitted by electric utility system generating units. Radionuclides are among the groups of pollutants listed in the amendments. This report updates previously published data and estimates with more recently available information regarding the radionuclide contents of fossil fuels, the associated emissions from steam-electric power plants, and potential health effects in exposed population groups.
Carter, Joshua A.; Winn, Joshua N., E-mail: carterja@mit.ed, E-mail: jwinn@mit.ed [Department of Physics and Kavli Institute for Astrophysics and Space Research, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States)
2009-10-10T23:59:59.000Z
We consider the problem of fitting a parametric model to time-series data that are afflicted by correlated noise. The noise is represented by a sum of two stationary Gaussian processes: one that is uncorrelated in time, and another that has a power spectral density varying as 1/f^γ. We present an accurate and fast [O(N)] algorithm for parameter estimation based on computing the likelihood in a wavelet basis. The method is illustrated and tested using simulated time-series photometry of exoplanetary transits, with particular attention to estimating the mid-transit time. We compare our method to two other methods that have been used in the literature, the time-averaging method and the residual-permutation method. For noise processes that obey our assumptions, the algorithm presented here gives more accurate results for mid-transit times and truer estimates of their uncertainties.
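The wavelet-basis likelihood idea can be sketched as follows: a Haar transform approximately whitens 1/f^γ noise, so the likelihood factorizes over detail coefficients with a level-dependent variance. The sigma2 * 2**(-gamma * m) scaling law used here is schematic, not the paper's calibrated Daubechies-based formulation.

```python
import math

# Sketch of the wavelet-basis likelihood: a Haar transform approximately
# whitens 1/f^gamma noise, so the Gaussian likelihood of model residuals
# factorizes over detail coefficients with a level-dependent variance
# var_m = sigma2 * 2**(-gamma * m). This variance law is an assumed,
# schematic stand-in for the paper's calibrated formulation.

def haar_levels(x):
    # Detail coefficients grouped by level; len(x) must be a power of 2.
    levels, approx = [], list(x)
    while len(approx) > 1:
        detail = [(approx[i] - approx[i + 1]) / math.sqrt(2.0)
                  for i in range(0, len(approx), 2)]
        approx = [(approx[i] + approx[i + 1]) / math.sqrt(2.0)
                  for i in range(0, len(approx), 2)]
        levels.append(detail)
    return levels

def loglike(residual, sigma2, gamma):
    # Gaussian log-likelihood of model residuals in the wavelet basis;
    # each transform pass is O(current length), so the total is O(N).
    ll = 0.0
    for m, detail in enumerate(haar_levels(residual), start=1):
        var = sigma2 * 2.0 ** (-gamma * m)
        ll += sum(-0.5 * (d * d / var + math.log(2.0 * math.pi * var))
                  for d in detail)
    return ll
```

In a fitting loop, `residual` would be data minus a transit model, and `loglike` would be maximized (or sampled) over the model parameters together with sigma2 and gamma.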
Parameter Estimates for a PEMFC Cathode Qingzhi Guo,* Vijay A. Sethuraman,* and Ralph E. White**,z
Sethuraman, Vijay A.
PEMFC cathode the volume fraction of gas pores in the gas diffusion layer, the volume fraction of gas in this work indicate that ionic conduction and gas-phase transport are two processes significantly influencing submitted air cathode model8 that includes gas pores in the CAL to estimate the values of the volume
Povinelli, Richard J.
to the fact that in a no-load situation only the electrical circuits of the stator windings carry the currents estimation approach. In this technique, the stator currents, voltages and motor speed are used as the input detection [1-6]. Some of these works have mainly used the frequency spectrum of the stator current for rotor
Biomass Power Generation Market Capacity is Estimated to Reach 122,331.6 MW
Paris-Sud XI, Université de
Hydraulic Parameters. Marcel Bawindsom Kébré, Fabien Cherblanc, François Ouédraogo, Jean-Claude Bénet. BP 7021, Burkina Faso. Abstract: The unsaturated soil hydraulic properties (the soil water. After a short review of alternative modeling approaches for the hydraulic functions from saturation
Estimate of the Sources of Plutonium-Containing Wastes Generated from MOX Fuel Production in Russia
Kudinov, K. G.; Tretyakov, A. A.; Sorokin, Yu. P.; Bondin, V. V.; Manakova, L. F.; Jardine, L. J.
2002-02-26T23:59:59.000Z
In Russia, mixed oxide (MOX) fuel is produced in a pilot facility, ''Paket,'' at the ''MAYAK'' Production Association. The Mining-Chemical Combine (MCC) has developed plans to design and build a dedicated industrial-scale plant to produce MOX fuel and fuel assemblies (FA) for VVER-1000 water reactors and the BN-600 fast-breeder reactor, pending an official Russian Federation (RF) site-selection decision. The design output of the plant is based on a production capacity of 2.75 tons of weapons plutonium per year to produce the resulting fuel assemblies: 1.25 tons for the BN-600 reactor FAs and the remaining 1.5 tons for VVER-1000 FAs. It is likely that the quantity of BN-600 FAs will be reduced in actual practice. The process of nuclear disarmament frees a significant amount of weapons plutonium for other uses, which, if unutilized, represents a constant general threat. In France, Great Britain, Belgium, Russia, and Japan, reactor-grade plutonium is used in MOX-fuel production. Making MOX fuel for CANDU (Canada) and pressurized water reactors (PWR) (Europe) is under consideration in Russia. If this latter production is added, as many as 5 tons of Pu per year might be processed into new FAs in Russia. Many years of work and experience are represented in the estimates of MOX fuel production wastes derived in this report. Prior engineering studies and sludge treatment investigations and comparisons have determined how best to treat Pu sludges and MOX fuel wastes. Based upon analyses of the production processes established by these efforts, we estimate that there will be approximately 1200 kg of residual wastes subject to immobilization per metric ton (MT) of plutonium processed, of which approximately 6 to 7 kg is Pu. The wastes are varied and complex in composition.
Because organic wastes constitute both the major portion of total waste and of the Pu to be immobilized, the recommended treatment of MOX-fuel production waste is incineration or calcination, alkali sintering, and dissolution of sintered products in nitric acid. Insoluble residues are then mixed with vitrifying components and Pu sludges, vitrified, and sent for storage and disposal. Implementation of the intergovernmental agreement between Russia and the United States (US) regarding the utilization of 34 tons of weapons plutonium will also require treatment of Pu containing MOX fabrication wastes at the MCC radiochemical production plant.
Holappa, Lauri; Asikainen, Timo
2015-01-01T23:59:59.000Z
In this paper, we study two sets of local geomagnetic indices from 26 stations using the principal component (PC) and the independent component (IC) analysis methods. We demonstrate that the annually averaged indices can be accurately represented as linear combinations of two first components with weights systematically depending on latitude. We show that the annual contributions of coronal mass ejections (CMEs) and high speed streams (HSSs) to geomagnetic activity are highly correlated with the first and second IC. The first and second ICs are also found to be very highly correlated with the strength of the interplanetary magnetic field (IMF) and the solar wind speed, respectively, because solar wind speed is the most important parameter driving geomagnetic activity during HSSs while IMF strength dominates during CMEs. These results help in better understanding the long-term driving of geomagnetic activity and in gaining information about the long-term evolution of solar wind parameters and the different sol...
Gonzales, Sergio Eduardo
2013-07-23T23:59:59.000Z
In the area of oil shales, in order to design more efficient, accurate, and cost-effective hydraulic fracture jobs, there must be a better understanding of the relationships between reservoir and fracture parameters and how they affect performance... methane (CBM), basin-centered gas, shale gas, gas hydrates, natural bitumen, and oil shale deposits. Typically, such accumulations require specialized extraction technology (e.g., dewatering of CBM, massive fracturing programs for shale gas, steam and...
Life Estimation of PWR Steam Generator U-Tubes Subjected to Foreign Object-Induced Fretting Wear
Jo, Jong Chull; Jhung, Myung Jo; Kim, Woong Sik; Kim, Hho Jung [Korea Institute of Nuclear Safety (Korea, Republic of)]
2005-10-15T23:59:59.000Z
This paper presents an approach to the remaining life prediction of steam generator (SG) U-tubes, which are intact initially, subjected to fretting-wear degradation due to the interaction between a vibrating tube and a foreign object in operating nuclear power plants. The operating SG shell-side flow field conditions are obtained from a three-dimensional SG flow calculation using the ATHOS3 code. Modal analyses are performed for the finite element models of U-tubes to get the natural frequency, corresponding mode shape, and participation factor. The wear rate of a U-tube caused by a foreign object is calculated using the Archard formula, and the remaining life of the tube is predicted. Also discussed in this study are the effects of the tube modal characteristics, external flow velocity, and tube internal pressure on the estimated results of the remaining life of the tube.
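The wear-rate step above rests on the Archard relation, in which the worn volume grows with contact force and sliding distance and inversely with hardness. A minimal sketch of that bookkeeping (all numerical values are hypothetical illustrations, not the paper's data):

```python
def archard_wear_volume(k, normal_force, sliding_distance, hardness):
    """Archard formula: wear volume V = K * F * s / H."""
    return k * normal_force * sliding_distance / hardness

def remaining_life_years(allowable_wear_volume, wear_volume_per_year):
    """Years until accumulated wear reaches the allowable volume loss."""
    return allowable_wear_volume / wear_volume_per_year

# Illustrative (hypothetical) values: wear coefficient, contact force [N],
# annual sliding distance [m], and tube hardness [Pa].
v_per_year = archard_wear_volume(k=1e-4, normal_force=0.5,
                                 sliding_distance=5e3, hardness=1.8e9)
life = remaining_life_years(allowable_wear_volume=1e-9,
                            wear_volume_per_year=v_per_year)
```

In the paper the force and sliding distance come from the tube's flow-induced vibration (via the ATHOS3 flow field and the modal analysis); here they are simply given numbers.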
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Labs, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Labs, Livermore, CA); Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Giunta, Anthony A.; Brown, Shannon L.
2006-10-01T23:59:59.000Z
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
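The "parameter estimation with nonlinear least squares methods" capability mentioned above can be illustrated with a toy Gauss-Newton loop. This is a stand-in sketch, not DAKOTA's implementation; the model, data, and starting guess are all assumptions for illustration:

```python
import numpy as np

# Fit y = a * exp(b * x) to data by Gauss-Newton nonlinear least squares.
def model(params, x):
    a, b = params
    return a * np.exp(b * x)

def gauss_newton(x, y, params, iters=20):
    for _ in range(iters):
        a, b = params
        e = np.exp(b * x)
        J = np.column_stack([e, a * x * e])   # d(model)/da, d(model)/db
        r = y - model(params, x)              # current residuals
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        params = params + step
    return params

x = np.linspace(0.0, 2.0, 20)
y = model(np.array([2.0, -1.0]), x)           # noise-free synthetic data
est = gauss_newton(x, y, np.array([1.5, -0.8]))
```

With noise-free data and a reasonable starting guess the iteration recovers a = 2, b = -1; production tools like DAKOTA add trust regions, scaling, and bound handling around this core idea.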
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01T23:59:59.000Z
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a reference manual for the commands specification for the DAKOTA software, providing input overviews, option descriptions, and example specifications.
Boyer, Edmond
Propylene glycol monomethyl ether: a 3-generation study of isomer effects on reproductive toxicity. E. Lemazurier et al., INERIS (ineris-00961896, version 1, 20 Mar 2014). Contact: emmanuel.lemazurier@ineris.fr
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01T23:59:59.000Z
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Griffin, Joshua D. (Sandia National Labs, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Labs, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Labs, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Labs, Livermore, CA); Hough, Patricia Diane (Sandia National Labs, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01T23:59:59.000Z
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
Griffin, Joshua D. (Sandia National Laboratories, Livermore, CA); Eldred, Michael Scott; Martinez-Canales, Monica L. (Sandia National Laboratories, Livermore, CA); Watson, Jean-Paul; Kolda, Tamara Gibson (Sandia National Laboratories, Livermore, CA); Giunta, Anthony Andrew; Adams, Brian M.; Swiler, Laura Painton; Williams, Pamela J. (Sandia National Laboratories, Livermore, CA); Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Dunlavy, Daniel M.; Eddy, John P.; Hart, William Eugene; Brown, Shannon L.
2006-10-01T23:59:59.000Z
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a developers manual for the DAKOTA software and describes the DAKOTA class hierarchies and their interrelationships. It derives directly from annotation of the actual source code and provides detailed class documentation, including all member functions and attributes.
Eldred, Michael Scott; Dalbey, Keith R.; Bohnhoff, William J.; Adams, Brian M.; Swiler, Laura Painton; Hough, Patricia Diane (Sandia National Laboratories, Livermore, CA); Gay, David M.; Eddy, John P.; Haskell, Karen H.
2010-05-01T23:59:59.000Z
The DAKOTA (Design Analysis Kit for Optimization and Terascale Applications) toolkit provides a flexible and extensible interface between simulation codes and iterative analysis methods. DAKOTA contains algorithms for optimization with gradient and nongradient-based methods; uncertainty quantification with sampling, reliability, and stochastic finite element methods; parameter estimation with nonlinear least squares methods; and sensitivity/variance analysis with design of experiments and parameter study methods. These capabilities may be used on their own or as components within advanced strategies such as surrogate-based optimization, mixed integer nonlinear programming, or optimization under uncertainty. By employing object-oriented design to implement abstractions of the key components required for iterative systems analyses, the DAKOTA toolkit provides a flexible and extensible problem-solving environment for design and performance analysis of computational models on high performance computers. This report serves as a user's manual for the DAKOTA software and provides capability overviews and procedures for software execution, as well as a variety of example studies.
In Human-Computer Interfaces (HCI), gaze information coordinated with other user inputs can be valuable in many fields. In psychology and sociology, gaze information helps to infer the inner states of people. (Figure residue: color distributions, visual target, geometric parameters, eyelid opening and location, person-wise parameters.)
Applications of the Bayesian approach for experimentation and estimation
De Man, Patrick A. P. (Patrick Antonius Petrus)
2006-01-01T23:59:59.000Z
A Bayesian framework for systematic data collection and parameter estimation is proposed to aid experimentalists in effectively generating and interpreting data. The four stages of the Bayesian framework are: system ...
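The Bayesian update at the heart of such a framework can be sketched on a parameter grid: a prior over candidate parameter values is multiplied by the likelihood of each observation and renormalized. The Gaussian model and the synthetic data below are illustrative assumptions, not the thesis's experiments:

```python
import numpy as np

# Grid-based Bayesian estimation of the mean of Gaussian data (known sigma).
theta = np.linspace(-5.0, 5.0, 2001)   # candidate parameter values
prior = np.ones_like(theta)
prior /= prior.sum()                   # flat prior, normalized

def bayes_update(prior, theta, datum, sigma=1.0):
    like = np.exp(-0.5 * ((datum - theta) / sigma) ** 2)
    post = prior * like
    return post / post.sum()           # renormalize after each observation

post = prior
for y in [1.9, 2.1, 2.0]:              # synthetic observations around 2.0
    post = bayes_update(post, theta, y)

estimate = theta[np.argmax(post)]      # MAP estimate
```

Each new datum sharpens the posterior around the data mean, which is exactly the "data collection guides the next experiment" loop the abstract describes.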
Kissling, W. Daniel
2013-01-01T23:59:59.000Z
Estimating and predicting species-level extinction risk under climate change, including the role of species interactions, to assess the extinction risk of select species.
Lee, Young Sun; Beers, Timothy C. [Department of Physics and Astronomy and JINA (Joint Institute for Nuclear Astrophysics), Michigan State University, East Lansing, MI 48824 (United States); Prieto, Carlos Allende [Instituto de Astrofisica de Canarias, E-38205 La Laguna, Tenerife (Spain); Lai, David K.; Rockosi, Constance M. [UCO/Lick Observatory, Department of Astronomy and Astrophysics, University of California, Santa Cruz, CA 95064 (United States); Morrison, Heather L. [Department of Astronomy, Case Western Reserve University, Cleveland, OH 44106 (United States); Johnson, Jennifer A. [Department of Astronomy, Ohio State University, Columbus, OH 43210 (United States); An, Deokkeun [Department of Science Education, Ewha Womans University, Seoul 120-750 (Korea, Republic of); Sivarani, Thirupathi [Indian Institute of Astrophysics, 2nd block Koramangala, Bangalore 560034 (India); Yanny, Brian, E-mail: lee@pa.msu.edu, E-mail: beers@pa.msu.edu, E-mail: cap@mssl.ucl.ac.uk, E-mail: david@ucolick.org, E-mail: crockosi@ucolikc.org, E-mail: heather@vegemite.case.edu, E-mail: jaj@astronomy.ohio-state.edu, E-mail: deokkeun@ewha.ac.kr, E-mail: sivarani@iiap.res.in, E-mail: yanny@fnal.gov [Fermi National Accelerator Laboratory, Batavia, IL 60510 (United States)
2011-03-15T23:59:59.000Z
We present a method for the determination of [α/Fe] ratios from low-resolution (R = 2000) SDSS/SEGUE stellar spectra. By means of a star-by-star comparison with degraded spectra from the ELODIE spectral library and with a set of moderately high-resolution (R = 15,000) and medium-resolution (R = 6000) spectra of SDSS/SEGUE stars, we demonstrate that we are able to measure [α/Fe] from SDSS/SEGUE spectra (with S/N > 20/1) to a precision of better than 0.1 dex, for stars with atmospheric parameters in the range T_eff = [4500, 7000] K, log g = [1.5, 5.0], and [Fe/H] = [-1.4, +0.3], over the range [α/Fe] = [-0.1, +0.6]. For stars with [Fe/H] < -1.4, our method requires spectra with slightly higher signal-to-noise to achieve this precision (S/N > 25/1). Over the full temperature range considered, the lowest-metallicity star for which a confident estimate of [α/Fe] can be obtained from our approach is [Fe/H] ≈ -2.5; preliminary tests indicate that a metallicity limit as low as [Fe/H] ≈ -3.0 may apply to cooler stars. As a further validation of this approach, weighted averages of [α/Fe] obtained for SEGUE spectra of likely member stars of Galactic globular clusters (M15, M13, and M71) and open clusters (NGC 2420, M67, and NGC 6791) exhibit good agreement with the values of [α/Fe] from previous studies. The results of the comparison with NGC 6791 imply that the metallicity range for the method may extend to ≈ +0.5.
Kan, Jimmy Hung-Kei
1958-01-01T23:59:59.000Z
Table-of-contents excerpt (OCR-damaged): Estimation by the Regression Method (p. 66); Diallel Mating (p. 67); Test of Significance of Interaction Mean Square; Estimation of Non-Additive Gene Effects; The First Method; the portions of genetic variance contained in the different components (p. 37); a comparison of reproductive efficiency of the four mating types.
Zhang, Yunpeng; Li, En, E-mail: lien@uestc.edu.cn; Guo, Gaofeng; Xu, Jiadi; Wang, Chao [School of Electronic Engineering, University of Electronic Science and Technology of China, Chengdu 611731 (China)
2014-09-15T23:59:59.000Z
A pair of spot-focusing horn lens antennas is the key component in a free-space measurement system. The electromagnetic constitutive parameters of a planar sample are determined using transmitted and reflected electromagnetic beams. These parameters are obtained from the scattering parameters measured by a microwave network analyzer, the thickness of the sample, and the wavelength of the beam focused on the sample. Free-space techniques introduced by most papers take the focused wavelength to be the free-space wavelength. In fact, the incident wave projected by a lens into the sample approximates a Gaussian beam; thus there is an elongation of the wavelength in the focused beam, and this elongation should be taken into consideration in dielectric and magnetic measurements. In this paper, the elongation of the wavelength has been analyzed and measured. Measurement results show that the focused wavelength in the vicinity of the focus is elongated by 1%–5% relative to the free-space wavelength. The elongation's influence on the measured permittivity and permeability has been investigated. Numerical analyses show that the elongation of the focused wavelength increases the measured value of the permeability relative to the traditionally measured value; the permittivity, however, is affected by several parameters and may increase or decrease relative to the traditionally measured value.
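One standard way to quantify this elongation (an assumption here, since the abstract does not give its derivation) uses the Gouy phase of a Gaussian beam: on axis, the local wavenumber at the waist is reduced to k - 1/z_R, so the focused wavelength is slightly longer than the free-space one. The beam waist value below is a hypothetical illustration:

```python
import math

def focused_wavelength(lmbda, w0):
    """On-axis wavelength at the waist of a Gaussian beam.

    Gouy-phase model: k_eff = k - 1/z_R with z_R = pi * w0**2 / lmbda,
    so lambda_eff = 2*pi / k_eff > lmbda.
    """
    z_r = math.pi * w0 ** 2 / lmbda
    k = 2.0 * math.pi / lmbda
    return 2.0 * math.pi / (k - 1.0 / z_r)

lmbda = 0.03            # free-space wavelength at 10 GHz, in metres
w0 = 2.2 * lmbda        # hypothetical waist radius of the focused beam
elongation = focused_wavelength(lmbda, w0) / lmbda - 1.0
```

For a waist of a few wavelengths, as is typical of spot-focusing lens antennas, this simple model predicts an elongation of roughly a percent, consistent with the 1%–5% the paper reports measuring.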
Stovall, Therese K. [ORNL]; Bogdan, Mary [Honeywell, Inc.]
2008-01-01T23:59:59.000Z
The thermal conductivity of many closed-cell foam insulation products changes over time as production gases diffuse out of the cell matrix and atmospheric gases diffuse into the cells. Thin slicing has been shown to be an effective means of accelerating this process in such a way as to produce meaningful results. Recent efforts to produce a more prescriptive version of the ASTM standard test method have led to the initiation of a broad ruggedness test. This test includes the aging of full-size insulation specimens for time periods up to five years for later comparison to the predicted results. Experimental parameters under investigation include: slice thickness, slice origin (at the surface or from the core of the slab), thin-slice stack composition, product facings, original product thickness, product density, and product type. This paper covers the structure of the ruggedness test and provides a glimpse of some early trends.
Joko Tingkir program for estimating tsunami potential rapidly
Madlazim,, E-mail: m-lazim@physics.its.ac.id; Hariyono, E., E-mail: m-lazim@physics.its.ac.id [Department of Physics, Faculty of Mathematics and Natural Sciences, Universitas Negeri Surabaya (UNESA) , Jl. Ketintang, Surabaya 60231 (Indonesia)
2014-09-25T23:59:59.000Z
The purpose of the study was to estimate P-wave rupture durations (Tdur), dominant periods (Td), and exceedance durations (T50Ex) simultaneously for local events: shallow earthquakes which occurred off the coast of Indonesia. Although all of the earthquakes had magnitudes greater than 6.3 and depths less than 70 km, some of them generated a tsunami while other events (Mw = 7.8) did not. Analysis of the above parameters using Joko Tingkir helped in understanding the tsunami generation of these earthquakes. Measurements from vertical-component broadband P-wave velocity records and determination of the above parameters provide a direct procedure for rapidly assessing the potential for tsunami generation. The results of the present study and the analysis of the seismic parameters helped explain why some events generated a tsunami while the others did not.
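Screening on duration and period parameters like these amounts to simple threshold tests. The sketch below shows the shape of such a discriminant; the threshold values are illustrative assumptions, not the published Joko Tingkir criteria:

```python
def tsunami_flags(t_dur, t_d, t_50ex):
    """Flag a shallow event as potentially tsunamigenic from P-wave
    measurements. Thresholds below are hypothetical illustrations."""
    return {
        "long_rupture": t_dur > 65.0,         # rupture duration, seconds
        "long_dominant_period": t_d > 10.0,   # dominant period, seconds
        "duration_exceedance": t_50ex > 1.0,  # exceedance-duration ratio
    }

flags = tsunami_flags(t_dur=90.0, t_d=12.5, t_50ex=1.3)
potential_tsunami = all(flags.values())
```

An event that trips all three flags would be escalated for rapid warning; an event that trips none would not, which mirrors the study's finding that magnitude and depth alone do not separate the two populations.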
New developments in event generator tuning techniques
Andy Buckley; Hendrik Hoeth; Heiko Lacker; Holger Schulz; Jan Eike von Seggern
2010-05-28T23:59:59.000Z
Data analyses in hadron collider physics depend on background simulations performed by Monte Carlo (MC) event generators. However, calculational limitations and non-perturbative effects require approximate models with adjustable parameters. In fact, we need to simultaneously tune many phenomenological parameters in a high-dimensional parameter-space in order to make the MC generator predictions fit the data. It is desirable to achieve this goal without spending too much time or computing resources iterating parameter settings and comparing the same set of plots over and over again. We present extensions and improvements to the MC tuning system, Professor, which addresses the aforementioned problems by constructing a fast analytic model of a MC generator which can then be easily fitted to data. Using this procedure it is for the first time possible to get a robust estimate of the uncertainty of generator tunings. Furthermore, we can use these uncertainty estimates to study the effect of new (pseudo-) data on the quality of tunings and therefore decide if a measurement is worthwhile in the prospect of generator tuning. The potential of the Professor method outside the MC tuning area is presented as well.
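The core trick described above, replacing the expensive generator with a fast analytic (polynomial) model of its parameter response and fitting that to data, can be sketched in one dimension. The toy "generator" and the measured value are assumptions for illustration, not Professor itself:

```python
import numpy as np

# Professor-style tuning sketch: sample the parameter space, fit a quadratic
# surrogate to the generator response, then tune on the cheap surrogate.
def generator(p):
    """Stand-in for an expensive MC generator observable."""
    return (p - 1.3) ** 2 + 0.5

samples = np.linspace(0.0, 3.0, 7)                 # anchor runs
responses = np.array([generator(p) for p in samples])

# observable(p) ~ c2*p^2 + c1*p + c0 (exact here, since the toy is quadratic)
coeffs = np.polyfit(samples, responses, deg=2)

data_value = 0.5                                   # "measured" value to match
grid = np.linspace(0.0, 3.0, 3001)
surrogate = np.polyval(coeffs, grid)
best_p = grid[np.argmin((surrogate - data_value) ** 2)]
```

Seven generator runs suffice to tune the parameter; evaluating the surrogate thousands of times costs essentially nothing, which is the point of the method. The real system does this per histogram bin in a high-dimensional parameter space.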
A Sparse Representation Approach to Online Estimation of Power System Distribution Factors
Liberzon, Daniel
Distribution factors are constructed from the transmission network, line parameters, and historical and forecasted power generation. The system must remain secure after the outage of any single element (e.g., a transmission line or generator), a condition known as N-1 security [2], using an up-to-date system model.
Lognormal parameter estimation with censored data
Zeis, Charles David
1970-01-01T23:59:59.000Z
whereby the expected bias, B(θ̂), is approximately given as

B(θ̂) = E(θ̂ − θ).    (3.30)

More explicitly, the approximate expected biases are:

b(μ̂) = v11 E(∂L/∂μ) + v12 E(∂L/∂σ) + v13 E(∂L/∂x),    (3.31)
b(σ̂) = v21 E(∂L/∂μ) + v22 E(∂L/∂σ) + v23 E(∂L/∂x),    (3.32)
b(x̂) = v31 E(∂L/∂μ) + v32 E(∂L/∂σ) + v33 E(∂L/∂x),    (3.33)

where vij is the ij-th element of the variance-covariance matrix. It can be seen that the evaluation of the biases...
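The censored-data log-likelihood that underlies this bias analysis can be written down and maximized directly. A minimal sketch with made-up data and a crude grid search (illustrative only, not the thesis's method): observed lifetimes contribute the lognormal density, right-censored ones contribute the survival probability.

```python
import math

def loglik(mu, sigma, observed, censored):
    """Log-likelihood for lognormal data with right-censored observations."""
    ll = 0.0
    for t in observed:                       # density term
        z = (math.log(t) - mu) / sigma
        ll += -math.log(t * sigma * math.sqrt(2.0 * math.pi)) - 0.5 * z * z
    for t in censored:                       # log survival term, log S(t)
        z = (math.log(t) - mu) / sigma
        ll += math.log(0.5 * math.erfc(z / math.sqrt(2.0)))
    return ll

observed = [1.2, 0.8, 1.5, 2.1, 0.9]         # hypothetical failure times
censored = [3.0, 3.0]                        # units still running at t = 3.0

best = max(
    ((mu, sg) for mu in [i * 0.01 for i in range(-100, 151)]
              for sg in [j * 0.01 for j in range(10, 151)]),
    key=lambda p: loglik(p[0], p[1], observed, censored),
)
```

The censored points pull the estimated μ above the sample mean of the observed log-lifetimes, which is exactly the effect whose bias the equations above quantify.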
Parameter estimation of vector controlled induction machine
Rahman, Tahmid Ur
2002-01-01T23:59:59.000Z
and operation at low speed, indirect vector control is becoming popular. But as the rotor time constant varies, detuning of the control system occurs. Without the exact rotor time constant, the slip frequency calculation becomes inexact. In this thesis most...
Cardiovascular parameter estimation using a computational model
Samar, Zaid
2005-01-01T23:59:59.000Z
Modern intensive care units are equipped with a wide range of patient monitoring devices, each continuously recording signals produced by the human body. Currently, these signals need to be interpreted by a clinician in ...
Nonlinear parameter estimation in parallel computing environments
Li, Jie
1996-01-01T23:59:59.000Z
to solve these issues with respect to PEST. We then propose a hierarchical parallel control structure for PEST based on the manager-worker parallel programming model. We also discuss in detail the implementation of the parallel version of PEST in an Intel...
Thermal Hydraulic Simulations, Error Estimation and Parameter
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Importance and sensitivity of parameters affecting the Zion Seismic Risk
George, L.L.; O'Connell, W.J.
1985-06-01T23:59:59.000Z
This report presents the results of a study on the importance and sensitivity of structures, systems, equipment, components and design parameters used in the Zion Seismic Risk Calculations. This study is part of the Seismic Safety Margins Research Program (SSMRP) supported by the NRC Office of Nuclear Regulatory Research. The objective of this study is to provide the NRC with results on the importance and sensitivity of parameters used to evaluate seismic risk. These results can assist the NRC in making decisions dealing with the allocation of research resources on seismic issues. This study uses marginal analysis in addition to importance and sensitivity analysis to identify subject areas (input parameter areas) for improvements that reduce risk, estimate how much the improvement efforts reduce risk, and rank the subject areas for improvements. Importance analysis identifies the systems, components, and parameters that are important to risk. Sensitivity analysis estimates the change in risk per unit improvement. Marginal analysis indicates the reduction in risk or uncertainty for improvement effort made in each subject area. The results described in this study were generated using the SEISIM (Systematic Evaluation of Important Safety Improvement Measures) and CHAIN computer codes. Part 1 of the SEISIM computer code generated the failure probabilities and risk values. Part 2 of SEISIM, along with the CHAIN computer code, generated the importance and sensitivity measures.
A simple method to estimate interwell autocorrelation
Pizarro, J.O.S.; Lake, L.W. [Univ. of Texas, Austin, TX (United States)
1997-08-01T23:59:59.000Z
The estimation of autocorrelation in the lateral or interwell direction is important when performing reservoir characterization studies using stochastic modeling. This paper presents a new method to estimate the interwell autocorrelation based on parameters, such as the vertical range and the variance, that can be estimated with commonly available data. We used synthetic fields that were generated from stochastic simulations to provide data to construct the estimation charts. These charts relate the ratio of areal to vertical variance and the autocorrelation range (expressed variously) in two directions. Three different semivariogram models were considered: spherical, exponential and truncated fractal. The overall procedure is demonstrated using field data. We find that the approach gives the most self-consistent results when it is applied to previously identified facies. Moreover, the autocorrelation trends follow the depositional pattern of the reservoir, which gives confidence in the validity of the approach.
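The three semivariogram models named in the abstract differ mainly in how quickly they approach the sill. A minimal sketch of two of them (the parameter values are hypothetical; the truncated fractal model is omitted):

```python
import math

def spherical(h, rng, sill):
    """Spherical semivariogram: reaches the sill exactly at the range."""
    if h >= rng:
        return sill
    r = h / rng
    return sill * (1.5 * r - 0.5 * r ** 3)

def exponential(h, rng, sill):
    """Exponential semivariogram: approaches the sill asymptotically
    (practical range taken where it reaches ~95% of the sill)."""
    return sill * (1.0 - math.exp(-3.0 * h / rng))

# Evaluate both models at half the (hypothetical) range.
gs = spherical(50.0, rng=100.0, sill=1.0)
ge = exponential(50.0, rng=100.0, sill=1.0)
```

In the paper's workflow, curves like these are fitted vertically from well data, and the estimation charts then map the areal-to-vertical variance ratio onto an interwell (lateral) range for the chosen model.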
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28T23:59:59.000Z
Based on the project's scope, the purpose of the estimate, and the availability of estimating resources, the estimator can choose one or a combination of techniques when estimating an activity or project. Estimating methods, estimating indirect and direct costs, and other estimating considerations are discussed in this chapter.
Paul, Sabyasachi; Sahoo, G. S.; Tripathy, S. P., E-mail: sam.tripathy@gmail.com, E-mail: tripathy@barc.gov.in; Sunil, C.; Bandyopadhyay, T. [Health Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)]; Sharma, S. C.; Ramjilal; Ninawe, N. G.; Gupta, A. K. [Nuclear Physics Division, Bhabha Atomic Research Centre, Mumbai 400085 (India)]
2014-06-15T23:59:59.000Z
A systematic study on the measurement of neutron spectra emitted from the interaction of protons of various energies with a thick beryllium target has been carried out. The measurements were made in the forward direction (at 0° with respect to the direction of the protons) using CR-39 detectors. The doses were estimated using the in-house image-analyzing program autoTRAK-n, which works on the principle of luminosity variation in and around the track boundaries. Six proton energies, from 4 MeV to 24 MeV in steps of 4 MeV, were chosen for the study of the neutron yields and the estimation of doses. Nearly 92% of the recoil tracks developed after chemical etching were circular in nature, but the size distributions of the recoil tracks were not found to be linearly dependent on the projectile energy. The neutron yield and dose values were found to increase linearly with increasing projectile energy. The response of the CR-39 detector was also investigated at different beam currents at two different proton energies. A linear increase of neutron yield with beam current was observed.
The energy balancing parameter
Walton R. Gutierrez
2011-05-10T23:59:59.000Z
A parameter method is introduced in order to estimate the relationship among the various variables of a system in equilibrium, where the potential energy functions are incompletely known or the quantum mechanical calculations are very difficult. No formal proof of the method is given; instead, a sufficient number of valuable examples are shown to make the case for the method's usefulness in classical and quantum systems. The mathematical methods required are quite elementary: basic algebra and minimization of power functions. This method blends advantageously with a simple but powerful approximate method for quantum mechanics, sidestepping entirely formal operators and differential equations. It is applied to the derivation of various well-known results involving centrally symmetric potentials for a quantum particle, such as the hydrogen-like atom, the elastic potential, and other cases of interest. The same formulas provide estimates for previously unsolved cases. PACS: 03.65.-w 30.00.00
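The "minimization of power functions" idea can be shown on the standard hydrogen-like example: balance an uncertainty-principle kinetic term against the Coulomb term and minimize. In atomic units, E(r) = 1/(2r²) - Z/r, and the minimum reproduces r = 1/Z, E = -Z²/2. The paper does this algebraically; the grid search below just verifies the balance numerically:

```python
# Energy-balancing sketch for the hydrogen-like atom (atomic units):
# kinetic estimate ~ 1/(2 r^2), potential = -Z/r.
def energy(r, z=1.0):
    return 0.5 / r ** 2 - z / r

# Crude grid minimization over the balancing parameter r.
rs = [0.01 * k for k in range(1, 1001)]
r_min = min(rs, key=energy)
e_min = energy(r_min)
```

For Z = 1 the minimum sits at r = 1 Bohr radius with E = -0.5 hartree (-13.6 eV), the exact ground-state values, which is why this class of estimate is so effective for centrally symmetric potentials.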
Nicoli, Monica
Abstract: 3-D seismic surveys generate a 5-D data volume. In order to estimate the horizons for interpretation and further processing, traveltime picking needs to be performed on n-D subsets of this 5-D volume, to support the interpreters in the estimation of the events by preserving their depth continuity. The HP
Intermolecular potential parameters and combining rules determined from viscosity data
Bastien, Lucas A.J.; Price, Phillip N.; Brown, Nancy J.
2010-05-07T23:59:59.000Z
The Law of Corresponding States has been demonstrated for a number of pure substances and binary mixtures, and provides evidence that the transport properties viscosity and diffusion can be determined from a molecular shape function, often taken to be a Lennard-Jones 12-6 potential, that requires two scaling parameters: a well depth {var_epsilon}{sub ij} and a collision diameter {sigma}{sub ij}, both of which depend on the interacting species i and j. We obtain estimates for {var_epsilon}{sub ij} and {sigma}{sub ij} of interacting species by finding the values that provide the best fit to viscosity data for binary mixtures, and compare these to calculated parameters using several 'combining rules' that have been suggested for determining parameter values for binary collisions from parameter values that describe collisions of like molecules. Different combining rules give different values for {sigma}{sub ij} and {var_epsilon}{sub ij} and for some mixtures the differences between these values and the best-fit parameter values are rather large. There is a curve in ({var_epsilon}{sub ij}, {sigma}{sub ij}) space such that parameter values on the curve generate a calculated viscosity in good agreement with measurements for a pure gas or a binary mixture. The various combining rules produce couples of parameters {var_epsilon}{sub ij}, {sigma}{sub ij} that lie close to the curve and therefore generate predicted mixture viscosities in satisfactory agreement with experiment. Although the combining rules were found to underpredict the viscosity in most of the cases, Kong's rule was found to work better than the others, but none of the combining rules consistently yields parameter values near the best-fit values, suggesting that improved rules could be developed.
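The simplest of the combining rules of the kind compared in this study are the classic Lorentz-Berthelot rules: arithmetic mean for the collision diameter and geometric mean for the well depth. A minimal sketch follows; the N2/O2 Lennard-Jones values are representative literature numbers used purely for illustration, not parameters from this study.

```python
import math

def lorentz_berthelot(sigma_i, sigma_j, eps_i, eps_j):
    """Classic Lorentz-Berthelot combining rules for unlike-pair
    Lennard-Jones parameters: arithmetic mean for the collision
    diameter sigma_ij, geometric mean for the well depth eps_ij."""
    sigma_ij = 0.5 * (sigma_i + sigma_j)
    eps_ij = math.sqrt(eps_i * eps_j)
    return sigma_ij, eps_ij

# Representative literature values for N2 and O2
# (sigma in Angstrom, eps/k_B in Kelvin; illustrative only):
sigma_ij, eps_ij = lorentz_berthelot(3.798, 3.467, 71.4, 106.7)
```

The best-fit (epsilon_ij, sigma_ij) couples discussed in the abstract would be compared against the outputs of rules such as this one.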
Updated Capital Cost Estimates for Utility Scale Electricity Generating Plants
Updated Capital Cost Estimates for Utility Scale Electricity Generating Plants, April 2013. U.S. Energy Information Administration. Contents: Introduction
Sun, Yu; Hou, Zhangshuan; Huang, Maoyi; Tian, Fuqiang; Leung, Lai-Yung R.
2013-12-10T23:59:59.000Z
This study demonstrates the possibility of inverting hydrologic parameters using surface flux and runoff observations in version 4 of the Community Land Model (CLM4). Previous studies showed that surface flux and runoff calculations are sensitive to major hydrologic parameters in CLM4 over different watersheds, and illustrated the necessity and possibility of parameter calibration. Two inversion strategies, deterministic least-squares fitting and stochastic Markov-Chain Monte-Carlo (MCMC) Bayesian inversion, are evaluated by applying them to CLM4 at selected sites. The unknowns to be estimated include surface and subsurface runoff generation parameters and vadose zone soil water parameters. We find that using model parameters calibrated by least-squares fitting provides little improvement in the model simulations, but the sampling-based stochastic inversion approaches are consistent - as more information comes in, the predictive intervals of the calibrated parameters become narrower and the misfits between the calculated and observed responses decrease. In general, parameters that are identified to be significant through sensitivity analyses and statistical tests are better calibrated than those with weak or nonlinear impacts on flux or runoff observations. Temporal resolution of observations has larger impacts on the results of inverse modeling using heat flux data than runoff data. Soil and vegetation cover have important impacts on parameter sensitivities, leading to the different patterns of posterior distributions of parameters at different sites. Overall, the MCMC-Bayesian inversion approach effectively and reliably improves the simulation of CLM under different climates and environmental conditions. Bayesian model averaging of the posterior estimates with different reference acceptance probabilities can smooth the posterior distribution and provide more reliable parameter estimates, but at the expense of wider uncertainty bounds.
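The MCMC-Bayesian inversion strategy described above can be sketched with a minimal random-walk Metropolis sampler. This is a generic illustration, not the CLM4 setup: the linear "runoff model", forcing values, observations, and error variance are all invented for the example.

```python
import math
import random

def metropolis(log_post, x0, n_steps=5000, step=0.1):
    """Minimal random-walk Metropolis sampler: propose a Gaussian step,
    accept with probability min(1, posterior ratio)."""
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n_steps):
        xp = x + random.gauss(0.0, step)
        lpp = log_post(xp)
        if math.log(random.random()) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)
    return samples

# Hypothetical linear "runoff model" y = k * forcing + noise; infer k.
random.seed(1)
forcing = [1.0, 2.0, 3.0, 4.0]
obs = [2.1, 3.9, 6.2, 7.8]      # synthetic observations generated near k = 2

def log_post(k):
    # Flat prior; Gaussian likelihood with an assumed error variance of 0.1.
    return -0.5 * sum((o - k * f) ** 2 for f, o in zip(forcing, obs)) / 0.1

samples = metropolis(log_post, x0=0.0)
k_hat = sum(samples[1000:]) / len(samples[1000:])  # posterior mean after burn-in
```

The narrowing predictive intervals the abstract reports correspond, in this sketch, to the spread of `samples` shrinking as more observations enter `log_post`.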
Shafer, John M
2012-11-05T23:59:59.000Z
The three major components of this research were: 1. Application of minimally invasive, cost effective hydrogeophysical techniques (surface and borehole), to generate fine scale (~1m or less) 3D estimates of subsurface heterogeneity. Heterogeneity is defined as spatial variability in hydraulic conductivity and/or hydrolithologic zones. 2. Integration of the fine scale characterization of hydrogeologic parameters with the hydrogeologic facies to upscale the finer scale assessment of heterogeneity to field scale. 3. Determination of the relationship between dual-domain parameters and practical characterization data.
Peregudov, A.; Andrianova, O.; Raskach, K.; Tsibulya, A. [Inst. for Physics and Power Engineering, Bondarenko Square 1, Obninsk 244033, Kaluga Region (Russian Federation)
2012-07-01T23:59:59.000Z
A number of recent studies have been devoted to the estimation of errors of reactor calculation parameters by the GRS (Generation Random Sampled) method. This method is based on direct sampling of input data, resulting in the formation of random sets of input parameters which are used for multiple calculations. Once these calculations are performed, statistical processing of the calculation results is carried out to determine the mean value and the variance of each calculation parameter of interest. In our study this method is used for estimation of errors of calculation parameters (K{sub eff}, power density, dose rate) of a prospective sodium-cooled fast reactor. Neutron transport calculations were performed by the nodal diffusion code TRIGEX and the Monte Carlo code MMK. (authors)
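The sampling scheme described above reduces to a plain Monte Carlo loop: draw each input parameter from its uncertainty distribution, run the calculation, and post-process the outputs. A hedged sketch follows; the toy infinite-medium k model and all numerical values are invented for illustration and have nothing to do with the TRIGEX/MMK calculations.

```python
import random
import statistics

def k_inf(nu, sigma_f, sigma_a):
    """Toy stand-in for a neutronics code: infinite-medium multiplication
    factor k_inf = nu * sigma_f / sigma_a (illustrative only)."""
    return nu * sigma_f / sigma_a

random.seed(0)
results = []
for _ in range(1000):
    # Direct sampling of input data: each parameter is drawn from its
    # (assumed Gaussian) uncertainty distribution.
    nu = random.gauss(2.45, 0.01)
    sigma_f = random.gauss(1.9, 0.05)
    sigma_a = random.gauss(4.0, 0.08)
    results.append(k_inf(nu, sigma_f, sigma_a))

# Statistical processing of the multiple calculations:
k_mean = statistics.mean(results)
k_sd = statistics.stdev(results)
```

The standard deviation `k_sd` plays the role of the calculation-parameter error that the GRS method estimates.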
Heath, G.; O'Donoughue, P.; Whitaker, M.
2012-12-01T23:59:59.000Z
This research provides a systematic review and harmonization of the life cycle assessment (LCA) literature of electricity generated from conventionally produced natural gas. We focus on estimates of greenhouse gases (GHGs) emitted in the life cycle of electricity generation from conventionally produced natural gas in combustion turbines (NGCT) and combined-cycle (NGCC) systems. A process we term "harmonization" was employed to align several common system performance parameters and assumptions to better allow for cross-study comparisons, with the goal of clarifying central tendency and reducing variability in estimates of life cycle GHG emissions. This presentation summarizes preliminary results.
Estimation of food consumption
Callaway, J.M. Jr.
1992-04-01T23:59:59.000Z
The research reported in this document was conducted as a part of the Hanford Environmental Dose Reconstruction (HEDR) Project. The objective of the HEDR Project is to estimate the radiation doses that people could have received from operations at the Hanford Site. Information required to estimate these doses includes estimates of the amounts of potentially contaminated foods that individuals in the region consumed during the study period. In that general framework, the objective of the Food Consumption Task was to develop a capability to provide information about the parameters of the distribution(s) of daily food consumption for representative groups in the population for selected years during the study period. This report describes the methods and data used to estimate food consumption and presents the results developed for Phase I of the HEDR Project.
Parameters and error of a theoretical model
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01T23:59:59.000Z
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
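The procedure sketched above, simulated "experimental" data, a maximum-likelihood parameter adjustment, and a maximum-likelihood model error, can be illustrated on a toy linear model. The model, parameter values, and noise level below are all assumed for illustration; for Gaussian errors the maximum-likelihood fit reduces to least squares and the model-error estimate to the RMS residual.

```python
import math
import random

# Simulated "experimental" data generated with a random number generator:
# toy model y = a*x with Gaussian scatter (a = 1.5, sigma = 0.3 assumed).
random.seed(42)
xs = [float(i) for i in range(20)]
ys = [1.5 * x + random.gauss(0.0, 0.3) for x in xs]

# Maximum-likelihood adjustment of the model parameter (least squares
# in the Gaussian case):
a_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# Maximum-likelihood estimate of the model error: the RMS residual.
sigma_hat = math.sqrt(sum((y - a_hat * x) ** 2 for x, y in zip(xs, ys)) / len(xs))
```

Both the parameter and the error come out of the same likelihood, which is the essence of the definition the abstract proposes.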
Thermodynamic estimation: Ionic materials
Glasser, Leslie, E-mail: l.glasser@curtin.edu.au
2013-10-15T23:59:59.000Z
Thermodynamics establishes equilibrium relations among thermodynamic parameters (“properties”) and delineates the effects of variation of the thermodynamic functions (typically temperature and pressure) on those parameters. However, classical thermodynamics does not provide values for the necessary thermodynamic properties, which must be established by extra-thermodynamic means such as experiment, theoretical calculation, or empirical estimation. While many values may be found in the numerous collected tables in the literature, these are necessarily incomplete because either the experimental measurements have not been made or the materials may be hypothetical. The current paper presents a number of simple and reliable estimation methods for thermodynamic properties, principally for ionic materials. The results may also be used as a check for obvious errors in published values. The estimation methods described are typically based on addition of properties of individual ions, or sums of properties of neutral ion groups (such as “double” salts, in the Simple Salt Approximation), or based upon correlations such as with formula unit volumes (Volume-Based Thermodynamics). - Graphical abstract: Thermodynamic properties of ionic materials may be readily estimated by summation of the properties of individual ions, by summation of the properties of ‘double salts’, and by correlation with formula volume. Such estimates may fill gaps in the literature, and may also be used as checks of published values. This simplicity arises from exploitation of the fact that repulsive energy terms are of short range and very similar across materials, while coulombic interactions provide a very large component of the attractive energy in ionic systems. - Highlights: • Estimation methods for thermodynamic properties of ionic materials are introduced. • Methods are based on summation of single ions, multiple salts, and correlations.
• Heat capacity, entropy, lattice energy, enthalpy, Gibbs energy values are available.
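The formula-unit-volume correlation mentioned above (Volume-Based Thermodynamics) can be sketched in its general form; this is the shape of the Glasser-Jenkins lattice-energy relation, with fitted constants alpha and beta that depend on the salt stoichiometry (the specific constants are deliberately not quoted here):

```latex
U_{\mathrm{POT}} \;\approx\; 2 I \left( \frac{\alpha}{V_m^{1/3}} + \beta \right),
\qquad
I = \tfrac{1}{2} \sum_i n_i z_i^2
```

where V_m is the formula-unit volume and I is the ionic-strength factor computed from the numbers n_i of ions with charges z_i in the formula unit. The inverse-cube-root dependence on volume reflects the dominance of the coulombic term noted in the graphical abstract.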
D. S. Veloso; A. V. Dodonov
2015-04-19T23:59:59.000Z
We consider the nonstationary circuit QED architecture, where a single artificial two-level atom interacts with a cavity field mode under external modulation of one or more system parameters. Two different approaches are employed to study the effects of Markovian dissipation on modulation-induced transitions between the atom-field dressed states: the standard master equation of Quantum Optics and the recently formulated dressed-picture master equation. We estimate the associated transition rates and show that photon generation from vacuum ("dynamical Casimir effect", DCE) and coherent photon annihilation from nonvacuum states ("Anti-DCE") are possible with the current state-of-the-art parameters.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Main Parameters APS Storage Ring Parameters M. Borland, G. Decker, L. Emery, W. Guo, K. Harkay, V. Sajaev, C.-Y. Yao Advanced Photon Source September 8, 2010 This document lists the...
The effect of weak lensing on distance estimates from supernovae
Smith, Mathew; Maartens, Roy [Department of Physics, University of the Western Cape, Cape Town 7535 (South Africa); Bacon, David J.; Nichol, Robert C.; Campbell, Heather; D'Andrea, Chris B. [Institute of Cosmology and Gravitation, University of Portsmouth, Portsmouth, PO1 3FX (United Kingdom); Clarkson, Chris [Astrophysics, Cosmology and Gravity Centre (ACGC), Department of Mathematics and Applied Mathematics, University of Cape Town, Rondebosch 7701 (South Africa); Bassett, Bruce A. [South African Astronomical Observatory, P.O. Box 9, Observatory 7935 (South Africa); Cinabro, David [Wayne State University, Department of Physics and Astronomy, Detroit, MI 48202 (United States); Finley, David A.; Frieman, Joshua A. [Center for Particle Astrophysics, Fermi National Accelerator Laboratory, P.O. Box 500, Batavia, IL 60510 (United States); Galbany, Lluis [CENTRA Centro Multidisciplinar de Astrofísica, Instituto Superior Técnico, Av. Rovisco Pais 1, 1049-001 Lisbon (Portugal); Garnavich, Peter M. [Department of Physics, University of Notre Dame, Notre Dame, IN 46556 (United States); Olmstead, Matthew D. [Department of Physics and Astronomy, University of Utah, Salt Lake City, UT 84112 (United States); Schneider, Donald P. [Department of Astronomy and Astrophysics, The Pennsylvania State University, University Park, PA 16802 (United States); Shapiro, Charles [Jet Propulsion Laboratory, California Institute of Technology, 4800 Oak Grove Drive, La Canada Flintridge, CA 91109 (United States); Sollerman, Jesper, E-mail: matsmith2@gmail.com [The Oskar Klein Centre, Department of Astronomy, AlbaNova, SE-106 91 Stockholm (Sweden)
2014-01-01T23:59:59.000Z
Using a sample of 608 Type Ia supernovae from the SDSS-II and BOSS surveys, combined with a sample of foreground galaxies from SDSS-II, we estimate the weak lensing convergence for each supernova line of sight. We find that the correlation between this measurement and the Hubble residuals is consistent with the prediction from lensing (at a significance of 1.7{sigma}). Strong correlations are also found between the residuals and supernova nuisance parameters after a linear correction is applied. When these other correlations are taken into account, the lensing signal is detected at 1.4{sigma}. We show, for the first time, that distance estimates from supernovae can be improved when lensing is incorporated, by including a new parameter in the SALT2 methodology for determining distance moduli. The recovered value of the new parameter is consistent with the lensing prediction. Using cosmic microwave background data from WMAP7, H{sub 0} data from the Hubble Space Telescope and Sloan Digital Sky Survey (SDSS) baryon acoustic oscillation measurements, we find the best-fit value of the new lensing parameter and show that the central values and uncertainties on {Omega}{sub m} and w are unaffected. The lensing of supernovae, while only seen at marginal significance in this low-redshift sample, will be of vital importance for the next generation of surveys, such as DES and LSST, which will be systematics-dominated.
Parameter Estimations for Industrial Enzyme Processes Using Genetic Algorithms
Rus, Teodor
as the kinetic modeling of an enzyme system. The kinetic expression of the enzyme is mathematically defined, using the industrial batch production of maltose as an example. The mathematical model of the enzyme kinetics is proposed in Section 6 and summarized in Section 7. 2. Mathematical Model: The enzyme kinetic system, EKS, is defined
Parameter Estimation Using Dual Fractional Power Filters Jason M. Kinser
Kinser, Jason M.
discriminant functions (SDF), which are reviewed in ref. 9. Unlike the previous methods, these filters belong to the SDF class. They are Fractional Power Filters (FPFs), which will be reviewed in Section 2; the FPF is a superset of two standard SDF-class filters: the SDF and the MACE filter. This section will review the SDF
Efficient Bayesian Parameter Estimation in Large Discrete Domains
Friedman, Nir
of words that follow a particular word, say ``Bosnia''. If we do not have any prior knowledge, we can come to believe that, in fact, only a few words, such as ``Herzegovina'', should naturally follow the word ``Bosnia
PARAMETER ESTIMATION BASED MODELS OF WATER SOURCE HEAT PUMPS
2.1. Heat Pump and Chiller Models
Approximation Results for Parameter Estimation in Nonlinear Elastomers
…Hookean elastomer rod is given by $\rho A w_{tt} + A_1 w + A_2 w_t + D^* \tilde{g}(Dw) = F$ in $V^*$ (1.6). Equation (1.4) with the specified boundary conditions can be written in the variational form $\rho A w_{tt} + A_1 w + D^* \tilde{g}(Dw) = F$ in $V^*$ (1.5), where $A_1 \in \mathcal{L}(V, V^*)$ is given by $\langle A_1 \varphi, \psi \rangle_V$
Parameter Estimation for Bayesian Classification of Multispectral Data
Farag, Aly A.
for the Bayes classifier, e.g. the well-known k-means algorithm [4]. The k-means algorithm … of each class in the data set. Performance comparison of the presented algorithms shows that the SVM
Flight test techniques for aircraft parameter estimation in ground effect
Clark, James Matthew
1993-01-01T23:59:59.000Z
Nomenclature (excerpt): …of attack; CDu, variation of drag with airspeed; CDq, variation of drag with pitch rate; CDa, variation of drag with angle of attack (/rad or /deg); CDSe, variation of drag with elevator deflection (/rad or /deg); CL0, lift coefficient at zero angle of attack; CLu, variation of lift with airspeed; CLq, variation of lift with pitch rate; CLa, variation of lift with angle of attack (/rad or /deg); CLSe, variation of lift with elevator deflection (/rad or /deg); Clp, variation of rolling moment with roll rate; Clr, variation…
Sensor Scheduling for Multiple Parameters Estimation Under Energy Constraint
Liu, Mingyan
, unattended ground sensors (UGS) have been increasingly used to enhance situational awareness for surveillance
Analysis of Scattered Signal to Estimate Reservoir Fracture Parameters
Grandi, Samantha K.
We detect fracture corridors and determine their orientation and average spacing based on an analysis of seismic coda in the frequency-wavenumber (f-k) domain. Fracture corridors have dimensions similar to seismic ...
Estimating type curve parameters with the cumulative curvature method
Harris, Dan Edward
1986-01-01T23:59:59.000Z
List of Figures (excerpt): …curvature of Ramey type curves at a forward span of 40%; 15. Cumulative curvature of Ramey type curves at a forward span of 50%; 16. Cumulative curvature of Ramey type curves at a forward span of 60%; 17. … Forward spans ranging from 15% to 60% are presented in Figures 10 through 12. Since data that barely reaches past the end of the unit-slope region is too vague even for this technique, the graph with a forward span of 0% to 15% is omitted here because...
Parameter Estimates for High-Level Nuclear Transport in Fractured ...
2001-10-11T23:59:59.000Z
Oct 11, 2001 ... that takes the relevant time scales of the flow and the nuclear decay. ... ably accurate description of the transport and dispersion of nuclear ...
TWO-DIMENSIONAL POLYNOMIAL PHASE SIGNALS: PARAMETER ESTIMATION AND BOUNDS
Francos, Joseph M.
, the problem of modeling and analyzing Synthetic Aperture Radar (SAR) data, and in particular Interferometric SAR (INSAR) images, involves the analysis of complex valued 2-D non-homogeneous signals. Perspective such as camera calibration and the computation of shape from texture. Existing solutions to problems where
The Estimation of Statistical Parameters for Local Alignment Score Distributions
Bundschuh, Ralf
Altschul{sup 1}, Ralf Bundschuh{sup 2}, Rolf Olsen{sup 2} and Terence Hwa{sup 2}. {sup 1}National Center for Biotechnology Information, National Library of Medicine, National Institutes of Health, Bethesda, MD 20894; {sup 2}Department… Abstract: The distribution of optimal local alignment scores of random sequences plays a vital role
Estimation of Groundwater Flow Parameters Using Least Squares
conductivity. Wells are expensive to drill, and the cost of time, equipment and manpower to make accurate… is based on Darcy's empirical law for fluid flow through a porous medium. This states that $\vec{v} = -K \nabla h$
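The least-squares idea in this record's title can be sketched in one dimension: given paired measurements of head gradient and specific discharge, Darcy's law v = -K dh/dx makes K a one-parameter linear fit. The measurement values below are hypothetical, invented purely for illustration.

```python
# Hypothetical paired measurements of head gradient dh/dx (dimensionless)
# and specific discharge v (m/day); values invented for illustration.
grads = [-0.010, -0.020, -0.015, -0.025]
fluxes = [0.0021, 0.0039, 0.0031, 0.0052]

# Least-squares K minimizing sum_i (v_i + K * grad_i)^2; the closed-form
# solution follows from setting the derivative with respect to K to zero.
K = -sum(v * g for v, g in zip(fluxes, grads)) / sum(g * g for g in grads)
```

With more wells, the same normal-equation structure extends to fitting K zone by zone.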
ARM - Evaluation Product - Radiatively Important Parameters Best Estimate
ESPRIT-Based Estimation of Location and Motion Dependent Parameters
Gesbert, David
algorithms applicable to Non-Line-of-Sight (NLoS) environments. I. INTRODUCTION Traditional geometrical… of localization techniques that perform well in strictly NLoS environments can be found in [1] for static channels. Eurecom's research is partially supported by its industrial members: BMW Group Research & Technology
Estimation of parameters governing the transmission dynamics of ...
humans is assessed using prevalence of morbidity as a measure of the level of .... squares fits by negative exponentials (solid curves). .... J. Bethony et al., Exposure to Schistosoma mansoni infection in a rural area in Brazil II: Household risk.
Measuring neutrino oscillation parameters using {nu}{sub {mu}} disappearance in MINOS
Backhouse, Christopher James; /Oxford U.
2011-02-01T23:59:59.000Z
MINOS is a long-baseline neutrino oscillation experiment. It consists of two large steel-scintillator tracking calorimeters. The near detector is situated at Fermilab, close to the production point of the NuMI muon-neutrino beam. The far detector is 735 km away, 716 m underground in the Soudan mine, Northern Minnesota. The primary purpose of the MINOS experiment is to make precise measurements of the 'atmospheric' neutrino oscillation parameters ({Delta}m{sub atm}{sup 2} and sin{sup 2} 2{theta}{sub atm}). The oscillation signal consists of an energy-dependent deficit of {nu}{sub {mu}} interactions in the far detector. The near detector is used to characterize the properties of the beam before oscillations develop. The two-detector design allows many potential sources of systematic error in the far detector to be mitigated by the near detector observations. This thesis describes the details of the {nu}{sub {mu}}-disappearance analysis, and presents a new technique to estimate the hadronic energy of neutrino interactions. This estimator achieves a significant improvement in the energy resolution of the neutrino spectrum, and in the sensitivity of the neutrino oscillation fit. The systematic uncertainty on the hadronic energy scale was re-evaluated and found to be comparable to that of the energy estimator previously in use. The best-fit oscillation parameters of the {nu}{sub {mu}}-disappearance analysis, incorporating this new estimator, were: {Delta}m{sup 2} = 2.32{sub -0.08}{sup +0.12} x 10{sup -3} eV{sup 2}, sin{sup 2} 2{theta} > 0.90 (90% C.L.). A similar analysis, using data from a period of running where the NuMI beam was operated in a configuration producing a predominantly {bar {nu}}{sub {mu}} beam, yielded somewhat different best-fit parameters: {Delta}{bar m}{sup 2} = (3.36{sub -0.40}{sup +0.46}(stat.) {+-} 0.06(syst.)) x 10{sup -3} eV{sup 2}, sin{sup 2} 2{bar {theta}} = 0.86{sub -0.12}{sup +0.11}(stat.) {+-} 0.01(syst.).
The tension between these results is intriguing, and additional antineutrino data is currently being taken in order to further investigate this apparent discrepancy.
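The energy-dependent deficit described in this abstract follows the standard two-flavour survival probability, which can be evaluated directly. The formula is the textbook expression (not quoted from the thesis); maximal mixing is assumed for illustration, consistent with the quoted bound sin{sup 2} 2{theta} > 0.90.

```python
import math

def survival_prob(E_GeV, dm2_eV2=2.32e-3, sin2_2theta=1.0, L_km=735.0):
    """Standard two-flavour muon-neutrino survival probability
    P = 1 - sin^2(2*theta) * sin^2(1.27 * dm2 * L / E),
    evaluated with the MINOS baseline and the best-fit |dm2| above."""
    return 1.0 - sin2_2theta * math.sin(1.27 * dm2_eV2 * L_km / E_GeV) ** 2

# With these values the disappearance dip sits near E ~ 1.4 GeV, while
# high-energy neutrinos are nearly unaffected.
p_dip = survival_prob(1.4)
p_high = survival_prob(10.0)
```

This is why improving the hadronic (and hence total) energy resolution sharpens the dip and tightens the oscillation fit.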
Vector generator scan converter
Moore, James M. (Livermore, CA); Leighton, James F. (Livermore, CA)
1990-01-01T23:59:59.000Z
High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O (input/output) channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold.
Vector generator scan converter
Moore, J.M.; Leighton, J.F.
1988-02-05T23:59:59.000Z
High printing speeds for graphics data are achieved with a laser printer by transmitting compressed graphics data from a main processor over an I/O channel to a vector generator scan converter which reconstructs a full graphics image for input to the laser printer through a raster data input port. The vector generator scan converter includes a microprocessor with associated microcode memory containing a microcode instruction set, a working memory for storing compressed data, vector generator hardware for drawing a full graphic image from vector parameters calculated by the microprocessor, image buffer memory for storing the reconstructed graphics image and an output scanner for reading the graphics image data and inputting the data to the printer. The vector generator scan converter eliminates the bottleneck created by the I/O channel for transmitting graphics data from the main processor to the laser printer, and increases printer speed up to thirty fold. 7 figs.
Rank-Based Estimation for GARCH Processes Beth Andrews
Andrews, Beth
Northwestern University, September 7, 2011. Abstract: We consider a rank-based technique for estimating GARCH model parameters, some of which are scale transformations of conventional GARCH parameters. The estimators are obtained by minimizing a rank-based residual
Fischer, Noah A. [Los Alamos National Laboratory
2012-08-14T23:59:59.000Z
The reactor core input generator allows for MCNP input files to be tailored to design specifications and generated in seconds. Full reactor models can now easily be created by specifying a small set of parameters and generating an MCNP input for a full reactor core. Axial zoning of the core will allow for density variation in the fuel and moderator, with pin-by-pin fidelity, so that BWR cores can more accurately be modeled. LWR core work in progress: (1) Reflectivity option for specifying 1/4, 1/2, or full core simulation; (2) Axial zoning for moderator densities that vary with height; (3) Generating multiple types of assemblies for different fuel enrichments; and (4) Parameters for specifying BWR box walls. Fuel pin work in progress: (1) Radial and azimuthal zoning for generating further unique materials in fuel rods; (2) Options for specifying different types of fuel for MOX or multiple burn assemblies; (3) Additional options for replacing fuel rods with burnable poison rods; and (4) Control rod/blade modeling.
Second generation PFB for advanced power generation
Robertson, A.; Van Hook, J.
1995-11-01T23:59:59.000Z
Research is being conducted under a United States Department of Energy (USDOE) contract to develop a new type of coal-fueled plant for electric power generation. This new type of plant, called an advanced or second-generation pressurized fluidized bed combustion (APFBC) plant, offers the promise of 45-percent efficiency (HHV), with emissions and a cost of electricity that are significantly lower than conventional pulverized-coal-fired plants with scrubbers. This paper summarizes the pilot plant R&D work being conducted to develop this new type of plant. Although pilot plant testing is still underway, preliminary estimates indicate the commercial plant will perform better than originally envisioned. Efficiencies greater than 46 percent are now being predicted.
Enhancing e-waste estimates: Improving data quality by multivariate Input–Output Analysis
Wang, Feng, E-mail: fwang@unu.edu [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Huisman, Jaco [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Stevels, Ab [Design for Sustainability Lab, Faculty of Industrial Design Engineering, Delft University of Technology, Landbergstraat 15, 2628CE Delft (Netherlands); Baldé, Cornelis Peter [Institute for Sustainability and Peace, United Nations University, Hermann-Ehler-Str. 10, 53113 Bonn (Germany); Statistics Netherlands, Henri Faasdreef 312, 2492 JP Den Haag (Netherlands)
2013-11-15T23:59:59.000Z
Highlights: • A multivariate Input–Output Analysis method for e-waste estimates is proposed. • Applying multivariate analysis to consolidate data can enhance e-waste estimates. • We examine the influence of model selection and data quality on e-waste estimates. • Datasets of all e-waste related variables in a Dutch case study have been provided. • Accurate modeling of time-variant lifespan distributions is critical for estimate. - Abstract: Waste electrical and electronic equipment (or e-waste) is one of the fastest growing waste streams, which encompasses a wide and increasing spectrum of products. Accurate estimation of e-waste generation is difficult, mainly due to lack of high quality data referred to market and socio-economic dynamics. This paper addresses how to enhance e-waste estimates by providing techniques to increase data quality. An advanced, flexible and multivariate Input–Output Analysis (IOA) method is proposed. It links all three pillars in IOA (product sales, stock and lifespan profiles) to construct mathematical relationships between various data points. By applying this method, the data consolidation steps can generate more accurate time-series datasets from available data pool. This can consequently increase the reliability of e-waste estimates compared to the approach without data processing. A case study in the Netherlands is used to apply the advanced IOA model. As a result, for the first time ever, complete datasets of all three variables for estimating all types of e-waste have been obtained. The result of this study also demonstrates significant disparity between various estimation models, arising from the use of data under different conditions. It shows the importance of applying multivariate approach and multiple sources to improve data quality for modelling, specifically using appropriate time-varying lifespan parameters. Following the case study, a roadmap with a procedural guideline is provided to enhance e-waste estimation studies.
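The core of the Input-Output Analysis described above is the sales-lifespan convolution: units discarded in a given year are past sales weighted by the probability of reaching end of life at that age. A minimal sketch follows; the Weibull lifespan model is a common choice for product lifetimes, and the shape/scale values and sales figures are invented for illustration, not the Dutch case-study data.

```python
import math

def weibull_pdf(t, shape, scale):
    """Weibull lifespan density, a common model for product lifetimes."""
    return (shape / scale) * (t / scale) ** (shape - 1) * math.exp(-(t / scale) ** shape)

def ewaste_generated(sales, year, shape=2.0, scale=8.0):
    """Sales-lifespan convolution: units discarded in `year` equal past
    sales weighted by the probability of failing at the corresponding age.
    Lifespan parameters here are illustrative assumptions."""
    total = 0.0
    for sale_year, units in sales.items():
        age = year - sale_year
        if age > 0:
            total += units * weibull_pdf(age, shape, scale)
    return total

sales = {2000: 100.0, 2001: 120.0, 2002: 130.0}   # units sold per year
waste_2008 = ewaste_generated(sales, 2008)
```

Making `shape` and `scale` functions of the sale year is one way to realize the time-varying lifespan distributions the abstract flags as critical.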
Whitaker, M.; Heath, G. A.; O'Donoughue, P.; Vorum, M.
2012-04-01T23:59:59.000Z
This systematic review and harmonization of life cycle assessments (LCAs) of utility-scale coal-fired electricity generation systems focuses on reducing variability and clarifying central tendencies in estimates of life cycle greenhouse gas (GHG) emissions. Screening 270 references for quality LCA methods, transparency, and completeness yielded 53 that reported 164 estimates of life cycle GHG emissions. These estimates for subcritical pulverized, integrated gasification combined cycle, fluidized bed, and supercritical pulverized coal combustion technologies vary from 675 to 1,689 grams CO{sub 2}-equivalent per kilowatt-hour (g CO{sub 2}-eq/kWh) (interquartile range [IQR]= 890-1,130 g CO{sub 2}-eq/kWh; median = 1,001) leading to confusion over reasonable estimates of life cycle GHG emissions from coal-fired electricity generation. By adjusting published estimates to common gross system boundaries and consistent values for key operational input parameters (most importantly, combustion carbon dioxide emission factor [CEF]), the meta-analytical process called harmonization clarifies the existing literature in ways useful for decision makers and analysts by significantly reducing the variability of estimates ({approx}53% in IQR magnitude) while maintaining a nearly constant central tendency ({approx}2.2% in median). Life cycle GHG emissions of a specific power plant depend on many factors and can differ from the generic estimates generated by the harmonization approach, but the tightness of distribution of harmonized estimates across several key coal combustion technologies implies, for some purposes, first-order estimates of life cycle GHG emissions could be based on knowledge of the technology type, coal mine emissions, thermal efficiency, and CEF alone without requiring full LCAs. Areas where new research is necessary to ensure accuracy are also discussed.
Stochastic Wireless Channel Modeling, Estimation and Identification from Measurements
Olama, Mohammed M [ORNL]; Djouadi, Seddik M [ORNL]; Li, Yanyan [ORNL]
2008-07-01T23:59:59.000Z
This paper is concerned with stochastic modeling of wireless fading channels, parameter estimation, and system identification from measurement data. Wireless channels are represented in stochastic state-space form, whose parameters and state variables are estimated using the expectation-maximization algorithm and Kalman filtering, respectively. Both estimation procedures are carried out solely from received signal measurements. These algorithms estimate the channel in-phase and quadrature components and identify the channel parameters recursively. The proposed algorithm is tested using measurement data, and the results are presented.
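A minimal sketch of the filtering half of this scheme, assuming a scalar AR(1) channel model far simpler than the paper's full state space:

```python
# Minimal sketch (assumed scalar AR(1) channel, not the paper's model):
# a fading channel gain h follows h[k+1] = a*h[k] + w and is observed as
# y = h + v; a Kalman filter tracks h from received measurements alone.
def kalman_track(y, a=0.95, q=0.01, r=0.1):
    """y: noisy observations; q, r: process/measurement noise variances."""
    h_est, p = 0.0, 1.0                 # initial estimate and its variance
    out = []
    for yk in y:
        h_pred = a * h_est              # predict
        p_pred = a * a * p + q
        k = p_pred / (p_pred + r)       # Kalman gain
        h_est = h_pred + k * (yk - h_pred)   # correct with measurement
        p = (1 - k) * p_pred
        out.append(h_est)
    return out

print(round(kalman_track([1.0, 0.9, 1.1, 1.0, 0.95])[-1], 3))
```

In the paper an expectation-maximization step additionally re-estimates the model parameters (here `a`, `q`, `r`) from the same measurements; that outer loop is omitted in this sketch.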
Thermoelectric Generators 1. Thermoelectric generator
Lee, Ho Sung
1.1 Basic Equations. In 1821 … these effects are called the thermoelectric effects; the mechanisms of thermoelectricity were not understood. (Figure 1: Electron concentration in a thermoelectric material.)
Fourier methods for estimating power system stability limits
Marceau, R.J.; Galiana, F.D. (McGill Univ., Montreal, Quebec (Canada). Dept. of Electrical Engineering); Mailhot, R.; Denomme, F.; McGillis, D.T. (Hydro Quebec, Montreal, Quebec (Canada))
1994-05-01T23:59:59.000Z
This paper shows how the use of new generation tools such as a generalized shell for dynamic security analysis can help improve the understanding of fundamental power systems behavior. Using the ELISA prototype shell as a laboratory tool, it is shown that the signal energy of the network impulse response acts as a barometer to define the relative severity of a contingency with respect to some parameter, for instance power generation or power transfer. In addition, for a given contingency, as the parameter is varied and a network approaches instability, signal energy increases smoothly and predictably towards an asymptote which defines the network's stability limit: this, in turn, permits comparison of the severity of different contingencies. Using a Fourier transform approach, it is shown that this behavior can be explained in terms of the effect of increasing power on the damping component of a power system's dominant poles. A simple function is derived which estimates network stability limits with surprising accuracy from two or three simulations, provided that at least one of these is within 5% of the limit. These results hold notwithstanding the presence of many active, nonlinear voltage-support elements (i.e. generators, synchronous condensers, SVCs, static excitation systems, etc.) in the network.
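A minimal sketch of the extrapolation idea, assuming the simple asymptotic form E(P) = K/(P_lim - P) for signal energy versus the varied parameter P (the paper fits a function derived from Fourier analysis; this form and all numbers here are illustrative only):

```python
# Hedged sketch: if signal energy grows as E(P) = K / (P_lim - P) when
# the parameter P (e.g. power transfer) approaches the stability limit,
# two simulations are enough to solve for the limit P_lim.
def estimate_stability_limit(p1, e1, p2, e2):
    # E1*(P_lim - p1) = E2*(P_lim - p2) = K  =>  solve for P_lim
    return (e2 * p2 - e1 * p1) / (e2 - e1)

# Synthetic check: energies generated from P_lim = 1000 MW, K = 500
print(estimate_stability_limit(900, 500 / 100, 950, 500 / 50))  # prints 1000.0
```

This mirrors the paper's claim that two or three simulations suffice, provided at least one is close to the limit (where the asymptotic form dominates).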
Cost and Performance Assumptions for Modeling Electricity Generation Technologies
Tidball, R.; Bluestein, J.; Rodriguez, N.; Knoke, S.
2010-11-01T23:59:59.000Z
The goal of this project was to compare and contrast utility scale power plant characteristics used in data sets that support energy market models. Characteristics include both technology cost and technology performance projections to the year 2050. Cost parameters include installed capital costs and operation and maintenance (O&M) costs. Performance parameters include plant size, heat rate, capacity factor or availability factor, and plant lifetime. Conventional, renewable, and emerging electricity generating technologies were considered. Six data sets, each associated with a different model, were selected. Two of the data sets represent modeled results, not direct model inputs. These two data sets include cost and performance improvements that result from increased deployment as well as resulting capacity factors estimated from particular model runs; other data sets represent model input data. For the technologies contained in each data set, the levelized cost of energy (LCOE) was also evaluated, according to published cost, performance, and fuel assumptions.
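The levelized cost of energy mentioned above is commonly computed as annualized capital plus O&M and fuel costs per unit of generation; a minimal sketch with hypothetical inputs (a textbook form, not necessarily the report's exact formula or data):

```python
# Illustrative simplified LCOE calculation in $/MWh; all inputs are
# hypothetical and the formula is a common textbook simplification.
def lcoe(capex_per_kw, fixed_om_per_kw_yr, var_om_per_mwh,
         heat_rate_btu_per_kwh, fuel_per_mmbtu, capacity_factor,
         discount_rate, lifetime_yr):
    # capital recovery factor annualizes the overnight capital cost
    crf = (discount_rate * (1 + discount_rate) ** lifetime_yr /
           ((1 + discount_rate) ** lifetime_yr - 1))
    mwh_per_kw_yr = 8760 * capacity_factor / 1000.0
    capital = crf * capex_per_kw / mwh_per_kw_yr            # $/MWh
    fixed_om = fixed_om_per_kw_yr / mwh_per_kw_yr           # $/MWh
    fuel = heat_rate_btu_per_kwh * fuel_per_mmbtu / 1000.0  # $/MWh
    return capital + fixed_om + fuel + var_om_per_mwh

# hypothetical coal-like plant: $2000/kW, $30/kW-yr, $4/MWh, 9000 Btu/kWh
print(round(lcoe(2000, 30, 4, 9000, 3, 0.85, 0.07, 30), 1))
```

The comparison in the report shows how strongly such LCOE figures depend on the assumed heat rate, capacity factor, and cost trajectories in each data set.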
Atmospheric parameters, spectral indexes and their relation to CPV spectral performance
Núñez, Rubén; Antón, Ignacio; Askins, Steve; Sala, Gabriel [Instituto de Energía Solar - Universidad Politécnica de Madrid, Instituto de Energía Solar, ETSI Telecomunicación, Ciudad Universitaria 28040 Madrid (Spain)]. Contact e-mail: ruben.nunez@ies-def.upm.es
2014-09-26T23:59:59.000Z
Air Mass and atmospheric components (chiefly aerosol optical depth (AOD) and precipitable water (PW)) define the absorption of the sunlight that arrives at Earth. Radiative models such as SMARTS or MODTRAN use these parameters to generate an equivalent spectrum. However, complex and expensive instruments (such as AERONET network devices) are needed to obtain AOD and PW. On the other hand, the use of isotype cells is a convenient way to spectrally characterize a site for CPV, considering that they provide the photocurrent of each internal subcell individually. By crossing data from an AERONET station and a Tri-band Spectroheliometer, a model that correlates Spectral Mismatch Ratios with atmospheric parameters is proposed. Given the number of stations in the AERONET network, this model may be used to estimate the spectral influence on the energy performance of CPV systems close to any of the stations worldwide.
Fracture compliance estimation using borehole tube waves
Bakku, Sudhish Kumar
We tested two models, one for tube-wave generation and the other for tube-wave attenuation at a fracture intersecting a borehole that can be used to estimate fracture compliance, fracture aperture, and lateral extent. In ...
Wavelet Based Estimation for Univariate Stable Laws
Gonçalves, Paulo
Wavelet Based Estimation for Univariate Stable Laws (Anestis Antoniadis, Laboratoire IMAG). This article describes a fast, wavelet-based, regression-type method for estimating the parameters of a stable distribution, built on Fourier-domain representations combined with a wavelet multiresolution analysis.
Pico: Parameters for the Impatient Cosmologist
William A. Fendt; Benjamin D. Wandelt
2006-06-29T23:59:59.000Z
We present a fast, accurate, robust and flexible method of accelerating parameter estimation. This algorithm, called Pico, can compute the CMB power spectrum and matter transfer function, as well as any computationally expensive likelihoods, in a few milliseconds. By removing these bottlenecks from parameter estimation codes, Pico decreases their computational time by 1 or 2 orders of magnitude. Pico has several important properties. First, it is extremely fast and accurate over a large volume of parameter space. Furthermore, its accuracy can continue to be improved by using a larger training set. The method is generalizable to an arbitrary number of cosmological parameters and to any range of l-values in multipole space. Pico is approximately 3000 times faster than CAMB for flat models, and approximately 2000 times faster than the WMAP 3-year likelihood code. In this paper, we demonstrate that using Pico to compute power spectra and likelihoods produces parameter posteriors that are very similar to those using CAMB and the official WMAP3 code, but in only a fraction of the time. Pico and an interface to CosmoMC are made publicly available at http://www.astro.uiuc.edu/~bwandelt/pico/.
UPRE method for total variation parameter selection
Wohlberg, Brendt [Los Alamos National Laboratory; Lin, Youzuo [Los Alamos National Laboratory
2008-01-01T23:59:59.000Z
Total Variation (TV) regularization is an important method for solving a wide variety of inverse problems in image processing. In order to optimize the reconstructed image, it is important to choose the optimal regularization parameter. The Unbiased Predictive Risk Estimator (UPRE) has been shown to give a very good estimate of this parameter for Tikhonov regularization. In this paper we propose an approach to extend the UPRE method to the TV problem. However, applying the extended UPRE directly is impractical for inverse problems such as deblurring, due to the large scale of the associated linear problem. We therefore also propose an approach that reduces the large-scale problem to a small one, significantly reducing computational requirements while providing a good approximation to the original problem.
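For the Tikhonov case the UPRE function can be written down directly; the sketch below assumes a diagonal forward operator so no linear-algebra library is needed (the TV extension discussed in the paper requires substantially more machinery):

```python
import math, random

# Minimal sketch of UPRE-based selection of the Tikhonov parameter for a
# DIAGONAL forward operator (a deliberate simplification): for diagonal
# entries a_i, the regularized solution is x_i = a_i*b_i/(a_i^2 + lam)
# and the influence-matrix trace has a closed form.
def upre(lam, a, b, sigma2):
    """UPRE(lam) = ||A x_lam - b||^2/n + 2*sigma2*tr(H_lam)/n - sigma2."""
    n = len(a)
    residual2 = sum((a[i] ** 2 * b[i] / (a[i] ** 2 + lam) - b[i]) ** 2
                    for i in range(n))
    trace_h = sum(a[i] ** 2 / (a[i] ** 2 + lam) for i in range(n))
    return residual2 / n + 2 * sigma2 * trace_h / n - sigma2

random.seed(0)
a = [1.0 / (i + 1) for i in range(50)]       # ill-conditioned spectrum
sigma2 = 0.01
b = [a[i] * 1.0 + random.gauss(0, math.sqrt(sigma2)) for i in range(50)]

lams = [10 ** (k / 4 - 4) for k in range(17)]  # grid from 1e-4 to 1
best = min(lams, key=lambda lam: upre(lam, a, b, sigma2))
print(best)
```

Minimizing UPRE over the grid selects the regularization parameter; the paper's contribution is making this tractable when the trace term has no such closed form.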
Identifying Suitable Degradation Parameters for Individual-Based Prognostics
Coble, Jamie B.; Hines, Wes
2012-09-30T23:59:59.000Z
The ultimate goal of most prognostic systems is accurate prediction of the remaining useful life of individual systems or components based on their use and performance. Traditionally, individual-based prognostic methods use a measure of degradation to make lifetime estimates. Degradation measures may include sensed measurements, such as temperature or vibration level, or inferred measurements, such as model residuals or physics-based model predictions. Often, it is beneficial to combine several measures of degradation into a single parameter. Parameter features such as trendability, monotonicity, and prognosability can be used to compare candidate prognostic parameters to determine which is most useful for individual-based prognosis. By quantifying these features for a given parameter, the metrics can be used with any traditional optimization technique to identify an appropriate parameter. This parameter may be used with a parametric extrapolation model to make prognostic estimates for an individual unit. The proposed methods are illustrated with an application to simulated turbofan engine data.
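Two of the named features can be quantified with simple formulas; the definitions below follow common usage in the prognostics literature and may differ in detail from the paper's:

```python
import math

# Sketch of two prognostic-parameter suitability metrics; both are
# normalized so that values near 1 indicate a more useful parameter.
def monotonicity(path):
    """|#positive steps - #negative steps| / (n-1); 1 = strictly one-way trend."""
    diffs = [b - a for a, b in zip(path, path[1:])]
    pos = sum(1 for d in diffs if d > 0)
    neg = sum(1 for d in diffs if d < 0)
    return abs(pos - neg) / len(diffs)

def prognosability(failure_values, start_values):
    """exp(-std of failure values / mean degradation range); 1 = crisp threshold."""
    n = len(failure_values)
    mean_f = sum(failure_values) / n
    std_f = (sum((v - mean_f) ** 2 for v in failure_values) / n) ** 0.5
    mean_range = sum(abs(f - s) for f, s in zip(failure_values, start_values)) / n
    return math.exp(-std_f / mean_range)

print(monotonicity([0, 1, 2, 3, 2, 4]))  # 0.6
```

Scores like these can feed any standard optimization routine to pick the degradation-measure combination used for individual-based prognosis, as the abstract describes.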
Pavement Thickness Design Parameter
Pavement Thickness Design Parameter Impacts. 2012 Municipal Streets Seminar, November 14, 2012. Paul D. Wiegand, P.E. Slide excerpts: How do cities decide how thick to build their pavements? Correct answer: a data-based analysis! It doesn't have to be difficult and time…
Parameters’ Covariance in Neutron Time of Flight Analysis – Explicit Formulae
Odyniec, M. [NSTec]; Blair, J. [NSTec]
2014-12-01T23:59:59.000Z
We present here a method that estimates the parameters’ variance in a parametric model for neutron time of flight (NToF). The analytical formulae for parameter variances, obtained independently of calculation of parameter values from measured data, express the variances in terms of the choice, settings, and placement of the detector and the oscilloscope. Consequently, the method can serve as a tool in planning a measurement setup.
Method for estimating processability of a hydrocarbon-containing feedstock for hydroprocessing
Schabron, John F; Rovani, Jr., Joseph F
2014-01-14T23:59:59.000Z
Disclosed herein is a method involving the steps of (a) precipitating an amount of asphaltenes from a liquid sample of a first hydrocarbon-containing feedstock having solvated asphaltenes therein with one or more first solvents in a column; (b) determining one or more solubility characteristics of the precipitated asphaltenes; (c) analyzing the one or more solubility characteristics of the precipitated asphaltenes; and (d) correlating a measurement of feedstock reactivity for the first hydrocarbon-containing feedstock sample with a mathematical parameter derived from the results of analyzing the one or more solubility characteristics of the precipitated asphaltenes. Determined parameters and processabilities for a plurality of feedstocks can be used to generate a mathematical relationship between parameter and processability; this relationship can be used to estimate the processability for hydroprocessing of a feedstock of unknown processability.
A dimensionless parameter model for arc welding processes
Fuerschbach, P.W.
1994-12-31T23:59:59.000Z
A dimensionless parameter model previously developed for CO2 laser beam welding has been shown to be applicable to GTAW and PAW autogenous arc welding processes. The model facilitates estimates of weld size, power, and speed based on knowledge of the material's thermal properties. The dimensionless parameters can also be used to estimate the melting efficiency, which eases development of weld schedules with lower heat input to the weldment. The mathematical relationship between the dimensionless parameters in the model has been shown to be dependent on the heat flow geometry in the weldment.
Parameterizing the Deceleration Parameter
Diego Pavón; Ivan Duran; Sergio del Campo; Ramón Herrera
2012-12-31T23:59:59.000Z
We propose and constrain with the latest observational data three parameterizations of the deceleration parameter, valid from the matter era to the far future. They are well behaved and do not diverge at any redshift. On the other hand, they are model independent in the sense that in constructing them the only assumption made was that the Universe is homogeneous and isotropic at large scales.
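One concrete example of a non-divergent form of the kind the abstract describes (this particular expression is an illustration, not claimed to be one of the paper's three parameterizations):

```latex
q(z) = q_0 + q_1\,\frac{z\,(1+z)}{1+z^{2}}
```

Here q(0) = q_0 is the present-day value; the factor z(1+z)/(1+z^2) is finite for every redshift and vanishes at z = -1 (the far future), so q never diverges; and q -> q_0 + q_1 for z >> 1, so requiring q_0 + q_1 ≈ 1/2 recovers the matter-era deceleration.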
Application of the Continuous EUR Method to Estimate Reserves in Unconventional Gas Reservoirs
Currie, Stephanie M.
2010-10-12T23:59:59.000Z
to generate a time-dependent profile of the estimated ultimate recovery (EUR). The "objective" is to estimate the final EUR value(s) from several complementary analyses. In this work we present the "Continuous EUR Method" to estimate reserves...
Statistical Methods for Estimating the Minimum Thickness Along a Pipeline
… Companies use pipelines to transfer oil, gas, and other materials from one place to another. Measurements along the pipeline can be used to estimate corrosion levels. The traditional parametric-model method for this problem is to estimate the parameters of a specified corrosion distribution and then to use these parameters … Manufacturers …
EIGHT CHANNEL PROGRAMMABLE PULSE GENERATOR
Kleinfeld, David
Master-8 Eight Channel Programmable Pulse Generator: Operation Manual (A.M.P.I., 123 Uzlel St. …). Excerpts: "… and the programming simple and easy to learn. Master-8 is an attractive unit and you will enjoy working with its eight …" Contents include: Modes of operation; Setting the parameters; Triggering; Eight stored paradigms.
Modeling of leachate generation in municipal solid waste landfills
Beck, James Bryan
1994-01-01T23:59:59.000Z
and the inclusion of compaction effects and leachate generation and movement effects by Mehevec (1994) should provide the user with a tool for estimating leachate generation values and landfill capacity figures for a variety of initial design and operational...
Improved diagnostic model for estimating wind energy
Endlich, R.M.; Lee, J.D.
1983-03-01T23:59:59.000Z
Because wind data are available only at scattered locations, a quantitative method is needed to estimate the wind resource at specific sites where wind energy generation may be economically feasible. This report describes a computer model that makes such estimates. The model uses standard weather reports and terrain heights in deriving wind estimates; the method of computation has been changed from what has been used previously. The performance of the current model is compared with that of the earlier version at three sites; estimates of wind energy at four new sites are also presented.
Cardiovascular Signal Decomposition and Estimation with the Extended Kalman Smoother
Cardiovascular Signal Decomposition and Estimation with the Extended Kalman Smoother (James Mc…). … a model of cardiovascular signals that can be used with the extended Kalman filter or smoother to simultaneously estimate and track all the model parameters of interest, including …
Determination of useful performance parameters for the ALR8(SI) plutonium pit container system
Pierce, Mark Alan
2000-01-01T23:59:59.000Z
A thorough list of potentially useful performance parameters is generated, and a systematic method is designed to assess which parameters will provide the most significant or useful information about the long-term performance of the ALR8(SI...
Minimizing electricity costs with an auxiliary generator using stochastic programming
Rafiuly, Paul, 1976-
2000-01-01T23:59:59.000Z
This thesis addresses the problem of minimizing a facility's electricity costs by generating optimal responses, using an auxiliary generator as the control parameter. The goal of the thesis is to find an ...
Generation Planning (pbl/generation)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Robust estimation procedure in panel data model
Shariff, Nurul Sima Mohamad [Faculty of Science of Technology, Universiti Sains Islam Malaysia (USIM), 71800, Nilai, Negeri Sembilan (Malaysia); Hamzah, Nor Aishah [Institute of Mathematical Sciences, Universiti Malaya, 50630, Kuala Lumpur (Malaysia)
2014-06-19T23:59:59.000Z
Panel data modeling has received great attention in econometric research recently. This is due to the availability of data sources and the interest in studying cross sections of individuals observed over time. However, problems may arise in modeling the panel in the presence of cross-sectional dependence and outliers. Even though there are a few methods that take into consideration the presence of cross-sectional dependence in the panel, these methods may provide inconsistent parameter estimates and inferences when outliers occur in the panel. As such, an alternative method that is robust to outliers and cross-sectional dependence is introduced in this paper. The properties and construction of the confidence interval for the parameter estimates are also considered. The robustness of the procedure is investigated and comparisons are made to the existing method via simulation studies. Our results show that the robust approach is able to produce accurate and reliable parameter estimates under the conditions considered.
Heat engine generator control system
Rajashekara, Kaushik (Carmel, IN); Gorti, Bhanuprasad Venkata (Towson, MD); McMullen, Steven Robert (Anderson, IN); Raibert, Robert Joseph (Fishers, IN)
1998-01-01T23:59:59.000Z
An electrical power generation system includes a heat engine having an output member operatively coupled to the rotor of a dynamoelectric machine. System output power is controlled by varying an electrical parameter of the dynamoelectric machine. A power request signal is related to an engine speed and the electrical parameter is varied in accordance with a speed control loop. Initially, the sense of change in the electrical parameter in response to a change in the power request signal is opposite that required to effectuate a steady state output power consistent with the power request signal. Thereafter, the electrical parameter is varied to converge the output member speed to the speed known to be associated with the desired electrical output power.
Preliminary relative permeability estimates of methanehydrate-bearing sand
Seol, Yongkoo; Kneafsey, Timothy J.; Tomutsa, Liviu; Moridis, George J.
2006-05-08T23:59:59.000Z
The relative permeability to fluids in hydrate-bearing sediments is an important parameter for predicting natural gas production from gas hydrate reservoirs. We estimated the relative permeability parameters (van Genuchten alpha and m) in a hydrate-bearing sand by means of inverse modeling, which involved matching water saturation predictions with observations from a controlled waterflood experiment. We used x-ray computed tomography (CT) scanning to determine both the porosity and the hydrate and aqueous phase saturation distributions in the samples. X-ray CT images showed that hydrate and aqueous phase saturations are non-uniform, and that water flow focuses in regions of lower hydrate saturation. The relative permeability parameters were estimated at two locations in each sample. Differences between the estimated parameter sets at the two locations were attributed to heterogeneity in the hydrate saturation. Better estimates of the relative permeability parameters require further refinement of the experimental design, and better description of heterogeneity in the numerical inversions.
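For reference, the van Genuchten-Mualem form in which the estimated m parameter appears can be sketched as follows (the alpha parameter enters the companion capillary-pressure relation, not shown; the values used here are illustrative, not the study's estimates):

```python
# Sketch of the van Genuchten-Mualem aqueous relative permeability curve
# parameterized by m; s_e is the effective water saturation in [0, 1].
def krw_van_genuchten(s_e, m):
    """k_rw = sqrt(S_e) * [1 - (1 - S_e^(1/m))^m]^2"""
    return s_e ** 0.5 * (1.0 - (1.0 - s_e ** (1.0 / m)) ** m) ** 2

print(round(krw_van_genuchten(1.0, 0.45), 3))  # fully saturated -> 1.0
```

Heterogeneous hydrate saturation, as observed in the CT images, effectively changes s_e locally, which is consistent with the different parameter sets estimated at the two sample locations.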
Laplante, P.A. [Center for Nuclear Waste Regulatory Analyses, Rockville, MD (United States); Maheras, S.J. [Maheras (S.J.), Idaho Falls, ID (United States); Jarzemba, M.S. [Center for Nuclear Waste Regulatory Analyses, San Antonio, TX (United States)
1996-08-01T23:59:59.000Z
To develop capabilities for compliance determination, the Nuclear Regulatory Commission (NRC) conducts total system performance assessment (TSPA) for the proposed repository at Yucca Mountain (YM) in an iterative manner. Because the new Environmental Protection Agency (EPA) standard for YM may set a dose or risk limit, an auxiliary study was conducted to develop estimates of site-specific dose assessment parameters for future TSPAs. YM site-relevant data were obtained for irrigation, agriculture, resuspension, crop interception, and soil. A Monte Carlo based importance analysis was used to identify predominant parameters for the groundwater pathway. In this analysis, the GENII-S code generated individual annual total effective dose equivalents (TEDEs) for 20 nuclides and 43 sampled parameters based upon unit groundwater concentrations. Scatter plots and correlation results indicate the crop interception fraction, food transfer factors, consumption rates, and irrigation rate are correlated with TEDEs for specific nuclides. Influential parameter groups correspond to expected exposure pathways: nuclides that transfer readily to plants, such as Tc-99, indicate that crop ingestion pathway parameters are most highly correlated with the TEDE, while those that transfer to milk (Ni-59) or beef (Se-79, I-129, Cs-135, Cs-137) show predominant correlations with animal product ingestion pathway parameters. Such relationships provide useful insight into important parameters and exposure pathways applicable to doses from specific nuclides.
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28T23:59:59.000Z
This chapter focuses on the components (or elements) of the cost estimation package and their documentation.
Robust Single-Qubit Process Calibration via Robust Phase Estimation
Shelby Kimmel; Guang Hao Low; Theodore J. Yoder
2015-02-09T23:59:59.000Z
An important step in building a quantum computer is calibrating experimentally implemented quantum gates to produce operations that are close to ideal unitaries. The calibration step involves estimating the error in gates and then using controls to correct the implementation. Quantum process tomography is a standard technique for estimating these errors, but it is time consuming (when one only wants to learn a few key parameters) and requires resources, like perfect state preparation and measurement, that might not be available. With the goal of efficiently estimating specific errors using minimal resources, we develop a parameter estimation technique that can gauge two key parameters (amplitude and off-resonance errors) in a single-qubit gate with provable robustness and efficiency. In particular, our estimates achieve the optimal efficiency, Heisenberg scaling. Our main theorem making this possible is a robust version of the phase estimation procedure of Higgins et al. [B. L. Higgins, New J. Phys. 11, 073023 (2009)].
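The flavor of the underlying iterated phase estimation can be sketched with exact, noiseless "measurement" probabilities; the paper's contribution is proving robustness when these outcomes are noisy, which this toy deliberately omits:

```python
import math

# Toy sketch of iterated phase estimation: measuring at exponentially
# growing repetition numbers k = 1, 2, 4, ... pins down theta with
# precision ~1/k (Heisenberg-like scaling). Measurements are noiseless
# here, unlike the sampled, error-prone setting the paper analyzes.
def estimate_phase(theta_true, stages=8):
    lo, hi = 0.0, 2 * math.pi              # current confidence interval
    for j in range(stages):
        k = 2 ** j
        # noiseless "measurement" outcomes of the amplified phase
        c = math.cos(k * theta_true)
        s = math.sin(k * theta_true)
        phi_k = math.atan2(s, c) % (2 * math.pi)   # k*theta mod 2*pi
        # pick the unwrapping consistent with the previous interval
        mid = (lo + hi) / 2
        n = round((k * mid - phi_k) / (2 * math.pi))
        est = (phi_k + 2 * math.pi * n) / k
        half = math.pi / k / 2                     # interval shrinks as 1/k
        lo, hi = est - half, est + half
    return (lo + hi) / 2

print(abs(estimate_phase(1.2345) - 1.2345) < 1e-2)  # True
```

Each stage halves the candidate interval, which is why the total precision scales with the largest repetition number rather than the number of measurements.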
LUNAR SOIL SIMULATION TRAFFICABILITY PARAMETERS
Rathbun, Julie A.
Lunar Soil Simulation and Trafficability Parameters, by W. David Carrier, III (Lunar Geotechnical …). Recommended lunar soil trafficability parameters: Table 9.14 in the Lunar Sourcebook (Carrier et al. 1991, p. 529) lists the current recommended lunar soil trafficability parameters: bc = 0.017 N/cm2, bN = 35°, K …
GRID-BASED EXPLORATION OF COSMOLOGICAL PARAMETER SPACE WITH SNAKE
Mikkelsen, K.; Næss, S. K.; Eriksen, H. K., E-mail: kristin.mikkelsen@astro.uio.no [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029, Blindern, NO-0315 Oslo (Norway)
2013-11-10T23:59:59.000Z
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the 'curse of dimensionality' problem plaguing standard grid-based parameter estimation simply by disregarding grid cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings Markov Chain Monte Carlo methods include (1) trivial extraction of arbitrary conditional distributions; (2) direct access to Bayesian evidences; (3) better sampling of the tails of the distribution; and (4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par can be, while still maintaining reasonable computational efficiency; we find that N_par = 12 is well within the capabilities of the method. The performance of the code is tested by comparing cosmological parameters estimated using Snake and the WMAP-7 data with those obtained using CosmoMC, the current standard code in the field. We find fully consistent results, with similar computational expenses, but shorter wall time due to the perfect parallelization scheme.
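The cell-by-cell, decreasing-likelihood expansion can be sketched as a best-first search on a grid; this toy 2-D version with a hypothetical Gaussian log-likelihood is only a cartoon of Snake's parallelized implementation:

```python
import heapq

# Toy 2-D version of the Snake idea: starting from the peak, repeatedly
# expand the highest-likelihood unvisited neighbor, skipping cells below
# a threshold -- so negligible grid cells are never expanded.
def snake_explore(loglike, start, threshold):
    """loglike(cell) -> float; maps the connected region above threshold."""
    visited = {start}
    heap = [(-loglike(start), start)]      # max-heap via negation
    region = {}
    while heap:
        neg_ll, cell = heapq.heappop(heap)
        if -neg_ll < threshold:
            continue                       # below threshold: do not expand
        region[cell] = -neg_ll
        x, y = cell
        for nb in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if nb not in visited:
                visited.add(nb)
                heapq.heappush(heap, (-loglike(nb), nb))
    return region

# hypothetical Gaussian log-likelihood on an integer grid, peak at (0, 0)
ll = lambda c: -0.5 * (c[0] ** 2 + c[1] ** 2)
region = snake_explore(ll, (0, 0), threshold=-2.0)
print(len(region))  # 13 cells satisfy x^2 + y^2 <= 4
```

Because cells are visited in decreasing likelihood order, conditional distributions and evidence integrals can be read directly off the mapped region, echoing advantages (1) and (2) above.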
Effects of breach formation parameter uncertainty on inundation risk area and consequence analysis
Skousen, Benjamin Don [Los Alamos National Laboratory]; David, Judi [Los Alamos National Laboratory]; Mc Pherson, Timothy [Los Alamos National Laboratory]; Burian, Steve [UNIV OF UTAH]
2010-01-01T23:59:59.000Z
According to the National Inventory of Dams (NID), there are approximately 79,500 dams in the United States, 11,800 of which are classified as high-hazard. It has been recommended that each high-hazard dam in the United States have an emergency action plan (EAP), but it has been found that only about 60% of high-hazard dams have a complete EAP. A major aspect of these plans is inundation risk area identification and associated impacts in the event of dam failure. In order to determine the inundation risk area, an estimation of breach discharge must be completed. Most methods used to determine breach discharge, including the NWS-DAMBRK model, require modelers to select the size, shape, and time of breach formation. Federal agencies (e.g., Bureau of Reclamation, Federal Energy Regulatory Commission) with oversight of U.S. dams have recommended ranges of values for each of these parameters based on dam type. However, variations in these parameters, even within the recommended range, have the potential to impose significant transformation on the discharge hydrograph relative to both the timing and magnitude of the peak discharge. Therefore, it has also been recommended that the sensitivity of these parameters be investigated when performing breach inundation analyses. This paper presents a sensitivity analysis of three breach parameters (average breach width, side slope, and time to failure) on a case study dam located in the United States. The sensitivity analysis employed was based on a 3^3 factorial design, in which three levels (e.g., low, medium, and high) were selected for each of the three parameters, resulting in twenty-seven combinations. The three levels remained within the recommended range of values for each parameter type. With each combination of input parameters, a discharge hydrograph was generated and used as a source condition for inundation analysis using a two-dimensional shallow water equation model.
The resulting simulations were compared to determine the sensitivity of flood inundation area, flood arrival time, peak flood depths, and socio-economic impacts (e.g. population at risk, direct and indirect economic loss) to changes in individual parameters and parameter interactions. Results and discussion from this sensitivity analysis will be presented in detail in the paper.
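The 3^3 factorial design described above is straightforward to enumerate; the parameter levels below are hypothetical placeholders, not the study's values:

```python
from itertools import product

# Sketch of a 3^3 factorial design: three levels for each of the three
# breach parameters gives 27 scenario combinations. Level values here
# are hypothetical, standing in for the agency-recommended ranges.
levels = {
    "breach_width_m": [50, 100, 150],
    "side_slope_h_per_v": [0.5, 1.0, 1.5],
    "failure_time_hr": [0.5, 1.0, 2.0],
}
scenarios = [dict(zip(levels, combo)) for combo in product(*levels.values())]
print(len(scenarios))  # 27
```

Each scenario dictionary would parameterize one breach-hydrograph run, whose output then drives the two-dimensional inundation model.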
Guidelines for Estimating Unmetered Industrial Water Use
Boyd, Brian K.
2010-08-01T23:59:59.000Z
The document provides a methodology to estimate unmetered industrial water use for evaporative cooling systems, steam generating boiler systems, batch process applications, and wash systems. For each category standard mathematical relationships are summarized and provided in a single resource to assist Federal agencies in developing an initial estimate of their industrial water use. The approach incorporates industry norms, general rules of thumb, and industry survey information to provide methodologies for each section.
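For the evaporative-cooling category, such rule-of-thumb estimates typically combine evaporation, blowdown, and drift; a sketch using commonly quoted coefficients (the guideline's own coefficients and method may differ):

```python
# Sketch of a rule-of-thumb water balance for an evaporative cooling
# tower; coefficients are common engineering approximations, assumed
# here for illustration rather than taken from the guideline.
def cooling_tower_makeup_gpm(recirc_gpm, range_f, cycles_of_concentration):
    evap = 0.00085 * recirc_gpm * range_f   # common evaporation estimate
    blowdown = evap / (cycles_of_concentration - 1)
    drift = 0.0002 * recirc_gpm             # ~0.02% of recirculation
    return evap + blowdown + drift

# hypothetical tower: 1000 gpm recirculation, 10 F range, 5 cycles
print(round(cooling_tower_makeup_gpm(1000, 10, 5), 2))
```

Analogous closed-form balances (boiler blowdown fractions, batch volumes per cycle, wash flow times duration) cover the other categories the document lists.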
Check Estimates and Independent Costs
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28T23:59:59.000Z
Check estimates and independent cost estimates (ICEs) are tools that can be used to validate a cost estimate. Estimate validation entails an objective review of the estimate to ensure that estimate criteria and requirements have been met and that a well-documented, defensible estimate has been developed. This chapter describes check estimates and their procedures, as well as the various types of independent cost estimates.
Does Geometric Coupling Generates Resonances?
I. C. Jardim; G. Alencar; R. R. Landim; R. N. Costa Filho
2015-05-08T23:59:59.000Z
Geometrical coupling in a codimension-one Randall-Sundrum (RS) scenario is used to study resonances of $p-$form fields. The resonances are calculated using the transfer matrix method. The models studied comprise the standard RS with delta-like branes, as well as branes generated by kinks and domain walls. The parameters are changed to control the thickness of the smooth brane. With this, a very interesting pattern is found for the resonances. The geometrical coupling does not generate resonances for the reduced $p-$form in any of the cases considered.
A procedure for oscillatory parameter identification
Trudnowski, D.J.; Donnelly, M.K. [Pacific Northwest Lab., Richland, WA (United States); Hauer, J.F. [Bonneville Power Administration, Portland, OR (United States)
1994-02-01T23:59:59.000Z
A procedure is proposed where a power system is excited with a low-level pseudo-random probing signal and the frequency, damping, magnitude, and shape of oscillatory modes are identified using spectral density estimation and frequency-domain transfer-function identification. Attention is focused on identifying system modes in the presence of noise. Two example cases are studied: identification of electromechanical oscillation modes in a 16-machine power system; and turbine-generator shaft modes of a 3-machine power plant feeding a series-compensated 500-kV network.
Subsurface Geotechnical Parameters Report
D. Rigby; M. Mrugala; G. Shideler; T. Davidsavor; J. Leem; D. Buesch; Y. Sun; D. Potyondy; M. Christianson
2003-12-17T23:59:59.000Z
The Yucca Mountain Project is entering the license application (LA) stage in its mission to develop the nation's first underground nuclear waste repository. After a number of years of gathering data related to site characterization, with activities ranging from laboratory and site investigations to numerical modeling of processes associated with conditions to be encountered in the future repository, the Project is realigning its activities towards License Application preparation. At the current stage, the major efforts are directed at translating the results of scientific investigations into the sets of data needed to support the design and to fulfill the licensing requirements. This document addresses the program's need to answer specific technical questions so that an assessment can be made about the suitability and adequacy of data to license and construct a repository at the Yucca Mountain Site. In July 2002, the U.S. Nuclear Regulatory Commission (NRC) published an Integrated Issue Resolution Status Report (NRC 2002). Included in this report were the Repository Design and Thermal-Mechanical Effects (RDTME) Key Technical Issues (KTI). Geotechnical agreements were formulated to resolve a number of KTI subissues; in particular, RDTME KTIs 3.04, 3.05, 3.07, and 3.19 relate to the physical, thermal, and mechanical properties of the host rock (NRC 2002, pp. 2.1.1-28, 2.1.7-10 to 2.1.7-21, A-17, A-18, and A-20). The purpose of the Subsurface Geotechnical Parameters Report is to present an accounting of current geotechnical information that will help resolve KTI subissues and some other project needs. The report analyzes and summarizes available qualified geotechnical data. It evaluates the sufficiency and quality of existing data to support engineering design and performance assessment.
In addition, the corroborative data obtained from tests performed by a number of research organizations is presented to reinforce conclusions derived from the pool of data gathered within a full QA-controlled domain. An evaluation of the completeness of the current data is provided with respect to the requirements for geotechnical data to support design and performance assessment.
Exploiting the Impact of Database System Configuration Parameters: A Design of Experiments Approach
Minnesota, University of
...in determining DBMS performance. However, the number of configuration parameters in a DBMS is very large... may have no or marginal effects on the DBMS performance for the given query workload. ...of input parameters. Second, we exploit the estimated effects to: 1) rank DBMS configuration parameters...
Local Sequential Ensemble Kalman Filter for Simultaneously Tracking States and Parameters
Welch, Greg
...and operation of a power system. To improve the estimation accuracy of states and parameters, this paper applies... ...and parameters using phasor-measurement-unit (PMU) data. Based on simulation studies using multi-machine systems... ...in power systems. Accurate information about states (e.g., rotor speeds, angles) and parameters...
Calibrated Hydrothermal Parameters, Barrow, Alaska, 2013
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Atchley, Adam; Painter, Scott; Harp, Dylan; Coon, Ethan; Wilson, Cathy; Liljedahl, Anna; Romanovsky, Vladimir
A model-observation-experiment process (ModEx) is used to generate three 1D models of characteristic micro-topographical land formations, which are capable of simulating present-day active layer thickness (ALT) from current climate conditions. Each column was used in a coupled calibration to identify moss, peat, and mineral soil hydrothermal properties to be used in up-scaled simulations. Observational soil temperature data from a tundra site located near Barrow, AK (Area C) are used to calibrate the thermal properties of moss, peat, and sandy loam soil for use in the multiphysics Advanced Terrestrial Simulator (ATS) models. The simulation results are a list of calibrated hydrothermal parameters for moss, peat, and mineral soil.
Sanford, P. C.; Templeton, J. H.; Stevens, J. L.; Dorr, K.
2002-02-25T23:59:59.000Z
The Rocky Flats Closure Project (Site) includes several multi-year decontamination and decommissioning (D&D) projects which, over the next four years, will dismantle and demolish four major plutonium facilities, four major uranium facilities, and over 400 additional facilities of different types. The projects are currently generating large quantities of transuranic, low-level, mixed, hazardous, and sanitary wastes. A previous paper described the initial conceptual estimates and methods, and the evolution of these methods based on the actual results from the decommissioning of a ''pilot'' facility. The waste estimating method that resulted from that work was used for the waste estimates incorporated into the current Site baseline. This paper discusses subsequent developments in waste estimating that have occurred since the baseline work. After several months of operation under the current Site baseline, an effort was initiated to either validate or identify improvements to the waste basis-of-estimate. Specific estimate and estimating-method elements were identified for additional analysis based on each element's potential for error and the impact of that error on Site activities. The analysis took advantage of actual, more detailed data collected both from three additional years of experience in decommissioning a second plutonium facility and from experience in deactivating certain non-plutonium facilities. It compared the actual transuranic and low-level waste generation against their respective estimates, based on overall distribution and for individual media (i.e., equipment type), and evaluated trends. Finally, it projected the quantity of lead-characteristic low-level mixed waste that will be generated from plutonium building decommissioning and upgraded the decommissioning waste estimates for the non-plutonium buildings.
Asymptotics and computations for approximation of method of regularization estimators
Lee, Sang-Joon
2005-08-29T23:59:59.000Z
, 1973) and one-sided cross-validation (Hart and Yi, 1998) can be employed for the MOR estimator. Rice (1986) demonstrated that a reasonable choice of the smoothing parameter for making mean squared error small for estimating ? may not be reasonable... in terms of the estimation error incurred for f, and vice versa. O'Sullivan (1986) overviewed various issues in MOR estimation and the solution to ill-posed inverse problems with an extension of CV and related smoothing parameter selection criteria...
Varghese, Joshua
2011-08-02T23:59:59.000Z
constant (TC) have been developed. The axial strain TC is a parameter that is related to the viscoelastic and poroelastic behavior of tissues. Estimation of this parameter can be done using curve fitting methods. However, the effect of temporal...
Simplified Approach for Estimating Impacts of Electricity Generation...
Integrated Model to Assess the Global Environment... Simple Interactive Models for better air quality (SIM-air)... further results... SIMPACTS is a user-friendly, simplified approach...
Updated Capital Cost Estimates for Utility Scale Electricity Generating Plants
U.S. Energy Information Administration (EIA) Indexed Site
Simplified Approach for Estimating Impacts of Electricity Generation
Setting a retail generation credit
Jacobs, J.M.
1999-05-01T23:59:59.000Z
While the additional cost components will vary depending on the way the wholesale energy component is calculated, at a minimum a generation credit should recognize the following costs: the additional value of shaping or load-following; premia associated with the risks of serving retail load; transmission costs incurred by competitive suppliers; commercial costs; and reasonable profits. In this article the author reviews the construction of a generation credit, starting with three different ways to compute the wholesale cost of electric energy--as a forecast, as a forward price, or from the spot market--and then moving on to additional cost items. Throughout, the author attempts to estimate the costs an efficient competitor will incur, in order to illustrate the difference between a retail generation credit and a wholesale price index.
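The component build-up described in this abstract is simple arithmetic; the sketch below illustrates it with invented per-MWh figures (every number is hypothetical).

```python
# Illustrative only: all dollar figures below are invented. A generation
# credit starts from a wholesale energy cost and adds the cost components
# an efficient competitor would incur to serve retail load.
components_per_mwh = {
    "wholesale_energy":    30.00,  # forecast, forward price, or spot-based
    "load_shaping":         2.50,  # value of shaping / load-following
    "retail_risk_premium":  1.75,  # risks of serving retail load
    "transmission":         3.00,  # incurred by competitive suppliers
    "commercial":           4.00,  # billing, metering, customer service
    "profit":               1.50,  # reasonable margin
}

generation_credit = sum(components_per_mwh.values())
print(f"generation credit: ${generation_credit:.2f}/MWh")
# prints: generation credit: $42.75/MWh
```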
Understanding and Managing Generation Y
Wallace, Kevin
2007-12-14T23:59:59.000Z
There are four generations in the workplace today: the Silent Generation, the Baby Boom Generation, Generation X, and Generation Y. Generation Y, being the newest, is the least understood generation, although marketers...
The reach of the ATLAS experiment in SUSY parameter space
Janet Dietrich
2009-10-29T23:59:59.000Z
Already with the very first data, the ATLAS experiment should be sensitive to a SUSY signal well beyond the regions explored by the Tevatron. We present a detailed study of the ATLAS discovery reach in the parameter space of various SUSY models. The expected uncertainties on the background estimates are taken into account.
Reliable Computation of Binary Parameters in Activity Coefficient Models
Stadtherr, Mark A.
...phase equilibria. The technique is demonstrated with examples using the NRTL and electrolyte-NRTL (eNRTL) models. In two of the NRTL examples, results are found that contradict previous work. In the eNRTL... ...time that a method for parameter estimation in the eNRTL model from binary LLE data (mutual solubility...
On the Estimation of Nonrandom Signal Coefficients From Jittered Samples
Goyal, Vivek K.
This paper examines the problem of estimating the parameters of a bandlimited signal from samples corrupted by random jitter (timing noise) and additive, independent identically distributed (i.i.d.) Gaussian noise, where ...
Seismic fragility estimates for reinforced concrete framed buildings
Ramamoorthy, Sathish Kumar
2007-04-25T23:59:59.000Z
story drift given the spectral acceleration at the fundamental period of the building. The unknown parameters of the demand models are estimated using the simulated response data obtained from nonlinear time history analyses of the structural models...
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28T23:59:59.000Z
The chapter describes the estimates required on government-managed projects for both general construction and environmental management.
RELIABILITY OF SAMPLING INSPECTION SCHEMES APPLIED TO REPLACEMENT STEAM GENERATORS
Cizelj, Leon
Guy Roussel. The size of the random sample of tubes to be inspected in replacement steam generators is revisited in this paper. A procedure to estimate the maximum number of defective tubes left in the steam generator after...
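One standard way to turn a clean inspection sample into a bound on the number of remaining defective tubes is a hypergeometric acceptance-sampling argument; the sketch below is a generic illustration under that assumption, not the procedure from the paper.

```python
from math import comb

def max_defectives(N, n, found=0, confidence=0.95):
    """Upper confidence bound on the number of defective tubes D among N,
    given that `found` defects were seen in a random sample of n tubes.
    Returns the largest D that the sample cannot reject at `confidence`."""
    def p_at_most(D):
        # P(observe <= `found` defects | D defective among N): hypergeometric
        return sum(comb(D, k) * comb(N - D, n - k)
                   for k in range(found + 1)) / comb(N, n)
    D = found
    # p_at_most decreases in D; stop at the first D the data would reject
    while D < N and p_at_most(D + 1) > 1 - confidence:
        D += 1
    return D
```

For example, inspecting 50 of 100 tubes and finding no defects still leaves up to 4 defective tubes consistent with the data at 95% confidence, which is the kind of residual-defect bound the abstract refers to.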
Systems Engineering Cost Estimation
Bryson, Joanna J.
Systems Engineering, Lecture 3: Cost Estimation. Dr. Joanna Bryson, Dr. Leon Watts, University of Bath. Objectives: contrast approaches for estimating software project cost, and identify the main sources of cost. How to estimate cost? It is difficult to know what we are building early on... ...on project, human capital impact.
Demonstration of Entanglement-Enhanced Phase Estimation in Solid
Gang-Qin Liu; Yu-Ran Zhang; Yan-Chun Chang; Jie-Dong Yue; Heng Fan; Xin-Yu Pan
2015-04-08T23:59:59.000Z
Precise parameter estimation plays a central role in science and technology. The statistical error in estimation can be decreased by repeating the measurement, so that the uncertainty of the estimated parameter is inversely proportional to the square root of the number of repetitions, in accordance with the central limit theorem. Quantum parameter estimation, an emerging field of quantum technology, aims to use quantum resources to yield higher statistical precision than classical approaches. Here, we report the first room-temperature implementation of entanglement-enhanced phase estimation in a solid-state system: the nitrogen-vacancy centre in pure diamond. We demonstrate a super-resolving phase measurement with two entangled qubits of different physical realizations: a nitrogen-vacancy centre electron spin and a proximal ${}^{13}$C nuclear spin. The experimental data clearly show the uncertainty reduction when the entanglement resource is used, confirming the theoretical expectation. Our results represent an elementary demonstration of the enhancement of quantum metrology over classical procedures.
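The two scalings contrasted in this abstract are the standard textbook relations (not results specific to this experiment): $N$ repeated independent measurements give the standard quantum limit, while $N$ entangled probes can in principle reach the Heisenberg limit:

```latex
\Delta\phi_{\mathrm{SQL}} = \frac{1}{\sqrt{N}}, \qquad
\Delta\phi_{\mathrm{HL}} = \frac{1}{N}.
```

With the two entangled qubits of the experiment, $N=2$, so the attainable phase uncertainty improves by a factor of $\sqrt{2}$ over the classical strategy.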
Using Utility Load Data to Estimate Demand for Space Cooling and Potential for Shiftable Loads
Denholm, P.; Ong, S.; Booten, C.
2012-05-01T23:59:59.000Z
This paper describes a simple method to estimate hourly cooling demand from historical utility load data. It compares total hourly demand to demand on cool days, and checks the resulting estimates of total cooling demand against previous regional and national estimates. Load profiles generated with this method may be used to estimate the potential for aggregated demand response or load shifting via cold storage.
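The cool-day comparison can be sketched in a few lines; the data layout (per-day hourly loads plus a daily mean temperature) and the mild-day cutoff are assumptions for the illustration, not the paper's exact procedure.

```python
# Hedged sketch of the cool-day comparison idea: build an hourly baseline
# profile from mild days, then count each day's positive excess over that
# baseline as cooling demand.
def estimate_cooling_demand(hourly_load, daily_temp, mild_max=18.0):
    """hourly_load: dict day -> list of 24 hourly loads (MW)
    daily_temp:  dict day -> daily mean temperature (deg C)
    mild_max:    days at or below this temperature define the baseline"""
    mild_days = [d for d, t in daily_temp.items() if t <= mild_max]
    if not mild_days:
        raise ValueError("no mild days to build a baseline from")
    # Baseline profile: hour-by-hour average over mild days
    baseline = [sum(hourly_load[d][h] for d in mild_days) / len(mild_days)
                for h in range(24)]
    # Cooling demand: positive excess over the baseline on every day
    return {d: [max(0.0, hourly_load[d][h] - baseline[h]) for h in range(24)]
            for d in hourly_load}
```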
Sensitivity of health risk estimates to air quality adjustment procedure
Whitfield, R.G.
1997-06-30T23:59:59.000Z
This letter is a summary of risk results associated with exposure estimates using two-parameter Weibull and quadratic air quality adjustment procedures (AQAPs). New exposure estimates were developed for children and child-occurrences, six urban areas, and five alternative air quality scenarios. In all cases, the Weibull and quadratic results are compared to previous results, which are based on a proportional AQAP.
Computer-intensive rate estimation, diverging statistics, and scanning
Politis, Dimitris N.
Tucker McElroy, U.S. Bureau... ...in a very general setting without requiring the choice of a tuning parameter. The scanning method is applied to different scans, and the resulting estimators are then combined to improve...
Estimating Power System Dynamic States Using Extended Kalman Filter
Huang, Zhenyu; Schneider, Kevin P.; Nieplocha, Jaroslaw; Zhou, Ning
2014-10-31T23:59:59.000Z
Abstract—The state estimation tools currently deployed in power system control rooms are based on a steady-state assumption. As a result, the suite of operational tools that rely on state estimation results as inputs do not have dynamic information available, and their accuracy is compromised. This paper investigates the application of Extended Kalman Filtering techniques for estimating dynamic states in the state estimation process. The newly formulated “dynamic state estimation” includes true system dynamics reflected in differential equations, unlike previously proposed “dynamic state estimation” schemes that only consider time-variant snapshots based on steady-state modeling. This new dynamic state estimation using the Extended Kalman Filter has been successfully tested on a multi-machine system. Sensitivity studies with respect to noise levels, sampling rates, model errors, and parameter errors are presented as well, illustrating the robust performance of the developed dynamic state estimation process.
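A scalar toy version of the EKF predict/update loop conveys the idea; the model below (trivial dynamics f(x)=x, nonlinear measurement h(x)=x^2) is invented for illustration and is not the paper's multi-machine generator formulation.

```python
# Minimal scalar Extended Kalman Filter sketch: linearize the nonlinear
# measurement around the current estimate and blend prediction with data.
def ekf_step(x, P, z, Q=1e-4, R=1e-4):
    # Predict: trivial dynamics x_{k+1} = x_k, Jacobian F = 1
    x_pred, P_pred = x, P + Q
    # Update with measurement z = h(x) + noise, h(x) = x^2, Jacobian H = 2x
    H = 2.0 * x_pred
    S = H * P_pred * H + R          # innovation covariance
    K = P_pred * H / S              # Kalman gain
    x_new = x_pred + K * (z - x_pred ** 2)
    P_new = (1.0 - K * H) * P_pred
    return x_new, P_new

# Track the true state x* = 2 from (noiseless) measurements z = 4
x, P = 1.0, 1.0
for _ in range(20):
    x, P = ekf_step(x, P, z=4.0)
```

In the dynamic state estimation setting, the predict step would integrate the machine differential equations (rotor angles and speeds) between PMU samples instead of the identity map used here.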
Estimate of CP Violation for the LBNE Project and $\delta_{CP}$
Leonard S. Kisslinger
2012-06-27T23:59:59.000Z
Measurements of CP violation (CPV) and the basic $\delta_{CP}$ parameter are the goals of the LBNE Project, which is being planned. Using the expected energy and baseline parameters for the LBNE Project, CPV and the dependence of CPV on $\delta_{CP}$ are estimated, to help in the planning of this project.
The Lepton Sector of a Fourth Generation
Gustavo Burdman; Leandro Da Rold; Ricardo D. Matheus
2010-05-10T23:59:59.000Z
In extensions of the standard model with a heavy fourth generation one important question is what makes the fourth-generation lepton sector, particularly the neutrinos, so different from the lighter three generations. We study this question in the context of models of electroweak symmetry breaking in warped extra dimensions, where the flavor hierarchy is generated by the localization of the zero-mode fermions in the extra dimension. In this setup the Higgs sector is localized near the infrared brane, whereas the Majorana mass term is localized at the ultraviolet brane. As a result, light neutrinos are almost entirely Majorana particles, whereas the fourth generation neutrino is mostly a Dirac fermion. We show that it is possible to obtain heavy fourth-generation leptons in regions of parameter space where the light neutrino masses and mixings are compatible with observation. We study the impact of these bounds, as well as the ones from lepton flavor violation, on the phenomenology of these models.
Generation gaps in engineering?
Kim, David J. (David Jinwoo)
2008-01-01T23:59:59.000Z
There is much enthusiastic debate on the topic of generation gaps in the workplace today; what the generational differences are, how to address the apparent challenges, and if the generations themselves are even real. ...
Jayaram, Bhyravabotla
Solvation Free Energy of Biomacromolecules: Parameters for a Modified Generalized Born Model. ...provides rapid estimates of the electrostatic free energies of solvation for diverse molecules... ...of parameters compatible with the AMBER force field is described. The method is used to estimate free energies...
Tools for event generator tuning and validation
Andy Buckley
2008-09-26T23:59:59.000Z
I describe the current status of MCnet tools for validating the performance of event generator simulations against data, and for tuning their phenomenological free parameters. For validation, the Rivet toolkit is now a mature and complete system, with a large library of prominent benchmark analyses. For tuning, the Professor system has recently completed its first tunes of Pythia 6, with substantial improvements on the existing default tune and potential to greatly aid the setup of new generators for LHC studies.
Small Generator Aggregation (Maine)
Broader source: Energy.gov [DOE]
This section establishes requirements for electricity providers to purchase electricity from small generators, with the goal of ensuring that small electricity generators (those with a nameplate...
GEOTHERMAL POWER GENERATION PLANT
Boyd, Tonya
2013-12-01T23:59:59.000Z
Oregon Institute of Technology (OIT) drilled a deep geothermal well on campus (to 5,300 feet deep), which produced a 196°F resource, as part of the 2008 OIT Congressionally Directed Project. OIT will construct a geothermal power plant (estimated at 1.75 MWe gross output). The plant would provide 50 to 75 percent of the electricity demand on campus. Technical support for construction and operations will be provided by OIT's Geo-Heat Center. The power plant will be housed adjacent to the existing heat exchange building on the southeast corner of campus, near the existing geothermal production wells used for heating the campus. Cooling water will be supplied from the nearby cold water wells to a cooling tower, or air cooling may be used, depending upon the type of plant selected. Using the flow obtained from the deep well, not only can energy be generated from the power plant, but the “waste” water will also be used to supplement space heating on campus. A pipeline will be constructed from the well to the heat exchanger building, and then a discharge line will be constructed around the east and north sides of campus for anticipated use of the “waste” water by facilities in an adjacent sustainable energy park. An injection well will need to be drilled to handle the flow, as the campus's existing injection wells are limited in capacity.
FUNCTIONAL ESTIMATION FOR A MULTICOMPONENT AGE REPLACEMENT MODEL
L'Ecuyer, Pierre
Pierre L'Ecuyer, Benoit Martin... ...controlled by a replacement rule based on age thresholds. We show how to estimate the expected cost... Keywords: regenerative simulation, maintenance models, age replacement policies.
Grid-based exploration of cosmological parameter space with Snake
Mikkelsen, K; Eriksen, H K
2012-01-01T23:59:59.000Z
We present a fully parallelized grid-based parameter estimation algorithm for investigating multidimensional likelihoods, called Snake, and apply it to cosmological parameter estimation. The basic idea is to map out the likelihood grid-cell by grid-cell according to decreasing likelihood, and stop when a certain threshold has been reached. This approach improves vastly on the "curse of dimensionality" problem plaguing standard grid-based parameter estimation, simply by disregarding grid-cells with negligible likelihood. The main advantages of this method compared to standard Metropolis-Hastings MCMC methods include 1) trivial extraction of arbitrary conditional distributions; 2) direct access to Bayesian evidences; 3) better sampling of the tails of the distribution; and 4) nearly perfect parallelization scaling. The main disadvantage is, as in the case of brute-force grid-based evaluation, a dependency on the number of parameters, N_par. One of the main goals of the present paper is to determine how large N_par...
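The grid-cell-by-grid-cell exploration described here can be sketched as a best-first search that stops expanding cells once their log-likelihood falls a fixed amount below the best value seen; the toy Gaussian likelihood and threshold are assumptions for the illustration, not the Snake implementation.

```python
import heapq

def snake_explore(loglike, start, shape, delta=10.0):
    """Best-first exploration of a discrete likelihood grid.
    loglike: function cell-tuple -> log-likelihood
    start:   starting cell (ideally near the peak)
    shape:   grid dimensions
    delta:   disregard cells more than `delta` below the best log-likelihood"""
    best = loglike(start)
    heap = [(-best, start)]            # max-heap via negated log-likelihood
    seen = {start}
    accepted = {}
    while heap:
        neg, cell = heapq.heappop(heap)
        ll = -neg
        best = max(best, ll)
        if ll < best - delta:
            continue                   # negligible likelihood: do not expand
        accepted[cell] = ll
        for dim in range(len(shape)):  # push unvisited axis neighbours
            for step in (-1, 1):
                nb = list(cell)
                nb[dim] += step
                nb = tuple(nb)
                if all(0 <= nb[i] < shape[i] for i in range(len(shape))) \
                        and nb not in seen:
                    seen.add(nb)
                    heapq.heappush(heap, (-loglike(nb), nb))
    return accepted
```

Because cells below the threshold are never expanded, the visited volume grows with the high-likelihood region rather than with the full grid, which is the essence of the claimed escape from the curse of dimensionality.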
Chiral Lagrangian Parameters for Scalar and Pseudoscalar Mesons
Bardeen, W; Thacker, H
2004-01-01T23:59:59.000Z
The results of a high-statistics study of scalar and pseudoscalar meson propagators in quenched lattice QCD are presented. For two values of lattice spacing, $\beta=5.7$ ($a \approx .18$ fm) and 5.9 ($a \approx .12$ fm), we probe the light quark mass region using clover improved Wilson fermions with the MQA pole-shifting ansatz to treat the exceptional configuration problem. The quenched chiral loop parameters $m_0$ and $\alpha_{\Phi}$ are determined from a study of the pseudoscalar hairpin correlator. From a global fit to the meson correlators, estimates are obtained for the relevant chiral Lagrangian parameters, including the Leutwyler parameters $L_5$ and $L_8$. Using the parameters obtained from the singlet and nonsinglet pseudoscalar correlators, the quenched chiral loop effect in the nonsinglet scalar meson correlator is studied. By removing this QCL effect from the lattice correlator, we obtain the mass and decay constant of the ground state scalar, isovector meson $a_0$.
Building unbiased estimators from non-gaussian likelihoods with application to shear estimation
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Madhavacheril, Mathew S. [Stony Brook Univ., NY (United States); Slosar, Anze [Brookhaven National Lab. (BNL), Upton, NY (United States); McDonald, Patrick [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Sehgal, Neelima [Stony Brook Univ., NY (United States)
2015-01-01T23:59:59.000Z
We develop a general framework for generating estimators of a given quantity which are unbiased to a given order in the difference between the true value of the underlying quantity and the fiducial position in theory space around which we expand the likelihood. We apply this formalism to rederive the optimal quadratic estimator and show how the replacement of the second derivative matrix with the Fisher matrix is a generic way of creating an unbiased estimator (assuming choice of the fiducial model is independent of data). Next we apply the approach to estimation of shear lensing, closely following the work of Bernstein and Armstrong (2014). Our first order estimator reduces to their estimator in the limit of zero shear, but it also naturally allows for the case of non-constant shear and the easy calculation of correlation functions or power spectra using standard methods. Both our first-order estimator and Bernstein and Armstrong's estimator exhibit a bias which is quadratic in true shear. Our third-order estimator is, at least in the realm of the toy problem of Bernstein and Armstrong, unbiased to 0.1% in relative shear errors $\delta g/g$ for shears up to |g| = 0.2.
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28T23:59:59.000Z
Specialty costs are those nonstandard, unusual costs that are not typically estimated. Costs for research and development (R&D) projects involving new technologies, costs associated with future regulations, and specialty equipment costs are examples of specialty costs. This chapter discusses those factors that are significant contributors to project specialty costs and methods of estimating costs for specialty projects.
Cooling load estimation methods
McFarland, R.D.
1984-01-01T23:59:59.000Z
Ongoing research on quantifying the cooling loads in residential buildings, particularly buildings with passive solar heating systems, is described. Correlations are described that permit auxiliary cooling estimates from monthly average insolation and weather data. The objective of the research is to develop a simple analysis method, useful early in design, to estimate the annual cooling energy required of a given building.
Estimating vehicle height using homographic projections
Cunningham, Mark F; Fabris, Lorenzo; Gee, Timothy F; Ghebretati, Jr., Frezghi H; Goddard, James S; Karnowski, Thomas P; Ziock, Klaus-peter
2013-07-16T23:59:59.000Z
Multiple homography transformations corresponding to different heights are generated in the field of view. A group of salient points within a common estimated height range is identified in a time series of video images of a moving object. Inter-salient-point distances are measured for the group of salient points under the multiple homography transformations corresponding to the different heights. Variations in the inter-salient-point distances under the multiple homography transformations are compared. The height of the group of salient points is estimated to be the height corresponding to the homography transformation that minimizes the variations.
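The selection rule in this abstract (pick the height whose homography makes inter-point distances most consistent across frames) can be sketched directly; the homographies and point tracks below are synthetic, not from the patent.

```python
from math import hypot

def apply_h(H, p):
    # Project an image point through a 3x3 homography (row-major lists)
    x, y = p
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return ((H[0][0] * x + H[0][1] * y + H[0][2]) / w,
            (H[1][0] * x + H[1][1] * y + H[1][2]) / w)

def pair_dists(pts):
    # All pairwise distances within one frame's projected point group
    return [hypot(a[0] - b[0], a[1] - b[1])
            for i, a in enumerate(pts) for b in pts[i + 1:]]

def estimate_height(tracks, homographies):
    """tracks: list of frames, each a list of (x, y) salient points.
    homographies: dict height -> 3x3 homography for that assumed height.
    Returns the height minimizing distance variance across frames."""
    def score(H):
        per_frame = [pair_dists([apply_h(H, p) for p in frame])
                     for frame in tracks]
        total = 0.0
        for j in range(len(per_frame[0])):   # one variance per point pair
            d = [frame[j] for frame in per_frame]
            mean = sum(d) / len(d)
            total += sum((v - mean) ** 2 for v in d)
        return total
    return min(homographies, key=lambda h: score(homographies[h]))
```

The intuition is that a rigid group of points at the correct height maps to a rigid configuration on the reference plane, so its internal distances stay constant as the object moves; the wrong height introduces perspective distortion that varies with position.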
Generation to Generation: The Heart of Family Medicine
Winter, Robin O
2012-01-01T23:59:59.000Z
Ageism in the Workplace. Generations, Spring, 5. Westman... ...of caring for multiple generations simultaneously. Strongly... Generation to Generation: The Heart of Family Medicine
Estimating pixel variances in the scenes of staring sensors
Simonson, Katherine M. (Cedar Crest, NM); Ma, Tian J. (Albuquerque, NM)
2012-01-24T23:59:59.000Z
A technique for detecting changes in a scene perceived by a staring sensor is disclosed. The technique includes acquiring a reference image frame and a current image frame of a scene with the staring sensor. A raw difference frame is generated based upon differences between the reference image frame and the current image frame. Pixel error estimates are generated for each pixel in the raw difference frame based at least in part upon spatial error estimates related to spatial intensity gradients in the scene. The pixel error estimates are used to mitigate effects of camera jitter in the scene between the current image frame and the reference image frame.
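A minimal sketch of the idea, assuming the spatial error estimate is proportional to the local intensity gradient: pixels near strong gradients get a larger change threshold, so small jitter-induced shifts are not flagged as changes. All constants are invented for the example.

```python
# Illustrative jitter-tolerant differencing: the per-pixel threshold grows
# with the largest one-pixel intensity step around that pixel, which is
# how much the value could change under ~1 pixel of camera jitter.
def detect_changes(ref, cur, jitter_px=1.0, noise_sigma=1.0, k=3.0):
    rows, cols = len(ref), len(ref[0])
    changed = set()
    for r in range(rows):
        for c in range(cols):
            diff = abs(cur[r][c] - ref[r][c])
            # Spatial error estimate from the reference image gradient
            grad = 0.0
            for dr, dc in ((0, 1), (0, -1), (1, 0), (-1, 0)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    grad = max(grad, abs(ref[rr][cc] - ref[r][c]))
            threshold = jitter_px * grad + k * noise_sigma
            if diff > threshold:
                changed.add((r, c))
    return changed
```

A sharp edge that shifts by one pixel produces a large raw difference, but its large gradient inflates the threshold and suppresses the false alarm, while a genuine change in a flat region is still detected against the small noise floor.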
Recursive bias estimation for high dimensional smoothers
Hengartner, Nicolas W [Los Alamos National Laboratory; Matzner-lober, Eric [UHB, FRANCE; Cornillon, Pierre - Andre [INRA
2008-01-01T23:59:59.000Z
In multivariate nonparametric analysis, sparseness of the covariates, also called the curse of dimensionality, forces one to use large smoothing parameters. This leads to biased smoothers. Instead of focusing on optimally selecting the smoothing parameter, we fix it to some reasonably large value to ensure an over-smoothing of the data. The resulting smoother has a small variance but a substantial bias. In this paper, we propose to iteratively correct the bias of the initial estimator by an estimate of the bias obtained by smoothing the residuals. We examine in detail the convergence of the iterated procedure for classical smoothers and relate our procedure to L{sub 2}-Boosting. We apply our method to simulated and real data and show that it compares favorably with existing procedures.
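The residual-smoothing loop can be illustrated in one dimension with a moving-average smoother (the paper's setting is multivariate kernel smoothing; this toy only shows the bias-correction iteration).

```python
# Over-smooth first, then add back a smoothed version of the residuals:
# each pass removes another order of the smoothing bias.
def smooth(y, half=2):
    # Moving average with window truncated at the boundaries
    n = len(y)
    out = []
    for i in range(n):
        lo, hi = max(0, i - half), min(n, i + half + 1)
        out.append(sum(y[lo:hi]) / (hi - lo))
    return out

def bias_corrected(y, half=2, iters=1):
    fit = smooth(y, half)
    for _ in range(iters):
        resid = [yi - fi for yi, fi in zip(y, fit)]
        fit = [fi + ri for fi, ri in zip(fit, smooth(resid, half))]
    return fit

# Noise-free quadratic: the plain smoother is biased in the interior,
# and a single residual-smoothing correction removes that bias there.
y = [float(i * i) for i in range(21)]
f1 = smooth(y)
f2 = bias_corrected(y, iters=1)
```

On this quadratic, the 5-point moving average is off by a constant 2 everywhere in the interior; the residuals are therefore nearly constant, the smoother reproduces them exactly, and one correction pass recovers the true curve away from the boundaries.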
Municipal Solid Waste Generation: Feasibility of Reconciling Measurement Methods
Schneider, Shelly H.
2014-07-25T23:59:59.000Z
...to be measured. This research investigates the reconciliation of results from two methodologies for estimating municipal solid waste (MSW) generation, and assesses the potential for solid waste planners to combine the two methods in a cost-effective manner...
Firestone, Richard B; Reijonen, Jani
2014-05-27T23:59:59.000Z
An embodiment of a gamma ray generator includes a neutron generator and a moderator. The moderator is coupled to the neutron generator and includes a neutron capture material. In operation, the neutron generator produces neutrons and the neutron capture material captures at least some of the neutrons to produce gamma rays. An application of the gamma ray generator is as a source of gamma rays for the calibration of gamma ray detectors.
Extracting Stacking Interaction Parameters for RNA from the Data Set of Native Structures
Thirumalai, Devarajan
...dependent estimates of the interaction parameters. We have exploited the growing database of natively folded RNA... ...of higher free energy states. The computed Z-scores agree with estimates made using calorimetric...
2007 Estimated International Energy Flows
Smith, C A; Belles, R D; Simon, A J
2011-03-10T23:59:59.000Z
An energy flow chart or 'atlas' for 136 countries has been constructed from data maintained by the International Energy Agency (IEA) and estimates of energy use patterns for the year 2007. Approximately 490 exajoules (460 quadrillion BTU) of primary energy are used in aggregate by these countries each year. While the basic structure of the energy system is consistent from country to country, patterns of resource use and consumption vary. Energy can be visualized as it flows from resources (i.e. coal, petroleum, natural gas) through transformations such as electricity generation to end uses (i.e. residential, commercial, industrial, transportation). These flow patterns are visualized in this atlas of 136 country-level energy flow charts.
T ti E St S tTetiaroa Energy Storage System Estimated ZBB Zinc Bromide Battery Performance and Costs
Kammen, Daniel M.
://rael.berkeley.edu 1 #12;Island Load and DieselIsland Load and Diesel Generation Assumptions #12;Estimated Elect ical variation #12;Diesel Gene ationDiesel Generation It was assumed that backup generation will be met via (2) 455 kW diesel generator sets These generator sets were modeled using data available
The Different Characteristics of Aquifer Parameters and Their Implications on Pumping-Test Analysis
Jiao, Jiu Jimmy
The Different Characteristics of Aquifer Parameters and Their Implications on Pumping-Test Analysis and storativity, under constant-rate pumping conditions. A two-way coordinate is such that the conditions implications on pumping-test designs and interpretation. For example, to estimate the parameters
DIRECTIONAL DEPENDENCE OF {Lambda}CDM COSMOLOGICAL PARAMETERS
Axelsson, M.; Fantaye, Y.; Hansen, F. K.; Eriksen, H. K. [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo (Norway); Banday, A. J. [Universite de Toulouse, UPS-OMP, IRAP, Toulouse (France); Gorski, K. M., E-mail: magnus.axelsson@astro.uio.no, E-mail: y.t.fantaye@astro.uio.no [Jet Propulsion Laboratory, M/S 169/327, 4800 Oak Grove Drive, Pasadena, CA 91109 (United States)
2013-08-10T23:59:59.000Z
We study hemispherical power asymmetry in the Wilkinson Microwave Anisotropy Probe 9 yr data. We analyze the combined V- and W-band sky maps, after application of the KQ85 mask, and find that the asymmetry is statistically significant at the 3.4{sigma} confidence level for l = 2-600, where the data are signal-dominated, with a preferred asymmetry direction (l, b) = (227, -27). Individual asymmetry axes estimated from six independent multipole ranges are all consistent with this direction. Subsequently, we estimate cosmological parameters on different parts of the sky and show that the parameters A{sub s} , n{sub s} , and {Omega}{sub b} are the most sensitive to this power asymmetry. In particular, for the two opposite hemispheres aligned with the preferred asymmetry axis, we find n{sub s} = 0.959 {+-} 0.022 and n{sub s} = 0.989 {+-} 0.024, respectively.
Leung, Ka-Ngo (Hercules, CA)
2008-04-22T23:59:59.000Z
A cylindrical neutron generator is formed with a coaxial RF-driven plasma ion source and target. A deuterium (or deuterium and tritium) plasma is produced by RF excitation in a cylindrical plasma ion generator using an RF antenna. A cylindrical neutron generating target is coaxial with the ion generator, separated by plasma and extraction electrodes which contain many slots. The plasma generator emanates ions radially over 360.degree. and the cylindrical target is thus irradiated by ions over its entire circumference. The plasma generator and target may be as long as desired. The plasma generator may be in the center and the neutron target on the outside, or the plasma generator may be on the outside and the target on the inside. In a nested configuration, several concentric targets and plasma generating regions are nested to increase the neutron flux.
Leung, Ka-Ngo (Hercules, CA)
2009-12-29T23:59:59.000Z
A cylindrical neutron generator is formed with a coaxial RF-driven plasma ion source and target. A deuterium (or deuterium and tritium) plasma is produced by RF excitation in a cylindrical plasma ion generator using an RF antenna. A cylindrical neutron generating target is coaxial with the ion generator, separated by plasma and extraction electrodes which contain many slots. The plasma generator emanates ions radially over 360.degree. and the cylindrical target is thus irradiated by ions over its entire circumference. The plasma generator and target may be as long as desired. The plasma generator may be in the center and the neutron target on the outside, or the plasma generator may be on the outside and the target on the inside. In a nested configuration, several concentric targets and plasma generating regions are nested to increase the neutron flux.
Leung, Ka-Ngo
2005-06-14T23:59:59.000Z
A cylindrical neutron generator is formed with a coaxial RF-driven plasma ion source and target. A deuterium (or deuterium and tritium) plasma is produced by RF excitation in a cylindrical plasma ion generator using an RF antenna. A cylindrical neutron generating target is coaxial with the ion generator, separated by plasma and extraction electrodes which contain many slots. The plasma generator emanates ions radially over 360.degree. and the cylindrical target is thus irradiated by ions over its entire circumference. The plasma generator and target may be as long as desired. The plasma generator may be in the center and the neutron target on the outside, or the plasma generator may be on the outside and the target on the inside. In a nested configuration, several concentric targets and plasma generating regions are nested to increase the neutron flux.
Estimating exposure of terrestrial wildlife to contaminants
Sample, B.E.; Suter, G.W. II
1994-09-01T23:59:59.000Z
This report describes generalized models for the estimation of contaminant exposure experienced by wildlife on the Oak Ridge Reservation. The primary exposure pathway considered is oral ingestion, e.g. the consumption of contaminated food, water, or soil. Exposure through dermal absorption and inhalation are special cases and are not considered hereIN. Because wildlife mobile and generally consume diverse diets and because environmental contamination is not spatial homogeneous, factors to account for variation in diet, movement, and contaminant distribution have been incorporated into the models. To facilitate the use and application of the models, life history parameters necessary to estimate exposure are summarized for 15 common wildlife species. Finally, to display the application of the models, exposure estimates were calculated for four species using data from a source operable unit on the Oak Ridge Reservation.
Perturbed Power-law parameters from WMAP7
Minu Joy; Tarun Souradeep
2010-11-19T23:59:59.000Z
We present a perturbative approach for studying inflation models with soft departures from scale free spectra of the power law model. In the perturbed power law (PPL) approach one obtains at the leading order both the scalar and tensor power spectra with the running of their spectral indices, in contrast to the widely used slow roll expansion. The PPL spectrum is confronted data and we show that the PPL parameters are well estimated from WMAP-7 data.
Perturbed Power-law parameters from WMAP7
Joy, Minu
2010-01-01T23:59:59.000Z
We present a perturbative approach for studying inflation models with soft departures from scale free spectra of the power law model. In the perturbed power law (PPL) approach one obtains at the leading order both the scalar and tensor power spectra with the running of their spectral indices, in contrast to the widely used slow roll expansion. The PPL spectrum is confronted data and we show that the PPL parameters are well estimated from WMAP-7 data.
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2011-05-09T23:59:59.000Z
This Guide provides uniform guidance and best practices that describe the methods and procedures that could be used in all programs and projects at DOE for preparing cost estimates. No cancellations.
Operated device estimation framework
Rengarajan, Janarthanan
2009-05-15T23:59:59.000Z
Protective device estimation is a challenging task because there are numerous protective devices present in a typical distribution system. Among various protective devices, auto-reclosers and fuses are the main overcurrent protection on distribution...
Macknick, J.; Newmark, R.; Heath, G.; Hallett, K. C.
2011-03-01T23:59:59.000Z
Various studies have attempted to consolidate published estimates of water use impacts of electricity generating technologies, resulting in a wide range of technologies and values based on different primary sources of literature. The goal of this work is to consolidate the various primary literature estimates of water use during the generation of electricity by conventional and renewable electricity generating technologies in the United States to more completely convey the variability and uncertainty associated with water use in electricity generating technologies.
FY 2015 FY 2016 FY 2017 FY 2013 President's Budget Request 3,821.2 3,712.8 3,932.8 4,076.5 4,076.5 4 Estimate Budget Authority (in $ millions) FY 2011 FY 2012 FY 2013 FY 2014 FY 2015 FY 2016 FY 2017 FY 2013EXPLORATION EXP-1 Actual Estimate Budget Authority (in $ millions) FY 2011 FY 2012 FY 2013 FY 2014
Neutron Resonance Parameters and Covariance Matrix of 239Pu
Derrien, Herve [ORNL; Leal, Luiz C [ORNL; Larson, Nancy M [ORNL
2008-08-01T23:59:59.000Z
In order to obtain the resonance parameters in a single energy range and the corresponding covariance matrix, a reevaluation of 239Pu was performed with the code SAMMY. The most recent experimental data were analyzed or reanalyzed in the energy range thermal to 2.5 keV. The normalization of the fission cross section data was reconsidered by taking into account the most recent measurements of Weston et al. and Wagemans et al. A full resonance parameter covariance matrix was generated. The method used to obtain realistic uncertainties on the average cross section calculated by SAMMY or other processing codes was examined.
State energy data report 1996: Consumption estimates
NONE
1999-02-01T23:59:59.000Z
The State Energy Data Report (SEDR) provides annual time series estimates of State-level energy consumption by major economic sectors. The estimates are developed in the Combined State Energy Data System (CSEDS), which is maintained and operated by the Energy Information Administration (EIA). The goal in maintaining CSEDS is to create historical time series of energy consumption by State that are defined as consistently as possible over time and across sectors. CSEDS exists for two principal reasons: (1) to provide State energy consumption estimates to Members of Congress, Federal and State agencies, and the general public and (2) to provide the historical series necessary for EIA`s energy models. To the degree possible, energy consumption has been assigned to five sectors: residential, commercial, industrial, transportation, and electric utility sectors. Fuels covered are coal, natural gas, petroleum, nuclear electric power, hydroelectric power, biomass, and other, defined as electric power generated from geothermal, wind, photovoltaic, and solar thermal energy. 322 tabs.
H Filtering with Inequality Constraints for Aircraft Turbofan Engine Health Estimation
Simon, Dan
shows how inequality-constrained H filtering can be applied to aircraft engine health estimation the inequality-constrained H filter can be reduced to a standard H filter combined with a quadratic programming health parameter estimation. This paper applies inequality-constrained H filtering to estimate aircraft
Mercier, Matthieu J.
We present the results of a combined experimental and numerical study of the generation of internal waves using the novel internal wave generator design of Gostiaux et al. (Exp. Fluids, vol. 42, 2007, pp. 123–130). This ...
Liu, Jingjing
2010-03-24T23:59:59.000Z
This thesis has improved Baltazar's methodology for potential energy savings estimation from retro-commissioning/retrofits measures. Important improvements and discussions are made on optimization parameters, limits on ...
Transient Flow in a Heterogeneous Vadose Zone with Uncertain Parameters
A. M. Tartakovsky; Luis Garcia-Naranjo; Daniel M. Tartakovsky
2004-02-01T23:59:59.000Z
We consider transient flow in unsaturated heterogeneous porous media with uncertain hydraulic parameters. Our aim is to provide unbiased predictions (estimates) of system states, such as pressure head, water content, and fluxes, and to quantify the uncertainty associated with such predictions. We achieve this goal by treating hydraulic parameters as random fields and the corresponding flow equations as stochastic. Current stochastic analyses of transient flow in partially saturated soils require linearization of the constitutive relations, which may lead to significant inaccuracies when these relations are highly nonlinear. If relative conductivity and saturation vary exponentially with pressure and the corresponding scaling parameters are random variables, the transient Richards equation is mapped onto a linear equation by means of the Kirchhoff transformation. This allows us to develop deterministic differential equations for the first and second ensemble moments of pressure and saturation. We solve these equations analytically, for vertical infiltration, and compare them with direct Monte Carlo simulations.
Paris-Sud XI, UniversitÃ© de
GIGARCH Ã k facteurs. Abdou KÃ¢ DIONGUE a,b , Dominique GUEGAN a aENS Cachan IDHE-MORA, UMR CNRS 8533, 61 (1)-(2) a Ã©tÃ© introduit et Ã©tudiÃ© dans les articles Email addresses: abdou-ka.diongue@edf.fr (Abdou
AN OVERVIEW OF TOOL FOR RESPONSE ACTION COST ESTIMATING (TRACE)
FERRIES SR; KLINK KL; OSTAPKOWICZ B
2012-01-30T23:59:59.000Z
Tools and techniques that provide improved performance and reduced costs are important to government programs, particularly in current times. An opportunity for improvement was identified for preparation of cost estimates used to support the evaluation of response action alternatives. As a result, CH2M HILL Plateau Remediation Company has developed Tool for Response Action Cost Estimating (TRACE). TRACE is a multi-page Microsoft Excel{reg_sign} workbook developed to introduce efficiencies into the timely and consistent production of cost estimates for response action alternatives. This tool combines costs derived from extensive site-specific runs of commercially available remediation cost models with site-specific and estimator-researched and derived costs, providing the best estimating sources available. TRACE also provides for common quantity and key parameter links across multiple alternatives, maximizing ease of updating estimates and performing sensitivity analyses, and ensuring consistency.
Hickam, Christopher Dale (Glasford, IL)
2008-05-13T23:59:59.000Z
A motor/generator is provided for connecting between a transmission input shaft and an output shaft of a prime mover. The motor/generator may include a motor/generator housing, a stator mounted to the motor/generator housing, a rotor mounted at least partially within the motor/generator housing and rotatable about a rotor rotation axis, and a transmission-shaft coupler drivingly coupled to the rotor. The transmission-shaft coupler may include a clamp, which may include a base attached to the rotor and a plurality of adjustable jaws.
A Flexible Method of Estimating Luminosity Functions
Brandon C. Kelly; Xiaohui Fan; Marianne Vestergaard
2008-05-19T23:59:59.000Z
We describe a Bayesian approach to estimating luminosity functions. We derive the likelihood function and posterior probability distribution for the luminosity function, given the observed data, and we compare the Bayesian approach with maximum-likelihood by simulating sources from a Schechter function. For our simulations confidence intervals derived from bootstrapping the maximum-likelihood estimate can be too narrow, while confidence intervals derived from the Bayesian approach are valid. We develop our statistical approach for a flexible model where the luminosity function is modeled as a mixture of Gaussian functions. Statistical inference is performed using Markov chain Monte Carlo (MCMC) methods, and we describe a Metropolis-Hastings algorithm to perform the MCMC. The MCMC simulates random draws from the probability distribution of the luminosity function parameters, given the data, and we use a simulated data set to show how these random draws may be used to estimate the probability distribution for the luminosity function. In addition, we show how the MCMC output may be used to estimate the probability distribution of any quantities derived from the luminosity function, such as the peak in the space density of quasars. The Bayesian method we develop has the advantage that it is able to place accurate constraints on the luminosity function even beyond the survey detection limits, and that it provides a natural way of estimating the probability distribution of any quantities derived from the luminosity function, including those that rely on information beyond the survey detection limits.
Pavel Lougovski; Raphael Pooser
2014-04-23T23:59:59.000Z
The majority of Quantum Random Number Generators (QRNG) are designed as converters of a continuous quantum random variable into a discrete classical random bit value. For the resulting random bit sequence to be minimally biased, the conversion process demands an experimenter to fully characterize the underlying quantum system and implement parameter estimation routines. Here we show that conventional approaches to parameter estimation (such as e.g. {\\it Maximum Likelihood Estimation}) used on a finite QRNG data sample without caution may introduce binning bias and lead to overestimation of the randomness of the QRNG output. To bypass these complications, we develop an alternative conversion approach based on the Bayesian statistical inference method. We illustrate our approach using experimental data from a time-of-arrival QRNG and numerically simulated data from a vacuum homodyning QRNG. Side-by-side comparison with the conventional conversion technique shows that our method provides an automatic on-line bias control and naturally bounds the best achievable QRNG bit rate for a given measurement record.
Fedrigo, Melissa
2009-11-26T23:59:59.000Z
Field measured estimates of aboveground biomass (AGB) for 15 transects in Bwindi Impenetrable National Park (BINP), Uganda were used to generate a number of prediction models for estimating aboveground biomass (AGB) over the full extent of BINP. AGB...
Creating a Cognitive Agent in a Virtual World: Planning, Navigation, and Natural Language Generation
Hewlett, William
2013-01-01T23:59:59.000Z
Generation . . . . . . . . . . . . . . . . . . . . .Language Generation . . . . . . . . . . . . . . . . .Language Generation . . . . . . . . . . . . . . . . . . . .
MELE: Maximum Entropy Leuven Estimators
Paris, Quirino
2001-01-01T23:59:59.000Z
of the Generalized Maximum Entropy Estimator of the Generaland Douglas Miller, Maximum Entropy Econometrics, Wiley andCalifornia Davis MELE: Maximum Entropy Leuven Estimators by
Preliminary relative permeability estimates of methanehydrate-bearing sand
Seol, Yongkoo; Kneafsey, Timothy J.; Tomutsa, Liviu; Moridis,George J.
2006-05-08T23:59:59.000Z
The relative permeability to fluids in hydrate-bearingsediments is an important parameter for predicting natural gas productionfrom gas hydrate reservoirs. We estimated the relative permeabilityparameters (van Genuchten alpha and m) in a hydrate-bearing sand by meansof inverse modeling, which involved matching water saturation predictionswith observations from a controlled waterflood experiment. We used x-raycomputed tomography (CT) scanning to determine both the porosity and thehydrate and aqueous phase saturation distributions in the samples. X-rayCT images showed that hydrate and aqueous phase saturations arenon-uniform, and that water flow focuses in regions of lower hydratesaturation. The relative permeability parameters were estimated at twolocations in each sample. Differences between the estimated parametersets at the two locations were attributed to heterogeneity in the hydratesaturation. Better estimates of the relative permeability parametersrequire further refinement of the experimental design, and betterdescription of heterogeneity in the numerical inversions.
Quantitative estimation in Health Impact Assessment: Opportunities and challenges
Bhatia, Rajiv, E-mail: rajiv.bhatia@sfdph.or [San Francisco Department of Public Health, CA (United States); Seto, Edmund [University of California at Berkeley, CA (United States)
2011-04-15T23:59:59.000Z
Health Impact Assessment (HIA) considers multiple effects on health of policies, programs, plans and projects and thus requires the use of diverse analytic tools and sources of evidence. Quantitative estimation has desirable properties for the purpose of HIA but adequate tools for quantification exist currently for a limited number of health impacts and decision settings; furthermore, quantitative estimation generates thorny questions about the precision of estimates and the validity of methodological assumptions. In the United States, HIA has only recently emerged as an independent practice apart from integrated EIA, and this article aims to synthesize the experience with quantitative health effects estimation within that practice. We use examples identified through a scan of available identified instances of quantitative estimation in the U.S. practice experience to illustrate methods applied in different policy settings along with their strengths and limitations. We then discuss opportunity areas and practical considerations for the use of quantitative estimation in HIA.
Parameter 4 | Open Energy Information
AFDC Printable Version Share this resource Send a link to EERE: Alternative Fuels Data Center Home Page to someone by E-mail Share EERE: Alternative Fuels Data Center Home Page on Facebook Tweet about EERE: Alternative Fuels Data Center Home Page on Twitter Bookmark EERE: Alternative Fuels Data Center Home Page onYou are now leaving Energy.gov You are now leaving Energy.gov You are being directedAnnual SiteofEvaluatingGroup |JilinLuOpenNorthOlympiaAnalysis) JumpPalcan sPaquin Energy andParameter
PHYSICAL PARAMETERS OF STANDARD AND BLOWOUT JETS
Pucci, Stefano; Romoli, Marco [Department of Physics and Astronomy, University of Firenze, I-50121 Firenze (Italy); Poletto, Giannina [INAF-Arcetri Astrophysical Observatory, I-50125 Firenze (Italy); Sterling, Alphonse C., E-mail: stpucci@arcetri.astro.it [Space Science Office, NASA/MSFC, Huntsville, Al 35812 (United States)
2013-10-10T23:59:59.000Z
The X-ray Telescope on board the Hinode mission revealed the occurrence, in polar coronal holes, of much more numerous jets than previously indicated by the Yohkoh/Soft X-ray Telescope. These plasma ejections can be of two types, depending on whether they fit the standard reconnection scenario for coronal jets or if they include a blowout-like eruption. In this work, we analyze two jets, one standard and one blowout, that have been observed by the Hinode and STEREO experiments. We aim to infer differences in the physical parameters that correspond to the different morphologies of the events. To this end, we adopt spectroscopic techniques and determine the profiles of the plasma temperature, density, and outflow speed versus time and position along the jets. The blowout jet has a higher outflow speed, a marginally higher temperature, and is rooted in a stronger magnetic field region than the standard event. Our data provide evidence for recursively occurring reconnection episodes within both the standard and the blowout jet, pointing either to bursty reconnection or to reconnection occurring at different locations over the jet lifetimes. We make a crude estimate of the energy budget of the two jets and show how energy is partitioned among different forms. Also, we show that the magnetic energy that feeds the blowout jet is a factor of 10 higher than the magnetic energy that fuels the standard event.
Consistent Estimation for Aggregated GARCH
Komunjer, Ivana
2001-01-01T23:59:59.000Z
of the QMLEs of the ”weak” GARCH with TDGP parameters (¯ 0 ;of the QMLEs of the ”weak” GARCH with TDGP parameters (¯ 0 ;of the QMLEs of the ”weak” GARCH with TDGP parameters (¯ 0 ;
A better estimation of the Universal Gravitational Constant
Prasanna, Thankasala
1993-01-01T23:59:59.000Z
) can bc computed. If m ?m I (say), then eq(2) can be written ))12 ) r 'i+ Gmj I + ? ): = 0 fyl t 2 (3) Hence, the gravitational parameters Gm; of various large bodies in space can be deter- mined. More typically, rather than attempting to measure... r and 'r, some more easily mea- surable solution property of the differential equation (3) is used to estimate Gm, This has been the basic principle behind estimation of the gravitational parameters of celestial objects in space. Several...
SPACE TECHNOLOGY Actual Estimate
SPACE TECHNOLOGY TECH-1 Actual Estimate Budget Authority (in $ millions) FY 2011 FY 2012 FY 2013 FY.7 247.0 Exploration Technology Development 144.6 189.9 202.0 215.5 215.7 214.5 216.5 Notional SPACE TECHNOLOGY OVERVIEW .............................. TECH- 2 SBIR AND STTR
; - calculated separately for the most important radionuclides produced in nuclear weapons tests. Those would averages for all tests. 2. Provide a list of references regarding: (1) the history of nuclear weapons to the Population of the Continental U.S. from Nevada Weapons Tests and Estimates of Deposition Density
Status of three-neutrino oscillation parameters, circa 2013
F. Capozzi; G. L. Fogli; E. Lisi; A. Marrone; D. Montanino; A. Palazzo
2014-05-05T23:59:59.000Z
The standard three-neutrino (3nu) oscillation framework is being increasingly refined by results coming from different sets of experiments, using neutrinos from solar, atmospheric, accelerator and reactor sources. At present, each of the known oscillation parameters [the two squared mass gaps (delta m^2, Delta m^2) and the three mixing angles (theta_12}, theta_13, theta_23)] is dominantly determined by a single class of experiments. Conversely, the unknown parameters [the mass hierarchy, the theta_23 octant and the CP-violating phase delta] can be currently constrained only through a combined analysis of various (eventually all) classes of experiments. In the light of recent new results coming from reactor and accelerator experiments, and of their interplay with solar and atmospheric data, we update the estimated N-sigma ranges of the known 3nu parameters, and revisit the status of the unknown ones. Concerning the hierarchy, no significant difference emerges between normal and inverted mass ordering. A slight overall preference is found for theta_23 in the first octant and for nonzero CP violation with sin delta < 0; however, for both parameters, such preference exceeds 1 sigma only for normal hierarchy. We also discuss the correlations and stability of the oscillation parameters within different combinations of data sets.
IDC RP2 & 3 US Industry Standard Cost Estimate Summary.
Harris, James M.; Huelskamp, Robert M.
2015-01-01T23:59:59.000Z
Sandia National Laboratories has prepared a ROM cost estimate for budgetary planning for the IDC Reengineering Phase 2 & 3 effort, using a commercial software cost estimation tool calibrated to US industry performance parameters. This is not a cost estimate for Sandia to perform the project. This report provides the ROM cost estimate and describes the methodology, assumptions, and cost model details used to create the ROM cost estimate. ROM Cost Estimate Disclaimer Contained herein is a Rough Order of Magnitude (ROM) cost estimate that has been provided to enable initial planning for this proposed project. This ROM cost estimate is submitted to facilitate informal discussions in relation to this project and is NOT intended to commit Sandia National Laboratories (Sandia) or its resources. Furthermore, as a Federally Funded Research and Development Center (FFRDC), Sandia must be compliant with the Anti-Deficiency Act and operate on a full-cost recovery basis. Therefore, while Sandia, in conjunction with the Sponsor, will use best judgment to execute work and to address the highest risks and most important issues in order to effectively manage within cost constraints, this ROM estimate and any subsequent approved cost estimates are on a 'full-cost recovery' basis. Thus, work can neither commence nor continue unless adequate funding has been accepted and certified by DOE.
Barnette, Daniel W. (Veguita, NM)
2002-01-01T23:59:59.000Z
The present invention provides a method of grid generation that uses the geometry of the problem space and the governing relations to generate a grid. The method can generate a grid with minimized discretization errors, and with minimal user interaction. The method of the present invention comprises assigning grid cell locations so that, when the governing relations are discretized using the grid, at least some of the discretization errors are substantially zero. Conventional grid generation is driven by the problem space geometry; grid generation according to the present invention is driven by problem space geometry and by governing relations. The present invention accordingly can provide two significant benefits: more efficient and accurate modeling since discretization errors are minimized, and reduced cost grid generation since less human interaction is required.
Steam generator support system
Moldenhauer, J.E.
1987-08-25T23:59:59.000Z
A support system for connection to an outer surface of a J-shaped steam generator for use with a nuclear reactor or other liquid metal cooled power source is disclosed. The J-shaped steam generator is mounted with the bent portion at the bottom. An arrangement of elongated rod members provides both horizontal and vertical support for the steam generator. The rod members are interconnected to the steam generator assembly and a support structure in a manner which provides for thermal distortion of the steam generator without the transfer of bending moments to the support structure and in a like manner substantially minimizes forces being transferred between the support structure and the steam generator as a result of seismic disturbances. 4 figs.
Steam generator support system
Moldenhauer, James E. (Simi Valley, CA)
1987-01-01T23:59:59.000Z
A support system for connection to an outer surface of a J-shaped steam generator for use with a nuclear reactor or other liquid metal cooled power source. The J-shaped steam generator is mounted with the bent portion at the bottom. An arrangement of elongated rod members provides both horizontal and vertical support for the steam generator. The rod members are interconnected to the steam generator assembly and a support structure in a manner which provides for thermal distortion of the steam generator without the transfer of bending moments to the support structure and in a like manner substantially minimizes forces being transferred between the support structure and the steam generator as a result of seismic disturbances.
Use of Cost Estimating Relationships
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-03-28T23:59:59.000Z
Cost Estimating Relationships (CERs) are an important tool in an estimator's kit, and in many cases, they are the only tool. Thus, it is important to understand their limitations and characteristics. This chapter discusses considerations of which the estimator must be aware so the Cost Estimating Relationships can be properly used.
Broader source: Energy.gov [DOE]
The amount of electricity generated by the wind industry started to grow back around 1999, and since 2007 has been increasing at a rapid pace.
Energy Science and Technology Software Center (OSTI)
003027MLTPL00 Network Traffic Generator for Low-rate Small Network Equipment Software http://eln.lbl.gov/sne_traffic_gen.html
Hydrogen Generation for Refineries
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Single Cycle Shown for ATB Steam/Carbon 3 * ATB reforming * Steam/Carbon 3 * Syngas generated during reforming * 70% H2 * 20% CO * Syngas composition agrees with...
Next-generation transcriptome assembly
Martin, Jeffrey A.
2012-01-01T23:59:59.000Z
…technologies - the next generation. Nat Rev Genet 11, 31- … algorithms for next-generation sequencing data. Genomics … assembly from next-generation sequencing data. Genome Res
Reliable estimation of shock position in shock-capturing compressible hydrodynamics codes
Nelson, Eric M [Los Alamos National Laboratory
2008-01-01T23:59:59.000Z
The displacement method for estimating shock position in a shock-capturing compressible hydrodynamics code is introduced. Common estimates use simulation data within the captured shock, but the displacement method uses data behind the shock, making the estimate consistent with and as reliable as estimates of material parameters obtained from averages or fits behind the shock. The displacement method is described in the context of a steady shock in a one-dimensional Lagrangian hydrodynamics code, and demonstrated on a piston problem and a spherical blast wave. The displacement method's estimates of shock position are much better than common estimates in such applications.
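The idea of leaning on well-resolved post-shock data instead of the smeared captured jump can be seen on the steady piston problem: for a strong shock in an ideal gas, the Rankine-Hugoniot relations give shock speed s = (γ+1)/2 · u_piston, so the shock position follows from the piston (material) displacement. This is only a sketch of that general idea under strong-shock assumptions, not the paper's algorithm:

```python
# Steady piston-driven strong shock in an ideal gas: Rankine-Hugoniot gives
# shock speed s = (gamma + 1)/2 * u_piston, so the shock position follows
# from the (well-resolved) piston displacement rather than from the smeared
# captured jump.  Illustration of the general idea only, not the paper's
# displacement method.
def shock_position(piston_displacement, gamma=5.0 / 3.0):
    return 0.5 * (gamma + 1.0) * piston_displacement

x_shock = shock_position(1.0)   # piston has moved one length unit
```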
Dynamics of Noncommutative Solitons I: Spectral Theory and Dispersive Estimates
August J. Krueger; Avy Soffer
2014-11-16T23:59:59.000Z
We consider the Schrödinger equation with a Hamiltonian given by a second-order difference operator with nonconstant growing coefficients, on the one-dimensional half-lattice. This operator first appeared naturally in the construction and dynamics of noncommutative solitons in the context of noncommutative field theory. We prove pointwise-in-time decay estimates, with the optimal decay rate $t^{-1}\log^{-2}t$ generically. We use a novel technique involving generating functions of orthogonal polynomials to achieve this estimate.
Synchrophasor Measurement-Based Wind Plant Inertia Estimation: Preprint
Zhang, Y.; Bank, J.; Wan, Y. H.; Muljadi, E.; Corbus, D.
2013-05-01T23:59:59.000Z
The total inertia stored in all rotating masses that are connected to power systems, such as synchronous generators and induction motors, is an essential force that keeps the system stable after disturbances. To ensure bulk power system stability, there is a need to estimate the equivalent inertia available from a renewable generation plant. An equivalent inertia constant analogous to that of conventional rotating machines can be used to provide a readily understandable metric. This paper explores a method that utilizes synchrophasor measurements to estimate the equivalent inertia that a wind plant provides to the system.
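Such estimates typically rest on the swing equation, 2H·(df/dt)/f₀ = ΔP in per unit on the plant base. A minimal sketch with assumed numbers (a 100 MVA plant, a 5 MW imbalance, and a measured ROCOF of -0.15 Hz/s; none of these values come from the paper):

```python
# Assumed illustrative numbers: a 100 MVA wind plant sees a 5 MW imbalance
# and the PMU measures an initial ROCOF of -0.15 Hz/s at f0 = 60 Hz.
f0 = 60.0          # nominal frequency, Hz
s_base = 100.0     # plant rating, MVA
delta_p = -5.0     # power imbalance seen by the plant, MW
rocof = -0.15      # initial rate of change of frequency, Hz/s

def inertia_constant(delta_p_mw, rocof_hz_s, f0_hz, s_mva):
    """Equivalent inertia constant H (seconds) from the swing equation
    2H/f0 * df/dt = dP (per unit on the plant base)."""
    dp_pu = delta_p_mw / s_mva
    return dp_pu * f0_hz / (2.0 * rocof_hz_s)

H = inertia_constant(delta_p, rocof, f0, s_base)
```

Here H comes out to 10 s; in practice the ROCOF must be taken immediately after the disturbance, before governor and converter controls respond and contaminate the inertial response.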
A study of IMRT planning parameters on planning efficiency, delivery efficiency, and plan quality
Mittauer, Kathryn [Department of Radiation Oncology, College of Medicine, University of Florida, Gainesville, Florida 32603 and J. Crayton Pruitt Family Department of Biomedical Engineering, University of Florida, Gainesville, Florida 32611 (United States); Lu Bo; Yan Guanghua; Kahler, Darren; Amdur, Robert; Liu Chihray [Department of Radiation Oncology, College of Medicine, University of Florida, Gainesville, Florida 32603 (United States); Gopal, Arun [Department of Radiation Oncology, New York-Presbyterian Hospital, Columbia University, New York, New York 10032 (United States)
2013-06-15T23:59:59.000Z
Purpose: To improve planning and delivery efficiency of head and neck IMRT without compromising planning quality through the evaluation of inverse planning parameters. Methods: Eleven head and neck patients with pre-existing IMRT treatment plans were selected for this retrospective study. The Pinnacle treatment planning system (TPS) was used to compute new treatment plans for each patient by varying the individual or the combined parameters of dose/fluence grid resolution, minimum MU per segment, and minimum segment area. Forty-five plans per patient were generated with the following variations: 4 dose/fluence grid resolution plans, 12 minimum segment area plans, 9 minimum MU plans, and 20 combined minimum segment area/minimum MU plans. Each plan was evaluated and compared to others based on dose volume histograms (DVHs) (i.e., plan quality), planning time, and delivery time. To evaluate delivery efficiency, a model was developed that estimated the delivery time of a treatment plan, and validated through measurements on an Elekta Synergy linear accelerator. Results: The uncertainty (i.e., variation) of the dose-volume index due to dose calculation grid variation was as high as 8.2% (5.5 Gy in absolute dose) for planning target volumes (PTVs) and 13.3% (2.1 Gy in absolute dose) for planning organ-at-risk volumes (PRVs). Comparison results of dose distributions indicated that smaller volumes were more susceptible to uncertainties. The grid resolution of a 4 mm dose grid with a 2 mm fluence grid was recommended, since it can reduce the final dose calculation time by 63% compared to the accepted standard (2 mm dose grid with a 2 mm fluence grid resolution) while maintaining a similar level of dose-volume index variation. Threshold values that maintained adequate plan quality (DVH results of the PTVs and PRVs remained satisfied for their dose objectives) were 5 cm{sup 2} for minimum segment area and 5 MU for minimum MU.
As the minimum MU parameter was increased, the number of segments and delivery time were decreased. Increasing the minimum segment area parameter decreased the plan MU, but had less of an effect on the number of segments and delivery time. Our delivery time model predicted delivery time to within 1.8%. Conclusions: Increasing the dose grid while maintaining a small fluence grid allows for improved planning efficiency without compromising plan quality. Delivery efficiency can be improved by increasing the minimum MU, but not the minimum segment area. However, increasing the respective minimum MU and/or the minimum segment area to any value greater than 5 MU and 5 cm{sup 2} is not recommended because it degrades plan quality.
Ye, Sheng; Li, Hongyi; Huang, Maoyi; Ali, Melkamu; Leng, Guoyong; Leung, Lai-Yung R.; Wang, Shaowen; Sivapalan, Murugesu
2014-07-21T23:59:59.000Z
Subsurface stormflow is an important component of the rainfall–runoff response, especially in steep terrain. Its contribution to total runoff is, however, poorly represented in the current generation of land surface models. The lack of physical basis of these common parameterizations precludes a priori estimation of the stormflow (i.e. without calibration), which is a major drawback for prediction in ungauged basins, or for use in global land surface models. This paper is aimed at deriving regionalized parameterizations of the storage–discharge relationship relating to subsurface stormflow from a top–down empirical data analysis of streamflow recession curves extracted from 50 eastern United States catchments. Detailed regression analyses were performed between parameters of the empirical storage–discharge relationships and the controlling climate, soil and topographic characteristics. The regression analyses performed on empirical recession curves at catchment scale indicated that the coefficient of the power-law form storage–discharge relationship is closely related to the catchment hydrologic characteristics, which is consistent with the hydraulic theory derived mainly at the hillslope scale. As for the exponent, besides the role of field scale soil hydraulic properties as suggested by hydraulic theory, it is found to be more strongly affected by climate (aridity) at the catchment scale. At a fundamental level these results point to the need for more detailed exploration of the co-dependence of soil, vegetation and topography with climate.
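A power-law storage-discharge relationship implies a recession of the form -dQ/dt = aQ^b, so a and b can be recovered from recession limbs by a log-log regression of -dQ/dt against Q. A sketch on synthetic data (a = 0.01, b = 2 assumed; real use would first extract recession segments from gauge records, as the paper does):

```python
import math

# Synthetic recession limb consistent with -dQ/dt = a*Q**b for a = 0.01, b = 2
# (values assumed; real use would extract recession segments from gauge data).
a_true, q0 = 0.01, 10.0
q = [q0 / (1.0 + a_true * q0 * t) for t in range(30)]

# Pair each -dQ/dt estimate with the mid-interval discharge, then fit
# log(-dQ/dt) = log(a) + b*log(Q) by least squares.
pts = []
for i in range(len(q) - 1):
    dqdt = q[i + 1] - q[i]                 # daily steps, so dt = 1
    qmid = 0.5 * (q[i] + q[i + 1])
    pts.append((math.log(qmid), math.log(-dqdt)))

n = len(pts)
mx = sum(p[0] for p in pts) / n
my = sum(p[1] for p in pts) / n
b_hat = (sum((px - mx) * (py - my) for px, py in pts)
         / sum((px - mx) ** 2 for px, _ in pts))
a_hat = math.exp(my - b_hat * mx)
```

The regionalization step in the paper then relates fitted (a, b) pairs across catchments to climate, soil, and topographic descriptors.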
Monotonic Local Decay Estimates
Avy Soffer
2011-10-29T23:59:59.000Z
For the Hamiltonian operator $H = -\Delta + V(x)$ of the Schrödinger equation with a repulsive potential, the problem of local decay is considered. It is analyzed by a direct method based on a new, $L^2$-bounded propagation observable. The resulting decay estimate is, in certain cases, monotonic in time, with no "quantum corrections". This method is then applied to some examples in one and higher dimensions. In particular, the case of the wave equation on a Schwarzschild manifold is redone: local decay estimates stronger than the known ones are proved (minimal loss of angular derivatives and lower order of radial derivatives of initial data). The method developed here can be an alternative in some cases to Morawetz-type estimates, with $L^2$-multipliers replacing the first-order operators. It provides an alternative to Mourre's method by including thresholds and high energies.
Estimating radiogenic cancer risks
NONE
1994-06-01T23:59:59.000Z
This document presents a revised methodology for EPA's estimation of cancer risks due to low-LET radiation exposures in light of information that has become available since the publication of BEIR III, especially new information on the Japanese atomic bomb survivors. For most cancer sites, the risk model is one in which the age-specific relative risk coefficients are obtained by taking the geometric mean of coefficients derived from the atomic bomb survivor data employing two different methods for transporting risks from Japan to the U.S. (multiplicative and NIH projection methods). Using 1980 U.S. vital statistics, the risk models are applied to estimate organ-specific risks, per unit dose, for a stationary population.
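The combination step itself is simple: for each age group, the adopted coefficient is the geometric mean of the multiplicative and NIH-projected values. The numbers below are invented placeholders, not EPA coefficients:

```python
import math

# Invented age-specific relative-risk coefficients for one cancer site,
# transported from the Japanese cohort to the U.S. by the multiplicative
# and NIH projection methods (placeholders, not EPA values).
multiplicative = [0.45, 0.30, 0.18]
nih = [0.20, 0.12, 0.08]

# Geometric-mean combination, one coefficient per age group.
combined = [math.sqrt(m * p) for m, p in zip(multiplicative, nih)]
```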
Cai, H.; Wang, M.; Elgowainy, A.; Han, J. (Energy Systems)
2012-07-06T23:59:59.000Z
Greenhouse gas (CO{sub 2}, CH{sub 4} and N{sub 2}O, hereinafter GHG) and criteria air pollutant (CO, NO{sub x}, VOC, PM{sub 10}, PM{sub 2.5} and SO{sub x}, hereinafter CAP) emission factors for various types of power plants burning various fuels with different technologies are important upstream parameters for estimating life-cycle emissions associated with alternative vehicle/fuel systems in the transportation sector, especially electric vehicles. The emission factors are typically expressed in grams of GHG or CAP per kWh of electricity generated by a specific power generation technology. This document describes our approach for updating and expanding GHG and CAP emission factors in the GREET (Greenhouse Gases, Regulated Emissions, and Energy Use in Transportation) model developed at Argonne National Laboratory (see Wang 1999 and the GREET website at http://greet.es.anl.gov/main) for various power generation technologies. These GHG and CAP emissions are used to estimate the impact of electricity use by stationary and transportation applications on their fuel-cycle emissions. The electricity generation mixes and the fuel shares attributable to various combustion technologies at the national, regional and state levels are also updated in this document. The energy conversion efficiencies of electric generating units (EGUs) by fuel type and combustion technology are calculated on the basis of the lower heating values of each fuel, to be consistent with the basis used in GREET for transportation fuels. 
On the basis of the updated GHG and CAP emission factors and energy efficiencies of EGUs, the probability distribution functions (PDFs), which are functions that describe the relative likelihood for the emission factors and energy efficiencies as random variables to take on a given value by the integral of their own probability distributions, are updated using best-fit statistical curves to characterize the uncertainties associated with GHG and CAP emissions in life-cycle modeling with GREET.
Contracting for wind generation
Newbery, David
The UK Government proposes offering long-term Feed-in-Tariffs (FiTs) to low-carbon generation to reduce risk and encourage new entrants. Their preference is for a Contract-for-Difference (CfD) or a premium FiT (pFiT) for all generation regardless...
Laser beam generating apparatus
Warner, B.E.; Duncan, D.B.
1994-02-15T23:59:59.000Z
Laser beam generating apparatus including a septum segment disposed longitudinally within the tubular structure of the apparatus is described. The septum provides for radiatively dissipating heat buildup within the tubular structure and for generating relatively uniform laser beam pulses so as to minimize or eliminate radial pulse delays (the chevron effect). 7 figures.
Laser beam generating apparatus
Warner, B.E.; Duncan, D.B.
1993-12-28T23:59:59.000Z
Laser beam generating apparatus including a septum segment disposed longitudinally within the tubular structure of the apparatus. The septum provides for radiatively dissipating heat buildup within the tubular structure and for generating relatively uniform laser beam pulses so as to minimize or eliminate radial pulse delays (the chevron effect). 11 figures.
Moto-Oka, T.; Kitsuregawa, M.
1985-01-01T23:59:59.000Z
The leader of Japan's Fifth Generation computer project, known as the 'Apollo' project, and a young computer scientist elucidate in this book the process of how the idea came about, international reactions, the basic technology, prospects for realization, and the abilities of the Fifth Generation computer. Topics considered included forecasting, research programs, planning, and technology impacts.
Kampa, Aleksander Edward
1988-01-01T23:59:59.000Z
Extremal Index Estimation (December 1988). Aleksander Edward Kampa, Ecole Centrale de Paris, France. Chairman of Advisory Committee: Dr. Tailen Hsing. If (X_n) is a strictly stationary sequence satisfying certain dependence restrictions (e.g. D or Δ), then the relationship between the extremal properties of (X_n) and its associated independent sequence (X̂_n) can, under certain conditions, be summed up by a single constant θ ∈ [0,1], called the extremal index. Results of extreme...
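A standard way to estimate the extremal index is from cluster counts of threshold exceedances. The sketch below simulates a max-autoregressive process whose extremal index is exactly θ (a textbook example, not taken from this thesis) and applies a simple runs estimator:

```python
import math
import random

random.seed(1)

# Max-autoregressive toy process X_n = max((1-theta)*X_{n-1}, theta*Z_n) with
# Z_n i.i.d. unit Frechet; its extremal index is exactly theta (here 0.5).
theta_true = 0.5
x = [1.0]
for _ in range(20000):
    z = 1.0 / -math.log(random.random())   # unit-Frechet draw
    x.append(max((1 - theta_true) * x[-1], theta_true * z))

def runs_estimator(series, u, r=1):
    """Runs estimator: a cluster ends when r consecutive values stay below u;
    theta is estimated by (number of cluster ends) / (number of exceedances)."""
    exceed = sum(1 for v in series if v > u)
    ends = sum(1 for i, v in enumerate(series)
               if v > u and all(w <= u for w in series[i + 1:i + 1 + r]))
    return ends / exceed

u = sorted(x)[int(0.98 * len(x))]          # 98th-percentile threshold
theta_hat = runs_estimator(x, u)
```

Run length 1 is adequate for this toy process, whose exceedance clusters are consecutive; general stationary sequences need a larger run length and checks of sensitivity to the threshold.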
Blumenthal, Jurg M.; Thompson, Wayne
2009-06-12T23:59:59.000Z
can collect samples from a corn field and use this data to calculate the yield estimate. An interactive grain yield calculator is provided in the Appendix of the pdf version of this publication. 1. Plan and prepare for sample and data collection. 2. Collect field samples and record data. 3. Analyze the data using the interactive grain yield calculator in the Appendix. … Predetermine sample locations …
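The interactive calculator itself is in the publication's appendix; as a stand-in, a widely used hand formula estimates bushels per acre from ear counts in 1/1000 acre and kernel counts, dividing by a factor of about 90 (thousand kernels per bushel). The function name and sample counts below are invented for illustration:

```python
# 'Divide by 90' hand rule: yield (bu/acre) ~ ears per 1/1000 acre times
# kernels per ear, divided by ~90 (thousand kernels per bushel).  This is a
# common approximation, not necessarily the publication's own calculator.
def corn_yield_bu_per_acre(ears_per_thousandth_acre, kernel_rows,
                           kernels_per_row, kernels_factor=90.0):
    kernels_per_ear = kernel_rows * kernels_per_row
    return ears_per_thousandth_acre * kernels_per_ear / kernels_factor

estimate = corn_yield_bu_per_acre(30, 16, 30)   # 30 ears, 16 rows, 30/row
```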
Aerosol Best Estimate Value-Added Product
Flynn, C; Turner, D; Koontz, A; Chand, D; Sivaraman, C
2012-07-19T23:59:59.000Z
The objective of the Aerosol Best Estimate (AEROSOLBE) value-added product (VAP) is to provide vertical profiles of aerosol extinction, single scatter albedo, asymmetry parameter, and Angstroem exponents for the atmospheric column above the Central Facility at the ARM Southern Great Plains (SGP) site. We expect that AEROSOLBE will provide nearly continuous estimates of aerosol optical properties under a range of conditions (clear, broken clouds, overcast clouds, etc.). The primary requirement of this VAP was to provide an aerosol data set as continuous as possible in both time and height for the Broadband Heating Rate Profile (BBHRP) VAP in order to provide a structure for the comprehensive assessment of our ability to model atmospheric radiative transfer for all conditions. Even though BBHRP has been completed, AEROSOLBE results are very valuable for environmental, atmospheric, and climate research.
Last, George V.; Rockhold, Mark L.; Murray, Christopher J.; Cantrell, Kirk J.
2009-07-24T23:59:59.000Z
In fiscal years 2007 and 2008, the Hanford Site Groundwater Remediation Project, formerly managed by Fluor Hanford, Inc., requested the Pacific Northwest National Laboratory (PNNL) to support the development and initial implementation of a strategy to establish and maintain, under configuration control, a set of Hanford-specific flow and transport parameter estimates that can be used to support Hanford Site assessments. This document provides a summary of those efforts, culminating in a set of best-estimate Hanford-specific parameters for use in place of the default parameters used in the RESRAD code. The RESRAD code is a computer model designed to estimate radiation doses and risks from RESidual RADioactive materials. The long-term goals of the PNNL work are to improve the consistency, defensibility, and traceability of parameters and their ranges of variability, and to ensure a sound basis for assigning parameters for flow and transport models in the code. The strategy was to start by identifying the existing parameter data sets most recently used in site assessments, documenting these parameter data sets and the raw data sets on which they were based, and using the existing parameter sets to define best-estimate parameters for use in the RESRAD code. The Hanford-specific assessment parameters compiled for use in RESRAD are traceable back to the professional judgment of the authors of published documents. Within the references, parameters are often not directly traceable back to the raw data and analytical approaches used to derive the assessment parameters. Future activities will work to continuously improve the defensibility and traceability of the parameter data sets and to address limitations and technical issues associated with the existing assessment parameter data sets.
Entanglement Generation by Electric Field Background
Zahra Ebadi; Behrouz Mirza
2014-10-12T23:59:59.000Z
The quantum vacuum is unstable under the influence of an external electric field and decays into pairs of charged particles, a process which is known as the Schwinger pair production. We propose and demonstrate that this electric field can generate entanglement. Using the Schwinger pair production for constant and pulsed electric fields, we study entanglement for scalar particles with zero spins and Dirac fermions. One can observe the variation of the entanglement produced for bosonic and fermionic modes with respect to different parameters.
Joint estimation of phase and phase diffusion for quantum metrology
Mihai D. Vidrighin; Gaia Donati; Marco G. Genoni; Xian-Min Jin; W. Steven Kolthammer; M. S. Kim; Animesh Datta; Marco Barbieri; Ian A. Walmsley
2014-10-20T23:59:59.000Z
Phase estimation, at the heart of many quantum metrology and communication schemes, can be strongly affected by noise, whose amplitude may not be known, or might be subject to drift. Here, we investigate the joint estimation of a phase shift and the amplitude of phase diffusion, at the quantum limit. For several relevant instances, this multiparameter estimation problem can be effectively reshaped as a two-dimensional Hilbert space model, encompassing the description of an interferometer phase probed with relevant quantum states -- split single-photons, coherent states or N00N states. For these cases, we obtain a trade-off bound on the statistical variances for the joint estimation of phase and phase diffusion, as well as optimum measurement schemes. We use this bound to quantify the effectiveness of an actual experimental setup for joint parameter estimation for polarimetry. We conclude by discussing the form of the trade-off relations for more general states and measurements.
Comer, K.; Gaddy, C.D.; Seaver, D.A.; Stillwell, W.G.
1985-01-01T23:59:59.000Z
The US Nuclear Regulatory Commission and Sandia National Laboratories sponsored a project to evaluate psychological scaling techniques for use in generating estimates of human error probabilities. The project evaluated two techniques: direct numerical estimation and paired comparisons. Expert estimates were found to be consistent across and within judges. Convergent validity was good, in comparison to estimates in a handbook of human reliability. Predictive validity could not be established because of the lack of actual relative frequencies of error (which will be a difficulty inherent in validation of any procedure used to estimate HEPs). Application of expert estimates in probabilistic risk assessment and in human factors is discussed.
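For direct numerical estimation, judgments of human error probabilities are conventionally treated as log-normally distributed, so expert estimates are aggregated with a geometric mean. A sketch with invented judgments (the project's actual aggregation procedure may differ):

```python
import math

# Hypothetical direct numerical estimates of one human error probability
# from four judges; HEP judgments are commonly modeled as log-normal, so
# the geometric mean is the conventional aggregate.
judgments = [3e-3, 1e-3, 5e-3, 2e-3]

def geometric_mean(ps):
    return math.exp(sum(math.log(p) for p in ps) / len(ps))

hep = geometric_mean(judgments)
```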
Use of Slip Ring Induction Generator for Wind Power Generation
K Y Patil; D S Chavan
Wind energy is now firmly established as a mature technology for electricity generation. Different types of generators can be used for wind energy generation, among which the slip-ring induction generator proves to be more advantageous. To analyse the application of the slip-ring induction generator to wind power generation, an experimental model was developed and its results were studied. As power generation from natural sources is needed today and variable-speed wind energy is abundant in India, it is necessary to study more beneficial options for wind energy generating techniques. From this need a model was developed using a slip-ring induction generator, which is a type of asynchronous generator.
Estimating sandstone permeability using network models with pore size distributions
Mathews, Alan Ronald
1991-01-01T23:59:59.000Z
the effects of each parameter on the response of the network lattice. A FORTRAN source code was written to generate and analyze the response of the network model (see Appendix G for source code description and flow chart). The controlling parameters used... in appearance to empirical data. A network model is developed to simulate the pore geometry of a clean, well-sorted sandstone. Pores were modeled as straight capillaries connected in various lattice configurations. Complex lattice configurations produce more...
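As a much simpler baseline than the network lattice of the thesis, a bundle of parallel capillaries with Hagen-Poiseuille flow gives permeability k = φr²/8. The sketch below uses assumed values (20% porosity, 10 µm pore radius), purely to anchor the orders of magnitude a network model must reproduce:

```python
# Capillary-bundle baseline: parallel tubes of radius r in a matrix with
# porosity phi carry Hagen-Poiseuille flow, giving k = phi * r**2 / 8.
# Values below are assumed for illustration.
def bundle_permeability_m2(porosity, radius_m):
    return porosity * radius_m ** 2 / 8.0

k = bundle_permeability_m2(0.20, 10e-6)   # 20% porosity, 10-micron radius
k_darcy = k / 9.869e-13                   # 1 darcy = 9.869e-13 m^2
```

Real sandstones fall well below this bundle estimate because tortuosity and pore-throat constrictions, which the network lattice captures, throttle the flow.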
Leung, Ka-Ngo; Lou, Tak Pui
2005-03-22T23:59:59.000Z
A compact neutron generator has at its outer circumference a toroidal shaped plasma chamber in which a tritium (or other) plasma is generated. A RF antenna is wrapped around the plasma chamber. A plurality of tritium ion beamlets are extracted through spaced extraction apertures of a plasma electrode on the inner surface of the toroidal plasma chamber and directed inwardly toward the center of neutron generator. The beamlets pass through spaced acceleration and focusing electrodes to a neutron generating target at the center of neutron generator. The target is typically made of titanium tubing. Water is flowed through the tubing for cooling. The beam can be pulsed rapidly to achieve ultrashort neutron bursts. The target may be moved rapidly up and down so that the average power deposited on the surface of the target may be kept at a reasonable level. The neutron generator can produce fast neutrons from a T-T reaction which can be used for luggage and cargo interrogation applications. A luggage or cargo inspection system has a pulsed T-T neutron generator or source at the center, surrounded by associated gamma detectors and other components for identifying explosives or other contraband.
Hydrogen Generation From Electrolysis
Steven Cohen; Stephen Porter; Oscar Chow; David Henderson
2009-03-06T23:59:59.000Z
Small-scale (100-500 kg H2/day) electrolysis is an important step in increasing the use of hydrogen as fuel. Until there is a large population of hydrogen fueled vehicles, the smaller production systems will be the most cost-effective. Performing conceptual designs and analyses in this size range enables identification of issues and/or opportunities for improvement in approach on the path to 1500 kg H2/day and larger systems. The objectives of this program are to establish the possible pathways to cost effective larger Proton Exchange Membrane (PEM) water electrolysis systems and to identify areas where future research and development efforts have the opportunity for the greatest impact in terms of capital cost reduction and efficiency improvements. System design and analysis was conducted to determine the overall electrolysis system component architecture and develop a life cycle cost estimate. A design trade study identified subsystem components and configurations based on the trade-offs between system efficiency, cost and lifetime. Laboratory testing of components was conducted to optimize performance and decrease cost, and this data was used as input to modeling of system performance and cost. PEM electrolysis has historically been burdened by high capital costs and lower efficiency than required for large-scale hydrogen production. This was known going into the program and solutions to these issues were the focus of the work. The program provided insights to significant cost reduction and efficiency improvement opportunities for PEM electrolysis. The work performed revealed many improvement ideas that when utilized together can make significant progress towards the technical and cost targets of the DOE program. The cell stack capital cost requires reduction to approximately 25% of today’s technology. The pathway to achieve this is through part count reduction, use of thinner membranes, and catalyst loading reduction. 
Large-scale power supplies are available today that perform in a range of efficiencies (>95%) that is suitable for the overall operational goals. The balance of plant scales well both operationally and in terms of cost, becoming a smaller portion of the overall cost equation as the systems get larger. Capital cost reduction of the cell stack power supplies is achievable by modifying the system configuration to have the cell stacks in electrical series, driving up the DC bus voltage and thereby allowing the use of large-scale DC power supply technologies. The single power supply approach reduces cost. Elements of the cell stack cost reduction and efficiency improvement work performed in the early stage of the program are being continued in subsequent DOE sponsored programs and through internal investment by Proton. The results of the trade study of the 100 kg H2/day system have established a conceptual platform for design and development of a next generation electrolyzer for Proton. The advancements started by this program have the possibility of being realized in systems for the developing fueling markets in the 2010 period.
Masuda, H.; Claridge, D.
2012-01-01T23:59:59.000Z
…cooling and heating and weather data using multiple linear regression models based on the simplified steady-state energy balance for a whole building. Two approaches using different response variables: the energy balance load (EBL) and the building thermal...
Menon, Ravishankar
2013-01-01T23:59:59.000Z
Ph.D. in Electrical Engineering (Applied Ocean Sciences), Uni…
On Parameter and State Estimation for Linear Differential-Algebraic Equations
Gustafsson, Fredrik
These models arise as the natural product of object-oriented modeling languages, such as Modelica. However, … (September 2005) … representations that are close to those of object-oriented modeling tools, like Modelica.
On Parameter and State Estimation for Linear Differential-Algebraic Equations
Schön, Thomas
These models arise as the natural product of object-oriented modeling languages, such as Modelica. However, … (September 2006) … representations that are close to those of object-oriented modeling tools, like Modelica.
Blandin, Sebastien
2012-01-01T23:59:59.000Z
…in the case of non-linear regression since there is no … can be extended to non-linear regression methods through the …
Reservoir parameters estimation from well log and core data: a case study from the North Sea
Edinburgh, University of
is based on matching core and log data, and the linear and non-linear regressions are then used to build… Second, linear and non-linear regression are employed to derive porosity, shale volumes, clay contents
Crop yield estimation model for Iowa using remote sensing and surface parameters
Singh, Ramesh P.
and prediction using a piecewise linear regression method with breakpoint. The crop production environment consists of inherent sources of heterogeneity and their non-linear behavior. A non-linear quasi-Newton multi…
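Piecewise linear regression with one breakpoint can be fitted by grid search: for each candidate breakpoint, run ordinary least squares on the basis [1, x, max(x - bp, 0)] and keep the breakpoint with the smallest residual. The data below are synthetic with a slope change at x = 5; this is a sketch of the general technique, not the paper's crop-yield model:

```python
# One-breakpoint piecewise linear fit by grid search with OLS on the
# basis [1, x, max(x - bp, 0)].  Synthetic data with a kink at x = 5.
def _solve3(A, b):
    # Gaussian elimination with partial pivoting for a 3x3 system.
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for col in range(3):
        piv = max(range(col, 3), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, 3):
            f = M[r][col] / M[col][col]
            for c in range(col, 4):
                M[r][c] -= f * M[col][c]
    x = [0.0, 0.0, 0.0]
    for r in (2, 1, 0):
        x[r] = (M[r][3] - sum(M[r][c] * x[c] for c in range(r + 1, 3))) / M[r][r]
    return x

def fit_breakpoint(xs, ys, candidates):
    best = None
    for bp in candidates:
        X = [[1.0, x, max(x - bp, 0.0)] for x in xs]
        A = [[sum(r[i] * r[j] for r in X) for j in range(3)] for i in range(3)]
        rhs = [sum(r[i] * y for r, y in zip(X, ys)) for i in range(3)]
        coef = _solve3(A, rhs)
        preds = [sum(c * f for c, f in zip(coef, row)) for row in X]
        sse = sum((p - y) ** 2 for p, y in zip(preds, ys))
        if best is None or sse < best[0]:
            best = (sse, bp, coef)
    return best[1], best[2]

xs = [0.5 * i for i in range(21)]                              # 0.0 .. 10.0
ys = [1.0 + 0.5 * x + 1.5 * max(x - 5.0, 0.0) for x in xs]     # kink at 5
bp_hat, coef = fit_breakpoint(xs, ys, [float(c) for c in range(1, 10)])
```

With noisy data the residual curve flattens near the true breakpoint, so confidence in the fitted breakpoint should be assessed, not just its point value.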
Modeling and parameter estimation for point-actuated continuous-facesheet
Stress focusing for controlled fracture in microelectromechanical systems. Matthew A. Meitl, Xue … in microelectromechanical systems (MEMSs) based on the control of corner sharpness. Studies of model MEMS structures
IEEE TRANSACTIONS ON KNOWLEDGE AND DATA ENGINEERING 1 NORTHSTAR: A Parameter Estimation
Shekhar, Shashi
of the many application domains, such as regional economics [24], ecology [9], [41], environmental management [19], public safety [21… Index terms: Spatial Autocorrelation, Spatial Data Mining, Spatial Databases, Maximum Likelihood Theory.
Geophysical parameter estimation with a passive microwave spectrometer at 54 / 118 / 183 / 425 GHz
Leslie, Robert Vincent, 1972-
2004-01-01T23:59:59.000Z
…model of a convective cell is presented that provides a physical basis for this relationship.
On the vortex parameter estimation using wide band signals in active acoustic system
Paris-Sud XI, Université de
is an important operation in a large number of applications such as turbine monitoring, detection of a vortex in a closed hydraulic test loop. The objective of the work is to emphasize the effect
Hartmann, Mitra J. Z.
vehicles (SSVs) present a challenge from modeling, trajectory-tracking, and control design perspectives … generally necessitates either robust control techniques or accurate models of the vehicle dynamics … values from coarsely sampled data for a skid-steered vehicle, which traverses unknown or changing terrain
Khalil, Ahmad S. (Ahmad Samir), 1980-
2004-01-01T23:59:59.000Z
A sufficient understanding of the pathology that leads to cardiovascular disease is currently deficient. Atherosclerosis is a complex disease that is believed to be initiated and promoted by linked biochemical and biomechanical ...
Parameter estimation in induction motors: a comparison between the PE and the TS paradigm
Garatti, Simone
Induction motors have a fixed stator and a mobile rotor but, differently from the others, they are characterized by poly-phase stator windings besides three-phase rotor windings. Feeding the windings, … Rs) and the inductances (Lr, Ls) of the rotor and stator windings, and the mutual inductance M. In the model equations, …
Estimation of parameters in thermal-field emission from diamond. D.G. Walker
Walker, D. Greg
Keywords: thermal field emission; diamond film. Polycrystalline diamond films can exhibit outstanding … polycrystalline diamond films at elevated temperatures. Thermal effects are included in the models and provide … Wang et al. [10] observed emission from the region between grains in polycrystalline diamond films
Estimating evolutionary parameters and detecting signals of natural selection from genetic data
Bhatia, Gaurav
2014-01-01T23:59:59.000Z
Even prior to the elucidation of the structure of DNA, the theoretical foundations of population genetics had been well developed. Advances made by Sewall Wright, John B.S. Haldane, and Ronald A. Fisher form the basis with ...
Parameter Estimation and Model Discrimination for a Lithium-Ion Cell
interest in the modeling of the lithium-ion battery ever since this battery was first commercialized [1-18]. This interest has been fueled by the combination of the fast-growing lithium-ion battery market and the desire … of a lithium-ion battery measured over a wide range of rates. Single-Particle Model: this single-particle model …
Multi-parameter estimation in glacier models with adjoint and algorithmic differentiation
Davis, Andrew D. (Andrew Donaldson)
2012-01-01T23:59:59.000Z
The cryosphere is comprised of about 33 million km³ of ice, which corresponds to 70 meters of global mean sea level equivalent [30]. Simulating continental ice masses, such as the Antarctic or Greenland Ice Sheets, requires ...
State and Parameter Estimation for Nonlinearly Parameterized Systems: An H-Based Approach
Johansen, Tor Arne
Håvard Fjær Grip, Ali Saberi, Tor A. Johansen. School of Electrical Engineering and Computer Science ... The work of Håvard Fjær Grip is supported by the Research Council of Norway. The work of Ali Saberi ...
State and Parameter Estimation for Linear Systems with Nonlinearly Parameterized Perturbations
Johansen, Tor Arne
Håvard Fjær Grip, Ali Saberi, Tor A. Johansen. Abstract: We consider systems that can be described ... The work of Ali Saberi is partially supported by National Science Foundation grant ECS-0528882 and NAVY grants ONR ...
Frontczak, Monika
2012-01-01T23:59:59.000Z
relation to using ventilation and heating systems and their ... your home and use ventilation and heating systems properly? ... how the shading, ventilation and heating systems work and ...
hal-00119494, version 1 - 10 Dec 2006. Error structures and parameter estimation
Boyer, Edmond
probabilistic approach we have to know the law of the pair (C, C) or equivalently the law of C and the conditional law of C given C. Thus, the study of error transmission is associated to the calculus of images of probability measures. Unfortunately, the knowledge of the law of C given C by means of experiment
Estimation of Hydraulic Parameters under Unsaturated Flow Conditions in Heap Leaching
Sepúlveda, Mauricio
Heap leaching is a widely used extraction method for low-grade minerals as well as copper, gold, silver, and uranium. Copper minerals are primarily categorized as either copper sulphides or oxides. During heap leaching, sulfuric ... is suitable for copper recovery of the more stable sulphide minerals from copper ores. The construction ...
Estimation of uncertain parameters to improve modeling of Microbially Induced Calcite Precipitation
Cirpka, Olaf Arie
in the subsurface or fracking could be reduced with sealing technologies like microbially induced calcite precipitation (MICP). Figure 1: Potential application sites of MICP as a sealing technology.
Masuda, H.; Claridge, D.
2012-01-01T23:59:59.000Z
... cooling and heating and weather data using multiple linear regression models based on the simplified steady-state energy balance for a whole building. Two approaches use different response variables: the energy balance load (EBL) and the building thermal...
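The regression approach described in this record can be illustrated with a minimal sketch: synthetic daily heating energy is generated from outdoor-temperature and solar terms, then the coefficients are recovered by ordinary least squares. All variable names and numbers below are illustrative assumptions, not values from the study.

```python
import numpy as np

rng = np.random.default_rng(4)
n = 365
t_out = rng.normal(15, 8, n)      # daily mean outdoor temperature, degC (synthetic)
solar = rng.uniform(0, 6, n)      # daily insolation, kWh/m^2 (synthetic)

# synthetic daily heating energy: base load plus temperature and solar terms, plus noise
energy = 50 - 1.8 * t_out - 2.5 * solar + rng.normal(0, 2, n)

# ordinary least squares with an intercept column
X = np.column_stack([np.ones(n), t_out, solar])
coef, *_ = np.linalg.lstsq(X, energy, rcond=None)
print(np.round(coef, 2))          # close to the generating values [50, -1.8, -2.5]
```

With a year of daily data and modest noise, the fitted coefficients land very close to the generating values, which is the basic premise of weather-normalized energy analysis.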
Estimation of fracture flow parameters through numerical analysis of hydromechanical pressure pulses
Cappa, F.
2009-01-01T23:59:59.000Z
an Engineered Fractured Geothermal Reservoir. Example of the ... interactions in a fractured carbonate reservoir inferred ... within a shallow fractured carbonate reservoir. Fracture ...
Frontczak, Monika
2012-01-01T23:59:59.000Z
... at the Center for Indoor Climate and Energy, DTU. Thank you in advance ... air out to save energy? I aired out in order to ... by using as little energy as possible, if such a ...
Frontczak, Monika
2012-01-01T23:59:59.000Z
... register of buildings in Denmark (the BBR register). We have ... to live in and reduce Denmark's total energy consumption. ... types of housing in Denmark and completed by 645 people (
DOE TASK 99-5 Update REFINEMENT AND VALIDATION OF IN SITU PARAMETER ESTIMATION
and Aerospace Engineering, Stillwater, OK. KEY WORDS: geothermal energy, ground coupled, heat pump, heat exchanger ... facing designers of Ground Source Heat Pump (GSHP) systems applied in commercial and institutional as well as large residential buildings. The number of boreholes and the depth and cost of each borehole are highly ...
Ditchkoff, Steve
). Methods. Thermal imaging system: The thermal imaging camera used for locating fawns was a Raytheon PalmIR 250 Digital (24 × 10 × 10 cm; Raytheon Commercial In...
Estimation with Incompletely Specified Loss Functions (the Case of Several Location Parameters)
Brown, Lawrence D.
Author(s): Lawrence D. Brown. Source: Journal of the American Statistical Association, Vol. 70, No. 350 (Jun., 1975), pp. 417-427. Published by: American Statistical Association.
Parameter Estimation and Capacity Fade Analysis of Lithium-Ion Batteries Using Reformulated Models
Braatz, Richard D.
Many researchers have worked to develop methods to analyze and characterize capacity fade in lithium-ion batteries. As a complement to approaches to mathematically model capacity fade that require detailed understanding ...
Frontczak, Monika
2012-01-01T23:59:59.000Z
quality (IEQ) acceptance in residential buildings, Energy and Buildings, 41, 930- ... Lai, J.H.K. and Yik, F.W.H. (2007) ... of workers in office buildings: the European HOPE project, ...
Estimation of Distributed Parameters in Permittivity Models of Composite Dielectric Materials Using
Metric Framework; inorganic glass. 1. Introduction. Complex materials such as ceramic matrix ... spectroscopy has been shown to have sensitivity to heat-treated ceramic thermal barrier coatings, which ...
Vadose zone influences on aquifer parameter estimates of saturated-zone hydraulic theory
Szilagyi, Jozsef
Szilagyi*, Conservation and Survey Division, University of Nebraska, 113 Nebraska Hall, Lincoln, NE 68588 ... aquifer properties at the scale of the watershed (Szilagyi et al., 1998). Such work is of the utmost ... E-mail address: jszilagyil@unl.edu (J. Szilagyi) ... (Fig. 1), and h is the changing phreatic surface
Richardson, Andrew D.
and earth system models, especially for long-term (multiannual and greater) simulations. Data assimilation
Lichter, Matthew D. (Matthew Daniel), 1977-
2005-01-01T23:59:59.000Z
Future space missions are expected to use autonomous robotic systems to carry out a growing number of tasks. These tasks may include the assembly, inspection, and maintenance of large space structures; the capture and ...
Chen, Shu-Ching
to Fire and/or Explosion in the Chemical Process Industries [1-4]:
1943, Ludwigshafen, Germany: butadiene explosion, >100 deaths
1944, Cleveland, OH: LNG fire, 128 deaths / 200-400 injured
1947, Texas: ? / >200
1962, Ras Tanura, Saudi Arabia: propane fire, 1 death / 111 injured
1964, Tokyo, Japan: MEKPO fire/explosion, 19
Mukhopadhyay, S.
2009-01-01T23:59:59.000Z
have assumed the same rock properties for the entire packed ... earlier, among the rock properties (permeability, porosity, ... However, these are not rock properties and are constrained
Azad, Abdul-Majeed
accurate experimental measurements on the density and heat capacity of liquid UO2 up to ~8000 K ... density and isobaric heat capacity, much more easily than other conventional methods [3,4]. Many ... of state for liquid urania has also been developed which predicts a critical temperature (Tc) ≈ 10500 K
Mukhopadhyay, S.
2009-01-01T23:59:59.000Z
that specific heat capacity, initial liquid saturation, and ... Specific heat capacity; Gas saturation; Liquid saturation ... heat capacity from FFTL; more precision in measurement is needed. Liquid
Dynamic Structure Learning of Factor Graphs and Parameter Estimation of a Constrained Nonlinear
Southern California, University of
optimization is that the underlying reservoir structure is unknown and changes continuously over time. One of the popular oil recovery techniques is waterflooding, which injects water into injectors ...
Estimates of HE-LHC beam parameters at different injection energies
Sen, Tanaji (Fermilab)
2010-11-01T23:59:59.000Z
A future upgrade to the LHC envisions increasing the top energy to 16.5 TeV and upgrading the injectors. There are two proposals to replace the SPS as the injector to the LHC. One calls for a superconducting ring in the SPS tunnel while the other calls for an injector (LER) in the LHC tunnel. In both scenarios, the injection energy to the LHC will increase. In this note we look at some of the consequences of increased injection energy to the beam dynamics in the LHC.
Parameter Estimation of Dynamic Air-conditioning Component Models Using Limited Sensor Data
Hariharan, Natarajkumar
2011-08-08T23:59:59.000Z
.1). ?? is the area of the orifice opening and ?? is the coefficient of discharge of the expansion valve at that specific condition. The coefficient of discharge depends on the EEV geometry and the thermal-fluid properties of the refrigerant flowing through... Area of application of bulb pressure; ?2 Area of application of evaporator pressure; ?? External surface area of the TEV bulb; ??? Area of heat conduction between the refrigerant and the bulb; ?? Area of opening for refrigerant flow in expansion...
Al-Nasir, Abdul Majid Hamza
1968-01-01T23:59:59.000Z
Order Relations and Prior Distributions in the Estimation of Multivariate Normal Parameters with Partial Data. A Thesis by Abdul Majid Hamza Al-Nasir. Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirement for the degree of Master of Science, August 1968. Major Subject: Statistics.
On-line parameter estimation via algebraic method: An experimental illustration.
Paris-Sud XI, UniversitÃ© de
identification. This algorithm is illustrated experimentally on a Permanent Magnet Stepper Motor (PMSM). Keywords: identification, algebraic method, magnetic bearing, PMSM. I. INTRODUCTION. This article is concerned ...
DOE/SC-ARM/TR-097 Radiatively Important Parameters Best Estimate
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Finding New Thermoelectric Compounds Using Crystallographic Data: Atomic Displacement Parameters
Chakoumakos, B.C.; Mandrus, D.G.; Sales, B.C.; Sharp, J.W.
1999-08-29T23:59:59.000Z
A new structure-property relationship is discussed which links atomic displacement parameters (ADPs) and the lattice thermal conductivity of clathrate-like compounds. For many clathrate-like compounds, in which one of the atom types is weakly bound and ''rattles'' within its atomic cage, room temperature ADP information can be used to estimate the room temperature lattice thermal conductivity, the vibration frequency of the ''rattler'', and the temperature dependence of the heat capacity. Neutron data and X-ray crystallography data, reported in the literature, are used to apply this analysis to several promising classes of thermoelectric materials.
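The link between an ADP and a rattler vibration frequency can be sketched under the high-temperature harmonic (classical equipartition) approximation, in which m·ω²·⟨u²⟩ = kB·T per vibrational degree of freedom. The atom mass and U_iso value below are illustrative assumptions, not data from the paper.

```python
import numpy as np

kB = 1.380649e-23      # Boltzmann constant, J/K
amu = 1.66053907e-27   # atomic mass unit, kg

T = 300.0              # temperature, K
U_iso = 0.02e-20       # mean-square displacement, m^2 (0.02 Angstrom^2, illustrative)
m = 140.0 * amu        # mass of a hypothetical heavy "rattler" atom

# classical equipartition for a harmonic mode: m * omega^2 * <u^2> = kB * T
omega = np.sqrt(kB * T / (m * U_iso))
freq_THz = omega / (2 * np.pi) / 1e12
print(round(freq_THz, 2))   # about 1.5 THz for these inputs
```

A large room-temperature ADP thus maps directly to a low vibration frequency, which is the qualitative signature of a weakly bound "rattler" used in this kind of analysis.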
Solid angle and surface density as criticality parameters
Thomas, J.T.
1980-10-01T23:59:59.000Z
Two methods often used to establish nuclear criticality safety limits for operations with fissile materials are the surface density and solid angle techniques. The two methods are used as parameters to express experimental and validated calculations of critical configurations. It is demonstrated that each method can represent critical arrangements of subcritical units and that there can be established a one-to-one correspondence between them. The analyses further show that the effect on an array neutron multiplication factor of perturbations to the array can be reliably estimated and that each form of fissile material and unit shape has a specific representation.
Outdoor PV Module Degradation of Current-Voltage Parameters: Preprint
Smith, R. M.; Jordan, D. C.; Kurtz, S. R.
2012-04-01T23:59:59.000Z
Photovoltaic (PV) module degradation rate analysis quantifies the loss of PV power output over time and is useful for estimating the impact of degradation on the cost of energy. An understanding of the degradation of all current-voltage (I-V) parameters helps to determine the cause of the degradation and also gives useful information for the design of the system. This study reports on data collected from 12 distinct mono- and poly-crystalline modules deployed at the National Renewable Energy Laboratory (NREL) in Golden, Colorado. Most modules investigated showed < 0.5%/year decrease in maximum power due to short-circuit current decline.
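A degradation rate in %/year is commonly obtained by fitting a straight line to maximum power versus time. The sketch below does this on synthetic monthly data; the -0.4 %/year rate and noise level are invented for illustration, not NREL measurements.

```python
import numpy as np

rng = np.random.default_rng(2)
years = np.arange(120) / 12.0    # ten years of monthly observations
true_rate = -0.4                 # %/year, an invented degradation rate
pmax = 100 * (1 + true_rate / 100 * years) + rng.normal(0, 0.3, years.size)

# least-squares line through (time, maximum power); rate is slope over intercept
slope, intercept = np.polyfit(years, pmax, 1)
rate_pct_per_year = 100 * slope / intercept
print(round(rate_pct_per_year, 2))   # recovers a value near -0.4 %/year
```

Normalizing the slope by the fitted intercept expresses the loss as a percentage of initial power, which is how rates such as the <0.5 %/year figure quoted above are reported.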
Perturbed power-law parameters from WMAP7
Joy, Minu [Dept. of Physics, Alphonsa College, Pala 686574 (India); Souradeep, Tarun, E-mail: minu@iucaa.ernet.in, E-mail: tarun@iucaa.ernet.in [Inter-University Centre for Astronomy and Astrophysics, Post Bag 4, Ganeshkhind, Pune 411 007 (India)
2011-02-01T23:59:59.000Z
We present a perturbative approach for studying inflation models with soft departures from the scale-free spectra of the power-law model. In the perturbed power law (PPL) approach one obtains at leading order both the scalar and tensor power spectra with the running of their spectral indices. In contrast to the widely used slow-roll expansion method, for which ? and ? have to be small, PPL can also treat models with comparatively larger ? and ?, under the condition that (?+?) is small. The PPL spectrum is confronted with data and we show that the PPL parameters are well estimated from WMAP-7 data.
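For reference, the scale-free power-law spectrum that the PPL approach perturbs can be written, in standard cosmology notation (not reproduced from the paper itself), with running of the scalar spectral index as:

```latex
\ln P_s(k) \;=\; \ln A_s \;+\; (n_s - 1)\,\ln\frac{k}{k_0} \;+\; \frac{1}{2}\,\frac{\mathrm{d}n_s}{\mathrm{d}\ln k}\,\ln^2\frac{k}{k_0}
```

with an analogous expression for the tensor spectrum in terms of the tensor index $n_t$; a "soft departure" from scale-freeness means the running term stays small over the observed range of $k$.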
Datadriven calibration of linear estimators with minimal penalties
This paper tackles the problem of selecting among several linear estimators in non-parametric regression; this includes model selection for linear regression, the choice of a regularization parameter in kernel ridge ... classification, with linear and non-linear predictors [37, 36]. A central issue common to all regularization
Condition Number Estimates for Combined Potential Boundary Integral
Langdon, Stephen
Condition Number Estimates for Combined Potential Boundary Integral Operators in Acoustic ... parameter. Of independent interest we also obtain upper and lower bounds on the norms of two oscillatory integral operators, namely the classical acoustic single- and double-layer potential operators.
ESTIMATION AND CONTROL OF INDUSTRIAL PROCESSES WITH PARTICLE FILTERS
de Freitas, Nando
ESTIMATION AND CONTROL OF INDUSTRIAL PROCESSES WITH PARTICLE FILTERS. Rubén Morales ... of industrial processes. In particular, we adopt a jump Markov linear Gaussian (JMLG) model to describe an industrial heat exchanger. The parameters of this model are identified with the expectation maximisation
Panel Damping Loss Factor Estimation Using The Random Decrement Technique
Dande, Himanshu Amol
2010-12-10T23:59:59.000Z
The use of the Random Decrement Technique (RDT) for estimating panel damping loss factors ranging from 1% to 10% is examined in a systematic way, with a focus on establishing the various parameters one must specify to use the technique to the best...
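The Random Decrement Technique itself can be sketched as follows: segments of the measured response that begin whenever the signal crosses a chosen trigger level upward are averaged, and the average approximates a free-decay signature from which damping can be fit (for a single mode, loss factor η ≈ 2ζ). The signal below is a synthetic noise-driven single-mode oscillator; all parameter values are illustrative assumptions, not the panel data of this thesis.

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 4096.0                      # sample rate, Hz
f0, zeta = 200.0, 0.02           # modal frequency and damping ratio (synthetic)
w0, dt = 2 * np.pi * f0, 1 / fs
n = int(30 * fs)                 # 30 s of response

# synthetic panel response: a noise-driven single-degree-of-freedom oscillator
x = np.zeros(n)
v = 0.0
for i in range(1, n):
    a = -2 * zeta * w0 * v - w0**2 * x[i - 1] + rng.normal(0.0, 1e4)
    v += a * dt                  # semi-implicit Euler keeps the oscillator stable
    x[i] = x[i - 1] + v * dt

# random decrement: average segments that start at upward crossings of a trigger level
trig = x.std()
seg = int(0.1 * fs)              # 0.1 s signature length
starts = np.where((x[:-1] < trig) & (x[1:] >= trig))[0] + 1
starts = starts[starts + seg < n]
signature = np.mean([x[s:s + seg] for s in starts], axis=0)
print(len(starts), signature[0] / trig)   # many triggers; signature starts near trig
```

The averaged signature begins at the trigger level and decays with the modal damping; fitting its envelope (or applying a Hilbert transform) then yields ζ and hence the loss factor.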
Measurement Noise versus Process Noise in Ionosphere Estimation for WAAS
Stanford University
Juan Blanch, Todd Walter ... of several parameters: the geometry of the measurements, the measurement noise, and the state of the ionosphere, which yields the process noise. It is very important to distinguish carefully between measurement
Lepton sector of a fourth generation
Burdman, G. [Fermi National Accelerator Laboratory, Batavia, Illinois 60510 (United States); Instituto de Fisica, Universidade de Sao Paulo, Sao Paulo (Brazil); Da Rold, L. [Centro Atomico Bariloche, Bariloche (Argentina); Matheus, R. D. [Instituto de Fisica, Universidade de Sao Paulo, Sao Paulo (Brazil)
2010-09-01T23:59:59.000Z
In extensions of the standard model with a heavy fourth generation, one important question is what makes the fourth-generation lepton sector, particularly the neutrinos, so different from the lighter three generations. We study this question in the context of models of electroweak symmetry breaking in warped extra dimensions, where the flavor hierarchy is generated by choosing the localization of the zero-mode fermions in the extra dimension. In this setup the Higgs sector is localized near the infrared brane, whereas the Majorana mass term is localized at the ultraviolet brane. As a result, light neutrinos are almost entirely Majorana particles, whereas the fourth-generation neutrino is mostly a Dirac fermion. We show that it is possible to obtain heavy fourth-generation leptons in regions of parameter space where the light neutrino masses and mixings are compatible with observation. We study the impact of these bounds, as well as the ones from lepton flavor violation, on the phenomenology of these models.
Synthetic guide star generation
Payne, Stephen A.; Page, Ralph H.; Ebbers, Christopher A.; Beach, Raymond J.
2004-03-09T23:59:59.000Z
A system for assisting in observing a celestial object and providing synthetic guide star generation. A lasing system provides radiation at a frequency at or near 938 nm and radiation at a frequency at or near 1583 nm. The lasing system includes a fiber laser operating between 880 nm and 960 nm and a fiber laser operating between 1524 nm and 1650 nm. A frequency-conversion system mixes the radiation and generates light at a frequency at or near 589 nm. A system directs the light at a frequency at or near 589 nm toward the celestial object and provides synthetic guide star generation.
Synthetic guide star generation
Payne, Stephen A. (Castro Valley, CA); Page, Ralph H. (Castro Valley, CA); Ebbers, Christopher A. (Livermore, CA); Beach, Raymond J. (Livermore, CA)
2008-06-10T23:59:59.000Z
A system for assisting in observing a celestial object and providing synthetic guide star generation. A lasing system provides radiation at a frequency at or near 938 nm and radiation at a frequency at or near 1583 nm. The lasing system includes a fiber laser operating between 880 nm and 960 nm and a fiber laser operating between 1524 nm and 1650 nm. A frequency-conversion system mixes the radiation and generates light at a frequency at or near 589 nm. A system directs the light at a frequency at or near 589 nm toward the celestial object and provides synthetic guide star generation.
Lothian, Josh [ORNL; Powers, Sarah S [ORNL; Sullivan, Blair D [ORNL; Baker, Matthew B [ORNL; Schrock, Jonathan [ORNL; Poole, Stephen W [ORNL
2013-12-01T23:59:59.000Z
The benchmarking effort within the Extreme Scale Systems Center at Oak Ridge National Laboratory seeks to provide High Performance Computing benchmarks and test suites of interest to the DoD sponsor. The work described in this report is a part of the effort focusing on graph generation. A previously developed benchmark, SystemBurn, allowed the emulation of different application behavior profiles within a single framework. To complement this effort, similar capabilities are desired for graph-centric problems. This report examines existing synthetic graph generator implementations in preparation for further study on the properties of their generated synthetic graphs.
Sokolov, Mikhail A [ORNL
2010-01-01T23:59:59.000Z
A force-displacement trace of a Charpy impact test of a reactor pressure vessel (RPV) steel in the transition range has a characteristic point, the so-called force at the end of unstable crack propagation, Fa. A two-parameter Weibull probability function is used to model the distribution of Fa in Charpy tests performed at ORNL on different RPV steels in the unirradiated and irradiated conditions. These data have good replication at a given test temperature; thus, the statistical analysis was applicable. It is shown that when temperature is normalized to TNDT (T-TNDT) or to T100a (T-T100a), the median Fa values of different RPV steels tend to form the same shape of temperature dependence. Depending on the normalization temperature, TNDT or T100a, this suggests a universal shape of the temperature dependence of Fa for different RPV steels. The best fits for these temperature dependencies are presented. These dependencies are suggested for use in estimation of NDT or T100a from randomly generated Charpy impact tests. Maximum likelihood methods are used to derive equations to estimate TNDT and T100a from randomly generated Charpy impact tests.
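Fitting a two-parameter Weibull by maximum likelihood, as described in this record, can be sketched on synthetic data (the shape and scale values are invented, not ORNL measurements). The shape parameter solves the standard Weibull profile-likelihood equation, found here by bisection since its left-hand side is monotone in the shape:

```python
import numpy as np

rng = np.random.default_rng(1)
# synthetic stand-in for replicated Charpy Fa measurements at one temperature (kN)
shape_true, scale_true = 4.0, 15.0
fa = scale_true * rng.weibull(shape_true, size=500)

logx = np.log(fa)

def g(c):
    # MLE condition for the Weibull shape c (increasing function of c)
    w = fa ** c
    return (w * logx).sum() / w.sum() - 1.0 / c - logx.mean()

# bisection on the bracket [0.1, 50]: g < 0 below the root, g > 0 above it
lo, hi = 0.1, 50.0
for _ in range(100):
    mid = 0.5 * (lo + hi)
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid
shape_mle = 0.5 * (lo + hi)
scale_mle = (fa ** shape_mle).mean() ** (1.0 / shape_mle)
print(round(shape_mle, 2), round(scale_mle, 2))   # near the generating 4.0 and 15.0
```

Once the shape is known, the scale follows in closed form, and quantiles such as the median Fa are available as scale · (ln 2)^(1/shape).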
Demonstration of Entanglement-Enhanced Phase Estimation in Solid
Gang-Qin Liu; Yu-Ran Zhang; Yan-Chun Chang; Jie-Dong Yue; Heng Fan; Xin-Yu Pan
2014-08-03T23:59:59.000Z
Precise parameter estimation plays a central role in science and technology. The statistical error in estimation can be decreased by repeating the measurement, so that the resultant uncertainty of the estimated parameter scales as the inverse square root of the number of repetitions, in accordance with the central limit theorem. Quantum parameter estimation, an emerging field of quantum technology, aims to use quantum resources to yield higher statistical precision than classical approaches. Here, we report the first room-temperature implementation of entanglement-enhanced phase estimation in a solid-state system: the nitrogen-vacancy (NV) centre in pure diamond. We demonstrate a super-resolving phase measurement with two entangled qubits of different physical realizations: an NV centre electron spin and a proximal ${}^{13}$C nuclear spin. The experimental data clearly show the uncertainty reduction when the entanglement resource is used, confirming the theoretical expectation. Our results represent a more general and elemental demonstration of the enhancement of quantum metrology over classical procedures, one which fully exploits the quantum nature of the system and probes.
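The classical square-root scaling invoked above can be checked with a few lines of simulation: averaging n noisy measurements shrinks the standard error by 1/√n. The quantum enhancement itself is not simulated here; the phase and noise values are purely illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)
true_phase = 0.7     # parameter to be estimated (illustrative value)
sigma = 0.2          # standard deviation of a single noisy measurement

for n in (100, 400, 1600):
    # mean of n repeated measurements, over 10,000 simulated experiments
    estimates = rng.normal(true_phase, sigma, size=(10_000, n)).mean(axis=1)
    print(n, estimates.std())   # ≈ sigma / sqrt(n): 0.02, 0.01, 0.005
```

Quadrupling the number of repetitions halves the spread of the estimator, which is exactly the classical 1/√n wall that entanglement-enhanced schemes aim to beat.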
The Effects of an Increasing Surplus of Energy Generating Capability in the Pacific Northwest
limit the extent to which operators of wind generation can economically displace ... wind generation adequacy, reliability, and efficiency. This paper presents estimates of the effect of incremental wind generation on the frequency of excess energy events and on the costs and other implications of dealing
Image-based meteorologic visibility estimation
Graves, Nathan
2011-01-01T23:59:59.000Z
(Contents fragments: the estimated luminance; Nephelometer; Luminance Meter; intensity and the estimated luminance.)
Generating electricity from viruses
Lee, Seung-Wuk
2014-06-23T23:59:59.000Z
Berkeley Lab's Seung-Wuk Lee discusses "Generating electricity from viruses" in this Oct. 28, 2013 talk, which is part of a Science at the Theater event entitled Eight Big Ideas.