Prediction of vehicle impact forces
Kaderka, Darrell Laine
1990-01-01T23:59:59.000Z
PREDICTION OF VEHICLE IMPACT FORCES. A Thesis by DARRELL LAINE KADERKA. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, May 1990. Major Subject: Civil Engineering. Approved as to style and content by: C. Eugene Buth (Chair of Committee), W. Lynn Beason (Member), Don E. Bray (Member), James T. P. Yao (Department Head). May...
Improving predictability of time series using maximum entropy methods
Gregor Chliamovitch; Alexandre Dupuis; Bastien Chopard; Anton Golub
2014-11-28T23:59:59.000Z
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, at least in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, for then it provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
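The frequency-sampling baseline that the abstract compares MaxEnt against can be sketched in a few lines. This is purely illustrative, not the authors' code; the two-state series below is invented (e.g. a discretized up/down exchange-rate series):

```python
from collections import Counter

def frequency_transition_matrix(series, n_states):
    """Estimate a Markov transition matrix by frequency sampling:
    count observed (i -> j) transitions and normalize each row."""
    counts = Counter(zip(series, series[1:]))
    matrix = []
    for i in range(n_states):
        row = [counts[(i, j)] for j in range(n_states)]
        total = sum(row)
        # Fall back to a uniform row when state i was never visited.
        matrix.append([c / total if total else 1.0 / n_states for c in row])
    return matrix

# Invented short sample; with short histories these row estimates are
# noisy, which is the regime where the paper argues MaxEnt can do better.
series = [0, 1, 1, 0, 1, 0, 0, 1, 1, 1, 0, 1]
P = frequency_transition_matrix(series, 2)
for row in P:
    assert abs(sum(row) - 1.0) < 1e-12  # each row is a probability vector
```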
EERE Takes Important Steps to Ensure Maximum Impact of Technology...
Office of Environmental Management (EM)
and led by EERE's Jeff Dowd and Yaw Agyeman from Lawrence Berkeley National Laboratory (LBNL), both of whom have developed and supervised impact analysis for EERE before, and also...
Mirant Potomac, Alexandria, Virginia: Maximum Impacts Predicted by
Microbial impacts on geothermometry temperature predictions
Yoshiko Fujita; David W. Reed; Kaitlyn R. Nowak; Vicki S. Thompson; Travis L. McLing; Robert W. Smith
2013-02-01T23:59:59.000Z
Conventional geothermometry approaches assume that the composition of a collected water sample originating in a deep geothermal reservoir still reflects chemical equilibration of the water with the deep reservoir rocks. However, for geothermal prospecting samples whose temperatures have dropped to <120°C, temperature predictions may be skewed by the activity of microorganisms; microbial metabolism can drastically and rapidly change the water’s chemistry. We hypothesize that knowledge of microbial impacts on exploration sample geochemistry can be used to constrain input into geothermometry models and thereby improve the reliability of reservoir temperature predictions. To evaluate this hypothesis we have chosen to focus on sulfur cycling, because of the significant changes in redox state and pH associated with sulfur chemistry. Redox and pH are critical factors in defining the mineral-fluid equilibria that form the basis of solute geothermometry approaches. Initially we are developing assays to detect the process of sulfate reduction, using knowledge of genes specific to sulfate reducing microorganisms. The assays rely on a common molecular biological technique known as quantitative polymerase chain reaction (qPCR), which allows estimation of the number of target organisms in a particular sample by enumerating genes specific to the organisms rather than actually retrieving and characterizing the organisms themselves. For quantitation of sulfate reducing genes using qPCR, we constructed a plasmid (a piece of DNA) containing portions of two genes (known as dsrA and dsrB) that are directly involved with sulfate reduction and unique to sulfate reducing microorganisms. Using the plasmid as well as DNA from other microorganisms known to be sulfate reducers or non-sulfate reducers, we developed qPCR protocols and showed the assay’s specificity to sulfate reducers and that a qPCR standard curve using the plasmid was linear over >5 orders of magnitude. 
As a first test with actual field samples, the assay was applied to DNA extracted from water collected at springs located in and around the town of Soda Springs, Idaho. Soda Springs is located in the fold and thrust belt on the eastern boundary of the track of the Yellowstone Hotspot, where a deep carbon dioxide source believed to originate from Mississippian limestone contacts acidic hydrothermal fluids at depth. Both sulfate and sulfide have been measured in samples collected previously at Soda Springs. Preliminary results indicate that sulfate reducing genes were present in each of the samples tested. Our work supports evaluation of the potential for microbial processes to have altered water chemistry in geothermal exploration samples.
Numerical Prediction of High-Impact Local Weather: A Driver for Petascale Computing
Xue, Ming
Chapter 6, Numerical Prediction of High-Impact Local Weather: A Driver for Petascale Computing. Severe weather events, including damaging winds, lightning, hurricanes and winter storms, cause hundreds of deaths and significant average annual economic losses; while the importance of mitigating the impacts of such events on the economy and society is obvious, our ability to do so...
John Max Wilson; Keith Andrew
2012-07-27T23:59:59.000Z
We investigate the relative time scales associated with finite future cosmological singularities, especially those classified as Big Rip cosmologies, and the maximum predictability time of a coupled FRW-KG scalar cosmology with chaotic regimes. Our approach is to show that by starting with a FRW-KG scalar cosmology with a potential that admits an analytical solution resulting in a finite time future singularity there exists a Lyapunov time scale that is earlier than the formation of the singularity. For this singularity both the cosmological scale parameter a(t) and the Hubble parameter H(t) become infinite at a finite future time, the Big Rip time. We compare this time scale to the predictability time scale for a chaotic FRW-KG scalar cosmology. We find that there are cases where the chaotic time scale is earlier than the Big Rip singularity calling for special care in interpreting and predicting the formation of the future cosmological singularity.
Predicting on-site environmental impacts of municipal engineering works
Gangolells, Marta, E-mail: marta.gangolells@upc.edu; Casals, Miquel, E-mail: miquel.casals@upc.edu; Forcada, Núria, E-mail: nuria.forcada@upc.edu; Macarulla, Marcel, E-mail: marcel.macarulla@upc.edu
2014-01-15T23:59:59.000Z
The research findings fill a gap in the body of knowledge by presenting an effective way to evaluate the significance of on-site environmental impacts of municipal engineering works prior to the construction stage. First, 42 on-site environmental impacts of municipal engineering works were identified by means of a process-oriented approach. Then, 46 indicators and their corresponding significance limits were determined on the basis of a statistical analysis of 25 new-build and remodelling municipal engineering projects. In order to ensure the objectivity of the assessment process, direct and indirect indicators were always based on quantitative data from the municipal engineering project documents. Finally, two case studies were analysed and found to illustrate the practical use of the proposed model. The model highlights the significant environmental impacts of a particular municipal engineering project prior to the construction stage. Consequently, preventive actions can be planned and implemented during on-site activities. The results of the model also allow a comparison of proposed municipal engineering projects and alternatives with respect to the overall on-site environmental impact and the absolute importance of a particular environmental aspect. These findings are useful within the framework of the environmental impact assessment process, as they help to improve the identification and evaluation of on-site environmental aspects of municipal engineering works. The findings may also be of use to construction companies that are willing to implement an environmental management system or simply wish to improve on-site environmental performance in municipal engineering projects. -- Highlights: • We present a model to predict the environmental impacts of municipal engineering works. • It highlights significant on-site environmental impacts prior to the construction stage. • Findings are useful within the environmental impact assessment process. 
• They also help contractors to implement environmental management systems.
Using the Maximum X-ray Flux Ratio and X-ray Background to Predict Solar Flare Class
Winter, Lisa M
2015-01-01T23:59:59.000Z
We present the discovery of a relationship between the maximum ratio of the flare flux (namely, 0.5-4 Ang to the 1-8 Ang flux) and non-flare background (namely, the 1-8 Ang background flux), which clearly separates flares into classes by peak flux level. We established this relationship based on an analysis of the Geostationary Operational Environmental Satellites (GOES) X-ray observations of ~ 50,000 X, M, C, and B flares derived from the NOAA/SWPC flares catalog. Employing a combination of machine learning techniques (K-nearest neighbors and nearest-centroid algorithms) we show a separation of the observed parameters for the different peak flaring energies. This analysis is validated by successfully predicting the flare classes for 100% of the X-class flares, 76% of the M-class flares, 80% of the C-class flares and 81% of the B-class flares for solar cycle 24, based on the training of the parametric extracts for solar flares in cycles 22-23.
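The nearest-centroid step mentioned above can be sketched as follows. This is a minimal illustration of the technique, not the authors' pipeline; the feature values, class labels, and the (log flux ratio, log background) feature space below are invented stand-ins, not GOES data:

```python
import math

def centroids(samples):
    """samples: {class_label: [(x, y), ...]} in a 2-D feature space,
    e.g. (log max flux ratio, log background flux). Returns per-class means."""
    out = {}
    for label, pts in samples.items():
        n = len(pts)
        out[label] = (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)
    return out

def classify(point, cents):
    """Assign the class whose centroid is nearest in Euclidean distance."""
    return min(cents, key=lambda c: math.dist(point, cents[c]))

# Invented training points: X-class flares sit apart from B-class flares.
train = {"X": [(0.9, -3.0), (1.0, -2.8)], "B": [(0.1, -7.0), (0.2, -6.8)]}
cents = centroids(train)
print(classify((0.95, -2.9), cents))  # → X
```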
Letschert, Virginie; Desroches, Louis-Benoit; McNeil, Michael; Saheb, Yamina
2010-05-03T23:59:59.000Z
The US Department of Energy (US DOE) has placed lighting and appliance standards at a very high priority of U.S. energy policy. However, the maximum energy savings and CO2 emissions reduction achievable via minimum efficiency performance standards (MEPS) have not yet been fully characterized. The Bottom-Up Energy Analysis System (BUENAS), first developed in 2007, is a global, generic, and modular tool designed to provide policy makers with estimates of potential impacts resulting from MEPS for a variety of products, at the international and/or regional level. Using the BUENAS framework, we estimated potential national energy savings and CO2 emissions mitigation in the US residential sector that would result from the most aggressive policy foreseeable: standards effective in 2014 set at the current maximum technology (Max Tech) available on the market. This represents the most likely characterization of what can be maximally achieved through MEPS in the US. The authors rely on the latest Technical Support Documents and Analytical Tools published by the U.S. Department of Energy as a source to determine appliance stock turnover and projected efficiency scenarios of what would occur in the absence of policy. In our analysis, national impacts are determined for the following end uses: lighting, television, refrigerator-freezers, central air conditioning, room air conditioning, residential furnaces, and water heating. The analyzed end uses cover approximately 65% of site energy consumption in the residential sector (50% of electricity consumption and 80% of natural gas and LPG consumption). This paper uses the BUENAS methodology to calculate that energy savings from Max Tech for the U.S. residential sector products covered in this paper will reach an 18% reduction in electricity demand compared to the base case and an 11% reduction in natural gas and LPG consumption by 2030. The methodology results in reductions in CO2 emissions of a similar magnitude.
Theoretical Prediction and Impact of Fundamental Electric Dipole Moments
Sebastian A. R. Ellis; Gordon L. Kane
2014-05-29T23:59:59.000Z
The predicted Standard Model (SM) electric dipole moments (EDMs) of electrons and quarks are tiny, providing an important window to observe new physics. Theories beyond the SM typically allow relatively large EDMs. The EDMs depend on the relative phases of terms in the effective Lagrangian of the extended theory, which are generally unknown. Underlying theories, such as string/M-theories compactified to four dimensions, could predict the phases and thus EDMs in the resulting supersymmetric (SUSY) theory. Earlier one of us, with collaborators, made such a prediction and found, unexpectedly, that the phases were predicted to be zero at tree level in the theory at the unification or string scale $\sim\mathcal{O}(10^{16})$ GeV. Electroweak (EW) scale EDMs still arise via running from the high scale, and depend only on the SM Yukawa couplings that also give the CKM phase. Here we extend the earlier work by studying the dependence of the low scale EDMs on the constrained but not fully known fundamental Yukawa couplings. The dominant contribution is from two loop diagrams and is not sensitive to the choice of Yukawa texture. The electron EDM should not be found to be larger than about $5\times 10^{-30}\,e$ cm, and the neutron EDM should not be larger than about $5\times 10^{-29}\,e$ cm. These values are quite a bit smaller than the reported predictions from Split SUSY and typical effective theories, but much larger than the Standard Model prediction. Also, since models with random phases typically give much larger EDMs, it is a significant testable prediction of compactified M-theory that the EDMs should not be above these upper limits. The actual EDMs can be below the limits, so once they are measured they could provide new insight into the fundamental Yukawa couplings of leptons and quarks. We comment also on the role of strong CP violation. EDMs probe fundamental physics near the Planck scale.
Predicted Impacts of Proton Temperature Anisotropy on Solar Wind Turbulence
Klein, Kristopher G
2015-01-01T23:59:59.000Z
Particle velocity distributions measured in the weakly collisional solar wind are frequently found to be non-Maxwellian, but how these non-Maxwellian distributions impact the physics of plasma turbulence in the solar wind remains unanswered. Using numerical solutions of the linear dispersion relation for a collisionless plasma with a bi-Maxwellian proton velocity distribution, we present a unified framework for the four proton temperature anisotropy instabilities, identifying the associated stable eigenmodes, highlighting the unstable region of wavevector space, and presenting the properties of the growing eigenfunctions. Based on physical intuition gained from this framework, we address how the proton temperature anisotropy impacts the nonlinear dynamics of the Alfvénic fluctuations underlying the dominant cascade of energy from large to small scales and how the fluctuations driven by proton temperature anisotropy instabilities interact nonlinearly with each other and with the fluctuations of the large-scal...
Not Available
1993-07-01T23:59:59.000Z
This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility is approximately 17,409 acres (74 square miles), and it is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (the 1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS).
The impact of electricity market schemes on predictability being a decision factor in the wind farm
Paris-Sud XI, Université de
The impact of electricity market schemes on predictability being a decision factor in the wind farm used criterion of capacity factor on the investment phase of a wind farm and on spatial planning, it is now recognized that accurate short-term forecasts of wind farms´ power output over the next few hours
Maguire, J.; Burch, J.
2013-08-01T23:59:59.000Z
Modeling residential water heaters with dynamic simulation models can provide accurate estimates of their annual energy consumption, if the units' characteristics and use conditions are known. Most gas storage water heaters (GSWHs) include a standing pilot light. It is generally assumed that the pilot light energy will help make up standby losses and have no impact on the predicted annual energy consumption. However, that is not always the case. The gas input rate and conversion efficiency of a pilot light for a GSWH were determined from laboratory data. The data were used in simulations of a typical GSWH with and without a pilot light, for two cases: 1) the GSWH is used alone; and 2) the GSWH is the second tank in a solar water heating (SWH) system. The sensitivity of wasted pilot light energy to annual hot water use, climate, and installation location was examined. The GSWH used alone in unconditioned space in a hot climate had a slight increase in energy consumption. The GSWH with a pilot light used as a backup to an SWH used up to 80% more auxiliary energy than one without in hot, sunny locations, from increased tank losses.
Karak, Bidya Binay [Department of Physics, Indian Institute of Science, Bangalore 560012 (India); Nandy, Dibyendu, E-mail: bidya_karak@physics.iisc.ernet.in, E-mail: dnandi@iiserkol.ac.in [Indian Institute for Science Education and Research, Kolkata, Mohampur 741252, West Bengal (India)
2012-12-10T23:59:59.000Z
Prediction of the Sun's magnetic activity is important because of its effect on the space environment and climate. However, recent efforts to predict the amplitude of the solar cycle have resulted in diverging forecasts with no consensus. Yeates et al. have shown that the dynamical memory of the solar dynamo mechanism governs predictability, and this memory is different for advection- and diffusion-dominated solar convection zones. By utilizing stochastically forced, kinematic dynamo simulations, we demonstrate that the inclusion of downward turbulent pumping of magnetic flux reduces the memory of both advection- and diffusion-dominated solar dynamos to only one cycle; stronger pumping degrades this memory further. Thus, our results reconcile the diverging dynamo-model-based forecasts for the amplitude of solar cycle 24. We conclude that reliable predictions for the maximum of solar activity can be made only at the preceding minimum, allowing about five years of advance planning for space weather. For more accurate predictions, sequential data assimilation would be necessary in forecasting models to account for the Sun's short memory.
Buelt, J.L.; Conbere, W.; Freshley, M.D.; Hicks, R.J.; Kuhn, W.L.; Lamar, D.A.; Serne, R.J.; Smoot, J.L.
1988-03-01T23:59:59.000Z
Impacts of past and potential future discharges of ammoniated water to the 216-A-36B crib on groundwater and river concentrations of hazardous chemical constituents are studied. Until August 1987, the 216-A-36B crib, located in the 200-East Area of the Hanford Site, accepted ammoniated water discharges. Although this study addresses known hazardous chemical constituents associated with such discharges, the primary concern is the discharge of NH4OH because of its microbiological conversion to NO2- and NO3-. As a result of fuel decladding operations, material balance calculations indicate that NH4OH has been discharged to the 216-A-36B crib in amounts that exceed reportable quantities under the Comprehensive Environmental Response, Compensation and Liability Act of 1980. Although flow to the crib is relatively constant, the estimated NH4OH discharge varies from negligible to a maximum of 10,000 g-moles/h. Because these discharges are intermittent, the concentration delivered to the groundwater is a function of soil sorption, microbiological conversion rates of NH4+ to NO2- and NO3-, and groundwater dispersion. This report provides results based on the assumptions of maximum, nominal, and discontinued NH4OH discharges to the crib. Consequently, the results show maximum and realistic estimates of NH4+, NO2-, and NO3- concentrations in the groundwater.
MELE: Maximum Entropy Leuven Estimators
Paris, Quirino
2001-01-01T23:59:59.000Z
of the Generalized Maximum Entropy Estimator of the General... and Douglas Miller, Maximum Entropy Econometrics, Wiley... and California Davis... MELE: Maximum Entropy Leuven Estimators by...
Maximum Entropy Correlated Equilibria
Ortiz, Luis E.
2006-03-20T23:59:59.000Z
We study maximum entropy correlated equilibria in (multi-player) games and provide two gradient-based algorithms that are guaranteed to converge to such equilibria. Although we do not provide convergence rates for these ...
Impact of Uncertainties in Hadron Production on Air-Shower Predictions
T. Pierog; R. Engel; D. Heck
2006-02-08T23:59:59.000Z
At high energy, cosmic rays can only be studied by measuring the extensive air showers they produce in the atmosphere of the Earth. Although the main features of air showers can be understood within a simple model of successive interactions, detailed simulations and a realistic description of particle production are needed to calculate observables relevant to air shower experiments. Currently hadronic interaction models are the main source of uncertainty of such simulations. We will study the effect of using different hadronic models available in CORSIKA and CONEX on extensive air shower predictions.
Balashov, Victor N.; Guthrie, George D.; Hakala, J. Alexandra; Lopano, Christina L. J.; Rimstidt, Donald; Brantley, Susan L.
2013-03-01T23:59:59.000Z
One idea for mitigating the increase in fossil-fuel generated CO2 in the atmosphere is to inject CO2 into subsurface saline sandstone reservoirs. To decide whether to try such sequestration at a globally significant scale will require the ability to predict the fate of injected CO2. Thus, models are needed to predict the rates and extents of subsurface rock-water-gas interactions. Several reactive transport models for CO2 sequestration created in the last decade predicted sequestration in sandstone reservoirs of ~17 to ~90 kg CO2 per m^3. To build confidence in such models, a baseline problem including rock + water chemistry is proposed as the basis for future modeling so that both the models and the parameterizations can be compared systematically. In addition, a reactive diffusion model is used to investigate the fate of injected supercritical CO2 fluid in the proposed baseline reservoir + brine system. In the baseline problem, injected CO2 is redistributed from the supercritical (SC) free phase by dissolution into pore brine and by formation of carbonates in the sandstone. The numerical transport model incorporates a full kinetic description of mineral-water reactions under the assumption that transport is by diffusion only. Sensitivity tests were also run to understand which mineral kinetics reactions are important for CO2 trapping. The diffusion transport model shows that for the first ~20 years after CO2 diffusion initiates, CO2 is mostly consumed by dissolution into the brine to form CO2(aq) (solubility trapping). From 20-200 years, both solubility and mineral trapping are important as calcite precipitation is driven by dissolution of oligoclase. From 200 to 1000 years, mineral trapping is the most important sequestration mechanism, as smectite dissolves and calcite precipitates. Beyond 2000 years, most trapping is due to formation of aqueous HCO3-.
Ninety-seven percent of the maximum CO2 sequestration, 34.5 kg CO2 per m^3 of sandstone, is attained by 4000 years even though the system does not achieve chemical equilibrium until ~25,000 years. This maximum represents about 20% CO2 dissolved as CO2(aq), 50% dissolved as HCO3-(aq), and 30% precipitated as calcite. The extent of sequestration as HCO3- at equilibrium can be calculated from equilibrium thermodynamics and is roughly equivalent to the amount of Na+ in the initial sandstone in a soluble mineral (here, oligoclase). Similarly, the extent of trapping in calcite is determined by the amount of Ca2+ in the initial oligoclase and smectite. Sensitivity analyses show that the rate of CO2 sequestration is sensitive to the mineral-water reaction kinetic constants between approximately 10 and 4000 years. The sensitivity of CO2 sequestration to the rate constants decreases in magnitude respectively from oligoclase to albite to smectite.
Renfrew, Ian
The impact of Greenland on the predictability of European weather systems Supervisors: Sue Gray (U-to-high latitude of Greenland means it has a major influence on the atmospheric circulation of the North Atlantic by the presence of Greenland as is the atmosphere well downstream, for example over the British Isles
Maximum entropy principle for transportation
Bilich, F. [University of Brasilia (Brazil); Da Silva, R. [National Research Council (Brazil)
2008-11-06T23:59:59.000Z
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
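The standard constrained formulation that the abstract contrasts with its dependence formulation (the doubly constrained entropy-maximizing trip-distribution model) can be sketched via iterative proportional fitting. This is a textbook sketch, not the authors' model; the trip totals, cost matrix, and beta value below are invented:

```python
import math

def entropy_trip_distribution(origins, destinations, cost, beta, iters=200):
    """Doubly constrained entropy-maximizing trip distribution:
    T_ij = A_i * B_j * O_i * D_j * exp(-beta * c_ij), with balancing
    factors A, B found by iterative proportional fitting (Furness)."""
    n, m = len(origins), len(destinations)
    A = [1.0] * n
    B = [1.0] * m
    for _ in range(iters):
        # Alternate updates so that origin and destination totals are matched.
        for i in range(n):
            A[i] = 1.0 / sum(B[j] * destinations[j] * math.exp(-beta * cost[i][j])
                             for j in range(m))
        for j in range(m):
            B[j] = 1.0 / sum(A[i] * origins[i] * math.exp(-beta * cost[i][j])
                             for i in range(n))
    return [[A[i] * B[j] * origins[i] * destinations[j] * math.exp(-beta * cost[i][j])
             for j in range(m)] for i in range(n)]

O = [100.0, 200.0]            # invented trips produced at each origin zone
D = [150.0, 150.0]            # invented trips attracted to each destination zone
c = [[1.0, 2.0], [2.0, 1.0]]  # invented travel costs between zones
T = entropy_trip_distribution(O, D, c, beta=0.5)
assert all(abs(sum(T[i]) - O[i]) < 1e-6 for i in range(2))  # rows match origins
```

In the dependence formulation the abstract describes, these explicit constraints are replaced by regression-estimated dependence coefficients.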
Johnson, Brandon Jeffrey
2013-01-01T23:59:59.000Z
With their large impact on the power system and widespread distribution, residential loads provide vast resources that, if utilized correctly, have the potential to…
Cell development obeys maximum Fisher information
B. R. Frieden; R. A. Gatenby
2014-04-29T23:59:59.000Z
Eukaryotic cell development has been optimized by natural selection to obey maximal intracellular flux of messenger proteins. This, in turn, implies maximum Fisher information on angular position about a target nuclear pore complex (NPC). The cell is simply modeled as spherical, with cell membrane (CM) diameter 10 micron and concentric nuclear membrane (NM) diameter 6 micron. The NM contains about 3000 nuclear pore complexes (NPCs). Development requires messenger ligands to travel from the CM to NPC to DNA target binding sites. Ligands acquire negative charge by phosphorylation, passing through the cytoplasm over Newtonian trajectories toward positively charged NPCs (utilizing positive nuclear localization sequences). The CM-NPC channel obeys maximized mean protein flux F and Fisher information I at the NPC, with first-order delta I = 0 and approximate second-order delta I = 0 stability to environmental perturbations. Many of its predictions are confirmed, including the dominance of protein pathways of 1-4 proteins, a 4 nm size for the EGFR protein and the approximate flux value F = 10^16 proteins/m^2-s. After entering the nucleus, each protein ultimately delivers its ligand information to a DNA target site with maximum probability, i.e. maximum Kullback-Leibler entropy HKL. In a smoothness limit HKL approaches I_DNA/2, so that the total CM-NPC-DNA channel obeys maximum Fisher I. Thus maximum information approaches non-equilibrium, one condition for life.
Martin, Andrew C.R.
The SAAP pipeline and database: tools to analyze the impact and predict the pathogenicity of mutations; a new analysis pipeline and web interface; results of machine learning using the structural analysis
Wang, Chien.; Prinn, Ronald G.
The possible trends for atmospheric carbon monoxide in the next 100 yr have been illustrated using a coupled atmospheric chemistry and climate model driven by emissions predicted by a global economic development model. ...
Single ion heat engine with maximum efficiency at maximum power
Obinna Abah; Johannes Rossnagel; Georg Jacob; Sebastian Deffner; Ferdinand Schmidt-Kaler; Kilian Singer; Eric Lutz
2012-05-07T23:59:59.000Z
We propose an experimental scheme to realize a nano heat engine with a single ion. An Otto cycle may be implemented by confining the ion in a linear Paul trap with tapered geometry and coupling it to engineered laser reservoirs. The quantum efficiency at maximum power is analytically determined in various regimes. Moreover, Monte Carlo simulations of the engine are performed that demonstrate its feasibility and its ability to operate at maximum efficiency of 30% under realistic conditions.
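For intuition on "efficiency at maximum power", the classical benchmark is the Curzon-Ahlborn value, which finite-time heat engines such as this single-ion Otto cycle approach in the high-temperature (classical) regime. The sketch below only evaluates these textbook formulas; the reservoir temperatures are arbitrary illustrative values, not parameters from the proposed experiment.

```python
import math

# Carnot efficiency (upper bound) and Curzon-Ahlborn efficiency at maximum
# power, eta* = 1 - sqrt(Tc/Th).  Temperatures below are illustrative.

def carnot(tc, th):
    return 1.0 - tc / th

def curzon_ahlborn(tc, th):
    return 1.0 - math.sqrt(tc / th)

tc, th = 1.0, 4.0              # arbitrary reservoir temperatures, Tc < Th
print(carnot(tc, th))          # 0.75: the reversible upper bound
print(curzon_ahlborn(tc, th))  # 0.5: efficiency at maximum power
```

Note that the efficiency at maximum power is strictly below the Carnot bound, which is only reached in the reversible (zero-power) limit.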
Thorat, Manish R.
2010-07-14T23:59:59.000Z
-coupled stiffness. A test case is implemented to study the impact of variation of seal axial radial clearance on stability characteristics. The 1CV model by Childs and Scharrer and subsequent bulk flow models are based on the assumption of isothermal flow across...
Towards a frequency-dependent discrete maximum principle for the implicit Monte Carlo equations
Wollaber, Allan B [Los Alamos National Laboratory; Larsen, Edward W [Los Alamos National Laboratory; Densmore, Jeffery D [Los Alamos National Laboratory
2010-12-15T23:59:59.000Z
It has long been known that temperature solutions of the Implicit Monte Carlo (IMC) equations can exceed the external boundary temperatures, a so-called violation of the 'maximum principle.' Previous attempts at prescribing a maximum value of the time-step size {Delta}{sub t} that is sufficient to eliminate these violations have recommended a {Delta}{sub t} that is typically too small to be used in practice and that appeared to be much too conservative when compared to numerical solutions of the IMC equations for practical problems. In this paper, we derive a new estimator for the maximum time-step size that includes the spatial-grid size {Delta}{sub x}. This explicitly demonstrates that the effect of coarsening {Delta}{sub x} is to reduce the limitation on {Delta}{sub t}, which helps explain the overly conservative nature of the earlier, grid-independent results. We demonstrate that our new time-step restriction is a much more accurate means of predicting violations of the maximum principle. We discuss how the implications of the new, grid-dependent timestep restriction can impact IMC solution algorithms.
Piotr Plaza; Anthony J. Griffiths; Nick Syred; Thomas Rees-Gralton [Cardiff University, Cardiff (United Kingdom). Centre for Research in Energy
2009-07-15T23:59:59.000Z
The paper describes an investigation of slagging and fouling effects when cofiring coal/biomass blends, by using a predictive model for large utility boilers. This model is based on the use of a zone computational method to determine the midsection temperature profile throughout a boiler, coupled with a thermo-chemical model, to define and assess the risk of elevated slagging and fouling levels during cofiring of solid fuels. The application of this prediction tool was made for a 618 MW thermal wall-fired pulverized coal boiler, cofired with a typical medium-volatile bituminous coal and two substitute fuels, sewage sludge and sawdust. Associated changes in boiler efficiency as well as various heat transfer and thermodynamic parameters of the system were analyzed along with slagging and fouling effects for different cofiring ratios. The results of the modeling revealed that, for increased cofiring of sewage sludge, an elevated risk of slagging and high-temperature fouling occurred, in complete contrast to the effects occurring with the utilization of sawdust as a substitute fuel. 30 refs., 9 figs., 1 tab.
Maximum mass of magnetic white dwarfs
Paret, D Manreza; Horvath, J E
2015-01-01T23:59:59.000Z
We revisit in this work the problem of the maximum masses of magnetized white dwarfs (WD). The impact of a strong magnetic field on the structure equations is addressed. The pressures become anisotropic due to the presence of the magnetic field and split into parallel and perpendicular components. We first construct stable solutions of the TOV equations for the parallel pressures, and find that physical solutions vanish for the perpendicular pressure when $B \gtrsim 10^{13}$ G. This fact establishes an upper bound for the magnetic field and the stability of the configurations in the (quasi) spherical approximation. Our findings also indicate that it is not possible to obtain stable magnetized WD with super-Chandrasekhar masses, because the values of the magnetic field needed for them are higher than this bound. To proceed into the anisotropic regime, we derived structure equations appropriate for a cylindrical metric with anisotropic pressures. From the solutions of the structure equations in cylindrical symme...
Estimating a mixed strategy employing maximum entropy
Golan, Amos; Karp, Larry; Perloff, Jeffrey M.
1996-01-01T23:59:59.000Z
Estimating a Mixed Strategy Employing Maximum Entropy, by Amos Golan, Larry S. Karp, and Jeffrey M. Perloff. Abstract: Generalized maximum entropy may be used to estimate…
The Principle of Maximum Conformality
Brodsky, Stanley J. (SLAC); Di Giustino (SLAC)
2011-04-05T23:59:59.000Z
A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale of the running coupling α_s(μ²). It is common practice to guess a physical scale μ = Q which is of the order of a typical momentum transfer Q in the process, and then vary the scale over the range Q/2 to 2Q. This procedure is clearly problematic since the resulting fixed-order pQCD prediction will depend on the renormalization scheme, and it can even predict negative QCD cross sections at next-to-leading order. Other heuristic methods to set the renormalization scale, such as the 'principle of minimal sensitivity', give unphysical results for jet physics, sum physics into the running coupling not associated with renormalization, and violate the transitivity property of the renormalization group. Such scale-setting methods also give incorrect results when applied to Abelian QED. Note that the factorization scale in QCD is introduced to match nonperturbative and perturbative aspects of the parton distributions in hadrons; it is present even in conformal theory and thus is a completely separate issue from renormalization scale setting. The PMC provides a consistent method for determining the renormalization scale in pQCD. The PMC scale-fixed prediction is independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale-setting in the Abelian limit. The PMC global scale can be derived efficiently at NLO from basic properties of the pQCD cross section. The elimination of the renormalization scheme ambiguity using the PMC will not only increase the precision of QCD tests, but will also increase the sensitivity of colliders to new physics beyond the Standard Model.
Reducing Degeneracy in Maximum Entropy Models of Networks
Horvát, Szabolcs; Toroczkai, Zoltán
2014-01-01T23:59:59.000Z
Based on Jaynes's maximum entropy principle, exponential random graphs provide a family of principled models that allow the prediction of network properties as constrained by empirical data. However, their use is often hindered by the degeneracy problem characterized by spontaneous symmetry-breaking, where predictions simply fail. Here we show that degeneracy appears when the corresponding density of states function is not log-concave. We propose a solution to the degeneracy problem for a large class of models by exploiting the nonlinear relationships between the constrained measures to convexify the domain of the density of states. We demonstrate the effectiveness of the method on examples, including on Zachary's karate club network data.
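As a concrete baseline, the simplest exponential random graph, the maximum entropy ensemble constrained only on the expected number of edges, is degeneracy-free: every edge appears independently with probability p = e^θ/(1+e^θ). The sketch below illustrates that baseline (graph size and θ are illustrative); degeneracy arises only for richer constraints such as triangle counts, which is the regime the paper addresses.

```python
import math
import random

# Edge-count-constrained exponential random graph: the maximum entropy
# distribution over graphs with a prescribed expected edge count factorizes
# into independent Bernoulli edges with p = exp(theta) / (1 + exp(theta)).

def edge_probability(theta):
    return math.exp(theta) / (1.0 + math.exp(theta))

def sample_ergm_edges(n, theta, rng):
    """Sample one graph on n nodes; returns the edge list."""
    p = edge_probability(theta)
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

theta = 0.0                        # theta = 0 gives p = 0.5
n = 20
p = edge_probability(theta)
expected_edges = p * n * (n - 1) / 2
print(p, expected_edges)           # 0.5 95.0

rng = random.Random(42)
edges = sample_ergm_edges(n, theta, rng)
print(0 <= len(edges) <= n * (n - 1) // 2)  # True
```

Because the density of states of this one-parameter model is log-concave, no symmetry breaking occurs; the pathology the paper fixes appears only for constraint sets that violate that condition.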
Some interesting consequences of the maximum entropy production principle
Martyushev, L. M. [Russian Academy of Sciences, Institute of Industrial Ecology, Ural Division (Russian Federation)], E-mail: mlm@ecko.uran.ru
2007-04-15T23:59:59.000Z
Two nonequilibrium phase transitions (morphological and hydrodynamic) are analyzed by applying the maximum entropy production principle. Quantitative analysis is compared with experiment for the first time. Nonequilibrium crystallization of ice and the laminar-turbulent flow transition in a circular pipe are examined as examples of morphological and hydrodynamic transitions, respectively. For the latter transition, a minimum critical Reynolds number of 1200 is predicted. A discussion of this important and interesting result is presented.
Olson, Jessica J.
2011-01-01T23:59:59.000Z
…this study. Changes in hydrology are not the only potential… A Tidal Hydrology Assessment for Reconnecting Spring Branch… may change the tidal hydrology and impact the area occupied…
S.P. Rupp
2005-10-01T23:59:59.000Z
In May 2000, the Cerro Grande Fire burned approximately 17,200 ha in north-central New Mexico as the result of an escaped prescribed burn initiated by Bandelier National Monument. The interaction of large-scale fires, vegetation, and elk is an important management issue, but few studies have addressed the ecological implications of vegetative succession and landscape heterogeneity on ungulate populations following large-scale disturbance events. Primary objectives of this research were to identify elk movement pathways on local and landscape scales, to determine environmental factors that influence elk movement, and to evaluate movement and distribution patterns in relation to spatial and temporal aspects of the Cerro Grande Fire. Data collection and assimilation reflect the collaborative efforts of National Park Service, U.S. Forest Service, and Department of Energy (Los Alamos National Laboratory) personnel. Geographic positioning system (GPS) collars were used to track 54 elk over a period of 3+ years and locational data were incorporated into a multi-layered geographic information system (GIS) for analysis. Preliminary tests of GPS collar accuracy indicated a strong effect of 2D fixes on position acquisition rates (PARs) depending on time of day and season of year. Slope, aspect, elevation, and land cover type affected dilution of precision (DOP) values for both 2D and 3D fixes, although significant relationships varied from positive to negative making it difficult to delineate the mechanism behind significant responses. Two-dimensional fixes accounted for 34% of all successfully acquired locations and may affect results in which those data were used. Overall position acquisition rate was 93.3% and mean DOP values were consistently in the range of 4.0 to 6.0 leading to the conclusion collar accuracy was acceptable for modeling purposes. SAVANNA, a spatially explicit, process-oriented ecosystem model, was used to simulate successional dynamics. 
Inputs to the SAVANNA included a land cover map, long-term weather data, soil maps, and a digital elevation model. Parameterization and calibration were conducted using field plots. Model predictions of herbaceous biomass production and weather were consistent with available data and spatial interpolations of snow were considered reasonable for this study. Dynamic outputs generated by SAVANNA were integrated with static variables, movement rules, and parameters developed for the individual-based model through the application of a habitat suitability index. Model validation indicated reasonable model fit when compared to an independent test set. The finished model was applied to 2 realistic management scenarios for the Jemez Mountains and management implications were discussed. Ongoing validation of the individual-based model presented in this dissertation provides an adaptive management tool that integrates interdisciplinary experience and scientific information, which allows users to make predictions about the impact of alternative management policies.
Maximum stellar mass versus cluster membership number revisited
Th. Maschberger; C. J. Clarke
2008-09-05T23:59:59.000Z
We have made a new compilation of observations of maximum stellar mass versus cluster membership number from the literature, which we analyse for consistency with the predictions of a simple random drawing hypothesis for stellar mass selection in clusters. Previously, Weidner and Kroupa have suggested that the maximum stellar mass is lower, in low mass clusters, than would be expected on the basis of random drawing, and have pointed out that this could have important implications for steepening the integrated initial mass function of the Galaxy (the IGIMF) at high masses. Our compilation demonstrates how the observed distribution in the plane of maximum stellar mass versus membership number is affected by the method of target selection; in particular, rather low n clusters with large maximum stellar masses are abundant in observational datasets that specifically seek clusters in the environs of high mass stars. Although we do not consider our compilation to be either complete or unbiased, we discuss the method by which such data should be statistically analysed. Our very provisional conclusion is that the data is not indicating any striking deviation from the expectations of random drawing.
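The random-drawing hypothesis discussed above is easy to simulate: draw N masses independently from a power-law IMF and record the maximum. A minimal sketch, assuming a Salpeter-like slope of 2.35 and illustrative mass limits of 0.5 and 150 solar masses (these numbers are conventional choices, not values from the paper):

```python
import random

# Random drawing in miniature: sample N stellar masses from a truncated
# power-law IMF, dN/dm ~ m^(-2.35), via inverse-transform sampling, and
# record the maximum.  Under pure random drawing the typical maximum mass
# rises with cluster membership number N.

ALPHA, M_LO, M_HI = 2.35, 0.5, 150.0   # illustrative slope and limits

def draw_mass(rng):
    """Inverse-transform sample from the truncated power law."""
    a = 1.0 - ALPHA
    u = rng.random()
    return (M_LO**a + u * (M_HI**a - M_LO**a)) ** (1.0 / a)

def max_mass(n_stars, rng):
    return max(draw_mass(rng) for _ in range(n_stars))

rng = random.Random(1)
small = sum(max_mass(30, rng) for _ in range(200)) / 200
large = sum(max_mass(3000, rng) for _ in range(200)) / 200
print(small < large)   # True: richer clusters host larger maximum masses
```

Comparing the observed maximum-mass versus membership-number distribution against many such Monte Carlo realizations is the kind of statistical test the compilation calls for.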
Impact of graphene polycrystallinity on the performance of graphene field-effect transistors
Jiménez, David; Chaves, Ferney [Departament d'Enginyeria Electrònica, Escola d'Enginyeria, Universitat Autònoma de Barcelona, 08193 Bellaterra (Spain)]; Cummings, Aron W.; Van Tuan, Dinh [ICN2, Institut Català de Nanociència i Nanotecnologia, Campus UAB, 08193 Bellaterra (Barcelona) (Spain)]; Kotakoski, Jani [Faculty of Physics, University of Vienna, Boltzmanngasse 5, 1090 Wien (Austria); Department of Physics, University of Helsinki, P.O. Box 43, 00014 University of Helsinki (Finland)]; Roche, Stephan [ICN2, Institut Català de Nanociència i Nanotecnologia, Campus UAB, 08193 Bellaterra (Barcelona) (Spain); ICREA, Institució Catalana de Recerca i Estudis Avançats, 08070 Barcelona (Spain)]
2014-01-27T23:59:59.000Z
We have used a multi-scale physics-based model to predict how the grain size and different grain boundary morphologies of polycrystalline graphene will impact the performance metrics of graphene field-effect transistors. We show that polycrystallinity has a negative impact on the transconductance, which translates to a severe degradation of the maximum and cutoff frequencies. On the other hand, polycrystallinity has a positive impact on current saturation, and a negligible effect on the intrinsic gain. These results reveal the complex role played by graphene grain boundaries and can be used to guide the further development and optimization of graphene-based electronic devices.
Maximum entropy segmentation of broadcast news
Christensen, Heidi; Kolluru, BalaKrishna; Gotoh, Yoshihiko; Renals, Steve
2005-01-01T23:59:59.000Z
speech recognizer and subsequently segmenting the text into utterances and topics. A maximum entropy approach is used to build statistical models for both utterance and topic segmentation. The experimental work addresses the effect on performance...
Maximum likelihood reconstruction for the Daya Bay Experiment
Xia Dongmei
2014-03-07T23:59:59.000Z
The Daya Bay Reactor Neutrino Experiment is designed to precisely determine the neutrino mixing angle θ13. In this paper, we report a maximum likelihood (ML) method to reconstruct the vertex and energy of events in the antineutrino detector, based on a simplified optical model that describes light propagation. We calibrate the key parameters of the optical model with a Co-60 source, by comparing the predicted charges of the PMTs with the observed charges. With the optimized parameters, the resolution of the vertex reconstruction is about 25 cm for Co-60 gammas.
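The core of such a likelihood-based reconstruction can be shown in one dimension: predict each PMT charge from a candidate vertex via a simple optical model and pick the vertex minimizing the Poisson negative log-likelihood. Everything below (PMT geometry, light yield, the 1/r² falloff) is an invented toy, far simpler than the optical model in the paper:

```python
import math

# Toy 1-D analogue of ML vertex reconstruction: predicted PMT charge falls
# off as 1/r^2 from the vertex; the vertex estimate maximizes the Poisson
# likelihood of the observed charges.  Geometry and yield are illustrative.

PMT_X = [-2.0, -1.0, 1.0, 2.0]   # PMT positions (m), invented
LIGHT_YIELD = 100.0              # photoelectrons at unit distance, invented

def predicted(vertex_x):
    # +0.1 regularizes the 1/r^2 falloff near a PMT
    return [LIGHT_YIELD / ((x - vertex_x) ** 2 + 0.1) for x in PMT_X]

def neg_log_likelihood(vertex_x, observed):
    # Poisson NLL up to constants: sum(mu - q * log(mu))
    mu = predicted(vertex_x)
    return sum(m - q * math.log(m) for m, q in zip(mu, observed))

def reconstruct(observed, lo=-1.9, hi=1.9, steps=381):
    """Grid search for the maximum likelihood vertex."""
    grid = [lo + i * (hi - lo) / (steps - 1) for i in range(steps)]
    return min(grid, key=lambda v: neg_log_likelihood(v, observed))

true_vertex = 0.5
observed = predicted(true_vertex)       # noiseless "data" for the check
print(round(reconstruct(observed), 2))  # 0.5
```

With noiseless data each Poisson term is minimized at mu = q, so the grid search recovers the true vertex; in practice a continuous optimizer and a calibrated optical model replace the toy pieces here.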
…and agriculture increases while water supply decreases (ProClim and OcCC, 2007) as climate change alters the hydrologic cycle… Assessment of the economic impact of climate change and of different adaptation strategies in the water sector is essential in Switzerland, and is mandated by the Federal Office for the Environment (FOEN). 4) Climate change and water resources
Predicting and Utilizing the Vehicle's Past and Future Road Grade...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
…of Advanced Powertrain Systems; Impact of Vehicle Efficiency Improvements on Powertrain Design; The Impact of Using Derived Fuel Consumption Maps to Predict Fuel Consumption…
Weak Scale From the Maximum Entropy Principle
Yuta Hamada; Hikaru Kawai; Kiyoharu Kawana
2014-09-23T23:59:59.000Z
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal{O}}(300\text{ GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self-coupling, the gauge couplings, and the Yukawa couplings, are fixed. Roughly speaking, we find that the weak scale is given by \begin{equation} v_{h}\sim\frac{T_{BBN}^{2}}{M_{pl}y_{e}^{5}}. \end{equation}
Integrating Correlated Bayesian Networks Using Maximum Entropy
Jarman, Kenneth D.; Whitney, Paul D.
2011-08-30T23:59:59.000Z
We consider the problem of generating a joint distribution for a pair of Bayesian networks that preserves the multivariate marginal distribution of each network and satisfies prescribed correlation between pairs of nodes taken from both networks. We derive the maximum entropy distribution for any pair of multivariate random vectors and prescribed correlations and demonstrate numerical results for an example integration of Bayesian networks.
QCD Level Density from Maximum Entropy Method
Shinji Ejiri; Tetsuo Hatsuda
2005-09-24T23:59:59.000Z
We propose a method to calculate the QCD level density directly from the thermodynamic quantities obtained by lattice QCD simulations with the use of the maximum entropy method (MEM). Understanding QCD thermodynamics from QCD spectral properties has its own importance. Also it has a close connection to phenomenological analyses of the lattice data as well as experimental data on the basis of hadronic resonances. Our feasibility study shows that the MEM can provide a useful tool to study QCD level density.
Tissue Radiation Response with Maximum Tsallis Entropy
Sotolongo-Grau, O.; Rodriguez-Perez, D.; Antoranz, J. C. [UNED, Departamento de Fisica Matematica y de Fluidos, 28040 Madrid (Spain)]; Sotolongo-Costa, Oscar [University of Havana, Catedra de Sistemas Complejos Henri Poincare, Havana 10400 (Cuba)]
2010-10-08T23:59:59.000Z
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
Wang, J.A.
1998-05-01T23:59:59.000Z
The NRC Regulatory Guide 1.99 Revision 2 was based on 177 surveillance data points and the EPRI data base, where 76% of the 177 data points and 60% of the EPRI data base were from Westinghouse's data. Therefore, other vendors' radiation environments may not be properly characterized by R.G. 1.99's prediction. To minimize scatter from the influences of the irradiation temperature, neutron energy spectrum, displacement rate, and plant operation procedures on embrittlement models, improved embrittlement models based on group data that have similar radiation environments and reactor design and operation criteria are examined. A total of 653 shift data points from the current FR-EDB, including 397 Westinghouse data, 93 B and W data, 37 CE data, and 106 GE data, are used. A nonlinear least-squares fitting FORTRAN program, incorporating a Monte Carlo procedure with 35% and 10% uncertainty assigned to the fluence and shift data, respectively, was written for this study. In order to have the same adjusted fluence value for the weld and plate material in the same capsule, the Monte Carlo least-squares fitting procedure has the ability to adjust the fluence values while running the weld and plate formulas simultaneously. Six chemical components, namely copper, nickel, phosphorus, sulfur, manganese, and molybdenum, were considered in the development of the new embrittlement models. The overall percentage reduction of the 2-sigma margins per delta RTNDT predicted by the new embrittlement models, compared to that of R.G. 1.99, for weld and base materials is 42% and 36%, respectively. Currently, the need for thermal annealing is seriously being considered for several A302B-type RPVs. From the macroscopic viewpoint, even if base and weld materials were verified from mechanical tests to be fully recovered, the linking heat-affected zone (HAZ) material has not been properly characterized. Thus the final overall recovery will still be unknown.
The great data scatter of the HAZ metals may be the result of the metallurgical heterogeneity that exists in the HAZ. The proposed data fitting procedure for the HAZ material is presented in the paper.
A global maximum power point tracking DC-DC converter
Duncan, Joseph, 1981-
2005-01-01T23:59:59.000Z
This thesis describes the design and validation of a maximum power point tracking DC-DC converter capable of following the true global maximum power point in the presence of other local maxima. It does this without the ...
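The distinction between local and global tracking can be seen on a synthetic power-voltage curve with two peaks, as occurs under partial shading: a naive hill climber started at low voltage stalls on the first peak, while a full sweep locates the global one. The curve shape and voltages below are invented for illustration and are not a physical PV model:

```python
import math

# Synthetic two-peak P-V curve: a shaded string's curve has several local
# maxima.  A perturb-and-observe style hill climber can lock onto the wrong
# peak; a periodic global sweep avoids that.

def pv_power(v):
    # Two Gaussian "bumps" standing in for the two local maxima (invented)
    bump1 = 40.0 * math.exp(-((v - 8.0) / 3.0) ** 2)
    bump2 = 70.0 * math.exp(-((v - 25.0) / 4.0) ** 2)
    return bump1 + bump2

def hill_climb(v0, step=0.1):
    """Greedy local tracker: stops at the first local maximum."""
    v = v0
    while pv_power(v + step) > pv_power(v):
        v += step
    return v

def global_sweep(v_min=0.0, v_max=40.0, step=0.1):
    """Full-range sweep: finds the global maximum power point."""
    vs = [v_min + i * step for i in range(int((v_max - v_min) / step) + 1)]
    return max(vs, key=pv_power)

local = hill_climb(0.0)            # stalls near the 8 V local peak
best = global_sweep()              # finds the true peak near 25 V
print(round(local), round(best))   # 8 25
```

A practical global tracker sweeps only occasionally (a sweep wastes energy) and hands off to a local tracker between sweeps; this sketch shows only why the sweep is needed at all.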
Ocean Circulation During the Last Glacial Maximum Simulated by PMIP3 Climate Models
Schmittner, Andreas
…in the intensity of the Atlantic Overturning Circulation (distinguished by the local maximum at approximately 30°N)… In the plot corresponding to the World Ocean Circulation, an increase in the Deep Circulation, associated… of the water masses as well as the impact on ocean carbon storage. References: [1] Godfrey J. S., Geophysics…
Magnetofossil spike during the Paleocene-Eocene thermal maximum: Ferromagnetic resonance, rock… Timothy D. Raub, Dirk Schumann, Hojatollah Vali, Alexei V. Smirnov, and Joseph L. Kirschvink. …controversial hypothesis that a cometary impact triggered the PETM. Here we present ferromagnetic resonance (FMR…
Ionization and maximum energy of nuclei in shock acceleration theory
Morlino, Giovanni
2011-01-01T23:59:59.000Z
We study the acceleration of heavy nuclei at SNR shocks when the process of ionization is taken into account. Heavy atoms ($Z_N >$ few) in the interstellar medium which start the diffusive shock acceleration (DSA) are never fully ionized at the moment of injection. The ionization occurs during the acceleration process, when atoms already move relativistically. For typical environments around SNRs the photo-ionization due to the background galactic radiation dominates over Coulomb collisions. The main consequence of ionization is the reduction of the maximum energy which ions can achieve with respect to the standard result of the DSA. In fact the photo-ionization has a timescale comparable to the beginning of the Sedov-Taylor phase, hence the maximum energy is no longer proportional to the nuclear charge, as predicted by standard DSA, but rather to the effective ions' charge during the acceleration process, which is smaller than the total nuclear charge $Z_N$. This result can have a direct consequence in the pred...
Accelerated maximum likelihood parameter estimation for stochastic biochemical systems
Daigle, Bernie J; Roh, Min K; Petzold, Linda R; Niemi, Jarad
2012-01-01T23:59:59.000Z
…as: Daigle et al.: Accelerated maximum likelihood parame… Gillespie DT: Approximate accelerated stochastic simulation… ARTICLE, Open Access: Accelerated maximum likelihood parameter…
Maximum screening fields of superconducting multilayer structures
Gurevich, Alex
2015-01-01T23:59:59.000Z
It is shown that a multilayer composed of alternating thin superconducting and insulating layers on a thick substrate can fully screen an applied magnetic field exceeding the superheating fields $H_s$ of both the superconducting layers and the substrate; the maximum Meissner field is achieved at an optimum multilayer thickness. For instance, a dirty layer of thickness $\sim 0.1\; \mu$m at the Nb surface could increase $H_s\simeq 240$ mT of clean Nb up to $H_s\simeq 290$ mT. Optimized multilayers of Nb$_3$Sn, NbN, some of the iron pnictides, or alloyed Nb deposited onto the surface of Nb resonator cavities could potentially double the rf breakdown field, pushing the peak accelerating electric fields above 100 MV/m while protecting the cavity from dendritic thermomagnetic avalanches caused by local penetration of vortices.
Channel State Prediction in Cognitive Radio, Part II: Single-User Prediction
Qiu, Robert Caiming
Channel State Prediction in Cognitive Radio, Part II: Single-User Prediction. Zhe Chen, Nan Guo. …prediction of channel state is proposed to minimize the negative impact of response delays caused by hardware… Single-user (SU) prediction is proposed and examined. In order to have convincing performance evaluation results, real-world…
Ray, R.M. (DOE Bartlesville Energy Technology Center, Bartlesville, OK (United States))
1988-10-01T23:59:59.000Z
PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: 1) chemical flooding; 2) carbon dioxide miscible flooding; 3) in-situ combustion; 4) polymer flooding; and 5) steamflood. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs, which have been previously waterflooded to residual oil saturation. Thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes. The IBM PC/AT version includes a plotting capability to produces a graphic picture of the predictive model results.
Satellite Application Facility for Numerical Weather Prediction
Stoffelen, Ad
…into Numerical Weather Prediction (NWP) models. However, the impact of such observations is often crit… (NWP SAF, Satellite Application Facility for Numerical Weather Prediction, Document NWPSAF-KN-VS-002, Stoffelen, KNMI.)
Maximum Entropy Method Approach to $?$ Term
Masahiro Imachi; Yasuhiko Shinno; Hiroshi Yoneyama
2004-06-09T23:59:59.000Z
In Monte Carlo simulations of lattice field theory with a $\theta$ term, one confronts the complex weight problem, or the sign problem. This is circumvented by performing the Fourier transform of the topological charge distribution $P(Q)$. This procedure, however, causes the flattening phenomenon of the free energy $f(\theta)$, which makes study of the phase structure infeasible. In order to treat this problem, we apply the maximum entropy method (MEM) to a Gaussian form of $P(Q)$, which serves as a good example to test whether the MEM can be applied effectively to the $\theta$ term. We study the case with flattening as well as that without flattening. In the latter case, the results of the MEM agree with those obtained from the direct application of the Fourier transform. For the former, the MEM gives a smoother $f(\theta)$ than that of the Fourier transform. Among the various default models investigated, the images which yield the least error do not show flattening, although some others cannot be excluded given the uncertainty related to statistical error.
Broader source: Energy.gov [DOE]
Predictive maintenance aims to detect equipment degradation and address problems as they arise. The result indicates potential issues, which are controlled or eliminated prior to any significant system deterioration.
Ray, R.M. (DOE Bartlesville Energy Technology Technology Center, Bartlesville, OK (United States))
1986-12-01T23:59:59.000Z
PREDICTIVE MODELS is a collection of five models - CFPM, CO2PM, ICPM, PFPM, and SFPM - used in the 1982-1984 National Petroleum Council study of enhanced oil recovery (EOR) potential. Each pertains to a specific EOR process designed to squeeze additional oil from aging or spent oil fields. The processes are: 1) chemical flooding, where soap-like surfactants are injected into the reservoir to wash out the oil; 2) carbon dioxide miscible flooding, where carbon dioxide mixes with the lighter hydrocarbons making the oil easier to displace; 3) in-situ combustion, which uses the heat from burning some of the underground oil to thin the product; 4) polymer flooding, where thick, cohesive material is pumped into a reservoir to push the oil through the underground rock; and 5) steamflood, where pressurized steam is injected underground to thin the oil. CFPM, the Chemical Flood Predictive Model, models micellar (surfactant)-polymer floods in reservoirs, which have been previously waterflooded to residual oil saturation. Thus, only true tertiary floods are considered. An option allows a rough estimate of oil recovery by caustic or caustic-polymer processes. CO2PM, the Carbon Dioxide miscible flooding Predictive Model, is applicable to both secondary (mobile oil) and tertiary (residual oil) floods, and to either continuous CO2 injection or water-alternating gas processes. ICPM, the In-situ Combustion Predictive Model, computes the recovery and profitability of an in-situ combustion project from generalized performance predictive algorithms. PFPM, the Polymer Flood Predictive Model, is switch-selectable for either polymer or waterflooding, and an option allows the calculation of the incremental oil recovery and economics of polymer relative to waterflooding. SFPM, the Steamflood Predictive Model, is applicable to the steam drive process, but not to cyclic steam injection (steam soak) processes.
GMM Estimation of a Maximum Entropy Distribution with Interval Data
Perloff, Jeffrey M.
GMM Estimation of a Maximum Entropy Distribution with Interval Data. Ximing Wu and Jeffrey M. Perloff. ... estimate it using a simple yet flexible maximum entropy density. Our Monte Carlo simulations show that the proposed maximum entropy density is able to approximate various distributions extremely well. The two ...
Rafael Brada; Mordehai Milgrom
1998-12-21T23:59:59.000Z
We have recently discovered that the modified dynamics (MOND) implies some universal upper bound on the acceleration that can be contributed by a `dark halo'--assumed in a Newtonian analysis to account for the effects of MOND. Not surprisingly, the limit is of the order of the acceleration constant of the theory. This can be contrasted directly with the results of structure-formation simulations. The new limit is substantial and different from earlier MOND acceleration limits (discussed in connection with the MOND explanation of the Freeman law for galaxy disks, and the Fish law for ellipticals): It pertains to the `halo', and not to the observed galaxy; it is absolute, and independent of further physical assumptions on the nature of the galactic system; and it applies at all radii, whereas the other limits apply only to the mean acceleration in the system.
Optimization Online - Economic Impacts of Advanced Weather ...
Victor M. Zavala
2010-03-05T23:59:59.000Z
Mar 5, 2010 ... Economic Impacts of Advanced Weather Forecasting on Energy System ... that state-of-the-art numerical weather prediction (NWP) models can ...
Model Predictive Control of Wind Turbines
Model Predictive Control of Wind Turbines. Martin Klauco. Kongens Lyngby 2012, IMM-MSc-2012-65. Summary: Wind turbines are the biggest part of the green energy industry. Increasing interest ... control strategies. The control strategy has a significant impact on wind turbine operation on many levels.
A Near Maximum Likelihood Decoding Algorithm for MIMO Systems ...
2005-10-05T23:59:59.000Z
Jul 30, 2005 ... the randomization procedure of [43], we bijectively map the .... ?1x are also in the integer grid. ... in a Maximum A Posteriori (MAP) decoder by.
Solving Maximum-Entropy Sampling Problems Using Factored Masks
Samuel Burer
2005-03-02T23:59:59.000Z
Mar 2, 2005 ... Abstract: We present a practical approach to Anstreicher and Lee's masked spectral bound for maximum-entropy sampling, and we describe ...
A masked spectral bound for maximum-entropy sampling
Kurt Anstreicher
2003-09-16T23:59:59.000Z
Sep 16, 2003 ... Abstract: We introduce a new masked spectral bound for the maximum-entropy sampling problem. This bound is a continuous generalization of ...
Maximum entropy generation in open systems: the Fourth Law?
Umberto Lucia
2010-11-17T23:59:59.000Z
This paper develops an analytical and rigorous formulation of the maximum entropy generation principle. The result is suggested as the Fourth Law of Thermodynamics.
analog fixed maximum: Topics by E-print Network
state for given entanglement, which can be viewed as an analogue of the Jaynes maximum entropy principle. Pawel Horodecki; Ryszard Horodecki; Michal Horodecki, 1998-05-22...
IBM Research Report Solving Maximum-Entropy Sampling ...
2005-02-28T23:59:59.000Z
Feb 28, 2005 ... Solving Maximum-Entropy Sampling Problems Using. Factored Masks. Samuel Burer. Department of Management Sciences. University of Iowa.
A Requirement for Significant Reduction in the Maximum BTU Input Rate of Decorative Vented Gas Fireplaces Would Impose Substantial Burdens on Manufacturers
Maximum Constant Boost Control of the Z-Source Inverter
Tolbert, Leon M.
Maximum Constant Boost Control of the Z-Source Inverter. Miaosen Shen, Jin Wang, Alan Joseph ... Abstract: This paper proposes two maximum constant boost control methods for the Z-source inverter. ... modulation index is analyzed in detail and verified by simulation and experiment. Keywords: Z-source inverter.
Appendix 22: Draft Nutrient Management Plan and Total Maximum Daily Load for Flathead Lake, Montana
Draft Nutrient Management Plan and Total Maximum Daily Load for Flathead Lake, Montana (draft of October 30, 2001). Contents include Section 3.0, Applicable Water Quality Standards.
Exact computation of the Maximum Entropy Potential of spiking neural networks models
Cofre, Rodrigo
2014-01-01T23:59:59.000Z
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. But, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models.
LANDFILL OPERATION FOR CARBON SEQUESTRATION AND MAXIMUM METHANE EMISSION CONTROL
Don Augenstein; Ramin Yazdani; Rick Moore; Michelle Byars; Jeff Kieffer; Professor Morton Barlaz; Rinav Mehta
2000-02-26T23:59:59.000Z
Controlled landfilling is an approach to manage solid waste landfills, so as to rapidly complete methane generation, while maximizing gas capture and minimizing the usual emissions of methane to the atmosphere. With controlled landfilling, methane generation is accelerated to more rapid and earlier completion to full potential by improving conditions (principally moisture, but also temperature) to optimize biological processes occurring within the landfill. Gas is contained through use of surface membrane cover. Gas is captured via porous layers, under the cover, operated at slight vacuum. A field demonstration project has been ongoing under NETL sponsorship for the past several years near Davis, CA. Results have been extremely encouraging. Two major benefits of the technology are reduction of landfill methane emissions to minuscule levels, and the recovery of greater amounts of landfill methane energy in much shorter times, more predictably, than with conventional landfill practice. With the large amount of US landfill methane generated, and greenhouse potency of methane, better landfill methane control can play a substantial role both in reduction of US greenhouse gas emissions and in US renewable energy. The work described in this report, to demonstrate and advance this technology, has used two demonstration-scale cells of size (8000 metric tons [tonnes]), sufficient to replicate many heat and compaction characteristics of larger ''full-scale'' landfills. An enhanced demonstration cell has received moisture supplementation to field capacity. This is the maximum moisture waste can hold while still limiting liquid drainage rate to minimal and safely manageable levels. The enhanced landfill module was compared to a parallel control landfill module receiving no moisture additions. Gas recovery has continued for a period of over 4 years. 
It is quite encouraging that the enhanced cell methane recovery has been close to 10-fold that experienced with conventional landfills. This is the highest methane recovery rate per unit waste, and thus progress toward stabilization, documented anywhere for such a large waste mass. This high recovery rate is attributed to moisture, and elevated temperature attained inexpensively during startup. Economic analyses performed under Phase I of this NETL contract indicate ''greenhouse cost effectiveness'' to be excellent. Other benefits include substantial waste volume loss (over 30%) which translates to extended landfill life. Other environmental benefits include rapidly improved quality and stabilization (lowered pollutant levels) in liquid leachate which drains from the waste.
Impact of risk on the maximum bid price for farm land
Miles, Jennifer Doughty
2012-06-07T23:59:59.000Z
Contents include: sample output for the computer simulation model; final page of computer output; assumptions made in model solutions; alternative assumptions about the buyer's annual subjective probability distributions for the net ... Assumptions made when developing the computer simulation model include: (1) the buyer's portfolio of non-farm assets (i.e., stocks, bonds, non-farm business assets, etc.) represents a small and insignificant part of his total investment portfolio. This allows one...
Harrington, Jerry Y.
Christopher M. Hartman and Jerry Y. Harrington, Department of Meteorology, The Pennsylvania State University (November 2004). ABSTRACT: The effects of solar heating and infrared cooling on the vapor depositional growth of ... as much as 45 min. Including infrared cooling as well as solar heating in the LES and microphysical bin ...
EERE Takes Important Steps to Ensure Maximum Impact of Technology Program
Office of Environmental Management (EM)
Burin des Roziers, T.
2011-01-01T23:59:59.000Z
Mathematics. In optimal prediction. Communications press, and R. Kupferman, On the prediction of large-scale dynamics, and D. Levy, Optimal prediction and perturbation theory.
Liu, Jian
2008-01-01T23:59:59.000Z
1992). J. Skilling, in Maximum Entropy and Bayesian Methods, 1989). S. F. Gull, in Maximum Entropy and Bayesian Methods, with the classical maximum entropy (CME) technique (MEAC-
Improved constraints on transit time distributions from argon 39: A maximum entropy approach
Holzer, Mark; Primeau, Francois W
2010-01-01T23:59:59.000Z
Gull (1991), Bayesian maximum entropy image reconstruction; ... Atlantic ventilated? Maximum entropy inversions of bottle ... from argon 39: A maximum entropy approach. Mark Holzer
Soffer, Bernard H; Kikuchi, Ryoichi
1994-01-01T23:59:59.000Z
of Confidence for Maximum Entropy Restoration and Estimation (April 3, 1992). The Maximum Entropy method, using physical ... are discussed. Maximum Entropy (ME) estimation has been
On the maximum pressure rise rate in boosted HCCI operation
Wildman, Craig B.
This paper explores the combined effects of boosting, intake air temperature, trapped residual gas fraction, and dilution on the Maximum Pressure Rise Rate (MPRR) in a boosted single cylinder gasoline HCCI engine with ...
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01T23:59:59.000Z
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
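The screening logic this abstract describes (increase PV until a steady-state voltage limit is violated) can be sketched with a toy single-feeder model. This is an illustrative stand-in for the paper's GridLAB-D simulations: the impedance, load level, and linearized voltage-drop relation are made-up assumptions, and only the ANSI Range A band (0.95-1.05 pu) is taken from the text.

```python
# Toy screen for maximum PV penetration on a radial feeder (illustrative only).
V_SRC = 1.0   # per-unit source voltage (assumed)
R = 0.05      # per-unit feeder resistance; purely resistive line for simplicity
LOAD = 0.8    # per-unit real power demand at the load bus (assumed)

def load_bus_voltage(pv_pu):
    # Net power flowing from source toward the bus; negative means reverse flow.
    net = LOAD - pv_pu
    # Crude linearized voltage drop: dV ~ R * P (per-unit, unity power factor).
    return V_SRC - R * net

# Sweep PV size and record the largest size that keeps voltage in ANSI Range A.
max_pv = 0.0
pv = 0.0
while pv <= 3.0:
    if 0.95 <= load_bus_voltage(pv) <= 1.05:
        max_pv = pv
    pv += 0.01

print("max PV (pu of this toy feeder):", round(max_pv, 2))
```

Even this caricature reproduces the qualitative result driving the study: voltage rise from reverse power flow, not the PV size itself, is what binds first, so the admissible penetration depends on where on the feeder the generation sits.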
Maximum containment : the most controversial labs in the world
Bruzek, Alison K. (Allison Kim)
2013-01-01T23:59:59.000Z
In 2002, following the September 11th attacks and the anthrax letters, the United States allocated money to build two maximum containment biology labs. Called Biosafety Level 4 (BSL-4) facilities, these labs were built to ...
Multichannel Blind Identification: From Subspace to Maximum Likelihood Methods
Tong, Lang
Multichannel Blind Identification: From Subspace to Maximum Likelihood Methods. Lang Tong, Member, IEEE, and Sylvie Perreau. Invited Paper. A review of recent blind channel estimation algorithms is presented. ... Keywords: blind equalization, parameter estimation, system identification. I. INTRODUCTION. A. What Is Blind ...
Multi-Class Classification with Maximum Margin Multiple Kernel
Tomkins, Andrew
(named OBSCURE and UFO-MKL, respectively) are used to optimize primal versions of equivalent problems; the OBSCURE and UFO-MKL algorithms are compared against MCMKL.
Maximum entropy method and oscillations in the diffraction cone
O. Dumbrajs; J. Kontros; A. Lengyel
2000-07-15T23:59:59.000Z
The maximum entropy method has been applied to investigate the oscillating structure in the pbarp- and pp-elastic scattering differential cross-section at high energy and small momentum transfer. Oscillations satisfying quite realistic reliability criteria have been found.
Efficiency at maximum power of interacting molecular machines
N. Golubeva; A. Imparato
2012-10-22T23:59:59.000Z
We investigate the efficiency of systems of molecular motors operating at maximum power. We consider two models of kinesin motors on a microtubule: for both the simplified and the detailed model, we find that the many-body exclusion effect enhances the efficiency at maximum power of the many-motor system, with respect to the single motor case. Remarkably, we find that this effect occurs in a limited region of the system parameters, compatible with the biologically relevant range.
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Henryk Gzyl; Enrique ter Horst
2007-09-04T23:59:59.000Z
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by large level of noise. The method is developed in the context on a concrete example: that of estimation of the parameter in an exponential distribution. We compare the performance of our method with the bayesian and maximum likelihood approaches.
The maximum entropy tecniques and the statistical description of systems
B. Z. Belashev; M. K. Suleymanov
2001-10-19T23:59:59.000Z
The maximum entropy technique (MENT) is used to determine the distribution functions of physical values. MENT naturally combines required maximum entropy, the properties of a system and connection conditions in the form of restrictions imposed on the system. It can, therefore, be employed to statistically describe closed and open systems. Examples in which MENT is used to describe equilibrium and non-equilibrium states, as well as steady states that are far from being in thermodynamic equilibrium, are discussed.
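The core MENT computation, maximizing entropy subject to constraints imposed on the system, can be sketched on Jaynes's classic example: a six-state system (a die) whose mean is constrained to 4.5. The MaxEnt solution is a Gibbs distribution whose Lagrange multiplier can be found by bisection; the target mean is an illustrative assumption, not a value from the text.

```python
import math

STATES = list(range(1, 7))   # the six states of the system
TARGET_MEAN = 4.5            # the imposed constraint (assumed example)

def gibbs(beta):
    # MaxEnt distribution under a mean constraint: p_i proportional to exp(-beta*i)
    w = [math.exp(-beta * s) for s in STATES]
    z = sum(w)
    return [wi / z for wi in w]

def mean(p):
    return sum(s * pi for s, pi in zip(STATES, p))

# Bisect on the multiplier: mean(gibbs(beta)) decreases as beta increases.
lo, hi = -10.0, 10.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if mean(gibbs(mid)) > TARGET_MEAN:
        lo = mid   # mean too high -> push beta up
    else:
        hi = mid

p = gibbs(0.5 * (lo + hi))
entropy = -sum(pi * math.log(pi) for pi in p)
print([round(pi, 4) for pi in p], "entropy:", round(entropy, 4))
```

The constrained distribution tilts toward the larger states (since 4.5 exceeds the unconstrained mean of 3.5) and its entropy is strictly below log 6, the entropy of the unconstrained uniform distribution, illustrating how the restrictions shape the MaxEnt answer.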
How efficiency shapes market impact
J. Doyne Farmer, Austin Gerig, Fabrizio Lillo, Pola ... (Roma, Italy; School of Finance and Economics, University of Technology, Sydney). Contents include: immediate impact; permanent impact; effect of maximum order size; market ...
Data Assimilation for Idealised Mathematical Models of Numerical Weather Prediction
Wirosoetisno, Djoko
Data Assimilation for Idealised Mathematical Models of Numerical Weather Prediction (supervisors). Background: Numerical Weather Prediction (NWP) has seen significant gains in accuracy in recent years due ... is directed at achieving real-world impact in numerical weather prediction by addressing fundamental issues
Variable Selection for Modeling the Absolute Magnitude at Maximum of Type Ia Supernovae
Uemura, Makoto; Kawabata, S; Ikeda, Shiro; Maeda, Keiichi
2015-01-01T23:59:59.000Z
We discuss what is an appropriate set of explanatory variables in order to predict the absolute magnitude at the maximum of Type Ia supernovae. In order to have a good prediction, the error for future data, which is called the "generalization error," should be small. We use cross-validation in order to control the generalization error and a LASSO-type estimator in order to choose the set of variables. This approach can be used even in the case that the number of samples is smaller than the number of candidate variables. We studied the Berkeley supernova database with our approach. Candidates for the explanatory variables include normalized spectral data, variables about lines, and previously proposed flux-ratios, as well as the color and light-curve widths. As a result, we confirmed the past understanding of Type Ia supernovae: i) The absolute magnitude at maximum depends on the color and light-curve width. ii) The light-curve width depends on the strength of Si II. Recent studies have suggested to add more va...
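The selection machinery described in this abstract (a LASSO-type estimator, usable even when candidate variables outnumber samples) can be sketched on synthetic data. This is a generic LASSO solved by proximal gradient descent (ISTA) on made-up data, not the paper's analysis of the Berkeley supernova database; the penalty level is fixed here rather than chosen by cross-validation, and all dimensions and coefficient values are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 80, 10
X = rng.normal(size=(n, p))
beta_true = np.zeros(p)
beta_true[[1, 4]] = [2.0, -3.0]          # only two predictors actually matter
y = X @ beta_true + rng.normal(scale=0.5, size=n)

def soft(v, t):
    # soft-thresholding: the proximal operator of the L1 penalty
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def lasso_ista(X, y, lam, iters=3000):
    # minimize (1/2n)||y - Xb||^2 + lam*||b||_1 by proximal gradient (ISTA)
    n = len(y)
    L = np.linalg.eigvalsh(X.T @ X).max() / n    # Lipschitz constant of the gradient
    t = 1.0 / L
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        grad = X.T @ (X @ b - y) / n
        b = soft(b - t * grad, t * lam)
    return b

b = lasso_ista(X, y, lam=0.2)
support = np.flatnonzero(np.abs(b) > 1e-3)       # selected variables
print("selected:", support, "coefficients:", np.round(b[support], 2))
```

With the penalty large relative to the noise correlations, the spurious coefficients are driven exactly to zero and only the two informative variables survive, which is the behavior the abstract relies on for variable selection.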
NGC2613, 3198, 6503, 7184: Case studies against `maximum' disks
B. Fuchs
1998-12-02T23:59:59.000Z
Decompositions of the rotation curves of NGC2613, 3198, 6503, and 7184 are analysed. For these galaxies the radial velocity dispersions of the stars have been measured and their morphology is clearly discernible. If the parameters of the decompositions are chosen according to the `maximum' disk hypothesis, the Toomre Q stability parameter is systematically less than one and the multiplicities of the spiral arms as expected from density wave theory are inconsistent with the observed morphologies of the galaxies. The apparent Q<1 instability, in particular, is a strong argument against the `maximum' disk hypothesis.
Efficiency of autonomous soft nano-machines at maximum power
Udo Seifert
2010-11-11T23:59:59.000Z
We consider nano-sized artificial or biological machines working in steady state enforced by imposing non-equilibrium concentrations of solutes or by applying external forces, torques or electric fields. For unicyclic and strongly coupled multicyclic machines, efficiency at maximum power is not bounded by the linear response value 1/2. For strong driving, it can even approach the thermodynamic limit 1. Quite generally, such machines fall in three different classes characterized, respectively, as "strong and efficient", "strong and inefficient", and "balanced". For weakly coupled multicyclic machines, efficiency at maximum power has lost any universality even in the linear response regime.
When are microcircuits well-modeled by maximum entropy methods?
2010-07-20T23:59:59.000Z
POSTER PRESENTATION, Open Access. When are microcircuits well-modeled by maximum entropy methods? Andrea K. Barreiro, Eric T. Shea-Brown, Fred M. Rieke, Julijana Gjorgjieva. From the Nineteenth Annual Computational Neuroscience Meeting: CNS*2010, San Antonio, TX, USA, 24-30 July 2010. Recent experiments in retina and cortex have demonstrated that pairwise maximum entropy (PME) methods can approximate observed spiking patterns to a high degree of accuracy [1,2]. In this paper we examine...
Valence quark distributions of the proton from maximum entropy approach
Rong Wang; Xurong Chen
2014-10-14T23:59:59.000Z
We present an attempt of maximum entropy principle to determine valence quark distributions in the proton at very low resolution scale $Q_0^2$. The initial three valence quark distributions are obtained with limited dynamical information from quark model and QCD theory. Valence quark distributions from this method are compared to the lepton deep inelastic scattering data, and the widely used CT10 and MSTW08 data sets. The obtained valence quark distributions are consistent with experimental observations and the latest global fits of PDFs. Maximum entropy method is expected to be particularly useful in the case where relatively little information from QCD calculation is given.
Valence quark distributions of the proton from maximum entropy approach
Wang, Rong
2014-01-01T23:59:59.000Z
We present an attempt of maximum entropy principle to determine valence quark distributions in the proton at very low resolution scale $Q_0^2$. The initial three valence quark distributions are obtained with limited dynamical information from quark model and QCD theory. Valence quark distributions from this method are compared to the lepton deep inelastic scattering data, and the widely used CT10 and MSTW08 data sets. The obtained valence quark distributions are consistent with experimental observations and the latest global fits of PDFs. Maximum entropy method is expected to be particularly useful in the case where relatively little information from QCD calculation is given.
Assessing complexity by means of maximum entropy models
Chliamovitch, Gregor; Velasquez, Lino
2014-01-01T23:59:59.000Z
We discuss a characterization of complexity based on successive approximations of the probability density describing a system by means of maximum entropy methods, thereby quantifying the respective role played by different orders of interaction. This characterization is applied on simple cellular automata in order to put it in perspective with the usual notion of complexity for such systems based on Wolfram classes. The overlap is shown to be good, but not perfect. This suggests that complexity in the sense of Wolfram emerges as an intermediate regime of maximum entropy-based complexity, but also gives insights regarding the role of initial conditions in complexity-related issues.
Brodsky, Stanley J.; /SLAC; Wu, Xing-Gang; /Chongqing U.
2012-04-02T23:59:59.000Z
The uncertainty in setting the renormalization scale in finite-order perturbative QCD predictions using standard methods substantially reduces the precision of tests of the Standard Model in collider experiments. It is conventional to choose a typical momentum transfer of the process as the renormalization scale and take an arbitrary range to estimate the uncertainty in the QCD prediction. However, predictions using this procedure depend on the choice of renormalization scheme, leave a non-convergent renormalon perturbative series, and moreover, one obtains incorrect results when applied to QED processes. In contrast, if one fixes the renormalization scale using the Principle of Maximum Conformality (PMC), all non-conformal $\{\beta_i\}$-terms in the perturbative expansion series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC renormalization scale $\mu_R^{\rm PMC}$ and the resulting finite-order PMC prediction are both to high accuracy independent of the choice of the initial renormalization scale $\mu_R^{\rm init}$, consistent with renormalization group invariance. Moreover, after PMC scale-setting, the $n!$-growth of the pQCD expansion is eliminated. Even the residual scale-dependence at fixed order due to unknown higher-order $\{\beta_i\}$-terms is substantially suppressed. As an application, we apply the PMC procedure to obtain NNLO predictions for the $t\bar{t}$-pair hadroproduction cross-section at the Tevatron and LHC colliders. There are no renormalization scale or scheme uncertainties, thus greatly improving the precision of the QCD prediction. The PMC prediction for $\sigma_{t\bar{t}}$ is larger in magnitude in comparison with the conventional scale-setting method, and it agrees well with the present Tevatron and LHC data.
We also verify that the initial scale-independence of the PMC prediction is satisfied to high accuracy at the NNLO level: the total cross-section remains almost unchanged even when taking very disparate initial scales $\mu_R^{\rm init}$ equal to $m_t$, $20\,m_t$, and $\sqrt{s}$.
Maximum power tracking control scheme for wind generator systems
Mena Lopez, Hugo Eduardo
2008-10-10T23:59:59.000Z
The purpose of this work is to develop a maximum power tracking control strategy for variable speed wind turbine systems. Modern wind turbine control systems are slow, and they depend on the design parameters of the turbine and use wind and/or rotor...
Maximum power tracking control scheme for wind generator systems
Mena, Hugo Eduardo
2009-05-15T23:59:59.000Z
The purpose of this work is to develop a maximum power tracking control strategy for variable speed wind turbine systems. Modern wind turbine control systems are slow, and they depend on the design parameters of the turbine and use wind and/or rotor...
Maximum-principle-satisfying and positivity-preserving high order ...
2011-04-01T23:59:59.000Z
conservation laws: Survey and new developments ..... Notice that in (2.10) we need to evaluate the maximum/minimum of a ..... total energy, p is the pressure, e is the internal energy, and $\gamma > 1$ is a constant ... under a standard CFL condition.
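The bound-preserving construction this abstract refers to hinges on a simple scaling limiter (in the style of Zhang and Shu): squeeze a cell's reconstructed point values toward the cell average until every value respects the global bounds, without disturbing the average. A minimal sketch with made-up sample values; the quadrature weighting of a real high-order scheme is omitted, so the average is preserved exactly only because the plain mean of the sample points equals the cell average.

```python
def limit_cell(point_values, cell_avg, m, M):
    # Scaling limiter: shrink point values toward the cell average by a
    # factor theta in [0, 1] so that every limited value lies in [m, M].
    # Assumes m <= cell_avg <= M (guaranteed in the schemes by construction).
    p_min, p_max = min(point_values), max(point_values)
    theta = 1.0
    if p_max > cell_avg:
        theta = min(theta, (M - cell_avg) / (p_max - cell_avg))
    if p_min < cell_avg:
        theta = min(theta, (cell_avg - m) / (cell_avg - p_min))
    return [cell_avg + theta * (p - cell_avg) for p in point_values]

# Example: a reconstruction overshooting the bounds [0, 1]
vals = limit_cell([1.2, 0.4, -0.1], cell_avg=0.5, m=0.0, M=1.0)
print([round(v, 4) for v in vals])
```

Because the limited values are affine contractions about the cell average, conservation (the cell average) is untouched while the overshoot and undershoot are clipped, which is exactly what makes the overall scheme maximum-principle-satisfying under the CFL condition mentioned above.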
Maximum Entropy in Support of Semantically Annotated Datasets
Kreinovich, Vladik
Maximum Entropy in Support of Semantically Annotated Datasets. Paulo Pinheiro da Silva, Vladik Kreinovich. ... whether two datasets describe the same quantity. The existing solution to this problem is to use these datasets' ontologies to deduce that these datasets indeed represent the same quantity. However, even when
Performance of Civil Aviation Receivers during Maximum Solar Activity Events
Boyer, Edmond
Performance of Civil Aviation Receivers during Maximum Solar Activity Events. Lina Deambrogio ... on the fields of ionosphere scintillations, solar energetic particles and on the implementation of operational ... the upcoming period of high solar activity.
Rapidly Solving an Online Sequence of Maximum Flow Problems
2008-02-29T23:59:59.000Z
... an interdictor allocates a finite amount of resources to remove arcs from a net- ... is, the next maximum flow problem in the sequence differs from the previous one by ..... the appropriate reoptimization case and then taking the appropriate action to ..... Our first set of computational experiments tested the performance of our ...
THE MAXIMUM CAPACITY OF A LINE PLAN IS INAPPROXIMABLE
Nabben, Reinhard
THE MAXIMUM CAPACITY OF A LINE PLAN IS INAPPROXIMABLE. Christina Puhl and Sebastian Stiller. Abstract: ... a network, upper arc-capacities and a line pool. E-mail: puhl@math.tu-berlin.de, stiller ... Supported by the European Commission under contract no. FP6-021235-2.
O(1)-Approximations for Maximum Movement Piotr Berman1
Demaine, Erik
movement of the pebbles, motivated by minimizing either execution time or energy usage. Specific problems ... the maximum movement made by pebbles on a graph to reach a configuration in which the pebbles form a connected ... For example, in the connectivity goal, the proximity of the robots should form a connected graph. Two
Maximization of Recursive Utilities: A Dynamic Maximum Principle Approach
Di Girolami, Cristina
Maximization of Recursive Utilities: A Dynamic Maximum Principle Approach. Wahid Faidi, LAMSIN, ENIT. ... for a class of robust utility functions introduced in Bordigoni, Matoussi and Schweizer (2005). Our method ... investment strategy, which is characterized as the unique solution of a forward-backward system. Key words: utility
Maximum likelihood estimation of the equity premium
Efstathios Avdis
Kahana, Michael J.
The equity premium is usually estimated by taking the sample mean of stock returns and subtracting a measure ... the expected return on the aggregate stock market less the government bill rate, is of central importance ... an alternative estimator, based on maximum likelihood, that takes into account information contained
Renewable Energy Scheduling for Fading Channels with Maximum Power Constraint
Greenberg, Albert
Renewable Energy Scheduling for Fading Channels with Maximum Power Constraint. Zhe Wang, Electrical ... In this paper, we develop an efficient algorithm to obtain the optimal energy schedule for a fading channel with energy harvesting. We assume that the side information of both the channel states and energy harvesting
Retrocommissioning Case Study - Applying Building Selection Criteria for Maximum Results
Luskay, L.; Haasl, T.; Irvine, L.; Frey, D.
2002-01-01T23:59:59.000Z
RETROCOMMISSIONING CASE STUDY: "Applying Building Selection Criteria for Maximum Results". Larry Luskay, Tudi Haasl, Linda Irvine, Portland Energy Conservation, Inc., Portland, Oregon; Donald Frey, Architectural Energy Corporation, Boulder ... The building was retrocommissioned by Portland Energy Conservation, Inc. (PECI), in conjunction with Architectural Energy Corporation (AEC). The building-specific goals were: 1) obtain cost-effective energy savings from optimizing operation...
Bayesian prediction of the Gaussian states from n sample
F. Tanaka; F. Komaki
2006-05-12T23:59:59.000Z
Recently, the quantum prediction problem was proposed in the Bayesian framework. It is shown that Bayesian predictive density operators are the best predictive density operators when we evaluate them by using the average relative entropy based on a prior. As an illustrative example, we treat the Gaussian states family, adopting the Gaussian distribution as a prior, and give the Bayesian predictive density operator with the heterodyne measurement fixed. We show that it is better than the plug-in predictive density operator based on the maximum likelihood estimate by calculating each average relative entropy.
Modeling of Wave Impact Using a Pendulum System
Nie, Chunyong
2011-08-08T23:59:59.000Z
For high speed vessels and offshore structures, wave impact, a main source of environmental loads, causes high local stresses and structural failure. However, the prediction of wave impact loads presents numerous challenges due to the complex nature...
Environment and energy in Iceland: A comparative analysis of values and impacts
Thorhallsdottir, Thora Ellen [Institute of Biology (Iceland)]. E-mail: theth@hi.is
2007-08-15T23:59:59.000Z
Within an Icelandic framework plan for energy, environmental values and impacts were estimated in multicriteria analyses for 19 hydroelectric and 22 geothermal developments. Four natural environment classes were defined (geology + hydrology, species, ecosystems + soils, landscape + wilderness) with cultural heritage as the fifth class. Values and impacts were assessed through 6 agglomerated attributes: richness/diversity, rarity, size/continuity/pristineness, information/symbolic value, international responsibility and visual value. The project offers a unique opportunity for comparing environmental values and impacts within a large sample of sites and energy alternatives treated within a common methodological framework. Total values were higher in hydroelectric than in geothermal areas. Hydroelectric areas scored high for cultural heritage (particularly in rarity and information value), landscape and wilderness. Geothermal areas had high bedrock and hydrological diversity and information values, and a high landscape visual value but little cultural heritage. High values were correlated among some classes of the natural environment, all of which are likely to reflect functional relationships. In contrast, cultural heritage values were not related to natural environment values. Overall, landscape and wilderness had the highest mean value and were also most affected by energy development. Over 40% of the hydroelectric development had a predicted mean impact value of > 4 (out of a maximum of 10), compared with 10% of the geothermal projects. Excluding two outsized hydropower options, there was a significant correlation between plant capacity and impact on geology and hydrology but not with other environmental variables.
Latent feature models for dyadic prediction /
Menon, Aditya Krishna
2013-01-01T23:59:59.000Z
Table of contents excerpt: ... prediction ... Response prediction ... 2.4.3 Weighted link prediction ...
IC performance prediction system
Ramakrishnan, Venkatakrishnan
1996-01-01T23:59:59.000Z
electrical test data, supplemented with in-line and in-situ data to make performance predictions. Based on the wafer-level parametric test, we will predict chip performance in order to select the appropriate package. Predictions that fall outside acceptable...
Branstator, Grant
2014-12-09T23:59:59.000Z
The overall aim of our project was to quantify and characterize predictability of the climate as it pertains to decadal time scale predictions. By predictability we mean the degree to which a climate forecast can be distinguished from the climate that exists at initial forecast time, taking into consideration the growth of uncertainty that occurs as a result of the climate system being chaotic. In our project we were especially interested in predictability that arises from initializing forecasts from some specific state, though we also contrast this predictability with predictability arising from forecasting the reaction of the system to external forcing, for example changes in greenhouse gas concentration. Also, we put special emphasis on the predictability of prominent intrinsic patterns of the system because they often dominate system behavior. Highlights from this work include:
• Development of novel methods for estimating the predictability of climate forecast models.
• Quantification of the initial-value predictability limits of ocean heat content and the overturning circulation in the Atlantic as they are represented in various state-of-the-art climate models. These limits varied substantially from model to model but on average were about a decade, with North Atlantic heat content tending to be more predictable than North Pacific heat content.
• Comparison of predictability resulting from knowledge of the current state of the climate system with predictability resulting from estimates of how the climate system will react to changes in greenhouse gas concentrations. It turned out that knowledge of the initial state produces a larger impact on forecasts for the first 5 to 10 years of projections.
• Estimation of the predictability of dominant patterns of ocean variability, including well-known patterns of variability in the North Pacific and North Atlantic. For the most part these patterns were predictable for 5 to 10 years.
• Determination of especially predictable patterns in the North Atlantic. The most predictable of these retain predictability substantially longer than generic patterns, with some being predictable for two decades.
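As a toy illustration of the initial-value predictability limits quoted above (our sketch, not the project's actual diagnostic): if forecast error variance approaches the climatological variance as 1 - exp(-t/tau), the predictability horizon is the lead time at which a chosen fraction of climatological variance is reached, and a growth timescale tau of a few years gives limits of roughly a decade.

```python
import math

def predictability_horizon(tau_years, frac=0.95):
    """Lead time (years) at which forecast spread reaches `frac` of climatological
    variance, under the assumed exponential error-growth model
    spread^2(t) / sigma_clim^2 = 1 - exp(-t / tau)."""
    return -tau_years * math.log(1.0 - frac)
```

For instance, predictability_horizon(3.3) is just under a decade, comparable to the model-average limits reported above; both tau and the 95% threshold are assumed values.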
Maximum Entropy Principle and the Higgs Boson Mass
Alves, Alexandre; da Silva, Roberto
2014-01-01T23:59:59.000Z
A successful connection between Higgs boson decays and the Maximum Entropy Principle is presented. Based on the information theory inference approach, we determine the Higgs boson mass as $M_H = 125.04 \pm 0.25$ GeV, a value fully compatible with the LHC measurement. This is straightforwardly obtained by taking the Higgs boson branching ratios as the target probability distributions of the inference, without any extra assumptions beyond the Standard Model. Yet, the principle can be a powerful tool in the construction of any model affecting the Higgs sector. We give, as an example, the case where the Higgs boson has an extra invisible decay channel. Our findings suggest that a system of Higgs bosons undergoing a collective decay to Standard Model particles is among the most fundamental ones where the Maximum Entropy Principle applies.
Online Prediction: Bayes versus Experts
Hutter, Marcus
Slide contents: Sequential/online prediction: setup · Bayesian Sequence Prediction (Bayes) · Prediction with Expert Advice (PEA) · PEA bounds
Maximum entanglement in squeezed boson and fermion states
Khanna, F. C. [Theoretical Physics Institute, University of Alberta, Edmonton, Alberta T6G 2J1 (Canada); TRIUMF, Vancouver, British Columbia V6T 2A3 (Canada); Malbouisson, J. M. C. [Instituto de Fisica, Universidade Federal da Bahia, 40210-340, Salvador, BA (Brazil); Theoretical Physics Institute, University of Alberta, Edmonton, Alberta T6G 2J1 (Canada); Santana, A. E. [Instituto de Fisica, Universidade de Brasilia, 70910-900, Brasilia, DF (Brazil); Theoretical Physics Institute, University of Alberta, Edmonton, Alberta T6G 2J1 (Canada); Santos, E. S. [Centro Federal de Educacao Tecnologica da Bahia, 40030-010, Salvador, BA (Brazil)
2007-08-15T23:59:59.000Z
A class of squeezed boson and fermion states is studied with particular emphasis on the nature of entanglement. We first investigate the case of bosons, considering two-mode squeezed states. Then we construct the fermion version to show that such states are maximally entangled, for both bosons and fermions. To achieve these results, we demonstrate some relations involving squeezed boson states. The generalization to the case of fermions is made by using Grassmann variables.
Maximum Entry and Mandatory Separation Ages for Certain Security Employees
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2001-10-11T23:59:59.000Z
The policy establishes the DOE policy on maximum entry and mandatory separation ages for primary or secondary positions covered under special statutory retirement provisions and for those employees whose primary duties are the protection of officials of the United States against threats to personal safety or the investigation, apprehension, and detention of individuals suspected or convicted of offenses against the criminal laws of the United States. Admin Chg 1, dated 12-1-11, cancels DOE P 310.1.
Maximum entropy method for reconstruction of the CMB images
A. T. Bajkova
2002-05-21T23:59:59.000Z
We propose a new approach for the accurate reconstruction of cosmic microwave background distributions from observations containing, in addition to the primary fluctuations, the radiation from unresolved extragalactic point sources and pixel noise. The approach uses some effective realizations of the well-known maximum entropy method and principally takes into account a priori information about the finiteness and spherical symmetry of the power spectrum of the CMB satisfying Gaussian statistics.
Occam's Razor Cuts Away the Maximum Entropy Principle
Rudnicki, Łukasz
2014-01-01T23:59:59.000Z
I show that the maximum entropy principle can be replaced by a more natural assumption: that there exists a phenomenological function of entropy consistent with the microscopic model. The requirement of existence then provides a unique construction of the related probability density. I conclude the letter with an axiomatic formulation of the notion of entropy, which is suitable for exploration of non-equilibrium phenomena.
PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation
Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.
2007-06-23T23:59:59.000Z
In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Words task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach, combined with a rich set of features, produced results that are significantly better than baseline and are the highest F-score for the fine-grained English All-Words subtask.
Final report for confinement vessel analysis. Task 2, Safety vessel impact analyses
Murray, Y.D. [APTEK, Inc., Colorado Springs, CO (United States)
1994-01-26T23:59:59.000Z
This report describes two sets of finite element analyses performed under Task 2 of the Confinement Vessel Analysis Program. In each set of analyses, a charge is assumed to have detonated inside the confinement vessel, causing the confinement vessel to fail in either of two ways: locally around the weld line of a nozzle, or catastrophically into two hemispheres. High pressure gases from the internal detonation pressurize the inside of the safety vessel and accelerate the fractured nozzle or hemisphere into the safety vessel. The first set of analyses examines the structural integrity of the safety vessel when impacted by the fractured nozzle. The objective of these calculations is to determine if the high-strength bolt heads attached to the nozzle penetrate or fracture the lower-strength safety vessel, thus allowing gaseous detonation products to escape to the atmosphere. The two-dimensional analyses predict partial penetration of the safety vessel beneath the tip of the penetrator. The analyses also predict maximum principal strains in the safety vessel which exceed the measured ultimate strain of the steel. The second set of analyses examines the containment capability of the safety vessel closure when impacted by half a confinement vessel (hemisphere). The predicted response is the formation of a 0.6-inch gap, caused by relative sliding and separation between the two halves of the safety vessel. Additional analyses with closure designs that prevent the gap formation are recommended.
Martin Wilde, Principal Investigator
2012-12-31T23:59:59.000Z
ABSTRACT Application of Real-Time Offsite Measurements in Improved Short-Term Wind Ramp Prediction Skill. Improved forecasting performance immediately preceding wind ramp events is of preeminent concern to most wind energy companies, system operators, and balancing authorities. The value of near real-time hub-height-level wind data and more general meteorological measurements to short-term wind power forecasting is well understood. For some sites, access to onsite measured wind data - even historical - can reduce forecast error in the short-range to medium-range horizons by as much as 50%. Unfortunately, valuable free-stream wind measurements at tall towers are not typically available at most wind plants, thereby forcing wind forecasters to rely upon wind measurements below hub height and/or turbine nacelle anemometry. Free-stream measurements can be appropriately scaled to hub-height levels, using existing empirically derived relationships that account for surface roughness and turbulence. But there is large uncertainty in these relationships for a given time of day and state of the boundary layer. Alternatively, forecasts can rely entirely on turbine anemometry measurements, though such measurements are themselves subject to wake effects that are not stationary. The void in free-stream hub-height-level measurements of wind can be filled by remote sensing (e.g., sodar, lidar, and radar). However, the expense of such equipment may not be sustainable. There is a growing market for traditional anemometry on tall tower networks, maintained by third parties to the forecasting process (i.e., independent of forecasters and the forecast users). This study examines the value of offsite tall-tower data from the WINDataNOW Technology network for short-horizon wind power predictions at a wind farm in northern Montana.
The presentation shall describe successful physical and statistical techniques for its application and the practicality of its application in an operational setting. It shall be demonstrated that when used properly, the real-time offsite measurements materially improve wind ramp capture and prediction statistics, when compared to traditional wind forecasting techniques and to a simple persistence model.
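The hub-height scaling of below-hub measurements mentioned above is commonly based on a neutral-stability logarithmic wind profile. The sketch below is our own illustration; the roughness length and the measurement and hub heights are assumed values, not the study's parameters.

```python
import math

def scale_to_hub(u_ref, z_ref=10.0, z_hub=80.0, z0=0.03):
    """Extrapolate a wind speed measured at z_ref (m) to hub height z_hub (m)
    using the neutral log law u(z) ~ ln(z / z0); z0 is an assumed roughness length."""
    return u_ref * math.log(z_hub / z0) / math.log(z_ref / z0)
```

An 8 m/s reading at 10 m scales to roughly 10.9 m/s at 80 m under these assumptions; stability and turbulence make the true ratio uncertain for any given hour, which is exactly the limitation the abstract points out.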
Predictive Maintenance Technologies
Broader source: Energy.gov [DOE]
Several diagnostic technologies and best practices are available to assist Federal agencies with predictive maintenance programs.
Beyond Boltzmann-Gibbs statistics: Maximum entropy hyperensembles out-of-equilibrium
Crooks, Gavin E.
2006-01-01T23:59:59.000Z
Deriving the continuity of maximum-entropy basis functions via variational analysis
Sukumar, N.; Wets, R. J. -B.
2007-01-01T23:59:59.000Z
Mahowald, Natalie
Climate response and radiative forcing from mineral aerosols during the last glacial maximum, pre-industrial, current and doubled-carbon dioxide climates. Natalie M. Mahowald, Masaru Yoshioka, William D. Collins. Received July 2006; accepted 9 August 2006; published 27 October 2006. Mineral aerosol impacts on climate...
Piecewise training for structured prediction
Sutton, Charles; McCallum, Andrew
2009-01-01T23:59:59.000Z
Maximum Achievable Control Technology for New Industrial Boilers (released in AEO2005)
Reports and Publications (EIA)
2005-01-01T23:59:59.000Z
As part of the Clean Air Act Amendments of 1990 (CAAA90), the EPA on February 26, 2004, issued a final rule, the National Emission Standards for Hazardous Air Pollutants (NESHAP), to reduce emissions of hazardous air pollutants (HAPs) from industrial, commercial, and institutional boilers and process heaters. The rule requires industrial boilers and process heaters to meet limits on HAP emissions to comply with a Maximum Achievable Control Technology (MACT) floor level of control that is the minimum level such sources must meet to comply with the rule. The major HAPs to be reduced are hydrochloric acid, hydrofluoric acid, arsenic, beryllium, cadmium, and nickel. The EPA predicts that the boiler MACT rule will reduce those HAP emissions from existing sources by about 59,000 tons per year in 2005.
Spectral Analysis of Excited Nucleons in Lattice QCD with Maximum Entropy Method
Kiyoshi Sasaki; Shoichi Sasaki; Tetsuo Hatsuda
2005-07-12T23:59:59.000Z
We study the mass spectra of excited baryons with the use of lattice QCD simulations. We focus our attention on the problem of the level ordering between the positive-parity excited state N'(1440) (the Roper resonance) and the negative-parity excited state N^*(1535). Nearly perfect parity projection is accomplished by combining the quark propagators with periodic and anti-periodic boundary conditions in the temporal direction. Then we extract the spectral functions from the lattice data by utilizing the maximum entropy method. We observe that the masses of the N' and N^* states are close for a wide range of quark masses (M_pi = 0.61-1.22 GeV), which is in contrast to the phenomenological prediction of the quark models. The role of the Wilson doublers in the baryonic spectral functions is also studied.
Quantifying extrinsic noise in gene expression using the maximum entropy framework
Purushottam D. Dixit
2013-04-04T23:59:59.000Z
We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in E. coli. We suggest that the variation in extrinsic factors may account for the observed wider-than-Poisson distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in E. coli while others need verification. Application of the current framework to more complex situations is also discussed.
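The wider-than-Poisson effect of extrinsic variation can be seen in a few lines. This is our own toy simulation, assuming (hypothetically) a gamma-distributed extrinsic rate; the paper's maximum entropy estimate of the extrinsic distribution is more general.

```python
import numpy as np

rng = np.random.default_rng(0)
rates = rng.gamma(shape=5.0, scale=2.0, size=100_000)  # extrinsic: rate varies cell to cell
counts = rng.poisson(rates)                            # intrinsic: Poisson given the rate
fano = counts.var() / counts.mean()                    # Fano factor; equals 1 for pure Poisson
```

Here the Fano factor comes out near 3 (mean plus rate variance, over the mean), i.e. a copy-number distribution wider than Poisson, which is the qualitative effect the abstract attributes to extrinsic factors.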
Better Nonlinear Models from Noisy Data: Attractors with Maximum Likelihood
Patrick E. McSharry; Leonard A. Smith
1999-11-30T23:59:59.000Z
A new approach to nonlinear modelling is presented which, by incorporating the global behaviour of the model, lifts shortcomings of both least squares and total least squares parameter estimates. Although ubiquitous in practice, a least squares approach is fundamentally flawed in that it assumes independent, normally distributed (IND) forecast errors: nonlinear models will not yield IND errors even if the noise is IND. A new cost function is obtained via the maximum likelihood principle; superior results are illustrated both for small data sets and infinitely long data streams.
Application of Maximum Entropy Method to Dynamical Fermions
Jonathan Clowser; Costas Strouthos
2001-10-16T23:59:59.000Z
The Maximum Entropy Method is applied to dynamical fermion simulations of the (2+1)-dimensional Nambu-Jona-Lasinio model. This model is particularly interesting because at T=0 it has a broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are resonances, and hence the simple pole assumption of traditional fitting procedures breaks down. We present results extracted from simulations on large lattices for the spectral functions of the elementary fermion, the pion, the sigma, the massive pseudoscalar meson and the symmetric phase resonances.
Excited nucleon spectrum from lattice QCD with maximum entropy method
K. Sasaki; S. Sasaki; T. Hatsuda; M. Asakawa
2003-09-29T23:59:59.000Z
We study excited states of the nucleon in quenched lattice QCD with spectral analysis using the maximum entropy method. Our simulations are performed on three lattice sizes, $16^3\times 32$, $24^3\times 32$ and $32^3\times 32$, at $\beta=6.0$ to address the finite-volume issue. We find a significant finite-volume effect on the mass of the Roper resonance for light quark masses. After removing this systematic error, its mass becomes considerably reduced, toward resolving the level-ordering puzzle between the Roper resonance $N'(1440)$ and the negative-parity nucleon $N^*(1535)$.
Discovery Park Impact NNSA PRISM Center for
Holland, Jeffrey
Discovery Park Impact: NNSA PRISM Center for Prediction of Reliability, Integrity and Survivability. Purdue is one of 5 centers funded under NNSA's Predictive Science Academic Alliance Program ... Computing, a division of Information Technology at Purdue. The NNSA national laboratories will be involved ...
Maximum surface level and temperature histories for Hanford waste tanks
Flanagan, B.D.; Ha, N.D.; Huisingh, J.S.
1994-09-02T23:59:59.000Z
Radioactive defense waste resulting from the chemical processing of spent nuclear fuel has been accumulating at the Hanford Site since 1944. This waste is stored in underground waste-storage tanks. The Hanford Site Tank Farm Facilities Interim Safety Basis (ISB) provides a ready reference to the safety envelope for applicable tank farm facilities and installations. During preparation of the ISB, tank structural integrity concerns were identified as a key element in defining the safety envelope. These concerns, along with several deficiencies in the technical bases associated with the structural integrity issues and the corresponding operational limits/controls specified for conduct of normal tank farm operations are documented in the ISB. Consequently, a plan was initiated to upgrade the safety envelope technical bases by conducting Accelerated Safety Analyses-Phase 1 (ASA-Phase 1) sensitivity studies and additional structural evaluations. The purpose of this report is to facilitate the ASA-Phase 1 studies and future analyses of the single-shell tanks (SSTs) and double-shell tanks (DSTs) by compiling a quantitative summary of some of the past operating conditions the tanks have experienced during their existence. This report documents the available summaries of recorded maximum surface levels and maximum waste temperatures and references other sources for more specific data.
Prediction of plant species distributions across six millennia
Zimmermann, Niklaus E.
LETTER: Prediction of plant species distributions across six millennia. Peter B. Pearman. The usefulness of species distribution models (SDMs) in predicting impacts of climate change on biodiversity ... alternative way to evaluate the predictive ability of SDMs across time is to compare their predictions ...
Research Summary Sustainability impact assessment: tools for environmental, social and economic
... to produce Sustainability Impact Assessment Tools (SIATs) that will be used to predict the impacts ... and will be used as part of the Impact Assessment (IA) process, as set out in the Impact Assessment Guidelines
Letham, Benjamin
In sequential event prediction, we are given a “sequence database” of past event sequences to learn from, and we aim to predict the next event within a current event sequence. We focus on applications where the set of the ...
Universal Prediction
Merhav, Neri
Universal Prediction. Neri Merhav and Meir Feder. July 23, 1998. Abstract: This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given ... of the universal prediction problem are described with emphasis on the analogy and the differences between results ...
Identification in Prediction Theory
Bielefeld, University of
Identification in Prediction Theory. Lars Bäumer, Bielefeld, 2000. Contents excerpt: 1 Introduction; 2 Finite-State Predictability; 2.1 A Universal Predictor; ... Predictability and Identifiability; 3.3 Markov Machines for Identification
A maximum entropy framework for non-exponential distributions
Peterson, Jack; Dill, Ken A
2015-01-01T23:59:59.000Z
Probability distributions having power-law tails are observed in a broad range of social, economic, and biological systems. We describe here a potentially useful common framework. We derive distribution functions $\\{p_k\\}$ for situations in which a `joiner particle' $k$ pays some form of price to enter a `community' of size $k-1$, where costs are subject to economies-of-scale (EOS). Maximizing the Boltzmann-Gibbs-Shannon entropy subject to this energy-like constraint predicts a distribution having a power-law tail; it reduces to the Boltzmann distribution in the absence of EOS. We show that the predicted function gives excellent fits to 13 different distribution functions, ranging from friendship links in social networks, to protein-protein interactions, to the severity of terrorist attacks. This approach may give useful insights into when to expect power-law distributions in the natural and social sciences.
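The central claim above can be checked numerically in a few lines (our illustration; the size grid, truncation, and target constraint value are assumptions): maximizing Shannon entropy subject to a logarithmic, economies-of-scale cost constraint <ln k> = c gives p_k proportional to k^(-lambda), a power-law tail, whereas constraining <k> instead returns the exponential Boltzmann form.

```python
import numpy as np
from scipy.optimize import brentq

k = np.arange(1, 100_001)  # community sizes (truncation assumed)

def mean_log(lam):
    """<ln k> under the maxent solution p_k ~ k^(-lam) for multiplier lam."""
    w = k ** (-lam)
    p = w / w.sum()
    return float(p @ np.log(k))

# Fit the Lagrange multiplier so that the log-cost constraint <ln k> = 2 holds.
lam = brentq(lambda l: mean_log(l) - 2.0, 1.05, 10.0)
p = k ** (-lam)
p /= p.sum()
# On log-log axes the tail is a straight line of slope -lam: a power law.
```

The exponent lambda is fixed entirely by the constraint value, mirroring how a temperature-like multiplier fixes the Boltzmann distribution in the absence of economies of scale.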
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1981-02-11T23:59:59.000Z
Reduction in the maximum time uncertainty ($t_{max} - t_{min}$) of a series of paired time signals $t_1$ and $t_2$ varying between two input terminals and representative of a series of single events, where $t_1 \le t_2$ and $t_1 + t_2$ equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal-selecting means and one or more time delays to increase the minimum value ($t_{min}$) of the first signal $t_1$ closer to $t_{max}$ and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.
Probable maximum flood control; Yucca Mountain Site Characterization Project
DeGabriele, C.E.; Wu, C.L. [Bechtel National, Inc., San Francisco, CA (United States)
1991-11-01T23:59:59.000Z
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigations (NNWSI) repository. Flood protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.
Fracture Toughness and Maximum Stress in a Disordered Lattice System
Chiyori Urabe; Shinji Takesue
2008-12-29T23:59:59.000Z
Fracture in a disordered lattice system is studied. In our system, particles are initially arranged on the triangular lattice and each nearest-neighbor pair is connected with a randomly chosen soft or hard Hookean spring. Every spring has a common threshold of stress at which it is cut. We make an initial crack and expand the system perpendicularly to the crack. We find that the maximum stress in the stress-strain curve is larger than that in systems with soft or hard springs only (uniform systems). Energy required to advance the fracture is also larger in some disordered systems, which indicates that the fracture toughness improves. The increase of the energy is caused by the following two factors. One is that the soft spring is able to hold larger energy than the hard one. The other is that the number of cut springs increases as the fracture surface becomes tortuous in disordered systems.
Maximum Margin Clustering for State Decomposition of Metastable Systems
Wu, Hao
2015-01-01T23:59:59.000Z
When studying a metastable dynamical system, a prime concern is how to decompose the phase space into a set of metastable states. Unfortunately, the metastable state decomposition based on simulation or experimental data is still a challenge. The most popular and simplest approach is geometric clustering which is developed based on the classical clustering technique. However, the prerequisites of this approach are: (1) data are obtained from simulations or experiments which are in global equilibrium and (2) the coordinate system is appropriately selected. Recently, the kinetic clustering approach based on phase space discretization and transition probability estimation has drawn much attention due to its applicability to more general cases, but the choice of discretization policy is a difficult task. In this paper, a new decomposition method designated as maximum margin metastable clustering is proposed, which converts the problem of metastable state decomposition to a semi-supervised learning problem so that...
Improved Maximum Entropy Analysis with an Extended Search Space
Alexander Rothkopf
2013-01-07T23:59:59.000Z
The standard implementation of the Maximum Entropy Method (MEM) follows Bryan and deploys a Singular Value Decomposition (SVD) to limit the dimensionality of the underlying solution space apriori. Here we present arguments based on the shape of the SVD basis functions and numerical evidence from a mock data analysis, which show that the correct Bayesian solution is not in general recovered with this approach. As a remedy we propose to extend the search basis systematically, which will eventually recover the full solution space and the correct solution. In order to adequately approach problems where an exponentially damped kernel is used, we provide an open-source implementation, using the C/C++ language that utilizes high precision arithmetic adjustable at run-time. The LBFGS algorithm is included in the code in order to attack problems without the need to resort to a particular search space restriction.
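The role of the SVD can be illustrated numerically (our sketch; the grids are assumed, not taken from the paper). For an exponentially damped kernel K(tau, omega) = exp(-tau omega), the singular values fall off by orders of magnitude, which is why restricting the search to the leading singular directions looks safe a priori, and why the directions discarded that way can nonetheless carry part of the true Bayesian solution.

```python
import numpy as np

tau = np.linspace(0.0, 3.0, 16)[:, None]       # 16 Euclidean-time points (assumed grid)
omega = np.linspace(0.01, 10.0, 400)[None, :]  # 400 frequency points (assumed grid)
K = np.exp(-tau * omega)                       # exponentially damped kernel
s = np.linalg.svd(K, compute_uv=False)         # 16 singular values, descending
```

Only a handful of singular values are numerically significant; the rest sit many orders of magnitude lower, so a fixed SVD basis silently truncates the solution space unless it is extended as the abstract proposes.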
Quantum maximum entropy principle for a system of identical particles
Trovato, M. [Dipartimento di Matematica, Universita di Catania, Viale A. Doria, 95125 Catania (Italy); Reggiani, L. [Dipartimento di Ingegneria dell' Innovazione and CNISM, Universita del Salento, Via Arnesano s/n, 73100 Lecce (Italy)
2010-02-15T23:59:59.000Z
By introducing a functional of the reduced density matrix, we generalize the definition of a quantum entropy which incorporates the indistinguishability principle of a system of identical particles. With the present definition, the principle of quantum maximum entropy permits us to solve the closure problem for a quantum hydrodynamic set of balance equations corresponding to an arbitrary number of moments in the framework of extended thermodynamics. The determination of the reduced Wigner function for equilibrium and nonequilibrium conditions is found to become possible only by assuming that the Lagrange multipliers can be expanded in powers of $\hbar^2$. Quantum contributions are expressed in powers of $\hbar^2$, while classical results are recovered in the limit $\hbar \to 0$.
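For orientation, the generic structure of such a maximum entropy closure is the standard exponential-family form below; this display is textbook material, not a reproduction of the paper's quantum-corrected equations:

```latex
\max_{\hat\rho}\; S[\hat\rho] = -k_B\,\mathrm{Tr}\big(\hat\rho \ln \hat\rho\big)
\quad\text{subject to}\quad \mathrm{Tr}\big(\hat\rho\,\hat M_i\big) = \langle M_i\rangle
\;\;\Longrightarrow\;\;
\hat\rho = \frac{1}{Z}\exp\Big(-\sum_i \lambda_i \hat M_i\Big),
\qquad Z = \mathrm{Tr}\,\exp\Big(-\sum_i \lambda_i \hat M_i\Big),
```

with the multipliers $\lambda_i$ fixed by the moment constraints; the paper's point is that for indistinguishable particles the entropy functional, and hence the reduced Wigner function, acquires corrections order by order in $\hbar^2$.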
Ullmer, Brygg
PREDICTION OF CUTTINGS BED HEIGHT WITH COMPUTATIONAL FLUID DYNAMICS IN DRILLING HORIZONTAL ... parameters such as wellbore geometry, pump rate, drilling fluid rheology and density, and maximum drilling ... Computational Fluid Dynamics methods. Movement, concentration and accumulation of drilled cuttings in non...
Atmospheric chemistry impacts and feedbacks on the global carbon cycle
... prediction. Issues to be addressed include the quantification of the impact of the atmospheric oxidation and the oxidative state of the atmosphere. The end goal is to create a model that can quantitatively predict ... Predict 3-D atmospheric CO2 production as a function of the CCSM3 atmospheric chemistry module ...
Savannah River Site radioiodine atmospheric releases and offsite maximum doses
Marter, W.L.
1990-11-01T23:59:59.000Z
Radioisotopes of iodine have been released to the atmosphere from the Savannah River Site since 1955. The releases, mostly from the 200-F and 200-H Chemical Separations areas, consist of the isotopes I-129 and I-131. Small amounts of I-131 and I-133 have also been released from reactor facilities and the Savannah River Laboratory. This reference memorandum was issued to summarize our current knowledge of releases of radioiodines and resultant maximum offsite doses. This memorandum supplements the reference memorandum by providing more detailed supporting technical information. Doses reported in this memorandum from consumption of the milk containing the highest I-131 concentration following the 1961 I-131 release incident are about 1% higher than reported in the reference memorandum. This is the result of using unrounded I-131 concentrations in milk in this memo. It is emphasized here that this technical report does not constitute a dose reconstruction in the same sense as the dose reconstruction effort currently underway at Hanford. This report uses existing published data for radioiodine releases and existing transport and dosimetry models.
LANDFILL OPERATION FOR CARBON SEQUESTRATION AND MAXIMUM METHANE EMISSION CONTROL
Don Augenstein
2001-02-01T23:59:59.000Z
The work described in this report, to demonstrate and advance this technology, has used two demonstration-scale cells, each of a size (8,000 metric tons [tonnes]) sufficient to replicate many heat and compaction characteristics of larger "full-scale" landfills. An enhanced demonstration cell has received moisture supplementation to field capacity; this is the maximum moisture waste can hold while still limiting the liquid drainage rate to minimal and safely manageable levels. The enhanced landfill module was compared to a parallel control landfill module receiving no moisture additions. Gas recovery has continued for a period of over 4 years. It is quite encouraging that the enhanced cell's methane recovery has been close to 10-fold that experienced with conventional landfills. This is the highest methane recovery rate per unit waste, and thus progress toward stabilization, documented anywhere for such a large waste mass. This high recovery rate is attributed to moisture and to elevated temperature attained inexpensively during startup. Economic analyses performed under Phase I of this NETL contract indicate "greenhouse cost effectiveness" to be excellent. Other benefits include substantial waste volume loss (over 30%), which translates to extended landfill life. Other environmental benefits include rapidly improved quality and stabilization (lowered pollutant levels) in the liquid leachate which drains from the waste.
Maximum Entropy Analysis of the Spectral Functions in Lattice QCD
M. Asakawa; T. Hatsuda; Y. Nakahara
2001-02-26T23:59:59.000Z
First-principles calculation of the QCD spectral functions (SPFs) based on lattice QCD simulations is reviewed. Special emphasis is placed on the Bayesian inference theory and the Maximum Entropy Method (MEM), which is a useful tool to extract SPFs from the imaginary-time correlation functions numerically obtained by the Monte Carlo method. Three important aspects of MEM are: (i) it does not require a priori assumptions or parametrizations of SPFs, (ii) for given data, a unique solution is obtained if it exists, and (iii) the statistical significance of the solution can be quantitatively analyzed. The ability of MEM is explicitly demonstrated by using mock data as well as lattice QCD data. When applied to lattice data, MEM correctly reproduces the low-energy resonances and shows the existence of high-energy continuum in hadronic correlation functions. This opens up various possibilities for studying hadronic properties in QCD beyond the conventional way of analyzing the lattice data. Future problems to be studied by MEM in lattice QCD are also summarized.
Improved Maximum Entropy Method with an Extended Search Space
Alexander Rothkopf
2012-08-25T23:59:59.000Z
We report on an improvement to the implementation of the Maximum Entropy Method (MEM). It amounts to departing from the search space obtained through a singular value decomposition (SVD) of the kernel. Based on the shape of the SVD basis functions, we argue that the MEM spectrum for given $N_\tau$ data points $D(\tau)$ and prior information $m(\omega)$ does not in general lie in this $N_\tau$-dimensional singular subspace. Systematically extending the search basis will eventually recover the full search space and the correct extremum. We illustrate this idea through a mock data analysis inspired by actual lattice spectra, to show where our improvement becomes essential for the success of the MEM. To remedy the shortcomings of Bryan's SVD prescription, we propose to use the real Fourier basis, which consists of trigonometric functions. Not only does our approach lead to more stable numerical behavior, since the SVD is not required for the determination of the basis functions, but the resolution of the MEM also becomes independent of the position of the reconstructed peaks.
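The subspace argument in this abstract is easy to see numerically. The sketch below uses a hypothetical discretization (an assumed exponential kernel, grid sizes and ranges invented for illustration; none of it is taken from the paper) to show that the SVD of an N_tau x N_omega kernel supplies at most N_tau basis functions:

```python
import numpy as np

# Hypothetical discretization (not from the paper): a kernel
# K(tau, omega) = exp(-omega * tau) relating a spectral function
# rho(omega) to n_tau correlator data points D(tau).
n_tau, n_omega = 16, 500
tau = np.linspace(0.05, 0.8, n_tau)
omega = np.linspace(0.0, 20.0, n_omega)
K = np.exp(-np.outer(tau, omega))  # shape (n_tau, n_omega)

# SVD of the kernel: K = U @ diag(s) @ Vt. The rows of Vt are the
# singular basis functions in omega-space; there are at most n_tau of them.
U, s, Vt = np.linalg.svd(K, full_matrices=False)

# Bryan's prescription searches only this n_tau-dimensional subspace,
# while a generic spectrum has n_omega degrees of freedom; hence the
# proposal to extend the search basis (e.g. to a real Fourier basis).
subspace_dim = Vt.shape[0]
```

The rapid decay of the singular values `s` is also why the SVD-restricted search can behave poorly numerically, which motivates the trigonometric basis suggested in the abstract.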
Maximum entropy detection of planets around active stars
Petit, P; Hébrard, E; Morin, J; Folsom, C P; Böhm, T; Boisse, I; Borgniet, S; Bouvier, J; Delfosse, X; Hussain, G; Jeffers, S V; Marsden, S C; Barnes, J R
2015-01-01T23:59:59.000Z
(shortened for arXiv) We aim to progress towards more efficient exoplanet detection around active stars by optimizing the use of Doppler Imaging in radial velocity measurements. We propose a simple method to simultaneously extract a brightness map and a set of orbital parameters through a tomographic inversion technique derived from classical Doppler mapping. Based on the maximum entropy principle, the underlying idea is to determine the set of orbital parameters that minimizes the information content of the resulting Doppler map. We carry out a set of numerical simulations to perform a preliminary assessment of the robustness of our method, using an actual Doppler map of the very active star HR 1099 to produce a realistic synthetic data set for various sets of orbital parameters of a single planet in a circular orbit. Using a simulated time-series of 50 line profiles affected by a peak-to-peak activity jitter of 2.5 km/s, we are able in most cases to recover the radial velocity amplitude, orbital phase and o...
Dynamic Prediction of Concurrency Errors
Sadowski, Caitlin
2012-01-01T23:59:59.000Z
Relation ... Must-Before Race Prediction ... Implementation ... Abstract: Dynamic Prediction of Concurrency Errors by ... SANTA CRUZ ... DYNAMIC PREDICTION OF CONCURRENCY ERRORS ...
Bullard, K.L.
1994-08-01T23:59:59.000Z
The US Geological Survey (USGS), as part of the Yucca Mountain Project (YMP), is conducting studies at Yucca Mountain, Nevada. The purposes of these studies are to provide hydrologic and geologic information to evaluate the suitability of Yucca Mountain for development as a high-level nuclear waste repository, and to evaluate the ability of the mined geologic disposal system (MGDS) to isolate the waste in compliance with regulatory requirements. In particular, the project is designed to acquire information necessary for the Department of Energy (DOE) to demonstrate in its environmental impact statement (EIS) and license application whether the MGDS will meet the requirements of federal regulations 10 CFR Part 60, 10 CFR Part 960, and 40 CFR Part 191. Complete study plans for this part of the project were prepared by the USGS and approved by the DOE in August and September of 1990. The US Bureau of Reclamation (Reclamation) was selected by the USGS as a contractor to provide probable maximum flood (PMF) magnitudes and associated inundation maps for preliminary engineering design of the surface facilities at Yucca Mountain. These PMF peak flow estimates are necessary for successful waste repository design and construction. The PMF technique was chosen for two reasons: (1) this technique complies with ANSI requirements that PMF technology be used in the design of nuclear-related facilities (ANSI/ANS, 1981), and (2) the PMF analysis has become a commonly used technology to predict a "worst possible case" flood scenario. For this PMF study, probable maximum precipitation (PMP) values were obtained for a local storm (thunderstorm) PMP event. These values were determined from the National Weather Service's Hydrometeorological Report No. 49 (HMR 49).
Setting the Renormalization Scale in QCD: The Principle of Maximum Conformality
Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins; Di Giustino, Leonardo; /SLAC
2011-08-19T23:59:59.000Z
A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale μ of the running coupling α_s(μ²). The purpose of the running coupling in any gauge theory is to sum all terms involving the β function; in fact, when the renormalization scale is set properly, all non-conformal (β ≠ 0) terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to those of a conformal theory, i.e., the corresponding theory with β = 0. The resulting scale-fixed predictions using the 'principle of maximum conformality' (PMC) are independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale-setting in the Abelian limit. The PMC is also the theoretical principle underlying the BLM procedure, commensurate scale relations between observables, and the scale-setting method used in lattice gauge theory. The number of active flavors n_f in the QCD β function is also correctly determined. We discuss several methods for determining the PMC/BLM scale for QCD processes. We show that a single global PMC scale, valid at leading order, can be derived from basic properties of the perturbative QCD cross section. The elimination of the renormalization-scheme ambiguity using the PMC will not only increase the precision of QCD tests, but will also increase the sensitivity of collider experiments to new physics beyond the Standard Model.
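For reference, the object at the center of this argument, the running coupling that absorbs all β-dependent logarithms, takes the following standard one-loop form (a textbook expression, not taken from the paper):

```latex
% One-loop running coupling: the beta_0-dependent logarithms are summed
% into alpha_s itself, which is the sense in which non-conformal terms
% are absorbed by a proper choice of renormalization scale.
\alpha_s(\mu^2) \;=\;
  \frac{\alpha_s(\mu_0^2)}
       {1 + \dfrac{\beta_0}{4\pi}\,\alpha_s(\mu_0^2)\,\ln\dfrac{\mu^2}{\mu_0^2}},
\qquad
\beta_0 \;=\; 11 - \tfrac{2}{3}\,n_f .
```

The dependence of β_0 on the number of active flavors n_f is why the abstract stresses that PMC scale-setting also fixes n_f correctly.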
Bowyer, Ted W. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Kephart, Rosara F.; Eslinger, Paul W. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Friese, Judah I. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Miley, Harry S. [Pacific Northwest National Laboratory (PNNL), Richland, WA (United States); Saey, Paul R. [Vienna University of Technology, Atomic Institute of the Austrian Universities, Vienna (Austria)
2013-01-01T23:59:59.000Z
Fission gases such as ¹³³Xe are used extensively for monitoring the world for signs of nuclear testing in systems such as the International Monitoring System (IMS). These gases are also produced by nuclear reactors and by fission production of ⁹⁹Mo for medical use. Recently, medical isotope production facilities have been identified as the major contributor to the background of radioactive xenon isotopes (radioxenon) in the atmosphere (Saey, et al., 2009). These releases pose a potential future problem for monitoring nuclear explosions if not addressed. As a starting point, a maximum acceptable daily xenon emission rate was calculated that is both scientifically defensible as not adversely affecting the IMS and consistent with what is possible to achieve in an operational environment. This study concludes that an emission of 5×10⁹ Bq/day from a medical isotope production facility would be an acceptable upper limit from the perspective of minimal impact to monitoring stations, and also appears to be an achievable limit for large isotope producers.
Predictability and Diagnosis of Low-Frequency Climate Processes in the Pacific
Dr. Arthur J. Miller
2008-10-15T23:59:59.000Z
Predicting the climate for the coming decades requires understanding both natural and anthropogenically forced climate variability. This variability is important because it has major societal impacts, for example by causing floods or droughts on land or altering fishery stocks in the ocean. Our results fall broadly into three topics: evaluating global climate model predictions; regional impacts of climate changes over western North America; and regional impacts of climate changes over the eastern North Pacific Ocean.
Study of Different Implementation Approaches for a Maximum Power Point Tracker. Florent Boico, Brad Lehman
Lehman, Brad
... will study the design of a maximum power point tracker for low power solar panels (10-50 W). In the process we ... Study of Different Implementation Approaches for a Maximum Power Point Tracker. Florent Boico, Brad Lehman, Northeastern University. Abstract: This paper studies the design of a Maximum Power Point Tracker ...
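One of the standard MPPT implementation approaches such a study would compare is perturb-and-observe hill-climbing. The sketch below is a minimal illustration on a toy power-voltage curve; the panel model, starting voltage, and step size are invented for the example, not taken from the paper:

```python
# Toy panel model (invented for illustration): a concave power-voltage
# curve with a single maximum near v = 17.0 V.
def panel_power(v):
    return max(0.0, -0.5 * (v - 17.0) ** 2 + 40.0)

def perturb_and_observe(v0=12.0, step=0.2, iters=200):
    """Classic P&O hill-climbing: keep perturbing the operating voltage
    in the direction that increased power; reverse on a decrease."""
    v, p = v0, panel_power(v0)
    direction = 1.0
    for _ in range(iters):
        v_next = v + direction * step
        p_next = panel_power(v_next)
        if p_next < p:
            direction = -direction  # overshot the peak, so turn around
        v, p = v_next, p_next
    return v, p

v_mpp, p_mpp = perturb_and_observe()
```

As written, the tracker ends up oscillating within one step of the true maximum power point; the residual oscillation is the well-known cost of fixed-step P&O.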
A Maximum Entropy Algorithm for Rhythmic Analysis of Genome-Wide Expression Patterns
Richardson, David
A Maximum Entropy Algorithm for Rhythmic Analysis of Genome-Wide Expression Patterns. Christopher James Langmead, C. Robertson McClung, Bruce Randall Donald. Abstract: We introduce a maximum entropy-based spectral analysis ... maximum entropy spectral reconstruction is well suited to signals of the type generated ...
A MAXIMUM ENTROPY METHOD FOR SUBNETWORK ORIGIN-DESTINATION TRIP MATRIX ESTIMATION
Kockelman, Kara M.
A MAXIMUM ENTROPY METHOD FOR SUBNETWORK ORIGIN-DESTINATION TRIP MATRIX ESTIMATION ... Chi Xie ... Keywords: maximum entropy, linearization algorithm, column generation ... C. Xie, K.M. Kockelman and S... ... is the trip matrix of the simplified network. This paper discusses a maximum entropy method ...
Maximum entropy and Bayesian approaches to the ratio problem Edward Z. Shen*
Perloff, Jeffrey M.
Maximum entropy and Bayesian approaches to the ratio problem. Edward Z. Shen, Jeffrey M. Perloff. January 2001. Abstract: Maximum entropy and Bayesian approaches provide superior estimates of a ratio ... extra information in the supports for the underlying parameters for generalized maximum entropy (GME) ...
Perloff, Jeffrey M.
Comparison of Maximum Entropy and Higher-Order Entropy Estimators. Amos Golan and Jeffrey M. Perloff. Abstract: We show that the generalized maximum entropy (GME) is the only estimation method ... classes of estimators may outperform the GME estimation rule. Keywords: generalized entropy, maximum ...
A maximum entropy-least squares estimator for elastic origin-destination trip matrix estimation
Kockelman, Kara M.
A maximum entropy-least squares estimator for elastic origin-destination trip matrix estimation ... propose a combined maximum entropy-least squares (ME-LS) estimator, by which O-D flows are distributed ... origin-destination trip table; elastic demand; maximum entropy; least squares; subnetwork analysis; convex combination
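The classical baseline behind maximum-entropy trip-matrix estimation is that, for fixed origin and destination totals, the entropy-maximizing matrix relative to a seed can be computed by iterative proportional fitting. The sketch below illustrates that baseline on toy margins with a uniform seed; it is a generic illustration, not the paper's ME-LS estimator:

```python
import numpy as np

def ipf(seed, row_totals, col_totals, iters=100):
    """Iterative proportional fitting: alternately rescale rows and
    columns of a seed matrix until both margins match. The fixed point
    is the maximum-entropy trip matrix (relative to the seed) that is
    consistent with the given origin/destination totals."""
    T = seed.astype(float).copy()
    for _ in range(iters):
        T *= (row_totals / T.sum(axis=1))[:, None]  # balance origin totals
        T *= (col_totals / T.sum(axis=0))[None, :]  # balance destination totals
    return T

# Toy 3-zone example with a uniform (uninformative) seed; the margins
# must share the same grand total for the balancing to converge.
origins = np.array([100.0, 200.0, 300.0])
dests = np.array([150.0, 250.0, 200.0])
T = ipf(np.ones((3, 3)), origins, dests)
```

With a uniform seed the result reduces to the product form T_ij = O_i * D_j / (total trips), the textbook entropy-maximizing distribution.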
Economic Impact Reporting Framework
Economic Impact Reporting Framework 2007/08, November 2008. STFC Economic Impact Reporting Framework 2007/08. Contents: Introduction ... 1: Overall Economic Impacts ...
Economic Impact Reporting Framework
Economic Impact Reporting Framework 2008/09. STFC Economic Impact Reporting Framework 2008/09. Contents: Introduction ... 1: Overall Economic Impacts ...
THE MAXIMUM ENERGY OF ACCELERATED PARTICLES IN RELATIVISTIC COLLISIONLESS SHOCKS
Sironi, Lorenzo [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States); Spitkovsky, Anatoly [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544-1001 (United States); Arons, Jonathan, E-mail: lsironi@cfa.harvard.edu [Department of Astronomy, Department of Physics, and Theoretical Astrophysics Center, University of California, Berkeley, CA 94720 (United States)
2013-07-01T23:59:59.000Z
The afterglow emission from gamma-ray bursts (GRBs) is usually interpreted as synchrotron radiation from electrons accelerated at the GRB external shock that propagates with relativistic velocities into the magnetized interstellar medium. By means of multi-dimensional particle-in-cell simulations, we investigate the acceleration performance of weakly magnetized relativistic shocks, in the magnetization range 0 ≲ σ ≲ 10⁻¹. The pre-shock magnetic field is orthogonal to the flow, as generically expected for relativistic shocks. We find that relativistic perpendicular shocks propagating in electron-positron plasmas are efficient particle accelerators if the magnetization is σ ≲ 10⁻³. For electron-ion plasmas, the transition to efficient acceleration occurs for σ ≲ 3 × 10⁻⁵. Here, the acceleration process proceeds similarly for the two species, since the electrons enter the shock nearly in equipartition with the ions, as a result of strong pre-heating in the self-generated upstream turbulence. In both electron-positron and electron-ion shocks, we find that the maximum energy of the accelerated particles scales in time as ε_max ∝ t^(1/2). This scaling is shallower than the so-called (and commonly assumed) Bohm limit ε_max ∝ t, and it naturally results from the small-scale nature of the Weibel turbulence generated in the shock layer. In magnetized plasmas, the energy of the accelerated particles increases until it reaches a saturation value ε_sat/(γ₀ m_i c²) ≈ σ^(-1/4), where γ₀ m_i c² is the mean energy per particle in the upstream bulk flow. Further energization is prevented by the fact that the self-generated turbulence is confined within a finite region of thickness ∝ σ^(-1/2) around the shock.
Our results can provide physically grounded inputs for models of non-thermal emission from a variety of astrophysical sources, with particular relevance to GRB afterglows.
The Impact of Using Derived Fuel Consumption Maps to Predict...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Directions in Engine-Efficiency and Emissions Research (DEER) Conference in Detroit, MI, September 27-30, 2010. p-09nuszkowski.pdf
Near-term prediction of impact-relevant heatwave indices
Stevenson, Paul
... out-of-sample 30-yr climatology for years prior to the start of the simulation, removing the mean bias [Murphy, 1988]. The reference forecasts (y) used included observed climatology (average of previous 30). This was calculated with indices which had the bias of the E-OBS climatology of that index removed before calculation.
Predicting High Impact Academic Papers Using Citation Network Features
Christen, Peter
... strategies to remain competitive on a global scale. The utilisation of data mining techniques to make ... R&D to the same extent as other economic powerhouses to take advantage of being 'the first mover' ... with the development of insightful predictive analytics over a range of data sources, it can become an 'early adopter' ...
New tool predicts economic impacts of natural gas stations | Argonne
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
A Workshop to Identify Research Needs and Impacts in Predictive...
presicerpt.pdf
Sandia National Laboratories: predicts photovoltaic array ocular impacts
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
... deformation prediction techniques for assessing mining impacts on surface structures and facilities ... of surface facility to be protected. The Surface Deformation Prediction Software System (SDPS) ... consideration of active, as well as abandoned, mine operations. The damages attributed to this phenomenon ...
Earthquake prediction: Simple methods for complex phenomena
Luen, Bradley
2010-01-01T23:59:59.000Z
... and predictions ... 6.1 Assessing models and predictions ... What are earthquake predictions and forecasts? ...
Annette Schafer, Arthur S. Rood, A. Jeffrey Sondrup
2011-12-23T23:59:59.000Z
Groundwater impacts have been analyzed for the proposed remote-handled low-level waste disposal facility. The analysis was prepared to support the National Environmental Policy Act environmental assessment for the top two ranked sites for the proposed disposal facility. A four-phase screening and analysis approach was documented and applied. Phase I screening was site independent and applied a radionuclide half-life cut-off of 1 year. Phase II screening applied the National Council on Radiation Protection analysis approach and was site independent. Phase III screening used a simplified transport model and site-specific geologic and hydrologic parameters. Phase III neglected the infiltration-reducing engineered cover, the sorption influence of the vault system, dispersion in the vadose zone, vertical dispersion in the aquifer, and the release of radionuclides from specific waste forms. These conservatisms were relaxed in the Phase IV analysis which used a different model with more realistic parameters and assumptions. Phase I screening eliminated 143 of the 246 radionuclides in the inventory from further consideration because each had a half-life less than 1 year. An additional 13 were removed because there was no ingestion dose coefficient available. Of the 90 radionuclides carried forward from Phase I, 57 radionuclides had simulated Phase II screening doses exceeding 0.4 mrem/year. Phase III and IV screening compared the maximum predicted radionuclide concentration in the aquifer to maximum contaminant levels. Of the 57 radionuclides carried forward from Phase II, six radionuclides were identified in Phase III as having simulated future aquifer concentrations exceeding maximum contaminant limits. An additional seven radionuclides had simulated Phase III groundwater concentrations exceeding 1/100th of their respective maximum contaminant levels and were also retained for Phase IV analysis. 
The Phase IV analysis predicted that none of the thirteen remaining radionuclides would exceed the maximum contaminant levels for either site location. The predicted cumulative effective dose equivalent from all 13 radionuclides also was less than the dose criteria set forth in Department of Energy Order 435.1 for each site location. An evaluation of composite impacts showed one site is preferable over the other based on the potential for commingling of groundwater contamination with other facilities.
Brauner, Neima
PREDICTION OF TEMPERATURE-DEPENDENT PROPERTIES BY CORRELATIONS BASED ON SIMILARITY OF MOLECULAR ... and environmental impact assessment, hazard and operability analysis. Therefore, methods for reliable prediction of property data are needed. In particular, prediction of temperature-dependent properties (like vapor ...
Prediction of future fifteen solar cycles
K. M. Hiremath
2007-04-11T23:59:59.000Z
In the previous study (Hiremath 2006a), the solar cycle is modeled as a forced and damped harmonic oscillator, and from all 22 cycles (1755-1996) the long-term amplitudes, frequencies, phases and decay factor are obtained. Using these physical parameters of the previous 22 solar cycles and an autoregressive model, we predict the amplitude and period of the future fifteen solar cycles. The predicted amplitude of the present solar cycle (23) matches the observations very well. The period of the present cycle is found to be 11.73 years. With these encouraging results, we also predict the profiles of the future 15 solar cycles. Important predictions are: (i) the period and amplitude of cycle 24 are 9.34 years and 110 (±11); (ii) the period and amplitude of cycle 25 are 12.49 years and 110 (±11); (iii) during cycles 26 (2030-2042 AD), 27 (2042-2054 AD), 34 (2118-2127 AD), 37 (2152-2163 AD) and 38 (2163-2176 AD), the sun might experience very high sunspot activity; (iv) the sun might also experience very low (around 60) sunspot activity during cycle 31 (2089-2100 AD); and (v) the length of the solar cycles varies from 8.65 yrs for cycle 33 to a maximum of 13.07 yrs for cycle 35.
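The autoregressive extrapolation described here can be sketched in a few lines. The code below fits an AR(2) model by ordinary least squares to a toy 11-year sinusoidal "cycle" and iterates the recurrence forward; the synthetic signal and model order are illustrative choices, not Hiremath's physical parameters:

```python
import numpy as np

def fit_ar(series, order):
    """Least-squares fit of AR(p): x_t = a_1 x_{t-1} + ... + a_p x_{t-p}."""
    X = np.column_stack([series[order - k - 1 : len(series) - k - 1]
                         for k in range(order)])
    y = series[order:]
    coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coeffs

def forecast(series, coeffs, steps):
    """Iterate the fitted recurrence forward from the end of the series."""
    hist = list(series)
    for _ in range(steps):
        hist.append(sum(c * hist[-1 - k] for k, c in enumerate(coeffs)))
    return hist[len(series):]

# Toy 'solar cycle': a clean 11-year sinusoid sampled yearly.
t = np.arange(120)
x = np.sin(2 * np.pi * t / 11.0)
coeffs = fit_ar(x, 2)             # AR(2) captures one oscillation exactly
future = forecast(x, coeffs, 11)  # extrapolate one cycle ahead
```

For a pure sinusoid the fitted coefficients reduce to the identity x_t = 2cos(ω) x_{t-1} - x_{t-2}; real sunspot series of course need noise handling and a higher order.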
Letschert, Virginie
2010-01-01T23:59:59.000Z
Refrigerators, Refrigerator-Freezers, Freezers Preliminary ... Refrigerators, Refrigerator-Freezers, Freezers Pre-NOPR ... lighting, television, refrigerator-freezers, central air ...
Letschert, Virginie
2010-01-01T23:59:59.000Z
... and general lighting incandescent services (GLIS) are ... Lighting: Phase-out of incandescent lighting has been passed ... out of general service incandescent lamps (GSIL) which don't ...
Letschert, Virginie
2010-01-01T23:59:59.000Z
... central air conditioners, water heaters and furnaces) are ... air conditioners, water heaters and furnaces) Unregulated ... 2. Water Heaters: DOE has issued a final ...
Letschert, Virginie
2010-01-01T23:59:59.000Z
... 2010). We consider only the BAT for storage water heaters. Shipments for gas storage water heaters are projected to ... year while electric storage water heaters are projected to ...
Letschert, Virginie
2010-01-01T23:59:59.000Z
... Administration, Annual Energy Outlook 2010 with Projections ... is taken from the Annual Energy Outlook (AEO) 2010 (DOE/EIA-...
Predicting the Operating Behavior of Ceramic Filters from Thermo-Mechanical Ash Properties
Hemmer, G.; Kasper, G.
2002-09-19T23:59:59.000Z
Stable operation, in other words the achievement of a succession of uniform filtration cycles of reasonable length is a key issue in high-temperature gas filtration with ceramic media. Its importance has rather grown in recent years, as these media gain in acceptance due to their excellent particle retention capabilities. Ash properties have been known for some time to affect the maximum operating temperature of filters. However, softening and consequently ''stickiness'' of the ash particles generally depend on composition in a complex way. Simple and accurate prediction of critical temperature ranges from ash analysis--and even more so from coal analysis--is still difficult without practical and costly trials. In general, our understanding of what exactly happens during break-down of filtration stability is still rather crude and general. Early work was based on the concept that ash particles begin to soften and sinter near the melting temperatures of low-melting, often alkaline components. This softening coincides with a fairly abrupt increase of stickiness, that can be detected with powder mechanical methods in a Jenicke shear cell as first shown by Pilz (1996) and recently confirmed by others (Kamiya et al. 2001 and 2002, Kanaoka et al. 2001). However, recording σ-τ diagrams is very time consuming and not the only off-line method of analyzing or predicting changes in thermo-mechanical ash behavior. Pilz found that the increase in ash stickiness near melting was accompanied by shrinkage attributed to sintering. Recent work at the University of Karlsruhe has expanded the use of such thermo-analytical methods for predicting filtration behavior (Hemmer 2001). Demonstrating their effectiveness is one objective of this paper. Finally, our intent is to show that ash softening at near melting temperatures is apparently not the only phenomenon causing problems with filtration, although its impact is certainly the ''final catastrophe''.
There are other significant changes in regeneration at intermediate temperatures, which may lead to long-term deterioration.
Global Health and Economic Impacts of Future Ozone Pollution
Webster, Mort D.
We assess the human health and economic impacts of projected 2000-2050 changes in ozone pollution using the MIT Emissions Prediction and Policy Analysis-Health Effects (EPPA-HE) model, in combination with results from the ...
Application of the Principle of Maximum Conformality to Top-Pair Production
Brodsky, Stanley J.; /SLAC; Wu, Xing-Gang; /SLAC /Chongqing U.
2013-05-13T23:59:59.000Z
A major contribution to the uncertainty of finite-order perturbative QCD predictions is the perceived ambiguity in setting the renormalization scale μ_r. For example, by using the conventional way of setting μ_r ∈ [m_t/2, 2m_t], one obtains the total tt̄ production cross-section σ_tt̄ with the uncertainty Δσ_tt̄/σ_tt̄ ≈ (+3%/-4%) at the Tevatron and LHC even at the present NNLO level. The Principle of Maximum Conformality (PMC) eliminates the renormalization-scale ambiguity in precision tests of Abelian QED and non-Abelian QCD theories. By using the PMC, all nonconformal {β_i}-terms in the perturbative expansion series are summed into the running coupling constant, and the resulting scale-fixed predictions are independent of the renormalization scheme. The correct scale displacement between the arguments of different renormalization schemes is automatically set, and the number of active flavors n_f in the {β_i}-function is correctly determined. The PMC is consistent with the renormalization group property that a physical result is independent of the renormalization scheme and the choice of the initial renormalization scale μ_r^init. The PMC scale μ_r^PMC is unambiguous at finite order. Any residual dependence on μ_r^init for a finite-order calculation will be highly suppressed, since the unknown higher-order {β_i}-terms will be absorbed into the PMC scales of higher-order perturbative terms. We find that such renormalization group invariance can be satisfied to high accuracy for σ_tt̄ at the NNLO level. In this paper we apply PMC scale-setting to predict the tt̄ cross-section σ_tt̄ at the Tevatron and LHC colliders.
It is found that σ_tt̄ remains almost unchanged by varying μ_r^init within the region [m_t/4, 4m_t]. The convergence of the expansion series is greatly improved. For the (qq̄)-channel, which is dominant at the Tevatron, the NLO PMC scale is much smaller than the top-quark mass in the small-x region, and thus the NLO cross-section is increased by about a factor of two. In the case of the (gg)-channel, which is dominant at the LHC, the NLO PMC scale slightly increases with the subprocess collision energy √s, but it is still smaller than m_t for √s ≲ 1 TeV, and the resulting NLO cross-section is increased by about 20%. As a result, a larger σ_tt̄ is obtained in comparison to the conventional scale-setting method, which agrees well with the present Tevatron and LHC data. More explicitly, by setting m_t = 172.9 ± 1.1 GeV, we predict σ(Tevatron, 1.96 TeV) = 7.626 +0.265/-0.257 pb, σ(LHC, 7 TeV) = 171.8 +5.8/-5.6 pb and σ(LHC, 14 TeV) = 941.3 +28.4/-26.5 pb.
Predicting recreation priorities
Hunt, Kindal Alayne
2012-06-07T23:59:59.000Z
... Recreation, Park, and Tourism Sciences. PREDICTING RECREATION PRIORITIES. A Thesis by KINDAL ALAYNE HUNT. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE ... Approved as to style and content by: David Scott (Chair of Committee), John Cromp... (Member), Ro...tton (Member), ... (Head of Department). May 2003. Major Subject: Recreation, Park, and Tourism Sciences. ABSTRACT: Predicting Recreation Priorities. (May...
Thesis Proposal Anytime Prediction
Garlan, David
Such an algorithm rapidly produces an initial prediction and then continues to refine the result as time allows. The approach incrementally applies a sequence of weaker predictors as time progresses, using each new result as it is computed to provide the most accurate prediction available. This issue is further complicated by applications...
Joshua Garland; Elizabeth Bradley
2015-03-05T23:59:59.000Z
Prediction models that capture and use the structure of state-space dynamics can be very effective. In practice, however, one rarely has access to full information about that structure, and accurate reconstruction of the dynamics from scalar time-series data---e.g., via delay-coordinate embedding---can be a real challenge. In this paper, we show that forecast models that employ incomplete embeddings of the dynamics can produce surprisingly accurate predictions of the state of a dynamical system. In particular, we demonstrate the effectiveness of a simple near-neighbor forecast technique that works with a two-dimensional embedding. Even though correctness of the topology is not guaranteed for incomplete reconstructions like this, the dynamical structure that they capture allows for accurate predictions---in many cases, even more accurate than predictions generated using a full embedding. This could be very useful in the context of real-time forecasting, where the human effort required to produce a correct delay-coordinate embedding is prohibitive.
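The near-neighbor forecast idea in a low-dimensional delay embedding can be sketched in a few lines. This is a generic illustration of the technique, not the paper's exact method: the delay value, the Euclidean metric, and the single-neighbor rule are all assumptions made for the sketch.

```python
import math

def delay_embed_2d(series, tau):
    """Map a scalar series to 2D delay vectors (x(t), x(t - tau))."""
    return [(series[i], series[i - tau]) for i in range(tau, len(series))]

def forecast_next(series, tau=1):
    """Predict the next value: find the nearest past neighbor of the
    current delay vector and return that neighbor's successor."""
    points = delay_embed_2d(series, tau)
    current = points[-1]
    best_j, best_d = None, float("inf")
    # search all past embedded points that have a known successor
    for j in range(len(points) - 1):
        d = math.dist(points[j], current)
        if d < best_d:
            best_d, best_j = d, j
    # the embedded point at index j corresponds to series index j + tau,
    # so its successor is series[j + tau + 1]
    return series[best_j + tau + 1]
```

On a perfectly periodic series the nearest neighbor of the current state is an earlier visit to the same state, so the forecast reproduces the cycle exactly; on real data the quality depends on how well the 2D embedding captures the dynamics.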
PRIVACY IMPACT ASSESSMENT: Department of Energy Privacy Impact Assessment (PIA), PIA Template Version 3 (May 2009). Guidance is provided in the template....
Restarting TMI unit one: social and psychological impacts
Sorensen, J.; Soderstrom, J.; Bolin, R.; Copenhaver, E.; Carnes, S.
1983-12-01T23:59:59.000Z
A technical background is provided for preparing an environmental assessment of the social and psychological impacts of restarting the undamaged reactor at Three Mile Island (TMI). Its purpose is to define the factors that may cause impacts, to define what those impacts might be, and to make a preliminary assessment of how impacts could be mitigated. It does not attempt to predict or project the magnitude of impacts. Four major research activities were undertaken: a literature review, focus-group discussions, community profiling, and community surveys. As much as possible, impacts of the accident at Unit 2 were differentiated from the possible impacts of restarting Unit 1. It is concluded that restart will generate social conflict in the TMI vicinity which could lead to adverse effects. Furthermore, between 30 and 50 percent of the population possess characteristics which are associated with vulnerability to experiencing negative impacts. Adverse effects, however, can be reduced with a community-based mitigation strategy.
LIFETIME PREDICTION FOR MODEL 9975 O-RINGS IN KAMS
Hoffman, E.; Skidmore, E.
2009-11-24T23:59:59.000Z
The Savannah River Site (SRS) is currently storing plutonium materials in the K-Area Materials Storage (KAMS) facility. The materials are packaged per the DOE 3013 Standard and transported and stored in KAMS in Model 9975 shipping packages, which include double containment vessels sealed with dual O-rings made of Parker Seals compound V0835-75 (based on Viton® GLT). The outer O-ring of each containment vessel is credited for leaktight containment per ANSI N14.5. O-ring service life depends on many factors, including the failure criterion, environmental conditions, overall design, fabrication quality and assembly practices. A preliminary life prediction model has been developed for the V0835-75 O-rings in KAMS. The conservative model is based primarily on long-term compression stress relaxation (CSR) experiments and Arrhenius accelerated-aging methodology. For model development purposes, seal lifetime is defined as a 90% loss of measurable sealing force. Thus far, CSR experiments have only reached this target level of degradation at temperatures ≥ 300 F. At lower temperatures, relaxation values are more tolerable. Using time-temperature superposition principles, the conservative model predicts a service life of approximately 20-25 years at a constant seal temperature of 175 F. This represents a maximum payload package at a constant ambient temperature of 104 F, the highest recorded in KAMS to date. This is considered a highly conservative value, as such ambient temperatures are only reached on occasion and for short durations. The presence of fiberboard in the package minimizes the impact of such temperature swings, with many hours to several days required for seal temperatures to respond proportionately. At 85 F ambient, a more realistic but still conservative value, bounding seal temperatures are reduced to ~158 F, with an estimated seal lifetime of ~35-45 years.
The actual service life for O-rings in a maximum wattage package likely lies higher than the estimates due to the conservative assumptions used for the model. For lower heat loads at similar ambient temperatures, seal lifetime is further increased. The preliminary model is based on several assumptions that require validation with additional experiments and longer exposures at more realistic conditions. The assumption of constant exposure at peak temperature is believed to be conservative. Cumulative damage at more realistic conditions will likely be less severe but is more difficult to assess based on available data. Arrhenius aging behavior is expected, but non-Arrhenius behavior is possible. Validation of Arrhenius behavior is ideally determined from longer tests at temperatures closer to actual service conditions. CSR experiments will therefore continue at lower temperatures to validate the model. Ultrasensitive oxygen consumption analysis has been shown to be useful in identifying non-Arrhenius behavior within reasonable test periods. Therefore, additional experiments are recommended and planned to validate the model.
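The Arrhenius time-temperature superposition step underlying such lifetime models can be sketched as follows. This is a generic Arrhenius scaling, not the report's actual model; the activation energy and the times and temperatures used below are illustrative placeholders, not values from the report.

```python
import math

R = 8.314  # molar gas constant, J/(mol K)

def arrhenius_shift(t_life_ref_hours, T_ref_K, T_service_K, Ea_J_per_mol):
    """Scale a lifetime measured at an accelerated-aging temperature
    T_ref to a cooler service temperature T_service via the Arrhenius
    shift factor: t_service = t_ref * exp(Ea/R * (1/T_service - 1/T_ref))."""
    a = math.exp(Ea_J_per_mol / R * (1.0 / T_service_K - 1.0 / T_ref_K))
    return t_life_ref_hours * a

# Hypothetical numbers for illustration only (Ea is an assumed value):
# a 1000-hour life observed at ~300 F (~422 K) scaled to ~175 F (~353 K).
life = arrhenius_shift(1000.0, 422.0, 353.0, Ea_J_per_mol=90e3)
```

The exponential dependence on inverse temperature is what makes modest reductions in seal temperature translate into decade-scale gains in predicted life, and also why validating Arrhenius behavior at lower temperatures matters so much.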
LANGMUIR WAVE ACTIVITY: COMPARING THE ULYSSES SOLAR MINIMUM AND SOLAR MAXIMUM ORBITS
California at Berkeley, University of
Langmuir wave activity (at the electron plasma frequency) during the solar minimum and solar maximum orbits of Ulysses is compared. The top three panels correspond to the southern segment of the solar minimum orbit; repeated passes... At high latitudes...
Comparison of VLF Wave Activity in the Solar Wind During Solar Maximum and Minimum
California at Berkeley, University of
Compares the wave observations during the second fast latitude scan (near the solar maximum) with the wave observations by the radio and plasma wave experiments (URAP) of Ulysses during its first orbit, which occurred when the solar activity was approaching...
On the duration of the Paleocene-Eocene thermal maximum Ursula Rohl and Thomas Westerhold
Zachos, James
On the duration of the Paleocene-Eocene thermal maximum (PETM). Ursula Röhl and Thomas Westerhold... University of California, Santa Cruz, California 95064, USA. [1] The Paleocene-Eocene thermal maximum (PETM) is one of... global warming and a massive perturbation of the global carbon cycle from injection of isotopically light...
An Optimal Randomized Algorithm for Maximum Tukey Depth Timothy M. Chan
Chan, Timothy M.
Abstract: We present the first optimal randomized algorithm to compute the maximum Tukey depth (also known as location depth or halfspace depth). Given a point set P, the Tukey depth of a point q ∈ ℝ^d is defined as min{|P ∩ H| : H is a halfspace containing q}. We...
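The quoted depth definition can be checked directly by brute force in 2D. The sketch below is an illustrative computation of the depth of a single point (it enumerates candidate halfplane directions through q), not Chan's optimal algorithm for the maximum-depth point:

```python
import math

def tukey_depth(q, points, n_extra=360):
    """Brute-force Tukey (halfspace) depth of q in 2D: the minimum,
    over closed halfplanes whose boundary passes through q, of the
    number of sample points the halfplane contains."""
    # Candidate halfplane normals: directions perpendicular to each
    # q -> p vector, nudged slightly either way, plus a coarse sweep.
    angles = []
    for (px, py) in points:
        if (px, py) != tuple(q):
            base = math.atan2(py - q[1], px - q[0])
            angles += [base + math.pi / 2 + e for e in (-1e-6, 0.0, 1e-6)]
    angles += [2 * math.pi * k / n_extra for k in range(n_extra)]
    best = len(points)
    for a in angles:
        ux, uy = math.cos(a), math.sin(a)
        # count points in the closed halfplane {x : (x - q) . u >= 0}
        count = sum(1 for (px, py) in points
                    if (px - q[0]) * ux + (py - q[1]) * uy >= -1e-12)
        best = min(best, count)
    return best
```

For the four corners of a unit square, the center has depth 2 (every halfplane through it contains at least two corners) while a corner has depth 1, matching the definition.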
Beating the maximum cooling limit with graded thermoelectric materials. Zhixi Bian and Ali Shakouri
DOI: 10.1063/1.2396895. The maximum cooling temperature is one of the performance parameters for a thermoelectric module. Excluding... The maximum cooling of a single-element thermoelectric material cannot be improved by changing its geometry [3].
Maximum Power Transfer Tracking in a Solar USB Charger for Smartphones
Pedram, Massoud
Abstract: Battery life... poor capacity utilization during solar energy harvesting. Conventional chargers do not perform the maximum power point tracking [2], [3] of the solar panel. We exclude... In this paper, we propose and demonstrate
Maximum-Power-Point Tracking Method of Photovoltaic Using Only Single Current Sensor
Fujimoto, Hiroshi
Keywords: solar cell systems. Abstract: This paper describes a novel strategy of maximum-power-point tracking of a photovoltaic using only a single current sensor, i.e., a Hall-effect CT. A hill-climbing method is employed to seek the maximum power point, using the output power of the photovoltaic obtained from only the current
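The hill-climbing idea can be sketched as a perturb-and-observe loop: nudge the operating point, keep going while measured power rises, and reverse when it falls. The PV power curve below is a toy stand-in for real panel measurements, and all parameter values are illustrative, not from the paper.

```python
def hill_climb_mppt(power_of_v, v0, dv=0.05, steps=200):
    """Hill-climbing MPPT sketch: perturb the operating voltage and
    keep the perturbation direction while measured power increases,
    reversing direction when it decreases."""
    v, direction = v0, +1.0
    p_prev = power_of_v(v)
    for _ in range(steps):
        v += direction * dv
        p = power_of_v(v)
        if p < p_prev:          # power dropped: reverse the perturbation
            direction = -direction
        p_prev = p
    return v

# Illustrative PV power curve (a toy stand-in for panel measurements)
# with its maximum power point at 17.0 V.
pv = lambda v: max(0.0, 60.0 - (v - 17.0) ** 2)
v_mpp = hill_climb_mppt(pv, v0=12.0)
```

The loop settles into a small oscillation around the maximum power point, which is the characteristic steady-state behavior of perturb-and-observe trackers; the step size dv trades tracking speed against oscillation amplitude.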
Vegetation and Fire at the Last Glacial Maximum in Tropical South America
Binford, Michael W.
Chapter 4: Vegetation and Fire at the Last Glacial Maximum in Tropical South America. Francis E. Mayle... temperatures. Keywords: Charcoal · Last Glacial Maximum · pollen · Quaternary · tropical South America. E-mail: Francis.Mayle@ed.ac.uk. In: F. Vimeux et al. (eds.), Past Climate Variability in South America
A Basic Thermodynamic Derivation of the Maximum Overburden Pressure Generated in Frost Heave
Libbrecht, Kenneth G.
Frost heave is a common environmental process in which the freezing of water into ice can produce forces large enough to seriously damage roads and bridges [1]. Contrary to common belief, frost... one can derive the maximum overburden pressure. A similar argument can also produce the maximum... A Frost Heave Engine...
Frequency Moments Inverse Problem and Maximum (Shannon vs. R enyi-Tsallis) Entropy
A case... comparing a) maximization of Shannon's entropy (MaxEnt), and b) maximization of Rényi-Tsallis entropy (maxTent). Concerning... Contents: 1.2 Aims; 2 Frequency moment constraints; 2.1 Characteristics of MaxEnt choice; 2.2 Maximum R...
How Is the Maximum Entropy of a Quantized Surface Related to Its Area?
I. B. Khriplovich; R. V. Korkin
2001-12-27T23:59:59.000Z
The maximum entropy of a quantized surface is demonstrated to be proportional to the surface area in the classical limit. The result is valid in loop quantum gravity, and in a somewhat more general class of approaches to surface quantization. The maximum entropy is calculated explicitly for some specific cases.
Osterloh, Frank
Maximum Theoretical Efficiency Limit of Photovoltaic Devices: Effect of Band Structure on Excited State Entropy. Frank E. Osterloh, Department of Chemistry, University of California Davis, One Shields... We show that the maximum conversion efficiency is limited further by the excited state entropy.
The prediction problem Empirical studies
McCullagh, Peter
Conditional prediction intervals (2009). Peter McCullagh, V. Vovk, I. Nouretdinov, D. Devetyarov and A. Gammerman. Outline: the prediction problem (linear regression model), empirical studies, details and summary.
Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities
Kinney, Justin B
2014-01-01T23:59:59.000Z
Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.
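A minimal example of the most common MaxEnt setup mentioned here, a finite support with a single mean constraint, can make the exponential-family structure concrete. The sketch below is a generic illustration (support, constraint, and bisection bounds are assumptions for the example), not the paper's field-theory formulation.

```python
import math

def maxent_with_mean(support, target_mean, lo=-60.0, hi=60.0, iters=200):
    """Maximum-entropy distribution on a finite support subject to a
    fixed mean: p_i proportional to exp(-lam * x_i) (a Gibbs form),
    with the Lagrange multiplier lam found by bisection."""
    def dist(lam):
        e = [-lam * x for x in support]
        m = max(e)                       # shift exponents to avoid overflow
        w = [math.exp(v - m) for v in e]
        z = sum(w)
        return [wi / z for wi in w]
    def mean_of(lam):
        return sum(x * p for x, p in zip(support, dist(lam)))
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        # mean_of is decreasing in lam: raise lam to lower the mean
        if mean_of(mid) > target_mean:
            lo = mid
        else:
            hi = mid
    return dist(0.5 * (lo + hi))
```

When the target mean equals the unconstrained average of the support, the multiplier converges to zero and the uniform (maximum-entropy) distribution is recovered; tighter constraints tilt the distribution exponentially.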
Predicting Steam Turbine Performance
Harriz, J. T.
PREDICTING STEAM TURBINE PERFORMANCE. James T. Harriz, EIT, Waterland, Viar & Associates, Inc., Wilmington, Delaware. ABSTRACT: Tracking the performance of extraction, back pressure and condensing steam turbines is a crucial part... energy) and test data are presented. Techniques for deriving efficiency curves from each source are described. These techniques can be applied directly to any steam turbine reliability study effort. INTRODUCTION: As the cost of energy resources...
Predicting velocities and turbulent exchange in isolated street canyons and at a neighborhood scale
Hall, Terianne C
2010-01-01T23:59:59.000Z
Urban planners need a fast, simple model to assess the impact of early design phase iterations of neighborhood layout on the microclimate. Specifically, this model should be able to predict the expected urban heat island ...
Vermont, University of
Predictable waves of sequential forest degradation and biodiversity loss spreading from an African... ...tainably reducing carbon storage and biodiversity. The impacts of individual forms of tropical forest... wood production, and biodiversity conservation. Keywords: biodiversity conservation | carbon emissions | ...
Broader source: Energy.gov [DOE]
Original Impact Calculations, from the Tool Kit Framework: Small Town University Energy Program (STEP).
Robert Felix Tournier
2014-04-25T23:59:59.000Z
An undercooled liquid is unstable. The driving force of the glass transition at T_g is a change of the undercooled-liquid Gibbs free energy. The classical Gibbs free energy change for a crystal formation is completed including an enthalpy saving. The crystal-growth critical nucleus is used as a probe to observe the Laplace pressure change Δp accompanying the enthalpy change −V_m·Δp at T_g, where V_m is the molar volume. A stable glass-liquid transition model predicts the specific heat jump of fragile liquids at temperatures smaller than T_g; the Kauzmann temperature T_K, where the liquid entropy excess with regard to the crystal goes to zero; the equilibrium enthalpy between T_K and T_g; the maximum nucleation rate at T_K of superclusters containing magic atom numbers; and the equilibrium latent heats at T_g and T_K. Strong-to-fragile and strong-to-strong liquid transitions at T_g are also described and all their thermodynamic parameters are determined from their specific heat jumps. The existence of fragile liquids quenched in the amorphous state, which do not undergo liquid-liquid transition during heating preceding their crystallization, is predicted. Long ageing times leading to the formation at T_K of a stable glass composed of superclusters containing up to 147 atoms, touching and interpenetrating, are evaluated from nucleation rates.
Predictive Energy Optimization
Dickinson, P.
2013-01-01T23:59:59.000Z
Predictive Energy Optimization. Peter Dickinson. Phone: +1 (415) 233 2306. Email: Peterd@buildingiq.com. Twitter: @Pete_BIQ. BuildingIQ Overview: software to intelligently assess and control HVAC energy for ..., retail, government, hospitality, etc. Integration with all major BMS. 10-30% HVAC energy savings and up to 30% peak load reduction during DR events. Subscription-based service with minimal capex. BuildingIQ optimizes energy use in commercial buildings by transforming existing...
Predictive energy management for hybrid electric vehicles -Prediction horizon and
Paris-Sud XI, Université de
Predictive energy management for hybrid electric vehicles: prediction horizon and battery capacity... of a combined hybrid electric vehicle. The vehicle studied uses a complex transmission composed of planetary gear sets and two electric motors. Keywords: Hybrid vehicles, Energy Management, Predictive control, Optimal...
Microcontroller Servomotor for Maximum Effective Power Point for Solar Cell System
Al-Khalidy, M.; Al-Rawi, O.; Noaman, N.
2010-01-01T23:59:59.000Z
In this paper a maximum power point (MPP) tracking algorithm is developed using a dual-axis servomotor feedback tracking control system. An efficient and accurate servomotor system is used to increase the system efficiency and reduce the solar cell...
Achieving Consistent Maximum Brake Torque with Varied Injection Timing in a DI Diesel Engine
Kroeger, Timothy H
2013-09-19T23:59:59.000Z
The characteristics of combustion for swept injection timings along the maximum brake torque plateau are determined. The research is conducted by varying injection timing at constant engine speed and load while measuring engine emissions and in-cylinder pressure...
Delay Analysis of Maximum Weight Scheduling in Wireless Ad Hoc Networks
Modiano, Eytan H.
This paper studies delay properties of the well-known maximum weight scheduling algorithm in wireless ad hoc networks. We consider wireless networks with either one-hop or multihop flows. Specifically, this paper shows ...
Tropical climate variability from the last glacial maximum to the present
Dahl, Kristina Ariel
2005-01-01T23:59:59.000Z
This thesis evaluates the nature and magnitude of tropical climate variability from the Last Glacial Maximum to the present. The temporal variability of two specific tropical climate phenomena is examined. The first is the ...
Dynamical Reconstruction of Upper-Ocean Conditions in the Last Glacial Maximum Atlantic
Wunsch, Carl
Proxies indicate that the Last Glacial Maximum (LGM) Atlantic Ocean was marked by increased meridional and zonal near sea surface temperature gradients relative to today. Using a least squares fit of a full general circulation ...
Atlantic Ocean circulation at the last glacial maximum : inferences from data and models
Dail, Holly Janine
2012-01-01T23:59:59.000Z
This thesis focuses on ocean circulation and atmospheric forcing in the Atlantic Ocean at the Last Glacial Maximum (LGM, 18-21 thousand years before present). Relative to the pre-industrial climate, LGM atmospheric CO₂ ...
Submodule Integrated Distributed Maximum Power Point Tracking for Solar Photovoltaic Applications
Pilawa-Podgurski, Robert C. N.
This paper explores the benefits of distributed power electronics in solar photovoltaic applications through the use of submodule integrated maximum power point trackers (MPPT). We propose a system architecture that provides ...
Acoustic Space Dimensionality Selection and Combination using the Maximum Entropy Principle
Abdel-Haleem, Yasser H; Renals, Steve; Lawrence, Neil D
2004-01-01T23:59:59.000Z
In this paper we propose a discriminative approach to acoustic space dimensionality selection based on maximum entropy modelling. We form a set of constraints by composing the acoustic space with the space of phone classes, and use a continuous...
Unified behaviour of maximum soot yields of methane, ethane and propane
GĂĽlder, Ă?mer L.
...between the current study and the previous measurements in similar flames with methane, ethane, and propane.
A stochastic model for sediment yield using the Principle of Maximum Entropy
Singh, V. P.; Krstanovic, P. F.
WATER RESOURCES RESEARCH, VOL. 23, NO. 5, PAGES 781-793, MAY 1987. A Stochastic Model for Sediment Yield Using the Principle of Maximum Entropy. V. P. Singh and P. F. Krstanovic, Department of Civil Engineering, Louisiana State University, Baton Rouge. The principle of maximum entropy was applied to derive a stochastic model for sediment yield from upland watersheds. By maximizing the conditional entropy subject to certain constraints, a probability distribution of sediment yield conditioned...
Weather Regime Prediction Using Statistical Learning
A. Deloncle; R. Berk; F. D'Andrea; M. Ghil
2011-01-01T23:59:59.000Z
...the most advanced numerical weather prediction models still have... ...for numerical weather prediction models. Acknowledgements: It...
Kim, Leonard, E-mail: kimlh@umdnj.edu [Department of Radiation Oncology, Cancer Institute of New Jersey, Robert Wood Johnson Medical School, University of Medicine and Dentistry of New Jersey, New Brunswick, NJ (United States); Narra, Venkat; Yue, Ning [Department of Radiation Oncology, Cancer Institute of New Jersey, Robert Wood Johnson Medical School, University of Medicine and Dentistry of New Jersey, New Brunswick, NJ (United States)
2013-07-01T23:59:59.000Z
Recent studies have reported potentially clinically meaningful dose differences when heterogeneity correction is used in breast balloon brachytherapy. In this study, we report on the relationship between heterogeneity-corrected and -uncorrected doses for 2 commonly used plan evaluation metrics: maximum point dose to skin surface and maximum point dose to ribs. Maximum point doses to skin surface and ribs were calculated using TG-43 and Varian Acuros for 20 patients treated with breast balloon brachytherapy. The results were plotted against each other and fit with a zero-intercept line. Max skin dose (Acuros) = max skin dose (TG-43) × 0.930 (R² = 0.995). The average magnitude of difference from this relationship was 1.1% (max 2.8%). Max rib dose (Acuros) = max rib dose (TG-43) × 0.955 (R² = 0.9995). The average magnitude of difference from this relationship was 0.7% (max 1.6%). Heterogeneity-corrected maximum point doses to the skin surface and ribs were proportional to TG-43-calculated doses. The average deviation from proportionality was 1%. The proportional relationship suggests that a different metric other than maximum point dose may be needed to obtain a clinical advantage from heterogeneity correction. Alternatively, if maximum point dose continues to be used in recommended limits while incorporating heterogeneity correction, institutions without this capability may be able to accurately estimate these doses by use of a scaling factor.
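Under the study's zero-intercept fits, the scaling-factor estimate amounts to a one-line calculation. The dose inputs below are arbitrary illustrative numbers, not patient data; only the slopes (0.930 for skin, 0.955 for rib) come from the reported fits.

```python
def estimate_acuros_doses(skin_tg43, rib_tg43):
    """Estimate heterogeneity-corrected (Acuros) maximum point doses
    from TG-43 doses using the fitted zero-intercept slopes reported
    in the study: 0.930 for skin, 0.955 for rib. The reported scatter
    about these fits averaged about 1%."""
    return 0.930 * skin_tg43, 0.955 * rib_tg43

# Illustrative TG-43 inputs (arbitrary values in the same dose units).
skin, rib = estimate_acuros_doses(skin_tg43=125.0, rib_tg43=90.0)
```

Because the fits are proportional with ~1% average deviation, such a scaling is only useful as an estimate; per-patient differences of up to ~3% were reported.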
Estimating the error in simulation prediction over the design space
Shinn, R. (Rachel); Hemez, F. M. (François M.); Doebling, S. W. (Scott W.)
2003-01-01T23:59:59.000Z
This study addresses the assessment of accuracy of simulation predictions. A procedure is developed to validate a simple non-linear model defined to capture the hardening behavior of a foam material subjected to a short-duration transient impact. Validation means that the predictive accuracy of the model must be established, not just in the vicinity of a single testing condition, but for all settings or configurations of the system. The notion of validation domain is introduced to designate the design region where the model's predictive accuracy is appropriate for the application of interest. Techniques brought to bear to assess the model's predictive accuracy include test-analysis correlation, calibration, bootstrapping and sampling for uncertainty propagation and metamodeling. The model's predictive accuracy is established by training a metamodel of prediction error. The prediction error is not assumed to be systematic. Instead, it depends on which configuration of the system is analyzed. Finally, the prediction error's confidence bounds are estimated by propagating the uncertainty associated with specific modeling assumptions.
Magazine R729 Motor prediction
Flanagan, Randy
Primer: Motor prediction. Daniel M. Wolpert and J. Randall Flanagan. The concept of motor prediction was first considered by Helmholtz when trying to understand how we localise visual targets: rather than sensing the position of the eye, the brain predicted the gaze position based on a copy of the motor command acting on the eye.
Nonlinear Sound during Granular Impact
Abram H. Clark; Alec J. Petersen; Lou Kondic; R. P. Behringer
2014-08-08T23:59:59.000Z
How do dynamic stresses propagate in granular material after a high-speed impact? This occurs often in natural and industrial processes. Stress propagation in a granular material is controlled by the inter-particle force law, $f$, in terms of particle deformation, $\delta$, often given by $f \propto \delta^{\alpha}$, with $\alpha > 1$. This means that a linear wave description is invalid when dynamic stresses are large compared to the original confining pressure. With high-speed video and photoelastic grains with varying stiffness, we experimentally study how forces propagate following an impact and explain the results in terms of the nonlinear force law (we measure $\alpha \approx 1.4$). The spatial structure of the forces and the propagation speed, $v_f$, depend on a dimensionless parameter, $M' = t_c v_0/d$, where $v_0$ is the intruder speed at impact, $d$ is the grain diameter, and $t_c$ is a binary collision time between grains with relative speed $v_0$. For $M' \ll 1$, propagating forces are chain-like, and the measured $v_f \propto d/t_c \propto v_b (v_0/v_b)^{\frac{\alpha-1}{\alpha+1}}$, where $v_b$ is the bulk sound speed. For larger $M'$, the force response has a 2D character, and forces propagate faster than predicted by $d/t_c$ due to collective stiffening of a packing.
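The dimensionless parameter and the slow-impact front-speed scaling quoted above can be evaluated directly. The grain and impact numbers below are illustrative placeholders, not the experimental values from the paper.

```python
def mach_like_parameter(t_c, v0, d):
    """Dimensionless impact parameter M' = t_c * v0 / d."""
    return t_c * v0 / d

def front_speed_scaling(v0, v_bulk, alpha=1.4):
    """Predicted force-front speed scaling for slow impacts (M' << 1):
    v_f proportional to v_b * (v0 / v_b)**((alpha - 1) / (alpha + 1))."""
    return v_bulk * (v0 / v_bulk) ** ((alpha - 1.0) / (alpha + 1.0))

# Illustrative numbers (not from the experiment): grain diameter 5 mm,
# binary collision time 0.1 ms, impact speed 1 m/s, bulk speed 300 m/s.
Mp = mach_like_parameter(t_c=1e-4, v0=1.0, d=5e-3)
vf = front_speed_scaling(v0=1.0, v_bulk=300.0)
```

With these numbers M' is well below 1, placing the impact in the chain-like regime, and the scaling gives a front speed well below the bulk sound speed, as the sublinear exponent (alpha - 1)/(alpha + 1) implies for slow intruders.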
Broader source: Energy.gov [DOE]
Presents how Energy Impact Illinois overcame barriers in the multifamily sector through financing partnerships and expert advice.
On the "viscosity maximum" during the uniaxial extension of a low density polyethylene
Teodor I. Burghelea; Zdenek Stary; Helmut Muenstedt
2010-01-13T23:59:59.000Z
An experimental investigation of the viscosity overshoot phenomenon observed during uniaxial extension of a low density polyethylene is presented. For this purpose, traditional integral viscosity measurements on a Muenstedt type extensional rheometer are combined with local measurements based on the in-situ visualization of the sample under extension. For elongational experiments at constant strain rates within a wide range of Weissenberg numbers (Wi), three distinct deformation regimes are identified. Corresponding to low values of Wi (regime I), the tensile stress displays a broad maximum. This maximum can be explained by simple mathematical arguments as a result of low deformation rates and it should not be confused with the viscosity overshoot phenomenon. Corresponding to intermediate values of Wi (regime II), a local maximum of the integral extensional viscosity is systematically observed. However, within this regime, the local viscosity measurements reveal no maximum, but a plateau. Careful inspection of the images of samples within this regime shows that, corresponding to the maximum of the integral viscosity, secondary necks develop along the sample. The emergence of a maximum of the integral elongational viscosity is thus related to the distinct inhomogeneity of deformation states and is not related to the rheological properties of the material. In the fast stretching limit (high Wi, regime III), the overall geometric uniformity of the sample is well preserved, no secondary necks are observed and both the integral and the local transient elongational viscosity show no maximum. A detailed comparison of the experimental findings with results from literature is presented.
Period-luminosity and period-luminosity-colour relations for Mira variables at maximum light
S. M. Kanbur; M. A. Hendry; D. Clarke
1997-04-14T23:59:59.000Z
In this paper we confirm the existence of period-luminosity (PL) and period-luminosity-colour (PLC) relations at maximum light for O and C Mira variables in the LMC. We demonstrate that in the J and H bands the maximum light PL relations have a significantly smaller dispersion than their counterparts at mean light, while the K band and bolometric PL relations have a dispersion comparable to that at mean light. In the J, H and K bands the fitted PL relations for the O Miras are found to have smaller dispersion than those for the C Miras, at both mean and maximum light, while the converse is true for the relations based on bolometric magnitudes. The inclusion of a non-zero log period term is found to be highly significant in all cases except that of the C Miras in the J band, for which the data are found to be consistent with having constant absolute magnitude. This suggests the possibility of employing C Miras as standard candles. We suggest both a theoretical justification for the existence of Mira PL relations at maximum light and a possible explanation of why these relations should have a smaller dispersion than at mean light. The existence of such maximum light relations offers the possibility of extending the range and improving the accuracy of the Mira distance scale to Galactic globular clusters and to other galaxies.
On the maximum value of the cosmic abundance of oxygen and the oxygen yield
L. S. Pilyugin; T. X. Thuan; J. M. Vilchez
2007-01-11T23:59:59.000Z
We search for the maximum oxygen abundance in spiral galaxies. Because this maximum value is expected to occur in the centers of the most luminous galaxies, we have constructed the luminosity - central metallicity diagram for spiral galaxies, based on a large compilation of existing data on oxygen abundances of HII regions in spiral galaxies. We found that this diagram shows a plateau at high luminosities (-22.3 ...) at an oxygen abundance of 12+log(O/H) ~ 8.87. This provides strong evidence that the oxygen abundance in the centers of the most luminous metal-rich galaxies reaches the maximum attainable value of oxygen abundance. Since some fraction of the oxygen (about 0.08 dex) is expected to be locked into dust grains, the maximum value of the true gas+dust oxygen abundance in spiral galaxies is 12+log(O/H) ~ 8.95. This value is a factor of ~ 2 higher than the recently estimated solar value. Based on the derived maximum oxygen abundance in galaxies, we found the oxygen yield to be about 0.0035, depending on the fraction of oxygen incorporated into dust grains.
HFIR Vessel Maximum Permissible Pressures for Operating Period 26 to 50 EFPY (100 MW)
Cheverton, R.D.; Inger, J.R.
1999-01-01T23:59:59.000Z
Extending the life of the HFIR pressure vessel from 26 to 50 EFPY (100 MW) requires an updated calculation of the maximum permissible pressure for a range in vessel operating temperatures (40-120 F). The maximum permissible pressure is calculated using the equal-potential method, which takes advantage of knowledge gained from periodic hydrostatic proof tests and uses the test conditions (pressure, temperature, and frequency) as input. The maximum permissible pressure decreases with increasing time between hydro tests but is increased each time a test is conducted. The minimum values that occur just prior to a test either increase or decrease with time, depending on the vessel temperature. The minimum value of these minimums is presently specified as the maximum permissible pressure. For three vessel temperatures of particular interest (80, 88, and 110 F) and a nominal time of 3.0 EFPY (100 MW) between hydro tests, these pressures are 677, 753, and 850 psi. For the lowest temperature of interest (40 F), the maximum permissible pressure is 295 psi.
Annette Schafer; Arthur S. Rood; A. Jeffrey Sondrup
2011-08-01T23:59:59.000Z
The groundwater impacts have been analyzed for the proposed RH-LLW disposal facility. A four-step analysis approach was documented and applied. This assessment compared the predicted groundwater ingestion dose to the more restrictive of either the 25 mrem/yr all pathway dose performance objective, or the maximum contaminant limit performance objective. The results of this analysis indicate that the groundwater impacts for either proposed facility location are expected to be less than the performance objectives. The analysis was prepared to support the NEPA-EA for the top two ranking of the proposed RH-LLW sites. As such, site-specific conditions were incorporated for each set of results generated. These site-specific conditions were included to account for the transport of radionuclides through the vadose zone and through the aquifer at each site. Site-specific parameters included the thickness of vadose zone sediments and basalts, moisture characteristics of the sediments, and aquifer velocity. Sorption parameters (Kd) were assumed to be very conservative values used in Track II analysis of CERCLA sites at INL. Infiltration was also conservatively assumed to represent higher rates corresponding to disturbed soil conditions. The results of this analysis indicate that the groundwater impacts for either proposed facility location are expected to be less than the performance objectives.
Spectral Modeling of SNe Ia Near Maximum Light: Probing the Characteristics of Hydro Models
E. Baron; S. Bongard; David Branch; Peter H. Hauschildt
2006-03-03T23:59:59.000Z
We have performed detailed NLTE spectral synthesis modeling of two types of 1-D hydro models: the very highly parameterized deflagration model W7 and two delayed detonation models. We find that overall both types do about equally well at fitting well-observed SNe Ia near maximum light. However, the Si II 6150 feature of W7 is systematically too fast, whereas for the delayed detonation models it is also somewhat too fast but significantly better than that of W7. We find that a parameterized mixed model does the best job of reproducing the Si II 6150 line near maximum light, and we study the differences in the models that lead to better fits to normal SNe Ia. We discuss what is required of a hydro model to fit the spectra of observed SNe Ia near maximum light.
Fast singular value decomposition combined maximum entropy method for plasma tomography
Kim, Junghee; Choe, W. [Department of Physics, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701 (Korea, Republic of)
2006-02-15T23:59:59.000Z
The maximum entropy method (MEM) is a widely used reconstruction algorithm in plasma physics. Drawbacks of the conventional MEM are its heavy computational cost and its possible generation of noisy reconstructions. In this article, a modified maximum entropy algorithm is described which speeds up the calculation and handles noise better. Similar to the rapid minimum Fisher information method, the modified maximum entropy algorithm uses simple matrix operations instead of treating a fully nonlinear problem. The preprocessing for rapid tomographic calculation is based on vector operations and the singular value decomposition (SVD). The initial guess of the sought-for emissivity is calculated by SVD, which makes reconstruction about ten times faster than the conventional MEM. The developed fast MEM can therefore be used for intershot tomographic analyses of fusion plasmas.
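The SVD preprocessing step described above can be sketched in a few lines: a truncated-SVD pseudoinverse produces a fast initial emissivity guess from which a MaxEnt iteration could start. This is a minimal illustration on a toy geometry; the chord matrix, profile, noise level, and singular-value cutoff are all assumptions, not the authors' actual setup.

```python
import numpy as np

# Toy tomography setup (all values hypothetical): y = A @ x + noise, where
# rows of A are chord (line-of-sight) weights and x is the emissivity profile.
rng = np.random.default_rng(0)
n_chords, n_pixels = 40, 25
A = rng.random((n_chords, n_pixels))
x_true = np.exp(-((np.arange(n_pixels) - 12.0) ** 2) / 20.0)  # smooth profile
y = A @ x_true + 0.01 * rng.standard_normal(n_chords)

# Truncated-SVD pseudoinverse: keep singular values above a (tunable) cutoff,
# which damps noise amplification and yields a fast initial emissivity guess.
U, s, Vt = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-3 * s[0]))          # number of retained singular values
x0 = Vt[:k].T @ ((U[:, :k].T @ y) / s[:k])

# A MaxEnt iteration would start from x0 instead of a flat initial guess.
residual = np.linalg.norm(A @ x0 - y) / np.linalg.norm(y)
```

Because the SVD is computed once per diagnostic geometry, only the cheap back-substitution has to be repeated between shots, which is where the claimed speedup comes from.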
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01T23:59:59.000Z
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold, as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and as a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. We then evaluate several trading strategies to illustrate how they compare to the maximum potential revenue benchmark, and conclude with a sensitivity analysis with respect to key parameters.
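The arbitrage-only upper bound described above can be sketched as a small linear program: the state of charge is a cumulative sum of hourly purchases/sales, constrained by capacity and power limits. The prices, device parameters, and perfect round-trip efficiency here are hypothetical simplifications, not CAISO data or the paper's full model (which also includes regulation offers).

```python
import numpy as np
from scipy.optimize import linprog

# Hypothetical hourly prices ($/MWh) over one day; illustrative, not CAISO data.
prices = np.array([20, 18, 15, 14, 16, 22, 35, 50, 45, 40, 38, 36,
                   35, 34, 36, 40, 55, 70, 65, 50, 40, 30, 25, 22], dtype=float)
T = len(prices)
E_max, P_max, s0 = 4.0, 1.0, 0.0   # MWh capacity, MW power limit, initial SoC

# Decision variable p[t]: energy bought (+) or sold (-) during hour t.
# Maximizing revenue -sum(prices * p) is the same as minimizing sum(prices * p).
L = np.tril(np.ones((T, T)))       # cumulative-sum operator: SoC = s0 + L @ p
A_ub = np.vstack([L, -L])          # s0 + L @ p <= E_max  and  -(L @ p) <= s0
b_ub = np.concatenate([np.full(T, E_max - s0), np.full(T, s0)])
res = linprog(prices, A_ub=A_ub, b_ub=b_ub,
              bounds=[(-P_max, P_max)] * T, method="highs")
max_revenue = -res.fun             # upper bound on arbitrage-only revenue
```

Because the optimization sees all prices in advance, the result is a perfect-foresight benchmark: real trading strategies can only approach it from below, which is exactly how the paper uses the maximum-revenue calculation.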
Bootstrap Prediction Intervals for Time Series /
Pan, Li
2013-01-01T23:59:59.000Z
1.5 Joint Prediction Intervals ... 1.6 Generalized Bootstrap Prediction ... 1.8.1 Bootstrap Prediction Intervals Based on Studentized ...
Prediction Intervals in Generalized Linear Mixed Models
Yang, Cheng-Hsueh
2013-01-01T23:59:59.000Z
3.1 BLP-Based Prediction Intervals ... 3.2 BP-Based Prediction Intervals ... 4.1.1 BLP-Based Prediction Interval ... 4.1.2 ...
Computational prediction and analysis of protein structure
Meruelo, Alejandro Daniel
2012-01-01T23:59:59.000Z
... and Bowie JU. Kink prediction in membrane proteins. ... Los Angeles. ... Computational prediction and analysis of protein ... OF THE DISSERTATION: Computational prediction and analysis of ...
Empirical Prediction Intervals for County Population Forecasts
Rayer, Stefan; Smith, Stanley K.; Tayman, Jeff
2009-01-01T23:59:59.000Z
... in the determination and prediction of population forecast ... performance of empirical prediction intervals? Table 5 shows ... 26, 163-184. Empirical Prediction Intervals for County ...
Study on Two Optimization Problems: Line Cover and Maximum Genus Embedding
Cao, Cheng
2012-07-16T23:59:59.000Z
STUDY ON TWO OPTIMIZATION PROBLEMS: LINE COVER AND MAXIMUM GENUS EMBEDDING. A Thesis by CHENG CAO, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, May 2012. Major Subject: Computer Science.
What is the maximum rate at which entropy of a string can increase?
Ropotenko, Kostyantyn [State Administration of Communications, Ministry of Transport and Communications of Ukraine 22, Khreschatyk, 01001, Kyiv (Ukraine)
2009-03-15T23:59:59.000Z
According to Susskind, a string falling toward a black hole spreads exponentially over the stretched horizon due to repulsive interactions of the string bits. In this paper such a string is modeled as a self-avoiding walk and the string entropy is found. It is shown that the rate at which information/entropy contained in the string spreads is the maximum rate allowed by quantum theory. The maximum rate at which the black hole entropy can increase when a string falls into a black hole is also discussed.
Hydrodynamic Relaxation of an Electron Plasma to a Near-Maximum Entropy State
Rodgers, D. J.; Servidio, S.; Matthaeus, W. H.; Mitchell, T. B.; Aziz, T. [Department of Physics and Astronomy, University of Delaware, Newark, Delaware 19716 (United States); Montgomery, D. C. [Department of Physics and Astronomy, Dartmouth College, Hanover, New Hampshire 03755 (United States)
2009-06-19T23:59:59.000Z
Dynamical relaxation of a pure electron plasma in a Malmberg-Penning trap is studied, comparing experiments, numerical simulations and statistical theories of weakly dissipative two-dimensional (2D) turbulence. Simulations confirm that the dynamics are approximated well by a 2D hydrodynamic model. Statistical analysis favors a theoretical picture of relaxation to a near-maximum entropy state with constrained energy, circulation, and angular momentum. This provides evidence that 2D electron fluid relaxation in a turbulent regime is governed by principles of maximum entropy.
Maximum-Entropy Closures for Kinetic Theories of Neuronal Network Dynamics
Rangan, Aaditya V.; Cai, David [Courant Institute of Mathematical Sciences, New York University, New York, New York 10012 (United States)
2006-05-05T23:59:59.000Z
We analyze (1+1)D kinetic equations for neuronal network dynamics, which are derived via an intuitive closure from a Boltzmann-like equation governing the evolution of a one-particle (i.e., one-neuron) probability density function. We demonstrate that this intuitive closure is a generalization of moment closures based on the maximum-entropy principle. By invoking maximum-entropy closures, we show how to systematically extend this kinetic theory to obtain higher-order (1+1)D kinetic equations and to include coupled networks of both excitatory and inhibitory neurons.
Maximum Entropy Models of Shortest Path and Outbreak Distributions in Networks
Bauckhage, Christian; Hadiji, Fabian
2015-01-01T23:59:59.000Z
Properties of networks are often characterized in terms of features such as node degree distributions, average path lengths, diameters, or clustering coefficients. Here, we study shortest path length distributions. On the one hand, average as well as maximum distances can be determined therefrom; on the other hand, they are closely related to the dynamics of network spreading processes. Because of the combinatorial nature of networks, we apply maximum entropy arguments to derive a general, physically plausible model. In particular, we establish the generalized Gamma distribution as a continuous characterization of shortest path length histograms of networks of arbitrary topology. Experimental evaluations corroborate our theoretical results.
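The empirical object such a model is fitted to, a shortest path length histogram, can be computed with plain breadth-first search. A minimal sketch on a toy ring graph (the graph and helper name are illustrative, not from the paper):

```python
from collections import Counter, deque

def shortest_path_histogram(adj):
    """Histogram of shortest path lengths over all ordered node pairs, via BFS."""
    hist = Counter()
    for src in adj:
        dist = {src: 0}
        queue = deque([src])
        while queue:
            u = queue.popleft()
            for v in adj[u]:
                if v not in dist:
                    dist[v] = dist[u] + 1
                    queue.append(v)
        for node, d in dist.items():
            if node != src:
                hist[d] += 1
    return hist

# Ring of 6 nodes: each node sees two nodes at distance 1, two at 2, one at 3.
ring = {i: [(i - 1) % 6, (i + 1) % 6] for i in range(6)}
hist = shortest_path_histogram(ring)   # -> {1: 12, 2: 12, 3: 6}
```

Average and maximum distances follow directly from the histogram, which is the connection to path statistics the abstract mentions.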
Benchmarking performance: Environmental impact statements in Egypt
Badr, El-Sayed A., E-mail: ebadr@mans.edu.e [Environmental Sciences Department, Faculty of Science at Damietta, Mansoura University, New Damietta City, PO Box 103 (Egypt); Zahran, Ashraf A., E-mail: ashraf_zahran@yahoo.co [Environmental Studies and Research Institute, Minufiya University, Sadat City, Sixth Zone, PO 32897 (Egypt); Cashmore, Matthew, E-mail: m.cashmore@uea.ac.u [InteREAM, School of Environmental Sciences, University of East Anglia, Norwich, Norfolk, NR4 7TJ (United Kingdom)
2011-04-15T23:59:59.000Z
Environmental impact assessment (EIA) was formally introduced in Egypt in 1994. This short paper evaluates 'how well' the EIA process is working in practice in Egypt, by reviewing the quality of 45 environmental impact statements (EISs) produced between 2000 and 2007 for a variety of project types. The Lee and Colley review package was used to assess the quality of the selected EISs. About 69% of the EISs sampled were found to be of a satisfactory quality. An assessment of the performance of different elements of the EIA process indicates that descriptive tasks tend to be performed better than scientific tasks. The quality of core elements of EIA (e.g., impact prediction, significance evaluation, scoping and consideration of alternatives) appears to be particularly problematic. Variables that influence the quality of EISs are identified and a number of broad recommendations are made for improving the effectiveness of the EIA system.
Nolan, David S.
On the Vertical Decay Rate of the Maximum Tangential Winds in Tropical Cyclones. Daniel P. Stern. ... independent of both the maximum wind speed and the radius of maximum winds (RMW). This can be seen ... winds change with height. Above 2-km height, vertical profiles of Vmaxnorm are nearly independent ...
Mechanistic-based Ductility Prediction
Broader source: Energy.gov (indexed) [DOE]
Predictive modeling & performance: - Performance validation of "demo" structure in corrosion, fatigue, and durability Total project funding DOE: 3,000,000 ...
DIAGNOSIS OF CONDITIONAL MAXIMUM TORNADO DAMAGE PROBABILITIES (P2.20)
Bryan T. Smith, ... Thompson, Harold E. Brooks, Andrew R. Dean, and Kimberly L. Elmore. NOAA/NWS/NCEP/Storm Prediction Center, Norman, Oklahoma; NOAA/National Severe Storms Laboratory, Norman, Oklahoma. ... Smith, NOAA/NWS/NCEP/Storm Prediction Center, 120 David L. Boren Blvd., Suite 2300, Norman, OK 73072
Exact Maximum Likelihood estimator for the BL-GARCH model under elliptical distributed
Paris-Sud XI, Université de
Exact Maximum Likelihood estimator for the BL-GARCH model under elliptical distributed innovations. ... Brisbane QLD 4001, Australia. Abstract: We are interested in the parametric class of Bilinear GARCH (BL-GARCH) models. We examine, in this paper, the BL-GARCH model in a general setting under some non-normal distributions. We ...
Stone, G. A.; DeVito, E. M.; Nease, N. H.
2002-01-01T23:59:59.000Z
Texas adopted in its residential building energy code a maximum 0.40 solar heat gain coefficient (SHGC) for fenestration (e.g., windows, glazed doors, and skylights), a critical driver of cooling energy use, comfort, and peak demand. An analysis...
The chronology of the Last Glacial Maximum and deglacial events in central Argentine Patagonia
The chronology of the Last Glacial Maximum and deglacial events in central Argentine Patagonia ... glaciation and deglaciation in the Lago Pueyrredón valley of central Patagonia, 47.5°S, Argentina. The valley was a major ... and the onset of deglaciation occurred broadly synchronously throughout Patagonia. Deglaciation resulted ...
Paris-Sud XI, Université de
... periods often appear in industry due to a machine breakdown (stochastic) or preventive maintenance ... of machine unavailability. However, in some cases (e.g. preventive maintenance), the maintenance of a machine ... Single-machine scheduling with periodic and flexible periodic maintenance to minimize maximum ...
THE SECOND LAW OF THERMODYNAMICS AND THE GLOBAL CLIMATE SYSTEM: A REVIEW OF THE MAXIMUM
Lorenz, Ralph D.
... to absorption of solar radiation in the climate system is found to be irrelevant to the maximized properties ... from hot to cold places, thereby producing the kinetic energy of the fluid itself. His general ... THE SECOND LAW OF THERMODYNAMICS AND THE GLOBAL CLIMATE SYSTEM: A REVIEW OF THE MAXIMUM ENTROPY ...
Hydraulic limits on maximum plant transpiration and the emergence of the safety-efficiency trade-off
Jackson, Robert B.
Hydraulic limits on maximum plant transpiration and the emergence of the safety-efficiency trade-off ... Key words: hydraulic limitation, safety-efficiency trade-off, soil-plant-atmosphere model, trait ... hydraulics constrain ecosystem productivity by setting physical limits to water transport and hence carbon ...
Performance of Photovoltaic Maximum Power Point Tracking Algorithms in the Presence of Noise
Odam, Kofi
Performance of Photovoltaic Maximum Power Point Tracking Algorithms in the Presence of Noise ... maximum power point tracking (MPPT) algorithms for photovoltaic systems, including how noise affects both tracking speed ... high-performance photovoltaic systems. An intelligent controller adjusts the voltage, current, or impedance seen by a solar ...
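One standard MPPT algorithm whose noise sensitivity such a study would probe is perturb-and-observe. A minimal sketch on a hypothetical P-V curve follows; the curve shape, step size, and noise model are assumptions for illustration, not the paper's test setup.

```python
import random

def pv_power(v):
    """Toy P-V curve with a single maximum near v = 28.9 (hypothetical panel)."""
    return max(0.0, v * 8.0 * (1.0 - (v / 40.0) ** 6))

def perturb_and_observe(v0=20.0, step=0.5, iters=200, noise=0.0, seed=1):
    """Classic P&O hill climbing: keep stepping in whichever direction raised power."""
    rng = random.Random(seed)
    v, direction = v0, 1.0
    p_prev = pv_power(v) + rng.gauss(0.0, noise)
    for _ in range(iters):
        v += direction * step
        p = pv_power(v) + rng.gauss(0.0, noise)  # measured power (possibly noisy)
        if p < p_prev:
            direction = -direction               # power dropped: reverse direction
        p_prev = p
    return v

v_clean = perturb_and_observe(noise=0.0)         # settles near the true maximum
v_noisy = perturb_and_observe(noise=5.0)         # noise can misdirect the steps
```

With no noise the controller converges to a small limit cycle around the maximum; measurement noise causes spurious direction reversals, which is the speed/accuracy degradation this kind of study quantifies.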
Hydroelastic analysis of the floating plate optimized for maximum radiation damping
Damaren, Christopher J.
Hydroelastic analysis of the floating plate optimized for maximum radiation damping. Christopher J. ... In previous work, the problem of optimizing the shape of a thin floating plate to maximize radiation damping ... incompressible ocean of infinite extent. For simplicity, only rigid heave motions were considered and the damping ...
Maximum late Holocene extent of the western Greenland Ice Sheet during the late 20th century
Briner, Jason P.
Maximum late Holocene extent of the western Greenland Ice Sheet during the late 20th century. Samuel ... Keywords: Greenland Ice Sheet; Little Ice Age; 10Be exposure dating; ice-dammed lake; lake sediment core ... the 20th century. This suggests a lagged ice-margin response to prior cooling, such as the Little Ice Age ...
Maximum Likelihood Estimation of Mixture Densities for Binned and Truncated Multivariate
Smyth, Padhraic
Maximum Likelihood Estimation of Mixture Densities for Binned and Truncated Multivariate Data ... in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (1988) ...
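The E-step/M-step alternation at the heart of such an approach can be illustrated with ordinary (unbinned) 1-D data. This is a simplified sketch: the binned-and-truncated variant of McLachlan and Jones replaces the point densities below with probabilities integrated over each bin, which this example omits, and all data and initial values are synthetic.

```python
import math
import random

random.seed(0)
# Synthetic 1-D sample from two Gaussians (illustrative; the paper's setting
# is multivariate *binned/truncated* data, where bin probabilities replace
# the point densities used below).
data = ([random.gauss(0.0, 1.0) for _ in range(400)]
        + [random.gauss(5.0, 1.0) for _ in range(400)])

def normal_pdf(x, mu, sigma):
    return math.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * math.sqrt(2.0 * math.pi))

# EM for a two-component Gaussian mixture.
w, mu, sigma = [0.5, 0.5], [-1.0, 6.0], [1.0, 1.0]
for _ in range(50):
    # E-step: posterior responsibility of each component for each point.
    resp = []
    for x in data:
        p = [w[k] * normal_pdf(x, mu[k], sigma[k]) for k in range(2)]
        total = sum(p)
        resp.append([pk / total for pk in p])
    # M-step: re-estimate weights, means, and standard deviations.
    for k in range(2):
        nk = sum(r[k] for r in resp)
        w[k] = nk / len(data)
        mu[k] = sum(r[k] * x for r, x in zip(resp, data)) / nk
        sigma[k] = math.sqrt(sum(r[k] * (x - mu[k]) ** 2 for r, x in zip(resp, data)) / nk)
```

Each iteration provably does not decrease the likelihood; the binned version keeps the same structure but computes responsibilities per bin rather than per observation.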
Design of wind farm layout for maximum wind energy capture Andrew Kusiak*, Zhe Song
Kusiak, Andrew
Design of wind farm layout for maximum wind energy capture. Andrew Kusiak*, Zhe Song. Intelligent ... Accepted 24 August 2009; available online 22 September 2009. Keywords: Wind farm; Wind turbine; Layout design; Optimization; Evolutionary algorithms; Operations research. Abstract: Wind is one of the most promising ...
Maximum Class Separability for Rough-Fuzzy C-Means Based Brain MR Image Segmentation
Pal, Sankar Kumar
Maximum Class Separability for Rough-Fuzzy C-Means Based Brain MR Image Segmentation. Pradipta Maji ... of brain MR images. The RFCM algorithm comprises a judicious integration of rough sets and fuzzy sets ... with vagueness and incompleteness in class definition of brain MR images, the membership function of fuzzy sets ...
EXTENSION OF THE MAXIMUM POWER REGION OF DOUBLY-SALIENT VARIABLE RELUCTANCE MOTORS
Paris-Sud XI, Université de
...-Salient Variable Reluctance Motors (DSVRM) has been investigated and developed for variable-speed drives ... (variable-frequency generators, wind wheels, machine tools, etc.). In these applications, it is generally necessary to operate in a regime of high-speed flux-weakening (zone of maximum constant power), for a better ...
Sukumar, N.
... the nonlinear system of equations. Maximum-entropy basis functions are used to discretize the two ... A displacement control method is implemented to solve the nonlinear system of equations and to obtain ... tools in the field of structural engineering. Yaw and co-workers [1] presented a blended FE and meshfree ...
Maximum Utility Product Pricing Models and Algorithms Based on Reservation Prices
TunĂ§el, Levent
Maximum Utility Product Pricing Models and Algorithms Based on Reservation Prices. R. Shioda, L. Tun... ... for pricing a product line with several customer segments under the assumption that customers' product choices ... utility model; we formulate it as a mixed-integer programming problem, design heuristics and valid cuts ...
Self-Assembly for Maximum Yields Under Constraints Michael J. Fox and Jeff S. Shamma
Shamma, Jeff S.
Self-Assembly for Maximum Yields Under Constraints. Michael J. Fox and Jeff S. Shamma. Abstract: We present an algorithm that, given any target tree, synthesizes reversible self-assembly rules that provide ... states that cannot be recovered from the unlabeled graph. I. INTRODUCTION. Self-assembly is the phenomenon ...
Generalized Local Maximum Principles for Finite-Difference Operators Author(s): Achi Brandt
Generalized Local Maximum Principles for Finite-Difference Operators. Author(s): Achi Brandt.
Wang, Yuqing
Energy Production, Frictional Dissipation, and Maximum Intensity of a Numerically Simulated ... viewed as a heat engine that converts heat energy extracted from the ocean to kinetic energy of the TC, which is eventually dissipated due to surface friction. Since the energy production rate is a linear function while ...
NOAA Technical Memorandum NWS HYDRO 39: PROBABLE MAXIMUM PRECIPITATION FOR THE UPPER DEERFIELD RIVER
... The Office of Hydrology (HYDRO) of the National Weather Service (NWS) develops procedures for making river ... agencies, and conducts pertinent research and development. NOAA Technical Memorandums in the NWS HYDRO ...
Pauly, Daniel
... In addition, average surface water pH of the ocean has dropped by 0.1 units since pre-industrial times ... Integrating ecophysiology and plankton dynamics into projected changes in maximum fisheries catch ... 7TJ, UK; Centre for Environment, Fisheries and Aquaculture Science, Pakefield Road, Lowestoft ...
Blind Equalization via Approximate Maximum Likelihood Source Separation. Seungjin Choi and Andrzej Cichocki
Choi, Seungjin
Blind Equalization via Approximate Maximum Likelihood Source Separation. Seungjin Choi ... RIKEN, 2-1 Hirosawa, Wako-shi, Saitama 351-0198, Japan. Abstract: Blind equalization of single input multiple output (SIMO) FIR channels can be reformulated as the problem of blind source separation ...
Mandelis, Andreas
Photothermoacoustic imaging of biological tissues: maximum depth characterization comparison ... for Advanced Diffusion-Wave Technologies, Department of Mechanical and Industrial Engineering ... 5 King's College ... induced in light-absorbing materials can be observed either as a transient signal in the time domain ...
Power and Sample Size Determination for a Stepwise Test Procedure for Finding the Maximum Safe Dose
Tamhane, Ajit C.
Power and Sample Size Determination for a Stepwise Test Procedure for Finding the Maximum Safe Dose ... This paper addresses the problem of power and sample size calculation for a stepwise multiple test procedure for finding the maximum safe dose of a compound. A general expression for the power of this procedure is derived. It is used to find the minimum ...
Optimization of stomatal conductance for maximum carbon gain under dynamic soil moisture
Katul, Gabriel
Optimization of stomatal conductance for maximum carbon gain under dynamic soil moisture. Stefano ... Accepted 26 September 2013; available online 9 October 2013. Keywords: Optimization; Photosynthesis; Soil moisture; Stomatal conductance; Transpiration. Abstract: Optimization theories explain a variety ...
A Distributed Approach to Maximum Power Point Tracking for Photovoltaic Sub-Module Differential
Liberzon, Daniel
A Distributed Approach to Maximum Power Point Tracking for Photovoltaic Sub-Module Differential ... of the proposed distributed algorithm. I. INTRODUCTION. In photovoltaic (PV) energy systems, PV modules are often ... of the system, small size and low power ratings of the power electronics circuit components. Due ...
Uncorking the bottle: What triggered the Paleocene/Eocene thermal maximum methane release?
Uncorking the bottle: What triggered the Paleocene/Eocene thermal maximum methane release? Miriam E. ... realms that has been attributed to a massive methane (CH4) release from marine gas hydrate reservoirs. Previously proposed mechanisms for this methane release rely on a change in deepwater source region ...
Electrical Estimation of Conditional Probability for Maximum-likelihood Based PMD Mitigation
Zweck, John
Electrical Estimation of Conditional Probability for Maximum-Likelihood Based PMD Mitigation. Wenze Xi, Tülay Adali, and John Zweck, Department of Computer Science and Electrical Engineering, University ... probability density functions in the presence of both all-order PMD and ASE noise are estimated electronically ...
Turro, Nicholas J.
Hydrogen Molecules inside Fullerene C70: Quantum Dynamics, Energetics, Maximum Occupancy ... of Chemistry, New York University, New York, New York 10003; Department of Chemistry, Brown University, Providence, Rhode Island 02912; and Department of Chemistry, Columbia University, New York, New York 10027. Received ...
Maximum Power Point Tracking Control for Photovoltaic System Using Adaptive Neuro-Fuzzy
Paris-Sud XI, Université de
... conventional controller like Adaptive Neuro-Fuzzy "ANFIS" and fuzzy logic controller is proposed and simulated ... power point tracking (MPPT) technique will be used. Fuzzy logic control "FLC" and adaptive neuro-fuzzy ... Maximum Power Point Tracking Control for Photovoltaic System Using Adaptive Neuro-Fuzzy "ANFIS" ...
Extraction of Spectral Functions from Dyson-Schwinger Studies via the Maximum Entropy Method
Dominik Nickel
2006-07-20T23:59:59.000Z
It is shown how to apply the Maximum Entropy Method (MEM) to numerical Dyson-Schwinger studies for the extraction of spectral functions of correlators from their corresponding Euclidean propagators. Differences to the application in lattice QCD are emphasized and, as an example, the spectral functions of massless quarks in cold and dense matter are presented.
Nasser, Hassan
2014-01-01T23:59:59.000Z
We propose a numerical method to learn Maximum Entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This is an extension of two papers, [10] and [4], which proposed estimating the parameters when only spatial constraints are taken into account. The extension we propose makes it possible to properly handle memory effects in spike statistics for large neural networks.
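The spatial-constraints-only baseline that the cited papers address can be sketched for two binary neurons: fit the Lagrange multipliers of a pairwise MaxEnt (Ising-like) model by matching model moments to target statistics, enumerating the four states exactly. The target statistics and learning rate below are illustrative assumptions; real spike-train fitting would estimate the targets from data and cannot enumerate states for large networks.

```python
import itertools
import math

# Hypothetical target statistics: firing rates and one pairwise moment for two
# binary neurons. The pairwise MaxEnt model is P(s) ~ exp(h1*s1 + h2*s2 + J*s1*s2).
target = {"s1": 0.3, "s2": 0.4, "s1s2": 0.2}

states = list(itertools.product([0, 1], repeat=2))
h1 = h2 = J = 0.0
lr = 0.5
for _ in range(2000):
    weights = [math.exp(h1 * s1 + h2 * s2 + J * s1 * s2) for s1, s2 in states]
    Z = sum(weights)
    probs = [wt / Z for wt in weights]
    # Model moments by exact enumeration (only 4 states for 2 neurons).
    m1 = sum(p * s1 for p, (s1, s2) in zip(probs, states))
    m2 = sum(p * s2 for p, (s1, s2) in zip(probs, states))
    m12 = sum(p * s1 * s2 for p, (s1, s2) in zip(probs, states))
    # Gradient ascent on the log-likelihood: push model moments toward targets.
    h1 += lr * (target["s1"] - m1)
    h2 += lr * (target["s2"] - m2)
    J += lr * (target["s1s2"] - m12)
```

The spatio-temporal extension adds constraints on time-lagged moments, enlarging the state space to spike patterns over a memory window, but the fitting principle (moment matching on an exponential-family model) is the same.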
Beyond Boltzmann-Gibbs statistics: Maximum entropy hyperensembles out of equilibrium. Gavin E. ... at equilibrium? Here, we argue the most appropriate additional parameter is the nonequilibrium entropy ... of ways that the same system can be out of equilibrium. That the equilibrium entropy is maximized given ...
Lattice Field Theory with the Sign Problem and the Maximum Entropy Method
Masahiro Imachi; Yasuhiko Shinno; Hiroshi Yoneyama
2007-02-09T23:59:59.000Z
Although numerical simulation in lattice field theory is one of the most effective tools for studying non-perturbative properties of field theories, it faces serious obstacles arising from the sign problem in some theories, such as finite-density QCD and lattice field theory with the $\theta$ term. We reconsider this problem from the point of view of the maximum entropy method.
A PROXIMITY CONTROL ALGORITHM TO MINIMIZE NONSMOOTH AND NONCONVEX SEMI-INFINITE MAXIMUM
Noll, Dominikus
A PROXIMITY CONTROL ALGORITHM TO MINIMIZE NONSMOOTH AND NONCONVEX SEMI-INFINITE MAXIMUM EIGENVALUE ... in the context of eigenvalue optimization, and [8] gives an overview of the history. The bases for the present ... function, semi-infinite problem, H-norm. 1. Introduction. Proximity control for bundle methods has been ...
Original article Predicted global warming
Boyer, Edmond
Original article: Predicted global warming and Douglas-fir chilling requirements. DD McCreary, DP ... to predicted global warming. Douglas-fir / chilling / global warming / bud burst / reforestation. Résumé ... offer evidence that mean global warming of 3-4 °C could occur within the next century, particularly ...
SALTSTONE DISPOSAL FACILITY: DETERMINATION OF THE PROBABLE MAXIMUM WATER TABLE ELEVATION
Hiergesell, R
2005-04-01T23:59:59.000Z
A coverage depicting the configuration of the probable maximum water table elevation in the vicinity of the Saltstone Disposal Facility (SDF) was developed to support the Saltstone program. This coverage is needed to support the construction of saltstone vaults and to assure that they remain above the maximum elevation of the water table during the Performance Assessment (PA) period of compliance. A previous investigation that calculated the historical high water table beneath the SDF (Cook, 1983) was built upon, incorporating data that has since become available, to refine that estimate and develop a coverage that could be extended to the perennial streams adjacent to the SDF. This investigation used the method of the Cook (1983) report to estimate the probable maximum water table at a group of wells that either existed at one time at or near the SDF or currently exist. These estimates were used to construct 2D contour lines depicting this surface beneath the SDF and to extend them to the nearby hydrologic boundaries at the adjacent perennial streams. Although certain measures were implemented to assure that the contour lines depict a surface above which the water table will not rise, the exact elevation of this surface cannot be known with complete certainty. It is therefore recommended that the construction of saltstone vaults incorporate a vertical buffer of at least 5 feet between the base of the vaults and the depicted probable maximum water table elevation. This should provide assurance that the water table, even under the wettest extreme climatic condition, will never rise to intercept the base of a vault.
National Laboratory Impact Initiative
Broader source: Energy.gov [DOE]
The National Laboratory Impact Initiative supports the relationship between the Office of Energy Efficiency & Renewable Energy and the national laboratory enterprise. The national laboratories...
Broader source: Energy.gov [DOE]
This is a document from Energy Impact Illinois posted on the website of the U.S. Department of Energy's Better Buildings Neighborhood Program.
Pesonen, Amanda Danielle
2012-10-19T23:59:59.000Z
... incivility research may offer important insights into the prediction and prevention of more high-impact forms of mistreatment, if incivility does indeed lie at the root ... (This thesis follows the style of the Journal of Applied Psychology.)
Peltier, W. Richard
... and Climatological Implications. Stephen D. Griffiths and W. Richard Peltier, Department of Physics, University ... (Griffiths and Peltier 2008; Arbic et al. 2008). Such changes can impact the climate system in a variety ...
STFC Economic Impact Reporting Framework 2009/10 Economic Impact
STFC Economic Impact Reporting Framework 2009/10. Contents: Introduction
Statistical static timing analysis considering the impact of power supply noise in VLSI circuits
Kim, Hyun Sung
2009-06-02T23:59:59.000Z
As semiconductor technology is scaled and voltage level is reduced, the impact of the variation in power supply has become very significant in predicting the realistic worst-case delays in integrated circuits. The analysis of power supply noise...
Avoiding Earth Impacts Using Albedo Modification as Applied to 99942 Apophis
Margulieux, Richard Steven
2011-08-08T23:59:59.000Z
Current orbital solutions for 99942 Apophis predict a close approach to the Earth in April 2029. The parameters of that approach affect the future trajectory of Apophis, potentially leading to an impact in 2036, 2056, 2068, etc. The dynamic model...
The economic impact of global climate and tropospheric ozone on world agricultural production
Wang, Xiaodu
2005-01-01T23:59:59.000Z
The objective of my thesis is to analyze the economic impact on agricultural production from changes in climate and tropospheric ozone, and related policy interventions. The analysis makes use of the Emissions Prediction ...
Deschenes, Olivier; Greenstone, Michael
2004-01-01T23:59:59.000Z
1989): “Global Climate Change and Agriculture: An Economic … part of climate change for agriculture. These predicted … Agriculture,” in Robert Mendelsohn and James E. Neumann (editors), The Impact of Climate Change
Parameterization of Maximum Wave Heights Forced by Hurricanes: Application to Corpus Christi, Texas
Taylor, Sym 1978-
2012-12-07T23:59:59.000Z
of open-coast and bay environment hurricane wave conditions and (2) expedient prediction, for rapid evaluation, of wave hazards as a function of hurricane parameters. This thesis presents the coupled ADCIRC-SWAN numerical model results of wave height...
On the minimum and maximum mass of neutron stars and the delayed collapse
Strobel, K; Strobel, Klaus; Weigel, Manfred K.
2001-01-01T23:59:59.000Z
The minimum and maximum mass of protoneutron stars and neutron stars are investigated. The hot dense matter is described by relativistic (including hyperons) and non-relativistic equations of state. We show that the minimum mass ($\\sim$ 0.88 - 1.28 $M_{\\sun}$) of a neutron star is determined by the earliest stage of its evolution and is nearly unaffected by the presence of hyperons. The maximum mass of a neutron star is limited by the protoneutron star or hot neutron star stage. Further we find that the delayed collapse of a neutron star into a black hole during deleptonization is not only possible for equations of state with softening components, as for instance, hyperons, meson condensates etc., but also for neutron stars with a pure nucleonic-leptonic equation of state.
On the minimum and maximum mass of neutron stars and the delayed collapse
Klaus Strobel; Manfred K. Weigel
2000-12-14T23:59:59.000Z
The minimum and maximum mass of protoneutron stars and neutron stars are investigated. The hot dense matter is described by relativistic (including hyperons) and non-relativistic equations of state. We show that the minimum mass ($\\sim$ 0.88 - 1.28 $M_{\\sun}$) of a neutron star is determined by the earliest stage of its evolution and is nearly unaffected by the presence of hyperons. The maximum mass of a neutron star is limited by the protoneutron star or hot neutron star stage. Further we find that the delayed collapse of a neutron star into a black hole during deleptonization is not only possible for equations of state with softening components, as for instance, hyperons, meson condensates etc., but also for neutron stars with a pure nucleonic-leptonic equation of state.
Beer, M.
1980-12-01T23:59:59.000Z
The maximum likelihood method for the multivariate normal distribution is applied to the case of several individual eigenvalues. Correlated Monte Carlo estimates of the eigenvalue are assumed to follow this prescription and aspects of the assumption are examined. Monte Carlo cell calculations using the SAM-CE and VIM codes for the TRX-1 and TRX-2 benchmark reactors, and SAM-CE full core results are analyzed with this method. Variance reductions of a few percent to a factor of 2 are obtained from maximum likelihood estimation as compared with the simple average and the minimum variance individual eigenvalue. The numerical results verify that the use of sample variances and correlation coefficients in place of the corresponding population statistics still leads to nearly minimum variance estimation for a sufficient number of histories and aggregates.
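The combination step described above, a minimum-variance (maximum-likelihood under multivariate normality) pooling of correlated Monte Carlo estimates of a single eigenvalue, can be sketched as follows; the function name and the toy numbers are illustrative, not taken from the report:

```python
import numpy as np

def combine_correlated(estimates, cov):
    """Minimum-variance combination of correlated estimates of one quantity.

    Under a multivariate-normal model this is also the maximum-likelihood
    estimate: weights w = cov^{-1} 1 / (1' cov^{-1} 1), variance 1 / (1' cov^{-1} 1).
    """
    cov = np.asarray(cov, dtype=float)
    ones = np.ones(len(estimates))
    w = np.linalg.solve(cov, ones)          # cov^{-1} 1
    variance = 1.0 / (ones @ w)             # combined variance
    w = w * variance                        # normalize weights to sum to 1
    combined = w @ np.asarray(estimates, dtype=float)
    return combined, variance

# Two hypothetical correlated eigenvalue estimates with correlation 0.5
est = [1.002, 0.998]
cov = [[4e-6, 2e-6], [2e-6, 4e-6]]
k, var = combine_correlated(est, cov)
```

With this symmetric toy covariance the weights are equal and the combined variance (3e-6) is smaller than either individual variance (4e-6), the kind of reduction relative to a single estimate that the abstract reports.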
An Ad-Hoc Method for Obtaining chi**2 Values from Unbinned Maximum Likelihood Fits
M. Williams; C. A. Meyer
2008-06-30T23:59:59.000Z
A common goal in an experimental physics analysis is to extract information from a reaction with multi-dimensional kinematics. The preferred method for such a task is typically the unbinned maximum likelihood method. In fits using this method, the likelihood is a goodness-of-fit quantity in that it effectively discriminates between available hypotheses; however, it does not provide any information as to how well the best hypothesis describes the data. In this paper, we present an ad-hoc procedure for obtaining chi**2/n.d.f. values from unbinned maximum likelihood fits. This method does not require binning the data, making it very applicable to multi-dimensional problems.
Maximum-Entropy Meshfree Method for Compressible and Near-Incompressible Elasticity
Ortiz, A; Puso, M A; Sukumar, N
2009-09-04T23:59:59.000Z
Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch test satisfaction to machine precision. Secondly, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.
Hanel, Rudolf; Gell-Mann, Murray
2014-01-01T23:59:59.000Z
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems, by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there has been an ongoing controversy as to whether the notion of the maximum entropy principle can be extended in a meaningful way to non-extensive, non-ergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for non-ergodic and complex statistical systems if their relative entropy can be factored into a general...
Maximum-entropy principle for static and dynamic high-field transport in semiconductors
Trovato, M. [Dipartimento di Matematica, Universita di Catania, Viale A. Doria, 95125 Catania (Italy); Reggiani, L. [Dipartimento di Ingegneria dell' Innovazione e Nanotechnology National Laboratory of CNR-INFM, Universita di Lecce, Via Arnesano s/n, 73100 Lecce (Italy)
2006-06-15T23:59:59.000Z
Within the maximum entropy principle we present a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport under electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. The theoretical approach is applied to n-Si at 300 K and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with experimental data.
Computation of the maximum loadability of a power system using nonlinear optimization
Khabirov, Abdufarrukh
2001-01-01T23:59:59.000Z
… Between Generator and Load; E. Flowchart for Optimization Program; F. Tutorial Example; G. Conclusion. V. SIMULATION RESULTS: A. Introduction; B. Results of Simulation for Maximum Loadability of the Total System. … of this work starting from the basics. Chapter III will cover concepts of power flow and loadability along with a tutorial example. The literature survey of this topic and previous work, as well as the problem statement and solution method, will be covered...
An Analysis of Maximum Residential Energy Efficiency in Hot and Humid Climates
Malhotra, M.; Haberl, J. S.
2006-01-01T23:59:59.000Z
the high efficiency instantaneous water heater with electronic ignition. The largest equipment energy savings (20%) was achieved from the horizontal-axis clothes washer. Compact fluorescent lamps (CFLs) saved 75% of lighting energy use. Among all... AN ANALYSIS OF MAXIMUM RESIDENTIAL ENERGY EFFICIENCY IN HOT AND HUMID CLIMATES. Mini Malhotra, Graduate Research Assistant; Jeff Haberl, Ph.D., P.E., Professor/Associate Director, Energy Systems Laboratory, Texas A&M University College...
A maximum entropy theorem with applications to the measurement of biodiversity
Leinster, Tom
2009-01-01T23:59:59.000Z
This is a preliminary article stating and proving a new maximum entropy theorem. The entropies that we consider can be used as measures of biodiversity. In that context, the question is: for a given collection of species, which frequency distribution(s) maximize the diversity? The theorem provides the answer. The chief surprise is that although we are dealing not just with a single entropy, but a one-parameter family of entropies, there is a single distribution maximizing all of them simultaneously.
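For the familiar special case of Shannon entropy, the diversity-maximizing frequency distribution is the uniform one, which a short numerical check illustrates. This is a sketch of that one special case only; the theorem in the abstract covers a whole one-parameter family of entropies:

```python
import numpy as np

def shannon_entropy(p):
    """Shannon entropy of a frequency distribution (natural log)."""
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                      # 0 * log 0 = 0 by convention
    return float(-np.sum(p * np.log(p)))

n = 5                                 # number of species
h_uniform = shannon_entropy(np.full(n, 1.0 / n))   # equals log(n)

# No random frequency distribution over n species exceeds the uniform entropy
rng = np.random.default_rng(0)
for _ in range(1000):
    q = rng.dirichlet(np.ones(n))
    assert shannon_entropy(q) <= h_uniform + 1e-9
```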
Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle
Barletti, Luigi, E-mail: luigi.barletti@unifi.it [Dipartimento di Matematica e Informatica “Ulisse Dini”, Università degli Studi di Firenze, Viale Morgagni 67/A, 50134 Firenze (Italy)
2014-08-15T23:59:59.000Z
The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.
REMARKS ON THE MAXIMUM ENTROPY METHOD APPLIED TO FINITE TEMPERATURE LATTICE QCD.
UMEDA, T.; MATSUFURU, H.
2005-07-25T23:59:59.000Z
We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests which one should perform to ensure the reliability of the results, and also apply them using mock and lattice QCD data.
Towards the application of the Maximum Entropy Method to finite temperature Upsilon Spectroscopy
M. Oevers; C. Davies; J. Shigemitsu
2000-09-22T23:59:59.000Z
According to the Narnhofer Thirring Theorem interacting systems at finite temperature cannot be described by particles with a sharp dispersion law. It is therefore mandatory to develop new methods to extract particle masses at finite temperature. The Maximum Entropy method offers a path to obtain the spectral function of a particle correlation function directly. We have implemented the method and tested it with zero temperature Upsilon correlation functions obtained from an NRQCD simulation. Results for different smearing functions are discussed.
Maximum entropy deconvolution of resonant inelastic x-ray scattering spectra
J. Laverock; A. R. H. Preston; D. Newby Jr; K. E. Smith; S. B. Dugdale
2012-02-10T23:59:59.000Z
Resonant inelastic x-ray scattering (RIXS) has become a powerful tool in the study of the electronic structure of condensed matter. Although the linewidths of many RIXS features are narrow, the experimental broadening can often hamper the identification of spectral features. Here, we show that the Maximum Entropy technique can successfully be applied in the deconvolution of RIXS spectra, improving the interpretation of the loss features without a severe increase in the noise ratio.
Remarks on the Maximum Entropy Method applied to finite temperature lattice QCD
Takashi Umeda; Hideo Matsufuru
2005-10-05T23:59:59.000Z
We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtleties of MEM in cases where one does not have a sufficient number of data points, such as at finite temperature. Taking these points into account, we suggest several tests which one should perform to ensure the reliability of the results, and also apply them using mock and lattice QCD data.
Predictability in Temporal Networks
Marcu, Alexandru
Predictability in Temporal Networks. Alexandru Marcu. Supervisors: Sune Lehmann Jørgensen, Jakob Eg Larsen. … First and foremost, I would like to thank my supervisors, Sune Lehmann and Jakob Eg Larsen
Model prediction for reactor control
Ardell, G.G.; Gumowski, B.
1983-06-01T23:59:59.000Z
Model prediction is offered as a substitute for lengthy analysis of sample procedures to control product properties not amenable to direct measurement during chemical processing. A computer model of a reactor is set up, and control actions, based on current predicted values, are established. The control is based on predicted ''measurements'' which are derived using a dynamic process model solved on-line. The model is corrected by real measurements during process operation. A two-phase exothermic catalyzed reaction, with the objective of producing material with specified properties, is tested in this paper. The model prediction performance was very good. Model systems enable more effective control to be exercised than the sample method.
Optimal prediction in molecular dynamics
Benjamin Seibold
2008-08-22T23:59:59.000Z
Optimal prediction approximates the average solution of a large system of ordinary differential equations by a smaller system. We present how optimal prediction can be applied to a typical problem in the field of molecular dynamics, in order to reduce the number of particles to be tracked in the computations. We consider a model problem, which describes a surface coating process, and show how asymptotic methods can be employed to approximate the high dimensional conditional expectations, which arise in optimal prediction. The thus derived smaller system is compared to the original system in terms of statistical quantities, such as diffusion constants. The comparison is carried out by Monte-Carlo simulations, and it is shown under which conditions optimal prediction yields a valid approximation to the original system.
Business Development - Predictive Maintenance Products
Sceiczina, P.
2005-01-01T23:59:59.000Z
BUSINESS DEVELOPMENT - PREDICTIVE MAINTENANCE PRODUCTS Phillip Sceiczina, ifm efector, inc. In this time of global competitiveness, more companies are focusing on reducing manufacturing costs to increase profits. Energy costs can be a...
Pflugrath, Brett D.; Brown, Richard S.; Carlson, Thomas J.
2012-03-01T23:59:59.000Z
This study investigated the maximum depth at which juvenile Chinook salmon Oncorhynchus tshawytscha can acclimate by attaining neutral buoyancy. Depth of neutral buoyancy is dependent upon the volume of gas within the swim bladder, which greatly influences the occurrence of injuries to fish passing through hydroturbines. We used two methods to obtain maximum swim bladder volumes that were transformed into depth estimations: the increased excess mass test (IEMT) and the swim bladder rupture test (SBRT). In the IEMT, weights were surgically added to the fish's exterior, requiring the fish to increase swim bladder volume in order to remain neutrally buoyant. SBRT entailed removing and artificially increasing swim bladder volume through decompression. From these tests, we estimate the maximum acclimation depth for juvenile Chinook salmon is a median of 6.7 m (range = 4.6-11.6 m). These findings have important implications for survival estimates, studies using tags, hydropower operations, and survival of juvenile salmon that pass through large Kaplan turbines typical of those found within the Columbia and Snake River hydropower system.
Information theory and climate prediction
Leung, Lai-yung
1988-01-01T23:59:59.000Z
INFORMATION THEORY AND CLIMATE PREDICTION A Thesis by LAI-YUNG LEUNG Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE May 1988 Major Subject...: Meteorology INFORMATION THEORY AND CLIMATE PREDICTION A Thesis by LAI-YUNG LEUNG Approved as to style and content by: Gerald R. North (Chairman) George L. Huebner (Member) Robert O. Reid (Member) James R. Scoggins (Head of Department) May 1988...
Feigon, Brooke
Predicting future climate change for the UK and East Anglia … confidence in the following future changes in UK climate: average temperature increases; summer temperature … part in farming, so we might expect these changes to have an impact on agriculture, affecting both
Environmental impact report (draft)
Not Available
1980-05-01T23:59:59.000Z
The three projects as proposed by Pacific Gas and Electric Company and the environmental analysis of the projects are discussed. Sections on the natural and social environments of the proposed projects and their surrounding areas consist of descriptions of the setting, discussions of the adverse and beneficial consequences of the project, and potential mitigation measures to reduce the effects of adverse impacts. The Environmental Impact Report includes discussions of unavoidable adverse effects, irreversible changes, long-term and cumulative impacts, growth-inducing effects, and feasible alternatives to the project. (MHR)
Holzer, Mark; Primeau, Francois W; Smethie, William M; Khatiwala, Samar
2010-01-01T23:59:59.000Z
Gull (1991), Bayesian maximum entropy image reconstruction … F. Primeau (2006), A maximum entropy approach to water mass … Southern Ocean? A maximum entropy approach to global water
Big Data, Big Impact: New Possibilities for International Development
Chen, Keh-Hsun
Executive Summary … for harnessing big data. … Financial Services: data gleaned from mobile money services can provide deep … is able to predict the magnitude of a disease outbreak half way around the world. Similarly, an aid agency
CLIMATE CHANGE IMPACTS ON HYDROELECTRIC POWER G.P. Harrison(1),
Harrison, Gareth
G.P. Harrison(1), H.W. Whittington(1) and S.W. Gundry … implications for the design, operation and viability of hydroelectric power stations. This describes attempts to predict and quantify these impacts. It details a methodology for computer-based modelling of hydroelectric
What Is An Environmental Impact ...
National Nuclear Security Administration (NNSA)
What Is An Environmental Impact Statement? An EIS is prepared in a series of steps: gathering government and public comments to define...
Energy Impact Illinois Rebates
Broader source: Energy.gov [DOE]
The Energy Impact Illinois program offers rebates for implementing energy efficient measures. Homeowners and businesses can use the "Find Energy Savings Actions" tool to see all the programs they...
Broader source: Energy.gov [DOE]
Energy Impact Illinois partners with local banks and credit unions to provide low-interest loans to help reduce the upfront costs associated with energy efficiency improvements. Loans can be used...
Brunsen, W.; Worley, W.; Frost, E.
1988-09-30T23:59:59.000Z
This is a progress report on the first phase of a project to measure the economic impacts of a rapidly changing U.S. target base. The purpose of the first phase is to designate and test the macroeconomic impact analysis model. Criteria were established for a decision-support model. Additional criteria were defined for an interactive macroeconomic impact analysis model. After a review of several models, the Economic Impact Forecast System model of the U.S. Army Construction Research Laboratory was selected as the appropriate input-output tool that can address local and regional economic analysis. The model was applied to five test cases to demonstrate its utility and define possible revisions to meet project criteria. A plan for EIFS access was defined at three levels. Objectives and tasks for scenario refinement are proposed.
Impacted material placement plans
Hickey, M.J.
1997-01-29T23:59:59.000Z
Impacted material placement plans (IMPP) are documents identifying the essential elements in placing remediation wastes into disposal facilities. Remediation wastes or impacted material(s) are those components used in the construction of the disposal facility exclusive of the liners and caps. The components might include soils, concrete, rubble, debris, and other regulatory approved materials. The IMPP provides the details necessary for interested parties to understand the management and construction practices at the disposal facility. The IMPP should identify the regulatory requirements from applicable DOE Orders, the ROD(s) (where a part of a CERCLA remedy), closure plans, or any other relevant agreements or regulations. Also, how the impacted material will be tracked should be described. Finally, detailed descriptions of what will be placed and how it will be placed should be included. The placement of impacted material into approved on-site disposal facilities (OSDF) is an integral part of gaining regulatory approval. To obtain this approval, a detailed plan (Impacted Material Placement Plan [IMPP]) was developed for the Fernald OSDF. The IMPP provides detailed information for the DOE, site generators, the stakeholders, regulatory community, and the construction subcontractor placing various types of impacted material within the disposal facility.
Neural networks predict well inflow performance
Alrumah, Muhammad K.
2004-09-30T23:59:59.000Z
Predicting well inflow performance relationship accurately is very important for production engineers. From these predictions, future plans for handling and improving well performance can be established. One method of predicting well inflow...
Time-predictable Computer Architecture
Schoeberl, Martin
Time-predictable Computer Architecture. Martin Schoeberl, Institute of Computer Engineering, Vienna. … Then we propose solutions for a time-predictable computer architecture. The proposed architecture … in computer architectures are: pipelining, instruction and data caching, dynamic branch prediction, out
D. Sutton; B. R. Johnson; M. L. Brown; P. Cabella; P. G. Ferreira; K. M. Smith
2008-07-23T23:59:59.000Z
Map-making presents a significant computational challenge to the next generation of kilopixel CMB polarisation experiments. Years' worth of time-ordered data (TOD) from thousands of detectors will need to be compressed into maps of the T, Q and U Stokes parameters. Fundamental to the science goal of these experiments, the observation of B-modes, is the ability to control noise and systematics. In this paper, we consider an alternative to the maximum-likelihood method, called destriping, where the noise is modelled as a set of discrete offset functions and then subtracted from the time-stream. We compare our destriping code (Descart: the DEStriping CARTographer) to a full maximum-likelihood map-maker, applying them to 200 Monte-Carlo simulations of time-ordered data from a ground-based, partial-sky polarisation modulation experiment. In these simulations, the noise is dominated by either detector or atmospheric 1/f noise. Using prior information of the power spectrum of this noise, we produce destriped maps of T, Q and U which are negligibly different from optimal. The method does not filter the signal or bias the E or B-mode power spectra. Depending on the length of the destriping baseline, the method delivers between 5 and 22 times improvement in computation time over the maximum-likelihood algorithm. We find that, for the specific case of single-detector maps, it is essential to destripe the atmospheric 1/f in order to detect B-modes, even though the Q and U signals are modulated by a half-wave plate spinning at 5 Hz.
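The core idea of destriping, modelling the low-frequency noise as discrete offsets and subtracting them from the time-stream, can be illustrated with a deliberately naive sketch. The function name is assumed for illustration; the real Descart/maximum-likelihood codes solve for the offsets jointly with the sky map using a noise prior, which this toy version ignores:

```python
import numpy as np

def destripe(tod, baseline_len):
    """Toy destriper: model 1/f noise as one constant offset per baseline
    of length `baseline_len` and subtract each baseline's mean.
    (Ignores the sky signal entirely, unlike a real destriper.)"""
    tod = np.asarray(tod, dtype=float)
    n = len(tod) // baseline_len * baseline_len      # drop any ragged tail
    chunks = tod[:n].reshape(-1, baseline_len)
    offsets = chunks.mean(axis=1, keepdims=True)     # per-baseline offset estimate
    return (chunks - offsets).ravel()

# A time stream consisting purely of slowly drifting offsets destripes to zero
tod = np.repeat([1.0, -2.0, 3.0], 8)
cleaned = destripe(tod, 8)
```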
Quaternary Science Reviews 23 (2004) 529–560. Holocene thermal maximum in the western Arctic (0–180°W)
Oswald, Wyatt
… of Colorado, Boulder, USA; Department of Geography, University of Oregon, Eugene, USA; College of Forest …; Department of Geography, University of Trondheim, Norway; Department of Geography, University of Cincinnati, Ohio, USA … through its impact on surface energy balance and ocean circulation. The lingering ice also attests
On Weyl channels being covariant with respect to the maximum commutative group of unitaries
G. G. Amosov
2006-08-10T23:59:59.000Z
We investigate the Weyl channels being covariant with respect to the maximum commutative group of unitary operators. This class includes the quantum depolarizing channel and the "two-Pauli" channel as well. Then, we show that our estimation of the output entropy for a tensor product of the phase damping channel and the identity channel based upon the decreasing property of the relative entropy allows us to prove the additivity conjecture for the minimal output entropy for the quantum depolarizing channel in any prime dimension and for the "two Pauli" channel in the qubit case.
A reliable, fast and low cost maximum power point tracker for photovoltaic applications
Enrique, J.M.; Andujar, J.M.; Bohorquez, M.A. [Departamento de Ingenieria Electronica, de Sistemas Informaticos y Automatica, Universidad de Huelva (Spain)
2010-01-15T23:59:59.000Z
This work presents a new maximum power point tracker system for photovoltaic applications. The developed system is an analog version of the ''P and O-oriented'' algorithm. It maintains that algorithm's main advantages: simplicity, reliability and easy practical implementation, and avoids its main disadvantages: inaccuracy and relatively slow response. Additionally, the developed system can be implemented in a practical way at low cost, which is an added value. The system also shows excellent behavior under very fast variations in incident radiation levels. (author)
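The "perturb and observe" (P&O) rule that such trackers implement can be sketched in a few lines. This is a simplification with hypothetical names: a real tracker compares successive power measurements at the operating point rather than probing the power curve directly, and the PV curve below is invented for illustration:

```python
def perturb_and_observe(power, v, step=0.1):
    """One P&O iteration: step the operating voltage in whichever
    direction increases the extracted power."""
    return v + step if power(v + step) > power(v) else v - step

# Hypothetical PV power curve with its maximum power point at 17 V
def pv_power(v):
    return 100.0 - (v - 17.0) ** 2

v = 10.0
for _ in range(200):
    v = perturb_and_observe(pv_power, v)
# v now hovers within one step of the maximum power point
```

The final oscillation around the maximum is the classic P&O behavior; the "relatively slow response" the abstract mentions comes from the fixed step size.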
A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.
Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V,
2014-09-01T23:59:59.000Z
In this paper, we derive a new optimal change metric to be used in synthetic aperture RADAR (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and is optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.
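The paper's new ML change metric is not reproduced here, but the classical baseline that two-pass CCD improves upon, the sample coherence between two co-registered complex SAR looks (low coherence flags change), can be sketched as follows, with illustrative names and simulated data:

```python
import numpy as np

def sample_coherence(f, g):
    """Classical sample coherence |sum(f g*)| / sqrt(sum|f|^2 sum|g|^2)
    between two co-registered complex image patches. Values near 1 mean
    no change; values near 0 flag change (or low CNR, the failure mode
    the paper's new estimator addresses)."""
    num = abs(np.vdot(g, f))                              # |sum f * conj(g)|
    den = np.sqrt(np.vdot(f, f).real * np.vdot(g, g).real)
    return float(num / den)

rng = np.random.default_rng(1)
n = 4096
scene = rng.standard_normal(n) + 1j * rng.standard_normal(n)
noise = rng.standard_normal(n) + 1j * rng.standard_normal(n)

unchanged = sample_coherence(scene, scene)   # identical returns: coherence 1
changed = sample_coherence(scene, noise)     # independent returns: near 0
```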
Application of Maximum Entropy Method to Lattice Field Theory with a Topological Term
M. Imachi; Y. Shinno; H. Yoneyama
2003-09-22T23:59:59.000Z
In Monte Carlo simulation, lattice field theory with a $\\theta$ term suffers from the sign problem. This problem can be circumvented by Fourier-transforming the topological charge distribution $P(Q)$. Although this strategy works well for small lattice volume, the effect of errors in $P(Q)$ becomes serious with increasing volume and prevents one from studying the phase structure. This is called flattening. As an alternative approach, we apply the maximum entropy method (MEM) to the Gaussian $P(Q)$. It is found that the flattening could be much improved by use of the MEM.
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
Abe, Sumiyoshi
2014-01-01T23:59:59.000Z
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely-separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown in particular how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
Charmonium spectra at finite temperature from QCD sum rules with the maximum entropy method
Philipp Gubler; Kenji Morita; Makoto Oka
2011-08-30T23:59:59.000Z
Charmonia spectral functions at finite temperature are studied using QCD sum rules in combination with the maximum entropy method. This approach enables us to directly obtain the spectral function from the sum rules, without having to introduce any specific assumption about its functional form. As a result, it is found that while J/psi and eta_c manifest themselves as significant peaks in the spectral function below the deconfinement temperature T_c, they quickly dissolve into the continuum and almost completely disappear at temperatures between 1.0 T_c and 1.1 T_c.
H. Rudolf Fiebig
2002-10-31T23:59:59.000Z
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss practical issues of the approach.
Maximum Entropy and the Stress Distribution in Soft Disk Packings Above Jamming
Yegang Wu; S. Teitel
2014-10-17T23:59:59.000Z
We show that the maximum entropy hypothesis can successfully explain the distribution of stresses on compact clusters of particles within disordered mechanically stable packings of soft, isotropically stressed, frictionless disks above the jamming transition. We show that, in our two dimensional case, it becomes necessary to consider not only the stress but also the Maxwell-Cremona force-tile area, as a constraining variable that determines the stress distribution. The importance of the force-tile area was suggested by earlier computations on an idealized force-network ensemble.
Spectral Functions, Maximum Entropy Method and Unconventional Methods in Lattice Field Theory
Chris Allton; Danielle Blythe; Jonathan Clowser
2002-04-26T23:59:59.000Z
We present two unconventional methods of extracting information from hadronic 2-point functions produced by Monte Carlo simulations. The first is an extension of earlier work by Leinweber which combines a QCD Sum Rule approach with lattice data. The second uses the Maximum Entropy Method to invert the 2-point data to obtain estimates of the spectral function. The first approach is applied to QCD data, and the second method is applied to the Nambu--Jona-Lasinio model in (2+1)D. Both methods promise to augment the current approach where physical quantities are extracted by fitting to pure exponentials.
On Weyl channels being covariant with respect to the maximum commutative group of unitaries
Amosov, Grigori G. [Department of Higher Mathematics, Moscow Institute of Physics and Technology, Dolgoprudny 141700 (Russian Federation)
2007-01-15T23:59:59.000Z
We investigate Weyl channels that are covariant with respect to the maximum commutative group of unitary operators. This class includes the quantum depolarizing channel and the 'two-Pauli' channel. We then show that our estimate of the output entropy for a tensor product of the phase damping channel and the identity channel, based upon the decreasing property of the relative entropy, allows us to prove the additivity conjecture for the minimal output entropy of the quantum depolarizing channel in any prime dimension and of the two-Pauli channel in the qubit case.
Maximum entropy analysis of hadron spectral functions and excited states in quenched lattice QCD
CP-PACS Collaboration; S. Aoki; R. Burkhalter; M. Fukugita; S. Hashimoto; N. Ishizuka; Y. Iwasaki; K. Kanaya; T. Kaneko; Y. Kuramashi; M. Okawa; Y. Taniguchi; A. Ukawa; T. Yamazaki; T. Yoshié
2001-10-16T23:59:59.000Z
Employing the maximum entropy method we extract the spectral functions from meson correlators at four lattice spacings in quenched QCD with the Wilson quark action. We confirm that the masses and decay constants, obtained from the position and the area of peaks, agree well with the results from the conventional exponential fit. For the first excited state, we obtain $m_{\\pi_1} = 660(590)$ MeV, $m_{\\rho_1} = 1540(570)$ MeV, and $f_{\\rho_1} = 0.085(36)$ in the continuum limit.
Chapman, Patrick
Abstract--The many different techniques for maximum power point tracking of photovoltaic arrays are compared with respect to implementation. This manuscript should serve as a convenient reference for future work in photovoltaic power generation. Index Terms--maximum power point tracking, MPPT, photovoltaic, PV.
Mitchell, Richard
On Maximum Available Feedback and PID Control. Dr Richard Mitchell, Cybernetics (IEEE SMC UK&RI Applied Cybernetics, 2005). A recent IEEE SMC paper describes a robust PID controller whose phase is flat at key frequencies.
Marden, James
(musculoskeletal systems and man-made machines such as piston engines, jets, and electric motors that use rotary motion) ... simulated in vivo maximum musculoskeletal performance was proportional to muscle mass^0.83, a significant increase in the scaling exponent over that of maximum isometric force output. The dynamic performance ...
Chen, Sheng
Blind Joint Maximum Likelihood Channel Estimation and Data Detection for Single-Input Multiple-Output Systems. University of Southampton, Southampton SO17 1BJ, U.K. Abstract--A blind adaptive scheme is proposed for joint maximum likelihood (ML) channel estimation and data detection. A simulation example is used to demonstrate the effectiveness of this joint ML optimization scheme for blind ...
THE IMPACT OF GENERATION MIX ON PLACEMENT OF STATIC VAR COMPENSATORS
THE IMPACT OF GENERATION MIX ON PLACEMENT OF STATIC VAR COMPENSATORS. Robert H. Lasseter, Fellow, IEEE. Static VAR compensators are placed to provide the maximum transfer capability for all possible generation mixes, based on the margin to the low voltage limit. The IEEE 24 bus system is used to demonstrate this method over a wide range of generation patterns.
Analyzing the Impact of Useless Write-Backs on the Endurance and Energy Consumption of PCM
Zhang, Youtao
We present an energy model to determine the maximum energy savings that could potentially be achieved in these regions. The dominant main memory technology of the past three decades has been DRAM, due to its low cost per bit and low energy consumption.
Integrating Information, Science, and Technology for Prediction
Integrating Information, Science, and Technology for Prediction (IS&T). The Lab's four Science Pillars harness...
IMPACT OF LATERAL VARIATIONS ON THE SOLAR CELL EFFICIENCY. David Hinken, Karsten Bothe and Rolf Brendel. We analyze various monocrystalline silicon solar cells. The light-IV curves around the maximum power point ... a ...-dimensional approach to calculate the impact of local parameters on the global solar cell efficiency. The presented ...
Murton, Mark; Bouchier, Francis A.; vanDongen, Dale T.; Mack, Thomas Kimball; Cutler, Robert Paul; Ross, Michael P.
2013-08-01T23:59:59.000Z
Although technological advances provide new capabilities to increase the robustness of security systems, they also potentially introduce new vulnerabilities. New capability sometimes requires new performance requirements. This paper outlines an approach to establishing a key performance requirement for an emerging intrusion detection sensor: the sensored net. Throughout the security industry, the commonly adopted standard for maximum opening size through barriers is a requirement based on square inches, typically 96 square inches. Unlike a standard rigid opening, the dimensions of a flexible aperture are not fixed but variable and conformable. It is demonstrably simple for a human intruder to move through a 96-square-inch opening that conforms to the human body. The longstanding 96-square-inch requirement itself, though firmly embedded in policy and best practice, lacks a documented empirical basis. This analysis concluded that the traditional 96-square-inch standard for openings is insufficient for flexible openings that conform to the human body. Instead, a circumference standard is recommended for these newer types of sensored barriers. The recommended maximum circumference for a flexible opening should be no more than 26 inches, as measured on the inside of the netting material.
Trovato, M. [Dipartimento di Matematica, Universita di Catania, Viale A. Doria, I-95125 Catania (Italy); Reggiani, L. [Dipartimento di Ingegneria dell' Innovazione and CNISM, Universita del Salento, Via Arnesano s/n, I-73100 Lecce (Italy)
2011-12-15T23:59:59.000Z
By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as a fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ħ^2. In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the numerical density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit, when ħ → 0.
Space Perception by Visuokinesthetic Prediction
Moeller, Ralf
We propose a robot model of space perception in a restricted domain in which a robot arm pushes a small block. The model predicts the visual image of the gripper tool and the kinesthetic state of the robot arm after a small movement which would move the gripper of the robot arm from its current position to a position where it would ...
Deforestation predictions for Amazonia
Camara, Gilberto
Amazon Deforestation Models: deforestation predictions for Amazonia presented by W. F. Laurance et al. ("Deforestation in Amazonia," 21 May 2004, p. 1109), blaming planned infrastructure and land speculation. Much has already been said by the scientific community about their model and its apocalyptic ...
A prediction for bubbling geometries
Takuya Okuda
2008-02-11T23:59:59.000Z
We study the supersymmetric circular Wilson loops in N=4 Yang-Mills theory. Their vacuum expectation values are computed in the parameter region that admits smooth bubbling geometry duals. The results are a prediction for the supergravity action evaluated on the bubbling geometries for Wilson loops.
Availability of corona cage for predicting audible noise generated from HVDC transmission line
Nakano, Y.; Sunaga, Y.
1989-04-01T23:59:59.000Z
This paper describes the prospect that a corona cage can be used for predicting audible noise (AN) generated from an HVDC transmission line. This is based on the assumption that the generation quantities of AN and corona current are determined by Fmax (the true maximum conductor surface gradient in the presence of space charge) regardless of the surrounding electrode arrangement. This assumption has been verified by tests using corona cages and a test line.
Availability of corona cage for predicting radio interference generated from HVDC transmission line
Nakano, Y.; Sunaga, Y. (Central Research Inst. of Electric Power Industry, Tokyo (Japan))
1990-07-01T23:59:59.000Z
This paper describes the prospect that a corona cage can be used for predicting radio interference (RI) generated from HVDC transmission lines. This is based on the assumption that the generation quantity of RI is determined by Fmax (the true maximum conductor surface gradient in the presence of space charge), regardless of the surrounding electrode arrangement. This assumption has been verified by tests using corona cages and a test line.
Robert Felix Tournier
2015-02-23T23:59:59.000Z
An undercooled liquid is unstable. The driving force of the glass transition at Tg is a change of the undercooled-liquid Gibbs free energy. The classical Gibbs free energy change for crystal formation is completed by including an enthalpy saving. The crystal growth critical nucleus is used as a probe to observe the Laplace pressure change Δp accompanying the enthalpy change -Vm·Δp at Tg, where Vm is the molar volume. A stable glass-liquid transition model predicts the specific heat jump of fragile liquids at temperatures smaller than Tg, the Kauzmann temperature TK where the liquid entropy excess with regard to the crystal goes to zero, the equilibrium enthalpy between TK and Tg, the maximum nucleation rate at TK of superclusters containing magic atom numbers, and the equilibrium latent heats at Tg and TK. Strong-to-fragile and strong-to-strong liquid transitions at Tg are also described and all their thermodynamic parameters are determined from their specific heat jumps. The existence of fragile liquids quenched in the amorphous state, which do not undergo a liquid-liquid transition during the heating preceding their crystallization, is predicted. Long ageing times leading to the formation at TK of a stable glass composed of superclusters containing up to 147 atoms, touching and interpenetrating, are evaluated from nucleation rates. A fragile-to-fragile liquid transition occurs at Tg without stable-glass formation, while a strong glass is stable after transition.
Intelligent wind power prediction systems final report
Intelligent wind power prediction systems: final report. Henrik Aalborg Nielsen. (FU 4101) Ens. journal number: 79029-0001. Project title: Intelligent wind power prediction systems.
FINAL DRAFT VI. Application 3: Recruitment Prediction
Miller, Tom
VI. Application 3: Recruitment Prediction. Contributors: S. Sarah Hinckley, Bernard Megrey, Thomas Miller. Definition: What do we mean by recruitment prediction? The first thing to consider in defining this term is the time horizon of the prediction. Short-term predictions mean the use of individual ...
STOCHASTIC METHODS FOR THE PREDICTION OF
State University of New York at Stony Brook
STOCHASTIC METHODS FOR THE PREDICTION OF COMPLEX MULTISCALE PHENOMENA. James Glimm, Los Alamos, NM 87545. Abstract: The purpose of this paper is to develop a general framework for the prediction of complex multiscale phenomena of current interest to the authors. Prediction involves a two-step process of inverse prediction to describe ...
Reginatto, Marcel; Zimbal, Andreas [Physikalisch-Technische Bundesanstalt, 38116 Braunschweig (Germany)
2008-02-15T23:59:59.000Z
In applications of neutron spectrometry to fusion diagnostics, it is advantageous to use methods of data analysis which can extract information from the spectrum that is directly related to the parameters of interest that describe the plasma. We present here methods of data analysis which were developed with this goal in mind, and which were applied to spectrometric measurements made with an organic liquid scintillation detector (type NE213). In our approach, we combine Bayesian parameter estimation methods and unfolding methods based on the maximum entropy principle. This two-step method allows us to optimize the analysis of the data depending on the type of information that we want to extract from the measurements. To illustrate these methods, we analyze neutron measurements made at the PTB accelerator under controlled conditions, using accelerator-produced neutron beams. Although the methods have been chosen with a specific application in mind, they are general enough to be useful for many other types of measurements.
Netest: A Tool to Measure the Maximum Burst Size, Available Bandwidth and Achievable Throughput
Jin, Guojun; Tierney, Brian
2003-01-31T23:59:59.000Z
Distinguishing available bandwidth and achievable throughput is essential for improving network applications' performance. Achievable throughput is the throughput considering a number of factors such as network protocol, host speed, network path, and TCP buffer space, whereas available bandwidth only considers the network path. Without understanding this difference, trying to improve network applications' performance is like ''blind men feeling the elephant'' [4]. In this paper, we define and distinguish bandwidth and throughput, and debate which part of each is achievable and which is available. Also, we introduce and discuss a new concept, Maximum Burst Size, that is crucial to network performance and bandwidth sharing. A tool, netest, is introduced to help users determine the available bandwidth and to provide information for achieving better throughput with fairness in sharing the available bandwidth, thus reducing misuse of the network.
From Physics to Economics: An Econometric Example Using Maximum Relative Entropy
Giffin, Adom
2009-01-01T23:59:59.000Z
Econophysics is based on the premise that some ideas and methods from physics can be applied to economic situations. We intend to show in this paper how a physics concept such as entropy can be applied to an economic problem. In so doing, we demonstrate how information in the form of observable data and moment constraints is introduced into the method of Maximum relative Entropy (MrE). A general example of updating with data and moments is shown. Two specific econometric examples are solved in detail, which can then be used as templates for real world problems. A numerical example is compared to a large deviation solution, which illustrates some of the advantages of the MrE method.
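The kind of moment-constrained maximum-entropy updating this abstract describes can be sketched minimally: for a discrete variable with a fixed mean, the MaxEnt distribution takes the Gibbs form p_i ∝ exp(-λ x_i), with λ fixed by the constraint. The support and target mean below are illustrative choices, and the bisection search for λ is one simple way to impose the constraint (not the authors' MrE code):

```python
import math

def maxent_given_mean(support, target_mean, lo=-50.0, hi=50.0, tol=1e-12):
    """Discrete MaxEnt distribution p_i ∝ exp(-lam * x_i) with a fixed mean.

    The mean of the Gibbs family is strictly decreasing in lam, so a
    simple bisection on lam recovers the Lagrange multiplier."""
    def mean_for(lam):
        w = [math.exp(-lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if mean_for(mid) > target_mean:
            lo = mid  # mean too high -> need larger lam
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(-lam * x) for x in support]
    z = sum(w)
    return lam, [wi / z for wi in w]

lam, p = maxent_given_mean([0, 1, 2, 3, 4, 5], target_mean=1.5)
```

Updating with both data and moments, as in the paper, generalizes this by maximizing entropy relative to a prior rather than to the uniform distribution.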
A. Vaudrey; P. Baucour; F. Lanzetta; R. Glises
2010-08-30T23:59:59.000Z
While producing useful electrical work by consuming chemical energy, a fuel cell has to reject heat to its surroundings. However, as for any other type of engine, this thermal energy cannot be exchanged isothermally in finite time through finite areas. As has already been done for various other types of systems, we study the fuel cell within the finite-time thermodynamics framework and define an endoreversible fuel cell. Considering different types of heat transfer laws, we obtain an optimal value of the operating temperature, corresponding to maximum produced power. This analysis is a first step in a thermodynamic approach to the design of thermal management devices that takes into account the performance of the whole system.
G. Litak; T. Kaminski; J. Czarnigowski; A. K. Sen; M. Wendeker
2006-11-29T23:59:59.000Z
In this paper we analyze the cycle-to-cycle variations of maximum pressure $p_{max}$ and peak pressure angle $\alpha_{pmax}$ in a four-cylinder spark ignition engine. We examine the experimental time series of $p_{max}$ and $\alpha_{pmax}$ for three different spark advance angles. Using standard statistical techniques such as return maps and histograms we show that, depending on the spark advance angle, there are significant differences in the fluctuations of $p_{max}$ and $\alpha_{pmax}$. We also calculate the multiscale entropy of the various time series to estimate the effect of randomness in these fluctuations. Finally, we explain how the information on both $p_{max}$ and $\alpha_{pmax}$ can be used to develop optimal strategies for controlling the combustion process and improving engine performance.
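Multiscale entropy, as used in the abstract above, is commonly computed by coarse-graining the series at successive scales and evaluating sample entropy at each scale. A minimal pure-Python sketch; the template length m, tolerance r, and scales below are illustrative assumptions, not the authors' settings:

```python
import math

def coarse_grain(x, scale):
    """Average non-overlapping windows of length `scale`."""
    n = len(x) // scale
    return [sum(x[i * scale:(i + 1) * scale]) / scale for i in range(n)]

def sample_entropy(x, m=2, r=0.2):
    """SampEn = -ln(A/B): B (A) counts template pairs of length m (m+1)
    matching within tolerance r in Chebyshev distance."""
    n = len(x)
    def count(length):
        c = 0
        for i in range(n - m):
            for j in range(i + 1, n - m):
                if max(abs(x[i + k] - x[j + k]) for k in range(length)) <= r:
                    c += 1
        return c
    b, a = count(m), count(m + 1)
    return float('inf') if a == 0 else -math.log(a / b)

def multiscale_entropy(x, scales=(1, 2, 3), m=2, r=0.2):
    """Sample entropy of the coarse-grained series at each scale."""
    return [sample_entropy(coarse_grain(x, s), m, r) for s in scales]
```

A perfectly regular series yields zero at every scale, while noisy combustion data like the pressure fluctuations discussed above yield positive values whose scale dependence indicates how much of the variation is genuine randomness.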
Urniezius, Renaldas [Kaunas University of Technology, Kaunas (Lithuania)
2011-03-14T23:59:59.000Z
The principle of Maximum relative Entropy optimization was analyzed for dead-reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experimental results confirmed that the noise on each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency within the time series data. The dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency within the time series data. Data from an autocalibration experiment were revisited by removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach could be used for online dead-reckoning localization.
Azimuthal Anisotropy in Heavy Ion Collisions from the Maximum Entropy Method
Pirner, Hans J
2014-01-01T23:59:59.000Z
We investigate the azimuthal anisotropy v2 of particle production in nucleus-nucleus collisions in the maximum entropy approach. This necessitates two new parameters delta and lambda2. The parameter delta describes the deformation of transverse configuration space and is related to the anisotropy of the overlap zone of the two nuclei. The parameter lambda2 defines the anisotropy of the particle distribution in momentum space. Assuming deformed flux tubes at the early stage of the collision we relate the momentum to the space asymmetry i.e. lambda2 to delta with the uncertainty relation. We compute the anisotropy v2 as a function of centrality, transverse momentum and rapidity using gluon-hadron duality. The general features of LHC data are reproduced.
Source Function Determined from HBT Correlations by the Maximum Entropy Principle
Wu Yuanfang; Ulrich Heinz
1996-07-18T23:59:59.000Z
We study the reconstruction of the source function in space-time directly from the measured HBT correlation function using the Maximum Entropy Principle. We find that the problem is ill-defined without at least one additional theoretical constraint as input. Using the requirement of a finite source lifetime for the latter we find a new Gaussian parametrization of the source function directly in terms of the measured HBT radius parameters and its lifetime, where the latter is a free parameter which is not directly measurable by HBT. We discuss the implications of our results for the remaining freedom in building source models consistent with a given set of measured HBT radius parameters.
A maximum-entropy approach to the adiabatic freezing of a supercooled liquid
Santi Prestipino
2013-04-29T23:59:59.000Z
I employ the van der Waals theory of Baus and coworkers to analyze the fast, adiabatic decay of a supercooled liquid in a closed vessel with which the solidification process usually starts. By imposing a further constraint on either the system volume or pressure, I use the maximum-entropy method to quantify the fraction of liquid that is transformed into solid as a function of undercooling and of the amount of a foreign gas that could possibly be also present in the test tube. Upon looking at the implications of thermal and mechanical insulation for the energy cost of forming a solid droplet within the liquid, I identify one situation where the onset of solidification inevitably occurs near the wall in contact with the bath.
Parthapratim Biswas; H. Shimoyama; L. R. Mead
2009-10-23T23:59:59.000Z
We apply the maximum entropy principle to construct the natural invariant density and Lyapunov exponent of one-dimensional chaotic maps. Using a novel function reconstruction technique that is based on the solution of Hausdorff moment problem via maximizing Shannon entropy, we estimate the invariant density and the Lyapunov exponent of nonlinear maps in one-dimension from a knowledge of finite number of moments. The accuracy and the stability of the algorithm are illustrated by comparing our results to a number of nonlinear maps for which the exact analytical results are available. Furthermore, we also consider a very complex example for which no exact analytical result for invariant density is available. A comparison of our results to those available in the literature is also discussed.
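The logistic map x_{n+1} = 4x_n(1-x_n) is a standard benchmark of the kind this abstract mentions: its invariant density and Lyapunov exponent (exactly ln 2) are known analytically. A direct-iteration estimate of the exponent, useful as a point of comparison for moment-based reconstructions (an illustrative sketch, not the authors' MaxEnt method):

```python
import math

def logistic_lyapunov(x0=0.2, n_transient=1000, n_iter=200000):
    """Estimate the Lyapunov exponent of x -> 4x(1-x) by averaging
    ln|f'(x)| = ln|4 - 8x| along an orbit; the exact value is ln 2."""
    x = x0
    for _ in range(n_transient):
        x = 4.0 * x * (1.0 - x)
    total, kept = 0.0, 0
    for _ in range(n_iter):
        d = abs(4.0 - 8.0 * x)
        if d > 0.0:  # guard against ln 0 at x = 1/2
            total += math.log(d)
            kept += 1
        x = 4.0 * x * (1.0 - x)
        if x == 0.0 or x == 1.0:  # escape the fixed point if round-off lands on it
            x = 0.3
    return total / kept
```

A moment-based MaxEnt reconstruction of the invariant density, as in the paper, should reproduce this exponent from a finite set of moments rather than from a long orbit.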
Spectral function and excited states in lattice QCD with maximum entropy method
CP-PACS Collaboration; T. Yamazaki; S. Aoki; R. Burkhalter; M. Fukugita; S. Hashimoto; N. Ishizuka; Y. Iwasaki; K. Kanaya; T. Kaneko; Y. Kuramashi; M. Okawa; Y. Taniguchi; A. Ukawa; T. Yoshié
2001-05-29T23:59:59.000Z
We apply the maximum entropy method to extract the spectral functions for pseudoscalar and vector mesons from hadron correlators previously calculated at four different lattice spacings in quenched QCD with the Wilson quark action. We determine masses and decay constants for the ground and excited states of the pseudoscalar and vector channels from the position and area of peaks in the spectral functions. We obtain the results $m_{\pi_1} = 660(590)$ MeV and $m_{\rho_1} = 1540(570)$ MeV for the masses of the first excited states, in the continuum limit of quenched QCD. We also find unphysical states which have infinite mass in the continuum limit, and argue that they are bound states of two doublers of the Wilson quark action. If this interpretation is correct, this is the first time that doubler states have been identified in lattice QCD numerical simulations.
Application of the Maximum Entropy Method to the (2+1)d Four-Fermion Model
C. R. Allton; J. E. Clowser; S. J. Hands; J. B. Kogut; C. G. Strouthos
2002-08-19T23:59:59.000Z
We investigate spectral functions extracted using the Maximum Entropy Method from correlators measured in lattice simulations of the (2+1)-dimensional four-fermion model. This model is particularly interesting because it has both a chirally broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are only resonances. In the broken phase we study the elementary fermion, pion, sigma and massive pseudoscalar meson; our results confirm the Goldstone nature of the pi and permit an estimate of the meson binding energy. We have, however, seen no signal of sigma -> pi pi decay as the chiral limit is approached. In the symmetric phase we observe a resonance of non-zero width in qualitative agreement with analytic expectations; in addition the ultra-violet behaviour of the spectral functions is consistent with the large non-perturbative anomalous dimension for fermion composite operators expected in this model.
CP$^{N-1}$ model with the theta term and maximum entropy method
Masahiro Imachi; Yasuhiko Shinno; Hiroshi Yoneyama
2004-09-25T23:59:59.000Z
A $\theta$ term in lattice field theory causes the sign problem in Monte Carlo simulations. This problem can be circumvented by Fourier-transforming the topological charge distribution $P(Q)$. This strategy, however, has a limitation, because errors of $P(Q)$ prevent one from calculating the partition function ${\cal Z}(\theta)$ properly for large volumes. This is called flattening. As an alternative approach to the Fourier method, we utilize the maximum entropy method (MEM) to calculate ${\cal Z}(\theta)$. We apply the MEM to Monte Carlo data of the CP$^3$ model. It is found that in the non-flattening case, the result of the MEM agrees with that of the Fourier transform, while in the flattening case, the MEM gives smooth ${\cal Z}(\theta)$.
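Before flattening sets in, the Fourier step this abstract refers to is simply Z(θ) = Σ_Q P(Q) e^{iθQ}. A sketch with an illustrative Gaussian P(Q) (a stand-in for measured topological charge data, not the paper's CP^3 data) shows why the method is fragile: Z(θ) near θ = π is exponentially small, so tiny statistical errors on P(Q) can swamp it:

```python
import cmath, math

def partition_function(theta, q2_mean=4.0, q_max=60):
    """Z(theta) = sum_Q P(Q) exp(i theta Q) for a Gaussian P(Q)
    (illustrative stand-in for a measured charge distribution)."""
    qs = range(-q_max, q_max + 1)
    w = [math.exp(-q * q / (2.0 * q2_mean)) for q in qs]
    norm = sum(w)
    z = sum((wi / norm) * cmath.exp(1j * theta * q) for q, wi in zip(qs, w))
    return z.real  # imaginary part cancels by the Q -> -Q symmetry of P(Q)

z0 = partition_function(0.0)       # equals 1 by normalization
zpi = partition_function(math.pi)  # exponentially small and hard to resolve
```

The rapid decay of Z(θ) toward θ = π is exactly where errors on P(Q) produce the flattening that motivates the MEM alternative.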
MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker
Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw
2009-06-09T23:59:59.000Z
MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
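The solver at the heart of the map-making step described above is a preconditioned conjugate gradient iteration on the maximum-likelihood normal equations. A minimal unpreconditioned CG sketch on a toy symmetric positive definite system (illustrative only; MADmap applies its huge matrix implicitly via FFTs and sparse pointing operations rather than storing it densely):

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Solve A x = b for a symmetric positive definite A (dense lists)."""
    n = len(b)
    def matvec(v):
        return [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]  # residual b - A x for the zero initial guess
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
b = [1.0, 2.0]
x = conjugate_gradient(A, b)  # exact solution: [1/11, 7/11]
```

A preconditioner, as in MADmap, replaces the plain residual with an approximately whitened one to cut the iteration count on ill-conditioned systems.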
Is the friction angle the maximum slope of a free surface of a non cohesive material?
A. Modaressi; P. Evesque
2005-07-13T23:59:59.000Z
Starting from a symmetric triangular pile with a horizontal basis and rotating the basis in the vertical plane, we have determined the evolution of the stress distribution as a function of the basis inclination using the Finite Element method with an elastic-perfectly plastic constitutive model, defined by its friction angle, without cohesion. It is found that when the yield function is the Drucker-Prager one, a stress distribution satisfying equilibrium can be found even when one of the free-surface slopes is larger than the friction angle. This means that piles with a slope larger than the friction angle can be (at least) marginally stable and that slope rotation is not always a destabilising perturbation direction. On the contrary, it is found that the slope cannot overpass the friction angle when a Mohr-Coulomb yield function is used. A theoretical explanation of these facts is given, which clarifies the role played by the intermediate principal stress in both the Mohr-Coulomb and the Drucker-Prager cases. It is then argued that the Mohr-Coulomb criterion assumes a spontaneous symmetry breaking as soon as the two smallest principal stresses are different; this is most likely not physical, so this criterion should be replaced by a Drucker-Prager criterion in the vicinity of the equality, which leads to the previous anomalous behaviour. These numerical computations thus shed light on the avalanche process: they show that no dynamical angle larger than the static one is needed to understand avalanching. This is in agreement with previous experimental results. Furthermore, these results show that the maximum angle of repose can be modified using cyclic rotations; we propose a procedure that allows the maximum angle of repose to reach the friction angle.
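The abstract's point about the intermediate principal stress can be made concrete: the cohesionless Mohr-Coulomb criterion depends only on the extreme principal stresses σ1 and σ3, while Drucker-Prager is built from the invariants I1 and J2 and therefore responds to σ2 as well. A small sketch; the stress states and the compressive-meridian fit for α are illustrative choices, not values from the paper:

```python
import math

def mohr_coulomb(s1, s2, s3, phi):
    """Cohesionless Mohr-Coulomb yield function (f <= 0 is elastic).
    Depends only on the extreme principal stresses s1 >= s2 >= s3
    (compression positive); s2 does not appear."""
    return (s1 - s3) - (s1 + s3) * math.sin(phi)

def drucker_prager(s1, s2, s3, phi):
    """Cohesionless Drucker-Prager: f = sqrt(J2) - alpha * I1, with
    alpha matched to the friction angle on the compressive meridian."""
    i1 = s1 + s2 + s3
    mean = i1 / 3.0
    j2 = ((s1 - mean) ** 2 + (s2 - mean) ** 2 + (s3 - mean) ** 2) / 2.0
    alpha = 2.0 * math.sin(phi) / (math.sqrt(3.0) * (3.0 - math.sin(phi)))
    return math.sqrt(j2) - alpha * i1

phi = math.radians(30.0)
# Two stress states differing only in the intermediate stress s2:
a = (100.0, 40.0, 30.0)
b = (100.0, 90.0, 30.0)
```

Evaluating both criteria on the two states shows Mohr-Coulomb blind to the change in σ2 while Drucker-Prager shifts, which is the mechanism behind the differing slope stability results reported above.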
RISK PREDICTION OF A BEHAVIOR-BASED ADHESION CONTROL NETWORK FOR ONLINE SAFETY ANALYSIS OF
Berns, Karsten
by default. But for wheeled driving on concrete walls via negative pressure adhesion, a prediction of risks ... limited payload. Also the impact of features like surface roughness, sheathing defects, and porous areas ... designed to be used for inspections of large concrete buildings as depicted in figure 1
Predicting and mitigating the global warming potential of agro-ecosystems
Paris-Sud XI, UniversitĂ© de
Predicting and mitigating the global warming potential of agro-ecosystems. S. Lehuger et al. ... and methane are the main biogenic greenhouse gases (GHG) contributing to the global warming potential (GWP) ... to design productive agro-ecosystems with low global warming impact. Keywords: global warming potential ...
Miami, University of
Environmental Impacts on National Security Using Satellite Data. Authors: Dr. Sara Graves, Todd Berendes. ... in the state of Alabama on critical infrastructure and assets with national security implications. NOAA, 2012 Climate Prediction Applications Science Workshop (CPASW), Climate Services for National ...
Extended foundations of stochastic prediction
Sergey Kamenshchikov
2014-06-28T23:59:59.000Z
The basic purpose of this work was to suggest a universal quantitative description of intermediate bifurcation in an ergodic system and the obligatory conditions for this transition. Conditions for the existence of a phase state and a first-order phase transition were introduced in terms of the energy balance of a unit of system volume. An extended Fokker-Planck equation with a time-dependent diffusion factor was formulated. It turned out that for an ergodic system with a fixed boundary, a quantized energy spectrum of phase-stable states exists. The obtained results may be applied to the prediction of ergodic system behavior. If the isolation condition is satisfied, phase spectrum quantization allows proper control parameters to be selected for system stabilization. Information about the current coarsened energy of the system allows prediction of future stochastic system behavior on the basis of the extended Fokker-Planck model.
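Diffusion with a time-dependent diffusion factor, of the kind an extended Fokker-Planck equation describes, can be illustrated by simulating the corresponding Langevin dynamics (a generic sketch, not the author's model; the linear law D(t) = 0.5(1 + t) and all numerical parameters are arbitrary choices for illustration). For pure diffusion the ensemble variance should grow as 2 times the time integral of D(t):

```python
import math
import random

random.seed(0)  # deterministic illustration

def D(t):
    """Illustrative time-dependent diffusion factor (arbitrary linear law)."""
    return 0.5 * (1.0 + t)

n_paths, n_steps, dt = 10000, 100, 0.01

# Euler-Maruyama integration of dX = sqrt(2 D(t)) dW for an ensemble of paths
xs = [0.0] * n_paths
for step in range(n_steps):
    t = step * dt
    sigma = math.sqrt(2.0 * D(t) * dt)
    xs = [x + random.gauss(0.0, sigma) for x in xs]

# For pure diffusion: Var[X(T)] -> 2 * integral_0^T D(t) dt = 1.5 for T = 1
mean_x = sum(xs) / n_paths
var_x = sum((x - mean_x) ** 2 for x in xs) / n_paths
```

The sample variance lands near the analytic value 1.5, confirming that the time dependence of D(t) feeds directly into the spreading rate of the ensemble.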
Simulated combined abnormal environment fire calculations for aviation impacts.
Brown, Alexander L.
2010-08-01T23:59:59.000Z
Aircraft impacts at flight speeds are relevant environments for aircraft safety studies. This type of environment pertains to normal environments such as wildlife impacts and rough landings, but also the abnormal environment that has more recently been evidenced in cases such as the Pentagon and World Trade Center events of September 11, 2001, and the IRS building impact in Austin. For more severe impacts, the environment is combined because it involves not just the structural mechanics, but also the release of the fuel and the subsequent fire. Impacts normally last on the order of milliseconds to seconds, whereas the fire dynamics may last for minutes to hours, or longer. This presents a serious challenge for physical models that employ discrete time stepping to model the dynamics with accuracy. Another challenge is that the capabilities to model the fire and structural impact are seldom found in a common simulation tool. Sandia National Labs maintains two codes under a common architecture that have been used to model the dynamics of aircraft impact and fire scenarios. Only recently have these codes been coupled directly to provide a fire prediction that is better informed on the basis of a detailed structural calculation. To enable this technology, several facilitating models are necessary, as is a methodology for determining and executing the transfer of information from the structural code to the fire code. A methodology has been developed and implemented. Previous test programs at the Sandia National Labs sled track provide unique data for the dynamic response of an aluminum tank of liquid water impacting a barricade at flight speeds. These data are used to validate the modeling effort, and suggest reasonable accuracy for the dispersion of a non-combustible fluid in an impact environment. The capability is also demonstrated with a notional impact of a fuel-filled container at flight speed.
Both of these scenarios are used to evaluate numerical approximations and help provide an understanding of the quantitative accuracy of the modeling methods.
Predicting low-frequency radio fluxes of known extrasolar planets
Grießmeier, J -M; Spreeuw, H
2008-01-01T23:59:59.000Z
Context. Close-in giant extrasolar planets (''Hot Jupiters'') are believed to be strong emitters in the decametric radio range. Aims. We present the expected characteristics of the low-frequency magnetospheric radio emission of all currently known extrasolar planets, including the maximum emission frequency and the expected radio flux. We also discuss the escape of exoplanetary radio emission from the vicinity of its source, which imposes additional constraints on detectability. Methods. We compare the different predictions obtained with all four existing analytical models for all currently known exoplanets. We also take care to use realistic values for all input parameters. Results. The four different models for planetary radio emission lead to very different results. The largest fluxes are found for the magnetic energy model, followed by the CME model and the kinetic energy model (for which our results are found to be much less optimistic than those of previous studies). The unipolar interaction model does ...
Predicting fracture in micron-scale polycrystalline silicon MEMS structures.
Hazra, Siddharth S. (Carnegie Mellon University, Pittsburgh, PA); de Boer, Maarten Pieter (Carnegie Mellon University, Pittsburgh, PA); Boyce, Brad Lee; Ohlhausen, James Anthony; Foulk, James W., III; Reedy, Earl David, Jr.
2010-09-01T23:59:59.000Z
Designing reliable MEMS structures presents numerous challenges. Polycrystalline silicon fractures in a brittle manner with considerable variability in measured strength. Furthermore, it is not clear how to use a measured tensile strength distribution to predict the strength of a complex MEMS structure. To address such issues, two recently developed high throughput MEMS tensile test techniques have been used to measure strength distribution tails. The measured tensile strength distributions enable the definition of a threshold strength as well as an inferred maximum flaw size. The nature of strength-controlling flaws has been identified and sources of the observed variation in strength investigated. A double edge-notched specimen geometry was also tested to study the effect of a severe, micron-scale stress concentration on the measured strength distribution. Strength-based, Weibull-based, and fracture mechanics-based failure analyses were performed and compared with the experimental results.
Determination of the impact vector in intermediate energy heavy ion collisions
Ogilvie, C.A.; Cebra, D.A.; Clayton, J.; Howden, S.; Karn, J.; Vander Molen, A.; Westfall, G.D.; Wilson, W.K.; Winfield, J.S. (National Superconducting Cyclotron Laboratory and Department of Physics Astronomy, Michigan State University, East Lansing, Michigan 48824 (US))
1989-08-01T23:59:59.000Z
We examine a variety of methods for determining both the impact parameter and the direction of the impact vector in symmetric nuclear collisions at intermediate energies. Two quantities, the particle multiplicity and the midrapidity charge, retain their dependence on the impact parameter after filtering through the acceptance of a typical 4π detector. By gating on these quantities we can select four ranges of impact parameters; these ranges overlap somewhat, and the degree of overlap depends on the model used to simulate the collisions. The midrapidity charge has the advantage that it integrates over the final fragments and should be less sensitive to how the collision zone disassembles. The angle of the impact vector is well reproduced with the method developed by Danielewicz and Odyniec: the difference between the known and determined reaction plane has a half-width at half-maximum of less than 70°. Some comparisons are made to experimental data.
Matus, Kira J. (Kira Jen)
2005-01-01T23:59:59.000Z
In China, elevated levels of urban air pollution result in significant adverse health impacts for its large and rapidly growing urban population. An expanded version of the Emissions Prediction and Policy Analysis (EPPA), ...
Impact of pH on the removal of fluoride, nitrate and boron by nanofiltration/reverse osmosis
Richards, Laura A.; Vuachère, Marion; Schäfer, Andrea
2010-01-01T23:59:59.000Z
The objective of this study was to evaluate the impact of pH on boron, fluoride, and nitrate retention by comparing modelled speciation predictions with retention using six different nanofiltration (NF) and reverse osmosis ...
THE IMPACT OF THERMAL ENGINEERING RESEARCH ON GLOBAL CLIMATE CHANGE
Phelan, Patrick [Arizona State University; Abdelaziz, Omar [ORNL; Otanicar, Todd [University of Tulsa; Phelan, Bernadette [Phelan Research Solutions, Inc.; Prasher, Ravi [Arizona State University; Taylor, Robert [University of New South Wales, Sydney, Australia; Tyagi, Himanshu [Indian Institute of Technology Ropar, India
2014-01-01T23:59:59.000Z
Global climate change is recognized by many people around the world as being one of the most pressing issues facing our society today. The thermal engineering research community clearly plays an important role in addressing this critical issue, but what kind of thermal engineering research is, or will be, most impactful? In other words, in what directions should thermal engineering research be targeted in order to derive the greatest benefit with respect to global climate change? To answer this question we consider the potential reduction in greenhouse gas (GHG) emissions, coupled with potential economic impacts, resulting from thermal engineering research. Here a new model framework is introduced that allows a technological, sector-by-sector analysis of GHG emissions avoidance. For each sector, we consider the maximum reduction in CO2 emissions due to such research, and the cost effectiveness of the new efficient technologies. The results are normalized on a country-by-country basis, where we consider the USA, the European Union, China, India, and Australia as representative countries or regions. Among energy supply-side technologies, improvements in coal-burning power generation are seen as having the most beneficial CO2 and economic impacts. The one demand-side technology considered, residential space cooling, offers positive but limited impacts. The proposed framework can be extended to include additional technologies and impacts, such as water consumption.
Catalysis-by-design impacts assessment
Fassbender, L L; Young, J K [Pacific Northwest Lab., Richland, WA (USA); Sen, R K [Sen (R.K.) and Associates, Washington, DC (USA)
1991-05-01T23:59:59.000Z
Catalyst researchers have always recognized the need to develop a detailed understanding of the mechanisms of catalytic processes, and have hoped that it would lead to a theoretical predictive base to guide the search for new catalysts. This understanding allows one to develop a set of hierarchical models, from fundamental atomic-level ab-initio models to detailed engineering simulations of reactor systems, to direct the search for optimized, efficient catalyst systems. During the last two decades, the explosion of advanced surface analysis techniques has helped considerably to develop the building blocks for understanding various catalytic reactions. An effort to couple these theoretical and experimental advances to develop a set of hierarchical models to predict the nature of catalytic materials is a program entitled "Catalysis-by-Design" (CBD). In assessing the potential impacts of CBD on US industry, the key point to remember is that the value of the program lies in developing a novel methodology to search for new catalyst systems. Industrial researchers can then use this methodology to develop proprietary catalysts. Most companies involved in catalyst R&D have two types of ongoing projects. The first type, what we call "market-driven R&D," are projects that support and improve upon a company's existing product lines. Projects of the second type, "technology-driven R&D," are longer term, involve the development of totally new catalysts, and are initiated through scientists' research ideas. The CBD approach will impact both types of projects. However, this analysis indicates that the near-term impacts will be on "market-driven" projects. The conclusions and recommendations presented in this report were obtained by the authors through personal interviews with individuals involved in a variety of industrial catalyst development programs and through the three CBD workshops held in the summer of 1989. 34 refs., 7 figs., 7 tabs.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Phillips, Claire L. [TERA; Gregg, Jillian W. [TERA; Wilson, John K. [TERA
2011-11-01T23:59:59.000Z
Daily minimum temperature (Tmin) has increased faster than daily maximum temperature (Tmax) in many parts of the world, leading to decreases in diurnal temperature range (DTR). Projections suggest these trends are likely to continue in many regions, particularly northern latitudes and in arid regions. Despite wide speculation that asymmetric warming has different impacts on plant and ecosystem production than equal-night-and-day warming, there has been little direct comparison of these scenarios. Reduced DTR has also been widely misinterpreted as a result of night-only warming, when in fact Tmin occurs near dawn, indicating higher morning as well as night temperatures. We report on the first experiment to examine ecosystem-scale impacts of faster increases in Tmin than Tmax, using precise temperature controls to create realistic diurnal temperature profiles with gradual day-night temperature transitions and elevated early morning as well as night temperatures. Studying a constructed grassland ecosystem containing species native to Oregon, USA, we found the ecosystem lost more carbon at elevated than ambient temperatures, but was unaffected by the 3°C difference in DTR between symmetric warming (constantly ambient +3.5°C) and asymmetric warming (dawn Tmin = ambient +5°C, afternoon Tmax = ambient +2°C). Reducing DTR had no apparent effect on photosynthesis, likely because temperatures were most different in the morning and late afternoon when light was low. Respiration was also similar in both warming treatments, because respiration temperature sensitivity was not sufficient to respond to the limited temperature differences between asymmetric and symmetric warming. We concluded that changes in daily mean temperatures, rather than changes in Tmin/Tmax, were sufficient for predicting ecosystem carbon fluxes in this reconstructed Mediterranean grassland system.
The Prediction of Extratropical Storm Tracks by the ECMWF and NCEP Ensemble Prediction Systems
Bengtsson, Lennart
The prediction of extratropical cyclones by the European Centre for Medium-Range Weather Forecasts (ECMWF) and National Centers for Environmental Prediction (NCEP) Ensemble Prediction Systems (EPS) has been investigated using an objective feature tracking ...
The Prediction of Extratropical Storm Tracks by the ECMWF and NCEP Ensemble Prediction Systems
Froude, Lizzie
The prediction of extratropical cyclones by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP) ensemble prediction systems ...
The Economic Impact of Binghamton
Suzuki, Masatsugu
The Economic Impact of Binghamton University, FY2010 (July 1, 2009 - June 30, 2010). ... Tioga counties, and the overall impact on New York State in terms of economic output, jobs, and human ...
Predicting the NFL using Twitter
Sinha, Shiladitya; Gimpel, Kevin; Smith, Noah A
2013-01-01T23:59:59.000Z
We study the relationship between social media output and National Football League (NFL) games, using a dataset containing messages from Twitter and NFL game statistics. Specifically, we consider tweets pertaining to specific teams and games in the NFL season and use them alongside statistical game data to build predictive models for future game outcomes (which team will win?) and sports betting outcomes (which team will win with the point spread? will the total points be over/under the line?). We experiment with several feature sets and find that simple features using large volumes of tweets can match or exceed the performance of more traditional features that use game statistics.
Galactosynthesis Predictions at High Redshift
Ari Buchalter; Raul Jimenez; Marc Kamionkowski
2001-02-02T23:59:59.000Z
We predict the Tully-Fisher (TF) and surface-brightness--magnitude relation for disk galaxies at z=3 and discuss the origin of these scaling relations and their scatter. We show that the variation of the TF relation with redshift can be a potentially powerful discriminator of galaxy-formation models. In particular, the TF relation at high redshift might be used to break parameter degeneracies among galactosynthesis models at z=0, as well as to constrain the redshift distribution of collapsing dark-matter halos, the star-formation history and baryon fraction in the disk, and the distribution of halo spins.
Predictive Simulation | Department of Energy
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Relocation impacts of a major release from SRTC
Blanchard, A.; Thompson, E.A.; Thompson, J.M.
1999-06-01T23:59:59.000Z
The relocation impacts of an accidental release, scenario 1-RD-3, are evaluated for the Savannah River Technology Center. The extent of the area potentially contaminated to a level that would result in doses exceeding the relocation protective action guide (PAG) is calculated. The maximum calculated distance downwind from the accident at which the relocation PAG is exceeded is also determined. The consequences of the particulate portion of the release are evaluated using the HOTSPOT model and an EXCEL spreadsheet. The consequences of the tritium release are evaluated using UFOTRI.
Occurrence of high-speed solar wind streams over the Grand Modern Maximum
Mursula, Kalevi; Holappa, Lauri
2015-01-01T23:59:59.000Z
In the declining phase of the solar cycle, when the new-polarity fields of the solar poles are strengthened by the transport of same-signed magnetic flux from lower latitudes, the polar coronal holes expand and form non-axisymmetric extensions toward the solar equator. These extensions enhance the occurrence of high-speed solar wind streams (HSS) and related co-rotating interaction regions in the low-latitude heliosphere, and cause moderate, recurrent geomagnetic activity in the near-Earth space. Here, using a novel definition of geomagnetic activity at high (polar cap) latitudes and the longest record of magnetic observations at a polar cap station, we calculate the annually averaged solar wind speeds as proxies for the effective annual occurrence of HSS over the whole Grand Modern Maximum (GMM) from 1920s onwards. We find that a period of high annual speeds (frequent occurrence of HSS) occurs in the declining phase of each solar cycle 16-23. For most cycles the HSS activity clearly maximizes during one year...
Su-Jong Yoon; Piyush Sabharwall
2014-07-01T23:59:59.000Z
The operating temperature of advanced nuclear reactors is generally higher than that of commercial light water reactors, and thermal energy from an advanced nuclear reactor can be used for various purposes such as district heating, desalination, hydrogen production, and other process heat applications. Because the process heat industry/facilities will be located outside the nuclear island for safety reasons, this thermal energy has to be transported a fair distance. In this study, analytical analysis was conducted to identify the maximum distance that thermal energy could be transported using various coolants by varying the pipe diameter and mass flow rate. The cost required to transport each coolant was also analyzed. The coolants analyzed are molten salts (KCl-MgCl2, LiF-NaF-KF (FLiNaK) and KF-ZrF4), helium and water. Fluoride salts are superior because of better heat transport characteristics, but chloride salts are the most economical for higher-temperature transport. For lower temperatures, water is a possible alternative to helium, because low-pressure helium requires higher pumping power, which makes the process inefficient and economically unviable for both low- and high-temperature applications.
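The pumping-power penalty of a low-density gas coolant can be illustrated with a first-order estimate (an illustrative sketch, not the study's methodology; the Blasius smooth-pipe friction correlation, the thermal duty, geometry, and the rough single-temperature property values below are all assumptions made for this example):

```python
import math

def pumping_power(q_thermal, dT, L, D_pipe, rho, cp, mu):
    """Ideal pumping power [W] to move a thermal duty q_thermal [W] over
    pipe length L [m] with temperature drop dT [K], via Darcy-Weisbach
    with the Blasius friction correlation (smooth pipe, turbulent flow).
    """
    m_dot = q_thermal / (cp * dT)                 # required mass flow [kg/s]
    area = math.pi * D_pipe ** 2 / 4.0
    v = m_dot / (rho * area)                      # bulk velocity [m/s]
    re = rho * v * D_pipe / mu                    # Reynolds number
    f = 0.316 * re ** -0.25                      # Blasius friction factor
    dp = f * (L / D_pipe) * rho * v ** 2 / 2.0    # pressure drop [Pa]
    return dp * m_dot / rho                       # power = dP * volumetric flow

# Illustrative duty: 50 MWth over 1 km in a 0.5 m pipe with a 100 K drop.
q, dT, L, D_pipe = 50e6, 100.0, 1000.0, 0.5
water  = pumping_power(q, dT, L, D_pipe, rho=1000.0, cp=4180.0, mu=1.0e-3)
flinak = pumping_power(q, dT, L, D_pipe, rho=2020.0, cp=1880.0, mu=2.9e-3)
# Helium at roughly 7 MPa and 700 C; density from the ideal-gas law
rho_he = 7.0e6 * 4.003e-3 / (8.314 * 973.0)
helium = pumping_power(q, dT, L, D_pipe, rho=rho_he, cp=5190.0, mu=4.4e-5)
```

With these rough numbers the helium case demands orders of magnitude more pumping power than either liquid, which is the qualitative point made in the abstract.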
Structure of Turbulence in Katabatic Flows below and above the Wind-Speed Maximum
Grachev, Andrey A; Di Sabatino, Silvana; Fernando, Harindra J S; Pardyjak, Eric R; Fairall, Christopher W
2015-01-01T23:59:59.000Z
Measurements of small-scale turbulence made over the complex-terrain atmospheric boundary layer during the MATERHORN Program are used to describe the structure of turbulence in katabatic flows. Turbulent and mean meteorological data were continuously measured at multiple levels on four towers deployed along the east lower slope (2-4°) of Granite Mountain. The multi-level observations made during the 30-day MATERHORN-Fall field campaign in September-October 2012 allowed detailed study of the temporal and spatial structure of katabatic flows, and herein we report turbulence characteristics and their variations in katabatic winds. Observed vertical profiles show steep gradients near the surface, but in the layer above the slope jet the vertical variability is smaller. It is found that the vertical (normal to the slope) momentum flux and the horizontal (along the slope) heat flux in a slope-following coordinate system change sign below and above the wind maximum of a katabatic flow. The vertical momentum flux is directed...
Thermal modification of bottomonium spectra from QCD sum rules with the maximum entropy method
Kei Suzuki; Philipp Gubler; Kenji Morita; Makoto Oka
2012-12-03T23:59:59.000Z
The bottomonium spectral functions at finite temperature are analyzed by employing QCD sum rules with the maximum entropy method. This approach enables us to extract the spectral functions without any phenomenological parametrization, and thus to visualize the deformation of the spectral functions due to temperature effects estimated from quenched lattice QCD data. As a result, it is found that Υ and η_b survive in hot matter at temperatures up to at least 2.3T_c and 2.1T_c, respectively, while χ_b0 and χ_b1 will disappear at T < 2.5T_c. Furthermore, a detailed analysis of the vector channel shows that the spectral function in the region of the lowest peak at T = 0 contains contributions from the excited states Υ(2S) and Υ(3S), as well as the ground state Υ(1S). Our results at finite T are consistent with the picture that the excited states of bottomonia dissociate at lower temperatures than the ground state. Assuming this picture, we find that Υ(2S) and Υ(3S) disappear at T = 1.5-2.0T_c.
Maximum-entropy calculation of end-to-end distance distribution of force stretching chains
Luru Dai; Fei Liu; Zhong-can Ou-Yang
2002-12-12T23:59:59.000Z
Using the maximum-entropy method, we calculate the end-to-end distance distribution of a force-stretched chain from the moments of the distribution, which can be obtained from the extension-force curves recorded in single-molecule experiments. If one knows the force expansion of the extension through the $(n-1)$th power of the force, this is enough information to calculate the first $n$ moments of the distribution. We examine the method with three force-stretched chain models: the Gaussian chain, the freely-jointed chain, and the excluded-volume chain on a two-dimensional lattice. The method reconstructs all distributions precisely. We also apply the method to force-stretched complex chain molecules: hairpin and secondary-structure conformations. We find that the distributions of homogeneous chains in the two conformations are very different: there are two independent peaks in the hairpin distribution, while only one peak is observed in the distribution of secondary-structure conformations. Our discussion also shows that the end-to-end distance distribution may reveal more critical physical information than the simpler extension-force curves can give.
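The core moment-matching idea can be sketched for the simplest case of a single (first-moment) constraint (a toy illustration, not the authors' chain models; the discrete support and target mean are arbitrary choices). The maximum-entropy distribution matching a mean is the exponential family p_i ∝ exp(λ x_i), with λ fixed by the constraint:

```python
import math

def maxent_from_mean(support, target_mean, iters=200):
    """Maximum-entropy distribution on a discrete support matching a mean.

    With one first-moment constraint the MaxEnt solution is p_i
    proportional to exp(lam * x_i); the mean is monotone in lam, so the
    multiplier can be found by bisection.
    """
    def mean_at(lam):
        w = [math.exp(lam * x) for x in support]
        z = sum(w)
        return sum(x * wi for x, wi in zip(support, w)) / z

    lo, hi = -50.0, 50.0
    for _ in range(iters):
        mid = 0.5 * (lo + hi)
        if mean_at(mid) < target_mean:
            lo = mid
        else:
            hi = mid
    lam = 0.5 * (lo + hi)
    w = [math.exp(lam * x) for x in support]
    z = sum(w)
    return [wi / z for wi in w]

# Toy end-to-end distances, constrained to a stretched mean of 2.0
support = [0.0, 1.0, 2.0, 3.0]
p = maxent_from_mean(support, 2.0)
```

With more measured moments the same construction generalizes to p_i ∝ exp(Σ_k λ_k x_i^k), with the multipliers fitted jointly rather than by one-dimensional bisection.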
Liu, Jian; Miller, William H.
2008-08-01T23:59:59.000Z
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real-time correlation functions. The LSC-IVR provides a very effective 'prior' for the MEAC procedure since it is very good for short times, exact for all times and temperatures for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high-temperature limit. This combined MEAC+LSC-IVR approach is applied here to two highly nonlinear dynamical systems: a pure quartic potential in one dimension, and liquid para-hydrogen at two thermal state points (25 K and 14 K, under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR, for correlation functions of both linear and nonlinear operators, and especially at low temperature, where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is already seen to be excellent at T = 25 K, but the MEAC procedure produces a significant correction at the lower temperature (T = 14 K). Comparisons are also made to how the MEAC procedure is able to provide corrections for other trajectory-based dynamical approximations when used as priors.
Thermo-hydro-chemical Predictive analysis for the drift-scale predictive heater test,
Sonnenthal, Eric L.; Spycher, Nicolas; Apps, John; Simmons, Ardyth
1998-01-01T23:59:59.000Z
The Environmental Impacts of Subsidized Crop Insurance
LaFrance, Jeffrey T.; Shimshack, J. P.; Wu, S. Y.
2001-01-01T23:59:59.000Z
Paxton, Anthony T.
Global Impact from the Heart of Northern Ireland. A first-class student experience in a world-class ... Queen's is a powerhouse ...
Environmental Impacts of Treated Wood
Florida, University of
... of treated-wood research and their efforts in organizing the conference entitled "Environmental Impacts of Treated Wood", the conference proceedings which served as a starting point ...
Campus Planning Environmental Impact Report
Mullins, Dyche
Final Campus Planning Environmental Impact Report: UCSF Mount Zion Garage. State Clearinghouse No. ... Under the California Environmental Quality Act (CEQA) and the University of California procedures for implementing CEQA, following completion of a Draft Environmental Impact Report (EIR) ...
Risk Assessments: Environmental Impact Statement for NESHAPS Radionuclides
Appendixes. Environmental Impact Statement, NESHAPS (National Emission Standards for Hazardous Air Pollutants) for Radionuclides, Volume 2: Risk Assessments. EPA 520/1-89-006-2.
Müller, Jens-Dominik
Global Impact from the Heart of Northern Ireland
Potential Impacts of CLIMATE CHANGE
Sheridan, Jennifer
Potential Impacts of Climate Change on U.S. Transportation. Transportation Research Board Special Report 290, Committee on Climate Change and U.S. Transportation, Washington, D.C., 2008. www.TRB.org
Technology's Impact on Production
Rachel Amann; Ellis Deweese; Deborah Shipman
2009-06-30T23:59:59.000Z
As part of a cooperative agreement with the United States Department of Energy (DOE) - entitled Technology's Impact on Production: Developing Environmental Solutions at the State and National Level - the Interstate Oil and Gas Compact Commission (IOGCC) has been tasked with assisting state governments in the effective, efficient, and environmentally sound regulation of the exploration and production of natural gas and crude oil, specifically in relation to orphaned and abandoned wells and wells nearing the end of productive life. Project goals include: (1) Developing (a) a model framework for prioritization and ranking of orphaned or abandoned well sites; (b) a model framework for disbursement of Energy Policy Act of 2005 funding; and (c) a research study regarding the current status of orphaned wells in the nation. (2) Researching the impact of new technologies on environmental protection from a regulatory perspective. Research will identify and document (a) state reactions to changing technology and knowledge; (b) how those reactions support state environmental conservation and public health; and (c) the impact of those reactions on oil and natural gas production. (3) Assessing emergent technology issues associated with wells nearing the end of productive life, including: (a) location of orphaned and abandoned well sites; (b) well site remediation; (c) plugging materials; (d) plug placement; (e) the current regulatory environment; and (f) the identification of emergent technologies affecting end-of-life wells. The report New Energy Technologies: Regulating Change is the result of the research performed for Tasks 2 and 3.
Social Impact Management Plans: Innovation in corporate and public policy
Franks, Daniel M., E-mail: d.franks@uq.edu.au [Centre for Social Responsibility in Mining, The University of Queensland, Sustainable Minerals Institute, St Lucia, Brisbane, Queensland 4072 (Australia); Vanclay, Frank, E-mail: frank.vanclay@rug.nl [Department of Cultural Geography, Faculty of Spatial Sciences, The University of Groningen, P.O. Box 800, 9700 AV Groningen (Netherlands)] [Department of Cultural Geography, Faculty of Spatial Sciences, The University of Groningen, P.O. Box 800, 9700 AV Groningen (Netherlands)
2013-11-15T23:59:59.000Z
Social Impact Assessment (SIA) has traditionally been practiced as a predictive study for the regulatory approval of major projects, however, in recent years the drivers and domain of focus for SIA have shifted. This paper details the emergence of Social Impact Management Plans (SIMPs) and undertakes an analysis of innovations in corporate and public policy that have put in place ongoing processes – assessment, management and monitoring – to better identify the nature and scope of the social impacts that might occur during implementation and to proactively respond to change across the lifecycle of developments. Four leading practice examples are analyzed. The International Finance Corporation (IFC) Performance Standards require the preparation of Environmental and Social Management Plans for all projects financed by the IFC identified as having significant environmental and social risks. Anglo American, a major resources company, has introduced a Socio-Economic Assessment Toolbox, which requires mine sites to undertake regular assessments and link these assessments with their internal management systems, monitoring activities and a Social Management Plan. In South Africa, Social and Labour Plans are submitted with an application for a mining or production right. In Queensland, Australia, Social Impact Management Plans were developed as part of an Environmental Impact Statement, which included assessment of social impacts. Collectively these initiatives, and others, are a practical realization of theoretical conceptions of SIA that include management and monitoring as core components of SIA. The paper concludes with an analysis of the implications for the practice of impact assessment including a summary of key criteria for the design and implementation of effective SIMPs. -- Highlights: • Social impact management plans are effective strategies to manage social issues. • They are developed in partnership with regulatory agencies, investors and community. 
• SIMPs link assessment to ongoing management and address social and community issues. • SIMPs clarify responsibilities in the management of impacts, opportunities and risks. • SIMPs demonstrate a shift to include management as a core component of SIA practice.
Predicting confusions and intelligibility of noisy speech
Messing, David P. (David Patrick), 1979-
2007-01-01T23:59:59.000Z
Current predictors of speech intelligibility are inadequate for making predictions of speech confusions caused by acoustic interference. This thesis is inspired by the need for a capability to understand and predict speech ...
EVA: evaluation of protein structure prediction servers
Sali, Andrej
Every day, sequences of newly available protein structures in the Protein Data Bank (PDB) are sent… performance of protein structure prediction servers through a battery of objective measures for prediction…
A case model for predictive maintenance
Li, Jiawei, M. Eng. Massachusetts Institute of Technology
2008-01-01T23:59:59.000Z
This project is to respond to a need by Varian Semiconductor Equipment Associates, Inc. (VSEA) to help predict failure of ion implanters. Predictive maintenance would help to reduce the unscheduled downtime of ion implanters, ...
Predicting gene function from images of cells
Jones, Thouis Raymond, 1971-
2007-01-01T23:59:59.000Z
This dissertation shows that biologically meaningful predictions can be made by analyzing images of cells. In particular, groups of related genes and their biological functions can be predicted using images from large ...
Negative Ion Photoelectron Spectroscopy Confirms the Prediction...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Confirms the Prediction that (CO)5 and (CO)6 Each Has a Singlet Ground State. Negative Ion Photoelectron Spectroscopy Confirms the Prediction that (CO)5 and (CO)6 Each Has a...
Predicting Improved Chiller Performance Through Thermodynamic Modeling
Figueroa, I. E.; Cathey, M.; Medina, M. A.; Nutter, D. W.
This paper presents two case studies in which thermodynamic modeling was used to predict improved chiller performance. The model predicted the performance (COP and total energy consumption) of water-cooled centrifugal chillers as a function...
Transforms for prediction residuals in video coding
Kam??l?, Fatih
2010-01-01T23:59:59.000Z
Typically the same transform, the 2-D Discrete Cosine Transform (DCT), is used to compress both image intensities in image coding and prediction residuals in video coding. Major prediction residuals include the motion ...
Predicting risk for the appearance of melanoma.
Meyskens, Frank L Jr; Ransohoff, David F
2006-01-01T23:59:59.000Z
for projecting the absolute risk of breast cancer. J NatlD, Gail MH, et al: Cancer risk prediction models: A workshopal model of breast cancer risk prediction and implications
The Impact of Heart Irradiation on Dose-Volume Effects in the Rat Lung
Luijk, Peter van [Department of Radiation Oncology, University Medical Center, Groningen (Netherlands)], E-mail: p.van.luijk@rt.umcg.nl; Faber, Hette [Department of Radiation Oncology, University Medical Center, Groningen (Netherlands); Department of Cell Biology, Section Radiation and Stress Cell Biology, University Medical Center Groningen, University of Groningen, Groningen (Netherlands)]; Meertens, Harm [Department of Radiation Oncology, University Medical Center, Groningen (Netherlands)]; Schippers, Jacobus M. [Accelerator Department, Paul Scherrer Institut, Villigen (Switzerland)]; Langendijk, Johannes A. [Department of Radiation Oncology, University Medical Center, Groningen (Netherlands)]; Brandenburg, Sytze [Kernfysisch Versneller Instituut, University of Groningen, Groningen (Netherlands)]; Kampinga, Harm H. [Department of Cell Biology, Section Radiation and Stress Cell Biology, University Medical Center Groningen, University of Groningen, Groningen (Netherlands)]; Coppes, Robert P. Ph.D. [Department of Radiation Oncology, University Medical Center, Groningen (Netherlands); Department of Cell Biology, Section Radiation and Stress Cell Biology, University Medical Center Groningen, University of Groningen, Groningen (Netherlands)]
2007-10-01T23:59:59.000Z
Purpose: To test the hypothesis that heart irradiation increases the risk of a symptomatic radiation-induced loss of lung function (SRILF) and that this can be well-described as a modulation of the functional reserve of the lung. Methods and Materials: Rats were irradiated with 150-MeV protons. Dose-response curves were obtained for a significant increase in breathing frequency after irradiation of 100%, 75%, 50%, or 25% of the total lung volume, either including or excluding the heart from the irradiation field. A significant increase in the mean respiratory rate after 6-12 weeks compared with 0-4 weeks was defined as SRILF, based on biweekly measurements of the respiratory rate. The critical volume (CV) model was used to describe the risk of SRILF. Fits were done using a maximum likelihood method. Consistency between model and data was tested using a previously developed goodness-of-fit test. Results: The CV model could be fitted consistently to the data for lung irradiation only. However, this fitted model failed to predict the data that also included heart irradiation. Even refitting the model to all data resulted in a significant difference between model and data. These results imply that, although the CV model describes the risk of SRILF when the heart is spared, the model needs to be modified to account for the impact of dose to the heart on the risk of SRILF. Finally, a modified CV model is described that is consistent to all data. Conclusions: The detrimental effect of dose to the heart on the incidence of SRILF can be described by a dose dependent decrease in functional reserve of the lung.
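The maximum-likelihood fitting mentioned in this abstract can be illustrated with a small, hedged sketch. The paper fits the critical-volume model; here a simple logistic dose-response curve stands in purely for illustration, and all function and parameter names (`neg_log_likelihood`, `d50`, `k`) are assumptions of this sketch, not taken from the paper.

```python
import math

# Hedged sketch: negative log-likelihood of binary dose-response outcomes
# under an illustrative logistic model p(d) = 1 / (1 + exp(-k * (d - d50))).
# Minimizing this over (d50, k) is a maximum-likelihood fit.

def neg_log_likelihood(params, doses, responses):
    d50, k = params  # dose at 50% response probability, and slope
    nll = 0.0
    for d, r in zip(doses, responses):
        p = 1.0 / (1.0 + math.exp(-k * (d - d50)))
        p = min(max(p, 1e-12), 1.0 - 1e-12)  # guard against log(0)
        nll -= math.log(p) if r else math.log(1.0 - p)
    return nll
```

Passing this function to any general-purpose optimizer yields the fitted parameters; goodness-of-fit checks, as in the abstract, compare the fitted model's predictions against held-out dose-volume conditions.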
Predicting low-frequency radio fluxes of known extrasolar planets
J. -M. Grießmeier; P. Zarka; H. Spreeuw
2008-06-02T23:59:59.000Z
Context. Close-in giant extrasolar planets ("Hot Jupiters") are believed to be strong emitters in the decametric radio range. Aims. We present the expected characteristics of the low-frequency magnetospheric radio emission of all currently known extrasolar planets, including the maximum emission frequency and the expected radio flux. We also discuss the escape of exoplanetary radio emission from the vicinity of its source, which imposes additional constraints on detectability. Methods. We compare the different predictions obtained with all four existing analytical models for all currently known exoplanets. We also take care to use realistic values for all input parameters. Results. The four different models for planetary radio emission lead to very different results. The largest fluxes are found for the magnetic energy model, followed by the CME model and the kinetic energy model (for which our results are found to be much less optimistic than those of previous studies). The unipolar interaction model does not predict any observable emission for the present exoplanet census. We also give estimates for the planetary magnetic dipole moment of all currently known extrasolar planets, which will be useful for other studies. Conclusions. Our results show that observations of exoplanetary radio emission are feasible, but that the number of promising targets is not very high. The catalog of targets will be particularly useful for current and future radio observation campaigns (e.g. with the VLA, GMRT, UTR-2 and with LOFAR).
FACILITATORY NEURAL DYNAMICS FOR PREDICTIVE EXTRAPOLATION
Choe, Yoonsuck
Facilitatory Neural Dynamics for Predictive Extrapolation. A Dissertation by Hee Jin Lim, submitted to Texas A&M University. Major Subject: Computer Science. Abstract: Facilitatory Neural Dynamics for Predictive Extrapolation. (August 2006)
Ensemble Prediction
Froude, Lizzie
…by numerical weather prediction (NWP). Operational NWP models are based on a set of equations… European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP)… will grow rapidly, resulting in a total loss of predictability at higher forecast times. Today's models…
Amending Numerical Weather Prediction forecasts using GPS
Stoffelen, Ad
…to validate the amounts of humidity in Numerical Weather Prediction (NWP) model forecasts. This paper presents… Satellite images and Numerical Weather Prediction (NWP) models are used together with the synoptic surface… In this paper, a case is presented for which the operational Numerical Weather Prediction (NWP) model HIRLAM…
Prediction versus Projection: How weather forecasting and
Howat, Ian M.
Prediction versus Projection: How weather forecasting and climate models differ. Aaron B. Wilson. Context: Global… http://data.giss.nasa.gov/ … Numerical Weather Prediction: collect observations… alters associated weather patterns. Models used to predict weather depend on the current observed state…
Short Specialist Review Gene structure prediction
Brendel, Volker
Short Specialist Review: Gene structure prediction in plant genomes. Volker Brendel, Iowa State… within most genes makes the problem of computational gene structure prediction distinct from (and harder than) prediction in vertebrates. The second reason is pragmatic: Expressed Sequence Tag (EST) sequencing and whole…
Prediction Markets Partition model of knowledge
Fiat, Amos
Prediction Markets: partition model of knowledge; distributed information markets; convergence time bounds. Computational Aspects of Prediction Markets, David M. Pennock and Rahul Sami, December 5, 2012. Presented by: Rami Eitan.
Prediction of Freshmen Academic Performance Iuliana Ianus
Prediction of Freshmen Academic Performance. Iuliana Ianus, Department of Statistics, Carnegie Mellon. …is to improve prediction of freshman GPA based on college admission data to better inform the decision as to who… algorithm for making this prediction. Data for two consecutive entering classes at CMU were used. Both…
Correlation and Prediction of Snow Water (United States Department of Agriculture, Forest Service)
Standiford, Richard B.
United States Department of Agriculture, Forest Service. Correlation and Prediction of Snow Water… L. Azuma. McGurk, Bruce J.; Azuma, David L. 1992. Correlation and prediction of snow water… and, by implication, prediction of wilderness snow data by nonwilderness sensors that are typically…
Theory and Applications of Competitive Prediction
Sheldon, Nathan D.
Theory and Applications of Competitive Prediction. Fedor Zhdanov, Computer Learning Research Centre. Abstract: Predicting the future is an important purpose of machine learning research. In online learning, predictions are given sequentially rather than all at once. People wish to make sensible decisions…
Predicting the Wild Salmon Production Using Bayesian
Myllymäki, Petri
Predicting the Wild Salmon Production Using Bayesian Networks. Kimmo Valtonen, Tommi Mononen, Petri Karlsson and Ingemar Perä. December 22, 2002. HIIT Technical Report 2002-7.
Online prediction and control nonlinear stochastic systems
…temperature in district heating systems. · Prediction of power production from the wind turbines located… and their application to prediction and control within district heating systems and for prediction of wind power. …temperature in district heating systems', Technical Report IMM-REP-2002-23, Informatics and Mathematical…
Predictive modelling of boiler fouling
Not Available
1992-01-01T23:59:59.000Z
In this reporting period, efforts were initiated to supplement the comprehensive flow field description obtained from the RNG-Spectral Element Simulations by incorporating, in a general framework, appropriate modules to model particle and condensable species transport to the surface. Specifically, a brief survey of the literature revealed the following possible mechanisms for transporting different ash constituents from the host gas to boiler tubes as deserving prominence in building the overall comprehensive model: (1) Flame-volatilized species, chiefly sulfates, are deposited on cooled boiler tubes via the mechanism of classical vapor diffusion. This mechanism is more efficient than particulate ash deposition, and as a result there is usually an enrichment of condensable salts, chiefly sulfates, in boiler deposits. (2) Particle diffusion (Brownian motion) may account for deposition of some fine particles below 0.1 µm in diameter; however, in comparison with vapor diffusion and particle deposition, the amount of material transported to the tubes via this route is probably small. (3) Eddy diffusion, thermophoretic and electrophoretic deposition mechanisms are likely to have a marked influence in transporting 0.1 to 5 µm particles from the host gas to cooled boiler tubes. (4) Inertial impaction is the dominant mechanism in transporting particles above 5 µm in diameter to water and steam tubes in pulverized-coal-fired boilers, where the typical flue gas velocity is between 10 and 25 m/s. Particles above 10 µm usually have kinetic energies in excess of what can be dissipated at impact (in the absence of molten sulfate or viscous slag deposit), resulting in their entrainment in the host gas.
Climate Change Impacts in the Amazon. Review of scientific literature
NONE
2006-04-15T23:59:59.000Z
The Amazon's hydrological cycle is a key driver of global climate, and global climate is therefore sensitive to changes in the Amazon. Climate change threatens to substantially affect the Amazon region, which in turn is expected to alter global climate and increase the risk of biodiversity loss. In this literature review the following subjects can be distinguished: Observed Climatic Change and Variability, Predicted Climatic Change, Impacts, Forests, Freshwater, Agriculture, Health, and Sea Level Rise.
FUEL CASK IMPACT LIMITER VULNERABILITIES
Leduc, D; Jeffery England, J; Roy Rothermel, R
2009-02-09T23:59:59.000Z
Cylindrical fuel casks often have impact limiters surrounding just the ends of the cask shaft in a typical 'dumbbell' arrangement. The primary purpose of these impact limiters is to absorb energy to reduce loads on the cask structure during impacts associated with a severe accident. Impact limiters are also credited in many packages with protecting closure seals and maintaining lower peak temperatures during fire events. For this credit to be taken in safety analyses, the impact limiter attachment system must be shown to retain the impact limiter following Normal Conditions of Transport (NCT) and Hypothetical Accident Conditions (HAC) impacts. Large casks are often certified by analysis only because of the costs associated with testing. Therefore, some cask impact limiter attachment systems have not been tested in real impacts. A recent structural analysis of the T-3 Spent Fuel Containment Cask found problems with the design of the impact limiter attachment system. Assumptions in the original Safety Analysis for Packaging (SARP) concerning the loading in the attachment bolts were found to be inaccurate in certain drop orientations. This paper documents the lessons learned and their applicability to impact limiter attachment system designs.
Predicting Neutrinoless Double Beta Decay
M. Hirsch; Ernest Ma; J. W. F. Valle; A. Villanova del Moral
2005-07-12T23:59:59.000Z
We give predictions for the neutrinoless double beta decay rate in a simple variant of the A_4 family symmetry model. We show that there is a lower bound for the neutrinoless double beta decay amplitude even in the case of normal hierarchical neutrino masses, corresponding to an effective mass parameter |m_ee| ≥ 0.17 √(Δm²_ATM). This result holds both for the CP conserving and CP violating cases. In the latter case we show explicitly that the lower bound on |m_ee| is sensitive to the value of the Majorana phase. We conclude therefore that in our scheme, neutrinoless double beta decay may be accessible to the next generation of high sensitivity experiments.
Model accurately predicts directional borehole trajectory
Mamedbekov, O.K. (Azerbaijan State Petroleum Academy, Baku (Azerbaijan))
1994-08-29T23:59:59.000Z
Theoretical investigations and field data analyses helped develop a new method of predicting the rate of inclination change in a deviated well bore to help reduce the frequency and magnitude of doglegs. Predicting borehole dogleg severity is one of the main problems in directional drilling. Predicting the tendency and magnitude of borehole deviation and comparing them to the planned well path makes it possible to improve bottom hole assembly (BHA) design and to reduce the number of correction runs. The application of adaptation models to predict the rate of inclination change when measurement-while-drilling systems are used results in improved prediction accuracy and, therefore, fewer correction runs.
Alvarez, Pedro J.
Modeling the natural attenuation of benzene in groundwater impacted by ethanol-blended fuels: Effect of ethanol content on the lifespan and maximum length of benzene plumes. Diego E. Gomez and Pedro J. Alvarez. 10 March 2009. A numerical model was used to evaluate how the concentration of ethanol…
Chao, R.M.; Ko, S.H.; Lin, I.H. [Department of Systems and Naval Mechatronics Engineering, National Cheng Kung University, Tainan, Taiwan 701 (China); Pai, F.S. [Department of Electronic Engineering, National University of Tainan (China); Chang, C.C. [Department of Environment and Energy, National University of Tainan (China)
2009-12-15T23:59:59.000Z
The historically high price of crude oil is stimulating research into solar (green) energy as an alternative energy source. In general, applications with large solar energy output require a maximum power point tracking (MPPT) algorithm to optimize the power generated by the photovoltaic effect. This work aims to provide a stand-alone solution for solar energy applications by integrating a DC/DC buck converter with a newly developed quadratic MPPT algorithm along with its appropriate software and hardware. The quadratic MPPT method utilizes the three previously used duty cycles with their corresponding power outputs. It approaches the maximum value by using a second-order polynomial formula, which converges faster than existing MPPT algorithms. The hardware implementation takes advantage of the real-time controller system from National Instruments, USA. Experimental results have shown that the proposed solar mechatronics system can correctly and effectively track the maximum power point without any difficulties. (author)
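The quadratic tracking step described in this abstract (fit a parabola through the last three duty-cycle/power samples and move to its vertex) can be sketched as follows. This is a minimal illustration, not the authors' implementation; the function name and the fallback rule for a non-concave fit are assumptions of this sketch.

```python
# Sketch of one quadratic MPPT step: interpolate a parabola p(x) = a*x^2 + b*x + c
# through three (duty cycle, power) samples and return the vertex -b / (2a),
# which estimates the duty cycle of maximum power.

def quadratic_mppt_step(duties, powers):
    (d1, d2, d3), (p1, p2, p3) = duties, powers
    denom = (d1 - d2) * (d1 - d3) * (d2 - d3)
    a = (d3 * (p2 - p1) + d2 * (p1 - p3) + d1 * (p3 - p2)) / denom
    b = (d3**2 * (p1 - p2) + d2**2 * (p3 - p1) + d1**2 * (p2 - p3)) / denom
    if a >= 0:  # no interior maximum; fall back to the best sample so far
        return max(zip(powers, duties))[1]
    return -b / (2 * a)
```

For samples lying on a concave power curve, e.g. p(x) = 1 - (x - 0.5)², the step lands on the true maximum at duty cycle 0.5 in a single iteration, which is the claimed advantage over fixed-step hill climbing.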
Impact assisted segmented cutterhead
Morrell, Roger J. (Bloomington, MN); Larson, David A. (Minneapolis, MN); Ruzzi, Peter L. (Eagan, MN)
1992-01-01T23:59:59.000Z
An impact assisted segmented cutterhead device is provided for cutting various surfaces from coal to granite. The device comprises a plurality of cutting bit segments deployed in side by side relationship to form a continuous cutting face and a plurality of impactors individually associated with respective cutting bit segments. An impactor rod of each impactor connects that impactor to the corresponding cutting bit segment. A plurality of shock mounts dampening the vibration from the associated impactor. Mounting brackets are used in mounting the cutterhead to a base machine.
Maneuvering impact boring head
Zollinger, W. Thor (Idaho Falls, ID); Reutzel, Edward W. (Idaho Falls, ID)
1998-01-01T23:59:59.000Z
An impact boring head may comprise a main body having an internal cavity with a front end and a rear end. A striker having a head end and a tail end is slidably mounted in the internal cavity of the main body so that the striker can be reciprocated between a forward position and an aft position in response to hydraulic pressure. A compressible gas contained in the internal cavity between the head end of the striker and the front end of the internal cavity returns the striker to the aft position upon removal of the hydraulic pressure.
Maneuvering impact boring head
Zollinger, W.T.; Reutzel, E.W.
1998-08-18T23:59:59.000Z
An impact boring head may comprise a main body having an internal cavity with a front end and a rear end. A striker having a head end and a tail end is slidably mounted in the internal cavity of the main body so that the striker can be reciprocated between a forward position and an aft position in response to hydraulic pressure. A compressible gas contained in the internal cavity between the head end of the striker and the front end of the internal cavity returns the striker to the aft position upon removal of the hydraulic pressure. 8 figs.
Innovation Impact Publications | NREL
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
EIS-0203: Programmatic Final Environmental Impact Statement ...
Broader source: Energy.gov (indexed) [DOE]
Impact Statement EIS-0203: Programmatic Final Environmental Impact Statement Spent Nuclear Fuel Management and Idaho National Engineering Laboratory Environmental...
Final Uranium Leasing Program Programmatic Environmental Impact...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Final Uranium Leasing Program Programmatic Environmental Impact Statement (PEIS) Final Uranium Leasing Program Programmatic Environmental Impact Statement (PEIS) Uranium Leasing...
The Asteroid Impact Hazard: Historical Perspective
Waliser, Duane E.
Chelyabinsk, Ural Mountains, Russia, February 15, 2013. Tunguska impact, Russia.
Air shower predictions: impact on energy and composition measurements; detector optimization. Assumption: 2^n particles after n interactions; shower maximum.
Miami, University of
Climate Prediction Center Products in Support of National Security. Mike Halpert, Deputy Director, Climate Prediction Center, 5200 Auth Rd., Camp Springs, MD 20746, 301-763-8000 x7535, Mike.Halpert@noaa.gov. The Climate Prediction Center (CPC) delivers climate prediction, monitoring, and diagnostic products…
Numerical Weather Prediction (NWP) and hybrid ARMA/ANN model to predict global radiation
Paris-Sud XI, Université de
Numerical Weather Prediction (NWP) and hybrid ARMA/ANN model to predict global radiation. Cyril… Abstract: We propose in this paper an original technique to predict global radiation using a hybrid ARMA/ANN model and data issued from a numerical weather prediction model (ALADIN).
Prediction Error and Event Boundaries
Zacks, Jeffrey M.
A computational model of event segmentation from perceptual prediction. Jeremy R. Reynolds, Jeffrey M. Zacks, and Todd S. Braver, Washington University. …People tend…
Timmer, Jens
The seizure prediction characteristic: a general framework to assess and compare seizure prediction… numerous methods have been suggested that claim to predict from the EEG the onset of epileptic seizures… of a seizure prediction method and an intervention system, would improve patient quality of life. The question…
Young, R. Michael
Can Fault Prediction Models and Metrics be Used for Vulnerability Prediction? Yonghee Shin. …to prioritize security inspection and testing efforts may be better served by a prediction model that indicates… commonalities that may allow development teams to use traditional fault prediction models and metrics…
Load Value Prediction Using Prediction Outcome Histories Martin Burtscher and Benjamin G. Zorn
Burtscher, Martin
Load Value Prediction Using Prediction Outcome Histories. Martin Burtscher and Benjamin G. Zorn. …system performance. Load value prediction alleviates this problem by allowing the CPU to speculatively… can only correctly predict about 40 to 70 percent of the load instructions. Confidence estimators…
Predicting Time-Delays under Real-Time Scheduling for Linear Model Predictive Control
Zhang, Fumin
Predicting Time-Delays under Real-Time Scheduling for Linear Model Predictive Control. Zhenwu Shi… prediction of time-delays caused by real-time scheduling. Then, a model predictive controller is designed… the interaction between real-time scheduling and control design has received interest in the literature.
Nuclear-Powered Millisecond Pulsars and the Maximum Spin Frequency of Neutron Stars
Deepto Chakrabarty; Edward H. Morgan; Michael P. Muno; Duncan K. Galloway; Rudy Wijnands; Michiel van der Klis; Craig B. Markwardt
2003-07-01T23:59:59.000Z
Millisecond pulsars are neutron stars (NSs) that are thought to have been spun-up by mass accretion from a stellar companion. It is unknown whether there is a natural brake for this process, or if it continues until the centrifugal breakup limit is reached at submillisecond periods. Many NSs that are accreting from a companion exhibit thermonuclear X-ray bursts that last tens of seconds, caused by unstable nuclear burning on their surfaces. Millisecond brightness oscillations during bursts from ten NSs (as distinct from other rapid X-ray variability that is also observed) are thought to measure the stellar spin, but direct proof of a rotational origin has been lacking. Here, we report the detection of burst oscillations at the known spin frequency of an accreting millisecond pulsar, and we show that these oscillations always have the same rotational phase. This firmly establishes burst oscillations as nuclear-powered pulsations tracing the spin of accreting NSs, corroborating earlier evidence. The distribution of spin frequencies of the 11 nuclear-powered pulsars cuts off well below the breakup frequency for most NS models, supporting theoretical predictions that gravitational radiation losses can limit accretion torques in spinning up millisecond pulsars.
Impacts of Future Climate and Emission Changes on U.S. Air Quality
Penrod, Ashley; Zhang, Yang; Wang, K.; Wu, Shiang Yuh; Leung, Lai-Yung R.
2014-06-01T23:59:59.000Z
Changes in climate and emissions will affect future air quality. In this work, simulations of present (2001-2005) and future (2026-2030) regional air quality are conducted with the newly released CMAQ version 5.0 to examine the individual and combined impacts of simulated future climate and anthropogenic emission projections on air quality over the U.S. Current (2001-2005) meteorological and chemical predictions are evaluated against observational data to assess the model’s capability in reproducing the seasonal differences. Overall, WRF and CMAQ perform reasonably well. Increased temperatures (up to 3.18 °C) and decreased ventilation (up to 157 m in planetary boundary layer height) are found in both future winter and summer, with more prominent changes in winter. Increases in future temperatures result in increased isoprene and terpene emissions in winter and summer, driving the increase in maximum 8-h average O3 (up to 5.0 ppb) over the eastern U.S. in winter while decreases in NOx emissions drive the decrease in O3 over most of the U.S. in summer. Future concentrations of PM2.5 in winter and summer and many of its components including organic matter in winter, ammonium and nitrate in summer, and sulfate in winter and summer, decrease due to decreases in primary anthropogenic emissions and the concentrations of secondary anthropogenic pollutants and increased precipitation in winter. Future winter and summer dry and wet deposition fluxes are spatially variable and increase with increasing surface resistance and precipitation (e.g., NH4+ and NO3- dry and wet deposition fluxes increase in winter over much of the U.S.), respectively, and decrease with a decrease in ambient particulate concentrations (e.g., SO42- dry and wet deposition fluxes decrease over the eastern U.S. in summer and winter). Sensitivity simulations show that anthropogenic emission projections dominate over changes in climate in their impacts on the U.S. air quality in the near future. 
Changes in some regions/species, however, are dominated by climate and/or both climate and anthropogenic emissions, especially in future years that are marked by meteorological conditions conducive to poor air quality.
Predicting Stimulation Response Relationships For Engineered...
Broader source: Energy.gov (indexed) [DOE]
…be better known/understood, saving exploratory effort and money. * Will upscale real fracture distributions to produce realistic formation initial conditions to enhance impact of…
Recalibration of the complaint prediction model
Federspiel, C.; Martin, R.; Yan, H.
2004-01-01T23:59:59.000Z
Proceedings of Healthy Buildings/IAQ '97. Wyon, D.P. 1993. Healthy buildings and their impact on…
Predicting the outcome of roulette
Michael Small; Chi Kong Tse
2012-07-13T23:59:59.000Z
There have been several popular reports of various groups exploiting the deterministic nature of the game of roulette for profit. Moreover, through its history the inherent determinism in the game of roulette has attracted the attention of many luminaries of chaos theory. In this paper we provide a short review of that history and then set out to determine to what extent that determinism can really be exploited for profit. To do this, we provide a very simple model for the motion of a roulette wheel and ball and demonstrate that knowledge of initial position, velocity and acceleration is sufficient to predict the outcome with adequate certainty to achieve a positive expected return. We describe two physically realisable systems to obtain this knowledge both incognito and in situ. The first system relies only on a mechanical count of rotation of the ball and the wheel to measure the relevant parameters. By applying this technique to a standard casino-grade European roulette wheel we demonstrate an expected return of at least 18%, well above the -2.7% expected of a random bet. With a more sophisticated, albeit more intrusive, system (mounting a digital camera above the wheel) we demonstrate a range of systematic and statistically significant biases which can be exploited to provide an improved guess of the outcome. Finally, our analysis demonstrates that even a very slight slant in the roulette table leads to a very pronounced bias which could be further exploited to substantially enhance returns.
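The core claim of this abstract, that initial position, velocity and acceleration suffice to predict the outcome, reduces to elementary kinematics. A hedged sketch assuming a constant angular deceleration (the paper's actual model is more detailed, and the function name and parameters here are illustrative only):

```python
import math

# Sketch: with the ball's initial angle theta0 (rad), angular velocity omega0
# (rad/s), and constant angular deceleration of magnitude alpha (rad/s^2),
# predict the angle at which the ball slows to the drop-off speed omega_drop.

def predict_drop_angle(theta0, omega0, alpha, omega_drop):
    t_drop = (omega0 - omega_drop) / alpha             # time to reach drop-off speed
    theta = theta0 + omega0 * t_drop - 0.5 * alpha * t_drop**2
    return theta % (2 * math.pi)                       # angle on the wheel rim
```

Combining this with the analogous prediction for the (slowly rotating) wheel gives the pocket region the ball is most likely to land in, which is all a positive-expectation bet requires.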
Predicting the outcome of roulette
Small, Michael
2012-01-01T23:59:59.000Z
There have been several popular reports of various groups exploiting the deterministic nature of the game of roulette for profit. Moreover, through its history the inherent determinism in the game of roulette has attracted the attention of many luminaries of chaos theory. In this paper we provide a short review of that history and then set out to determine to what extent that determinism can really be exploited for profit. To do this, we provide a very simple model for the motion of a roulette wheel and ball and demonstrate that knowledge of initial position, velocity and acceleration is sufficient to predict the outcome with adequate certainty to achieve a positive expected return. We describe two physically realisable systems to obtain this knowledge both incognito and in situ. The first system relies only on a mechanical count of rotation of the ball and the wheel to measure the relevant parameters. By applying this technique to a standard casino-grade European roulette wheel we demonstrate an expec...
Otago, University of
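The abstract's core claim — that initial position, velocity and acceleration suffice to predict the pocket — can be illustrated with a toy constant-deceleration model. This is a sketch under our own assumptions (a fixed "drop speed" at which the ball leaves the rim, uniform deceleration, constant wheel speed); the parameter names and the drop threshold are ours, not the paper's.

```python
import math

def predict_pocket(ball_theta, ball_omega, ball_alpha,
                   wheel_theta, wheel_omega, n_pockets=37):
    """Toy deterministic roulette model (illustrative only).

    Assumes the ball decelerates uniformly at ball_alpha (rad/s^2, < 0)
    until it reaches an assumed fixed drop speed, then falls straight
    into whichever pocket is beneath it at that instant.
    """
    drop_omega = 2.0  # rad/s at which the ball leaves the rim (assumed)
    # Time for the ball to slow from ball_omega down to drop_omega.
    t_drop = (drop_omega - ball_omega) / ball_alpha
    # Angular positions at the drop time: constant deceleration for the
    # ball, constant angular velocity for the wheel.
    ball_final = ball_theta + ball_omega * t_drop + 0.5 * ball_alpha * t_drop**2
    wheel_final = wheel_theta + wheel_omega * t_drop
    # The pocket index is the ball's angle relative to the wheel,
    # discretized into n_pockets equal sectors.
    rel = (ball_final - wheel_final) % (2 * math.pi)
    return int(rel / (2 * math.pi) * n_pockets)
```

Note that the -2.7% baseline quoted in the abstract is just the house edge of a single-zero wheel: a straight-up bet pays 36-for-1 against 37 pockets, so the expected return is 36/37 - 1 ≈ -2.7%.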
Maximum During a Solar Proton Event. A. Seppälä, P. T. Verronen, V. F. Sofieva, J. Tamminen, E. Kyrölä, Finnish Meteorological Institute, Earth Observation, Helsinki, Finland; C. J. Rodger, Physics Department ... to study the effects of the January 2005 solar storms on the polar winter middle atmosphere. The model
Benson, Peter Andrew
2013-12-13T23:59:59.000Z
/CO discrepancy. Ramfjord found 0.3 to 0.5 mm forward of CO to be physiologic [8]. Celenza, using the RUM definition of CR, found MI to be 0.02 to 0.36 mm forward of CO [9]. Parker recommends a maximum MI/CO discrepancy of 0.5 mm forward as a criterion...
in improving the thermoelectric efficiency and maximum cooling mainly focuses on improving materials' figure of merit, power factor, and thermal conductivity. Bi2Te3 has been the most popular thermoelectric material at room temperature owing to its high power factor. Most of the recent research on thermoelectrics focuses on improving the material
Massachusetts at Amherst, University of
Detecting Anomalies in Network Traffic Using Maximum Entropy Estimation. Yu Gu, Andrew Mc... gives the network administrator a multi-dimensional view of the network traffic. Our method can detect anomaly classes that increase the relative entropy, thus providing the network administrator information related
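The relative entropy the snippet refers to is the KL divergence between an observed traffic distribution and a baseline over packet classes. A minimal sketch, assuming count histograms per class (the class names, counts, and the infinity convention for unseen classes are our illustration, not the paper's estimator):

```python
import math
from collections import Counter

def kl_divergence(p_counts, q_counts):
    """Relative entropy D(P || Q) between two empirical distributions
    over the same set of packet classes. P is the observed window,
    Q is the baseline; larger values indicate more anomalous traffic."""
    p_total = sum(p_counts.values())
    q_total = sum(q_counts.values())
    d = 0.0
    for cls, count in p_counts.items():
        p = count / p_total
        q = q_counts.get(cls, 0) / q_total
        if q == 0:
            # Class never seen in the baseline: maximally surprising.
            return float("inf")
        d += p * math.log(p / q)
    return d

# Hypothetical traffic histograms: the observed window is dominated by
# ICMP, which the baseline says should be rare (an ICMP flood pattern).
baseline = Counter({"tcp_http": 800, "udp_dns": 150, "icmp": 50})
window   = Counter({"tcp_http": 300, "udp_dns": 100, "icmp": 600})
```

An anomaly alarm would then fire whenever `kl_divergence(window, baseline)` exceeds a calibrated threshold.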
Rietzel, Eike [Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA (United States); Abteilung Biophysik, Gesellschaft fuer Schwerionenforschung, Darmstadt (Germany)], E-mail: eike@rietzel.net; Liu, Arthur K.; Chen, George T.Y.; Choi, Noah C. [Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA (United States)
2008-07-15T23:59:59.000Z
Purpose: To assess the accuracy of maximum-intensity volumes (MIV) for fast contouring of lung tumors including respiratory motion. Methods and Materials: Four-dimensional computed tomography (4DCT) data of 10 patients were acquired. Maximum-intensity volumes were constructed by assigning the maximum Hounsfield unit in all CT volumes per geometric voxel to a new, synthetic volume. Gross tumor volumes (GTVs) were contoured on all CT volumes, and their union was constructed. The GTV with all its respiratory motion was contoured on the MIV as well. Union GTVs and GTVs including motion were compared visually. Furthermore, planning target volumes (PTVs) were constructed for the union of GTVs and the GTV on MIV. These PTVs were compared by centroid position, volume, geometric extent, and surface distance. Results: Visual comparison of GTVs demonstrated failure of the MIV technique for 5 of 10 patients. For adequate GTV_MIVs, differences between PTVs were <1.0 mm in centroid position, 5% in volume, ±5 mm in geometric extent, and ±0.5 ± 2.0 mm in surface distance. These values represent the uncertainties for successful MIV contouring. Conclusion: Maximum-intensity volumes are a good first estimate for target volume definition including respiratory motion. However, it seems mandatory to validate each individual MIV by overlaying it on a movie loop displaying the 4DCT data and editing it for possible inadequate coverage of GTVs on additional 4DCT motion states.
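The MIV construction described in the abstract — assigning each geometric voxel the maximum Hounsfield unit observed across all respiratory phases — is a per-voxel maximum over the 4DCT stack. A minimal sketch with NumPy (the function name and array layout are our assumptions):

```python
import numpy as np

def maximum_intensity_volume(phases):
    """Build a maximum-intensity volume (MIV) from the CT volumes of a
    4DCT series: each voxel of the synthetic volume receives the maximum
    Hounsfield unit observed at that position across all phases.

    phases: sequence of equally-shaped 3D arrays (z, y, x) of HU values.
    """
    stack = np.stack(phases, axis=0)  # shape: (n_phases, z, y, x)
    return stack.max(axis=0)          # voxelwise maximum over phases
```

A moving tumor (high HU) then leaves a bright trace over its whole excursion, which is why the MIV approximates the union GTV — and why, per the abstract, it can fail when surrounding tissue is similarly dense.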
Sachdeva, Sandeep
2006-04-12T23:59:59.000Z
We propose a novel branch-and-price (B&P) approach to solve the maximum weighted independent set problem (MWISP). Our approach uses clones of vertices to create edge-disjoint partitions from vertex-disjoint partitions. We solve the MWISP on sub...
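As a tiny reference point for the problem the branch-and-price approach solves — not the B&P algorithm itself — the maximum weighted independent set problem can be stated as exhaustive search over vertex subsets (function and variable names are ours):

```python
from itertools import combinations

def max_weighted_independent_set(vertices, weights, edges):
    """Exhaustive MWISP solver for tiny graphs: find the heaviest subset
    of vertices containing no edge. Exponential time; purely a
    specification of the objective that B&P methods scale up."""
    edge_set = {frozenset(e) for e in edges}
    best, best_w = set(), 0
    for r in range(len(vertices) + 1):
        for subset in combinations(vertices, r):
            # Reject any subset that contains both endpoints of an edge.
            if any(frozenset(p) in edge_set for p in combinations(subset, 2)):
                continue
            w = sum(weights[v] for v in subset)
            if w > best_w:
                best, best_w = set(subset), w
    return best, best_w
```

On the path a–b–c with weights 2, 3, 2, the optimum is {a, c} with weight 4, beating the single heavy vertex b.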
The maximum potential to generate wind power in the contiguous United States is more than three times that found in a previous study. The new analysis is based on the latest computer models and examines the wind potential at wind turbine hub heights ... responsible for the increased wind potential in the study. Developed in collaboration with renewable energy
- 46 - TRAVEL ACCIDENT INSURANCE CHUBB Benefits The maximum benefit (Principal Sum) is $100 of the accident, the policy will pay as follows: Payment Schedule Injury or Dismemberment Policy Pays Loss of Life to seven days Aggregate Limit of Insurance: $1,000,000 per Accident Coverage y 24-Hour Business Travel y
Rabatel, Antoine
Glacier recession on Cerro Charquini (16° S), Bolivia, since the maximum of the Little Ice Age (17th century) ... de Miraflores, La Paz, Bolivia; CP 9214, La Paz, Bolivia; Maison des Sciences de l'Eau, BP 64501, 34394 Montpellier, France. ABSTRACT. Cerro Charquini, Bolivia (Cordillera Real, 5392 m a
Bioenergy Impact on Wisconsin's Workforce
Broader source: Energy.gov [DOE]
Troy Runge, Wisconsin Bioenergy Initiative, presents on bioenergy's impact on Wisconsin's workforce development for the Biomass/Clean Cities States webinar.
Environmental Impacts of Smart Grid
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
and distribution; TOU, time of use; UBC, unburned hydrocarbon; UNDEERC, University of North Dakota Energy and Environmental Research Center; V2G, vehicle to grid ... Environmental Impacts of...
Trends in template/fragment-free protein structure prediction
Zhou, Yaoqi; Duan, Yong; Yang, Yuedong; Faraggi, Eshel; Lei, Hongxing
2011-01-01T23:59:59.000Z
that only real-value prediction allows the sampling of ... II: protein structure prediction program for genome-scale ... L (2001) Structure prediction meta server. Bioinformatics
Prediction Markets as an Aggregation Mechanism for Collective Intelligence
Watkins, Jennifer H.
2007-01-01T23:59:59.000Z
through online prediction markets (undergraduate thesis). ... J., & Zitzewitz, E. (2004). Prediction markets. Journal of ... Prediction Markets as an Aggregation Mechanism for