Prediction of vehicle impact forces
Kaderka, Darrell Laine
1990-01-01T23:59:59.000Z
PREDICTION OF VEHICLE IMPACT FORCES. A Thesis by DARRELL LAINE KADERKA. Submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, May 1990. Major Subject: Civil Engineering. Approved as to style and content by: C. Eugene Buth (Chair of Committee), W. Lynn Beason (Member), Don E. Bray (Member), James T. P. Yao (Department Head).
Improving predictability of time series using maximum entropy methods
Gregor Chliamovitch; Alexandre Dupuis; Bastien Chopard; Anton Golub
2014-11-28T23:59:59.000Z
We discuss how maximum entropy methods may be applied to the reconstruction of Markov processes underlying empirical time series and compare this approach to usual frequency sampling. It is shown that, at least in low dimension, there exists a subset of the space of stochastic matrices for which the MaxEnt method is more efficient than sampling, in the sense that shorter historical samples have to be considered to reach the same accuracy. Considering short samples is of particular interest when modelling smoothly non-stationary processes, for then it provides, under some conditions, a powerful forecasting tool. The method is illustrated for a discretized empirical series of exchange rates.
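A minimal sketch of the comparison described above, assuming a synthetic two-symbol series and a simplified constraint set (the stationary marginal and the lag-1 product moment) rather than the constraints actually used in the paper:

```python
import numpy as np
from scipy.optimize import minimize

# Toy two-symbol series standing in for a discretized exchange-rate series.
rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=300)

# Frequency sampling: estimate the transition matrix directly from observed counts.
counts = np.zeros((2, 2))
for a, b in zip(x[:-1], x[1:]):
    counts[a, b] += 1
T_freq = counts / counts.sum(axis=1, keepdims=True)

# MaxEnt: choose the joint distribution p(x_t, x_t+1) of maximum entropy that
# reproduces the empirical marginal of x_t and the lag-1 moment E[x_t * x_t+1]
# (an illustrative constraint set, not necessarily the paper's).
marg = np.bincount(x[:-1], minlength=2) / (len(x) - 1)
corr = np.mean(x[:-1] * x[1:])

def neg_entropy(p):
    return np.sum(p * np.log(p + 1e-12))

constraints = [
    {"type": "eq", "fun": lambda p: p.reshape(2, 2).sum(axis=1) - marg},
    {"type": "eq", "fun": lambda p: p[3] - corr},  # for 0/1 symbols, E[x_t * x_t+1] = p(1,1)
]
res = minimize(neg_entropy, np.full(4, 0.25), bounds=[(1e-9, 1)] * 4,
               constraints=constraints)
p_joint = res.x.reshape(2, 2)
T_maxent = p_joint / p_joint.sum(axis=1, keepdims=True)

print("frequency-sampled transition matrix:\n", T_freq)
print("maximum-entropy transition matrix:\n", T_maxent)
```

Because the MaxEnt estimate only has to pin down a few moments rather than every transition count, it can remain usable when the historical sample is short, which is the regime the abstract highlights.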
Mirant Potomac, Alexandria, Virginia: Maximum Impacts Predicted by AERMOD-PRIME, Units 3, 1, 2 SO2 Case
Broader source: Energy.gov (indexed) [DOE]
Predicting Highway Construction Impacts on a Community
Minnesota, University of
Predicting Highway Construction Impacts on a Community. Making the Best Decisions in the Face of ... TH 212 & TH 52 Construction Challenges: TH 212 unsuitable materials, construction access, environmentally sensitive areas, payment curve, illegal dump sites, noise mitigation walls.
Harrington, Jerry Y.
Radiative Impacts on the Growth of Drops within Simulated Marine Stratocumulus. Part I: Maximum Solar Heating. Christopher M. Hartman and Jerry Y. Harrington, Department of Meteorology, The Pennsylvania State University (November 2004). Abstract: The effects of solar heating and infrared cooling on the vapor depositional growth...
Kempes, Christopher P; Dooris, William; West, Geoffrey B
2015-01-01T23:59:59.000Z
In the face of uncertain biological response to climate change and the many critiques concerning model complexity it is increasingly important to develop predictive mechanistic frameworks that capture the dominant features of ecological communities and their dependencies on environmental factors. This is particularly important for critical global processes such as biomass changes, carbon export, and biogenic climate feedback. Past efforts have successfully understood a broad spectrum of plant and community traits across a range of biological diversity and body size, including tree size distributions and maximum tree height, from mechanical, hydrodynamic, and resource constraints. Recently it was shown that global scaling relationships for net primary productivity are correlated with local meteorology and the overall biomass density within a forest. Along with previous efforts, this highlights the connection between widely observed allometric relationships and predictive ecology. An emerging goal of ecological...
Microbial impacts on geothermometry temperature predictions
Yoshiko Fujita; David W. Reed; Kaitlyn R. Nowak; Vicki S. Thompson; Travis L. McLing; Robert W. Smith
2013-02-01T23:59:59.000Z
Conventional geothermometry approaches assume that the composition of a collected water sample originating in a deep geothermal reservoir still reflects chemical equilibration of the water with the deep reservoir rocks. However, for geothermal prospecting samples whose temperatures have dropped to <120°C, temperature predictions may be skewed by the activity of microorganisms; microbial metabolism can drastically and rapidly change the water’s chemistry. We hypothesize that knowledge of microbial impacts on exploration sample geochemistry can be used to constrain input into geothermometry models and thereby improve the reliability of reservoir temperature predictions. To evaluate this hypothesis we have chosen to focus on sulfur cycling, because of the significant changes in redox state and pH associated with sulfur chemistry. Redox and pH are critical factors in defining the mineral-fluid equilibria that form the basis of solute geothermometry approaches. Initially we are developing assays to detect the process of sulfate reduction, using knowledge of genes specific to sulfate reducing microorganisms. The assays rely on a common molecular biological technique known as quantitative polymerase chain reaction (qPCR), which allows estimation of the number of target organisms in a particular sample by enumerating genes specific to the organisms rather than actually retrieving and characterizing the organisms themselves. For quantitation of sulfate reducing genes using qPCR, we constructed a plasmid (a piece of DNA) containing portions of two genes (known as dsrA and dsrB) that are directly involved with sulfate reduction and unique to sulfate reducing microorganisms. Using the plasmid as well as DNA from other microorganisms known to be sulfate reducers or non-sulfate reducers, we developed qPCR protocols and showed the assay’s specificity to sulfate reducers and that a qPCR standard curve using the plasmid was linear over >5 orders of magnitude. As a first test with actual field samples, the assay was applied to DNA extracted from water collected at springs located in and around the town of Soda Springs, Idaho. Soda Springs is located in the fold and thrust belt on the eastern boundary of the track of the Yellowstone Hotspot, where a deep carbon dioxide source believed to originate from Mississippian limestone contacts acidic hydrothermal fluids at depth. Both sulfate and sulfide have been measured in samples collected previously at Soda Springs. Preliminary results indicate that sulfate reducing genes were present in each of the samples tested. Our work supports evaluation of the potential for microbial processes to have altered water chemistry in geothermal exploration samples.
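The quantitation step described above rests on a standard-curve regression of threshold cycle against the logarithm of gene copy number. A minimal sketch with invented Ct values and a hypothetical unknown sample (not the authors' assay data):

```python
import numpy as np

# Hypothetical standard curve: plasmid dilutions spanning 5 orders of magnitude
# (copies per reaction) and the threshold cycles (Ct) measured for each dilution.
copies = np.array([1e2, 1e3, 1e4, 1e5, 1e6, 1e7])
ct = np.array([33.1, 29.8, 26.4, 23.1, 19.7, 16.4])

# Fit Ct = slope * log10(copies) + intercept.
slope, intercept = np.polyfit(np.log10(copies), ct, 1)
efficiency = 10 ** (-1.0 / slope) - 1.0  # amplification efficiency implied by the slope

# Estimate gene copies in an unknown sample from its measured Ct.
unknown_ct = 24.9
unknown_copies = 10 ** ((unknown_ct - intercept) / slope)

print(f"slope = {slope:.2f}, efficiency = {efficiency:.1%}")
print(f"estimated dsrAB copies in unknown: {unknown_copies:.2e}")
```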
Predicting Equity Market Price Impact with Performance Weighted Ensembles of Random Forests
Predicting Equity Market Price Impact with Performance Weighted Ensembles of Random Forests. Ash.j.mcgroarty@soton.ac.uk. Abstract: For many players in financial markets, the price impact of their trading activity represents a large proportion of their transaction costs. This paper proposes a novel machine learning method...
Numerical Prediction of High-Impact Local Weather: A Driver for Petascale Computing
Xue, Ming
Chapter 6. Numerical Prediction of High-Impact Local Weather: A Driver for Petascale Computing. Ming Xue. ... winds, lightning, hurricanes and winter storms cause hundreds of deaths and ... average annual economic losses ... While the importance of mitigating the impacts of such events on the economy and society is obvious, our ability to do so...
Mine Impact Burial Prediction Experimental (MIBEX)
Chu, Peter C.
[Presentation slides: Naval Postgraduate School; Steven D. Haeger, Naval Oceanographic Office. Recoverable slide titles include "Modeling Mine Impact Burial Depth", a plot of significant wave height Hsig (m) versus yearday of 2000 for the MISO site, "Surface Elevation Variance", and R/V John ...]
John Max Wilson; Keith Andrew
2012-07-27T23:59:59.000Z
We investigate the relative time scales associated with finite future cosmological singularities, especially those classified as Big Rip cosmologies, and the maximum predictability time of a coupled FRW-KG scalar cosmology with chaotic regimes. Our approach is to show that by starting with a FRW-KG scalar cosmology with a potential that admits an analytical solution resulting in a finite time future singularity there exists a Lyapunov time scale that is earlier than the formation of the singularity. For this singularity both the cosmological scale parameter a(t) and the Hubble parameter H(t) become infinite at a finite future time, the Big Rip time. We compare this time scale to the predictability time scale for a chaotic FRW-KG scalar cosmology. We find that there are cases where the chaotic time scale is earlier than the Big Rip singularity calling for special care in interpreting and predicting the formation of the future cosmological singularity.
Predicting on-site environmental impacts of municipal engineering works
Gangolells, Marta, E-mail: marta.gangolells@upc.edu; Casals, Miquel, E-mail: miquel.casals@upc.edu; Forcada, Núria, E-mail: nuria.forcada@upc.edu; Macarulla, Marcel, E-mail: marcel.macarulla@upc.edu
2014-01-15T23:59:59.000Z
The research findings fill a gap in the body of knowledge by presenting an effective way to evaluate the significance of on-site environmental impacts of municipal engineering works prior to the construction stage. First, 42 on-site environmental impacts of municipal engineering works were identified by means of a process-oriented approach. Then, 46 indicators and their corresponding significance limits were determined on the basis of a statistical analysis of 25 new-build and remodelling municipal engineering projects. In order to ensure the objectivity of the assessment process, direct and indirect indicators were always based on quantitative data from the municipal engineering project documents. Finally, two case studies were analysed and found to illustrate the practical use of the proposed model. The model highlights the significant environmental impacts of a particular municipal engineering project prior to the construction stage. Consequently, preventive actions can be planned and implemented during on-site activities. The results of the model also allow a comparison of proposed municipal engineering projects and alternatives with respect to the overall on-site environmental impact and the absolute importance of a particular environmental aspect. These findings are useful within the framework of the environmental impact assessment process, as they help to improve the identification and evaluation of on-site environmental aspects of municipal engineering works. The findings may also be of use to construction companies that are willing to implement an environmental management system or simply wish to improve on-site environmental performance in municipal engineering projects. -- Highlights: • We present a model to predict the environmental impacts of municipal engineering works. • It highlights significant on-site environmental impacts prior to the construction stage. • Findings are useful within the environmental impact assessment process. • They also help contractors to implement environmental management systems.
Using the Maximum X-ray Flux Ratio and X-ray Background to Predict Solar Flare Class
Winter, Lisa M
2015-01-01T23:59:59.000Z
We present the discovery of a relationship between the maximum ratio of the flare flux (namely, 0.5-4 Ang to the 1-8 Ang flux) and non-flare background (namely, the 1-8 Ang background flux), which clearly separates flares into classes by peak flux level. We established this relationship based on an analysis of the Geostationary Operational Environmental Satellites (GOES) X-ray observations of ~ 50,000 X, M, C, and B flares derived from the NOAA/SWPC flares catalog. Employing a combination of machine learning techniques (K-nearest neighbors and nearest-centroid algorithms) we show a separation of the observed parameters for the different peak flaring energies. This analysis is validated by successfully predicting the flare classes for 100% of the X-class flares, 76% of the M-class flares, 80% of the C-class flares and 81% of the B-class flares for solar cycle 24, based on the training of the parametric extracts for solar flares in cycles 22-23.
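A sketch of the classification step described above, using scikit-learn's K-nearest-neighbors and nearest-centroid classifiers on the two features named in the abstract; the feature values and class labels below are synthetic placeholders rather than GOES measurements:

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier, NearestCentroid
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the two GOES-derived features used in the paper:
# log10 of the non-flare 1-8 Angstrom background and the maximum (0.5-4 A)/(1-8 A) flux ratio.
rng = np.random.default_rng(1)
n = 2000
log_bkg = rng.uniform(-8.5, -5.5, n)
max_ratio = rng.uniform(0.01, 0.5, n)
X = np.column_stack([log_bkg, max_ratio])

# Toy labels: pretend higher background and ratio correspond to bigger flare classes.
score = (log_bkg + 8.5) / 3.0 + max_ratio
y = np.digitize(score, [0.5, 0.9, 1.2])  # 0=B, 1=C, 2=M, 3=X

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

for clf in (KNeighborsClassifier(n_neighbors=5), NearestCentroid()):
    clf.fit(X_train, y_train)
    print(type(clf).__name__, "accuracy:", round(clf.score(X_test, y_test), 3))
```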
Letschert, Virginie; Desroches, Louis-Benoit; McNeil, Michael; Saheb, Yamina
2010-05-03T23:59:59.000Z
The US Department of Energy (US DOE) has placed lighting and appliance standards at a very high priority of U.S. energy policy. However, the maximum energy savings and CO2 emissions reduction achievable via minimum efficiency performance standards (MEPS) have not yet been fully characterized. The Bottom Up Energy Analysis System (BUENAS), first developed in 2007, is a global, generic, and modular tool designed to provide policy makers with estimates of potential impacts resulting from MEPS for a variety of products, at the international and/or regional level. Using the BUENAS framework, we estimated potential national energy savings and CO2 emissions mitigation in the US residential sector that would result from the most aggressive policy foreseeable: standards effective in 2014 set at the current maximum technology (Max Tech) available on the market. This represents the most likely characterization of what can be maximally achieved through MEPS in the US. The authors rely on the latest Technical Support Documents and Analytical Tools published by the U.S. Department of Energy as a source to determine appliance stock turnover and projected efficiency scenarios of what would occur in the absence of policy. In our analysis, national impacts are determined for the following end uses: lighting, television, refrigerator-freezers, central air conditioning, room air conditioning, residential furnaces, and water heating. The analyzed end uses cover approximately 65% of site energy consumption in the residential sector (50% of the electricity consumption and 80% of the natural gas and LPG consumption). This paper uses the BUENAS methodology to calculate that energy savings from Max Tech for the U.S. residential sector products covered in this paper will reach an 18% reduction in electricity demand compared to the base case and an 11% reduction in natural gas and LPG consumption by 2030. The methodology results in reductions in CO2 emissions of a similar magnitude.
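At its core, a savings estimate of this kind is a stock-turnover calculation: units shipped after the standard takes effect each save the gap between base-case and Max Tech unit energy consumption for as long as they remain in service. The sketch below uses invented shipment, lifetime, and consumption figures, not BUENAS inputs:

```python
# Minimal stock-turnover sketch of a MEPS savings calculation (illustrative numbers only).
shipments_per_year = 10e6          # units sold each year
uec_base = 600.0                   # base-case unit energy consumption, kWh/yr
uec_max_tech = 350.0               # Max Tech unit energy consumption, kWh/yr
lifetime = 12                      # average unit lifetime, years
standard_year = 2014

def national_savings(year):
    """kWh saved in `year`: every post-standard unit still in service saves the UEC gap."""
    cohorts_in_service = min(max(year - standard_year, 0), lifetime)
    stock_affected = cohorts_in_service * shipments_per_year
    return stock_affected * (uec_base - uec_max_tech)

for year in (2020, 2030):
    print(year, f"{national_savings(year) / 1e9:.1f} TWh/yr")
```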
Validating health impact assessment: Prediction is difficult (especially about the future)
Petticrew, Mark [MRC Social and Public Health Sciences Unit, 4 Lilybank Gardens, Glasgow, G12 8RZ (United Kingdom)]. E-mail: mark@msoc.mrc.gla.ac.uk; Cummins, Steven [Department of Geography, Queen Mary, University of London, Mile End Road, London, E1 4NS (United Kingdom); Sparks, Leigh [Institute for Retail Studies, University of Stirling, Stirling, FK9 4LA (United Kingdom); Findlay, Anne [Institute for Retail Studies, University of Stirling, Stirling, FK9 4LA (United Kingdom)
2007-01-15T23:59:59.000Z
Health impact assessment (HIA) has been recommended as a means of estimating how policies, programmes and projects may impact on public health and on health inequalities. This paper considers the difference between predicting health impacts and measuring those impacts. It draws upon a case study of the building of a new hypermarket in a deprived area of Glasgow, which offered an opportunity to reflect on the issue of the predictive validity of HIA, and to consider the difference between potential and actual impacts. We found that the actual impacts of the new hypermarket on diet differed from that which would have been predicted based on previous studies. Furthermore, they challenge current received wisdom about the impact of food retail outlets in poorer areas. These results are relevant to the validity of HIA as a process and emphasise the importance of further research on the predictive validity of HIA, which should help improve its value to decision-makers.
The Impact of Using Derived Fuel Consumption Maps to Predict...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
New tool predicts economic impacts of natural gas stations
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
announced a new tool today for analyzing the economic impacts of building new compressed natural gas fueling stations. Called JOBS NG, the tool is freely available to the public....
Theoretical Prediction and Impact of Fundamental Electric Dipole Moments
Sebastian A. R. Ellis; Gordon L. Kane
2014-05-29T23:59:59.000Z
The predicted Standard Model (SM) electric dipole moments (EDMs) of electrons and quarks are tiny, providing an important window to observe new physics. Theories beyond the SM typically allow relatively large EDMs. The EDMs depend on the relative phases of terms in the effective Lagrangian of the extended theory, which are generally unknown. Underlying theories, such as string/M-theories compactified to four dimensions, could predict the phases and thus EDMs in the resulting supersymmetric (SUSY) theory. Earlier one of us, with collaborators, made such a prediction and found, unexpectedly, that the phases were predicted to be zero at tree level in the theory at the unification or string scale $\sim\mathcal{O}(10^{16})$ GeV. Electroweak (EW) scale EDMs still arise via running from the high scale, and depend only on the SM Yukawa couplings that also give the CKM phase. Here we extend the earlier work by studying the dependence of the low scale EDMs on the constrained but not fully known fundamental Yukawa couplings. The dominant contribution is from two loop diagrams and is not sensitive to the choice of Yukawa texture. The electron EDM should not be found to be larger than about $5\times 10^{-30} e$ cm, and the neutron EDM should not be larger than about $5\times 10^{-29} e$ cm. These values are quite a bit smaller than the reported predictions from Split SUSY and typical effective theories, but much larger than the Standard Model prediction. Also, since models with random phases typically give much larger EDMs, it is a significant testable prediction of compactified M-theory that the EDMs should not be above these upper limits. The actual EDMs can be below the limits, so once they are measured they could provide new insight into the fundamental Yukawa couplings of leptons and quarks. We comment also on the role of strong CP violation. EDMs probe fundamental physics near the Planck scale.
Predicted Impacts of Proton Temperature Anisotropy on Solar Wind Turbulence
Klein, Kristopher G
2015-01-01T23:59:59.000Z
Particle velocity distributions measured in the weakly collisional solar wind are frequently found to be non-Maxwellian, but how these non-Maxwellian distributions impact the physics of plasma turbulence in the solar wind remains unanswered. Using numerical solutions of the linear dispersion relation for a collisionless plasma with a bi-Maxwellian proton velocity distribution, we present a unified framework for the four proton temperature anisotropy instabilities, identifying the associated stable eigenmodes, highlighting the unstable region of wavevector space, and presenting the properties of the growing eigenfunctions. Based on physical intuition gained from this framework, we address how the proton temperature anisotropy impacts the nonlinear dynamics of the Alfvénic fluctuations underlying the dominant cascade of energy from large to small scales and how the fluctuations driven by proton temperature anisotropy instabilities interact nonlinearly with each other and with the fluctuations of the large-scale...
Impact Burial Prediction for Mine Breaching Using IMPACT35
Chu, Peter C.
Keywords: Impact Burial; Hydrodynamics; Pseudo-Cylinder Parameterization; Four Coordinate Transform; Six Degree of Freedom (DOF) Model; IMPACT35; Inverse Method; Drag Coefficient; Lift Coefficient; NPS-MIDEX-II. Long-term goals: ... IMPACT35 for operational mine shapes such as Manta, Rockan, etc.; to implement a new technique (pseudo-cylinder parameterization)...
The impact of electricity market schemes on predictability being a decision factor in the wind farm
Paris-Sud XI, Université de
The impact of electricity market schemes on predictability being a decision factor in the wind farm ... used criterion of capacity factor on the investment phase of a wind farm and on spatial planning, it is now recognized that accurate short-term forecasts of wind farms' power output over the next few hours...
Not Available
1993-07-01T23:59:59.000Z
This document provides an analysis of the potential impacts associated with the proposed action, which is continued operation of Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) as authorized by Public Law 94-258, the Naval Petroleum Reserves Production Act of 1976 (Act). The document also provides a similar analysis of alternatives to the proposed action, which also involve continued operations, but under lower development scenarios and lower rates of production. NPR-1 is a large oil and gas field jointly owned and operated by the federal government and Chevron U.S.A. Inc. (CUSA) pursuant to a Unit Plan Contract that became effective in 1944; the government's interest is approximately 78% and CUSA's interest is approximately 22%. The government's interest is under the jurisdiction of the United States Department of Energy (DOE). The facility is approximately 17,409 acres (74 square miles), and it is located in Kern County, California, about 25 miles southwest of Bakersfield and 100 miles north of Los Angeles in the south central portion of the state. The environmental analysis presented herein is a supplement to the NPR-1 Final Environmental Impact Statement that was issued by DOE in 1979 (1979 EIS). As such, this document is a Supplemental Environmental Impact Statement (SEIS).
Not Available
1992-05-01T23:59:59.000Z
The proposed action involves the continued operation of the Naval Petroleum Reserve No. 1 (NPR-1) at the Maximum Efficient Rate (MER) through approximately the year 2025 in accordance with the requirements of the Naval Petroleum Reserves Production Act of 1976 (P.L. 94-258). NPR-1 is a large oil and gas field comprising 74 square miles. MER production primarily includes continued operation and maintenance of existing facilities; a well drilling and abandonment program; construction and operation of future gas processing, gas compression, steamflood, waterflood, cogeneration, and butane isomerization facilities; and continued implementation of a comprehensive environmental protection program. The basis for the proposed action in this draft supplemental environmental impact statement (DSEIS) is the April 1989 NPR-1 Long Range Plan, which describes a myriad of planned operational, maintenance, and development activities over the next 25-30 years. These activities include the continued operation of existing facilities; additional well drilling; expanded steamflood operations; expanded waterflood programs; expanded gas compression, gas lift, gas processing and gas injection; construction of a new cogeneration facility; construction of a new isobutane facility; and a comprehensive environmental program designed to minimize environmental impacts.
Droegemeier, Kelvin K.
Impact of CASA Radar and Oklahoma Mesonet Data Assimilation on the Analysis and Prediction of ... Center for Analysis and Prediction of Storms, and School of Meteorology, University of Oklahoma, Norman, Oklahoma; Keith Brewster, Center for Analysis and Prediction of Storms, Norman, Oklahoma; Jidong Gao, National Severe Storms Laboratory...
On the impact of power corrections in the prediction of B->K*mu+mu- observables
Sébastien Descotes-Genon; Lars Hofer; Joaquim Matias; Javier Virto
2014-09-25T23:59:59.000Z
The recent LHCb angular analysis of the exclusive decay B->K^*mu+mu- has indicated significant deviations from the Standard Model expectations. Accurate predictions can be achieved at large K*-meson recoil for an optimised set of observables designed to have no sensitivity to hadronic input in the heavy-quark limit at leading order in alpha_s. However, hadronic uncertainties reappear through non-perturbative Lambda_QCD/m_b power corrections, which must be assessed precisely. In the framework of QCD factorisation we present a systematic method to include factorisable power corrections and point out that their impact on angular observables depends on the scheme chosen to define the soft form factors. Associated uncertainties are found to be under control, contrary to earlier claims in the literature. We also discuss the impact of possible non-factorisable power corrections, including an estimate of charm-loop effects. We provide results for angular observables at large recoil for two different sets of inputs for the form factors, spelling out the different sources of theoretical uncertainties. Finally, we comment on a recent proposal to explain the anomaly in B->K*mu+mu- observables through charm-resonance effects, and we propose strategies to test this proposal identifying observables and kinematic regions where either the charm-loop model can be disentangled from New Physics effects or the two options leave different imprints.
Maguire, J.; Burch, J.
2013-08-01T23:59:59.000Z
Modeling residential water heaters with dynamic simulation models can provide accurate estimates of their annual energy consumption, if the units' characteristics and use conditions are known. Most gas storage water heaters (GSWHs) include a standing pilot light. It is generally assumed that the pilot light energy will help make up standby losses and have no impact on the predicted annual energy consumption. However, that is not always the case. The gas input rate and conversion efficiency of a pilot light for a GSWH were determined from laboratory data. The data were used in simulations of a typical GSWH with and without a pilot light, for two cases: 1) the GSWH is used alone; and 2) the GSWH is the second tank in a solar water heating (SWH) system. The sensitivity of wasted pilot light energy to annual hot water use, climate, and installation location was examined. The GSWH used alone in unconditioned space in a hot climate had a slight increase in energy consumption. The GSWH with a pilot light used as a backup to an SWH used up to 80% more auxiliary energy than one without in hot, sunny locations, from increased tank losses.
Karak, Bidya Binay [Department of Physics, Indian Institute of Science, Bangalore 560012 (India); Nandy, Dibyendu, E-mail: bidya_karak@physics.iisc.ernet.in, E-mail: dnandi@iiserkol.ac.in [Indian Institute for Science Education and Research, Kolkata, Mohampur 741252, West Bengal (India)
2012-12-10T23:59:59.000Z
Prediction of the Sun's magnetic activity is important because of its effect on space environment and climate. However, recent efforts to predict the amplitude of the solar cycle have resulted in diverging forecasts with no consensus. Yeates et al. have shown that the dynamical memory of the solar dynamo mechanism governs predictability, and this memory is different for advection- and diffusion-dominated solar convection zones. By utilizing stochastically forced, kinematic dynamo simulations, we demonstrate that the inclusion of downward turbulent pumping of magnetic flux reduces the memory of both advection- and diffusion-dominated solar dynamos to only one cycle; stronger pumping degrades this memory further. Thus, our results reconcile the diverging dynamo-model-based forecasts for the amplitude of solar cycle 24. We conclude that reliable predictions for the maximum of solar activity can be made only at the preceding minimum, allowing about five years of advance planning for space weather. For more accurate predictions, sequential data assimilation would be necessary in forecasting models to account for the Sun's short memory.
Wurstner, S.K.; Freshley, M.D.
1994-12-01T23:59:59.000Z
A ground-water flow model was used to predict water level decline in selected wells in the operating areas (100, 200, 300, and 400 Areas) and the 600 Area. To predict future water levels, the unconfined aquifer system was simulated with the two-dimensional version of a ground-water model of the Hanford Site, which is based on the Coupled Fluid, Energy, and Solute Transport (CFEST) Code in conjunction with the Geographic Information Systems (GIS) software package. The model was developed using the assumption that artificial recharge to the unconfined aquifer system from Site operations was much greater than any natural recharge from precipitation or from the basalt aquifers below. However, artificial recharge is presently decreasing and projected to decrease even more in the future. Wells currently used for monitoring at the Hanford Site are beginning to go dry or are difficult to sample, and as the water table declines over the next 5 to 10 years, a larger number of wells is expected to be impacted. The water levels predicted by the ground-water model were compared with monitoring well completion intervals to determine which wells will become dry in the future. Predictions of wells that will go dry within the next 5 years have less uncertainty than predictions for wells that will become dry within 5 to 10 years. Each prediction is an estimate based on assumed future Hanford Site operating conditions and model assumptions.
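The well-by-well comparison described above reduces to a simple screen: a well is flagged as going dry when the simulated water-table elevation falls to or below the bottom of its completion interval. A minimal sketch, with hypothetical well names and elevations rather than Hanford data:

```python
# Flag monitoring wells expected to go dry, given simulated water-table elevations.
# All elevations in meters above sea level; values are hypothetical examples.
wells = {
    # well_id: (screen_bottom_elevation, predicted_water_table_in_5yr, in_10yr)
    "699-W10-1": (121.3, 122.0, 121.1),
    "699-W15-4": (118.7, 120.5, 119.8),
    "299-E33-9": (123.9, 123.5, 122.8),
}

def dry_wells(horizon_index):
    """Return wells whose predicted water table is at or below the screen bottom."""
    return [w for w, v in wells.items() if v[1 + horizon_index] <= v[0]]

print("dry within 5 years :", dry_wells(0))
print("dry within 10 years:", dry_wells(1))
```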
de Mink, S E
2015-01-01T23:59:59.000Z
The initial mass function (IMF), binary fraction and distributions of binary parameters (mass ratios, separations and eccentricities) are indispensable input for simulations of stellar populations. It is often claimed that these are poorly constrained, significantly affecting evolutionary predictions. Recently, dedicated observing campaigns provided new constraints on the initial conditions for massive stars. Findings include a larger close binary fraction and a stronger preference for very tight systems. We investigate the impact on the predicted merger rates of neutron stars and black holes. Despite the changes with previous assumptions, we only find an increase of less than a factor 2 (insignificant compared with evolutionary uncertainties of typically a factor 10-100). We further show that the uncertainties in the new initial binary properties do not significantly affect (within a factor of 2) our predictions of double compact object merger rates. An exception is the uncertainty in IMF (variations by a factor...
Predicted Impact of Idling Reduction Options for Heavy-Duty Diesel...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Impact of Idling Reduction Options for Heavy-Duty Diesel Trucks: A Comparison of Full-Fuel-Cycle Emissions, Energy Use, and Proximity to Urban Populations in Five States...
Impact of vegetation properties on U.S. summer weather prediction
Xue, Y; Fennessy, M; Sellers, P
2015-01-01T23:59:59.000Z
MELE: Maximum Entropy Leuven Estimators
Paris, Quirino
2001-01-01T23:59:59.000Z
Maximum Parsimony and Maximum Likelihood Methods Comparisons and Bootstrap Tests
Qiu, Weigang
[Lecture slides] Outline: 1. Maximum Parsimony and Maximum Likelihood; 2. Methods Comparisons and Bootstrap Tests; 3. Character Reconstruction; PHYLIP and T-REX exercises.
Ritchie, L.T.; Brown, W.D.; Wayland, J.R.
1980-05-01T23:59:59.000Z
A general temperate latitude cyclonic rainstorm model is presented which describes the effects of washout and runoff on consequences of atmospheric releases of radioactive material from potential nuclear reactor accidents. The model treats the temporal and spatial variability of precipitation processes. Predicted air and ground concentrations of radioactive material and resultant health consequences for the new model are compared to those of the original WASH-1400 model under invariant meteorological conditions and for realistic weather events using observed meteorological sequences. For a specific accident under a particular set of meteorological conditions, the new model can give significantly different results from those predicted by the WASH-1400 model, but the aggregate consequences produced for a large number of meteorological conditions are similar.
Maximum Entropy Correlated Equilibria
Ortiz, Luis E.
2006-03-20T23:59:59.000Z
We study maximum entropy correlated equilibria in (multi-player) games and provide two gradient-based algorithms that are guaranteed to converge to such equilibria. Although we do not provide convergence rates for these ...
Balashov, Victor N.; Guthrie, George D.; Hakala, J. Alexandra; Lopano, Christina L. J.; Rimstidt, Donald; Brantley, Susan L.
2013-03-01T23:59:59.000Z
One idea for mitigating the increase in fossil-fuel generated CO2 in the atmosphere is to inject CO2 into subsurface saline sandstone reservoirs. To decide whether to try such sequestration at a globally significant scale will require the ability to predict the fate of injected CO2. Thus, models are needed to predict the rates and extents of subsurface rock-water-gas interactions. Several reactive transport models for CO2 sequestration created in the last decade predicted sequestration in sandstone reservoirs of ~17 to ~90 kg CO2 per m^3. To build confidence in such models, a baseline problem including rock + water chemistry is proposed as the basis for future modeling so that both the models and the parameterizations can be compared systematically. In addition, a reactive diffusion model is used to investigate the fate of injected supercritical CO2 fluid in the proposed baseline reservoir + brine system. In the baseline problem, injected CO2 is redistributed from the supercritical (SC) free phase by dissolution into pore brine and by formation of carbonates in the sandstone. The numerical transport model incorporates a full kinetic description of mineral-water reactions under the assumption that transport is by diffusion only. Sensitivity tests were also run to understand which mineral kinetics reactions are important for CO2 trapping. The diffusion transport model shows that for the first ~20 years after CO2 diffusion initiates, CO2 is mostly consumed by dissolution into the brine to form CO2(aq) (solubility trapping). From 20-200 years, both solubility and mineral trapping are important as calcite precipitation is driven by dissolution of oligoclase. From 200 to 1000 years, mineral trapping is the most important sequestration mechanism, as smectite dissolves and calcite precipitates. Beyond 2000 years, most trapping is due to formation of aqueous HCO3^-. Ninety-seven percent of the maximum CO2 sequestration, 34.5 kg CO2 per m^3 of sandstone, is attained by 4000 years even though the system does not achieve chemical equilibrium until ~25,000 years. This maximum represents about 20% CO2 dissolved as CO2(aq), 50% dissolved as HCO3^-(aq), and 30% precipitated as calcite. The extent of sequestration as HCO3^- at equilibrium can be calculated from equilibrium thermodynamics and is roughly equivalent to the amount of Na+ in the initial sandstone in a soluble mineral (here, oligoclase). Similarly, the extent of trapping in calcite is determined by the amount of Ca2+ in the initial oligoclase and smectite. Sensitivity analyses show that the rate of CO2 sequestration is sensitive to the mineral-water reaction kinetic constants between approximately 10 and 4000 years. The sensitivity of CO2 sequestration to the rate constants decreases in magnitude respectively from oligoclase to albite to smectite.
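The reported end state can be turned into a quick mass and mole balance. The split below is taken directly from the abstract and the molar mass is a standard value; this is only a back-of-envelope check, not part of the authors' model:

```python
# Back-of-envelope partitioning of the sequestered CO2 reported in the abstract.
total_kg_per_m3 = 34.5                  # kg CO2 sequestered per m^3 of sandstone
fractions = {"CO2(aq)": 0.20, "HCO3-": 0.50, "calcite": 0.30}
M_CO2 = 0.04401                         # kg/mol

for phase, f in fractions.items():
    kg = f * total_kg_per_m3
    mol = kg / M_CO2                    # moles of CO2 equivalent stored in this phase
    print(f"{phase:8s}: {kg:5.2f} kg CO2/m^3  (~{mol:6.1f} mol CO2/m^3)")
```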
Renfrew, Ian
The impact of Greenland on the predictability of European weather systems. Supervisors: Sue Gray (U...). The mid-to-high latitude of Greenland means it has a major influence on the atmospheric circulation of the North Atlantic ... by the presence of Greenland, as is the atmosphere well downstream, for example over the British Isles.
Maximum likelihood
McCullagh, Peter
[Presentation slides] Maximum likelihood: applications and examples; REML and residual likelihood. Peter McCullagh. Slide titles include "JAN: Some personal remarks" and an outline: 1. Maximum likelihood, REML ...
Maximum entropy principle for transportation
Bilich, F. [University of Brasilia (Brazil); Da Silva, R. [National Research Council (Brazil)
2008-11-06T23:59:59.000Z
In this work we deal with modeling of the transportation phenomenon for use in the transportation planning process and policy-impact studies. The model developed is based on the dependence concept, i.e., the notion that the probability of a trip starting at origin i is dependent on the probability of a trip ending at destination j given that the factors (such as travel time, cost, etc.) which affect travel between origin i and destination j assume some specific values. The derivation of the solution of the model employs the maximum entropy principle combining a priori multinomial distribution with a trip utility concept. This model is utilized to forecast trip distributions under a variety of policy changes and scenarios. The dependence coefficients are obtained from a regression equation where the functional form is derived based on conditional probability and perception of factors from experimental psychology. The dependence coefficients encode all the information that was previously encoded in the form of constraints. In addition, the dependence coefficients encode information that cannot be expressed in the form of constraints for practical reasons, namely, computational tractability. The equivalence between the standard formulation (i.e., objective function with constraints) and the dependence formulation (i.e., without constraints) is demonstrated. The parameters of the dependence-based trip-distribution model are estimated, and the model is also validated using commercial air travel data in the U.S. In addition, policy impact analyses (such as allowance of supersonic flights inside the U.S. and user surcharge at noise-impacted airports) on air travel are performed.
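For context, the standard entropy-maximizing trip-distribution model that the abstract's dependence formulation replaces is the doubly-constrained gravity form, usually solved by iterative proportional fitting. A minimal sketch with made-up zone totals and costs (the paper's dependence coefficients are not implemented here):

```python
import numpy as np

# Doubly-constrained entropy-maximizing trip distribution:
# T_ij = A_i * O_i * B_j * D_j * exp(-beta * c_ij), solved by iterative
# proportional fitting (Furness balancing). Illustrative data only.
O = np.array([400.0, 300.0, 300.0])        # trips produced at each origin
D = np.array([500.0, 250.0, 250.0])        # trips attracted to each destination
c = np.array([[1.0, 2.0, 3.0],
              [2.0, 1.0, 2.0],
              [3.0, 2.0, 1.0]])            # generalized travel cost between zones
beta = 0.6                                 # cost-sensitivity parameter

T = np.exp(-beta * c)                      # seed matrix from the deterrence function
for _ in range(100):
    T *= (O / T.sum(axis=1))[:, None]      # scale rows to match origin totals
    T *= (D / T.sum(axis=0))[None, :]      # scale columns to match destination totals

print(np.round(T, 1))
print("row sums:", np.round(T.sum(axis=1), 1), "col sums:", np.round(T.sum(axis=0), 1))
```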
Cell development obeys maximum Fisher information
B. R. Frieden; R. A. Gatenby
2014-04-29T23:59:59.000Z
Eukaryotic cell development has been optimized by natural selection to obey maximal intracellular flux of messenger proteins. This, in turn, implies maximum Fisher information on angular position about a target nuclear pore complex (NPC). The cell is simply modeled as spherical, with cell membrane (CM) diameter 10 micron and concentric nuclear membrane (NM) diameter 6 micron. The NM contains about 3000 nuclear pore complexes (NPCs). Development requires messenger ligands to travel from the CM through the NPC to DNA target binding sites. Ligands acquire negative charge by phosphorylation, passing through the cytoplasm over Newtonian trajectories toward positively charged NPCs (utilizing positive nuclear localization sequences). The CM-NPC channel obeys maximized mean protein flux F and Fisher information I at the NPC, with first-order delta I = 0 and approximate 2nd-order delta I = 0 stability to environmental perturbations. Many of its predictions are confirmed, including the dominance of protein pathways of from 1-4 proteins, a 4 nm size for the EGFR protein and the approximate flux value F = 10^16 proteins/m^2-s. After entering the nucleus, each protein ultimately delivers its ligand information to a DNA target site with maximum probability, i.e. maximum Kullback-Leibler entropy H_KL. In a smoothness limit H_KL approaches I_DNA/2, so that the total CM-NPC-DNA channel obeys maximum Fisher I. Thus maximum information approaches non-equilibrium, one condition for life.
Wang, Chien; Prinn, Ronald G.
The possible trends for atmospheric carbon monoxide in the next 100 yr have been illustrated using a coupled atmospheric chemistry and climate model driven by emissions predicted by a global economic development model. ...
Achieve maximum application availability and data protection
Bernstein, Phil
Highlights: Achieve maximum application availability and data protection using SQL Server AlwaysOn and other high availability features; reduce planned downtime significantly with SQL Server on Windows; ... management of high availability and disaster recovery using integrated tools.
Wang, J.; Claridge, D. E.
1998-01-01T23:59:59.000Z
The modified cooling regression model reduces the annual prediction error to 0.6% from -6.1%. The modified heating regression model reduces the annual prediction error to 4.1% from 5.7%. The regression models are (with (x)^+ denoting the positive part max(x, 0)): Ec = 6.6569 + 0.1875(67.044 - Tdb)^+ + 0.6756(Tdb - 67.044)^+; Eh = 0.9091 - 0.3662(67.044 - Tdb)^+ - 0.0462(Tdb - 67.044)^+; Ec = 5.8505 + 0.1736(67.708 - Tdb)^+ + 0.6794(Tdb - 67.708)^+; Eh = 0.9718 - 0.341(67.044 - Tdb)^+ - 0.0458(Tdb - 67.044)^+. CONCLUSIONS: The results of the four cases studied indicate that when the AHUs operate 24 hours per day, the annual prediction error of the regular cooling...
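As a companion to the regression models quoted above, the sketch below simply evaluates the four-parameter change-point form for the modified cooling model; the coefficients are copied from the text, and reading the garbled superscripts as the positive part max(x, 0) is an assumption of this reconstruction:

```python
def change_point(tdb, base, slope_low, slope_high, balance_temp):
    """Four-parameter change-point model: base load plus one slope below and one above the balance temperature."""
    pos = lambda v: max(v, 0.0)
    return base + slope_low * pos(balance_temp - tdb) + slope_high * pos(tdb - balance_temp)

# Modified cooling regression as quoted above (coefficients copied from the text).
def cooling_energy(tdb_f):
    return change_point(tdb_f, 6.6569, 0.1875, 0.6756, 67.044)

for t in (50.0, 67.0, 85.0):
    print(f"Tdb = {t:5.1f} F  ->  Ec = {cooling_energy(t):.3f}")
```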
Restricted maximum likelihood estimation of covariances in sparse linear models
Paris-Sud XI, Université de
Original article: Restricted maximum likelihood estimation of covariances in sparse linear models. ... on the simplex algorithm of Nelder and Mead [40]. Kovac [29] made modifications that turned it into a stable...
Bhoite, Sameer Prabhakarrao
2005-02-17T23:59:59.000Z
networks not only depend on the wireless losses, but also on the network congestion in the wired Internet. Delay variations (delay jitter) and packet drops are the most important parameters impacting QoS for time-sensitive real-time applications over the wired ... The time taken by a packet, once it leaves the source application layer, is called the 'delay' of the packet. Variation in the delay between consecutive packets is called 'delay jitter'. Various types of delays and the causes of delay variation and packet drops are discussed...
Hu, Zhi; Huang, Ge; Sadanandam, Anguraj; Gu, Shenda; Lenburg, Marc E; Pai, Melody; Bayani, Nora; Blakely, Eleanor A; Gray, Joe W; Mao, Jian-Hua
2010-06-25T23:59:59.000Z
Introduction: HJURP (Holliday Junction Recognition Protein) is a newly discovered gene reported to function at centromeres and to interact with CENPA. However, its role in tumor development remains largely unknown. The goal of this study was to investigate the clinical significance of HJURP in breast cancer and its correlation with radiotherapeutic outcome. Methods: We measured HJURP expression level in human breast cancer cell lines and primary breast cancers by Western blot and/or by Affymetrix Microarray, and determined its associations with clinical variables using standard statistical methods. Validation was performed with the use of published microarray data. We assessed cell growth and apoptosis of breast cancer cells after radiation using high-content image analysis. Results: HJURP was expressed at higher level in breast cancer than in normal breast tissue. HJURP mRNA levels were significantly associated with estrogen receptor (ER), progesterone receptor (PR), Scarff-Bloom-Richardson (SBR) grade, age and Ki67 proliferation indices, but not with pathologic stage, ERBB2, tumor size, or lymph node status. Higher HJURP mRNA levels significantly decreased disease-free and overall survival. HJURP mRNA levels predicted the prognosis better than Ki67 proliferation indices. In a multivariate Cox proportional-hazard regression, including clinical variables as covariates, HJURP mRNA levels remained an independent prognostic factor for disease-free and overall survival. In addition, HJURP mRNA levels were an independent prognostic factor over molecular subtypes (normal-like, luminal, ERBB2 and basal). Poor clinical outcomes among patients with high HJURP expression were validated in five additional breast cancer cohorts. Furthermore, the patients with high HJURP levels were much more sensitive to radiotherapy. In vitro studies in breast cancer cell lines showed that cells with high HJURP levels were more sensitive to radiation treatment and had a higher rate of apoptosis than those with low levels. Knock down of HJURP in human breast cancer cells using shRNA reduced the sensitivity to radiation treatment. HJURP mRNA levels were significantly correlated with CENPA mRNA levels. Conclusions: HJURP mRNA level is a prognostic factor for disease-free and overall survival in patients with breast cancer and is a predictive biomarker for sensitivity to radiotherapy.
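A sketch of the kind of multivariate Cox proportional-hazards fit described above, assuming the lifelines Python package and a fabricated toy cohort; the column names and effect sizes are invented, not the study's data:

```python
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

# Fabricated cohort: follow-up time (months), event indicator (1 = death/relapse),
# HJURP expression (log2), and two clinical covariates.
rng = np.random.default_rng(3)
n = 300
hjurp = rng.normal(0.0, 1.0, n)
age = rng.normal(58.0, 10.0, n)
grade = rng.integers(1, 4, n)
hazard = np.exp(0.6 * hjurp + 0.02 * (age - 58) + 0.3 * (grade - 2))
time = rng.exponential(60.0 / hazard)
event = (time < 120).astype(int)              # administrative censoring at 120 months
df = pd.DataFrame({"time": np.minimum(time, 120), "event": event,
                   "HJURP": hjurp, "age": age, "grade": grade})

cph = CoxPHFitter()
cph.fit(df, duration_col="time", event_col="event")
cph.print_summary()                           # hazard ratio for HJURP adjusted for age and grade
```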
Maximum likelihood estimation for cooperative sequential adsorption
Burton, Geoffrey R.
Maximum likelihood estimation for cooperative sequential adsorption. Mathew D. Penrose and Vadim ... M.D. Penrose, Department of ... the region. Keywords: cooperative sequential adsorption, space-time point process, maximum likelihood.
Estimating a mixed strategy employing maximum entropy
Golan, Amos; Karp, Larry; Perloff, Jeffrey M.
1996-01-01T23:59:59.000Z
ESTIMATING A MIXED STRATEGY EMPLOYING MAXIMUM ENTROPY, by Amos Golan, Larry S. Karp, and Jeffrey M. Perloff. Abstract: Generalized maximum entropy may be used to estimate...
Boiler Maximum Achievable Control Technology (MACT) Technical Assistance
Boiler Maximum Achievable Control Technology (MACT) Technical Assistance - Fact Sheet, April 2015
The Principle of Maximum Conformality
Brodsky, Stanley J.; /SLAC; Di Giustino, Leonardo; /SLAC
2011-04-05T23:59:59.000Z
A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale of the running coupling alpha_s(mu^2). It is common practice to guess a physical scale mu = Q which is of order of a typical momentum transfer Q in the process, and then vary the scale over a range Q/2 and 2Q. This procedure is clearly problematic since the resulting fixed-order pQCD prediction will depend on the renormalization scheme, and it can even predict negative QCD cross sections at next-to-leading order. Other heuristic methods to set the renormalization scale, such as the 'principle of minimal sensitivity', give unphysical results for jet physics, sum physics into the running coupling not associated with renormalization, and violate the transitivity property of the renormalization group. Such scale-setting methods also give incorrect results when applied to Abelian QED. Note that the factorization scale in QCD is introduced to match nonperturbative and perturbative aspects of the parton distributions in hadrons; it is present even in conformal theory and thus is a completely separate issue from renormalization scale setting. The PMC provides a consistent method for determining the renormalization scale in pQCD. The PMC scale-fixed prediction is independent of the choice of renormalization scheme, a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale-setting in the Abelian limit. The PMC global scale can be derived efficiently at NLO from basic properties of the pQCD cross section. The elimination of the renormalization scheme ambiguity using the PMC will not only increase the precision of QCD tests, but it will also increase the sensitivity of colliders to new physics beyond the Standard Model.
EERE Takes Important Steps to Ensure Maximum Impact of Technology...
... storage from 1976 to 2012 and is ranked first in patent citations among the top-ten companies. Another study of the Solar Energy Technologies Program completed in...
Reducing Degeneracy in Maximum Entropy Models of Networks
Horvát, Szabolcs; Toroczkai, Zoltán
2014-01-01T23:59:59.000Z
Based on Jaynes's maximum entropy principle, exponential random graphs provide a family of principled models that allow the prediction of network properties as constrained by empirical data. However, their use is often hindered by the degeneracy problem characterized by spontaneous symmetry-breaking, where predictions simply fail. Here we show that degeneracy appears when the corresponding density of states function is not log-concave. We propose a solution to the degeneracy problem for a large class of models by exploiting the nonlinear relationships between the constrained measures to convexify the domain of the density of states. We demonstrate the effectiveness of the method on examples, including on Zachary's karate club network data.
Some interesting consequences of the maximum entropy production principle
Martyushev, L. M. [Russian Academy of Sciences, Institute of Industrial Ecology, Ural Division (Russian Federation)], E-mail: mlm@ecko.uran.ru
2007-04-15T23:59:59.000Z
Two nonequilibrium phase transitions (morphological and hydrodynamic) are analyzed by applying the maximum entropy production principle. Quantitative analysis is for the first time compared with experiment. Nonequilibrium crystallization of ice and laminar-turbulent flow transition in a circular pipe are examined as examples of morphological and hydrodynamic transitions, respectively. For the latter transition, a minimum critical Reynolds number of 1200 is predicted. A discussion of this important and interesting result is presented.
Ranade, Abhiram G.
An Improved Maximum Likelihood Formulation for Accurate Genome Assembly. Aditya Varma, Abhiram Ranade. ... maximum likelihood method for genome assembly. We formulate the problem as one of direct convex optimization ... estimate of the length of the genome or the need to use further expectation minimization to predict...
Olson, Jessica J.
2011-01-01T23:59:59.000Z
... this study. Changes in hydrology are not the only potential ... A Tidal Hydrology Assessment for Reconnecting Spring Branch ... may change the tidal hydrology and impact the area occupied...
S.P. Rupp
2005-10-01T23:59:59.000Z
In May 2000, the Cerro Grande Fire burned approximately 17,200 ha in north-central New Mexico as the result of an escaped prescribed burn initiated by Bandelier National Monument. The interaction of large-scale fires, vegetation, and elk is an important management issue, but few studies have addressed the ecological implications of vegetative succession and landscape heterogeneity on ungulate populations following large-scale disturbance events. Primary objectives of this research were to identify elk movement pathways on local and landscape scales, to determine environmental factors that influence elk movement, and to evaluate movement and distribution patterns in relation to spatial and temporal aspects of the Cerro Grande Fire. Data collection and assimilation reflect the collaborative efforts of National Park Service, U.S. Forest Service, and Department of Energy (Los Alamos National Laboratory) personnel. Geographic positioning system (GPS) collars were used to track 54 elk over a period of 3+ years and locational data were incorporated into a multi-layered geographic information system (GIS) for analysis. Preliminary tests of GPS collar accuracy indicated a strong effect of 2D fixes on position acquisition rates (PARs) depending on time of day and season of year. Slope, aspect, elevation, and land cover type affected dilution of precision (DOP) values for both 2D and 3D fixes, although significant relationships varied from positive to negative making it difficult to delineate the mechanism behind significant responses. Two-dimensional fixes accounted for 34% of all successfully acquired locations and may affect results in which those data were used. Overall position acquisition rate was 93.3% and mean DOP values were consistently in the range of 4.0 to 6.0 leading to the conclusion collar accuracy was acceptable for modeling purposes. SAVANNA, a spatially explicit, process-oriented ecosystem model, was used to simulate successional dynamics. Inputs to the SAVANNA included a land cover map, long-term weather data, soil maps, and a digital elevation model. Parameterization and calibration were conducted using field plots. Model predictions of herbaceous biomass production and weather were consistent with available data and spatial interpolations of snow were considered reasonable for this study. Dynamic outputs generated by SAVANNA were integrated with static variables, movement rules, and parameters developed for the individual-based model through the application of a habitat suitability index. Model validation indicated reasonable model fit when compared to an independent test set. The finished model was applied to 2 realistic management scenarios for the Jemez Mountains and management implications were discussed. Ongoing validation of the individual-based model presented in this dissertation provides an adaptive management tool that integrates interdisciplinary experience and scientific information, which allows users to make predictions about the impact of alternative management policies.
EIS-0012: Final Environmental Impact Statement | Department of...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
EIS-0012: Final Environmental Impact Statement. Petroleum Production at Maximum Efficient Rate, Naval Petroleum Reserve No. 1 (Elk Hills), ...
Optimization Online - Efficient Heuristic Algorithms for Maximum ...
T. G. J. Myklebust
2012-11-19T23:59:59.000Z
Nov 19, 2012 ... Efficient Heuristic Algorithms for Maximum Utility Product Pricing Problems. T. G. J. Myklebust(tmyklebu ***at*** csclub.uwaterloo.ca)
Impact of graphene polycrystallinity on the performance of graphene field-effect transistors
Jiménez, David; Chaves, Ferney [Departament d'Enginyeria Electrònica, Escola d'Enginyeria, Universitat Autònoma de Barcelona, 08193-Bellaterra (Spain); Cummings, Aron W.; Van Tuan, Dinh [ICN2, Institut Català de Nanociencia i Nanotecnologia, Campus UAB, 08193 Bellaterra (Barcelona) (Spain); Kotakoski, Jani [Faculty of Physics, University of Vienna, Boltzmanngasse 5, 1090 Wien (Austria); Department of Physics, University of Helsinki, P.O. Box 43, 00014 University of Helsinki (Finland); Roche, Stephan [ICN2, Institut Català de Nanociencia i Nanotecnologia, Campus UAB, 08193 Bellaterra (Barcelona) (Spain); ICREA, Institució Catalana de Recerca i Estudis Avançats, 08070 Barcelona (Spain)
2014-01-27T23:59:59.000Z
We have used a multi-scale physics-based model to predict how the grain size and different grain boundary morphologies of polycrystalline graphene will impact the performance metrics of graphene field-effect transistors. We show that polycrystallinity has a negative impact on the transconductance, which translates to a severe degradation of the maximum and cutoff frequencies. On the other hand, polycrystallinity has a positive impact on current saturation, and a negligible effect on the intrinsic gain. These results reveal the complex role played by graphene grain boundaries and can be used to guide the further development and optimization of graphene-based electronic devices.
Maximum stellar mass versus cluster membership number revisited
Th. Maschberger; C. J. Clarke
2008-09-05T23:59:59.000Z
We have made a new compilation of observations of maximum stellar mass versus cluster membership number from the literature, which we analyse for consistency with the predictions of a simple random drawing hypothesis for stellar mass selection in clusters. Previously, Weidner and Kroupa have suggested that the maximum stellar mass is lower, in low mass clusters, than would be expected on the basis of random drawing, and have pointed out that this could have important implications for steepening the integrated initial mass function of the Galaxy (the IGIMF) at high masses. Our compilation demonstrates how the observed distribution in the plane of maximum stellar mass versus membership number is affected by the method of target selection; in particular, rather low n clusters with large maximum stellar masses are abundant in observational datasets that specifically seek clusters in the environs of high mass stars. Although we do not consider our compilation to be either complete or unbiased, we discuss the method by which such data should be statistically analysed. Our very provisional conclusion is that the data is not indicating any striking deviation from the expectations of random drawing.
Maximum total organic carbon limit for DWPF melter feed
Choi, A.S.
1995-03-13T23:59:59.000Z
DWPF recently decided to control the potential flammability of melter off-gas by limiting the total carbon content in the melter feed and maintaining adequate conditions for combustion in the melter plenum. With this new strategy, all the LFL analyzers and associated interlocks and alarms were removed from both the primary and backup melter off-gas systems. Subsequently, D. Iverson of DWPF-T&E requested that SRTC determine the maximum allowable total organic carbon (TOC) content in the melter feed which can be implemented as part of the Process Requirements for melter feed preparation (PR-S04). The maximum TOC limit thus determined in this study was about 24,000 ppm on an aqueous slurry basis. At the TOC levels below this, the peak concentration of combustible components in the quenched off-gas will not exceed 60 percent of the LFL during off-gas surges of magnitudes up to three times nominal, provided that the melter plenum temperature and the air purge rate to the BUFC are monitored and controlled above 650 degrees C and 220 lb/hr, respectively. Appropriate interlocks should discontinue the feeding when one or both of these conditions are not met. Both the magnitude and duration of an off-gas surge have a major impact on the maximum TOC limit, since they directly affect the melter plenum temperature and combustion. Although the data obtained during recent DWPF melter startup tests showed that the peak magnitude of a surge can be greater than three times nominal, the observed duration was considerably shorter, on the order of several seconds. The long surge duration assumed in this study has a greater impact on the plenum temperature than the peak magnitude, thus making the maximum TOC estimate conservative. Two models were used to make the necessary calculations to determine the TOC limit.
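For orientation only, a minimal sketch of the interlock logic summarized above: the thresholds are the ones quoted in the abstract, while the function and variable names are hypothetical.

```python
# Minimal sketch of the melter-feed interlock logic described above.
# Thresholds come from the abstract; names are illustrative, not from DWPF documents.

def feeding_permitted(toc_ppm: float, plenum_temp_c: float, purge_rate_lb_hr: float) -> bool:
    """Return True only if all three melter-feed conditions are satisfied."""
    toc_ok = toc_ppm <= 24_000          # maximum TOC on an aqueous slurry basis
    temp_ok = plenum_temp_c >= 650      # plenum temperature controlled above 650 degrees C
    purge_ok = purge_rate_lb_hr >= 220  # air purge rate to the BUFC above 220 lb/hr
    return toc_ok and temp_ok and purge_ok

if __name__ == "__main__":
    print(feeding_permitted(toc_ppm=20_000, plenum_temp_c=700, purge_rate_lb_hr=250))  # True
    print(feeding_permitted(toc_ppm=20_000, plenum_temp_c=600, purge_rate_lb_hr=250))  # False
```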
Maximum entropy segmentation of broadcast news
Christensen, Heidi; Kolluru, BalaKrishna; Gotoh, Yoshihiko; Renals, Steve
2005-01-01T23:59:59.000Z
speech recognizer and subsequently segmenting the text into utterances and topics. A maximum entropy approach is used to build statistical models for both utterance and topic segmentation. The experimental work addresses the effect on performance...
... and agriculture increases, water supply decreases (ProClim and OcCC, 2007) as climate change alters the hydrologic ... of the economic impact of climate change and different adaptation strategies in the water sector is essential ... in Switzerland, mandated by the Federal Office for the Environment (FOEN). 4) Climate change and water resources ...
Yu Lifeng; Leng Shuai; Chen Lingyun; Kofler, James M.; McCollough, Cynthia H. [Department of Radiology, Mayo Clinic, Rochester, Minnesota 55905 (United States); Carter, Rickey E. [Division of Biomedical Statistics and Informatics, Mayo Clinic, Rochester, Minnesota 55905 (United States)
2013-04-15T23:59:59.000Z
Purpose: Efficient optimization of CT protocols demands a quantitative approach to predicting human observer performance on specific tasks at various scan and reconstruction settings. The goal of this work was to investigate how well a channelized Hotelling observer (CHO) can predict human observer performance on 2-alternative forced choice (2AFC) lesion-detection tasks at various dose levels and two different reconstruction algorithms: a filtered-backprojection (FBP) and an iterative reconstruction (IR) method. Methods: A 35 × 26 cm² torso-shaped phantom filled with water was used to simulate an average-sized patient. Three rods with different diameters (small: 3 mm; medium: 5 mm; large: 9 mm) were placed in the center region of the phantom to simulate small, medium, and large lesions. The contrast relative to background was -15 HU at 120 kV. The phantom was scanned 100 times using automatic exposure control each at 60, 120, 240, 360, and 480 quality reference mAs on a 128-slice scanner. After removing the three rods, the water phantom was again scanned 100 times to provide signal-absent background images at the exact same locations. By extracting regions of interest around the three rods and on the signal-absent images, the authors generated 21 2AFC studies. Each 2AFC study had 100 trials, with each trial consisting of a signal-present image and a signal-absent image side-by-side in randomized order. In total, 2100 trials were presented to both the model and human observers. Four medical physicists acted as human observers. For the model observer, the authors used a CHO with Gabor channels, which involves six channel passbands, five orientations, and two phases, leading to a total of 60 channels. The performance predicted by the CHO was compared with that obtained by four medical physicists at each 2AFC study. Results: The human and model observers were highly correlated at each dose level for each lesion size for both FBP and IR. The Pearson's product-moment correlation coefficients were 0.986 [95% confidence interval (CI): 0.958-0.996] for FBP and 0.985 (95% CI: 0.863-0.998) for IR. Bland-Altman plots showed excellent agreement for all dose levels and lesion sizes with a mean absolute difference of 1.0% ± 1.1% for FBP and 2.1% ± 3.3% for IR. Conclusions: Human observer performance on a 2AFC lesion detection task in CT with a uniform background can be accurately predicted by a CHO model observer at different radiation dose levels and for both FBP and IR methods.
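As a rough illustration of the kind of model observer described above (not the authors' implementation: the channel matrix here is random rather than the 60 Gabor channels, and all data are synthetic), a channelized Hotelling observer detectability index can be computed as follows.

```python
import numpy as np

def cho_detectability(signal_present, signal_absent, channels):
    """Channelized Hotelling observer detectability index d'.

    signal_present, signal_absent: (n_images, n_pixels) arrays of ROI pixel data.
    channels: (n_pixels, n_channels) matrix of channel templates (e.g. Gabor channels).
    """
    v_sp = signal_present @ channels          # channel outputs
    v_sa = signal_absent @ channels
    s_bar = v_sp.mean(axis=0) - v_sa.mean(axis=0)
    K = 0.5 * (np.cov(v_sp, rowvar=False) + np.cov(v_sa, rowvar=False))
    w = np.linalg.solve(K, s_bar)             # Hotelling template in channel space
    t_sp, t_sa = v_sp @ w, v_sa @ w           # decision variables
    return (t_sp.mean() - t_sa.mean()) / np.sqrt(0.5 * (t_sp.var(ddof=1) + t_sa.var(ddof=1)))

# Tiny synthetic demo: random "channels" stand in for the 60 Gabor channels.
rng = np.random.default_rng(0)
n_pix, n_ch = 32 * 32, 60
channels = rng.normal(size=(n_pix, n_ch))
lesion = np.zeros(n_pix)
lesion[:50] = 0.3                             # crude stand-in for a low-contrast rod
signal_present = rng.normal(size=(100, n_pix)) + lesion
signal_absent = rng.normal(size=(100, n_pix))
# In a 2AFC task, percent correct relates to d' roughly as Pc = Phi(d'/sqrt(2)).
print(cho_detectability(signal_present, signal_absent, channels))
```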
Maximum Likelihood Haplotyping for General Pedigrees
Friedman, Nir
... networks. The use of Bayesian networks enables efficient maximum likelihood haplotyping for more complex ... for the variables of the Bayesian network. The presented optimization algorithm also improves likelihood ... Analysis, Pedigree, superlink. Abstract: Haplotype data is valuable in mapping disease-susceptibility genes ...
Weak Scale From the Maximum Entropy Principle
Yuta Hamada; Hikaru Kawai; Kiyoharu Kawana
2014-09-23T23:59:59.000Z
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by
\begin{equation}
v_{h} \sim \frac{T_{BBN}^{2}}{M_{pl}\, y_{e}^{5}},
\end{equation}
...
Weak Scale From the Maximum Entropy Principle
Hamada, Yuta; Kawana, Kiyoharu
2014-01-01T23:59:59.000Z
The theory of multiverse and wormholes suggests that the parameters of the Standard Model are fixed in such a way that the radiation of the $S^{3}$ universe at the final stage $S_{rad}$ becomes maximum, which we call the maximum entropy principle. Although it is difficult to confirm this principle generally, for a few parameters of the Standard Model, we can check whether $S_{rad}$ actually becomes maximum at the observed values. In this paper, we regard $S_{rad}$ at the final stage as a function of the weak scale (the Higgs expectation value) $v_{h}$, and show that it becomes maximum around $v_{h}={\cal O}(300\,\text{GeV})$ when the dimensionless couplings in the Standard Model, that is, the Higgs self coupling, the gauge couplings, and the Yukawa couplings are fixed. Roughly speaking, we find that the weak scale is given by
\begin{equation}
v_{h} \sim \frac{T_{BBN}^{2}}{M_{pl}\, y_{e}^{5}},
\end{equation}
...
Integrating Correlated Bayesian Networks Using Maximum Entropy
Jarman, Kenneth D.; Whitney, Paul D.
2011-08-30T23:59:59.000Z
We consider the problem of generating a joint distribution for a pair of Bayesian networks that preserves the multivariate marginal distribution of each network and satisfies prescribed correlation between pairs of nodes taken from both networks. We derive the maximum entropy distribution for any pair of multivariate random vectors and prescribed correlations and demonstrate numerical results for an example integration of Bayesian networks.
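A toy numerical instance of the same idea, with two scalar discrete variables standing in for the two networks (the supports and moment values are made up for illustration; this is not the authors' derivation), can be solved directly by constrained entropy maximization.

```python
import numpy as np
from scipy.optimize import minimize

# Toy instance: two discrete variables with prescribed marginals and cross moment.
x_vals = np.array([0.0, 1.0, 2.0])
y_vals = np.array([0.0, 1.0])
px = np.array([0.5, 0.3, 0.2])        # prescribed marginal of X
py = np.array([0.6, 0.4])             # prescribed marginal of Y
target_exy = 0.5                       # prescribed cross moment E[XY]

shape = (len(x_vals), len(y_vals))

def neg_entropy(p):
    p = np.clip(p, 1e-12, None)
    return float(np.sum(p * np.log(p)))

cons = []
for i in range(len(x_vals)):           # marginal of X (also enforces normalization)
    cons.append({'type': 'eq', 'fun': lambda p, i=i: p.reshape(shape)[i, :].sum() - px[i]})
for j in range(len(y_vals) - 1):       # marginal of Y (last entry is implied)
    cons.append({'type': 'eq', 'fun': lambda p, j=j: p.reshape(shape)[:, j].sum() - py[j]})
cons.append({'type': 'eq',
             'fun': lambda p: float(np.sum(p.reshape(shape) * np.outer(x_vals, y_vals))) - target_exy})

p0 = np.full(shape[0] * shape[1], 1.0 / (shape[0] * shape[1]))
res = minimize(neg_entropy, p0, constraints=cons, bounds=[(0.0, 1.0)] * p0.size)
print(res.x.reshape(shape))            # maximum entropy joint with the prescribed moments
```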
QCD Level Density from Maximum Entropy Method
Shinji Ejiri; Tetsuo Hatsuda
2005-09-24T23:59:59.000Z
We propose a method to calculate the QCD level density directly from the thermodynamic quantities obtained by lattice QCD simulations with the use of the maximum entropy method (MEM). Understanding QCD thermodynamics from QCD spectral properties has its own importance. Also it has a close connection to phenomenological analyses of the lattice data as well as experimental data on the basis of hadronic resonances. Our feasibility study shows that the MEM can provide a useful tool to study QCD level density.
Tissue Radiation Response with Maximum Tsallis Entropy
Sotolongo-Grau, O.; Rodriguez-Perez, D.; Antoranz, J. C.; Sotolongo-Costa, Oscar [UNED, Departamento de Fisica Matematica y de Fluidos, 28040 Madrid (Spain); UNED, Departamento de Fisica Matematica y de Fluidos, 28040 Madrid (Spain) and University of Havana, Catedra de Sistemas Complejos Henri Poincare, Havana 10400 (Cuba); University of Havana, Catedra de Sistemas Complejos Henri Poincare, Havana 10400 (Cuba)
2010-10-08T23:59:59.000Z
The expression of survival factors for radiation damaged cells is currently based on probabilistic assumptions and experimentally fitted for each tumor, radiation, and conditions. Here, we show how the simplest of these radiobiological models can be derived from the maximum entropy principle of the classical Boltzmann-Gibbs expression. We extend this derivation using the Tsallis entropy and a cutoff hypothesis, motivated by clinical observations. The obtained expression shows a remarkable agreement with the experimental data found in the literature.
A global maximum power point tracking DC-DC converter
Duncan, Joseph, 1981-
2005-01-01T23:59:59.000Z
This thesis describes the design, and validation of a maximum power point tracking DC-DC converter capable of following the true global maximum power point in the presence of other local maximum. It does this without the ...
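One generic way to combine global and local maximum power point tracking, sketched here with a toy power curve, is an occasional full sweep followed by local perturb-and-observe. This is an illustrative strategy under assumed conditions, not the converter or control law described in the thesis.

```python
import numpy as np

def pv_power(duty):
    """Toy P-V curve with two local maxima (stands in for real converter measurements)."""
    return 40 * np.exp(-((duty - 0.3) / 0.08) ** 2) + 100 * np.exp(-((duty - 0.7) / 0.1) ** 2)

def global_sweep(n_points=50):
    """Coarse sweep over the full duty-cycle range to locate the global peak."""
    duties = np.linspace(0.05, 0.95, n_points)
    powers = np.array([pv_power(d) for d in duties])
    return duties[np.argmax(powers)]

def perturb_and_observe(duty, step=0.005, iterations=100):
    """Local hill climbing around the operating point between global sweeps."""
    power = pv_power(duty)
    for _ in range(iterations):
        candidate = duty + step
        p_new = pv_power(candidate)
        if p_new >= power:
            duty, power = candidate, p_new
        else:
            step = -step            # reverse direction when power drops
    return duty

if __name__ == "__main__":
    d0 = global_sweep()                         # avoids locking onto the local maximum
    print("operating point:", perturb_and_observe(d0))
```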
articulatorily constrained maximum: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
... weight spanning forests. Amitabha Bagchi; Ankur Bhargava; Torsten Suel (2005-01-01) ... Maximum Entropy Correlated Equilibria (MIT - DSpace). Summary: We study maximum entropy ...
Conductivity maximum in a charged colloidal suspension
Bastea, S
2009-01-27T23:59:59.000Z
Molecular dynamics simulations of a charged colloidal suspension in the salt-free regime show that the system exhibits an electrical conductivity maximum as a function of colloid charge. We attribute this behavior to two main competing effects: colloid effective charge saturation due to counterion 'condensation' and diffusion slowdown due to the relaxation effect. In agreement with previous observations, we also find that the effective transported charge is larger than the one determined by the Stern layer and suggest that it corresponds to the boundary fluid layer at the surface of the colloidal particles.
Channel State Prediction in Cognitive Radio, Part II: Single-User Prediction
Qiu, Robert Caiming
Channel State Prediction in Cognitive Radio, Part II: Single-User Prediction. Zhe Chen, Nan Guo ... prediction of channel state is proposed to minimize the negative impact of response delays caused by hardware ... single-user (SU) prediction is proposed and examined. In order to have convincing performance evaluation results, real-world ...
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Lowe, Douglas; Archer-Nicholls, Scott; Morgan, Will; Allan, James D.; Utembe, Steve; Ouyang, Bin; Aruffo, Eleonora; Le Breton, Michael; Zaveri, Rahul A.; di Carlo, Piero; et al
2015-01-01T23:59:59.000Z
Chemical modelling studies have been conducted over north-western Europe in summer conditions, showing that night-time dinitrogen pentoxide (N2O5) heterogeneous reactive uptake is important regionally in modulating particulate nitrate and has a modest influence on oxidative chemistry. Results from Weather Research and Forecasting model with Chemistry (WRF-Chem) model simulations, run with a detailed volatile organic compound (VOC) gas-phase chemistry scheme and the Model for Simulating Aerosol Interactions and Chemistry (MOSAIC) sectional aerosol scheme, were compared with a series of airborne gas and particulate measurements made over the UK in July 2010. Modelled mixing ratios of key gas-phase species were reasonably accurate (correlations with measurements of 0.7–0.9 for NO2 and O3). However modelled loadings of particulate species were less accurate (correlation with measurements for particulate sulfate and ammonium were between 0.0 and 0.6). Sulfate mass loadings were particularly low (modelled means of 0.5–0.7 µg kg−1 air, compared with measurements of 1.0–1.5 µg kg−1 air). Two flights from the campaign were used as test cases – one with low relative humidity (RH) (60–70%), the other with high RH (80–90%). N2O5 heterogeneous chemistry was found to not be important in the low-RH test case; but in the high-RH test case it had a strong effect and significantly improved the agreement between modelled and measured NO3 and N2O5. When the model failed to capture atmospheric RH correctly, the modelled NO3 and N2O5 mixing ratios for these flights differed significantly from the measurements. This demonstrates that, for regional modelling which involves heterogeneous processes, it is essential to capture the ambient temperature and water vapour profiles. The night-time NO3 oxidation of VOCs across the whole region was found to be 100–300 times slower than the daytime OH oxidation of these compounds. The difference in contribution was less for alkenes (× 80) and comparable for dimethylsulfide (DMS). However the suppression of NO3 mixing ratios across the domain by N2O5 heterogeneous chemistry has only a very slight, negative, influence on this oxidative capacity. The influence on regional particulate nitrate mass loadings is stronger. Night-time N2O5 heterogeneous chemistry maintains the production of particulate nitrate within polluted regions: when this process is taken into consideration, the daytime peak (for the 95th percentile) of PM10 nitrate mass loadings remains around 5.6 µg kg−1 air, but the night-time minimum increases from 3.5 to 4.6 µg kg−1 air. The sustaining of higher particulate mass loadings through the night by this process improves model skill at matching measured aerosol nitrate diurnal cycles and will negatively impact on regional air quality, requiring this process to be included in regional models.
Mass Parameterizations and Predictions of Isotopic Observables
S. R. Souza; P. Danielewicz; S. Das Gupta; R. Donangelo; W. A. Friedman; W. G. Lynch; W. P. Tan; M. B. Tsang
2003-03-24T23:59:59.000Z
We discuss the accuracy of mass models for extrapolating to very asymmetric nuclei and the impact of such extrapolations on the predictions of isotopic observables in multifragmentation. We obtain improved mass predictions by incorporating measured masses and extrapolating to unmeasured masses with a mass formula that includes surface symmetry and Coulomb terms. We find that using accurate masses has a significant impact on the predicted isotopic observables.
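For orientation, a liquid-drop-style parameterization with volume, surface, symmetry, surface-symmetry, and Coulomb terms takes a form like the following. This is illustrative only; the coefficients and the exact functional form used in the paper are not reproduced here.

```latex
% Illustrative liquid-drop-style binding energy with surface-symmetry and Coulomb terms.
% a_v, a_s, S_v, S_s, a_c are fitted coefficients (hypothetical here), and N = A - Z.
\begin{equation}
  B(A,Z) \;=\; a_v A \;-\; a_s A^{2/3}
         \;-\; \Bigl(S_v - \frac{S_s}{A^{1/3}}\Bigr)\frac{(N-Z)^2}{A}
         \;-\; a_c \frac{Z^2}{A^{1/3}} .
\end{equation}
```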
Maximum screening fields of superconducting multilayer structures
Gurevich, Alex
2015-01-01T23:59:59.000Z
It is shown that a multilayer comprised of alternating thin superconducting and insulating layers on a thick substrate can fully screen the applied magnetic field exceeding the superheating fields $H_s$ of both the superconducting layers and the substrate; the maximum Meissner field is achieved at an optimum multilayer thickness. For instance, a dirty layer of thickness $\sim 0.1\;\mu$m at the Nb surface could increase $H_s \simeq 240$ mT of a clean Nb up to $H_s \simeq 290$ mT. Optimized multilayers of Nb$_3$Sn, NbN, some of the iron pnictides, or alloyed Nb deposited onto the surface of the Nb resonator cavities could potentially double the rf breakdown field, pushing the peak accelerating electric fields above 100 MV/m while protecting the cavity from dendritic thermomagnetic avalanches caused by local penetration of vortices.
Broader source: Energy.gov [DOE]
Predictive maintenance aims to detect equipment degradation and address problems as they arise. The result indicates potential issues, which are controlled or eliminated prior to any significant system deterioration.
Maximum Entropy Method Approach to $?$ Term
Masahiro Imachi; Yasuhiko Shinno; Hiroshi Yoneyama
2004-06-09T23:59:59.000Z
In Monte Carlo simulations of lattice field theory with a $\theta$ term, one confronts the complex weight problem, or the sign problem. This is circumvented by performing the Fourier transform of the topological charge distribution $P(Q)$. This procedure, however, causes a flattening phenomenon of the free energy $f(\theta)$, which makes study of the phase structure unfeasible. In order to treat this problem, we apply the maximum entropy method (MEM) to a Gaussian form of $P(Q)$, which serves as a good example to test whether the MEM can be applied effectively to the $\theta$ term. We study the case with flattening as well as that without flattening. In the latter case, the results of the MEM agree with those obtained from the direct application of the Fourier transform. For the former, the MEM gives a smoother $f(\theta)$ than that of the Fourier transform. Among various default models investigated, the images which yield the least error do not show flattening, although some others cannot be excluded given the uncertainty related to statistical error.
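A small numerical toy (synthetic data, not the lattice results of the paper) shows how finite statistics produce the flattening of $f(\theta)$ obtained by Fourier transforming a Gaussian $P(Q)$; the volume and variance below are arbitrary illustrative choices.

```python
import numpy as np

# Gaussian topological charge distribution P(Q) ~ exp(-Q^2 / (2 c V)), on integer Q.
V, c = 50.0, 0.2
Q = np.arange(-200, 201)
P_exact = np.exp(-Q**2 / (2 * c * V))
P_exact /= P_exact.sum()

# Mimic Monte Carlo statistics by sampling Q a finite number of times.
rng = np.random.default_rng(0)
samples = rng.choice(Q, size=100_000, p=P_exact)
P_mc = np.array([(samples == q).mean() for q in Q])

theta = np.linspace(0.0, np.pi, 200)

def free_energy(P):
    # Z(theta) = sum_Q P(Q) e^{i theta Q}; real because P(-Q) = P(Q).
    Z = np.array([np.sum(P * np.cos(t * Q)) for t in theta])
    return -np.log(np.abs(Z) + 1e-300) / V

f_exact, f_mc = free_energy(P_exact), free_energy(P_mc)
# f_exact keeps rising (~ c*theta^2/2); the sampled estimate flattens once Z drops
# below the statistical noise floor, which is the phenomenon the MEM is meant to cure.
print(f_exact[-1], f_mc[-1])
```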
Maximum Throughput Power Control in CDMA Wireless Networks
Mellor-Crummey, John
Maximum Throughput Power Control in CDMA Wireless Networks. Anastasios Giannoulis ... introduce crosslayer, distributed power control algorithms that guarantee maximum possible data throughput ... performing dynamic routing and scheduling together with power control. The crosslayer interaction consists ...
Business Development - Predictive Maintenance Products
Sceiczina, P.
2005-01-01T23:59:59.000Z
BUSINESS DEVELOPMENT - PREDICTIVE MAINTENANCE PRODUCTS Phillip Sceiczina, ifm efector, inc. In this time of global competitiveness, more companies are focusing on reducing manufacturing costs to increase profits. Energy costs can be a... significant portion of a company's manufacturing costs. Compressed air leakage is often an overlooked area in predictive maintenance programs, however it greatly impacts the amount of electricity required to run a plant. This paper quantifies the cost...
GMM Estimation of a Maximum Entropy Distribution with Interval Data
Perloff, Jeffrey M.
GMM Estimation of a Maximum Entropy Distribution with Interval Data. Ximing Wu and Jeffrey M. Perloff ... estimate it using a simple yet flexible maximum entropy density. Our Monte Carlo simulations show that the proposed maximum entropy density is able to approximate various distributions extremely well. The two ...
Maximum gravitational-wave energy emissible in magnetar flares
Alessandra Corsi; Benjamin J. Owen
2011-02-16T23:59:59.000Z
Recent searches of gravitational-wave (GW) data raise the question of what maximum GW energies could be emitted during gamma-ray flares of highly magnetized neutron stars (magnetars). The highest energies ($\sim 10^{49}$ erg) predicted so far come from a model [K. Ioka, Mon. Not. Roy. Astron. Soc. 327, 639 (2001)] in which the internal magnetic field of a magnetar experiences a global reconfiguration, changing the hydromagnetic equilibrium structure of the star and tapping the gravitational potential energy without changing the magnetic potential energy. The largest energies in this model assume very special conditions, including a large change in moment of inertia (which was observed in at most one flare), a very high internal magnetic field, and a very soft equation of state. Here we show that energies of $10^{48}$-$10^{49}$ erg are possible under more generic conditions by tapping the magnetic energy, and we note that similar energies may also be available through cracking of exotic solid cores. Current observational limits on gravitational waves from magnetar fundamental modes are just reaching these energies and will beat them in the era of advanced interferometers.
Rafael Brada; Mordehai Milgrom
1998-12-21T23:59:59.000Z
We have recently discovered that the modified dynamics (MOND) implies some universal upper bound on the acceleration that can be contributed by a `dark halo'--assumed in a Newtonian analysis to account for the effects of MOND. Not surprisingly, the limit is of the order of the acceleration constant of the theory. This can be contrasted directly with the results of structure-formation simulations. The new limit is substantial and different from earlier MOND acceleration limits (discussed in connection with the MOND explanation of the Freeman law for galaxy disks, and the Fish law for ellipticals): It pertains to the `halo', and not to the observed galaxy; it is absolute, and independent of further physical assumptions on the nature of the galactic system; and it applies at all radii, whereas the other limits apply only to the mean acceleration in the system.
Predicting Maximum Tree Heights and Other Traits from Allometric Scaling and Resource Limitations
Kempes, Chris Poling
Terrestrial vegetation plays a central role in regulating the carbon and water cycles, and adjusting planetary albedo. As such, a clear understanding and accurate characterization of vegetation dynamics is critical to ...
Model Predictive Control Wind Turbines
Model Predictive Control of Wind Turbines. Martin Klauco. Kongens Lyngby 2012, IMM-MSc-2012-65. Summary: Wind turbines are the biggest part of the green energy industry. Increasing interest ... control strategies. Control strategy has a significant impact on the wind turbine operation on many levels.
A Near Maximum Likelihood Decoding Algorithm for MIMO Systems ...
Amin Mobasher
2005-10-03T23:59:59.000Z
Oct 3, 2005 ... A Near Maximum Likelihood Decoding Algorithm for MIMO Systems Based ... models are also used for soft output decoding in MIMO systems.
Computing the Maximum Volume Inscribed Ellipsoid of a Polytopic ...
Jianzhe Zhen
2015-01-23T23:59:59.000Z
Jan 23, 2015 ... Abstract: This paper introduces a method for computing the maximum volume inscribed ellipsoid and k-ball of a projected polytope. It is known ...
Solving Maximum-Entropy Sampling Problems Using Factored Masks
Samuel Burer
2005-03-02T23:59:59.000Z
Mar 2, 2005 ... Abstract: We present a practical approach to Anstreicher and Lee's masked spectral bound for maximum-entropy sampling, and we describe ...
A masked spectral bound for maximum-entropy sampling
Kurt Anstreicher
2003-09-16T23:59:59.000Z
Sep 16, 2003 ... Abstract: We introduce a new masked spectral bound for the maximum-entropy sampling problem. This bound is a continuous generalization of ...
Maximum entropy generation in open systems: the Fourth Law?
Umberto Lucia
2010-11-17T23:59:59.000Z
This paper develops an analytical and rigorous formulation of the maximum entropy generation principle. The result is suggested as the Fourth Law of Thermodynamics.
annual maximum extent: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
... of the Sixteenth Annual Conference on Neural Information Processing Systems (NIPS 2002): A Maximum Entropy Approach To ... (Computer Technologies and Information Sciences websites) ...
analog fixed maximum: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
... state for given entanglement, which can be viewed as an analogue of the Jaynes maximum entropy principle. Pawel Horodecki; Ryszard Horodecki; Michal Horodecki (1998-05-22) ...
IBM Research Report Solving Maximum-Entropy Sampling ...
2005-02-28T23:59:59.000Z
Feb 28, 2005 ... Solving Maximum-Entropy Sampling Problems Using Factored Masks. Samuel Burer. Department of Management Sciences, University of Iowa.
A Requirement for Significant Reduction in the Maximum BTU Input...
A Requirement for Significant Reduction in the Maximum BTU Input Rate of Decorative Vented Gas Fireplaces Would Impose Substantial Burdens on Manufacturers ...
Deterioration Process of Sintered Material by Impact Repetition
Shirakashi, Takahiro [Department of Precision Machinery Engineering, Tokyo Denki University, 2-2 Kandanishiki-cho, Chiyoda-ku, Tokyo (Japan)
2007-04-07T23:59:59.000Z
To predict time-dependent tool breakage of a sintered carbide tool in interrupted turning operations, a special impact-stressing set-up was prepared. The change of fracture stress (the deterioration process) of a sintered carbide tool material under repeated tensile and compressive impact stressing is discussed, and the process is evaluated through a fracture stress criterion superposed with Weibull's distribution. The reliability of the fracture stress decreases with repetition; the maximum fracture stress, however, does not. The equivalence between compressive and tensile stresses in the process is also discussed, and the process is shown as a change of the probabilistic fracture locus with the number of impact repetitions. Finally, the deterioration state of a sintered carbide tool under interrupted turning with the so-called parallel entry and a very soft exit condition is estimated from the deterioration process, and a probability map of breakage occurrence on the tool surface is shown for a given cutting condition. Tool life based on breakage occurrence is also expressed through the change of fracture probability with impact repetition and is evaluated by experiments.
Appendix 22 Draft Nutrient Management Plan and Total Maximum Daily
Appendix 22: Draft Nutrient Management Plan and Total Maximum Daily Load for Flathead Lake, Montana. 11/01/01 DRAFT (October 30, 2001). Draft Nutrient Management Plan and Total Maximum Daily Load ... Section 3.0: Applicable Water Quality Standards.
FAST SPEAKER ADAPTION VIA MAXIMUM PENALIZED LIKELIHOOD KERNEL REGRESSION
Tsang Wai Hung "Ivor"
... of MLLR using nonlinear regression. Specifically, kernel regression is applied with appropriate ... of Science and Technology, Clear Water Bay, Hong Kong. Abstract: Maximum likelihood linear regression (MLLR) has ... and transformation-based methods, most notably maximum likelihood linear regression (MLLR) adaptation [3]. However ...
Digital tomosynthesis mammography using a parallel maximum likelihood reconstruction method
Meleis, Waleed
Digital tomosynthesis mammography using a parallel maximum likelihood reconstruction method. Tao Wu ... (a) Radiology Department, Massachusetts General Hospital, Boston, MA 02114; (b) Dept. of Electrical and Computer ... based on an iterative maximum likelihood (ML) algorithm, is developed to provide fast reconstruction for digital ...
Exact computation of the Maximum Entropy Potential of spiking neural networks models
Cofre, Rodrigo
2014-01-01T23:59:59.000Z
Understanding how stimuli and synaptic connectivity influence the statistics of spike patterns in neural networks is a central question in computational neuroscience. The Maximum Entropy approach has been successfully used to characterize the statistical response of simultaneously recorded spiking neurons responding to stimuli. But, in spite of good performance in terms of prediction, the fitting parameters do not explain the underlying mechanistic causes of the observed correlations. On the other hand, mathematical models of spiking neurons (neuro-mimetic models) provide a probabilistic mapping between stimulus, network architecture and spike patterns in terms of conditional probabilities. In this paper we build an exact analytical mapping between neuro-mimetic and Maximum Entropy models.
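For context, the standard pairwise maximum entropy (Ising-like) fit that such analyses build on can be sketched for a toy three-neuron recording. The data are synthetic and the gradient-ascent fit is the generic procedure, not the analytical mapping constructed in the paper.

```python
import numpy as np
from itertools import product

# Synthetic binary spike words for 3 neurons, with an induced pairwise correlation.
rng = np.random.default_rng(1)
data = (rng.random((5000, 3)) < [0.2, 0.5, 0.35]).astype(float)
data[:, 1] = np.where(rng.random(5000) < 0.7, data[:, 0], data[:, 1])

states = np.array(list(product([0, 1], repeat=3)), dtype=float)   # all 8 spike words

def model_moments(h, J):
    """First and second moments of p(s) ~ exp(h.s + s.J.s) with J upper-triangular."""
    E = states @ h + np.einsum('si,ij,sj->s', states, J, states)
    p = np.exp(E)
    p /= p.sum()
    return p @ states, np.einsum('s,si,sj->ij', p, states, states)

emp_first = data.mean(axis=0)
emp_second = data.T @ data / len(data)

h, J = np.zeros(3), np.zeros((3, 3))
for _ in range(2000):                        # simple gradient ascent on the log-likelihood
    m1, m2 = model_moments(h, J)
    h += 0.1 * (emp_first - m1)
    J += 0.1 * np.triu(emp_second - m2, k=1)  # couplings live on the upper triangle

print(h, J)                                  # fitted fields and pairwise couplings
```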
Liu, Jian
2008-01-01T23:59:59.000Z
... (1992). J. Skilling, in Maximum entropy and Bayesian methods ... (1989). S. F. Gull, in Maximum entropy and Bayesian methods ... with the classical maximum entropy (CME) technique (MEAC- ...
Improved constraints on transit time distributions from argon 39: A maximum entropy approach
Holzer, Mark; Primeau, Francois W
2010-01-01T23:59:59.000Z
... Gull (1991), Bayesian maximum entropy image reconstruction ... Atlantic ventilated? Maximum entropy inversions of bottle ... from argon 39: A maximum entropy approach. Mark Holzer ...
Soffer, Bernard H; Kikuchi, Ryoichi
1994-01-01T23:59:59.000Z
... of Confidence for Maximum Entropy Restoration and Estimation ... (April 3, 1992). The Maximum Entropy method, using physical ... are discussed. Maximum Entropy (ME) estimation has been ...
Burin des Roziers, T.
2011-01-01T23:59:59.000Z
... Mathematics ... In optimal prediction ... Communications ... press ... and R. Kupferman, On the prediction of large-scale dynamics ... and D. Levy, Optimal prediction and perturbation theory.
LANDFILL OPERATION FOR CARBON SEQUESTRATION AND MAXIMUM METHANE EMISSION CONTROL
Don Augenstein; Ramin Yazdani; Rick Moore; Michelle Byars; Jeff Kieffer; Professor Morton Barlaz; Rinav Mehta
2000-02-26T23:59:59.000Z
Controlled landfilling is an approach to manage solid waste landfills, so as to rapidly complete methane generation, while maximizing gas capture and minimizing the usual emissions of methane to the atmosphere. With controlled landfilling, methane generation is accelerated to more rapid and earlier completion to full potential by improving conditions (principally moisture, but also temperature) to optimize biological processes occurring within the landfill. Gas is contained through use of surface membrane cover. Gas is captured via porous layers, under the cover, operated at slight vacuum. A field demonstration project has been ongoing under NETL sponsorship for the past several years near Davis, CA. Results have been extremely encouraging. Two major benefits of the technology are reduction of landfill methane emissions to minuscule levels, and the recovery of greater amounts of landfill methane energy in much shorter times, more predictably, than with conventional landfill practice. With the large amount of US landfill methane generated, and greenhouse potency of methane, better landfill methane control can play a substantial role both in reduction of US greenhouse gas emissions and in US renewable energy. The work described in this report, to demonstrate and advance this technology, has used two demonstration-scale cells of size (8000 metric tons [tonnes]), sufficient to replicate many heat and compaction characteristics of larger ''full-scale'' landfills. An enhanced demonstration cell has received moisture supplementation to field capacity. This is the maximum moisture waste can hold while still limiting liquid drainage rate to minimal and safely manageable levels. The enhanced landfill module was compared to a parallel control landfill module receiving no moisture additions. Gas recovery has continued for a period of over 4 years. It is quite encouraging that the enhanced cell methane recovery has been close to 10-fold that experienced with conventional landfills. This is the highest methane recovery rate per unit waste, and thus progress toward stabilization, documented anywhere for such a large waste mass. This high recovery rate is attributed to moisture, and elevated temperature attained inexpensively during startup. Economic analyses performed under Phase I of this NETL contract indicate ''greenhouse cost effectiveness'' to be excellent. Other benefits include substantial waste volume loss (over 30%) which translates to extended landfill life. Other environmental benefits include rapidly improved quality and stabilization (lowered pollutant levels) in liquid leachate which drains from the waste.
Multichannel Blind Identification: From Subspace to Maximum Likelihood Methods
Tong, Lang
Multichannel Blind Identification: From Subspace to Maximum Likelihood Methods. Lang Tong, Member, IEEE, and Sylvie Perreau. Invited Paper. A review of recent blind channel estimation algorithms is pre- ... Blind equalization, parameter estimation, system identification. I. INTRODUCTION. A. What Is Blind ...
Maximum containment : the most controversial labs in the world
Bruzek, Alison K. (Allison Kim)
2013-01-01T23:59:59.000Z
In 2002, following the September 11th attacks and the anthrax letters, the United States allocated money to build two maximum containment biology labs. Called Biosafety Level 4 (BSL-4) facilities, these labs were built to ...
On the maximum pressure rise rate in boosted HCCI operation
Wildman, Craig B.
This paper explores the combined effects of boosting, intake air temperature, trapped residual gas fraction, and dilution on the Maximum Pressure Rise Rate (MPRR) in a boosted single cylinder gasoline HCCI engine with ...
Maximum Photovoltaic Penetration Levels on Typical Distribution Feeders: Preprint
Hoke, A.; Butler, R.; Hambrick, J.; Kroposki, B.
2012-07-01T23:59:59.000Z
This paper presents simulation results for a taxonomy of typical distribution feeders with various levels of photovoltaic (PV) penetration. For each of the 16 feeders simulated, the maximum PV penetration that did not result in steady-state voltage or current violation is presented for several PV location scenarios: clustered near the feeder source, clustered near the midpoint of the feeder, clustered near the end of the feeder, randomly located, and evenly distributed. In addition, the maximum level of PV is presented for single, large PV systems at each location. Maximum PV penetration was determined by requiring that feeder voltages stay within ANSI Range A and that feeder currents stay within the ranges determined by overcurrent protection devices. Simulations were run in GridLAB-D using hourly time steps over a year with randomized load profiles based on utility data and typical meteorological year weather data. For 86% of the cases simulated, maximum PV penetration was at least 30% of peak load.
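A minimal sketch of the screening logic implied by these criteria follows; the data structures and numbers are hypothetical, and the actual study relies on full GridLAB-D power-flow results rather than precomputed arrays.

```python
import numpy as np

ANSI_RANGE_A = (0.95, 1.05)   # service voltage limits in per-unit

def max_penetration_without_violation(voltage_pu_by_level, current_pu_by_level):
    """Return the largest PV penetration level with no steady-state violation.

    voltage_pu_by_level / current_pu_by_level: dicts mapping a penetration level
    (fraction of peak load) to arrays of simulated hourly node voltages / line loadings.
    """
    feasible = []
    for level in sorted(voltage_pu_by_level):
        v = np.asarray(voltage_pu_by_level[level])
        i = np.asarray(current_pu_by_level[level])
        v_ok = np.all((v >= ANSI_RANGE_A[0]) & (v <= ANSI_RANGE_A[1]))
        i_ok = np.all(i <= 1.0)      # within overcurrent-protection rating
        if v_ok and i_ok:
            feasible.append(level)
    return max(feasible) if feasible else 0.0

if __name__ == "__main__":
    # Hypothetical results for penetration levels of 15% and 30% of peak load.
    v = {0.15: [1.01, 1.03, 0.99], 0.30: [1.02, 1.06, 0.98]}
    i = {0.15: [0.7, 0.8, 0.6], 0.30: [0.9, 1.1, 0.8]}
    print(max_penetration_without_violation(v, i))   # -> 0.15
```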
Bacteria Total Maximum Daily Load Task Force Final Report
Jones, C. Allan; Wagner, Kevin; Di Giovanni, George; Hauck, Larry; Mott, Joanna; Rifai, Hanadi; Srinivasan, Raghavan; Ward, George; Wythe, Kathy
2009-01-01T23:59:59.000Z
In September 2006, the Texas Commission on Environmental Quality (TCEQ) and Texas State Soil and Water Conservation Board (TSSWCB) charged a seven-person Bacteria Total Maximum Daily Load (TMDL) Task Force with: * examining approaches...
Maximum Likelihood Decoding of Reed Solomon Codes Madhu Sudan
Sudan, Madhu
Maximum Likelihood Decoding of Reed Solomon Codes. Madhu Sudan. Abstract: We present a randomized ... and Welch [4] (see, for instance, Gemmell and Sudan [9]). In this paper we present an algorithm which ...
Multi-Class Classification with Maximum Margin Multiple Kernel
Tomkins, Andrew
... (named OBSCURE and UFO-MKL, respectively) are used to optimize primal versions of equivalent problems ... the OBSCURE and UFO-MKL algorithms are compared against MCMKL ...
Maximum entropy method and oscillations in the diffraction cone
O. Dumbrajs; J. Kontros; A. Lengyel
2000-07-15T23:59:59.000Z
The maximum entropy method has been applied to investigate the oscillating structure in the pbarp- and pp-elastic scattering differential cross-section at high energy and small momentum transfer. Oscillations satisfying quite realistic reliability criteria have been found.
EERE Takes Important Steps to Ensure Maximum Impact of Technology Program
Broader source: Energy.gov (indexed) [DOE]
EERE Takes Important Steps to Ensure Maximum Impact of Technology Program
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Filtering Additive Measurement Noise with Maximum Entropy in the Mean
Henryk Gzyl; Enrique ter Horst
2007-09-04T23:59:59.000Z
The purpose of this note is to show how the method of maximum entropy in the mean (MEM) may be used to improve parametric estimation when the measurements are corrupted by a large level of noise. The method is developed in the context of a concrete example: that of estimating the parameter of an exponential distribution. We compare the performance of our method with the Bayesian and maximum likelihood approaches.
The maximum entropy techniques and the statistical description of systems
B. Z. Belashev; M. K. Suleymanov
2001-10-19T23:59:59.000Z
The maximum entropy technique (MENT) is used to determine the distribution functions of physical values. MENT naturally combines required maximum entropy, the properties of a system and connection conditions in the form of restrictions imposed on the system. It can, therefore, be employed to statistically describe closed and open systems. Examples in which MENT is used to describe equilibrium and non-equilibrium states, as well as steady states that are far from being in thermodynamic equilibrium, are discussed.
Variable Selection for Modeling the Absolute Magnitude at Maximum of Type Ia Supernovae
Uemura, Makoto; Kawabata, S; Ikeda, Shiro; Maeda, Keiichi
2015-01-01T23:59:59.000Z
We discuss what is an appropriate set of explanatory variables in order to predict the absolute magnitude at the maximum of Type Ia supernovae. In order to have a good prediction, the error for future data, which is called the "generalization error," should be small. We use cross-validation in order to control the generalization error and LASSO-type estimator in order to choose the set of variables. This approach can be used even in the case that the number of samples is smaller than the number of candidate variables. We studied the Berkeley supernova database with our approach. Candidates of the explanatory variables include normalized spectral data, variables about lines, and previously proposed flux-ratios, as well as the color and light-curve widths. As a result, we confirmed the past understanding about Type Ia supernova: i) The absolute magnitude at maximum depends on the color and light-curve width. ii) The light-curve width depends on the strength of Si II. Recent studies have suggested to add more va...
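The variable selection strategy described here, a LASSO-type estimator with cross-validation to control the generalization error, can be sketched with scikit-learn on synthetic data. This is illustrative only: the real explanatory variables are the spectral features, flux ratios, color, and light-curve width from the Berkeley database, which are not reproduced here.

```python
import numpy as np
from sklearn.linear_model import LassoCV

# Synthetic stand-in: more candidate variables than samples, as allowed by LASSO.
rng = np.random.default_rng(0)
n_samples, n_features = 50, 200
X = rng.normal(size=(n_samples, n_features))          # candidate explanatory variables
true_coef = np.zeros(n_features)
true_coef[[0, 1]] = [0.8, -0.5]                       # e.g. "color" and "light-curve width"
y = X @ true_coef + 0.1 * rng.normal(size=n_samples)  # absolute magnitude at maximum

model = LassoCV(cv=5).fit(X, y)                       # penalty chosen by cross-validation
selected = np.flatnonzero(model.coef_)
print("selected variables:", selected)
```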
Robust Maximum Lifetime Routing and Energy Allocation in Wireless Sensor Networks
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Paschalidis, Ioannis Ch.; Wu, Ruomin
2012-01-01T23:59:59.000Z
We consider the maximum lifetime routing problem in wireless sensor networks in two settings: (a) when nodes’ initial energy is given and (b) when it is subject to optimization. The optimal solution and objective value provide optimal flows and the corresponding predicted lifetime, respectively. We stipulate that there is uncertainty in various network parameters (available energy and energy depletion rates). In setting (a) we show that for specific, yet typical, network topologies, the actual network lifetime will reach the predicted value with a probability that converges to zero as the number of nodes grows large. In setting (b) the same result holds for all topologies. We develop a series of robust problem formulations, ranging from pessimistic to optimistic. A set of parameters enables the tuning of the conservatism of the formulation to obtain network flows with a desirably high probability that the corresponding lifetime prediction is achieved. We establish a number of properties for the robust network flows and energy allocations and provide numerical results to highlight the tradeoff between predicted lifetime and the probability achieved. Further, we analyze an interesting limiting regime of massively deployed sensor networks and essentially solve a continuous version of the problem.
Bayesian prediction of the Gaussian states from n sample
F. Tanaka; F. Komaki
2006-05-12T23:59:59.000Z
Recently, the quantum prediction problem was proposed in the Bayesian framework. It is shown that Bayesian predictive density operators are the best predictive density operators when we evaluate them by using the average relative entropy based on a prior. As an illustrative example, we treat the Gaussian states family, adopting the Gaussian distribution as a prior, and give the Bayesian predictive density operator with the heterodyne measurement fixed. We show that it is better than the plug-in predictive density operator based on the maximum likelihood estimate by calculating each average relative entropy.
Modeling of Wave Impact Using a Pendulum System
Nie, Chunyong
2011-08-08T23:59:59.000Z
For high speed vessels and offshore structures, wave impact, a main source of environmental loads, causes high local stresses and structural failure. However, the prediction of wave impact loads presents numerous challenges due to the complex nature...
Modeling of Wave Impact Using a Pendulum System
Nie, Chunyong
2011-08-08T23:59:59.000Z
For high speed vessels and offshore structures, wave impact, a main source of environmental loads, causes high local stresses and structural failure. However, the prediction of wave impact loads presents numerous challenges due to the complex nature...
Minimum Entangling Power is Close to Its Maximum
Jianxin Chen; Zhengfeng Ji; David W Kribs; Bei Zeng
2012-10-04T23:59:59.000Z
Given a quantum gate $U$ acting on a bipartite quantum system, its maximum (average, minimum) entangling power is the maximum (average, minimum) entanglement generation with respect to a certain entanglement measure when the inputs are restricted to be product states. In this paper, we mainly focus on the 'weakest' one, i.e., the minimum entangling power, among all these entangling powers. We show that, by choosing von Neumann entropy of the reduced density operator or Schmidt rank as the entanglement measure, even the 'weakest' entangling power is generically very close to its maximal possible entanglement generation. In other words, maximum, average and minimum entangling powers are generically close. We then study minimum entangling power with respect to other Lipschitz-continuous entanglement measures and generalize our results to multipartite quantum systems. As a straightforward application, a random quantum gate will almost surely be an intrinsically fault-tolerant entangling device that will always transform every low-entangled state to a near-maximally entangled state.
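As a rough numerical illustration of these definitions, the sketch below estimates the entanglement generation of a random two-qubit gate over random product inputs. This is a Monte Carlo sketch, not the paper's analysis; sampling product inputs only estimates the true minimum and maximum, which strictly require optimization over all product states.

```python
import numpy as np

def random_qubit(rng):
    """Haar-random single-qubit pure state."""
    v = rng.normal(size=2) + 1j * rng.normal(size=2)
    return v / np.linalg.norm(v)

def entanglement_entropy(psi):
    """Von Neumann entropy (in bits) of the reduced state of a two-qubit pure state."""
    M = psi.reshape(2, 2)
    rho_a = np.einsum('ij,kj->ik', M, M.conj())        # trace out the second qubit
    evals = np.clip(np.linalg.eigvalsh(rho_a), 1e-12, 1.0)
    return float(-(evals * np.log2(evals)).sum())

def entangling_powers(U, n_inputs=2000, seed=0):
    """Monte Carlo estimate of (min, average, max) entanglement over product inputs."""
    rng = np.random.default_rng(seed)
    ent = np.array([entanglement_entropy(U @ np.kron(random_qubit(rng), random_qubit(rng)))
                    for _ in range(n_inputs)])
    return ent.min(), ent.mean(), ent.max()

# Example: a random two-qubit unitary from the QR decomposition of a complex Gaussian matrix.
rng = np.random.default_rng(1)
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
U, _ = np.linalg.qr(A)
print(entangling_powers(U))
```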
NGC2613, 3198, 6503, 7184: Case studies against `maximum' disks
B. Fuchs
1998-12-02T23:59:59.000Z
Decompositions of the rotation curves of NGC2613, 3198, 6503, and 7184 are analysed. For these galaxies the radial velocity dispersions of the stars have been measured and their morphology is clearly discernible. If the parameters of the decompositions are chosen according to the `maximum' disk hypothesis, the Toomre Q stability parameter is systematically less than one and the multiplicities of the spiral arms as expected from density wave theory are inconsistent with the observed morphologies of the galaxies. The apparent Q<1 instability, in particular, is a strong argument against the `maximum' disk hypothesis.
When are microcircuits well-modeled by maximum entropy methods?
2010-07-20T23:59:59.000Z
POSTER PRESENTATION Open Access. When are microcircuits well-modeled by maximum entropy methods? Andrea K Barreiro, Eric T Shea-Brown, Fred M Rieke, Julijana Gjorgjieva. From the Nineteenth Annual Computational Neuroscience Meeting: CNS*2010, San Antonio, TX, USA, 24-30 July 2010. Recent experiments in retina and cortex have demonstrated that pairwise maximum entropy (PME) methods can approximate observed spiking patterns to a high degree of accuracy [1,2]. In this paper we examine...
Valence quark distributions of the proton from maximum entropy approach
Rong Wang; Xurong Chen
2014-10-14T23:59:59.000Z
We present an attempt to apply the maximum entropy principle to determine valence quark distributions in the proton at a very low resolution scale $Q_0^2$. The initial three valence quark distributions are obtained with limited dynamical information from the quark model and QCD theory. Valence quark distributions from this method are compared to the lepton deep inelastic scattering data, and the widely used CT10 and MSTW08 data sets. The obtained valence quark distributions are consistent with experimental observations and the latest global fits of PDFs. The maximum entropy method is expected to be particularly useful in cases where relatively little information from QCD calculations is available.
Valence quark distributions of the proton from maximum entropy approach
Wang, Rong
2014-01-01T23:59:59.000Z
We present an attempt to apply the maximum entropy principle to determine valence quark distributions in the proton at a very low resolution scale $Q_0^2$. The initial three valence quark distributions are obtained with limited dynamical information from the quark model and QCD theory. Valence quark distributions from this method are compared to the lepton deep inelastic scattering data, and the widely used CT10 and MSTW08 data sets. The obtained valence quark distributions are consistent with experimental observations and the latest global fits of PDFs. The maximum entropy method is expected to be particularly useful in cases where relatively little information from QCD calculations is available.
Assessing complexity by means of maximum entropy models
Chliamovitch, Gregor; Velasquez, Lino
2014-01-01T23:59:59.000Z
We discuss a characterization of complexity based on successive approximations of the probability density describing a system by means of maximum entropy methods, thereby quantifying the respective role played by different orders of interaction. This characterization is applied on simple cellular automata in order to put it in perspective with the usual notion of complexity for such systems based on Wolfram classes. The overlap is shown to be good, but not perfect. This suggests that complexity in the sense of Wolfram emerges as an intermediate regime of maximum entropy-based complexity, but also gives insights regarding the role of initial conditions in complexity-related issues.
Brodsky, Stanley J.; /SLAC; Wu, Xing-Gang; /Chongqing U.
2012-04-02T23:59:59.000Z
The uncertainty in setting the renormalization scale in finite-order perturbative QCD predictions using standard methods substantially reduces the precision of tests of the Standard Model in collider experiments. It is conventional to choose a typical momentum transfer of the process as the renormalization scale and take an arbitrary range to estimate the uncertainty in the QCD prediction. However, predictions using this procedure depend on the choice of renormalization scheme, leave a non-convergent renormalon perturbative series, and moreover, one obtains incorrect results when applied to QED processes. In contrast, if one fixes the renormalization scale using the Principle of Maximum Conformality (PMC), all non-conformal $\{\beta_i\}$-terms in the perturbative expansion series are summed into the running coupling, and one obtains a unique, scale-fixed, scheme-independent prediction at any finite order. The PMC renormalization scale $\mu_R^{\rm PMC}$ and the resulting finite-order PMC prediction are both to high accuracy independent of choice of the initial renormalization scale $\mu_R^{\rm init}$, consistent with renormalization group invariance. Moreover, after PMC scale-setting, the n!-growth of the pQCD expansion is eliminated. Even the residual scale-dependence at fixed order due to unknown higher-order $\{\beta_i\}$-terms is substantially suppressed. As an application, we apply the PMC procedure to obtain NNLO predictions for the $t\bar{t}$-pair hadroproduction cross-section at the Tevatron and LHC colliders. There are no renormalization scale or scheme uncertainties, thus greatly improving the precision of the QCD prediction. The PMC prediction for $\sigma_{t\bar{t}}$ is larger in magnitude in comparison with the conventional scale-setting method, and it agrees well with the present Tevatron and LHC data. We also verify that the initial scale-independence of the PMC prediction is satisfied to high accuracy at the NNLO level: the total cross-section remains almost unchanged even when taking very disparate initial scales $\mu_R^{\rm init}$ equal to $m_t$, $20\,m_t$, and $\sqrt{s}$.
Latent feature models for dyadic prediction /
Menon, Aditya Krishna
2013-01-01T23:59:59.000Z
... prediction ... Response prediction ... 2.4.3 Weighted link prediction ...
IC performance prediction system
Ramakrishnan, Venkatakrishnan
1996-01-01T23:59:59.000Z
electrical test data, supplemented with in-line and in-situ data to make performance predictions. Based on the wafer-level parametric test, we will predict chip performance in order to select the appropriate package. Predictions that fall outside acceptable...
Maximum likelihood estimation of the equity Efstathios Avdis
Kahana, Michael J.
... premium is usually estimated by taking the sample mean of stock returns and subtracting a measure ... the expected return on the aggregate stock market less the government bill rate, is of central importance ... an alternative estimator, based on maximum likelihood, that takes into account information contained ...
STATE OF CALIFORNIA MAXIMUM RATED TOTAL COOLING CAPACITY
CALIFORNIA ENERGY COMMISSION INSTALLATION CERTIFICATE CF-6R-MECH-27-HERS: Maximum Rated Total Cooling Capacity of the installed system (Btu/hr); 3b: Sum of the ARI Rated Total Cooling Capacities of multiple systems installed (the Cooling Capacities of the installed cooling systems must be calculated and entered in row 3b); 4a: MRTCC ...
Maximum power tracking control scheme for wind generator systems
Mena Lopez, Hugo Eduardo
2008-10-10T23:59:59.000Z
The purpose of this work is to develop a maximum power tracking control strategy for variable speed wind turbine systems. Modern wind turbine control systems are slow, and they depend on the design parameters of the turbine and use wind and/or rotor...
Maximum power tracking control scheme for wind generator systems
Mena, Hugo Eduardo
2009-05-15T23:59:59.000Z
The purpose of this work is to develop a maximum power tracking control strategy for variable speed wind turbine systems. Modern wind turbine control systems are slow, and they depend on the design parameters of the turbine and use wind and/or rotor...
annual maximum water: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
ORIGINAL PAPER: The distribution of...
BRANCH-CUT-AND-PROPAGATE FOR THE MAXIMUM k ...
2011-03-16T23:59:59.000Z
The maximum k-colorable subgraph problem consists of selecting a k-colorable induced subgraph of ... a symmetric subgroup $S_p$ of $\mathrm{Aut}(G)$ acts on $V_p$ for all $p \in [s]$. Let $V_p = \{v^p_1, \ldots, v^p_{q_p}\}$ ... [9] J. Crawford, M. Ginsberg, E. Luks, and A. Roy.
Renewable Energy Scheduling for Fading Channels with Maximum Power Constraint
Greenberg, Albert
Renewable Energy Scheduling for Fading Channels with Maximum Power Constraint. Zhe Wang ... In this paper, we develop an efficient algorithm to obtain the optimal energy schedule for a fading channel with energy harvesting. We assume that the side information of both the channel states and energy harvesting ...
What is a Hurricane? Tropical system with maximum sustained
Meyers, Steven D.
Hurricane Andrew: Category 4. Category 4 Hurricane: winds 131-155 mph; wall failures in homes and complete roof ... Hurricane 101. What is a Hurricane? · Tropical system with maximum sustained surface wind of 74 mph or greater. A hurricane is the worst and the strongest of all tropical systems. · Also known ...
Individual Module Maximum Power Point Tracking for Thermoelectric Generator Systems
Schaltz, Erik
... of Thermo Electric Generator (TEG) systems, a power converter is often inserted between the TEG system ... that the TEG system produces the maximum power. However, if the conditions, e.g. temperature, health, age, etc. ... find the best compromise of all modules. In order to increase the power production of the TEG system ...
Efficiency Improvement of an IPMSM using Maximum Efficiency Operating Strategy
Paderborn, Universität
Efficiency Improvement of an IPMSM using Maximum Efficiency Operating Strategy. Daniel Pohlenz. Interior permanent magnet synchronous machines (IPMSMs) are characterized by high efficiency and high torque as well as power density. The generation of reference currents ... shows that the MTPC method deviates considerably from the best efficiency under certain boundary conditions. The use
MARTIN'S MAXIMUM AND TOWER FORCING SEAN COX AND MATTEO VIALE
Viale, Matteo
MARTIN'S MAXIMUM AND TOWER FORCING. SEAN COX AND MATTEO VIALE. Abstract. There are several examples ... the Reflection Principle (RP) implies that if I is a tower of ideals which concentrates on the class GIC1 of 1 [16], shows that if PFA+ or MM holds and there is an inaccessible cardinal, then there is a tower
Retrocommissioning Case Study - Applying Building Selection Criteria for Maximum Results
Luskay, L.; Haasl, T.; Irvine, L.; Frey, D.
2002-01-01T23:59:59.000Z
RETROCOMMISSIONING CASE STUDY: "Applying Building Selection Criteria for Maximum Results". Larry Luskay, Tudi Haasl, Linda Irvine, Portland Energy Conservation, Inc., Portland, Oregon; Donald Frey, Architectural Energy Corporation, Boulder.... The building was retrocommissioned by Portland Energy Conservation, Inc. (PECI), in conjunction with Architectural Energy Corporation (AEC). The building-specific goals were: 1) Obtain cost-effective energy savings from optimizing operation...
Branstator, Grant
2014-12-09T23:59:59.000Z
The overall aim of our project was to quantify and characterize predictability of the climate as it pertains to decadal time scale predictions. By predictability we mean the degree to which a climate forecast can be distinguished from the climate that exists at initial forecast time, taking into consideration the growth of uncertainty that occurs as a result of the climate system being chaotic. In our project we were especially interested in predictability that arises from initializing forecasts from some specific state, though we also contrast this predictability with predictability arising from forecasting the reaction of the system to external forcing – for example changes in greenhouse gas concentration. Also, we put special emphasis on the predictability of prominent intrinsic patterns of the system because they often dominate system behavior. Highlights from this work include: • Development of novel methods for estimating the predictability of climate forecast models. • Quantification of the initial value predictability limits of ocean heat content and the overturning circulation in the Atlantic as they are represented in various state-of-the-art climate models. These limits varied substantially from model to model but on average were about a decade, with North Atlantic heat content tending to be more predictable than North Pacific heat content. • Comparison of predictability resulting from knowledge of the current state of the climate system with predictability resulting from estimates of how the climate system will react to changes in greenhouse gas concentrations. It turned out that knowledge of the initial state produces a larger impact on forecasts for the first 5 to 10 years of projections. • Estimation of the predictability of dominant patterns of ocean variability, including well-known patterns of variability in the North Pacific and North Atlantic. For the most part these patterns were predictable for 5 to 10 years. • Determination of especially predictable patterns in the North Atlantic. The most predictable of these retain predictability substantially longer than generic patterns, with some being predictable for two decades.
Impact damping applied to MDOF structures
McElhaney, John Michael
1995-01-01T23:59:59.000Z
Contents excerpt: ... and base unbalance multiple ...; 4.3.2 Predicted worst case vibration for the impact damper parameter selection, W = 25 lbs, D = 1.7 mil; 4.3.3 Worst case pull speed vibration...
Marcus Hutter -1 -Online Prediction Bayes versus Experts Online Prediction
Hutter, Marcus
Online Prediction: Bayes versus Experts. Table of Contents: Sequential/online prediction: Setup; Bayesian Sequence Prediction (Bayes); Prediction with Expert Advice (PEA); PEA Bounds
Predictive Maintenance Technologies
Broader source: Energy.gov [DOE]
Several diagnostic technologies and best practices are available to assist Federal agencies with predictive maintenance programs.
Maximum Entropy Principle and the Higgs Boson Mass
Alves, Alexandre; da Silva, Roberto
2014-01-01T23:59:59.000Z
A successful connection between Higgs boson decays and the Maximum Entropy Principle is presented. Based on the information theory inference approach we determine the Higgs boson mass as $M_H= 125.04\\pm 0.25$ GeV, a value fully compatible to the LHC measurement. This is straightforwardly obtained by taking the Higgs boson branching ratios as the target probability distributions of the inference, without any extra assumptions beyond the Standard Model. Yet, the principle can be a powerful tool in the construction of any model affecting the Higgs sector. We give, as an example, the case where the Higgs boson has an extra invisible decay channel. Our findings suggest that a system of Higgs bosons undergoing a collective decay to Standard Model particles is among the most fundamental ones where the Maximum Entropy Principle applies.
Maximum Entropy Principle and the Higgs Boson Mass
Alexandre Alves; Alex G. Dias; Roberto da Silva
2014-11-18T23:59:59.000Z
A successful connection between Higgs boson decays and the Maximum Entropy Principle is presented. Based on the information theory inference approach we determine the Higgs boson mass as $M_H= 125.04\\pm 0.25$ GeV, a value fully compatible to the LHC measurement. This is straightforwardly obtained by taking the Higgs boson branching ratios as the target probability distributions of the inference, without any extra assumptions beyond the Standard Model. Yet, the principle can be a powerful tool in the construction of any model affecting the Higgs sector. We give, as an example, the case where the Higgs boson has an extra invisible decay channel. Our findings suggest that a system of Higgs bosons undergoing a collective decay to Standard Model particles is among the most fundamental ones where the Maximum Entropy Principle applies.
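Both records above state that the Higgs boson branching ratios are taken as the target probability distributions of a maximum entropy inference. A schematic rendering of that idea, written here under the simplifying assumption that a single-decay Shannon entropy is the quantity being extremized (the papers work with a collective ensemble of decaying Higgs bosons, so the exact functional may differ), is:

\[
S(M_H) = -\sum_i \mathrm{BR}_i(M_H)\,\ln \mathrm{BR}_i(M_H), \qquad \hat{M}_H = \underset{M_H}{\arg\max}\; S(M_H),
\]

where the branching ratios $\mathrm{BR}_i(M_H)$ are computed in the Standard Model as functions of the Higgs mass.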
Martin Wilde, Principal Investigator
2012-12-31T23:59:59.000Z
ABSTRACT: Application of Real-Time Offsite Measurements in Improved Short-Term Wind Ramp Prediction Skill. Improved forecasting performance immediately preceding wind ramp events is of preeminent concern to most wind energy companies, system operators, and balancing authorities. The value of near real-time hub height-level wind data and more general meteorological measurements to short-term wind power forecasting is well understood. For some sites, access to onsite measured wind data - even historical - can reduce forecast error in the short-range to medium-range horizons by as much as 50%. Unfortunately, valuable free-stream wind measurements at tall towers are not typically available at most wind plants, thereby forcing wind forecasters to rely upon wind measurements below hub height and/or turbine nacelle anemometry. Free-stream measurements can be appropriately scaled to hub-height levels, using existing empirically-derived relationships that account for surface roughness and turbulence. But there is large uncertainty in these relationships for a given time of day and state of the boundary layer. Alternatively, forecasts can rely entirely on turbine anemometry measurements, though such measurements are themselves subject to wake effects that are not stationary. The void in free-stream hub-height level measurements of wind can be filled by remote sensing (e.g., sodar, lidar, and radar). However, the expense of such equipment may not be sustainable. There is a growing market for traditional anemometry on tall tower networks, maintained by third parties to the forecasting process (i.e., independent of forecasters and the forecast users). This study examines the value of offsite tall-tower data from the WINDataNOW Technology network for short-horizon wind power predictions at a wind farm in northern Montana. The presentation shall describe successful physical and statistical techniques for its application and the practicality of its application in an operational setting. It shall be demonstrated that when used properly, the real-time offsite measurements materially improve wind ramp capture and prediction statistics, when compared to traditional wind forecasting techniques and to a simple persistence model.
Max '91: flare research at the next solar maximum
Dennis, B.; Canfield, R.; Bruner, M.; Emslie, G.; Hildner, E.; Hudson, H.; Hurford, G.; Lin, R.; Novick, R.; Tarbell, T.
1988-01-01T23:59:59.000Z
To address the central scientific questions surrounding solar flares, coordinated observations of electromagnetic radiation and energetic particles must be made from spacecraft, balloons, rockets, and ground-based observatories. A program to enhance capabilities in these areas in preparation for the next solar maximum in 1991 is recommended. The major scientific issues are described, and required observations and coordination of observations and analyses are detailed. A program plan and conceptual budgets are provided.
Maximum Entry and Mandatory Separation Ages for Certain Security Employees
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2001-10-11T23:59:59.000Z
The policy establishes the DOE policy on maximum entry and mandatory separation ages for primary or secondary positions covered under special statutory retirement provisions and for those employees whose primary duties are the protection of officials of the United States against threats to personal safety or the investigation, apprehension, and detention of individuals suspected or convicted of offenses against the criminal laws of the United States. Admin Chg 1, dated 12-1-11, cancels DOE P 310.1.
Maximum entropy method for reconstruction of the CMB images
A. T. Bajkova
2002-05-21T23:59:59.000Z
We propose a new approach for the accurate reconstruction of cosmic microwave background distributions from observations containing in addition to the primary fluctuations the radiation from unresolved extragalactic point sources and pixel noise. The approach uses some effective realizations of the well-known maximum entropy method and principally takes into account {\\it a priori} information about finiteness and spherical symmetry of the power spectrum of the CMB satisfying the Gaussian statistics.
Occam's Razor Cuts Away the Maximum Entropy Principle
Rudnicki, Łukasz
2014-01-01T23:59:59.000Z
I show that the maximum entropy principle can be replaced by a more natural assumption, that there exists a phenomenological function of entropy consistent with the microscopic model. The requirement of existence provides then a unique construction of the related probability density. I conclude the letter with an axiomatic formulation of the notion of entropy, which is suitable for exploration of the non-equilibrium phenomena.
PNNL: A Supervised Maximum Entropy Approach to Word Sense Disambiguation
Tratz, Stephen C.; Sanfilippo, Antonio P.; Gregory, Michelle L.; Chappell, Alan R.; Posse, Christian; Whitney, Paul D.
2007-06-23T23:59:59.000Z
In this paper, we describe the PNNL Word Sense Disambiguation system as applied to the English All-Word task in SemEval 2007. We use a supervised learning approach, employing a large number of features and using Information Gain for dimension reduction. Our Maximum Entropy approach combined with a rich set of features produced results that are significantly better than baseline and achieve the highest F-score for the fine-grained English All-Words subtask.
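A supervised maximum entropy classifier of the kind named in this abstract is mathematically equivalent to multinomial logistic regression. The sketch below is only a generic illustration of that equivalence using scikit-learn; the context features and sense labels are invented placeholders and do not reproduce the PNNL feature set or its Information Gain reduction step.

```python
# Maximum entropy classification is equivalent to multinomial logistic
# regression; this sketch trains on toy word-sense data for "bank".
from sklearn.feature_extraction import DictVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical context features for two senses of "bank".
train_features = [
    {"prev": "river", "next": "erosion"},
    {"prev": "muddy", "next": "shore"},
    {"prev": "savings", "next": "account"},
    {"prev": "central", "next": "loan"},
]
train_senses = ["bank/geo", "bank/geo", "bank/fin", "bank/fin"]

model = make_pipeline(DictVectorizer(), LogisticRegression(max_iter=1000))
model.fit(train_features, train_senses)
print(model.predict([{"prev": "steep", "next": "erosion"}]))
```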
SIPS: Solar Irradiance Prediction System Stefan Achleitner
Cerpa, Alberto E.
SIPS: Solar Irradiance Prediction System. Stefan Achleitner, Computer Science and Engineering ... scaling capacities of renewable energy sources such as wind and solar. However, variability and uncertainty in power ... potentially limit the impact of fluctuations in solar power generation, specifically on cloudy days when
Beyond Boltzmann-Gibbs statistics: Maximum entropy hyperensembles out-of-equilibrium
Crooks, Gavin E.
2006-01-01T23:59:59.000Z
(1957). J. Skilling, in Maximum Entropy and Bayesian Methods, 45–52. J. Skilling, in Maximum Entropy and Bayesian Methods, ... C. C. Rodriguez, in Maximum Entropy and Bayesian Methods, ...
Deriving the continuity of maximum-entropy basis functions via variational analysis
Sukumar, N.; Wets, R. J. -B.
2007-01-01T23:59:59.000Z
... and V. J. DellaPietra, A maximum entropy approach to natural ...; ... and R. K. Bryan, Maximum entropy image reconstruction: ...; Heidelberg; Continuity of maximum-entropy basis functions, p...
Piecewise training for structured prediction
Sutton, Charles; McCallum, Andrew
2009-01-01T23:59:59.000Z
... margin methods for structured prediction. In International ...; ... accuracy computational gene prediction. PLoS Computational ...; Piecewise training for structured prediction, Charles Sutton · Andrew ...
Thirty-Year Solid Waste Generation Maximum and Minimum Forecast for SRS
Thomas, L.C.
1994-10-01T23:59:59.000Z
This report is the third phase (Phase III) of the Thirty-Year Solid Waste Generation Forecast for Facilities at the Savannah River Site (SRS). Phase I of the forecast, Thirty-Year Solid Waste Generation Forecast for Facilities at SRS, forecasts the yearly quantities of low-level waste (LLW), hazardous waste, mixed waste, and transuranic (TRU) wastes generated over the next 30 years by operations, decontamination and decommissioning and environmental restoration (ER) activities at the Savannah River Site. The Phase II report, Thirty-Year Solid Waste Generation Forecast by Treatability Group (U), provides a 30-year forecast by waste treatability group for operations, decontamination and decommissioning, and ER activities. In addition, a 30-year forecast by waste stream has been provided for operations in Appendix A of the Phase II report. The solid wastes stored or generated at SRS must be treated and disposed of in accordance with federal, state, and local laws and regulations. To evaluate, select, and justify the use of promising treatment technologies and to evaluate the potential impact to the environment, the generic waste categories described in the Phase I report were divided into smaller classifications with similar physical, chemical, and radiological characteristics. These smaller classifications, defined within the Phase II report as treatability groups, can then be used in the Waste Management Environmental Impact Statement process to evaluate treatment options. The waste generation forecasts in the Phase II report includes existing waste inventories. Existing waste inventories, which include waste streams from continuing operations and stored wastes from discontinued operations, were not included in the Phase I report. Maximum and minimum forecasts serve as upper and lower boundaries for waste generation. This report provides the maximum and minimum forecast by waste treatability group for operation, decontamination and decommissioning, and ER activities.
Maximum Achievable Control Technology for New Industrial Boilers (released in AEO2005)
Reports and Publications (EIA)
2005-01-01T23:59:59.000Z
As part of the Clean Air Act Amendments of 1990 (CAAA90), the EPA on February 26, 2004, issued a final rule, the National Emission Standards for Hazardous Air Pollutants (NESHAP), to reduce emissions of hazardous air pollutants (HAPs) from industrial, commercial, and institutional boilers and process heaters. The rule requires industrial boilers and process heaters to meet limits on HAP emissions to comply with a Maximum Achievable Control Technology (MACT) floor level of control, which is the minimum level such sources must meet to comply with the rule. The major HAPs to be reduced are hydrochloric acid, hydrofluoric acid, arsenic, beryllium, cadmium, and nickel. The EPA predicts that the boiler MACT rule will reduce those HAP emissions from existing sources by about 59,000 tons per year in 2005.
Spectral Analysis of Excited Nucleons in Lattice QCD with Maximum Entropy Method
Kiyoshi Sasaki; Shoichi Sasaki; Tetsuo Hatsuda
2005-07-12T23:59:59.000Z
We study the mass spectra of excited baryons with the use of the lattice QCD simulations. We focus our attention on the problem of the level ordering between the positive-parity excited state N'(1440) (the Roper resonance) and the negative-parity excited state N^*(1535). Nearly perfect parity projection is accomplished by combining the quark propagators with periodic and anti-periodic boundary conditions in the temporal direction. Then we extract the spectral functions from the lattice data by utilizing the maximum entropy method. We observe that the masses of the N' and N^* states are close for wide range of the quark masses (M_pi=0.61-1.22 GeV), which is in contrast to the phenomenological prediction of the quark models. The role of the Wilson doublers in the baryonic spectral functions is also studied.
Quantifying extrinsic noise in gene expression using the maximum entropy framework
Purushottam D. Dixit
2013-04-04T23:59:59.000Z
We present a maximum entropy framework to separate intrinsic and extrinsic contributions to noisy gene expression solely from the profile of expression. We express the experimentally accessible probability distribution of the copy number of the gene product (mRNA or protein) by accounting for possible variations in extrinsic factors. The distribution of extrinsic factors is estimated using the maximum entropy principle. Our results show that extrinsic factors qualitatively and quantitatively affect the probability distribution of the gene product. We work out, in detail, the transcription of mRNA from a constitutively expressed promoter in {\\it E. coli}. We suggest that the variation in extrinsic factors may account for the observed {\\it wider than Poisson} distribution of mRNA copy numbers. We successfully test our framework on a numerical simulation of a simple gene expression scheme that accounts for the variation in extrinsic factors. We also make falsifiable predictions, some of which are tested on previous experiments in {\\it E. coli} while others need verification. Application of the current framework to more complex situations is also discussed.
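The construction described in this abstract can be summarized schematically as follows; this is a sketch under the assumption that the constraints are low-order moments of the observed copy-number distribution, and the paper's specific constraints may differ:

\[
P(n) = \int d\theta\, Q(\theta)\, P(n \mid \theta), \qquad
Q = \underset{Q}{\arg\max}\Bigl[-\int d\theta\, Q(\theta)\ln Q(\theta)\Bigr],
\]

subject to normalization of $Q$ and to reproducing selected moments of the measured copy-number distribution, where $\theta$ collects the extrinsic factors and $P(n \mid \theta)$ is the intrinsic (e.g., Poisson-like) distribution at fixed $\theta$.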
Discovery Park Impact NNSA PRISM Center for
Holland, Jeffrey
Discovery Park Impact: NNSA PRISM, Center for Prediction of Reliability, Integrity and Survivability of Microsystems (PRISM). Purdue is one of 5 centers funded under NNSA's Predictive Science Academic Alliance Program ... Computing, a division of Information Technology at Purdue. The NNSA national laboratories will be involved
Prediction of plant species distributions across six Peter B. Pearman,1
Zimmermann, Niklaus E.
LETTER: Prediction of plant species distributions across six millennia. Peter B. Pearman. The usefulness of species distribution models (SDMs) in predicting impacts of climate change on biodiversity ... An alternative way to evaluate the predictive ability of SDMs across time is to compare their predictions
Research Summary Sustainability impact assessment: tools for environmental, social and economic
Research Summary. Sustainability impact assessment: tools for environmental, social and economic ... to produce Sustainability Impact Assessment Tools (SIATs) that will be used to predict the impacts ... and will be used as part of the Impact Assessment (IA) process, as set out in the Impact Assessment Guidelines
Ullmer, Brygg
PREDICTION OF CUTTINGS BED HEIGHT WITH COMPUTATIONAL FLUID DYNAMICS IN DRILLING HORIZONTAL ... parameters such as wellbore geometry, pump rate, drilling fluid rheology and density, and maximum drilling ... Computational Fluid Dynamics methods. Movement, concentration and accumulation of drilled cuttings in non
Better Nonlinear Models from Noisy Data: Attractors with Maximum Likelihood
Patrick E. McSharry; Leonard A. Smith
1999-11-30T23:59:59.000Z
A new approach to nonlinear modelling is presented which, by incorporating the global behaviour of the model, lifts shortcomings of both least squares and total least squares parameter estimates. Although ubiquitous in practice, a least squares approach is fundamentally flawed in that it assumes independent, normally distributed (IND) forecast errors: nonlinear models will not yield IND errors even if the noise is IND. A new cost function is obtained via the maximum likelihood principle; superior results are illustrated both for small data sets and infinitely long data streams.
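To make the contrast in this abstract concrete: a standard least-squares fit of a model $F(s;\theta)$ to an observed series $s_1,\dots,s_N$ minimizes the one-step cost $C_{\mathrm{LS}}$ below, which coincides with a negative log-likelihood only when forecast errors are independent and normally distributed. The cost proposed in the paper replaces it with a likelihood that reflects the model's global (attractor) behaviour; only the generic likelihood form is sketched here, not the paper's specific construction.

\[
C_{\mathrm{LS}}(\theta) = \sum_{t=1}^{N-1}\bigl[s_{t+1} - F(s_t;\theta)\bigr]^2,
\qquad
C_{\mathrm{ML}}(\theta) = -\sum_{t=1}^{N-1}\ln p\bigl(s_{t+1}\mid s_t;\theta\bigr).
\]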
Application of Maximum Entropy Method to Dynamical Fermions
Jonathan Clowser; Costas Strouthos
2001-10-16T23:59:59.000Z
The Maximum Entropy Method is applied to dynamical fermion simulations of the (2+1)-dimensional Nambu-Jona-Lasinio model. This model is particularly interesting because at T=0 it has a broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are resonances, and hence the simple pole assumption of traditional fitting procedures breaks down. We present results extracted from simulations on large lattices for the spectral functions of the elementary fermion, the pion, the sigma, the massive pseudoscalar meson and the symmetric phase resonances.
Excited nucleon spectrum from lattice QCD with maximum entropy method
K. Sasaki; S. Sasaki; T. Hatsuda; M. Asakawa
2003-09-29T23:59:59.000Z
We study excited states of the nucleon in quenched lattice QCD with the spectral analysis using the maximum entropy method. Our simulations are performed on three lattice sizes $16^3\\times 32$, $24^3\\times 32$ and $32^3\\times 32$, at $\\beta=6.0$ to address the finite volume issue. We find a significant finite volume effect on the mass of the Roper resonance for light quark masses. After removing this systematic error, its mass becomes considerably reduced toward the direction to solve the level order puzzle between the Roper resonance $N'(1440)$ and the negative-parity nucleon $N^*(1535)$.
Letham, Benjamin
In sequential event prediction, we are given a “sequence database” of past event sequences to learn from, and we aim to predict the next event within a current event sequence. We focus on applications where the set of the ...
Universal Prediction Neri Merhav
Merhav, Neri
Universal Prediction. Neri Merhav, Meir Feder. July 23, 1998. Abstract: This paper consists of an overview on universal prediction from an information-theoretic perspective. Special attention is given to ... The probabilistic and deterministic settings of the universal prediction problem are described with emphasis on the analogy and the differences between results
Identification in Prediction Theory
Bielefeld, University of
Identification in Prediction Theory. Lars Bäumer, Bielefeld 2000. Acknowledgment: I wish to thank ... remarks. Contents: 1 Introduction; 2 Finite-State Predictability; 2.1 A Universal Predictor; ... Predictability and Identifiability; 3.3 Markov Machines for Identification
Probable maximum flood control; Yucca Mountain Site Characterization Project
DeGabriele, C.E.; Wu, C.L. [Bechtel National, Inc., San Francisco, CA (United States)
1991-11-01T23:59:59.000Z
This study proposes preliminary design concepts to protect the waste-handling facilities and all shaft and ramp entries to the underground from the probable maximum flood (PMF) in the current design configuration for the proposed Nevada Nuclear Waste Storage Investigation (NNWSI) repository. Protection provisions were furnished by the United States Bureau of Reclamation (USBR) or developed from USBR data. Proposed flood protection provisions include site grading, drainage channels, and diversion dikes. Figures are provided to show these proposed flood protection provisions at each area investigated. These areas are the central surface facilities (including the waste-handling building and waste treatment building), tuff ramp portal, waste ramp portal, men-and-materials shaft, emplacement exhaust shaft, and exploratory shafts facility.
Maximum Margin Clustering for State Decomposition of Metastable Systems
Wu, Hao
2015-01-01T23:59:59.000Z
When studying a metastable dynamical system, a prime concern is how to decompose the phase space into a set of metastable states. Unfortunately, the metastable state decomposition based on simulation or experimental data is still a challenge. The most popular and simplest approach is geometric clustering which is developed based on the classical clustering technique. However, the prerequisites of this approach are: (1) data are obtained from simulations or experiments which are in global equilibrium and (2) the coordinate system is appropriately selected. Recently, the kinetic clustering approach based on phase space discretization and transition probability estimation has drawn much attention due to its applicability to more general cases, but the choice of discretization policy is a difficult task. In this paper, a new decomposition method designated as maximum margin metastable clustering is proposed, which converts the problem of metastable state decomposition to a semi-supervised learning problem so that...
Efficiency at maximum power of a chemical engine
Hooyberghs, Hans; Salazar, Alberto; Indekeu, Joseph O; Broeck, Christian Van den
2013-01-01T23:59:59.000Z
A cyclically operating chemical engine is considered that converts chemical energy into mechanical work. The working fluid is a gas of finite-sized spherical particles interacting through elastic hard collisions. For a generic transport law for particle uptake and release, the efficiency at maximum power $\eta$ takes the form $1/2 + c\,\Delta\mu + O(\Delta\mu^2)$, with $1/2$ a universal constant and $\Delta\mu$ the chemical potential difference between the particle reservoirs. The linear coefficient $c$ is zero for engines featuring a so-called left/right symmetry or particle fluxes that are antisymmetric in the applied chemical potential difference. Remarkably, the leading constant in $\eta$ is non-universal with respect to an exceptional modification of the transport law. For a nonlinear transport model we obtain $\eta = 1/(\theta + 1)$, with $\theta > 0$ the power of $\Delta\mu$ in the transport equation
Reduction in maximum time uncertainty of paired time signals
Theodosiou, George E. (West Chicago, IL); Dawson, John W. (Clarendon Hills, IL)
1983-01-01T23:59:59.000Z
Reduction in the maximum time uncertainty ($t_{max} - t_{min}$) of a series of paired time signals $t_1$ and $t_2$ varying between two input terminals and representative of a series of single events, where $t_1 \le t_2$ and $t_1 + t_2$ equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value ($t_{min}$) of the first signal $t_1$ closer to $t_{max}$ and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20-800.
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1983-10-04T23:59:59.000Z
Reduction in the maximum time uncertainty ($t_{max} - t_{min}$) of a series of paired time signals $t_1$ and $t_2$ varying between two input terminals and representative of a series of single events, where $t_1 \le t_2$ and $t_1 + t_2$ equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value ($t_{min}$) of the first signal $t_1$ closer to $t_{max}$ and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20-800. 6 figs.
Reduction in maximum time uncertainty of paired time signals
Theodosiou, G.E.; Dawson, J.W.
1981-02-11T23:59:59.000Z
Reduction in the maximum time uncertainty ($t_{max} - t_{min}$) of a series of paired time signals $t_1$ and $t_2$ varying between two input terminals and representative of a series of single events, where $t_1 \le t_2$ and $t_1 + t_2$ equals a constant, is carried out with a circuit utilizing a combination of OR and AND gates as signal selecting means and one or more time delays to increase the minimum value ($t_{min}$) of the first signal $t_1$ closer to $t_{max}$ and thereby reduce the difference. The circuit may utilize a plurality of stages to reduce the uncertainty by factors of 20 to 800.
Improved Maximum Entropy Analysis with an Extended Search Space
Alexander Rothkopf
2013-01-07T23:59:59.000Z
The standard implementation of the Maximum Entropy Method (MEM) follows Bryan and deploys a Singular Value Decomposition (SVD) to limit the dimensionality of the underlying solution space a priori. Here we present arguments based on the shape of the SVD basis functions and numerical evidence from a mock data analysis, which show that the correct Bayesian solution is not in general recovered with this approach. As a remedy we propose to extend the search basis systematically, which will eventually recover the full solution space and the correct solution. In order to adequately approach problems where an exponentially damped kernel is used, we provide an open-source C/C++ implementation that utilizes high-precision arithmetic adjustable at run time. The LBFGS algorithm is included in the code in order to attack problems without the need to resort to a particular search-space restriction.
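For orientation, the quantity that an MEM reconstruction of this kind extremizes is a regularized objective combining the Shannon-Jaynes entropy relative to the default model m(omega) with the data misfit. The sketch below is a toy, unrestricted-basis illustration with invented mock data; it does not reproduce Bryan's SVD subspace or the extended search basis proposed in the paper, and the single fixed alpha stands in for the full Bayesian treatment of that parameter.

```python
# Schematic MEM objective: maximize Q = alpha*S - L over a discretized
# spectral function rho(omega), where S is the Shannon-Jaynes entropy
# relative to a prior m(omega) and L is the chi-square misfit to the data.
# Kernel, data, and alpha are toy placeholders.
import numpy as np
from scipy.optimize import minimize

omega = np.linspace(0.1, 5.0, 60)       # frequency grid
tau = np.linspace(0.0, 3.0, 20)         # "imaginary time" grid
kernel = np.exp(-np.outer(tau, omega))  # exponentially damped kernel
m = np.full_like(omega, 0.1)            # flat default model (prior)
true_rho = np.exp(-((omega - 2.0) ** 2) / 0.1)
data = kernel @ true_rho + 1e-3 * np.random.randn(len(tau))
sigma = 1e-3
alpha = 1.0

def neg_Q(rho):
    rho = np.clip(rho, 1e-12, None)
    entropy = np.sum(rho - m - rho * np.log(rho / m))
    chi2 = 0.5 * np.sum(((kernel @ rho - data) / sigma) ** 2)
    return -(alpha * entropy - chi2)

result = minimize(neg_Q, x0=m.copy(), method="L-BFGS-B",
                  bounds=[(1e-12, None)] * len(omega))
print("reconstructed peak near omega =", omega[np.argmax(result.x)])
```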
Quantum maximum entropy principle for a system of identical particles
Trovato, M. [Dipartimento di Matematica, Universita di Catania, Viale A. Doria, 95125 Catania (Italy); Reggiani, L. [Dipartimento di Ingegneria dell' Innovazione and CNISM, Universita del Salento, Via Arnesano s/n, 73100 Lecce (Italy)
2010-02-15T23:59:59.000Z
By introducing a functional of the reduced density matrix, we generalize the definition of a quantum entropy which incorporates the indistinguishability principle of a system of identical particles. With the present definition, the principle of quantum maximum entropy permits us to solve the closure problem for a quantum hydrodynamic set of balance equations corresponding to an arbitrary number of moments in the framework of extended thermodynamics. The determination of the reduced Wigner function for equilibrium and nonequilibrium conditions is found to become possible only by assuming that the Lagrange multipliers can be expanded in powers of $\hbar^2$. Quantum contributions are expressed in powers of $\hbar^2$ while classical results are recovered in the limit $\hbar \to 0$.
A maximum entropy framework for non-exponential distributions
Peterson, Jack; Dill, Ken A
2015-01-01T23:59:59.000Z
Probability distributions having power-law tails are observed in a broad range of social, economic, and biological systems. We describe here a potentially useful common framework. We derive distribution functions $\\{p_k\\}$ for situations in which a `joiner particle' $k$ pays some form of price to enter a `community' of size $k-1$, where costs are subject to economies-of-scale (EOS). Maximizing the Boltzmann-Gibbs-Shannon entropy subject to this energy-like constraint predicts a distribution having a power-law tail; it reduces to the Boltzmann distribution in the absence of EOS. We show that the predicted function gives excellent fits to 13 different distribution functions, ranging from friendship links in social networks, to protein-protein interactions, to the severity of terrorist attacks. This approach may give useful insights into when to expect power-law distributions in the natural and social sciences.
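A worked special case may help fix ideas. Assuming, purely for illustration, that the economies-of-scale cost of joining a community of size $k$ is logarithmic, $\varepsilon_k = \mu \ln k$ (the paper derives the general case), maximizing the Boltzmann-Gibbs-Shannon entropy under that constraint gives a power-law tail:

\[
\max_{\{p_k\}} \Bigl[-\sum_k p_k \ln p_k\Bigr]
\quad \text{s.t.} \quad \sum_k p_k = 1,\;\; \sum_k p_k\,\varepsilon_k = \langle\varepsilon\rangle
\;\;\Longrightarrow\;\;
p_k \propto e^{-\lambda\varepsilon_k} = k^{-\lambda\mu},
\]

whereas a cost linear in $k$ returns the usual exponential (Boltzmann) form.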
Dynamic Prediction of Concurrency Errors
Sadowski, Caitlin
2012-01-01T23:59:59.000Z
Contents excerpt: ... Relation; Must-Before Race Prediction; Implementation. Abstract: Dynamic Prediction of Concurrency Errors by ... SANTA CRUZ. DYNAMIC PREDICTION OF CONCURRENCY ERRORS. A ...
Predictive energy management for hybrid electric vehicles -Prediction horizon and
Paris-Sud XI, Université de
Predictive energy management for hybrid electric vehicles - Prediction horizon and battery capacity ... of a combined hybrid electric vehicle. Keywords: Hybrid vehicles, Energy Management, Predictive control, Optimal ... predictive energy management realistic. This energy management strategy uses a dynamic programming algorithm
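Since the abstract names a dynamic programming algorithm without detail, the following toy sketch shows how such a backward recursion over a discretized battery state of charge might look; the demand profile, battery size, and fuel-cost model are invented placeholders and do not represent the strategy in the paper.

```python
# Toy dynamic-programming energy split for a hybrid vehicle over a short
# prediction horizon. All numbers are assumed values for illustration only.
import numpy as np

demand = [20.0, 35.0, 15.0, 40.0]         # kW demanded at each step (assumed)
soc_grid = np.linspace(0.3, 0.8, 26)       # discretized state of charge
battery_kwh = 1.5
dt_h = 10.0 / 3600.0                       # 10-second steps, in hours

def fuel_cost(engine_kw):
    return 0.08 * engine_kw + 0.001 * engine_kw ** 2   # toy convex cost

# value[i] = minimal fuel cost-to-go starting from SOC soc_grid[i]
value = np.zeros_like(soc_grid)
for p_dem in reversed(demand):
    new_value = np.full_like(value, np.inf)
    for i, soc in enumerate(soc_grid):
        for j, soc_next in enumerate(soc_grid):
            batt_kw = (soc - soc_next) * battery_kwh / dt_h   # discharge > 0
            engine_kw = p_dem - batt_kw
            if engine_kw < 0:        # no negative engine power in this toy
                continue
            cost = fuel_cost(engine_kw) + value[j]
            new_value[i] = min(new_value[i], cost)
    value = new_value

print("cost-to-go at SOC = 0.5:", value[np.argmin(abs(soc_grid - 0.5))])
```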
Bullard, K.L.
1994-08-01T23:59:59.000Z
The US Geological Survey (USGS), as part of the Yucca Mountain Project (YMP), is conducting studies at Yucca Mountain, Nevada. The purposes of these studies are to provide hydrologic and geologic information to evaluate the suitability of Yucca Mountain for development as a high-level nuclear waste repository, and to evaluate the ability of the mined geologic disposal system (MGDS) to isolate the waste in compliance with regulatory requirements. In particular, the project is designed to acquire information necessary for the Department of Energy (DOE) to demonstrate in its environmental impact statement (EIS) and license application whether the MGDS will meet the requirements of federal regulations 10 CFR Part 60, 10 CFR Part 960, and 40 CFR Part 191. Complete study plans for this part of the project were prepared by the USGS and approved by the DOE in August and September of 1990. The US Bureau of Reclamation (Reclamation) was selected by the USGS as a contractor to provide probable maximum flood (PMF) magnitudes and associated inundation maps for preliminary engineering design of the surface facilities at Yucca Mountain. These PMF peak flow estimates are necessary for successful waste repository design and construction. The PMF technique was chosen for two reasons: (1) this technique complies with ANSI requirements that PMF technology be used in the design of nuclear-related facilities (ANSI/ANS, 1981), and (2) the PMF analysis has become a commonly used technology to predict a "worst possible case" flood scenario. For this PMF study, probable maximum precipitation (PMP) values were obtained for a local storm (thunderstorm) PMP event. These values were determined from the National Weather Service's Hydrometeorological Report No. 49 (HMR 49).
Annette Schafer, Arthur S. Rood, A. Jeffrey Sondrup
2011-12-23T23:59:59.000Z
Groundwater impacts have been analyzed for the proposed remote-handled low-level waste disposal facility. The analysis was prepared to support the National Environmental Policy Act environmental assessment for the top two ranked sites for the proposed disposal facility. A four-phase screening and analysis approach was documented and applied. Phase I screening was site independent and applied a radionuclide half-life cut-off of 1 year. Phase II screening applied the National Council on Radiation Protection analysis approach and was site independent. Phase III screening used a simplified transport model and site-specific geologic and hydrologic parameters. Phase III neglected the infiltration-reducing engineered cover, the sorption influence of the vault system, dispersion in the vadose zone, vertical dispersion in the aquifer, and the release of radionuclides from specific waste forms. These conservatisms were relaxed in the Phase IV analysis which used a different model with more realistic parameters and assumptions. Phase I screening eliminated 143 of the 246 radionuclides in the inventory from further consideration because each had a half-life less than 1 year. An additional 13 were removed because there was no ingestion dose coefficient available. Of the 90 radionuclides carried forward from Phase I, 57 radionuclides had simulated Phase II screening doses exceeding 0.4 mrem/year. Phase III and IV screening compared the maximum predicted radionuclide concentration in the aquifer to maximum contaminant levels. Of the 57 radionuclides carried forward from Phase II, six radionuclides were identified in Phase III as having simulated future aquifer concentrations exceeding maximum contaminant limits. An additional seven radionuclides had simulated Phase III groundwater concentrations exceeding 1/100th of their respective maximum contaminant levels and were also retained for Phase IV analysis. The Phase IV analysis predicted that none of the thirteen remaining radionuclides would exceed the maximum contaminant levels for either site location. The predicted cumulative effective dose equivalent from all 13 radionuclides also was less than the dose criteria set forth in Department of Energy Order 435.1 for each site location. An evaluation of composite impacts showed one site is preferable over the other based on the potential for commingling of groundwater contamination with other facilities.
Savannah River Site radioiodine atmospheric releases and offsite maximum doses
Marter, W.L.
1990-11-01T23:59:59.000Z
Radioisotopes of iodine have been released to the atmosphere from the Savannah River Site since 1955. The releases, mostly from the 200-F and 200-H Chemical Separations areas, consist of the isotopes I-129 and I-131. Small amounts of I-131 and I-133 have also been released from reactor facilities and the Savannah River Laboratory. This reference memorandum was issued to summarize our current knowledge of releases of radioiodines and resultant maximum offsite doses. This memorandum supplements the reference memorandum by providing more detailed supporting technical information. Doses reported in this memorandum from consumption of the milk containing the highest I-131 concentration following the 1961 I-131 release incident are about 1% higher than reported in the reference memorandum. This is the result of using unrounded I-131 concentrations in milk in this memo. It is emphasized here that this technical report does not constitute a dose reconstruction in the same sense as the dose reconstruction effort currently underway at Hanford. This report uses existing published data for radioiodine releases and existing transport and dosimetry models.
LANDFILL OPERATION FOR CARBON SEQUESTRATION AND MAXIMUM METHANE EMISSION CONTROL
Don Augenstein
2001-02-01T23:59:59.000Z
The work described in this report, to demonstrate and advance this technology, has used two demonstration-scale cells of a size (8,000 metric tons [tonnes]) sufficient to replicate many heat and compaction characteristics of larger "full-scale" landfills. An enhanced demonstration cell has received moisture supplementation to field capacity. This is the maximum moisture waste can hold while still limiting liquid drainage rate to minimal and safely manageable levels. The enhanced landfill module was compared to a parallel control landfill module receiving no moisture additions. Gas recovery has continued for a period of over 4 years. It is quite encouraging that the enhanced cell methane recovery has been close to 10-fold that experienced with conventional landfills. This is the highest methane recovery rate per unit waste, and thus progress toward stabilization, documented anywhere for such a large waste mass. This high recovery rate is attributed to moisture and elevated temperature attained inexpensively during startup. Economic analyses performed under Phase I of this NETL contract indicate "greenhouse cost effectiveness" to be excellent. Other benefits include substantial waste volume loss (over 30%), which translates to extended landfill life. Other environmental benefits include rapidly improved quality and stabilization (lowered pollutant levels) in liquid leachate which drains from the waste.
Maximum Entropy Analysis of the Spectral Functions in Lattice QCD
M. Asakawa; T. Hatsuda; Y. Nakahara
2001-02-26T23:59:59.000Z
First principle calculation of the QCD spectral functions (SPFs) based on the lattice QCD simulations is reviewed. Special emphasis is placed on the Bayesian inference theory and the Maximum Entropy Method (MEM), which is a useful tool to extract SPFs from the imaginary-time correlation functions numerically obtained by the Monte Carlo method. Three important aspects of MEM are (i) it does not require a priori assumptions or parametrizations of SPFs, (ii) for given data, a unique solution is obtained if it exists, and (iii) the statistical significance of the solution can be quantitatively analyzed. The ability of MEM is explicitly demonstrated by using mock data as well as lattice QCD data. When applied to lattice data, MEM correctly reproduces the low-energy resonances and shows the existence of high-energy continuum in hadronic correlation functions. This opens up various possibilities for studying hadronic properties in QCD beyond the conventional way of analyzing the lattice data. Future problems to be studied by MEM in lattice QCD are also summarized.
Improved Maximum Entropy Method with an Extended Search Space
Alexander Rothkopf
2012-08-25T23:59:59.000Z
We report on an improvement to the implementation of the Maximum Entropy Method (MEM). It amounts to departing from the search space obtained through a singular value decomposition (SVD) of the Kernel. Based on the shape of the SVD basis functions we argue that the MEM spectrum for given $N_\\tau$ data-points $D(\\tau)$ and prior information $m(\\omega)$ does not in general lie in this $N_\\tau$ dimensional singular subspace. Systematically extending the search basis will eventually recover the full search space and the correct extremum. We illustrate this idea through a mock data analysis inspired by actual lattice spectra, to show where our improvement becomes essential for the success of the MEM. To remedy the shortcomings of Bryan's SVD prescription we propose to use the real Fourier basis, which consists of trigonometric functions. Not only does our approach lead to more stable numerical behavior, as the SVD is not required for the determination of the basis functions, but also the resolution of the MEM becomes independent from the position of the reconstructed peaks.
Maximum entropy detection of planets around active stars
Petit, P; Hébrard, E; Morin, J; Folsom, C P; Böhm, T; Boisse, I; Borgniet, S; Bouvier, J; Delfosse, X; Hussain, G; Jeffers, S V; Marsden, S C; Barnes, J R
2015-01-01T23:59:59.000Z
(shortened for arXiv) We aim to progress towards more efficient exoplanet detection around active stars by optimizing the use of Doppler Imaging in radial velocity measurements. We propose a simple method to simultaneously extract a brightness map and a set of orbital parameters through a tomographic inversion technique derived from classical Doppler mapping. Based on the maximum entropy principle, the underlying idea is to determine the set of orbital parameters that minimizes the information content of the resulting Doppler map. We carry out a set of numerical simulations to perform a preliminary assessment of the robustness of our method, using an actual Doppler map of the very active star HR 1099 to produce a realistic synthetic data set for various sets of orbital parameters of a single planet in a circular orbit. Using a simulated time-series of 50 line profiles affected by a peak-to-peak activity jitter of 2.5 km/s, we are able in most cases to recover the radial velocity amplitude, orbital phase and o...
Predictability and Diagnosis of Low-Frequency Climate Processes in the Pacific
Dr. Arthur J. Miller
2008-10-15T23:59:59.000Z
Predicting the climate for the coming decades requires understanding both natural and anthropogenically forced climate variability. This variability is important because it has major societal impacts, for example by causing floods or droughts on land or altering fishery stocks in the ocean. Our results fall broadly into three topics: evaluating global climate model predictions; regional impacts of climate changes over western North America; and regional impacts of climate changes over the eastern North Pacific Ocean.
Maximum Power Point Tracking Control for Photovoltaic System Using Adaptive Neuro-Fuzzy
Paris-Sud XI, Université de
Maximum Power Point Tracking Control for Photovoltaic System Using Adaptive Neuro-Fuzzy "ANFIS" ... energy demand. The mathematical modeling and simulation of the photovoltaic system is implemented ... like ANFIS. This paper presents Maximum Power Point Tracking Control for a Photovoltaic System Using
Setting the Renormalization Scale in QCD: The Principle of Maximum Conformality
Brodsky, Stanley J.; /SLAC /Southern Denmark U., CP3-Origins; Di Giustino, Leonardo; /SLAC
2011-08-19T23:59:59.000Z
A key problem in making precise perturbative QCD predictions is the uncertainty in determining the renormalization scale $\mu$ of the running coupling $\alpha_s(\mu^2)$: The purpose of the running coupling in any gauge theory is to sum all terms involving the $\beta$ function; in fact, when the renormalization scale is set properly, all non-conformal $\beta \neq 0$ terms in a perturbative expansion arising from renormalization are summed into the running coupling. The remaining terms in the perturbative series are then identical to that of a conformal theory; i.e., the corresponding theory with $\beta = 0$. The resulting scale-fixed predictions using the 'principle of maximum conformality' (PMC) are independent of the choice of renormalization scheme - a key requirement of renormalization group invariance. The results avoid renormalon resummation and agree with QED scale-setting in the Abelian limit. The PMC is also the theoretical principle underlying the BLM procedure, commensurate scale relations between observables, and the scale-setting method used in lattice gauge theory. The number of active flavors $n_f$ in the QCD $\beta$ function is also correctly determined. We discuss several methods for determining the PMC/BLM scale for QCD processes. We show that a single global PMC scale, valid at leading order, can be derived from basic properties of the perturbative QCD cross section. The elimination of the renormalization scheme ambiguity using the PMC will not only increase the precision of QCD tests, but it will also increase the sensitivity of collider experiments to new physics beyond the Standard Model.
A Maximum Entropy Algorithm for Rhythmic Analysis of Genome-Wide Expression Patterns
Richardson, David
A Maximum Entropy Algorithm for Rhythmic Analysis of Genome-Wide Expression Patterns. Christopher James Langmead, C. Robertson McClung, Bruce Randall Donald. Abstract: We introduce a maximum entropy-based spectral analysis ... maximum entropy spectral reconstruction is well suited to signals of the type generated
1 A MAXIMUM ENTROPY METHOD FOR SUBNETWORK ORIGIN-DESTINATION 2 TRIP MATRIX ESTIMATION
Kockelman, Kara M.
A MAXIMUM ENTROPY METHOD FOR SUBNETWORK ORIGIN-DESTINATION TRIP MATRIX ESTIMATION. Chi Xie ... Keywords: maximum entropy, linearization algorithm, column generation ... is the trip matrix of the simplified network. This paper discusses a maximum entropy method
Maximum entropy and Bayesian approaches to the ratio problem Edward Z. Shen*
Perloff, Jeffrey M.
Maximum entropy and Bayesian approaches to the ratio problem. Edward Z. Shen, Jeffrey M. Perloff. January 2001. Abstract: Maximum entropy and Bayesian approaches provide superior estimates of a ratio ... extra information in the supports for the underlying parameters for generalized maximum entropy (GME
Perloff, Jeffrey M.
Comparison of Maximum Entropy and Higher-Order Entropy Estimators. Amos Golan and Jeffrey M. Perloff. ABSTRACT: We show that the generalized maximum entropy (GME) is the only estimation method ... classes of estimators may outperform the GME estimation rule. Keywords: generalized entropy, maximum
A maximum entropy-least squares estimator for elastic origin-destination trip matrix estimation
Kockelman, Kara M.
A maximum entropy-least squares estimator for elastic origin-destination trip matrix estimation ... propose a combined maximum entropy-least squares (ME-LS) estimator, by which O-D flows are distributed ... Keywords: origin-destination trip table; elastic demand; maximum entropy; least squares; subnetwork analysis; convex combination
Earthquake prediction: Simple methods for complex phenomena
Luen, Bradley
2010-01-01T23:59:59.000Z
Contents excerpt: ... and predictions; 6.1 Assessing models and predictions; What are earthquake predictions and forecasts? ...
Brauner, Neima
PREDICTION OF TEMPERATURE-DEPENDENT PROPERTIES BY CORRELATIONS BASED ON SIMILARITY OF MOLECULAR ... and environmental impact assessment, hazard and operability analysis. Therefore, methods for reliable prediction of property data are needed. In particular, prediction of temperature-dependent properties (like vapor
3D Rigid Body Impact Burial Prediction Model
Chu, Peter C.
Earth-fixed coordinate (E-coordinate); cylinder's main-axis following coordinate (M-coordinate); hydrodynamic force coordinate. Hydrodynamic forces (drag and lift) are easily calculated. Moment of Momentum Equations. Interfacial ... Experiment; Hydrodynamic Model Development; Behavior of Falling Cylinder in Water Column (Chaotic
A Workshop to Identify Research Needs and Impacts in Predictive...
Broader source: Energy.gov (indexed) [DOE]
Predicting High Impact Academic Papers Using Citation Network Features
Christen, Peter
strategies to remain competitive on a global scale. The utilisation of data mining techniques to make ... R&D to the same extent as other economic powerhouses to take advantage of being 'the first mover'; with the development of insightful predictive analytics over a range of data sources, it can become an 'early adopter'
Prediction of future fifteen solar cycles
K. M. Hiremath
2007-04-11T23:59:59.000Z
In the previous study (Hiremath 2006a), the solar cycle is modeled as a forced and damped harmonic oscillator and from all the 22 cycles (1755-1996), long-term amplitudes, frequencies, phases and decay factor are obtained. Using these physical parameters of the previous 22 solar cycles and by an {\\em autoregressive model}, we predict the amplitude and period of the future fifteen solar cycles. Predicted amplitude of the present solar cycle (23) matches very well with the observations. The period of the present cycle is found to be 11.73 years. With these encouraging results, we also predict the profiles of future 15 solar cycles. Important predictions are : (i) the period and amplitude of the cycle 24 are 9.34 years and 110 ($\\pm 11$), (ii) the period and amplitude of the cycle 25 are 12.49 years and 110 ($\\pm$ 11), (iii) during the cycles 26 (2030-2042 AD), 27 (2042-2054 AD), 34 (2118-2127 AD), 37 (2152-2163 AD) and 38 (2163-2176 AD), the sun might experience a very high sunspot activity, (iv) the sun might also experience a very low (around 60) sunspot activity during cycle 31 (2089-2100 AD) and, (v) length of the solar cycles vary from 8.65 yrs for the cycle 33 to maximum of 13.07 yrs for the cycle 35.
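As a purely illustrative companion to the autoregressive approach mentioned in the abstract, the sketch below fits a low-order AR model to a sequence of cycle amplitudes and extrapolates it; the amplitude values and the AR order are invented placeholders, not the paper's data or model.

```python
# Toy autoregressive (AR) extrapolation of solar-cycle amplitudes.
# All numbers below are assumed values used only to show the AR mechanics.
import numpy as np

amplitudes = np.array([110., 140., 95., 155., 120., 80., 160., 130., 105., 150.])
p = 2                                         # AR order (assumed)

# Build the least-squares system a_t = c + w1*a_{t-1} + w2*a_{t-2}
rows = [[1.0] + list(amplitudes[t - p:t][::-1]) for t in range(p, len(amplitudes))]
X = np.array(rows)
y = amplitudes[p:]
coef, *_ = np.linalg.lstsq(X, y, rcond=None)

# Predict the next few cycles by feeding forecasts back in.
history = list(amplitudes)
for _ in range(3):
    lags = history[-p:][::-1]                 # [a_{t-1}, a_{t-2}]
    history.append(coef[0] + float(np.dot(coef[1:], lags)))

print("next predicted amplitudes:", [round(v, 1) for v in history[-3:]])
```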
Predicting the Operating Behavior of Ceramic Filters from Thermo-Mechanical Ash Properties
Hemmer, G.; Kasper, G.
2002-09-19T23:59:59.000Z
Stable operation, in other words the achievement of a succession of uniform filtration cycles of reasonable length, is a key issue in high-temperature gas filtration with ceramic media. Its importance has grown in recent years, as these media gain in acceptance due to their excellent particle retention capabilities. Ash properties have been known for some time to affect the maximum operating temperature of filters. However, softening and consequently "stickiness" of the ash particles generally depend on composition in a complex way. Simple and accurate prediction of critical temperature ranges from ash analysis, and even more so from coal analysis, is still difficult without practical and costly trials. In general, our understanding of what exactly happens during break-down of filtration stability is still rather crude and general. Early work was based on the concept that ash particles begin to soften and sinter near the melting temperatures of low-melting, often alkaline components. This softening coincides with a fairly abrupt increase of stickiness that can be detected with powder mechanical methods in a Jenicke shear cell, as first shown by Pilz (1996) and recently confirmed by others (Kamiya et al. 2001 and 2002, Kanaoka et al. 2001). However, recording $\sigma$-$\tau$ diagrams is very time consuming and not the only off-line method of analyzing or predicting changes in thermo-mechanical ash behavior. Pilz found that the increase in ash stickiness near melting was accompanied by shrinkage attributed to sintering. Recent work at the University of Karlsruhe has expanded the use of such thermo-analytical methods for predicting filtration behavior (Hemmer 2001). Demonstrating their effectiveness is one objective of this paper. Finally, our intent is to show that ash softening at near-melting temperatures is apparently not the only phenomenon causing problems with filtration, although its impact is certainly the "final catastrophe". There are other significant changes in regeneration at intermediate temperatures, which may lead to long-term deterioration.
OPTIMIZED FUEL INJECTOR DESIGN FOR MAXIMUM IN-FURNACE NOx REDUCTION AND MINIMUM UNBURNED CARBON
A.F. SAROFIM; BROWN UNIVERSITY. R.A. LISAUSKAS; D.B. RILEY, INC.; E.G. EDDINGS; J. BROUWER; J.P. KLEWICKI; K.A. DAVIS; M.J. BOCKELIE; M.P. HEAP; REACTION ENGINEERING INTERNATIONAL. D.W. PERSHING; UNIVERSITY OF UTAH. R.H. HURT
1998-01-01T23:59:59.000Z
Reaction Engineering International (REI) has established a project team of experts to develop a technology for combustion systems which will minimize NOx emissions and minimize carbon in the fly ash. This much-needed technology will allow users to meet environmental compliance and produce a saleable by-product. This study is concerned with the NOx control technology of choice for pulverized coal-fired boilers, "in-furnace NOx control," which includes: staged low-NOx burners, reburning, selective non-catalytic reduction (SNCR) and hybrid approaches (e.g., reburning with SNCR). The program has two primary objectives: 1) To improve the performance of "in-furnace" NOx control processes. 2) To devise new, or improve existing, approaches for maximum "in-furnace" NOx control and minimum unburned carbon. The program involves: 1) fundamental studies at laboratory- and bench-scale to define NO reduction mechanisms in flames and reburning jets; 2) laboratory experiments and computer modeling to improve our two-phase mixing predictive capability; 3) evaluation of commercial low-NOx burner fuel injectors to develop improved designs; and 4) demonstration of coal injectors for reburning and low-NOx burners at commercial scale. The specific objectives of the two-phase program are to: 1) conduct research to better understand the interaction of heterogeneous chemistry and two-phase mixing on NO reduction processes in pulverized coal combustion; 2) improve our ability to predict combusting coal jets by verifying two-phase mixing models under conditions that simulate the near field of low-NOx burners; 3) determine the limits on NO control by in-furnace NOx control technologies as a function of furnace design and coal type; 5) develop and demonstrate improved coal injector designs for commercial low-NOx burners and coal reburning systems; and 6) modify the char burnout model in REI's coal combustion code to take account of recently obtained fundamental data on char reactivity during the late stages of burnout. This will improve our ability to predict carbon burnout with low-NOx firing systems.
Large scale disease prediction
Schmid, Patrick R. (Patrick Raphael)
2008-01-01T23:59:59.000Z
The objective of this thesis is to present the foundation of an automated large-scale disease prediction system. Unlike previous work that has typically focused on a small self-contained dataset, we explore the possibility ...
Predicting recreation priorities
Hunt, Kindal Alayne
2003-01-01T23:59:59.000Z
was to determine the relative contribution of these factors for predicting recreation priorities. Data was collected using a self-administered questionnaire distributed to all households at Fort Hood military housing post in Texas. The questionnaire was designed...
The myth of science-based predictive modeling.
Hemez, F. M. (François M.)
2004-01-01T23:59:59.000Z
A key aspect of science-based predictive modeling is the assessment of prediction credibility. This publication argues that the credibility of a family of models and their predictions must combine three components: (1) the fidelity of predictions to test data; (2) the robustness of predictions to variability, uncertainty, and lack-of-knowledge; and (3) the prediction accuracy of models in cases where measurements are not available. Unfortunately, these three objectives are antagonistic. A recently published Theorem that demonstrates the irrevocable trade-offs between fidelity-to-data, robustness-to-uncertainty, and confidence in prediction is summarized. High-fidelity models cannot be made increasingly robust to uncertainty and lack-of-knowledge. Similarly, robustness-to-uncertainty can only be improved at the cost of reducing the confidence in prediction. The concept of confidence in prediction relies on a metric for total uncertainty, capable of aggregating different representations of uncertainty (probabilistic or not). The discussion is illustrated with an engineering application where a family of models is developed to predict the acceleration levels obtained when impacts of varying levels propagate through layers of crushable hyper-foam material of varying thicknesses. Convex modeling is invoked to represent a severe lack-of-knowledge about the constitutive material behavior. The analysis produces intervals of performance metrics from which the total uncertainty and confidence levels are estimated. Finally, performance, robustness and confidence are extrapolated throughout the validation domain to assess the predictive power of the family of models away from tested configurations.
Global Health and Economic Impacts of Future Ozone Pollution
Webster, Mort D.
We assess the human health and economic impacts of projected 2000-2050 changes in ozone pollution using the MIT Emissions Prediction and Policy Analysis-Health Effects (EPPA-HE) model, in combination with results from the ...
Letschert, Virginie
2010-01-01T23:59:59.000Z
Administration, Annual Energy Outlook 2010 with Projections ... is taken from the Annual Energy Outlook (AEO) 2010 (DOE/EIA-
Letschert, Virginie
2010-01-01T23:59:59.000Z
central air conditioners, water heaters and furnaces) are ... air conditioners, water heaters and furnaces) Unregulated ... 2. Water Heaters: DOE has issued a final
Sun, Y.; Buscheck, T. A.; Lee, K. H.; Hao, Y.; James, S. C.
2010-01-01T23:59:59.000Z
Characterization of hydrogeologic units using matrix properties, Yucca Mountain, Nevada, U.S. Geological
Letschert, Virginie
2010-01-01T23:59:59.000Z
air conditioners, water heaters and furnaces) are modeled ... air conditioners, water heaters and furnaces) Unregulated ... BAT option for electric water heater and condensing water
Sun, Y.; Buscheck, T. A.; Lee, K. H.; Hao, Y.; James, S. C.
2010-01-01T23:59:59.000Z
package spacing and waste-package heat generation rate ... Radioactive heat of decay from waste packages emplaced in ... waste packages and emplacement drifts, and for heat flow at
Letschert, Virginie
2010-01-01T23:59:59.000Z
Energy Consumption Survey (RECS) 2005 (EIA, 2009), which ... EIA-0383(2010)) Energy Information Administration, Residential Energy Consumption Survey:
Impacts of Minnesota's Primary Seat Belt Law
Minnesota, University of
Center for Excellence in Rural Safety, Humphrey School of Public Affairs. CERS's "Safe Six" ... Regardless of Residence: Urban/Small City, Suburban, Rural/Small Town ... Primary Seat ... AND IN MINNESOTA ... Predicted Impact, 2009 and 2010 CERS Reports: Primary Seat Belt Laws
Prospective Climate Change Impact on Large Rivers
Julien, Pierre Y.
Prospective Climate Change Impact on Large Rivers in the US and South Korea. Pierre Y. Julien, Dept. of Civil and Environ. Eng., Colorado State University. Seoul, South Korea, August 11, 2009. Climate Change and Large Rivers: 1. Climatic changes have been on-going for some time; 2. Climate changes usually predict
Application of the Principle of Maximum Conformality to Top-Pair Production
Brodsky, Stanley J.; /SLAC; Wu, Xing-Gang; /SLAC /Chongqing U.
2013-05-13T23:59:59.000Z
A major contribution to the uncertainty of finite-order perturbative QCD predictions is the perceived ambiguity in setting the renormalization scale $\mu_r$. For example, by using the conventional way of setting $\mu_r \in [m_t/2, 2m_t]$, one obtains the total $t\bar{t}$ production cross-section $\sigma_{t\bar{t}}$ with the uncertainty $\Delta\sigma_{t\bar{t}}/\sigma_{t\bar{t}} \approx$ (+3%/-4%) at the Tevatron and LHC even for the present NNLO level. The Principle of Maximum Conformality (PMC) eliminates the renormalization scale ambiguity in precision tests of Abelian QED and non-Abelian QCD theories. By using the PMC, all nonconformal $\{\beta_i\}$-terms in the perturbative expansion series are summed into the running coupling constant, and the resulting scale-fixed predictions are independent of the renormalization scheme. The correct scale-displacement between the arguments of different renormalization schemes is automatically set, and the number of active flavors $n_f$ in the $\{\beta_i\}$-function is correctly determined. The PMC is consistent with the renormalization group property that a physical result is independent of the renormalization scheme and the choice of the initial renormalization scale $\mu_r^{\rm init}$. The PMC scale $\mu_r^{\rm PMC}$ is unambiguous at finite order. Any residual dependence on $\mu_r^{\rm init}$ for a finite-order calculation will be highly suppressed since the unknown higher-order $\{\beta_i\}$-terms will be absorbed into the PMC scales' higher-order perturbative terms. We find that such renormalization group invariance can be satisfied to high accuracy for $\sigma_{t\bar{t}}$ at the NNLO level. In this paper we apply PMC scale-setting to predict the $t\bar{t}$ cross-section $\sigma_{t\bar{t}}$ at the Tevatron and LHC colliders. It is found that $\sigma_{t\bar{t}}$ remains almost unchanged by varying $\mu_r^{\rm init}$ within the region of $[m_t/4, 4m_t]$. The convergence of the expansion series is greatly improved. For the $(q\bar{q})$-channel, which is dominant at the Tevatron, its NLO PMC scale is much smaller than the top-quark mass in the small x-region, and thus its NLO cross-section is increased by about a factor of two. In the case of the $(gg)$-channel, which is dominant at the LHC, its NLO PMC scale slightly increases with the subprocess collision energy $\sqrt{s}$, but it is still smaller than $m_t$ for $\sqrt{s} \lesssim 1$ TeV, and the resulting NLO cross-section is increased by about 20%. As a result, a larger $\sigma_{t\bar{t}}$ is obtained in comparison to the conventional scale-setting method, which agrees well with the present Tevatron and LHC data. More explicitly, by setting $m_t = 172.9 \pm 1.1$ GeV, we predict $\sigma_{\rm Tevatron,\,1.96\,TeV} = 7.626^{+0.265}_{-0.257}$ pb, $\sigma_{\rm LHC,\,7\,TeV} = 171.8^{+5.8}_{-5.6}$ pb and $\sigma_{\rm LHC,\,14\,TeV} = 941.3^{+28.4}_{-26.5}$ pb.
Restarting TMI unit one: social and psychological impacts
Sorensen, J.; Soderstrom, J.; Bolin, R.; Copenhaver, E.; Carnes, S.
1983-12-01T23:59:59.000Z
A technical background is provided for preparing an environmental assessment of the social and psychological impacts of restarting the undamaged reactor at Three Mile Island (TMI). Its purpose is to define the factors that may cause impacts, to define what those impacts might be, and to make a preliminary assessment of how impacts could be mitigated. It does not attempt to predict or project the magnitude of impacts. Four major research activities were undertaken: a literature review, focus-group discussions, community profiling, and community surveys. As much as possible, impacts of the accident at Unit 2 were differentiated from the possible impacts of restarting Unit 1. It is concluded that restart will generate social conflict in the TMI vicinity which could lead to adverse effects. Furthermore, between 30 and 50 percent of the population possess characteristics which are associated with vulnerability to experiencing negative impacts. Adverse effects, however, can be reduced with a community-based mitigation strategy.
LIFETIME PREDICTION FOR MODEL 9975 O-RINGS IN KAMS
Hoffman, E.; Skidmore, E.
2009-11-24T23:59:59.000Z
The Savannah River Site (SRS) is currently storing plutonium materials in the K-Area Materials Storage (KAMS) facility. The materials are packaged per the DOE 3013 Standard and transported and stored in KAMS in Model 9975 shipping packages, which include double containment vessels sealed with dual O-rings made of Parker Seals compound V0835-75 (based on Viton® GLT). The outer O-ring of each containment vessel is credited for leaktight containment per ANSI N14.5. O-ring service life depends on many factors, including the failure criterion, environmental conditions, overall design, fabrication quality and assembly practices. A preliminary life prediction model has been developed for the V0835-75 O-rings in KAMS. The conservative model is based primarily on long-term compression stress relaxation (CSR) experiments and Arrhenius accelerated-aging methodology. For model development purposes, seal lifetime is defined as a 90% loss of measurable sealing force. Thus far, CSR experiments have only reached this target level of degradation at temperatures ≥ 300 F. At lower temperatures, relaxation values are more tolerable. Using time-temperature superposition principles, the conservative model predicts a service life of approximately 20-25 years at a constant seal temperature of 175 F. This represents a maximum payload package at a constant ambient temperature of 104 F, the highest recorded in KAMS to date. This is considered a highly conservative value as such ambient temperatures are only reached on occasion and for short durations. The presence of fiberboard in the package minimizes the impact of such temperature swings, with many hours to several days required for seal temperatures to respond proportionately. At 85 F ambient, a more realistic but still conservative value, bounding seal temperatures are reduced to ~158 F, with an estimated seal lifetime of ~35-45 years. The actual service life for O-rings in a maximum wattage package likely lies higher than the estimates due to the conservative assumptions used for the model. For lower heat loads at similar ambient temperatures, seal lifetime is further increased. The preliminary model is based on several assumptions that require validation with additional experiments and longer exposures at more realistic conditions. The assumption of constant exposure at peak temperature is believed to be conservative. Cumulative damage at more realistic conditions will likely be less severe but is more difficult to assess based on available data. Arrhenius aging behavior is expected, but non-Arrhenius behavior is possible. Validation of Arrhenius behavior is ideally determined from longer tests at temperatures closer to actual service conditions. CSR experiments will therefore continue at lower temperatures to validate the model. Ultrasensitive oxygen consumption analysis has been shown to be useful in identifying non-Arrhenius behavior within reasonable test periods. Therefore, additional experiments are recommended and planned to validate the model.
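As a rough illustration of the Arrhenius accelerated-aging extrapolation mentioned above, the sketch below converts a time-to-criterion measured in a high-temperature CSR test into an equivalent duration at a lower seal temperature. The activation energy and the test temperatures shown are illustrative placeholders, not values taken from the report.

```python
import math

def arrhenius_acceleration_factor(T_test_K, T_service_K, Ea_kJ_per_mol):
    """Time-temperature shift factor a_T = exp[(Ea/R)(1/T_service - 1/T_test)].
    A test duration t at T_test corresponds to roughly a_T * t at T_service."""
    R = 8.314e-3  # gas constant, kJ/(mol*K)
    return math.exp((Ea_kJ_per_mol / R) * (1.0 / T_service_K - 1.0 / T_test_K))

def fahrenheit_to_kelvin(T_F):
    return (T_F - 32.0) * 5.0 / 9.0 + 273.15

# Illustrative placeholder values only; the activation energy is NOT from the report.
Ea = 90.0  # kJ/mol, assumed
a_T = arrhenius_acceleration_factor(fahrenheit_to_kelvin(300.0),
                                    fahrenheit_to_kelvin(175.0), Ea)
print(f"acceleration factor 300 F -> 175 F: {a_T:.0f}x "
      "(service life ~ a_T times the test time-to-criterion)")
```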
Predictive Energy Optimization
Dickinson, P.
2013-01-01T23:59:59.000Z
Predictive Energy Optimization. Peter Dickinson. Phone: +1 (415) 233 2306. Email: Peterd@buildingiq.com. Twitter: @Pete_BIQ. BuildingIQ Overview: Software to intelligently assess and control HVAC energy for commercial building portfolios...
Thesis Proposal Anytime Prediction
Garlan, David
approach incrementally applies a sequence of weaker predictors as time progresses, using each new result ... computation as possible to provide the most accurate result. This issue is further complicated by ... applications. Such an algorithm rapidly produces an initial prediction and then continues to refine the result as time allows
Joshua Garland; Elizabeth Bradley
2015-03-05T23:59:59.000Z
Prediction models that capture and use the structure of state-space dynamics can be very effective. In practice, however, one rarely has access to full information about that structure, and accurate reconstruction of the dynamics from scalar time-series data---e.g., via delay-coordinate embedding---can be a real challenge. In this paper, we show that forecast models that employ incomplete embeddings of the dynamics can produce surprisingly accurate predictions of the state of a dynamical system. In particular, we demonstrate the effectiveness of a simple near-neighbor forecast technique that works with a two-dimensional embedding. Even though correctness of the topology is not guaranteed for incomplete reconstructions like this, the dynamical structure that they capture allows for accurate predictions---in many cases, even more accurate than predictions generated using a full embedding. This could be very useful in the context of real-time forecasting, where the human effort required to produce a correct delay-coordinate embedding is prohibitive.
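A minimal sketch of the kind of two-dimensional delay-coordinate, nearest-neighbor forecast described above; the delay value, the toy data, and the single-neighbor rule are illustrative choices, not the authors' exact implementation.

```python
import numpy as np

def delay_embed_2d(x, tau):
    """Two-dimensional delay-coordinate embedding: points (x[t-tau], x[t])."""
    return np.column_stack((x[:-tau], x[tau:]))

def nn_forecast(x, tau=1):
    """Predict the next value from the successor of the nearest neighbor of the
    last embedded point (simple near-neighbor forecast on a 2D embedding)."""
    x = np.asarray(x, dtype=float)
    pts = delay_embed_2d(x, tau)
    query = pts[-1]
    candidates = pts[:-1]                      # earlier points with a known successor
    d = np.linalg.norm(candidates - query, axis=1)
    j = int(np.argmin(d))
    return x[j + tau + 1]                      # successor of the matched point

# toy usage on a noisy sine wave (illustrative data only)
t = np.arange(500)
x = np.sin(0.1 * t) + 0.01 * np.random.default_rng(0).standard_normal(500)
print(nn_forecast(x, tau=8))
```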
The prediction problem Empirical studies
McCullagh, Peter
Conditional prediction intervals, 2009. Peter McCullagh, V. Vovk, I. Nouretdinov, D. Devetyarov and A. Gammerman. Outline: 1. The prediction problem (linear regression model); Empirical studies; Details and summary.
Zi-Niu Wu
2013-10-02T23:59:59.000Z
For many natural processes of growth, with the growth rate independent of size due to Gibrat's law and with the growth process following a log-normal distribution, the ratio between the time (D) of the maximum value and the time (L) of the maximum growth rate (inflexion point) is equal to the square root of the base of the natural logarithm (e^{1/2}). On the logarithmic scale this ratio becomes one half (1/2). It remains an open question, due to lack of complete data for various cases of restricted growth, whether this e^{1/2} ratio can be stated as an e^{1/2} law. Two previously published examples, one for epidemic spreading and one for droplet production, do however support this ratio. Another example appears to be the height of the human body: for boys the maximum height occurs near 23 years of age while the maximum growth rate occurs near age 14, and their ratio is close to e^{1/2}. The main theoretical basis for this conclusion is problem independent, provided the growth process is restricted, such as by public intervention to control the spreading of communicable epidemics, so that an entropy is associated with the process and the role of dissipation, representing the mechanism of intervention, is maximized. Under this formulation the principle of maximum rate of entropy production is used to make the production process problem independent.
Penetration rate prediction for percussive drilling via dry friction model
Krivtsov, Anton M.
Penetration rate prediction for percussive drilling via dry friction model. Anton M. Krivtsov ... of percussive drilling assuming a dry friction mechanism to explain the experimentally observed drop in penetration ... as a frictional pair, and this can generate the pattern of the impact forces close to reality. Despite quite
Predicting Performance of PESQ in Case of Single Frame Losses
Wichmann, Felix
Predicting Performance of PESQ in Case of Single Frame Losses. Christian Hoene, Enhtuya Dulamsuren-Lalla, Technical University of Berlin, Germany. Fax: +49 30 31423819. Email: hoene@ieee.org. Abstract: ITU's objective ... can measure the impact of single frame losses, a source of impairment for which PESQ has not been
Use of Two Distillation Columns in Systems with Maximum Temperature Limitations
Gilchrist, James F.
Use of Two Distillation Columns in Systems with Maximum Temperature Limitations. Rebecca H. Masel ... Pennsylvania 18015, United States. Abstract: Maximum temperature limitations are encountered in distillation ... of the bottoms product fixes the column base pressure and, hence, the condenser pressure. The distillate
Maximum Power Transfer Tracking in a Solar USB Charger for Smartphones
Pedram, Massoud
chargers do not perform the maximum power point tracking [2], [3] of the solar panel. We exclude ... Battery life ... poor capacity utilization during solar energy harvesting. In this paper, we propose and demonstrate
LANGMUIR WAVE ACTIVITY: COMPARING THE ULYSSES SOLAR MINIMUM AND SOLAR MAXIMUM ORBITS
California at Berkeley, University of
[Figure caption: The top three panels correspond to the southern segment of the solar minimum orbit; repeated passes ...] ... Langmuir wave activity (at the electron plasma frequency) during the solar minimum and solar maximum orbits of Ulysses. At high latitudes
Wang, Yuqing
Energy Production, Frictional Dissipation, and Maximum Intensity of a Numerically Simulated ... is eventually dissipated due to surface friction. Since the energy production rate is a linear function ... while the frictional dissipation rate balances the energy production rate near the radius of maximum wind (RMW
Efficiency at maximum power of low dissipation Carnot engines Massimiliano Esposito
Kawai, Ryoichi
Efficiency at maximum power of low dissipation Carnot engines. Massimiliano Esposito ... the efficiency at maximum power of engines performing finite-time Carnot cycles between a hot and a cold reservoir at temperatures Th and Tc, respectively. For engines reaching Carnot efficiency ηC = 1 - Tc/Th ...
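For context only, since the snippet above is truncated, the bounds usually derived for low-dissipation Carnot engines at maximum power are reproduced below; this is a standard result in that literature, not a restatement of the fragment itself, and the notation assumes hot and cold reservoir temperatures Th and Tc.

```latex
\[
  \frac{\eta_C}{2} \;\le\; \eta^{*} \;\le\; \frac{\eta_C}{2-\eta_C},
  \qquad \eta_C = 1 - \frac{T_c}{T_h},
  \qquad \eta^{*}_{\mathrm{sym}} = 1 - \sqrt{T_c/T_h},
\]
% where the last expression is the value recovered for symmetric dissipation
% (the Curzon-Ahlborn efficiency).
```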
Osterloh, Frank
Maximum Theoretical Efficiency Limit of Photovoltaic Devices: Effect of Band Structure on Excited ... a theoretical limit for the maximum energy conversion efficiency of single junction photovoltaic cells ... for the efficiency variations observed for real photovoltaic devices today [4-6]. Here, we show that the extractable
Maximum-Power-Point Tracking Method of Photovoltaic Using Only Single Current Sensor
Fujimoto, Hiroshi
Keywords: solar cell systems. Abstract: This paper describes a novel strategy of maximum-power-point tracking using only a single current sensor, i.e., a Hall-effect CT. Output power of the photovoltaic can ... A hill-climbing method is employed to seek the maximum power point, using the output power obtained from only the current
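The fragment above refers to a hill-climbing search for the maximum power point. A generic perturb-and-observe loop of that kind is sketched below; `read_power` and `set_duty` are hypothetical interfaces, and the paper's single-current-sensor power estimate is not reproduced here.

```python
def perturb_and_observe(read_power, set_duty, d0=0.5, step=0.01, n_iter=100):
    """Generic hill-climbing (perturb-and-observe) MPPT loop.

    read_power() and set_duty(d) are placeholders for the converter interface;
    a current-only power estimate could stand in for read_power()."""
    d = d0
    set_duty(d)
    p_prev = read_power()
    direction = +1
    for _ in range(n_iter):
        d = min(max(d + direction * step, 0.0), 1.0)   # perturb the duty cycle
        set_duty(d)
        p = read_power()
        if p < p_prev:            # power dropped: reverse the perturbation direction
            direction = -direction
        p_prev = p
    return d
```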
How Is the Maximum Entropy of a Quantized Surface Related to Its Area?
I. B. Khriplovich; R. V. Korkin
2001-12-27T23:59:59.000Z
The maximum entropy of a quantized surface is demonstrated to be proportional to the surface area in the classical limit. The result is valid in loop quantum gravity, and in a somewhat more general class of approaches to surface quantization. The maximum entropy is calculated explicitly for some specific cases.
Tadić, Vladislav B
2009-01-01T23:59:59.000Z
This paper considers the asymptotic properties of the recursive maximum likelihood estimation in hidden Markov models. The paper is focused on the asymptotic behavior of the log-likelihood function and on the point-convergence and convergence rate of the recursive maximum likelihood estimator. Using the principle of analytical continuation, the analyticity of the asymptotic log-likelihood function is shown for analytically parameterized hidden Markov models. Relying on this fact and some results from differential geometry (Lojasiewicz inequality), the almost sure point-convergence of the recursive maximum likelihood algorithm is demonstrated, and relatively tight bounds on the convergence rate are derived. As opposed to the existing result on the asymptotic behavior of maximum likelihood estimation in hidden Markov models, the results of this paper are obtained without assuming that the log-likelihood function has an isolated maximum at which the Hessian is strictly negative definite.
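The object at the center of this analysis, the recursively computed log-likelihood of a hidden Markov model, can be evaluated with the standard prediction-filter recursion sketched below. The parameter-update (stochastic gradient) step of a recursive maximum likelihood estimator is deliberately omitted, and the two-state Gaussian example is illustrative only.

```python
import numpy as np

def hmm_loglik_recursive(P, emission_pdf, observations, pi0):
    """Online evaluation of the HMM log-likelihood via the prediction filter."""
    pred = np.asarray(pi0, dtype=float)       # P(x_t = i | y_0..y_{t-1})
    loglik = 0.0
    for y in observations:
        b = emission_pdf(y)                   # vector of emission densities b_i(y)
        joint = pred * b
        norm = joint.sum()
        loglik += np.log(norm)                # incremental log-likelihood term
        posterior = joint / norm              # filtering distribution
        pred = posterior @ P                  # one-step prediction, P[i, j] = P(i -> j)
    return loglik

# toy two-state Gaussian-emission example (illustrative parameters only)
P = np.array([[0.95, 0.05], [0.10, 0.90]])
means = np.array([0.0, 2.0])
pdf = lambda y: np.exp(-0.5 * (y - means) ** 2) / np.sqrt(2 * np.pi)
y = np.array([0.1, 0.3, 2.2, 1.9, 2.1, 0.0])
print(hmm_loglik_recursive(P, pdf, y, pi0=[0.5, 0.5]))
```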
Unification of Field Theory and Maximum Entropy Methods for Learning Probability Densities
Kinney, Justin B
2014-01-01T23:59:59.000Z
Bayesian field theory and maximum entropy are two methods for learning smooth probability distributions (a.k.a. probability densities) from finite sampled data. Both methods were inspired by statistical physics, but the relationship between them has remained unclear. Here I show that Bayesian field theory subsumes maximum entropy density estimation. In particular, the most common maximum entropy methods are shown to be limiting cases of Bayesian inference using field theory priors that impose no boundary conditions on candidate densities. This unification provides a natural way to test the validity of the maximum entropy assumption on one's data. It also provides a better-fitting nonparametric density estimate when the maximum entropy assumption is rejected.
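A minimal sketch of the most common maximum entropy construction mentioned above: fitting a density on a grid subject to moment constraints by minimizing the convex dual of the entropy problem. The grid, features, and sample are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize

def maxent_density(grid, feature_fns, targets):
    """Maximum entropy density p(x) on a grid with E_p[f_k] = targets[k];
    p(x) is proportional to exp(sum_k lam_k f_k(x)).  Solves the dual
    min_lam  log Z(lam) - lam . targets."""
    F = np.column_stack([f(grid) for f in feature_fns])    # (n_grid, n_features)
    dx = grid[1] - grid[0]
    t = np.asarray(targets, dtype=float)

    def log_partition(lam):
        logits = F @ lam
        m = logits.max()                                   # numerical stability
        return m + np.log(np.sum(np.exp(logits - m)) * dx)

    res = minimize(lambda lam: log_partition(lam) - lam @ t,
                   x0=np.zeros(F.shape[1]), method="BFGS")
    logits = F @ res.x
    return np.exp(logits - log_partition(res.x))           # density on the grid

# usage: constrain the first two moments of a sample (the classic Gaussian case)
rng = np.random.default_rng(1)
sample = rng.normal(0.5, 1.2, size=2000)
grid = np.linspace(-6.0, 7.0, 400)
p = maxent_density(grid, [lambda x: x, lambda x: x ** 2],
                   [sample.mean(), np.mean(sample ** 2)])
```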
Broader source: Energy.gov [DOE]
Original Impact Calculations, from the Tool Kit Framework: Small Town University Energy Program (STEP).
Predicting Steam Turbine Performance
Harriz, J. T.
," PREDICTING STEAM TURBINE PERFORMANCE James T. Harriz, EIT Waterland, Viar & Associates, Inc. Wilmington, Delaware ABSTRACT Tracking the performance of extraction, back pressure and condensing steam turbines is a crucial part... energy) and test data are presented. Techniques for deriving efficiency curves from each source are described. These techniques can be applied directly to any steam turbine reliability study effort. INTRODUCTION As the cost of energy resources...
Estimating the error in simulation prediction over the design space
Shinn, R. (Rachel); Hemez, F. M. (François M.); Doebling, S. W. (Scott W.)
2003-01-01T23:59:59.000Z
This study addresses the assessment of accuracy of simulation predictions. A procedure is developed to validate a simple non-linear model defined to capture the hardening behavior of a foam material subjected to a short-duration transient impact. Validation means that the predictive accuracy of the model must be established, not just in the vicinity of a single testing condition, but for all settings or configurations of the system. The notion of validation domain is introduced to designate the design region where the model's predictive accuracy is appropriate for the application of interest. Techniques brought to bear to assess the model's predictive accuracy include test-analysis correlation, calibration, bootstrapping and sampling for uncertainty propagation and metamodeling. The model's predictive accuracy is established by training a metamodel of prediction error. The prediction error is not assumed to be systematic. Instead, it depends on which configuration of the system is analyzed. Finally, the prediction error's confidence bounds are estimated by propagating the uncertainty associated with specific modeling assumptions.
Compiler And Runtime Support For Predictive Control Of Power And Cooling
Dietz, Henry G. "Hank"
Compiler and Runtime Support for Predictive Control of Power and Cooling. Henry G. Dietz, William R. ... Clusters make significant demands on the power and cooling infrastructure. Minimizing the impact ... achieving the best system performance by predicting and avoiding power and cooling problems. Although
Axis control using model predictive control: identification and friction effect reduction
Boyer, Edmond
Axis control using model predictive control: identification and friction effect reduction. Pedro Rodriguez-Ayerbe, Didier Dumur, Sylvain Lavernhe (SUPELEC E3S, Automatic Control, 3 rue Joliot Curie). ... this numerical model is used to synthesize a predictive GPC controller reducing the impact of the friction
Nonlinear Sound during Granular Impact
Abram H. Clark; Alec J. Petersen; Lou Kondic; R. P. Behringer
2014-08-08T23:59:59.000Z
How do dynamic stresses propagate in granular material after a high-speed impact? This occurs often in natural and industrial processes. Stress propagation in a granular material is controlled by the inter-particle force law, $f$, in terms of particle deformation, $\delta$, often given by $f\propto\delta^{\alpha}$, with $\alpha>1$. This means that a linear wave description is invalid when dynamic stresses are large compared to the original confining pressure. With high-speed video and photoelastic grains with varying stiffness, we experimentally study how forces propagate following an impact and explain the results in terms of the nonlinear force law (we measure $\alpha\approx 1.4$). The spatial structure of the forces and the propagation speed, $v_f$, depend on a dimensionless parameter, $M'=t_cv_0/d$, where $v_0$ is the intruder speed at impact, $d$ is the grain diameter, and $t_c$ is a binary collision time between grains with relative speed $v_0$. For $M'\ll 1$, propagating forces are chain-like, and the measured $v_f \propto d/t_c\propto v_b(v_0/v_b)^\frac{\alpha-1}{\alpha+1}$, where $v_b$ is the bulk sound speed. For larger $M'$, the force response has a 2D character, and forces propagate faster than predicted by $d/t_c$ due to collective stiffening of a packing.
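A small worked evaluation of the dimensionless parameter and the chain-like speed scaling quoted above. All inputs except the measured exponent α ≈ 1.4 are assumed placeholder values, and only scalings (up to O(1) prefactors) are computed.

```python
# Illustrative evaluation of M' = t_c * v_0 / d and the chain-like speed scaling
# v_f ~ v_b * (v_0 / v_b)**((alpha - 1) / (alpha + 1)).  Placeholder numbers only.
alpha = 1.4          # force-law exponent reported in the abstract
d = 0.006            # grain diameter [m] (assumed)
v_b = 300.0          # bulk sound speed in the packing [m/s] (assumed)
v_0 = 3.0            # intruder speed at impact [m/s] (assumed)
t_c = 1.5e-4         # binary collision time at relative speed v_0 [s] (assumed)

M_prime = t_c * v_0 / d
v_f_scale = v_b * (v_0 / v_b) ** ((alpha - 1.0) / (alpha + 1.0))
print(f"M' = {M_prime:.3f}, chain-like speed scale ~ {v_f_scale:.0f} m/s "
      "(up to an O(1) prefactor)")
```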
Atlantic Ocean circulation at the last glacial maximum : inferences from data and models
Dail, Holly Janine
2012-01-01T23:59:59.000Z
This thesis focuses on ocean circulation and atmospheric forcing in the Atlantic Ocean at the Last Glacial Maximum (LGM, 18-21 thousand years before present). Relative to the pre-industrial climate, LGM atmospheric CO2 ...
Tropical climate variability from the last glacial maximum to the present
Dahl, Kristina Ariel
2005-01-01T23:59:59.000Z
This thesis evaluates the nature and magnitude of tropical climate variability from the Last Glacial Maximum to the present. The temporal variability of two specific tropical climate phenomena is examined. The first is the ...
Investigating the angle of repose and maximum stability of a cohesive granular pile
Nowak, Sara Alice, 1982-
2004-01-01T23:59:59.000Z
In this thesis, I investigate the static and dynamic properties of a granular heap made cohesive by an interstitial fluid. I present the results of experimental work measuring the maximum angle of stability and the angle ...
Dynamical Reconstruction of Upper-Ocean Conditions in the Last Glacial Maximum Atlantic
Wunsch, Carl
Proxies indicate that the Last Glacial Maximum (LGM) Atlantic Ocean was marked by increased meridional and zonal near sea surface temperature gradients relative to today. Using a least squares fit of a full general circulation ...
Achieving Consistent Maximum Brake Torque with Varied Injection Timing in a DI Diesel Engine
Kroeger, Timothy H
2013-09-19T23:59:59.000Z
, revealing the premixed and diffusion burn fractions as well as important engine and exhaust design criteria such as maximum in-cylinder pressure and exhaust composition. These results are significant in diesel engine design because cheaper, lighter engines...
Microcontroller Servomotor for Maximum Effective Power Point for Solar Cell System
Al-Khalidy, M.; Al-Rawi, O.; Noaman, N.
2010-01-01T23:59:59.000Z
In this paper a Maximum Power point (MPP) tracking algorithm is developed using dual-axis servomotor feedback tracking control system. An efficient and accurate servomotor system is used to increase the system efficiency ...
Submodule Integrated Distributed Maximum Power Point Tracking for Solar Photovoltaic Applications
Pilawa-Podgurski, Robert C. N.
This paper explores the benefits of distributed power electronics in solar photovoltaic applications through the use of submodule integrated maximum power point trackers (MPPT). We propose a system architecture that provides ...
Unified behaviour of maximum soot yields of methane, ethane and propane
Gülder, Ömer L.
... the current study and the previous measurements in similar flames with methane, ethane, and propane flames
Microcontroller Servomotor for Maximum Effective Power Point for Solar Cell System
Al-Khalidy, M.; Al-Rawi, O.; Noaman, N.
2010-01-01T23:59:59.000Z
In this paper a Maximum Power point (MPP) tracking algorithm is developed using dual-axis servomotor feedback tracking control system. An efficient and accurate servomotor system is used to increase the system efficiency and reduces the solar cell...
Maximum Network Lifetime in Wireless Sensor Networks with Adjustable Sensing Ranges
Wu, Jie
... problem in wireless sensor networks with adjustable sensing range. Communication and sensing consume ... Wireless sensor networks (WSNs) constitute the foundation of a broad range of applications related
Parameterization of Maximum Wave Heights Forced by Hurricanes: Application to Corpus Christi, Texas
Taylor, Sym 1978-
2012-12-07T23:59:59.000Z
sensitivity based on the investigation of several hurricane parameters. Also presented is the development of parameterized maximum significant wave height models. These are determined by incorporating three forms of an equivalent fetch into (1) dimensionless...
A more efficient formulation for computation of the maximum loading points in electric power systems
Chiang, H.D. [Cornell Univ., Ithaca, NY (United States). School of Electrical Engineering; Jean-Jumeau, R. [Electricite d`Haita, Port-au-Prince (Haiti)
1995-05-01T23:59:59.000Z
This paper presents a more efficient formulation for computation of the maximum loading points. A distinguishing feature of the new formulation is that it is of dimension (n + 1), instead of the existing formulation of dimension (2n + 1), for n-dimensional load flow equations. This feature makes computation of the maximum loading points very inexpensive in comparison with those required in the existing formulation. A theoretical basis for the new formulation is provided. The new problem formulation is derived by using a simple reparameterization scheme and exploiting the special properties of the power flow model. Moreover, the proposed test function is shown to be monotonic in the vicinity of a maximum loading point. Therefore, it allows one to monitor the approach to maximum loading points during the solution search process. Simulation results on a 234-bus system are presented.
Acoustic Space Dimensionality Selection and Combination using the Maximum Entropy Principle
Abdel-Haleem, Yasser H; Renals, Steve; Lawrence, Neil D
2004-01-01T23:59:59.000Z
In this paper we propose a discriminative approach to acoustic space dimensionality selection based on maximum entropy modelling. We form a set of constraints by composing the acoustic space with the space of phone classes, and use a continuous...
Motor prediction
Flanagan, Randy
Primer: Motor prediction. Daniel M. Wolpert and J. Randall Flanagan. The concept of motor prediction was first considered by Helmholtz when trying to understand how we localise visual ... position of the eye, predicted the gaze position based on a copy of the motor command acting on the eye
An Analysis of Maximum Residential Energy Efficiency in Hot and Humid Climates
Malhotra, M.; Haberl, J. S.
2006-01-01T23:59:59.000Z
Systems in Hot and Humid Climates, Orlando, Florida, July 24-26, 2006. Methodology: 1. Development of the Basecase Simulation Model; 2. Analysis of Energy Saving Measures; 3. Development of the Maximum Energy-Efficient House; 4. Economic Analysis. DOE-2 Input ... An Analysis of Maximum Residential Energy Efficiency in Hot and Humid Climates. Mini Malhotra, Graduate Research Assistant; Jeff Haberl, Ph.D., P.E., Professor/Associate Director, Energy Systems Laboratory, Texas A&M University, College ...
ON THE PROBLEM OF UNIQUENESS FOR THE MAXIMUM STIRLING NUMBER(S) OF THE SECOND KIND
Pomerance, Carl
Say that an integer n is exceptional if the maximum Stirling number of the second kind S(n, k) occurs ... or equal to x is O(x^{3/5+ε}), for any ε > 0. 1. Introduction. Let S(n, k) be the Stirling number of the second ...
A stochastic model for sediment yield using the Principle of Maximum Entropy
Singh, V. P.; Krstanovic, P. F.
WATER RESOURCES RESEARCH, VOL. 23, NO. 5, PAGES 781-793, MAY 1987. A Stochastic Model for Sediment Yield Using the Principle of Maximum Entropy. V. P. Singh and P. F. Krstanovic, Department of Civil Engineering, Louisiana State University, Baton Rouge. The principle of maximum entropy was applied to derive a stochastic model for sediment yield from upland watersheds. By maximizing the conditional entropy subject to certain constraints, a probability distribution of sediment yield conditioned ...
Kim, Leonard, E-mail: kimlh@umdnj.edu [Department of Radiation Oncology, Cancer Institute of New Jersey, Robert Wood Johnson Medical School, University of Medicine and Dentistry of New Jersey, New Brunswick, NJ (United States); Narra, Venkat; Yue, Ning [Department of Radiation Oncology, Cancer Institute of New Jersey, Robert Wood Johnson Medical School, University of Medicine and Dentistry of New Jersey, New Brunswick, NJ (United States)
2013-07-01T23:59:59.000Z
Recent studies have reported potentially clinically meaningful dose differences when heterogeneity correction is used in breast balloon brachytherapy. In this study, we report on the relationship between heterogeneity-corrected and -uncorrected doses for 2 commonly used plan evaluation metrics: maximum point dose to skin surface and maximum point dose to ribs. Maximum point doses to skin surface and ribs were calculated using TG-43 and Varian Acuros for 20 patients treated with breast balloon brachytherapy. The results were plotted against each other and fit with a zero-intercept line. Max skin dose (Acuros) = max skin dose (TG-43) × 0.930 (R^2 = 0.995). The average magnitude of difference from this relationship was 1.1% (max 2.8%). Max rib dose (Acuros) = max rib dose (TG-43) × 0.955 (R^2 = 0.9995). The average magnitude of difference from this relationship was 0.7% (max 1.6%). Heterogeneity-corrected maximum point doses to the skin surface and ribs were proportional to TG-43-calculated doses. The average deviation from proportionality was 1%. The proportional relationship suggests that a different metric other than maximum point dose may be needed to obtain a clinical advantage from heterogeneity correction. Alternatively, if maximum point dose continues to be used in recommended limits while incorporating heterogeneity correction, institutions without this capability may be able to accurately estimate these doses by use of a scaling factor.
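The reported proportionality factors can be applied directly; the sketch below simply encodes them, with hypothetical TG-43 point doses as input.

```python
def estimate_acuros_from_tg43(max_skin_dose_tg43, max_rib_dose_tg43):
    """Estimate heterogeneity-corrected (Acuros) max point doses from TG-43 values
    using the proportionality factors reported in the abstract (0.930 skin, 0.955 rib).
    Average deviations from these fits were about 1.1% (skin) and 0.7% (rib)."""
    return 0.930 * max_skin_dose_tg43, 0.955 * max_rib_dose_tg43

# example with hypothetical TG-43 point doses in Gy
skin, rib = estimate_acuros_from_tg43(3.2, 2.9)
print(f"estimated Acuros skin dose {skin:.2f} Gy, rib dose {rib:.2f} Gy")
```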
January 2, 2002 Bayesian Prediction of National MultiContaminant Trends in Community
of the US EPA Office of Ground Water and Drinking Water, Standards and Risk Management Division ... and heavy metals. The regulatory process begins by establishing a maximum contaminant level goal (MCLG) ...
Annette Schafer; Arthur S. Rood; A. Jeffrey Sondrup
2011-08-01T23:59:59.000Z
The groundwater impacts have been analyzed for the proposed RH-LLW disposal facility. A four-step analysis approach was documented and applied. This assessment compared the predicted groundwater ingestion dose to the more restrictive of either the 25 mrem/yr all pathway dose performance objective, or the maximum contaminant limit performance objective. The results of this analysis indicate that the groundwater impacts for either proposed facility location are expected to be less than the performance objectives. The analysis was prepared to support the NEPA-EA for the top two ranking of the proposed RH-LLW sites. As such, site-specific conditions were incorporated for each set of results generated. These site-specific conditions were included to account for the transport of radionuclides through the vadose zone and through the aquifer at each site. Site-specific parameters included the thickness of vadose zone sediments and basalts, moisture characteristics of the sediments, and aquifer velocity. Sorption parameters (Kd) were assumed to be very conservative values used in Track II analysis of CERCLA sites at INL. Infiltration was also conservatively assumed to represent higher rates corresponding to disturbed soil conditions. The results of this analysis indicate that the groundwater impacts for either proposed facility location are expected to be less than the performance objectives.
Annette Schafer; Arthur S. Rood; A. Jeffrey Sondrup
2011-12-01T23:59:59.000Z
The groundwater impacts have been analyzed for the proposed RH-LLW disposal facility. A four-step analysis approach was documented and applied. This assessment compared the predicted groundwater ingestion dose to the more restrictive of either the 25 mrem/yr all pathway dose performance objective, or the maximum contaminant limit performance objective. The results of this analysis indicate that the groundwater impacts for either proposed facility location are expected to be less than the performance objectives. The analysis was prepared to support the NEPA-EA for the top two ranking of the proposed RH-LLW sites. As such, site-specific conditions were incorporated for each set of results generated. These site-specific conditions were included to account for the transport of radionuclides through the vadose zone and through the aquifer at each site. Site-specific parameters included the thickness of vadose zone sediments and basalts, moisture characteristics of the sediments, and aquifer velocity. Sorption parameters (Kd) were assumed to be very conservative values used in Track II analysis of CERCLA sites at INL. Infiltration was also conservatively assumed to represent higher rates corresponding to disturbed soil conditions. The results of this analysis indicate that the groundwater impacts for either proposed facility location are expected to be less than the performance objectives.
Requirements for Predictive Analytics
Troy Hiltbrand
2012-03-01T23:59:59.000Z
It is important to have a clear understanding of how traditional Business Intelligence (BI) and analytics are different and how they fit together in optimizing organizational decision making. With traditional BI, activities are focused primarily on providing context to enhance a known set of information through aggregation, data cleansing and delivery mechanisms. As these organizations mature their BI ecosystems, they achieve a clearer picture of the key performance indicators signaling the relative health of their operations. Organizations that embark on activities surrounding predictive analytics and data mining go beyond simply presenting the data in a manner that allows decision makers to have complete context around the information. These organizations generate models based on known information and then apply other organizational data against these models to reveal unknown information.
Turbulent energy exchange: Calculation and relevance for profile prediction
Candy, J. [General Atomics, San Diego, California 92186 (United States)]
2013-08-15T23:59:59.000Z
The anomalous heat production due to turbulence is neither routinely calculated in nonlinear gyrokinetic simulations nor routinely retained in profile prediction studies. In this work, we develop a symmetrized method to compute the exchange which dramatically reduces the intermittency in the time-dependent moment, thereby improving the accuracy of the time-average. We also examine the practical impact on transport-timescale simulations, and show that the exchange has only a minor impact on profile evolution for a well-studied DIII-D discharge.
Mutual colliding impact fast ignition
Winterberg, Friedwardt, E-mail: winterbe@unr.edu [Department of Physics, College of Science, University of Nevada, Reno, 1664 N. Virginia Street, Reno, Nevada 89557-0220 (United States)
2014-09-15T23:59:59.000Z
It is proposed to apply the well-established colliding beam technology of high energy physics to the fast hot spot ignition of a highly compressed DT (deuterium-tritium) target igniting a larger D (deuterium) burn, by accelerating a small amount of solid deuterium, and likewise a small amount of tritium, making a head-on collision in the center of the target, projecting them through conical ducts situated at opposite sides of the target and converging in its center. In their head-on collision, the relative collision velocity is 5/3 times larger compared to the collision velocity of a stationary target. The two pieces have for this reason to be accelerated to a smaller velocity than would otherwise be needed to reach upon impact the same temperature. Since the velocity distribution of the two head-on colliding projectiles, with its two velocity peaks, is non-Maxwellian, the maximum cross-section-velocity product turns out to be substantially larger than the maximum if averaged over a Maxwellian. The D and T projectiles would have to be accelerated with two sabots driven by powerful particle or laser beams, permitting a rather large acceleration length. With the substantially larger cross-section-velocity product by virtue of the non-Maxwellian velocity distribution, a further advantage is that the head-on collision produces a large magnetic field by the thermomagnetic Nernst effect, enhancing propagating burn. With this concept, the ignition of the neutron-less hydrogen-boron (HB11) reaction might even be possible in a heterogeneous assembly of the hydrogen and the boron to reduce the bremsstrahlung losses, resembling the heterogeneous assembly in a graphite-natural uranium reactor, there to reduce the neutron losses.
Benchmarking performance: Environmental impact statements in Egypt
Badr, El-Sayed A., E-mail: ebadr@mans.edu.e [Environmental Sciences Department, Faculty of Science at Damietta, Mansoura University, New Damietta City, PO Box 103 (Egypt); Zahran, Ashraf A., E-mail: ashraf_zahran@yahoo.co [Environmental Studies and Research Institute, Minufiya University, Sadat City, Sixth Zone, PO 32897 (Egypt); Cashmore, Matthew, E-mail: m.cashmore@uea.ac.u [InteREAM, School of Environmental Sciences, University of East Anglia, Norwich, Norfolk, NR4 7TJ (United Kingdom)
2011-04-15T23:59:59.000Z
Environmental impact assessment (EIA) was formally introduced in Egypt in 1994. This short paper evaluates 'how well' the EIA process is working in practice in Egypt, by reviewing the quality of 45 environmental impact statements (EISs) produced between 2000 and 2007 for a variety of project types. The Lee and Colley review package was used to assess the quality of the selected EISs. About 69% of the EISs sampled were found to be of a satisfactory quality. An assessment of the performance of different elements of the EIA process indicates that descriptive tasks tend to be performed better than scientific tasks. The quality of core elements of EIA (e.g., impact prediction, significance evaluation, scoping and consideration of alternatives) appears to be particularly problematic. Variables that influence the quality of EISs are identified and a number of broad recommendations are made for improving the effectiveness of the EIA system.
Period-luminosity and period-luminosity-colour relations for Mira variables at maximum light
S. M. Kanbur; M. A. Hendry; D. Clarke
1997-04-14T23:59:59.000Z
In this paper we confirm the existence of period-luminosity (PL) and period-luminosity-colour (PLC) relations at maximum light for O and C Mira variables in the LMC. We demonstrate that in the J and H bands the maximum light PL relations have a significantly smaller dispersion than their counterparts at mean light, while the K band and bolometric PL relations have a dispersion comparable to that at mean light. In the J, H and K bands the fitted PL relations for the O Miras are found to have smaller dispersion than those for the C Miras, at both mean and maximum light, while the converse is true for the relations based on bolometric magnitudes. The inclusion of a non-zero log period term is found to be highly significant in all cases except that of the C Miras in the J band, for which the data are found to be consistent with having constant absolute magnitude. This suggests the possibility of employing C Miras as standard candles. We suggest both a theoretical justification for the existence of Mira PL relations at maximum light and a possible explanation of why these relations should have a smaller dispersion than at mean light. The existence of such maximum light relations offers the possibility of extending the range and improving the accuracy of the Mira distance scale to Galactic globular clusters and to other galaxies.
HFIR Vessel Maximum Permissible Pressures for Operating Period 26 to 50 EFPY (100 MW)
Cheverton, R.D.; Inger, J.R.
1999-01-01T23:59:59.000Z
Extending the life of the HFIR pressure vessel from 26 to 50 EFPY (100 MW) requires an updated calculation of the maximum permissible pressure for a range of vessel operating temperatures (40-120 F). The maximum permissible pressure is calculated using the equal-potential method, which takes advantage of knowledge gained from periodic hydrostatic proof tests and uses the test conditions (pressure, temperature, and frequency) as input. The maximum permissible pressure decreases with increasing time between hydro tests but is increased each time a test is conducted. The minimum values that occur just prior to a test either increase or decrease with time, depending on the vessel temperature. The minimum value of these minimums is presently specified as the maximum permissible pressure. For three vessel temperatures of particular interest (80, 88, and 110 F) and a nominal time of 3.0 EFPY (100 MW) between hydro tests, these pressures are 677, 753, and 850 psi. For the lowest temperature of interest (40 F), the maximum permissible pressure is 295 psi.
On the maximum value of the cosmic abundance of oxygen and the oxygen yield
L. S. Pilyugin; T. X. Thuan; J. M. Vilchez
2007-01-11T23:59:59.000Z
We search for the maximum oxygen abundance in spiral galaxies. Because this maximum value is expected to occur in the centers of the most luminous galaxies, we have constructed the luminosity - central metallicity diagram for spiral galaxies, based on a large compilation of existing data on oxygen abundances of HII regions in spiral galaxies. We found that this diagram shows a plateau at high luminosities (-22.3 ...) with oxygen abundance 12+log(O/H) ~ 8.87. This provides strong evidence that the oxygen abundance in the centers of the most luminous metal-rich galaxies reaches the maximum attainable value of oxygen abundance. Since some fraction of the oxygen (about 0.08 dex) is expected to be locked into dust grains, the maximum value of the true gas+dust oxygen abundance in spiral galaxies is 12+log(O/H) ~ 8.95. This value is a factor of ~ 2 higher than the recently estimated solar value. Based on the derived maximum oxygen abundance in galaxies, we found the oxygen yield to be about 0.0035, depending on the fraction of oxygen incorporated into dust grains.
Bootstrap Prediction Intervals for Time Series /
Pan, Li
2013-01-01T23:59:59.000Z
Contents (excerpt): 1.5 Joint Prediction Intervals; 1.6 Generalized Bootstrap Prediction; 1.8.1 Bootstrap Prediction Intervals Based on Studentized ...
Prediction Intervals in Generalized Linear Mixed Models
Yang, Cheng-Hsueh
2013-01-01T23:59:59.000Z
Contents (excerpt): 3.1 BLP Based Prediction Intervals; 3.2 BP Based Prediction Intervals; 4.1.1 BLP Based Prediction Interval; 4.1.2 ...
Computational prediction and analysis of protein structure
Meruelo, Alejandro Daniel
2012-01-01T23:59:59.000Z
... I, and Bowie JU. Kink prediction in membrane proteins. ... Computational prediction and analysis of protein ...
Empirical Prediction Intervals for County Population Forecasts
Rayer, Stefan; Smith, Stanley K.; Tayman, Jeff
2009-01-01T23:59:59.000Z
in the determination and prediction of population forecast ... performance of empirical prediction intervals? Table 5 shows ... 26, 163-184. Empirical Prediction Intervals for County ...
PREDICTION COMPANY Slippage March 1996 1
Contents (excerpt): ...-bond; 2.2 Oil; 5. Market impact and dependence on order size; 6. Market impact ...
Spectral Modeling of SNe Ia Near Maximum Light: Probing the Characteristics of Hydro Models
E. Baron; S. Bongard; David Branch; Peter H. Hauschildt
2006-03-03T23:59:59.000Z
We have performed detailed NLTE spectral synthesis modeling of two types of 1-D hydro models: the very highly parameterized deflagration model W7, and two delayed detonation models. We find that overall both types of model do about equally well at fitting well-observed SNe Ia near maximum light. However, the Si II 6150 feature of W7 is systematically too fast, whereas for the delayed detonation models it is also somewhat too fast, but significantly better than that of W7. We find that a parameterized mixed model does the best job of reproducing the Si II 6150 line near maximum light, and we study the differences in the models that lead to better fits to normal SNe Ia. We discuss what is required of a hydro model to fit the spectra of observed SNe Ia near maximum light.
Fast singular value decomposition combined maximum entropy method for plasma tomography
Kim, Junghee; Choe, W. [Department of Physics, Korea Advanced Institute of Science and Technology, 373-1 Guseong-dong, Yuseong-gu, Daejeon 305-701(Korea, Republic of)
2006-02-15T23:59:59.000Z
The maximum entropy method (MEM) is a widely used reconstruction algorithm in plasma physics. Drawbacks of the conventional MEM are its heavily time-consuming iteration and possible generation of noisy reconstruction results. In this article, a modified maximum entropy algorithm is described that speeds up the calculation and shows better noise handling capability. Similar to the rapid minimum Fisher information method, the modified maximum entropy algorithm uses simple matrix operations instead of treating a fully nonlinear problem. The preprocessing for rapid tomographic calculation is based on vector operations and singular value decomposition (SVD). The initial guess of the sought-for emissivity is calculated by SVD, which made reconstruction about ten times faster than the conventional MEM. Therefore, the developed fast MEM can be used for intershot tomographic analyses of fusion plasmas.
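A generic illustration of using a truncated-SVD least-squares solution as the fast initial guess for an emissivity profile; the geometry matrix, truncation threshold, and toy data are assumptions, and this is not the authors' code.

```python
import numpy as np

def svd_initial_guess(G, signals, rcond=1e-2):
    """Truncated-SVD least-squares estimate of the emissivity vector e from
    chord-integrated signals s = G @ e, intended only as a fast starting point
    that a MEM iteration could then refine."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    keep = s > rcond * s[0]                  # discard small singular values (noise control)
    inv_s = np.where(keep, 1.0 / s, 0.0)
    return Vt.T @ (inv_s * (U.T @ signals))

# toy usage with a random geometry matrix (illustrative only)
rng = np.random.default_rng(0)
G = rng.random((40, 100))                    # 40 chords, 100 emissivity cells
e_true = rng.random(100)
e0 = svd_initial_guess(G, G @ e_true)
```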
Mechanistic-based Ductility Prediction
Broader source: Energy.gov (indexed) [DOE]
Predictive modeling & performance: performance validation of a "demo" structure in corrosion, fatigue, and durability. Total project funding, DOE: 3,000,000 ...
Predictive maintenance: Waterwall wastage
Rich, J.T.; Bilmanis, A. Jr.; Gandhi, R. [Potomac Electric Power Company, Washington, DC (United States)
1996-07-01T23:59:59.000Z
With the installation and use of burner systems to minimize NOx emissions, boiler waterwall wastage has become a more significant issue. This paper provides a description of an established inspection program to monitor waterwall wastage and other furnace issues that has now been developed into a predictive maintenance and condition-monitoring tool. This allows plant personnel to forecast the need for waterwall replacement activities and provides a basis for determining the effects of modifying boiler operating parameters affecting wastage. Potomac Electric Power Company (PEPCO), Washington, DC, has performed waterwall inspections of its boilers during planned overhauls for several years. These inspections consist of visual and ultrasonic thickness inspections of waterwall tubes at selected locations in the boiler. The ultrasonic thickness data collected is used to calculate current wastage rates, and to extrapolate tube wall thickness to the next planned outage. This information, along with the visual inspection data and panel replacement/repair history, is used to select replacement and repair locations for the current outage, and to plan the replacement strategy for the next planned outage. A computer program has been developed to aid in the analysis of the data, and to provide plots of waterwall repair/replacement history as well as wall thickness and wastage rate information. The principal goals driving this program are to: (1) eliminate tube leaks between planned overhauls; (2) provide data to the outage managers for use in making replacement decisions; (3) determine wastage patterns; (4) forecast future replacement and repair needs; and (5) determine effect of changes to operating parameters on tube life. The waterwall condition monitoring program employed by PEPCO has proven to be successful in achieving its goals over the years. It must be emphasized that NDE is only one aspect of the program.
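The wastage-rate calculation and thickness extrapolation described above amount to simple arithmetic between surveys; a sketch with hypothetical thickness readings follows.

```python
def wastage_projection(t_prev, t_now, years_between, t_min, years_to_next_outage):
    """Project tube wall thickness from two ultrasonic surveys (a sketch of the
    extrapolation described above; inputs are hypothetical, in inches and years)."""
    rate = (t_prev - t_now) / years_between          # wall loss per year
    t_next = t_now - rate * years_to_next_outage     # projected thickness at next outage
    remaining_life = (t_now - t_min) / rate if rate > 0 else float("inf")
    return rate, t_next, remaining_life

rate, t_next, life = wastage_projection(0.280, 0.250, 3.0, 0.180, 3.0)
print(f"rate {rate:.3f} in/yr, projected {t_next:.3f} in, remaining life {life:.1f} yr")
```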
Carroll, Susan A.; Keating, Elizabeth; Mansoor, Kayyum; Dai, Zhenxue; Sun, Yunwei; Trainor-Guitton, Whitney; Brown, Christopher F.; Bacon, Diana H.
2014-09-15T23:59:59.000Z
The National Risk Assessment Partnership (NRAP) is developing a science-based toolset for the analysis of potential impacts to groundwater chemistry from CO2 injection (www.netldoe.gov/nrap). The toolset adopts a stochastic approach in which predictions address uncertainties in shallow groundwater and leakage scenarios. It is derived from detailed physics and chemistry simulation results that are used to train more computationally efficient models, referred to here as reduced-order models (ROMs), for each component system. In particular, these tools can be used to help regulators and operators understand the expected sizes and longevity of plumes in pH, TDS, and dissolved metals that could result from a leakage of brine and/or CO2 from a storage reservoir into aquifers. This information can inform, for example, decisions on monitoring strategies that are both effective and efficient. We have used this approach to develop predictive reduced-order models for two common types of reservoirs, but the approach could be used to develop a model for a specific aquifer or other common types of aquifers. In this paper we describe potential impacts to groundwater quality due to CO2 and brine leakage, discuss an approach to calculate thresholds under which no impact to groundwater occurs, describe the time scale for impact on groundwater, and discuss the probability of detecting a groundwater plume should leakage occur. To facilitate this, multi-phase flow and reactive transport simulations and emulations were developed for two classes of aquifers, considering uncertainty in leakage source terms and aquifer hydrogeology. We targeted an unconfined fractured carbonate aquifer based on the Edwards aquifer in Texas and a confined alluvium aquifer based on the High Plains Aquifer in Kansas, which share characteristics typical of many drinking water aquifers in the United States. The hypothetical leakage scenarios centered on the notion that wellbores are the most likely conduits for brine and CO2 leaks. Leakage uncertainty was based on hypothetical injection of CO2 for 50 years at a rate of 5 million tons per year into a depleted oil/gas reservoir with high permeability and, one or more wells provided leakage pathways from the storage reservoir to the overlying aquifer. This scenario corresponds to a storage site with historical oil/gas production and some poorly completed legacy wells that went undetected through site evaluation, operations, and post-closure. For the aquifer systems and leakage scenarios studied here, CO2 and brine leakage are likely to drive pH below and increase total dissolved solids (TDS) above the “no-impact thresholds;” and the subsequent plumes, although small, are likely to persist for long periods of time in the absence of remediation. In these scenarios, however, risk to human health may not be significant for two reasons. First, our simulated plume volumes are much smaller than the average inter-well spacing for these representative aquifers, so the impacted groundwater would be unlikely to be pumped for drinking water. Second, even within the impacted plume volumes little water exceeds the primary maximum contamination levels.
Byrne, Raymond Harry; Silva Monroy, Cesar Augusto.
2012-12-01T23:59:59.000Z
The valuation of an electricity storage device is based on the expected future cash flow generated by the device. Two potential sources of income for an electricity storage system are energy arbitrage and participation in the frequency regulation market. Energy arbitrage refers to purchasing (storing) energy when electricity prices are low, and selling (discharging) energy when electricity prices are high. Frequency regulation is an ancillary service geared towards maintaining system frequency, and is typically procured by the independent system operator in some type of market. This paper outlines the calculations required to estimate the maximum potential revenue from participating in these two activities. First, a mathematical model is presented for the state of charge as a function of the storage device parameters and the quantities of electricity purchased/sold as well as the quantities offered into the regulation market. Using this mathematical model, we present a linear programming optimization approach to calculating the maximum potential revenue from an electricity storage device. The calculation of the maximum potential revenue is critical in developing an upper bound on the value of storage, as a benchmark for evaluating potential trading strategies, and a tool for capital finance risk assessment. Then, we use historical California Independent System Operator (CAISO) data from 2010-2011 to evaluate the maximum potential revenue from the Tehachapi wind energy storage project, an American Recovery and Reinvestment Act of 2009 (ARRA) energy storage demonstration project. We investigate the maximum potential revenue from two different scenarios: arbitrage only and arbitrage combined with the regulation market. Our analysis shows that participation in the regulation market produces four times the revenue compared to arbitrage in the CAISO market using 2010 and 2011 data. Then we evaluate several trading strategies to illustrate how they compare to the maximum potential revenue benchmark. We conclude with a sensitivity analysis with respect to key parameters.
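A minimal sketch of the linear-programming revenue calculation described here, under assumed prices, storage parameters and a simple round-trip-efficiency split; it is not the paper's CAISO model, just the arbitrage-only upper bound.

```python
# Hedged sketch: maximum arbitrage revenue for a storage device as an LP.
# Prices, capacity, power rating and efficiency are illustrative assumptions.
import numpy as np
import cvxpy as cp

T = 24                                   # hourly horizon
prices = np.random.uniform(20, 80, T)    # $/MWh, stand-in for historical LMPs
capacity, power, eta = 32.0, 8.0, 0.9    # MWh, MW, assumed round-trip efficiency

charge = cp.Variable(T, nonneg=True)     # MW bought each hour
discharge = cp.Variable(T, nonneg=True)  # MW sold each hour
soc = cp.Variable(T + 1, nonneg=True)    # state of charge, MWh

constraints = [soc[0] == 0, soc <= capacity,
               charge <= power, discharge <= power]
for t in range(T):
    # state-of-charge balance with losses split between charging and discharging
    constraints.append(soc[t + 1] == soc[t]
                       + np.sqrt(eta) * charge[t]
                       - discharge[t] / np.sqrt(eta))

revenue = cp.sum(cp.multiply(prices, discharge - charge))   # arbitrage cash flow
cp.Problem(cp.Maximize(revenue), constraints).solve()
print("upper-bound arbitrage revenue ($):", revenue.value)
```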
What is the maximum rate at which entropy of a string can increase?
Ropotenko, Kostyantyn [State Administration of Communications, Ministry of Transport and Communications of Ukraine 22, Khreschatyk, 01001, Kyiv (Ukraine)
2009-03-15T23:59:59.000Z
According to Susskind, a string falling toward a black hole spreads exponentially over the stretched horizon due to repulsive interactions of the string bits. In this paper such a string is modeled as a self-avoiding walk and the string entropy is found. It is shown that the rate at which information/entropy contained in the string spreads is the maximum rate allowed by quantum theory. The maximum rate at which the black hole entropy can increase when a string falls into a black hole is also discussed.
Hydrodynamic Relaxation of an Electron Plasma to a Near-Maximum Entropy State
Rodgers, D. J.; Servidio, S.; Matthaeus, W. H.; Mitchell, T. B.; Aziz, T. [Department of Physics and Astronomy, University of Delaware, Newark, Delaware 19716 (United States); Montgomery, D. C. [Department of Physics and Astronomy, Dartmouth College, Hanover, New Hampshire 03755 (United States)
2009-06-19T23:59:59.000Z
Dynamical relaxation of a pure electron plasma in a Malmberg-Penning trap is studied, comparing experiments, numerical simulations and statistical theories of weakly dissipative two-dimensional (2D) turbulence. Simulations confirm that the dynamics are approximated well by a 2D hydrodynamic model. Statistical analysis favors a theoretical picture of relaxation to a near-maximum entropy state with constrained energy, circulation, and angular momentum. This provides evidence that 2D electron fluid relaxation in a turbulent regime is governed by principles of maximum entropy.
Maximum-Entropy Closures for Kinetic Theories of Neuronal Network Dynamics
Rangan, Aaditya V.; Cai, David [Courant Institute of Mathematical Sciences, New York University, New York, New York 10012 (United States)
2006-05-05T23:59:59.000Z
We analyze (1+1)D kinetic equations for neuronal network dynamics, which are derived via an intuitive closure from a Boltzmann-like equation governing the evolution of a one-particle (i.e., one-neuron) probability density function. We demonstrate that this intuitive closure is a generalization of moment closures based on the maximum-entropy principle. By invoking maximum-entropy closures, we show how to systematically extend this kinetic theory to obtain higher-order (1+1)D kinetic equations and to include coupled networks of both excitatory and inhibitory neurons.
Maximum Entropy Models of Shortest Path and Outbreak Distributions in Networks
Bauckhage, Christian; Hadiji, Fabian
2015-01-01T23:59:59.000Z
Properties of networks are often characterized in terms of features such as node degree distributions, average path lengths, diameters, or clustering coefficients. Here, we study shortest path length distributions. On the one hand, average as well as maximum distances can be determined therefrom; on the other hand, they are closely related to the dynamics of network spreading processes. Because of the combinatorial nature of networks, we apply maximum entropy arguments to derive a general, physically plausible model. In particular, we establish the generalized Gamma distribution as a continuous characterization of shortest path length histograms of networks of arbitrary topology. Experimental evaluations corroborate our theoretical results.
Predicting hydraulic tensile fracture spacing in strata-bound systems - C.I. McDermott
Haszeldine, Stuart
these systems could show. The model emphasises the importance of the local stress distributions on the formation will influence the distribution of fluid pressure and impact on the predicted spacings. However, the model can
Original article Predicted global warming
Boyer, Edmond
Original article Predicted global warming and Douglas-fir chilling requirements DD McCreary1 DP to predicted global warming. Douglas-fir / chilling / global warming / bud burst / reforestation Résumé offer evidence that mean global warming of 3-4 °C could occur within the next century, particularly
Chu, Peter C.
-4 May, 2006. Abstract - The Navy's Impact Burial Model (IMPACT35) predicts the cylindrical mine... U.S. Navy from "blue" water, anti-Soviet focus, towards a concentration on the regional littoral threats of the world. With the increasing number of regional and asymmetric threats, the Navy must operate effectively
NOAA Technical Memorandum NWS HYDRO 39 PROBABLE MAXIMUM PRECIPITATION FOR THE UPPER
NOAA Technical Memorandum NWS HYDRO 39 PROBABLE MAXIMUM PRECIPITATION FOR THE UPPER DEERFIELD RIVER The Office of Hydrology (HYDRO) of the National Weather Service (NWS) develops procedures for making river agencies, and conducts pertinent research and development. NOAA Technical Memorandums in the NWS HYDRO
Analysis and Optimization of Maximum Power Point Tracking Algorithms in the Presence of
Odam, Kofi
, Charles R. Sullivan, Senior Member, IEEE Abstract--This paper analyzes the effect of noise on several maximum power point tracking (MPPT) algorithms for photovoltaic systems. Noise is an essential of the signals, mitigating the noise. The effect of noise and other parameters on tracking performance
Maximum Output Amplitude of Linear Systems for certain Input Constraints1
Sontag, Eduardo
of this input and calculates the maximum amplitude of the output. The solution of this problem is a necessary, Linear Systems. 1 Introduction and Motivation Most practical control problems are dominated by hard bounds. Valves can only be operated between fully open and fully closed, pumps and compressors have
Blind Equalization via Approximate Maximum Likelihood Source Separation - Seungjin CHOI and Andrzej CICHOCKI
Choi, Seungjin
Blind Equalization via Approximate Maximum Likelihood Source Separation Seungjin CHOI, RIKEN 2-1 Hirosawa, Wako-shi Saitama 351-0198, JAPAN Abstract Blind equalization of single input multiple output (SIMO) FIR channels can be reformulated as the problem of blind source separation
Mandelis, Andreas
Photothermoacoustic imaging of biological tissues: maximum depth characterization comparison for Advanced Diffusion-Wave Technologies Department of Mechanical and Industrial Engineering 5 King's College induced in light-absorbing materials can be observed either as a transient signal in time domain
Maximum principle and bang-bang property of time optimal controls for Schrodinger type systems
Paris-Sud XI, Université de
Maximum principle and bang-bang property of time optimal controls for Schrödinger type systems J conditions for the bang-bang property of optimal controls. The results are then applied to some systems-Bang property, Schrödinger equation 1 Introduction Time optimal control is a classical problem for linear
Paris-Sud XI, Université de
Recursive maximum likelihood estimation for structural health monitoring: Kalman and particle by a likelihood approach. In a first part the structural health monitoring problem is written in term of recursive al [6] in a more simple framework. Particle approximation for health monitoring was already proposed
Maximum-Power-Point Tracking Method of Photovoltaic Power System Using Single Transducer
Fujimoto, Hiroshi
Maximum-Power-Point Tracking Method of Photovoltaic Power System Using Single Transducer Toshihiko) method of a photovoltaic power system with less transducer count. A unique feature of this method concern on an environmental issue since 1990's. Above all, a photovoltaic power generation system is one
Design of wind farm layout for maximum wind energy capture Andrew Kusiak*, Zhe Song
Kusiak, Andrew
Design of wind farm layout for maximum wind energy capture Andrew Kusiak*, Zhe Song Intelligent Accepted 24 August 2009 Available online 22 September 2009 Keywords: Wind farm Wind turbine Layout design Optimization Evolutionary algorithms Operations research a b s t r a c t Wind is one of the most promising
Electrical Estimation of Conditional Probability for Maximum-likelihood Based PMD Mitigation
Zweck, John
Xi, Tülay Adali, and John Zweck Department of Computer Science and Electrical Engineering University. Electrical Estimation of Conditional Probability for Maximum-likelihood Based PMD Mitigation - Wenze... probability density functions in the presence of both all-order PMD and ASE noise are estimated electronically
Brayton Cycles: From the first law, the maximum transfers for component SSSF control volumes are
For the simple reversible Brayton cycle of given pressure ratio $r_p \equiv (p_2/p_1) = (p_3/p_4)$ [compare Example 9.4], ... is higher than $1/r_p^{(2/7)}$. For the ideal regenerator, the thermal efficiency approaches the Carnot-cycle efficiency.
Maximum Likelihood Estimation of Mixture Densities for Binned and Truncated Multivariate
Smyth, Padhraic
Maximum Likelihood Estimation of Mixture Densities for Binned and Truncated Multivariate Data in data analysis and machine learning. This paper addresses the problem of fitting mixture densities to multivariate binned and truncated data. The EM approach proposed by McLachlan and Jones (1988
EXTENSION OF THE MAXIMUM POWER REGION OF DOUBLY-SALIENT VARIABLE RELUCTANCE MOTORS
Paris-Sud XI, UniversitÃ© de
-Salient Variable Reluctance Motors (DSVRM) has been investigated and developed for variable-speed drives during, variable-frequency generators, wind wheels, machine tools, etc.). In these applications, it is generally necessary to operate in a regime of a high speed flux-weakening (zone of maximum constant power), for a better
Turro, Nicholas J.
Hydrogen Molecules inside Fullerene C70: Quantum Dynamics, Energetics, Maximum Occupancy of Chemistry, New York UniVersity, New York, New York 10003, Department of Chemistry, Brown UniVersity, ProVidence, Rhode Island 02912, and Department of Chemistry, Columbia UniVersity, New York, New York 10027 Received
Vision Research 40 (2000) 1157-1165 Local luminance factors that determine the maximum disparity for
Kingdom, Frederick A. A.
2000-01-01T23:59:59.000Z
Vision Research 40 (2000) 1157-1165 Local luminance factors that determine the maximum disparity dense arrays of micropatterns, whose luminance characteristics were manipulated. In Experiment 1, we with luminance spatial frequency and with Gabor size, but was constant for a constant bandwidth (frequency times
Exact Maximum Likelihood estimator for the BL-GARCH model under elliptical distributed
Paris-Sud XI, Université de
Exact Maximum Likelihood estimator for the BL-GARCH model under elliptical distributed innovations, Brisbane QLD 4001, Australia Abstract We are interested in the parametric class of Bilinear GARCH (BL-GARCH examine, in this paper, the BL-GARCH model in a general setting under some non-normal distributions. We
Learning with Maximum-Entropy Distributions - Yishay Mansour, Mariano Schain
Mansour, Yishay
Learning with Maximum-Entropy Distributions Yishay Mansour Mariano Schain Computer Science Dept. Tel-Aviv University {mansour,mariano}@math.tau.ac.il Abstract We are interested in distributions which are derived as a maximum-entropy distribution given a set of constraints. More specifically, we
A Global Maximum Power Point Tracking Method for PV Module Integrated Converters
Liberzon, Daniel
with large arrays of series-connected PV modules connected to a central inverter. Figure 1(a) depicts, it is conceivable that these systems do not extract the maximum possible power from the PV array when individual PV to partial shading. In such systems, power electronics circuits are integrated directly with PV modules
Demirel, Melik C.
Degradation Overview Westinghouse applies a conservative approach when evaluating degradation on a gasket evaluation. The team was tasked with collecting this data to determine when degradation endangers the pressure seal. Objectives The team's objectives were to determine the maximum degradation which the gasket
Stone, G. A.; DeVito, E. M.; Nease, N. H.
2002-01-01T23:59:59.000Z
Texas adopted in its residential building energy code a maximum 0.40 solar heat gain coefficient (SHGC) for fenestration (e.g., windows, glazed doors and skylights)-a critical driver of cooling energy use, comfort and peak demand. An analysis...
Jagannatham, Aditya K.
in wireless sensor networks (WSN). The proposed algorithm employs the temporal correlation of the narrowband Wireless Sensor Network (WSN), Likelihood, Sphere Decoder 1. INTRODUCTION Wireless Sensor Networks (WSN). Bayesian Data and Channel Joint Maximum-Likelihood Based Error Correction in Wireless Sensor
Maximum-Lifetime Multi-Channel Routing in Wireless Sensor Networks
Nasipuri, Asis
Maximum-Lifetime Multi-Channel Routing in Wireless Sensor Networks Amitangshu Pal and Asis Nasipuri and routing problem in multi-channel wireless sensor networks for maximizing the worst case network lifetime solution for the problem. Keywords: Wireless sensor networks, multi-channel routing, distributed
The chronology of the Last Glacial Maximum and deglacial events in central Argentine Patagonia
The chronology of the Last Glacial Maximum and deglacial events in central Argentine Patagonia and deglaciation in the Lago Pueyrredón valley of central Patagonia, 47.5° S, Argentina. The valley was a major and the onset of deglaciation occurred broadly synchronously throughout Patagonia. Deglaciation resulted
Paris-Sud XI, UniversitÃ© de
periods often appear in industry due to a machine breakdown (stochastic) or preventive maintenance of machine unavailability. However, in some cases (e.g. preventive maintenance), the maintenance of a machine. Single-machine scheduling with periodic and flexible periodic maintenance to minimize maximum
THE SECOND LAW OF THERMODYNAMICS AND THE GLOBAL CLIMATE SYSTEM: A REVIEW OF THE MAXIMUM
Lorenz, Ralph D.
to absorption of solar radiation in the climate system is found to be irrelevant to the maximized properties from hot to cold places, thereby producing the kinetic energy of the fluid itself. His general
Hydraulic limits on maximum plant transpiration and the emergence of the safety-efficiency trade-off
Jackson, Robert B.
Hydraulic limits on maximum plant transpiration and the emergence of the safety-efficiency trade-off. Key words: hydraulic limitation, safety-efficiency trade-off, soil-plant-atmosphere model, trait hydraulics constrain ecosystem productivity by setting physical limits to water transport and hence carbon
Performance of Photovoltaic Maximum Power Point Tracking Algorithms in the Presence of Noise
Odam, Kofi
Performance of Photovoltaic Maximum Power Point Tracking Algorithms in the Presence of Noise tracking (MPPT) algorithms for photovoltaic systems, including how noise affects both tracking speed-performance photovoltaic sys- tems. An intelligent controller adjusts the voltage, current, or impedance seen by a solar
PHYSICAL REVIEW E 86, 041144 (2012) Efficiency at maximum power for classical particle transport
Lindenberg, Katja
2012-01-01T23:59:59.000Z
PHYSICAL REVIEW E 86, 041144 (2012) Efficiency at maximum power for classical particle transport transport. DOI: 10.1103/PhysRevE.86.041144 PACS number(s): 05.70.Ln, 05.40.-a, 05.20.-y I. INTRODUCTION Over, operating between a hot and cold bath at temperatures T (1) and T (2) , respectively, possesses universal
Optimization of stomatal conductance for maximum carbon gain under dynamic soil moisture
Katul, Gabriel
Optimization of stomatal conductance for maximum carbon gain under dynamic soil moisture Stefano Accepted 26 September 2013 Available online 9 October 2013 Keywords: Optimization Photosynthesis Soil moisture Stomatal conductance Transpiration a b s t r a c t Optimization theories explain a variety
A Distributed Approach to Maximum Power Point Tracking for Photovoltaic Sub-Module Differential
Liberzon, Daniel
of the proposed distributed algorithm. I. INTRODUCTION IN photovoltaic (PV) energy systems, PV modules are often of the system, small size and low power ratings of the power electronics circuit components. Due... A Distributed Approach to Maximum Power Point Tracking for Photovoltaic Sub-Module Differential
An Analysis of the Maximum Drawdown Risk Malik Magdon-Ismail
Magdon-Ismail, Malik
Engineering Cairo University Giza, Egypt. amir@alumni.caltech.edu Introduction. The maximum cumulative loss ... Similar to the Calmar ratio is the Sterling ratio, Sterling(T) = (Return over [0,T]) / (MDD over [0,T] - 10%), and our discussion applies equally well to the Sterling ratio. ... primarily due to a lack of an analytical
An Analysis of the Maximum Drawdown Risk Malik MagdonIsmail
Magdon-Ismail, Malik
Engineering Cairo University Giza, Egypt. amir@alumni.caltech.edu Introduction. The maximum cumulative loss is not prevalent... Similar to the Calmar ratio is the Sterling ratio, Sterling(T) = (Return over [0,T]) / (MDD over [0,T] - 10%), and our discussion applies equally well to the Sterling ratio. ... primarily due
Extraction of Spectral Functions from Dyson-Schwinger Studies via the Maximum Entropy Method
Dominik Nickel
2006-07-20T23:59:59.000Z
It is shown how to apply the Maximum Entropy Method (MEM) to numerical Dyson-Schwinger studies for the extraction of spectral functions of correlators from their corresponding Euclidean propagators. Differences to the application in lattice QCD are emphasized and, as an example, the spectral functions of massless quarks in cold and dense matter are presented.
Nasser, Hassan
2014-01-01T23:59:59.000Z
We propose a numerical method to learn Maximum Entropy (MaxEnt) distributions with spatio-temporal constraints from experimental spike trains. This is an extension of two papers [10] and [4] which proposed the estimation of parameters where only spatial constraints were taken into account. The extension we propose allows us to properly handle memory effects in spike statistics, for large sized neural networks.
Beyond Boltzmann-Gibbs statistics: Maximum entropy hyperensembles out of equilibrium Gavin E at equilibrium? Here, we argue the most appropriate additional parameter is the nonequilibrium entropy of ways that the same system can be out of equilibrium. That the equilibrium entropy is maximized given
Extraction of spectral functions from Dyson-Schwinger studies via the maximum entropy method
Nickel, Dominik [Institut fuer Kernphysik, Technische Universitaet Darmstadt, D-64289 Darmstadt (Germany)]. E-mail: dominik.nickel@physik.tu-darmstadt.de
2007-08-15T23:59:59.000Z
It is shown how to apply the Maximum Entropy Method (MEM) to numerical Dyson-Schwinger studies for the extraction of spectral functions of correlators from their corresponding Euclidean propagators. Differences to the application in lattice QCD are emphasized and, as an example, the spectral functions of massless quarks in cold and dense matter are presented.
Lattice Field Theory with the Sign Problem and the Maximum Entropy Method
Masahiro Imachi; Yasuhiko Shinno; Hiroshi Yoneyama
2007-02-09T23:59:59.000Z
Although numerical simulation in lattice field theory is one of the most effective tools to study non-perturbative properties of field theories, it faces serious obstacles coming from the sign problem in some theories such as finite density QCD and lattice field theory with the $\\theta$ term. We reconsider this problem from the point of view of the maximum entropy method.
Relating maximum airway dilation and subsequent reconstriction to reactivity in human lungs
Lutchen, Kenneth
Relating maximum airway dilation and subsequent reconstriction to reactivity in human lungs Lauren in human lungs. J Appl Physiol 96: 1808-1814, 2004. First published February 6, 2004; 10.1152/japplphysiol reactivity in healthy lungs by prohibiting DI for an extended period. The present study had two goals. First
DIAGNOSIS OF CONDITIONAL MAXIMUM TORNADO DAMAGE PROBABILITIES P2.20 Bryan T. Smith1
. Thompson1 , Harold E. Brooks2 , Andrew R. Dean1 , and Kimberly L. Elmore2 1 NOAA/NWS/NCEP/Storm Prediction Center, Norman, Oklahoma 2 NOAA/National Severe Storms Laboratory, Norman, Oklahoma 1. Introduction. Smith, NOAA/NWS/NCEP/Storm Prediction Center, 120 David L. Boren Blvd., Suite 2300, Norman, OK 73072
SALTSTONE DISPOSAL FACILITY: DETERMINATION OF THE PROBABLE MAXIMUM WATER TABLE ELEVATION
Hiergesell, R
2005-04-01T23:59:59.000Z
A coverage depicting the configuration of the probable maximum water table elevation in the vicinity of the Saltstone Disposal Facility (SDF) was developed to support the Saltstone program. This coverage is needed to support the construction of saltstone vaults to assure that they remain above the maximum elevation of the water table during the Performance Assessment (PA) period of compliance. A previous investigation to calculate the historical high water table beneath the SDF (Cook, 1983) was built upon to incorporate new data that has since become available to refine that estimate and develop a coverage that could be extended to the perennial streams adjacent to the SDF. This investigation incorporated the method used in the Cook, 1983 report to develop an estimate of the probable maximum water table for a group of wells that either existed at one time at or near the SDF or which currently exist. Estimates of the probable maximum water table at these wells were used to construct 2D contour lines depicting this surface beneath the SDF and extend them to the nearby hydrologic boundaries at the perennial streams adjacent to the SDF. Although certain measures were implemented to assure that the contour lines depict a surface above which the water table will not rise, the exact elevation of this surface cannot be known with complete certainty. It is therefore recommended that the construction of saltstone vaults incorporate a vertical buffer of at least 5-feet between the base of the vaults and the depicted probable maximum water table elevation. This should provide assurance that the water table under the wet extreme climatic condition will never rise to intercept the base of a vault.
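A hedged sketch of the kind of check this coverage supports: interpolate the probable maximum water-table elevations estimated at wells onto a proposed vault location and verify the recommended 5-foot buffer; the well coordinates and elevations below are invented for illustration, not SDF data.

```python
# Hedged sketch of the buffer check described above. All coordinates and
# elevations are illustrative assumptions.
import numpy as np
from scipy.interpolate import griddata

wells_xy = np.array([[0, 0], [500, 80], [900, 400], [200, 700], [750, 850]])
max_wt_elev = np.array([262.0, 259.5, 255.0, 260.2, 256.8])   # ft above MSL (assumed)

vault_xy = np.array([[450.0, 420.0]])                         # proposed vault location
wt_at_vault = griddata(wells_xy, max_wt_elev, vault_xy, method="linear")[0]

vault_base_elev = 266.0        # proposed base-of-vault elevation, ft (assumed)
buffer_ft = 5.0                # recommended vertical buffer from the report
ok = vault_base_elev >= wt_at_vault + buffer_ft
print(f"water table {wt_at_vault:.1f} ft, vault base {vault_base_elev} ft, buffer ok: {ok}")
```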
Deschenes, Olivier; Greenstone, Michael
2004-01-01T23:59:59.000Z
1989): “Global Climate Change and Agriculture: An Economicpart of climate change for agriculture. These predictedAgriculture,” in Robert Mendelsohn and James E. Neumann (editors), The Impact of Climate Change
The economic impact of global climate and tropospheric ozone on world agricultural production
Wang, Xiaodu
2005-01-01T23:59:59.000Z
The objective of my thesis is to analyze the economic impact on agriculture production from changes in climate and tropospheric ozone, and related policy interventions. The analysis makes use of the Emissions Prediction ...
Avoiding Earth Impacts Using Albedo Modification as Applied to 99942 Apophis
Margulieux, Richard Steven
2011-08-08T23:59:59.000Z
Current orbital solutions for 99942 Apophis predict a close approach to the Earth in April 2029. The parameters of that approach affect the future trajectory of Apophis, potentially leading to an impact in 2036, 2056, 2068, etc. The dynamic model...
The impacts of mining on the habitat ecology of raccoons in east-central Texas
Beucler, Michele
1995-01-01T23:59:59.000Z
Habitat alterations associated with strip-mining and reclamation may reduce the suitability of an area for wildlife by redistributing requirements for survival and reproduction. I evaluated several predictions regarding the impacts of habitat...
SHORT-TERM SOLAR FLARE PREDICTION USING MULTIRESOLUTION PREDICTORS
Yu Daren; Huang Xin; Hu Qinghua; Zhou Rui [Harbin Institute of Technology, No. 92 West Da-Zhi Street, Harbin, Heilongjiang Province (China); Wang Huaning [National Astronomical Observatories, 20A Datun Road, Chaoyang District, Beijing (China); Cui Yanmei, E-mail: huangxinhit@yahoo.com.c [Center for Space Science and Applied Research, No. 1 Nanertiao, Zhongguancun, Haidian District, Beijing (China)
2010-01-20T23:59:59.000Z
Multiresolution predictors of solar flares are constructed by a wavelet transform and sequential feature extraction method. Three predictors-the maximum horizontal gradient, the length of neutral line, and the number of singular points-are extracted from Solar and Heliospheric Observatory/Michelson Doppler Imager longitudinal magnetograms. A maximal overlap discrete wavelet transform is used to decompose the sequence of predictors into four frequency bands. In each band, four sequential features-the maximum, the mean, the standard deviation, and the root mean square-are extracted. The multiresolution predictors in the low-frequency band reflect trends in the evolution of newly emerging fluxes. The multiresolution predictors in the high-frequency band reflect the changing rates in emerging flux regions. The variation of emerging fluxes is decoupled by wavelet transform in different frequency bands. The information amount of these multiresolution predictors is evaluated by the information gain ratio. It is found that the multiresolution predictors in the lowest and highest frequency bands contain the most information. Based on these predictors, a C4.5 decision tree algorithm is used to build the short-term solar flare prediction model. It is found that the performance of the short-term solar flare prediction model based on the multiresolution predictors is greatly improved.
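To illustrate the band-wise feature extraction described above, the sketch below decomposes a predictor time series with an ordinary discrete wavelet transform (PyWavelets) as a stand-in for the maximal overlap DWT and computes the four sequential features per band; the series, wavelet choice and decomposition level are assumptions.

```python
# Hedged sketch of band-wise sequential feature extraction for a flare predictor
# series; the ordinary DWT is used here instead of the MODWT for simplicity.
import numpy as np
import pywt

def band_features(series, wavelet="db4", level=3):
    """Decompose a predictor series into level+1 frequency bands and return
    (max, mean, std, rms) for each band."""
    bands = pywt.wavedec(np.asarray(series, dtype=float), wavelet, level=level)
    feats = []
    for coeffs in bands:                        # [cA_level, cD_level, ..., cD_1]
        rms = np.sqrt(np.mean(coeffs ** 2))
        feats.append((coeffs.max(), coeffs.mean(), coeffs.std(), rms))
    return feats

# e.g. a 64-step sequence of the maximum-horizontal-gradient predictor (synthetic)
grad_series = np.random.rand(64)
for i, f in enumerate(band_features(grad_series)):
    print(f"band {i}: max={f[0]:.3f} mean={f[1]:.3f} std={f[2]:.3f} rms={f[3]:.3f}")
```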
Information theory and climate prediction
Leung, Lai-yung
1988-01-01T23:59:59.000Z
as different as two states chosen at random (Lorenz, 1969b). In this research, we are concerned with the prediction of climate in which there is no change of external forcing. Lorenz (1975) referred to this case as the predictability of the first kind... used as such a quantity (e.g. Barnett and Hasselmann, 1979). Figure 1. A Subensemble of Systems that Pass Through the Neighborhood of Initial Temperature Anomaly T0. The signal is the change of some climatic variable which we want to predict...
Feigon, Brooke
Predicting future climate change for the UK and East Anglia. ... confidence in the following future changes in UK climate: Average temperature increases, Summer temperature ... part in farming, so we might expect these changes to have an impact on agriculture affecting both
Peltier, W. Richard
, and Climatological Implications STEPHEN D. GRIFFITHS AND W. RICHARD PELTIER Department of Physics, University; Griffiths and Peltier 2008; Arbic et al. 2008). Such changes can impact the climate system in a variety
Model prediction for reactor control
Ardell, G.G.; Gumowski, B.
1983-06-01T23:59:59.000Z
Model prediction is offered as a substitute for lengthy analysis of sample procedures to control product properties not amenable to direct measurement during chemical processing. A computer model of a reactor is set up, and control actions, based on current predicted values, are established. The control is based on predicted "measurements" which are derived using a dynamic process model solved on-line. The model is corrected by real measurements in the process operation. A two-phase exothermic catalyzed reaction, with the objective of producing material with specified properties, is tested in this paper. The model prediction performance was very good. Model systems enable a more effective control to be exercised than the sample method.
Predictive metrics for supply chains
Haydamous, Linda (Linda A.)
2009-01-01T23:59:59.000Z
The economic crisis that the world has been experiencing since 2008 has led several organizations to announce record losses and bankruptcies. But couldn't the chief factors have been predicted, at least to some extent? ...
Optimal prediction in molecular dynamics
Benjamin Seibold
2008-08-22T23:59:59.000Z
Optimal prediction approximates the average solution of a large system of ordinary differential equations by a smaller system. We present how optimal prediction can be applied to a typical problem in the field of molecular dynamics, in order to reduce the number of particles to be tracked in the computations. We consider a model problem, which describes a surface coating process, and show how asymptotic methods can be employed to approximate the high dimensional conditional expectations, which arise in optimal prediction. The thus derived smaller system is compared to the original system in terms of statistical quantities, such as diffusion constants. The comparison is carried out by Monte-Carlo simulations, and it is shown under which conditions optimal prediction yields a valid approximation to the original system.
Predicting System Performance with Uncertainty
Yan, B.; Malkawi, A.
2012-01-01T23:59:59.000Z
on uncertainty in input values for predictions. The input values associated with predictions can come from estimations or measurements corrupted with noise. Therefore, it is more reasonable to assign probability distributions over their domains of plausible... increases, the number of simulations required increases significantly. The time cost limits the extension of uncertainty analysis. Current studies have not covered uncertainty related to system controls in operations. Measurements in system operations...
Indeterminism and predictability in economics
Barton, David Merritt
1968-01-01T23:59:59.000Z
happens to embrace. Empirical evidence, however, does not support the view that economic behavior is predictable, nor the view that deterministic theories of economic change are suitable descriptions of the real world. Acknowledgement. I. Determinism versus Indeterminism - The Issue. II. The Historical Background. III. The Controversy in Modern Physics and Biology. IV. The Problem of Predictability in Economic Theory. Conclusion. Bibliography. Vita. Introduction...
Three dimensional winds: A maximum cross-correlation application to elastic lidar data
Buttler, W.T.
1996-05-01T23:59:59.000Z
Maximum cross-correlation techniques have been used with satellite data to estimate winds and sea surface velocities for several years. Los Alamos National Laboratory (LANL) is currently using a variation of the basic maximum cross-correlation technique, coupled with a deterministic application of a vector median filter, to measure transverse winds as a function of range and altitude from incoherent elastic backscatter lidar (light detection and ranging) data taken throughout large volumes within the atmospheric boundary layer. Hourly representations of three-dimensional wind fields, derived from elastic lidar data taken during an air-quality study performed in a region of complex terrain near Sunland Park, New Mexico, are presented and compared with results from an Environmental Protection Agency (EPA) approved laser doppler velocimeter. The wind fields showed persistent large scale eddies as well as general terrain-following winds in the Rio Grande valley.
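The core maximum cross-correlation step can be sketched as follows: locate the peak of the circular cross-correlation between two successive backscatter frames to estimate a displacement, then scale by an assumed pixel size and frame interval; the frame contents and scales are synthetic, not the LANL processing chain.

```python
# Hedged sketch of the maximum cross-correlation idea behind the wind retrieval.
import numpy as np

def mcc_displacement(frame0, frame1):
    """Return the (dy, dx) shift that maximizes the circular cross-correlation."""
    f0 = frame0 - frame0.mean()
    f1 = frame1 - frame1.mean()
    corr = np.fft.ifft2(np.fft.fft2(f0).conj() * np.fft.fft2(f1)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # map wrapped indices to signed shifts
    if dy > frame0.shape[0] // 2: dy -= frame0.shape[0]
    if dx > frame0.shape[1] // 2: dx -= frame0.shape[1]
    return dy, dx

# two synthetic aerosol frames, the second shifted by (3, -2) pixels
rng = np.random.default_rng(0)
a = rng.random((64, 64))
b = np.roll(a, (3, -2), axis=(0, 1))
dy, dx = mcc_displacement(a, b)
dt, pixel = 60.0, 30.0                 # seconds between frames, metres per pixel (assumed)
print("shift:", dy, dx, "-> wind (m/s):", dx * pixel / dt, dy * pixel / dt)
```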
On the minimum and maximum mass of neutron stars and the delayed collapse
Strobel, K; Strobel, Klaus; Weigel, Manfred K.
2001-01-01T23:59:59.000Z
The minimum and maximum mass of protoneutron stars and neutron stars are investigated. The hot dense matter is described by relativistic (including hyperons) and non-relativistic equations of state. We show that the minimum mass ($\\sim$ 0.88 - 1.28 $M_{\\sun}$) of a neutron star is determined by the earliest stage of its evolution and is nearly unaffected by the presence of hyperons. The maximum mass of a neutron star is limited by the protoneutron star or hot neutron star stage. Further we find that the delayed collapse of a neutron star into a black hole during deleptonization is not only possible for equations of state with softening components, as for instance, hyperons, meson condensates etc., but also for neutron stars with a pure nucleonic-leptonic equation of state.
On the minimum and maximum mass of neutron stars and the delayed collapse
Klaus Strobel; Manfred K. Weigel
2000-12-14T23:59:59.000Z
The minimum and maximum mass of protoneutron stars and neutron stars are investigated. The hot dense matter is described by relativistic (including hyperons) and non-relativistic equations of state. We show that the minimum mass ($\\sim$ 0.88 - 1.28 $M_{\\sun}$) of a neutron star is determined by the earliest stage of its evolution and is nearly unaffected by the presence of hyperons. The maximum mass of a neutron star is limited by the protoneutron star or hot neutron star stage. Further we find that the delayed collapse of a neutron star into a black hole during deleptonization is not only possible for equations of state with softening components, as for instance, hyperons, meson condensates etc., but also for neutron stars with a pure nucleonic-leptonic equation of state.
Sullivan, Terry [Brookhaven National Lab. (BNL), Upton, NY (United States). Biological, Environmental, and Climate Sciences Dept.
2014-12-02T23:59:59.000Z
ZionSolutions is in the process of decommissioning the Zion Nuclear Power Plant in order to establish a new water treatment plant. There are residual radioactive particles from the plant which need to be brought down to levels such that an individual who receives water from the new treatment plant does not receive a radioactive dose in excess of 25 mrem/yr. The objectives of this report are: (a) To present a simplified conceptual model for release from the buildings with residual subsurface structures that can be used to provide an upper bound on contaminant concentrations in the fill material; (b) Provide maximum water concentrations and the corresponding amount of mass sorbed to the solid fill material that could occur in each building for use in dose assessment calculations; (c) Estimate the maximum concentration in a well located outside of the fill material; and (d) Perform a sensitivity analysis of key parameters.
Maximum-Entropy Meshfree Method for Compressible and Near-Incompressible Elasticity
Ortiz, A; Puso, M A; Sukumar, N
2009-09-04T23:59:59.000Z
Numerical integration errors and volumetric locking in the near-incompressible limit are two outstanding issues in Galerkin-based meshfree computations. In this paper, we present a modified Gaussian integration scheme on background cells for meshfree methods that alleviates errors in numerical integration and ensures patch test satisfaction to machine precision. Secondly, a locking-free small-strain elasticity formulation for meshfree methods is proposed, which draws on developments in assumed strain methods and nodal integration techniques. In this study, maximum-entropy basis functions are used; however, the generality of our approach permits the use of any meshfree approximation. Various benchmark problems in two-dimensional compressible and near-incompressible small strain elasticity are presented to demonstrate the accuracy and optimal convergence in the energy norm of the maximum-entropy meshfree formulation.
Hanel, Rudolf; Gell-Mann, Murray
2014-01-01T23:59:59.000Z
The maximum entropy principle (MEP) is a method for obtaining the most likely distribution functions of observables from statistical systems, by maximizing entropy under constraints. The MEP has found hundreds of applications in ergodic and Markovian systems in statistical mechanics, information theory, and statistics. For several decades there exists an ongoing controversy whether the notion of the maximum entropy principle can be extended in a meaningful way to non-extensive, non-ergodic, and complex statistical systems and processes. In this paper we start by reviewing how Boltzmann-Gibbs-Shannon entropy is related to multiplicities of independent random processes. We then show how the relaxation of independence naturally leads to the most general entropies that are compatible with the first three Shannon-Khinchin axioms, the (c,d)-entropies. We demonstrate that the MEP is a perfectly consistent concept for non-ergodic and complex statistical systems if their relative entropy can be factored into a general...
Maximum-entropy principle for static and dynamic high-field transport in semiconductors
Trovato, M. [Dipartimento di Matematica, Universita di Catania, Viale A. Doria, 95125 Catania (Italy); Reggiani, L. [Dipartimento di Ingegneria dell' Innovazione e Nanotechnology National Laboratory of CNR-INFM, Universita di Lecce, Via Arnesano s/n, 73100 Lecce (Italy)
2006-06-15T23:59:59.000Z
Within the maximum entropy principle we present a general theory able to provide, in a dynamical context, the macroscopic relevant variables for carrier transport under electric fields of arbitrary strength. For the macroscopic variables the linearized maximum entropy approach is developed including full-band effects within a total energy scheme. Under spatially homogeneous conditions, we construct a closed set of hydrodynamic equations for the small-signal (dynamic) response of the macroscopic variables. The coupling between the driving field and the energy dissipation is analyzed quantitatively by using an arbitrary number of moments of the distribution function. The theoretical approach is applied to n-Si at 300 K and is validated by comparing numerical calculations with ensemble Monte Carlo simulations and with experimental data.
Hydrodynamic equations for electrons in graphene obtained from the maximum entropy principle
Barletti, Luigi, E-mail: luigi.barletti@unifi.it [Dipartimento di Matematica e Informatica “Ulisse Dini”, Università degli Studi di Firenze, Viale Morgagni 67/A, 50134 Firenze (Italy)
2014-08-15T23:59:59.000Z
The maximum entropy principle is applied to the formal derivation of isothermal, Euler-like equations for semiclassical fermions (electrons and holes) in graphene. After proving general mathematical properties of the equations so obtained, their asymptotic form corresponding to significant physical regimes is investigated. In particular, the diffusive regime, the Maxwell-Boltzmann regime (high temperature), the collimation regime and the degenerate gas limit (vanishing temperature) are considered.
REMARKS ON THE MAXIMUM ENTROPY METHOD APPLIED TO FINITE TEMPERATURE LATTICE QCD.
UMEDA, T.; MATSUFURU, H.
2005-07-25T23:59:59.000Z
We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtlety of MEM in the cases that one does not have enough number of data points such as at finite temperature. Taking these points into account, we suggest several tests which one should examine to keep the reliability for the results, and also apply them using mock and lattice QCD data.
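For orientation, the Bayesian reconstruction that MEM performs can be sketched as maximizing alpha*S - chi^2/2 for a discretized spectral function given a Euclidean correlator; the kernel, default model, mock data and fixed alpha below are illustrative simplifications (real MEM scans or marginalizes alpha and uses the full data covariance).

```python
# Hedged MEM-style sketch: reconstruct rho(omega) from C(tau) = sum_w K(tau,w) rho(w)
# by maximizing alpha*S[rho] - chi^2/2. Everything below is mock, not lattice data.
import numpy as np
from scipy.optimize import minimize

omega = np.linspace(0.05, 4.0, 80)            # frequency grid
tau = np.arange(1, 17)                        # Euclidean times
K = np.exp(-np.outer(tau, omega))             # simple zero-temperature kernel as a stand-in
true_rho = np.exp(-((omega - 1.5) ** 2) / 0.05)
data = K @ true_rho * (1 + 0.01 * np.random.randn(len(tau)))
sigma = 0.01 * np.abs(data)                   # crude uncorrelated errors
m = np.full_like(omega, true_rho.mean())      # flat default model
alpha = 1.0                                   # entropy weight (fixed here for simplicity)

def neg_Q(a):
    rho = np.exp(a)                           # positivity by construction
    chi2 = np.sum(((K @ rho - data) / sigma) ** 2)
    S = np.sum(rho - m - rho * np.log(rho / m))   # Shannon-Jaynes entropy
    return 0.5 * chi2 - alpha * S

res = minimize(neg_Q, np.log(m), method="L-BFGS-B")
rho_mem = np.exp(res.x)                       # reconstructed spectral function
```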
Towards the application of the Maximum Entropy Method to finite temperature Upsilon Spectroscopy
M. Oevers; C. Davies; J. Shigemitsu
2000-09-22T23:59:59.000Z
According to the Narnhofer Thirring Theorem interacting systems at finite temperature cannot be described by particles with a sharp dispersion law. It is therefore mandatory to develop new methods to extract particle masses at finite temperature. The Maximum Entropy method offers a path to obtain the spectral function of a particle correlation function directly. We have implemented the method and tested it with zero temperature Upsilon correlation functions obtained from an NRQCD simulation. Results for different smearing functions are discussed.
Maximum entropy deconvolution of resonant inelastic x-ray scattering spectra
J. Laverock; A. R. H. Preston; D. Newby Jr; K. E. Smith; S. B. Dugdale
2012-02-10T23:59:59.000Z
Resonant inelastic x-ray scattering (RIXS) has become a powerful tool in the study of the electronic structure of condensed matter. Although the linewidths of many RIXS features are narrow, the experimental broadening can often hamper the identification of spectral features. Here, we show that the Maximum Entropy technique can successfully be applied in the deconvolution of RIXS spectra, improving the interpretation of the loss features without a severe increase in the noise ratio.
Remarks on the Maximum Entropy Method applied to finite temperature lattice QCD
Takashi Umeda; Hideo Matsufuru
2005-10-05T23:59:59.000Z
We make remarks on the Maximum Entropy Method (MEM) for studies of the spectral function of hadronic correlators in finite temperature lattice QCD. We discuss the virtues and subtlety of MEM in the cases that one does not have enough number of data points such as at finite temperature. Taking these points into account, we suggest several tests which one should examine to keep the reliability for the results, and also apply them using mock and lattice QCD data.
Holzer, Mark; Primeau, Francois W; Smethie, William M; Khatiwala, Samar
2010-01-01T23:59:59.000Z
Gull (1991), Bayesian maximum entropy image reconstruction; F. Primeau (2006), A maximum entropy approach to water mass... Southern Ocean? A maximum entropy approach to global water
Environmental impact report (draft)
Not Available
1980-05-01T23:59:59.000Z
The three projects as proposed by Pacific Gas and Electric Company and the environmental analysis of the projects are discussed. Sections on the natural and social environments of the proposed projects and their surrounding areas consist of descriptions of the setting, discussions of the adverse and beneficial consequences of the project, and potential mitigation measures to reduce the effects of adverse impacts. The Environmental Impact Report includes discussions of unavoidable adverse effects, irreversible changes, long-term and cumulative impacts, growth-inducing effects, and feasible alternatives to the project. (MHR)
Pflugrath, Brett D.; Brown, Richard S.; Carlson, Thomas J.
2012-03-01T23:59:59.000Z
This study investigated the maximum depth at which juvenile Chinook salmon Oncorhynchus tshawytscha can acclimate by attaining neutral buoyancy. Depth of neutral buoyancy is dependent upon the volume of gas within the swim bladder, which greatly influences the occurrence of injuries to fish passing through hydroturbines. We used two methods to obtain maximum swim bladder volumes that were transformed into depth estimations - the increased excess mass test (IEMT) and the swim bladder rupture test (SBRT). In the IEMT, weights were surgically added to the fish's exterior, requiring the fish to increase swim bladder volume in order to remain neutrally buoyant. SBRT entailed removing and artificially increasing swim bladder volume through decompression. From these tests, we estimate the maximum acclimation depth for juvenile Chinook salmon is a median of 6.7 m (range = 4.6-11.6 m). These findings have important implications for survival estimates, studies using tags, hydropower operations, and survival of juvenile salmon that pass through large Kaplan turbines typical of those found within the Columbia and Snake River hydropower system.
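One plausible way (not necessarily the paper's exact procedure) to turn a maximum swim-bladder volume into a maximum acclimation depth is Boyle's law with roughly 10.1 m of fresh water per atmosphere; the volume ratio below is invented, chosen only so the result lands near the reported 6.7 m median.

```python
# Hedged Boyle's-law illustration: gas gulped at the surface into a bladder of
# volume v_max compresses back to the surface neutral-buoyancy volume at depth d.
def max_acclimation_depth(v_max, v_neutral_surface, m_per_atm=10.1):
    """Depth (m) at which pressure compresses v_max back to v_neutral_surface."""
    return m_per_atm * (v_max / v_neutral_surface - 1.0)

# an assumed ratio of 1.66 gives roughly the reported 6.7 m median
print(max_acclimation_depth(v_max=1.66, v_neutral_surface=1.0))
```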
THE IMPACT OF GENERATION MIX ON PLACEMENT OF STATIC VAR COMPENSATORS
THE IMPACT OF GENERATION MIX ON PLACEMENT OF STATIC VAR COMPENSATORS Robert H. Lasseter, Fellow to provide the maximum transfer capability for all possible generation mixes. The margin to low voltage limit bus system will be used to demonstrate this method over a wide range of generation patterns. Keywords
THE IMPACT OF GENERATION MIX ON PLACEMENT OF STATIC VAR COMPENSATORS
flexible and competitive. Due to marketplace forces, the generation mix could change day to day. THE IMPACT OF GENERATION MIX ON PLACEMENT OF STATIC VAR COMPENSATORS Robert H. Lasseter, Fellow to provide maximum transfer capability for all possible generation mixes. The margin to low voltage limit
Impact of Low-Level Jets on the Nocturnal Urban Heat Island Intensity in Oklahoma City
Xue, Ming
Impact of Low-Level Jets on the Nocturnal Urban Heat Island Intensity in Oklahoma City XIAO-MING HU Center for Analysis and Prediction of Storms, University of Oklahoma, Norman, Oklahoma PETRA M. KLEIN AND MING XUE Center for Analysis and Prediction of Storms and School of Meteorology, University of Oklahoma
What Is An Environmental Impact ...
National Nuclear Security Administration (NNSA)
What Is An Environmental Impact Statement? What Is An Environmental Impact Statement? An EIS is prepared in a series of steps: gathering government and public comments to define...
Tafreshi, Hooman Vahedi
Predicting shape and stability of air-water interface on superhydrophobic surfaces comprised-differential equation for the three dimensional shape of air-water interface on superhydrophobic surfaces comprised is drawn for designing pore shapes for superhydrophobic surfaces with maximum stability. © 2012 American
Draft Environmental Impact Statement
Broader source: Energy.gov (indexed) [DOE]
Environmental Impact Statement for the Searchlight Wind Energy Project NVN-084626 and NVN-086777 DES 11-52 Bureau of Land Management Las Vegas Field Office in cooperation with...
Levi, Ran
Cool Farming: Climate impacts of agriculture and mitigation potential greenpeace.org Campaigning for meat categories as well as milk and selected plant products for comparison. Figure 1: Total global
Impacted material placement plans
Hickey, M.J.
1997-01-29T23:59:59.000Z
Impacted material placement plans (IMPP) are documents identifying the essential elements in placing remediation wastes into disposal facilities. Remediation wastes or impacted material(s) are those components used in the construction of the disposal facility exclusive of the liners and caps. The components might include soils, concrete, rubble, debris, and other regulatory approved materials. The IMPP provides the details necessary for interested parties to understand the management and construction practices at the disposal facility. The IMPP should identify the regulatory requirements from applicable DOE Orders, the ROD(s) (where a part of a CERCLA remedy), closure plans, or any other relevant agreements or regulations. Also, how the impacted material will be tracked should be described. Finally, detailed descriptions of what will be placed and how it will be placed should be included. The placement of impacted material into approved on-site disposal facilities (OSDF) is an integral part of gaining regulatory approval. To obtain this approval, a detailed plan (Impacted Material Placement Plan [IMPP]) was developed for the Fernald OSDF. The IMPP provides detailed information for the DOE, site generators, the stakeholders, regulatory community, and the construction subcontractor placing various types of impacted material within the disposal facility.
Forecasting consumer products using prediction markets
Trepte, Kai
2009-01-01T23:59:59.000Z
Prediction Markets hold the promise of improving the forecasting process. Research has shown that Prediction Markets can develop more accurate forecasts than polls or experts. Our research concentrated on analyzing Prediction ...
Availability of corona cage for predicting audible noise generated from HVDC transmission line
Nakano, Y.; Sunaga, Y.
1989-04-01T23:59:59.000Z
This paper describes a prospect that a corona cage is available for predicting audible noise (AN) generated from HVDC transmission line. This is based on the assumption that generation quantities of AN and corona current are determined by Fmax (the true maximum conductor surface gradient in the presence of space charge) regardless of the surrounding electrode arrangement. This assumption has been verified by tests using corona cages and a test line.
Availability of corona cage for predicting radio interference generated from HVDC transmission line
Nakano, Y.; Sunaga, Y. (Central Research Inst. of Electric Power Industry, Tokyo (Japan))
1990-07-01T23:59:59.000Z
This paper describes a prospect that a corona cage is available for predicting radio interference (RI) generated from HVDC transmission lines. This is based on the assumption that the generation quantity of RI is determined by Fmax (the true maximum conductor surface gradient in the presence of space charge), regardless of surrounding electrode arrangement. This assumption has been verified by tests using corona cages and a test line.
Sandia National Laboratories: Predictive Simulation of Internal...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Predictive Simulation of Internal Combustion Engines Sandia and General Motors: Advancing Clean Combustion Engines with Predictive Simulation Tools On February 14, 2013, in CRF,...
Bootstrap prediction intervals for Markov processes
Pan, Li; Politis, Dimitris
2014-01-01T23:59:59.000Z
and William R Schucany. Bootstrap prediction intervals for... Wolf and Dan Wunderli. Bootstrap joint prediction regions. ... intuitive to construct bootstrap procedures that run forward
Chen, Sheng
Blind Joint Maximum Likelihood Channel Estimation and Data Detection for Single-Input Multiple of Southampton, Southampton SO17 1BJ, U.K. Abstract--A blind adaptive scheme is proposed for joint maximum. A simulation example is used to demonstrate the effectiveness of this joint ML optimization scheme for blind
Mitchell, Richard
On Maximum Available Feedback and PID Control - 1 IEEE SMC UK&RI Applied Cybernetics © Dr Richard Mitchell 2005 ON MAXIMUM AVAILABLE FEEDBACK AND PID CONTROL Dr Richard Mitchell, Cybernetics, University frequencies A recent IEEE SMC Paper describes a robust PID controller whose phase is flat at key frequencies
Chapman, Patrick
Abstract--The many different techniques for maximum power point tracking of photovoltaic arrays on implementation. This manuscript should serve as a convenient reference for future work in photovoltaic power generation. Index Terms--maximum power point tracking, MPPT, photovoltaic, PV. I. INTRODUCTION RACKING
Pražnikar, Jure [Institute Jožef Stefan, Jamova 39, 1000 Ljubljana (Slovenia); University of Primorska, (Slovenia); Turk, Dušan, E-mail: dusan.turk@ijs.si [Institute Jožef Stefan, Jamova 39, 1000 Ljubljana (Slovenia); Center of Excellence for Integrated Approaches in Chemistry and Biology of Proteins, (Slovenia)
2014-12-01T23:59:59.000Z
The maximum-likelihood free-kick target, which calculates model error estimates from the work set and a randomly displaced model, proved superior in the accuracy and consistency of refinement of crystal structures compared with the maximum-likelihood cross-validation target, which calculates error estimates from the test set and the unperturbed model. The refinement of a molecular model is a computational procedure by which the atomic model is fitted to the diffraction data. The commonly used target in the refinement of macromolecular structures is the maximum-likelihood (ML) function, which relies on the assessment of model errors. The current ML functions rely on cross-validation. They utilize phase-error estimates that are calculated from a small fraction of diffraction data, called the test set, that are not used to fit the model. An approach has been developed that uses the work set to calculate the phase-error estimates in the ML refinement from simulating the model errors via the random displacement of atomic coordinates. It is called ML free-kick refinement as it uses the ML formulation of the target function and is based on the idea of freeing the model from the model bias imposed by the chemical energy restraints used in refinement. This approach for the calculation of error estimates is superior to the cross-validation approach: it reduces the phase error and increases the accuracy of molecular models, is more robust, provides clearer maps and may use a smaller portion of data for the test set for the calculation of R{sub free} or may leave it out completely.
A New Maximum-Likelihood Change Estimator for Two-Pass SAR Coherent Change Detection.
Wahl, Daniel E.; Yocky, David A.; Jakowatz, Charles V,
2014-09-01T23:59:59.000Z
In this paper, we derive a new optimal change metric to be used in synthetic aperture RADAR (SAR) coherent change detection (CCD). Previous CCD methods tend to produce false alarm states (showing change when there is none) in areas of the image that have a low clutter-to-noise power ratio (CNR). The new estimator does not suffer from this shortcoming. It is a surprisingly simple expression, easy to implement, and is optimal in the maximum-likelihood (ML) sense. The estimator produces very impressive results on the CCD collects that we have tested.
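The abstract does not give the new ML metric's expression, so as background the sketch below implements the conventional sliding-window sample-coherence statistic that two-pass CCD ordinarily thresholds; the window size and synthetic scenes are assumptions, not the paper's estimator.

```python
# Hedged sketch of the classical CCD statistic: windowed coherence magnitude
# between two co-registered complex SAR images (values near 1 mean no change).
import numpy as np

def sample_coherence(img1, img2, win=5):
    """Sliding-window coherence magnitude between two complex images."""
    pad = win // 2
    f = np.pad(img1, pad)
    g = np.pad(img2, pad)
    out = np.zeros(img1.shape)
    for i in range(img1.shape[0]):
        for j in range(img1.shape[1]):
            a = f[i:i + win, j:j + win]
            b = g[i:i + win, j:j + win]
            num = np.abs(np.sum(a * np.conj(b)))
            den = np.sqrt(np.sum(np.abs(a) ** 2) * np.sum(np.abs(b) ** 2))
            out[i, j] = num / max(den, 1e-12)
    return out

rng = np.random.default_rng(3)
scene = rng.normal(size=(32, 32)) + 1j * rng.normal(size=(32, 32))
gamma = sample_coherence(scene, scene)     # identical passes -> coherence close to 1
print(gamma.mean())
```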
Dynamics of multi-modes maximum entangled coherent state over amplitude damping channel
A. El Allati; Y. Hassouni; N. Metwally
2012-02-18T23:59:59.000Z
The dynamics of a maximally entangled coherent state travelling through an amplitude damping channel are investigated. For small values of the transmissivity rate the travelling state is very fragile to this noise channel, where it suffers from the phase flip error with high probability. The entanglement decays smoothly for larger values of the transmissivity rate and rapidly for smaller values of this rate. As the number of modes increases, the travelling state over this noise channel loses its entanglement quickly. The odd and even states vanish at the same value of the field intensity.
On Weyl channels being covariant with respect to the maximum commutative group of unitaries
G. G. Amosov
2006-08-10T23:59:59.000Z
We investigate the Weyl channels being covariant with respect to the maximum commutative group of unitary operators. This class includes the quantum depolarizing channel and the "two-Pauli" channel as well. Then, we show that our estimation of the output entropy for a tensor product of the phase damping channel and the identity channel based upon the decreasing property of the relative entropy allows us to prove the additivity conjecture for the minimal output entropy for the quantum depolarizing channel in any prime dimension and for the "two-Pauli" channel in the qubit case.
A reliable, fast and low cost maximum power point tracker for photovoltaic applications
Enrique, J.M.; Andujar, J.M.; Bohorquez, M.A. [Departamento de Ingenieria Electronica, de Sistemas Informaticos y Automatica, Universidad de Huelva (Spain)
2010-01-15T23:59:59.000Z
This work presents a new maximum power point tracker system for photovoltaic applications. The developed system is an analog version of the "P and O-oriented" algorithm. It maintains its main advantages: simplicity, reliability and easy practical implementation, and avoids its main disadvantages: inaccuracy and relatively slow response. Additionally, the developed system can be implemented in a practical way at a low cost, which means an added value. The system also shows an excellent behavior under very fast variations in incident radiation levels. (author)
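For context, the perturb-and-observe logic that the analog tracker mirrors can be sketched in a few lines: perturb the operating voltage, keep the direction if power rose, reverse it otherwise; the toy panel model, starting voltage and step size are assumptions, not the paper's circuit.

```python
# Hedged sketch of the perturb-and-observe (P&O) MPPT logic.
def perturb_and_observe(measure_pv, v_start=17.0, dv=0.1, steps=200):
    """measure_pv(v) -> (current, power) at operating voltage v."""
    v, direction = v_start, +1
    _, p_prev = measure_pv(v)
    for _ in range(steps):
        v += direction * dv                 # perturb the operating voltage
        _, p = measure_pv(v)                # observe the resulting power
        if p < p_prev:                      # power fell: reverse the next perturbation
            direction = -direction
        p_prev = p
    return v

# toy panel model with a power peak near 17.5 V (purely illustrative)
def fake_panel(v):
    i = max(0.0, 5.0 * (1 - ((v - 17.5) / 4.0) ** 2))
    return i, i * v

print("tracked operating voltage:", perturb_and_observe(fake_panel))
```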
Study on Two Optimization Problems: Line Cover and Maximum Genus Embedding
Cao, Cheng
2012-07-16T23:59:59.000Z
programming duality. We define a new problem called the Non-collinear Packing Problem (NPP) as the following: Definition B.1. Non-collinear Packing Problem. Given a set P of n points on the Euclidean plane R^2, find a maximum subset S of P of non-collinear points, i.e. any three points are not collinear. Before proving the duality between NPP and LCP, we need to show how to formulate both problems as linear programs. To formulate the linear programming form for instances of LCP and NPP, we use a few...
Application of Maximum Entropy Method to Lattice Field Theory with a Topological Term
M. Imachi; Y. Shinno; H. Yoneyama
2003-09-22T23:59:59.000Z
In Monte Carlo simulation, lattice field theory with a $\theta$ term suffers from the sign problem. This problem can be circumvented by Fourier-transforming the topological charge distribution $P(Q)$. Although this strategy works well for small lattice volume, the effect of errors of $P(Q)$ becomes serious with increasing volume and prevents one from studying the phase structure. This is called flattening. As an alternative approach, we apply the maximum entropy method (MEM) to the Gaussian $P(Q)$. It is found that the flattening could be much improved by use of the MEM.
Conditional maximum-entropy method for selecting prior distributions in Bayesian statistics
Abe, Sumiyoshi
2014-01-01T23:59:59.000Z
The conditional maximum-entropy method (abbreviated here as C-MaxEnt) is formulated for selecting prior probability distributions in Bayesian statistics for parameter estimation. This method is inspired by a statistical-mechanical approach to systems governed by dynamics with largely-separated time scales and is based on three key concepts: conjugate pairs of variables, dimensionless integration measures with coarse-graining factors and partial maximization of the joint entropy. The method enables one to calculate a prior purely from a likelihood in a simple way. It is shown in particular how it not only yields Jeffreys's rules but also reveals new structures hidden behind them.
Charmonium spectra at finite temperature from QCD sum rules with the maximum entropy method
Philipp Gubler; Kenji Morita; Makoto Oka
2011-08-30T23:59:59.000Z
Charmonia spectral functions at finite temperature are studied using QCD sum rules in combination with the maximum entropy method. This approach enables us to directly obtain the spectral function from the sum rules, without having to introduce any specific assumption about its functional form. As a result, it is found that while J/psi and eta_c manifest themselves as significant peaks in the spectral function below the deconfinement temperature T_c, they quickly dissolve into the continuum and almost completely disappear at temperatures between 1.0 T_c and 1.1 T_c.
H. Rudolf Fiebig
2002-10-31T23:59:59.000Z
We study various aspects of extracting spectral information from time correlation functions of lattice QCD by means of Bayesian inference with an entropic prior, the maximum entropy method (MEM). Correlator functions of a heavy-light meson-meson system serve as a repository for lattice data with diverse statistical quality. Attention is given to spectral mass density functions, inferred from the data, and their dependence on the parameters of the MEM. We propose to employ simulated annealing, or cooling, to solve the Bayesian inference problem, and discuss practical issues of the approach.
Maximum Entropy and the Stress Distribution in Soft Disk Packings Above Jamming
Yegang Wu; S. Teitel
2014-10-17T23:59:59.000Z
We show that the maximum entropy hypothesis can successfully explain the distribution of stresses on compact clusters of particles within disordered mechanically stable packings of soft, isotropically stressed, frictionless disks above the jamming transition. We show that, in our two dimensional case, it becomes necessary to consider not only the stress but also the Maxwell-Cremona force-tile area, as a constraining variable that determines the stress distribution. The importance of the force-tile area was suggested by earlier computations on an idealized force-network ensemble.
Spectral Functions, Maximum Entropy Method and Unconventional Methods in Lattice Field Theory
Chris Allton; Danielle Blythe; Jonathan Clowser
2002-04-26T23:59:59.000Z
We present two unconventional methods of extracting information from hadronic 2-point functions produced by Monte Carlo simulations. The first is an extension of earlier work by Leinweber which combines a QCD Sum Rule approach with lattice data. The second uses the Maximum Entropy Method to invert the 2-point data to obtain estimates of the spectral function. The first approach is applied to QCD data, and the second method is applied to the Nambu--Jona-Lasinio model in (2+1)D. Both methods promise to augment the current approach where physical quantities are extracted by fitting to pure exponentials.
On Weyl channels being covariant with respect to the maximum commutative group of unitaries
Amosov, Grigori G. [Department of Higher Mathematics, Moscow Institute of Physics and Technology, Dolgoprudny 141700 (Russian Federation)
2007-01-15T23:59:59.000Z
We investigate the Weyl channels being covariant with respect to the maximum commutative group of unitary operators. This class includes the quantum depolarizing channel and the 'two-Pauli' channel as well. Then, we show that our estimation of the output entropy for a tensor product of the phase damping channel and the identity channel, based upon the decreasing property of the relative entropy, allows us to prove the additivity conjecture for the minimal output entropy for the quantum depolarizing channel in any prime dimension and for the two-Pauli channel in the qubit case.
Maximum entropy analysis of hadron spectral functions and excited states in quenched lattice QCD
CP-PACS Collaboration; S. Aoki; R. Burkhalter; M. Fukugita; S. Hashimoto; N. Ishizuka; Y. Iwasaki; K. Kanaya; T. Kaneko; Y. Kuramashi; M. Okawa; Y. Taniguchi; A. Ukawa; T. Yamazaki; T. Yoshié
2001-10-16T23:59:59.000Z
Employing the maximum entropy method we extract the spectral functions from meson correlators at four lattice spacings in quenched QCD with the Wilson quark action. We confirm that the masses and decay constants, obtained from the position and the area of peaks, agree well with the results from the conventional exponential fit. For the first excited state, we obtain $m_{\pi_1} = 660(590)$ MeV, $m_{\rho_1} = 1540(570)$ MeV, and $f_{\rho_1} = 0.085(36)$ in the continuum limit.
Community Impact Analysis Emerging Approaches
Minnesota, University of
Slide excerpts: requirements to prepare an environmental impact statement; when an environmental impact statement is prepared, it will discuss all of these effects on the human environment. Outline topics include Priorities, the 1960s, and Title VI of the Civil Rights Act of 1964.
Journal Information Journal Impact Factor
Krejcí, Pavel
Journal information from the 2012 JCR Science Edition for CZECHOSLOVAK MATHEMATICAL JOURNAL, listing ISSN, Total Cites, Journal Impact Factor, 5-Year Journal Impact Factor, Journal Self Cites, Journal Immediacy Index, Journal Cited Half-Life and Citable Items.
A novel paleogeographic analysis applying a recent paleomagnetic pole from the Faeroe Islands indicates ... atmospheric greenhouse gas concentrations. The climatic event led to changes in the hydrologic cycle, which ... bacteria and production as a fallout condensate following a cometary impact. Magnetotactic bacteria ...
Space Perception by Visuokinesthetic Prediction
Moeller, Ralf
We propose a robot model of space perception in a restricted domain in which a robot arm pushes a small block. The model predicts the visual image of the gripper tool and the kinesthetic state of the robot arm after a small movement which would move the gripper of the robot arm from its current position to a position where it would ...
Deforestation Deforestation predictions for Amazonia
Camara, Gilberto
Amazon Deforestation Models. Deforestation predictions for Amazonia presented by W. F. Laurance et al. ... Much has already been said by the scientific community about their model--its apocalyptical ... ("Deforestation in Amazonia," 21 May 2004, p. 1109), blaming planned infrastructure and the land speculation ...
TRITIUM RESERVOIR STRUCTURAL PERFORMANCE PREDICTION
Lam, P.S.; Morgan, M.J.
2005-11-10T23:59:59.000Z
The burst test is used to assess the material performance of tritium reservoirs in the surveillance program in which reservoirs have been in service for extended periods of time. A materials system model and finite element procedure were developed under a Savannah River Site Plant-Directed Research and Development (PDRD) program to predict the structural response under a full range of loading and aged material conditions of the reservoir. The results show that the predicted burst pressure and volume ductility are in good agreement with the actual burst test results for the unexposed units. The material tensile properties used in the calculations were obtained from a curved tensile specimen harvested from a companion reservoir by Electric Discharge Machining (EDM). In the absence of exposed and aged material tensile data, literature data were used for demonstrating the methodology in terms of the helium-3 concentration in the metal and the depth of penetration in the reservoir sidewall. It can be shown that the volume ductility decreases significantly with the presence of tritium and its decay product, helium-3, in the metal, as was observed in the laboratory-controlled burst tests. The model and analytical procedure provide a predictive tool for reservoir structural integrity under aging conditions. It is recommended that benchmark tests and analysis for aged materials be performed. The methodology can be augmented to predict performance for reservoirs with flaws.
A prediction for bubbling geometries
Takuya Okuda
2008-02-11T23:59:59.000Z
We study the supersymmetric circular Wilson loops in N=4 Yang-Mills theory. Their vacuum expectation values are computed in the parameter region that admits smooth bubbling geometry duals. The results are a prediction for the supergravity action evaluated on the bubbling geometries for Wilson loops.
The ACT^2 project: Demonstration of maximum energy efficiency in real buildings
Crawley, D.B. [Pacific Northwest Lab., Richland, WA (United States); Krieg, B.L. [Pacific Gas and Electric Co., San Ramon, CA (United States)
1991-11-01T23:59:59.000Z
A large US utility recently began a project to determine whether the use of new energy-efficient end-use technologies and systems would economically achieve substantial energy savings (perhaps as high as 75% over current practice). Using a field-based demonstration approach, the Advanced Customer Technology Test (ACT^2) for Maximum Energy Efficiency is providing information on the maximum energy savings possible when integrated packages of new high-efficiency end-use technologies are incorporated into commercial and residential buildings and industrial and agricultural processes. This paper details the underlying rationale, approach, results to date, and future plans for ACT^2. The ultimate goal is energy efficiency (doing more with less energy) rather than energy conservation (freezing in the dark). In this paper, we first explain why a major United States utility is committed to pursuing demand-side management so aggressively. Next, we discuss the approach the utility chose for conducting the ACT^2 project. We then review results obtained to date from the project's pilot demonstration site. Last, we describe other related demonstration projects being proposed by the utility.
The ACT^2 project: Demonstration of maximum energy efficiency in real buildings
Crawley, D.B. (Pacific Northwest Lab., Richland, WA (United States)); Krieg, B.L. (Pacific Gas and Electric Co., San Ramon, CA (United States))
1991-11-01T23:59:59.000Z
A large US utility recently began a project to determine whether the use of new energy-efficient end-use technologies and systems would economically achieve substantial energy savings (perhaps as high as 75% over current practice). Using a field-based demonstration approach, the Advanced Customer Technology Test (ACT^2) for Maximum Energy Efficiency is providing information on the maximum energy savings possible when integrated packages of new high-efficiency end-use technologies are incorporated into commercial and residential buildings and industrial and agricultural processes. This paper details the underlying rationale, approach, results to date, and future plans for ACT^2. The ultimate goal is energy efficiency (doing more with less energy) rather than energy conservation (freezing in the dark). In this paper, we first explain why a major United States utility is committed to pursuing demand-side management so aggressively. Next, we discuss the approach the utility chose for conducting the ACT^2 project. We then review results obtained to date from the project's pilot demonstration site. Last, we describe other related demonstration projects being proposed by the utility.
Trovato, M. [Dipartimento di Matematica, Universita di Catania, Viale A. Doria, I-95125 Catania (Italy); Reggiani, L. [Dipartimento di Ingegneria dell' Innovazione and CNISM, Universita del Salento, Via Arnesano s/n, I-73100 Lecce (Italy)
2011-12-15T23:59:59.000Z
By introducing a quantum entropy functional of the reduced density matrix, the principle of quantum maximum entropy is asserted as fundamental principle of quantum statistical mechanics. Accordingly, we develop a comprehensive theoretical formalism to construct rigorously a closed quantum hydrodynamic transport within a Wigner function approach. The theoretical formalism is formulated in both thermodynamic equilibrium and nonequilibrium conditions, and the quantum contributions are obtained by only assuming that the Lagrange multipliers can be expanded in powers of ℏ^2. In particular, by using an arbitrary number of moments, we prove that (1) on a macroscopic scale all nonlocal effects, compatible with the uncertainty principle, are imputable to high-order spatial derivatives, both of the numerical density n and of the effective temperature T; (2) the results available from the literature in the framework of both a quantum Boltzmann gas and a degenerate quantum Fermi gas are recovered as a particular case; (3) the statistics for the quantum Fermi and Bose gases at different levels of degeneracy are explicitly incorporated; (4) a set of relevant applications admitting exact analytical equations are explicitly given and discussed; (5) the quantum maximum entropy principle keeps full validity in the classical limit, when ℏ → 0.
Murton, Mark; Bouchier, Francis A.; vanDongen, Dale T.; Mack, Thomas Kimball; Cutler, Robert Paul; Ross, Michael P.
2013-08-01T23:59:59.000Z
Although technological advances provide new capabilities to increase the robustness of security systems, they also potentially introduce new vulnerabilities. New capability sometimes requires new performance requirements. This paper outlines an approach to establishing a key performance requirement for an emerging intrusion detection sensor: the sensored net. Throughout the security industry, the commonly adopted standard for maximum opening size through barriers is a requirement based on square inches, typically 96 square inches. Unlike a standard rigid opening, the dimensions of a flexible aperture are not fixed, but variable and conformable. It is demonstrably simple for a human intruder to move through a 96-square-inch opening that is conformable to the human body. The longstanding 96-square-inch requirement itself, though firmly embedded in policy and best practice, lacks a documented empirical basis. This analysis concluded that the traditional 96-square-inch standard for openings is insufficient for flexible openings that are conformable to the human body. Instead, a circumference standard is recommended for these newer types of sensored barriers. The recommended maximum circumference for a flexible opening should be no more than 26 inches, as measured on the inside of the netting material.
Bounds and phase diagram of efficiency at maximum power for tight-coupling molecular motors
Z. C. Tu
2013-02-08T23:59:59.000Z
The efficiency at maximum power (EMP) for tight-coupling molecular motors is investigated within the framework of irreversible thermodynamics. It is found that the EMP depends merely on the constitutive relation between the thermodynamic current and force. The motors are classified into four generic types (linear, superlinear, sublinear, and mixed types) according to the characteristics of the constitutive relation, and then the corresponding ranges of the EMP for these four types of molecular motors are obtained. The exact bounds of the EMP are derived and expressed as the explicit functions of the free energy released by the fuel in each motor step. A phase diagram is constructed which clearly shows how the region where the parameters (the load distribution factor and the free energy released by the fuel in each motor step) are located can determine whether the value of the EMP is larger or smaller than 1/2. This phase diagram reveals that motors using ATP as fuel under physiological conditions can work at maximum power with higher efficiency ($>1/2$) for a small load distribution factor ($<0.1$).
Robert Felix Tournier
2015-02-23T23:59:59.000Z
An undercooled liquid is unstable. The driving force of the glass transition at Tg is a change of the undercooled-liquid Gibbs free energy. The classical Gibbs free energy change for a crystal formation is completed including an enthalpy saving. The crystal growth critical nucleus is used as a probe to observe the Laplace pressure change Δp accompanying the enthalpy change -Vm·Δp at Tg where Vm is the molar volume. A stable glass-liquid transition model predicts the specific heat jump of fragile liquids at temperatures smaller than Tg, the Kauzmann temperature TK where the liquid entropy excess with regard to crystal goes to zero, the equilibrium enthalpy between TK and Tg, the maximum nucleation rate at TK of superclusters containing magic atom numbers, and the equilibrium latent heats at Tg and TK. Strong-to-fragile and strong-to-strong liquid transitions at Tg are also described and all their thermodynamic parameters are determined from their specific heat jumps. The existence of fragile liquids quenched in the amorphous state, which do not undergo liquid-liquid transition during heating preceding their crystallization, is predicted. Long ageing times leading to the formation at TK of a stable glass composed of superclusters containing up to 147 atoms, touching and interpenetrating, are evaluated from nucleation rates. A fragile-to-fragile liquid transition occurs at Tg without stable-glass formation while a strong glass is stable after transition.
Bootstrap Prediction Intervals for Time Series /
Pan, Li
2013-01-01T23:59:59.000Z
Table-of-contents excerpts: Local Bootstrap; 1.6 Generalized Bootstrap prediction; Sieve/PRR Bootstrap.
Intelligent wind power prediction systems final report
Intelligent wind power prediction systems, final report. Henrik Aalborg Nielsen (FU 4101). Ens. journal number: 79029-0001. Project title: Intelligent wind power prediction systems. Contents: 1 Introduction; 2 The Wind Power Prediction Tool; 3 ...
FINAL DRAFT VI. Application 3: Recruitment Prediction
Miller, Tom
VI. Application 3: Recruitment Prediction. Contributors: S. Sarah Hinckley, Bernard Megrey, Thomas Miller. Definition: What do we mean by recruitment prediction? The first thing to consider in defining this term is the time horizon of the prediction. Short-term predictions mean the use of individual ...
STOCHASTIC METHODS FOR THE PREDICTION OF
New York at Stoney Brook, State University of
STOCHASTIC METHODS FOR THE PREDICTION OF COMPLEX MULTISCALE PHENOMENA. James Glimm, ... Los Alamos, NM 87545. Abstract: The purpose of this paper is to develop a general framework for the prediction of ... current interest to the authors. Prediction involves a two-step process of inverse prediction to describe ...
Checkpointing strategies with prediction windows Regular paper
Paris-Sud XI, Université de
Checkpointing strategies with prediction windows. Regular paper. Guillaume Aupy, Yves Robert, ... a regular mode outside prediction windows, and a proactive mode inside prediction windows, whenever the size of these windows is large enough. We are able to compute the best period for any size of the prediction windows ...
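The abstract mentions computing the best checkpointing period; as a point of reference, here is a minimal sketch of the classical Young/Daly first-order period for the regular mode, not the paper's prediction-window-aware formula, and the platform numbers are made up.

```python
# Classical Young/Daly first-order checkpointing period, shown only as a
# baseline for the "regular" mode; the paper's window-aware periods are not
# reproduced here.  MTBF and checkpoint cost are made-up values.
import math

def young_daly_period(mtbf_seconds: float, checkpoint_cost: float) -> float:
    """First-order optimal time between checkpoints: sqrt(2 * C * MTBF)."""
    return math.sqrt(2.0 * checkpoint_cost * mtbf_seconds)

mtbf = 24 * 3600.0   # hypothetical platform MTBF: one day
cost = 600.0         # hypothetical checkpoint cost: 10 minutes
print(f"regular-mode period ~ {young_daly_period(mtbf, cost) / 3600.0:.2f} hours")
```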
Asphaug, Erik; Jutzi, Martin
2015-01-01T23:59:59.000Z
Global scale impacts modify the physical or thermal state of a substantial fraction of a target asteroid. Specific effects include accretion, family formation, reshaping, mixing and layering, shock and frictional heating, fragmentation, material compaction, dilatation, stripping of mantle and crust, and seismic degradation. Deciphering the complicated record of global scale impacts, in asteroids and meteorites, will lead us to understand the original planet-forming process and its resultant populations, and their evolution in time as collisions became faster and fewer. We provide a brief overview of these ideas, and an introduction to models.
Nizkorodov, Sergey
Aerosols. David R. Fooshee, Tran B. Nguyen, Sergey A. Nizkorodov, Julia Laskin, Alexander Laskin. ... aerosols (OA) represent a significant fraction of airborne particulate matter and can impact climate ... COBRA is not limited to atmospheric aerosol chemistry; it should be applicable to the prediction ...
RISK PREDICTION OF A BEHAVIOR-BASED ADHESION CONTROL NETWORK FOR ONLINE SAFETY ANALYSIS OF
Berns, Karsten
... by default. But for wheeled driving on concrete walls via negative pressure adhesion, a prediction of risks ... limited payload. Also the impact of features like surface roughness, sheathing defects, porous areas ... is designed to be used for inspections of large concrete buildings as depicted in figure 1.
Predicting and mitigating the global warming potential of agro-ecosystems
Paris-Sud XI, UniversitÃ© de
Predicting and mitigating the global warming potential of agro-ecosystems. S. Lehuger, B. ... and methane are the main biogenic greenhouse gases (GHG) contributing to the global warming potential (GWP) ... to design productive agro-ecosystems with low global warming impact. Keywords: global warming potential
Predicting fracture in micron-scale polycrystalline silicon MEMS structures.
Hazra, Siddharth S. (Carnegie Mellon University, Pittsburgh, PA); de Boer, Maarten Pieter (Carnegie Mellon University, Pittsburgh, PA); Boyce, Brad Lee; Ohlhausen, James Anthony; Foulk, James W., III; Reedy, Earl David, Jr.
2010-09-01T23:59:59.000Z
Designing reliable MEMS structures presents numerous challenges. Polycrystalline silicon fractures in a brittle manner with considerable variability in measured strength. Furthermore, it is not clear how to use a measured tensile strength distribution to predict the strength of a complex MEMS structure. To address such issues, two recently developed high throughput MEMS tensile test techniques have been used to measure strength distribution tails. The measured tensile strength distributions enable the definition of a threshold strength as well as an inferred maximum flaw size. The nature of strength-controlling flaws has been identified and sources of the observed variation in strength investigated. A double edge-notched specimen geometry was also tested to study the effect of a severe, micron-scale stress concentration on the measured strength distribution. Strength-based, Weibull-based, and fracture mechanics-based failure analyses were performed and compared with the experimental results.
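The Weibull-based failure analysis mentioned in the abstract can be illustrated with a hedged sketch: fit a three-parameter Weibull distribution to a strength sample and evaluate the failure probability at an applied stress. The "measured" strengths below are synthetic, not the paper's data, and the parameter values are arbitrary.

```python
# Illustrative Weibull strength analysis (synthetic data, not the paper's
# measurements): fit a three-parameter Weibull and evaluate failure probability.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Synthetic brittle strengths (GPa): shape m=8, scale 3 GPa, 1 GPa threshold.
strengths = 1.0 + 3.0 * rng.weibull(8.0, size=200)

# scipy's weibull_min uses c=shape, loc=threshold, scale=characteristic strength.
m_hat, thresh_hat, scale_hat = stats.weibull_min.fit(strengths)
print(f"shape m = {m_hat:.2f}, threshold = {thresh_hat:.2f} GPa, scale = {scale_hat:.2f} GPa")

applied = 2.5   # hypothetical applied stress, GPa
p_fail = stats.weibull_min.cdf(applied, m_hat, loc=thresh_hat, scale=scale_hat)
print(f"P(failure) at {applied} GPa ~ {p_fail:.3f}")
```

The fitted threshold parameter plays the role of the "threshold strength" the abstract defines, below which the failure probability is essentially zero in this simple model.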
Predicting low-frequency radio fluxes of known extrasolar planets
Grießmeier, J.-M.; Spreeuw, H.
2008-01-01T23:59:59.000Z
Context. Close-in giant extrasolar planets ("Hot Jupiters") are believed to be strong emitters in the decametric radio range. Aims. We present the expected characteristics of the low-frequency magnetospheric radio emission of all currently known extrasolar planets, including the maximum emission frequency and the expected radio flux. We also discuss the escape of exoplanetary radio emission from the vicinity of its source, which imposes additional constraints on detectability. Methods. We compare the different predictions obtained with all four existing analytical models for all currently known exoplanets. We also take care to use realistic values for all input parameters. Results. The four different models for planetary radio emission lead to very different results. The largest fluxes are found for the magnetic energy model, followed by the CME model and the kinetic energy model (for which our results are found to be much less optimistic than those of previous studies). The unipolar interaction model does ...
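The maximum emission frequency mentioned in the abstract is commonly estimated from the electron-cyclotron relation at the planet's polar magnetic field; here is a hedged sketch using a Jupiter-like illustrative field value, not a number taken from the paper.

```python
# Hedged sketch: the standard electron-cyclotron estimate of the maximum
# (cutoff) frequency of planetary radio emission, f_max ~ e*B/(2*pi*m_e).
# The polar field value is a Jupiter-like illustrative number, not from the paper.
import math

E_CHARGE = 1.602176634e-19      # C
M_ELECTRON = 9.1093837015e-31   # kg

def cyclotron_frequency_hz(b_tesla: float) -> float:
    return E_CHARGE * b_tesla / (2.0 * math.pi * M_ELECTRON)

b_polar = 14e-4                 # ~14 gauss expressed in tesla
print(f"maximum emission frequency ~ {cyclotron_frequency_hz(b_polar) / 1e6:.0f} MHz")
```

For a Jupiter-like field of about 14 gauss this gives a cutoff of roughly 40 MHz, which is why such emission is sought in the decametric range.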
Simulated combined abnormal environment fire calculations for aviation impacts.
Brown, Alexander L.
2010-08-01T23:59:59.000Z
Aircraft impacts at flight speeds are relevant environments for aircraft safety studies. This type of environment pertains to normal environments such as wildlife impacts and rough landings, but also the abnormal environment that has more recently been evidenced in cases such as the Pentagon and World Trade Center events of September 11, 2001, and the FBI building impact in Austin. For more severe impacts, the environment is combined because it involves not just the structural mechanics, but also the release of the fuel and the subsequent fire. Impacts normally last on the order of milliseconds to seconds, whereas the fire dynamics may last for minutes to hours, or longer. This presents a serious challenge for physical models that employ discrete time stepping to model the dynamics with accuracy. Another challenge is that the capabilities to model the fire and structural impact are seldom found in a common simulation tool. Sandia National Labs maintains two codes under a common architecture that have been used to model the dynamics of aircraft impact and fire scenarios. Only recently have these codes been coupled directly to provide a fire prediction that is better informed on the basis of a detailed structural calculation. To enable this technology, several facilitating models are necessary, as is a methodology for determining and executing the transfer of information from the structural code to the fire code. A methodology has been developed and implemented. Previous test programs at the Sandia National Labs sled track provide unique data for the dynamic response of an aluminum tank of liquid water impacting a barricade at flight speeds. These data are used to validate the modeling effort, and suggest reasonable accuracy for the dispersion of a non-combustible fluid in an impact environment. The capability is also demonstrated with a notional impact of a fuel-filled container at flight speed. Both of these scenarios are used to evaluate numeric approximations, and help provide an understanding of the quantitative accuracy of the modeling methods.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Phillips, Claire L.; Gregg, Jillian W. [Terrestrial Ecosystems Research Associates, 200 SW 35th St., Corvallis, OR 97333, USA]; Wilson, John K. [Terrestrial Ecosystems Research Associates, 200 SW 35th St., Corvallis, OR 97333, USA]
2011-11-01T23:59:59.000Z
Daily minimum temperature (Tmin) has increased faster than daily maximum temperature (Tmax) in many parts of the world, leading to decreases in diurnal temperature range (DTR). Projections suggest these trends are likely to continue in many regions, particularly northern latitudes and in arid regions. Despite wide speculation that asymmetric warming has different impacts on plant and ecosystem production than equal-night-and-day warming, there has been little direct comparison of these scenarios. Reduced DTR has also been widely misinterpreted as a result of night-only warming, when in fact Tmin occurs near dawn, indicating higher morning as well as night temperatures. We report on the first experiment to examine ecosystem-scale impacts of faster increases in Tmin than Tmax, using precise temperature controls to create realistic diurnal temperature profiles with gradual day-night temperature transitions and elevated early morning as well as night temperatures. Studying a constructed grassland ecosystem containing species native to Oregon, USA, we found the ecosystem lost more carbon at elevated than ambient temperatures, but was unaffected by the 3ºC difference in DTR between symmetric warming (constantly ambient +3.5ºC) and asymmetric warming (dawn Tmin=ambient +5ºC, afternoon Tmax= ambient +2ºC). Reducing DTR had no apparent effect on photosynthesis, likely because temperatures were most different in the morning and late afternoon when light was low. Respiration was also similar in both warming treatments, because respiration temperature sensitivity was not sufficient to respond to the limited temperature differences between asymmetric and symmetric warming. We concluded that changes in daily mean temperatures, rather than changes in Tmin/Tmax, were sufficient for predicting ecosystem carbon fluxes in this reconstructed Mediterranean grassland system.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Phillips, Claire L.; Gregg, Jillian W.; Wilson, John K.
2011-11-01T23:59:59.000Z
Daily minimum temperature (Tmin) has increased faster than daily maximum temperature (Tmax) in many parts of the world, leading to decreases in diurnal temperature range (DTR). Projections suggest these trends are likely to continue in many regions, particularly northern latitudes and in arid regions. Despite wide speculation that asymmetric warming has different impacts on plant and ecosystem production than equal-night-and-day warming, there has been little direct comparison of these scenarios. Reduced DTR has also been widely misinterpreted as a result of night-only warming, when in fact Tmin occurs near dawn, indicating higher morning as well as night temperatures. We report on the first experiment to examine ecosystem-scale impacts of faster increases in Tmin than Tmax, using precise temperature controls to create realistic diurnal temperature profiles with gradual day-night temperature transitions and elevated early morning as well as night temperatures. Studying a constructed grassland ecosystem containing species native to Oregon, USA, we found the ecosystem lost more carbon at elevated than ambient temperatures, but was unaffected by the 3ºC difference in DTR between symmetric warming (constantly ambient +3.5ºC) and asymmetric warming (dawn Tmin=ambient +5ºC, afternoon Tmax= ambient +2ºC). Reducing DTR had no apparent effect on photosynthesis, likely because temperatures were most different in the morning and late afternoon when light was low. Respiration was also similar in both warming treatments, because respiration temperature sensitivity was not sufficient to respond to the limited temperature differences between asymmetric and symmetric warming. We concluded that changes in daily mean temperatures, rather than changes in Tmin/Tmax, were sufficient for predicting ecosystem carbon fluxes in this reconstructed Mediterranean grassland system.
Matus, Kira J. (Kira Jen)
2005-01-01T23:59:59.000Z
In China, elevated levels of urban air pollution result in significant adverse health impacts for its large and rapidly growing urban population. An expanded version of the Emissions Prediction and Policy Analysis (EPPA), ...
THE IMPACT OF THERMAL ENGINEERING RESEARCH ON GLOBAL CLIMATE CHANGE
Phelan, Patrick [Arizona State University; Abdelaziz, Omar [ORNL; Otanicar, Todd [University of Tulsa; Phelan, Bernadette [Phelan Research Solutions, Inc.; Prasher, Ravi [Arizona State University; Taylor, Robert [University of New South Wales, Sydney, Australia; Tyagi, Himanshu [Indian Institute of Technology Ropar, India
2014-01-01T23:59:59.000Z
Global climate change is recognized by many people around the world as being one of the most pressing issues facing our society today. The thermal engineering research community clearly plays an important role in addressing this critical issue, but what kind of thermal engineering research is, or will be, most impactful? In other words, in what directions should thermal engineering research be targeted in order to derive the greatest benefit with respect to global climate change? To answer this question we consider the potential reduction in greenhouse gas (GHG) emissions, coupled with potential economic impacts, resulting from thermal engineering research. Here a new model framework is introduced that allows a technological, sector-by-sector analysis of GHG emissions avoidance. For each sector, we consider the maximum reduction in CO2 emissions due to such research, and the cost effectiveness of the new efficient technologies. The results are normalized on a country-by-country basis, where we consider the USA, the European Union, China, India, and Australia as representative countries or regions. Among energy supply-side technologies, improvements in coal-burning power generation are seen as having the most beneficial CO2 and economic impacts. The one demand-side technology considered, residential space cooling, offers positive but limited impacts. The proposed framework can be extended to include additional technologies and impacts, such as water consumption.
Ferrara, Emilio
Journal title and 2010 Impact Factor: Information Sciences, 2.833; Web Semantics, 2.789; Artificial Intelligence, 2.511; Future Generation Computer Systems, 2.365; International Journal of Medical Informatics, 2.244; Applied Soft Computing Journal, 2.084; Expert Systems With Applications, 1.924; Fuzzy Sets And Systems, 1...
Environmental Impacts of Nanotechnology
Zhang, Junshan
Environmental Impacts of Nanotechnology. Paul Westerhoff, Ph.D., PE, Professor and Chair, Civil ... Proposed Center for Environmental Implications of Nanotechnology (CEIN); successes by ASU researchers ... of nanotechnology? Nanomaterials are used in everyday life (> 500 products to date), e.g. nano-silver in bandages.
Extended foundations of stochastic prediction
Sergey Kamenshchikov
2014-06-28T23:59:59.000Z
The basic purpose of this work was to suggest a universal quantitative description of the intermediate bifurcation of an ergodic system and the obligatory conditions of this transition. Conditions for the existence of a phase state and a first-order phase transition were introduced in terms of the energy balance for a unit of system volume. An extended Fokker-Planck equation with a time-dependent diffusion factor was formulated. It turned out that for an ergodic system with a fixed boundary a quantized energy spectrum of phase-stable states exists. The obtained results may be applied to the prediction of ergodic system behavior. If the isolation condition is satisfied, phase spectrum quantization allows selecting proper control parameters for system stabilization. Information about the current coarsened energy of the system allows prediction of future stochastic system behavior on the basis of the extended Fokker-Planck model.
Catalysis-by-design impacts assessment
Fassbender, L L; Young, J K [Pacific Northwest Lab., Richland, WA (USA); Sen, R K [Sen (R.K.) and Associates, Washington, DC (USA)
1991-05-01T23:59:59.000Z
Catalyst researchers have always recognized the need to develop a detailed understanding of the mechanisms of catalytic processes, and have hoped that it would lead to developing a theoretical predictive base to guide the search for new catalysts. This understanding allows one to develop a set of hierarchical models, from fundamental atomic-level ab-initio models to detailed engineering simulations of reactor systems, to direct the search for optimized, efficient catalyst systems. During the last two decades, the explosion of advanced surface analysis techniques has helped considerably to develop the building blocks for understanding various catalytic reactions. An effort to couple these theoretical and experimental advances to develop a set of hierarchical models to predict the nature of catalytic materials is a program entitled "Catalysis-by-Design" (CBD). In assessing the potential impacts of CBD on US industry, the key point to remember is that the value of the program lies in developing a novel methodology to search for new catalyst systems. Industrial researchers can then use this methodology to develop proprietary catalysts. Most companies involved in catalyst R&D have two types of ongoing projects. The first type, what we call "market-driven R&D," are projects that support and improve upon a company's existing product lines. Projects of the second type, "technology-driven R&D," are longer term, involve the development of totally new catalysts, and are initiated through scientists' research ideas. The CBD approach will impact both types of projects. However, this analysis indicates that the near-term impacts will be on "market-driven" projects. The conclusions and recommendations presented in this report were obtained by the authors through personal interviews with individuals involved in a variety of industrial catalyst development programs and through the three CBD workshops held in the summer of 1989. 34 refs., 7 figs., 7 tabs.
G. Litak; T. Kaminski; J. Czarnigowski; A. K. Sen; M. Wendeker
2006-11-29T23:59:59.000Z
In this paper we analyze the cycle-to-cycle variations of maximum pressure $p_{max}$ and peak pressure angle $\alpha_{pmax}$ in a four-cylinder spark ignition engine. We examine the experimental time series of $p_{max}$ and $\alpha_{pmax}$ for three different spark advance angles. Using standard statistical techniques such as return maps and histograms we show that depending on the spark advance angle, there are significant differences in the fluctuations of $p_{max}$ and $\alpha_{pmax}$. We also calculate the multiscale entropy of the various time series to estimate the effect of randomness in these fluctuations. Finally, we explain how the information on both $p_{max}$ and $\alpha_{pmax}$ can be used to develop optimal strategies for controlling the combustion process and improving engine performance.
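The two tools named in the abstract, return maps and multiscale entropy, can be sketched as follows. The pressure series here is synthetic and the sample-entropy estimator is a simple, unoptimized version, so the numbers are purely illustrative.

```python
# Illustrative sketch of return maps and multiscale (coarse-grained) sample
# entropy for a cycle-to-cycle p_max series; the data below are synthetic.
import numpy as np

def _count_matches(x, length, r, n_templates):
    templates = np.array([x[i:i + length] for i in range(n_templates)])
    count = 0
    for i in range(n_templates - 1):
        dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
        count += np.sum(dist <= r)
    return count

def sample_entropy(x, m=2, r=0.1):
    """Simple SampEn(m, r) estimator with Chebyshev distance."""
    x = np.asarray(x, dtype=float)
    n_templates = len(x) - m            # same template count for m and m+1
    b = _count_matches(x, m, r, n_templates)
    a = _count_matches(x, m + 1, r, n_templates)
    return -np.log(a / b) if a > 0 and b > 0 else np.inf

def coarse_grain(x, scale):
    """Non-overlapping window averages used in multiscale entropy."""
    n = (len(x) // scale) * scale
    return np.asarray(x[:n]).reshape(-1, scale).mean(axis=1)

rng = np.random.default_rng(3)
p_max = 4.0 + 0.2 * rng.standard_normal(1000)          # synthetic pressure series (MPa)

return_map = np.column_stack([p_max[:-1], p_max[1:]])  # points (p_max[k], p_max[k+1])
r = 0.2 * np.std(p_max)                                 # common tolerance across scales
mse = [sample_entropy(coarse_grain(p_max, s), r=r) for s in (1, 2, 3)]
print("multiscale entropy at scales 1-3:", np.round(mse, 3))
```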
Role of ocean-atmosphere interactions in tropical cooling during the last glacial maximum
Bush, A.B.G. [Univ. of Alberta, Edmonton (Canada)] [Univ. of Alberta, Edmonton (Canada); Philander, S.G.H. [Princeton Univ., NJ (United States)] [Princeton Univ., NJ (United States)
1998-02-27T23:59:59.000Z
A simulation with a coupled atmosphere-ocean general circulation model configured for the Last Glacial Maximum delivered a tropical climate that is much cooler than that produced by atmosphere-only models. The main reason is a decrease in tropical sea surface temperatures, up to 6°C in the western tropical Pacific, which occurs because of two processes. The trade winds induce equatorial upwelling and zonal advection of cold water that further intensify the trade winds, and an exchange of water occurs between the tropical and extratropical Pacific in which the poleward surface flow is balanced by equatorward flow of cold water in the thermocline. Simulated tropical temperature depressions are of the same magnitude as those that have been proposed from recent proxy data. 25 refs., 4 figs.
A. Vaudrey; P. Baucour; F. Lanzetta; R. Glises
2010-08-30T23:59:59.000Z
Producing useful electrical work while consuming chemical energy, a fuel cell has to reject heat to its surroundings. However, as for any other type of engine, this thermal energy cannot be exchanged isothermally in finite time through finite areas. As has already been done for various types of systems, we study the fuel cell within the finite-time thermodynamics framework and define an endoreversible fuel cell. Considering different types of heat transfer laws, we obtain an optimal value of the operating temperature, corresponding to a maximum produced power. This analysis is a first step of a thermodynamic approach to the design of thermal management devices, taking into account the performance of the whole system.
Vaudrey, A; Lanzetta, F; Glises, R
2009-01-01T23:59:59.000Z
Producing useful electrical work while consuming chemical energy, a fuel cell has to reject heat to its surroundings. However, as for any other type of engine, this thermal energy cannot be exchanged isothermally in finite time through finite areas. As has already been done for various types of systems, we study the fuel cell within the finite-time thermodynamics framework and define an endoreversible fuel cell. Considering different types of heat transfer laws, we obtain an optimal value of the operating temperature, corresponding to a maximum produced power. This analysis is a first step of a thermodynamic approach to the design of thermal management devices, taking into account the performance of the whole system.
From Physics to Economics: An Econometric Example Using Maximum Relative Entropy
Giffin, Adom
2009-01-01T23:59:59.000Z
Econophysics is based on the premise that some ideas and methods from physics can be applied to economic situations. We intend to show in this paper how a physics concept such as entropy can be applied to an economic problem. In so doing, we demonstrate how information in the form of observable data and moment constraints is introduced into the method of Maximum relative Entropy (MrE). A general example of updating with data and moments is shown. Two specific econometric examples are solved in detail which can then be used as templates for real world problems. A numerical example is compared to a large deviation solution which illustrates some of the advantages of the MrE method.
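As a toy illustration of the kind of update MrE performs (not one of the paper's worked examples), a discrete prior can be updated with a single moment constraint by exponential tilting, with the Lagrange multiplier found numerically. The prior shape and the target mean below are made up.

```python
# Toy maximum relative entropy update (not from the paper): tilt a discrete
# prior q_i to the posterior p_i ~ q_i * exp(lambda * x_i) so that E[x] hits
# a newly imposed value.  Prior and target mean are arbitrary choices.
import numpy as np
from scipy.optimize import brentq

x = np.linspace(-3.0, 3.0, 601)            # hypothetical support
prior = np.exp(-0.5 * x**2)                # Gaussian-shaped prior weights
prior /= prior.sum()

target_mean = 0.8                          # new information: E[x] = 0.8

def tilted(lam):
    w = prior * np.exp(lam * x)
    return w / w.sum()

def mean_gap(lam):
    return tilted(lam) @ x - target_mean

lam_star = brentq(mean_gap, -50.0, 50.0)   # solve for the Lagrange multiplier
posterior = tilted(lam_star)
print(f"lambda = {lam_star:.3f}, posterior mean = {posterior @ x:.3f}")
```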
Reginatto, Marcel; Zimbal, Andreas [Physikalisch-Technische Bundesanstalt, 38116 Braunschweig (Germany)
2008-02-15T23:59:59.000Z
In applications of neutron spectrometry to fusion diagnostics, it is advantageous to use methods of data analysis which can extract information from the spectrum that is directly related to the parameters of interest that describe the plasma. We present here methods of data analysis which were developed with this goal in mind, and which were applied to spectrometric measurements made with an organic liquid scintillation detector (type NE213). In our approach, we combine Bayesian parameter estimation methods and unfolding methods based on the maximum entropy principle. This two-step method allows us to optimize the analysis of the data depending on the type of information that we want to extract from the measurements. To illustrate these methods, we analyze neutron measurements made at the PTB accelerator under controlled conditions, using accelerator-produced neutron beams. Although the methods have been chosen with a specific application in mind, they are general enough to be useful for many other types of measurements.
Urniezius, Renaldas [Kaunas University of Technology, Kaunas (Lithuania)
2011-03-14T23:59:59.000Z
The principle of maximum relative entropy optimization was analyzed for dead-reckoning localization of a rigid body when observation data from two attached accelerometers were collected. Model constraints were derived from the relationships between the sensors. The experiment's results confirmed that the noise on each accelerometer axis can be successfully filtered by utilizing the dependency between channels and the dependency between time-series data. The dependency between channels was used for the a priori calculation, and the a posteriori distribution was derived utilizing the dependency between time-series data. Data from an autocalibration experiment were revisited by removing the initial assumption that the instantaneous rotation axis of the rigid body was known. Performance results confirmed that such an approach can be used for online dead-reckoning localization.
Azimuthal Anisotropy in Heavy Ion Collisions from the Maximum Entropy Method
Pirner, Hans J
2014-01-01T23:59:59.000Z
We investigate the azimuthal anisotropy v2 of particle production in nucleus-nucleus collisions in the maximum entropy approach. This necessitates two new parameters delta and lambda2. The parameter delta describes the deformation of transverse configuration space and is related to the anisotropy of the overlap zone of the two nuclei. The parameter lambda2 defines the anisotropy of the particle distribution in momentum space. Assuming deformed flux tubes at the early stage of the collision we relate the momentum to the space asymmetry i.e. lambda2 to delta with the uncertainty relation. We compute the anisotropy v2 as a function of centrality, transverse momentum and rapidity using gluon-hadron duality. The general features of LHC data are reproduced.
Source Function Determined from HBT Correlations by the Maximum Entropy Principle
Wu Yuanfang; Ulrich Heinz
1996-07-18T23:59:59.000Z
We study the reconstruction of the source function in space-time directly from the measured HBT correlation function using the Maximum Entropy Principle. We find that the problem is ill-defined without at least one additional theoretical constraint as input. Using the requirement of a finite source lifetime for the latter we find a new Gaussian parametrization of the source function directly in terms of the measured HBT radius parameters and its lifetime, where the latter is a free parameter which is not directly measurable by HBT. We discuss the implications of our results for the remaining freedom in building source models consistent with a given set of measured HBT radius parameters.
A maximum-entropy approach to the adiabatic freezing of a supercooled liquid
Santi Prestipino
2013-04-29T23:59:59.000Z
I employ the van der Waals theory of Baus and coworkers to analyze the fast, adiabatic decay of a supercooled liquid in a closed vessel with which the solidification process usually starts. By imposing a further constraint on either the system volume or pressure, I use the maximum-entropy method to quantify the fraction of liquid that is transformed into solid as a function of undercooling and of the amount of a foreign gas that could possibly be also present in the test tube. Upon looking at the implications of thermal and mechanical insulation for the energy cost of forming a solid droplet within the liquid, I identify one situation where the onset of solidification inevitably occurs near the wall in contact with the bath.
Parthapratim Biswas; H. Shimoyama; L. R. Mead
2009-10-23T23:59:59.000Z
We apply the maximum entropy principle to construct the natural invariant density and Lyapunov exponent of one-dimensional chaotic maps. Using a novel function reconstruction technique that is based on the solution of Hausdorff moment problem via maximizing Shannon entropy, we estimate the invariant density and the Lyapunov exponent of nonlinear maps in one-dimension from a knowledge of finite number of moments. The accuracy and the stability of the algorithm are illustrated by comparing our results to a number of nonlinear maps for which the exact analytical results are available. Furthermore, we also consider a very complex example for which no exact analytical result for invariant density is available. A comparison of our results to those available in the literature is also discussed.
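A hedged sketch of the general idea, not the authors' Hausdorff-moment algorithm: estimate a few power moments from a logistic-map orbit and reconstruct a maximum-entropy density of exponential-family form by minimizing the convex dual objective. The moment count, grid and optimizer choices below are arbitrary.

```python
# Hedged sketch of moment-based maximum-entropy density reconstruction for the
# fully chaotic logistic map (the general idea only; not the paper's method).
import numpy as np
from scipy.optimize import minimize

# Orbit of x -> 4 x (1 - x).
x, orbit = 0.3, []
for _ in range(200000):
    x = 4.0 * x * (1.0 - x)
    orbit.append(x)
orbit = np.array(orbit[1000:])                       # drop transient

n_mom = 6
mu = np.array([np.mean(orbit**k) for k in range(1, n_mom + 1)])

grid = np.linspace(1e-4, 1 - 1e-4, 2001)
powers = np.vstack([grid**k for k in range(1, n_mom + 1)])

def dual(lam):
    """log Z(lam) + lam . mu; its minimizer matches the sampled moments."""
    log_unnorm = -(lam @ powers)
    shift = log_unnorm.max()                          # numerical stabilization
    log_z = shift + np.log(np.trapz(np.exp(log_unnorm - shift), grid))
    return log_z + lam @ mu

res = minimize(dual, np.zeros(n_mom), method="BFGS")
density = np.exp(-(res.x @ powers))
density /= np.trapz(density, grid)

recon = np.array([np.trapz(density * grid**k, grid) for k in range(1, n_mom + 1)])
print("max moment mismatch:", float(np.max(np.abs(recon - mu))))
```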
Spectral function and excited states in lattice QCD with maximum entropy method
CP-PACS Collaboration; T. Yamazaki; S. Aoki; R. Burkhalter; M. Fukugita; S. Hashimoto; N. Ishizuka; Y. Iwasaki; K. Kanaya; T. Kaneko; Y. Kuramashi; M. Okawa; Y. Taniguchi; A. Ukawa; T. Yoshié
2001-05-29T23:59:59.000Z
We apply the maximum entropy method to extract the spectral functions for pseudoscalar and vector mesons from hadron correlators previously calculated at four different lattice spacings in quenched QCD with the Wilson quark action. We determine masses and decay constants for the ground and excited states of the pseudoscalar and vector channels from the position and area of peaks in the spectral functions. We obtain the results $m_{\pi_1} = 660(590)$ MeV and $m_{\rho_1} = 1540(570)$ MeV for the first excited state masses, in the continuum limit of quenched QCD. We also find unphysical states which have infinite mass in the continuum limit, and argue that they are bound states of two doublers of the Wilson quark action. If the interpretation is correct, this is the first time that the state of doublers is identified in lattice QCD numerical simulations.
Application of the Maximum Entropy Method to the (2+1)d Four-Fermion Model
C. R. Allton; J. E. Clowser; S. J. Hands; J. B. Kogut; C. G. Strouthos
2002-08-19T23:59:59.000Z
We investigate spectral functions extracted using the Maximum Entropy Method from correlators measured in lattice simulations of the (2+1)-dimensional four-fermion model. This model is particularly interesting because it has both a chirally broken phase with a rich spectrum of mesonic bound states and a symmetric phase where there are only resonances. In the broken phase we study the elementary fermion, pion, sigma and massive pseudoscalar meson; our results confirm the Goldstone nature of the pi and permit an estimate of the meson binding energy. We have, however, seen no signal of sigma -> pi pi decay as the chiral limit is approached. In the symmetric phase we observe a resonance of non-zero width in qualitative agreement with analytic expectations; in addition the ultra-violet behaviour of the spectral functions is consistent with the large non-perturbative anomalous dimension for fermion composite operators expected in this model.
Azimuthal Anisotropy in Heavy Ion Collisions from the Maximum Entropy Method
Hans J. Pirner
2014-05-09T23:59:59.000Z
We investigate the azimuthal anisotropy v2 of particle production in nucleus-nucleus collisions in the maximum entropy approach. This necessitates two new parameters delta and lambda2. The parameter delta describes the deformation of transverse configuration space and is related to the anisotropy of the overlap zone of the two nuclei. The parameter lambda2 defines the anisotropy of the particle distribution in momentum space. Assuming deformed flux tubes at the early stage of the collision we relate the momentum to the space asymmetry i.e. lambda2 to delta with the uncertainty relation. We compute the anisotropy v2 as a function of centrality, transverse momentum and rapidity using gluon-hadron duality. The general features of LHC data are reproduced.
CP$^{N-1}$ model with the theta term and maximum entropy method
Masahiro Imachi; Yasuhiko Shinno; Hiroshi Yoneyama
2004-09-25T23:59:59.000Z
A $\theta$ term in lattice field theory causes the sign problem in Monte Carlo simulations. This problem can be circumvented by Fourier-transforming the topological charge distribution $P(Q)$. This strategy, however, has a limitation, because errors of $P(Q)$ prevent one from calculating the partition function ${\cal Z}(\theta)$ properly for large volumes. This is called flattening. As an alternative approach to the Fourier method, we utilize the maximum entropy method (MEM) to calculate ${\cal Z}(\theta)$. We apply the MEM to Monte Carlo data of the CP$^3$ model. It is found that in the non-flattening case, the result of the MEM agrees with that of the Fourier transform, while in the flattening case, the MEM gives smooth ${\cal Z}(\theta)$.
Is the friction angle the maximum slope of a free surface of a non cohesive material?
A. Modaressi; P. Evesque
2005-07-13T23:59:59.000Z
Starting from a symmetric triangular pile with a horizontal basis and rotating the basis in the vertical plane, we have determined the evolution of the stress distribution as a function of the basis inclination using the finite element method with an elastic-perfectly plastic constitutive model, defined by its friction angle, without cohesion. It is found that when the yield function is the Drucker-Prager one, a stress distribution satisfying equilibrium can be found even when one of the free-surface slopes is larger than the friction angle. This means that piles with a slope larger than the friction angle can be (at least) marginally stable and that slope rotation is not always a destabilising perturbation direction. On the contrary, it is found that the slope cannot exceed the friction angle when a Mohr-Coulomb yield function is used. A theoretical explanation of these facts is given which enlightens the role played by the intermediate principal stress in both cases of the Mohr-Coulomb criterion and of the Drucker-Prager one. It is then argued that the Mohr-Coulomb criterion assumes a spontaneous symmetry breaking as soon as the two smallest principal stresses are different; this is most likely not physical, so this criterion shall be replaced by a Drucker-Prager criterion in the vicinity of the equality, which leads to the previous anomalous behaviour. These numerical computations thus enlighten the avalanche process: they show that no dynamical angle larger than the static one is needed to understand avalanching. This is in agreement with previous experimental results. Furthermore, these results show that the maximum angle of repose can be modified using cyclic rotations; we propose a procedure that allows a maximum angle of repose equal to the friction angle to be achieved.
On Maximum Norm of Exterior Product and A Conjecture of C.N. Yang
Zhilin Luo
2015-01-08T23:59:59.000Z
Let $V$ be a finite dimensional inner product space over $\mathbb{R}$ with dimension $n$, where $n \in \mathbb{N}$, and let $\wedge^{r}V$ be the exterior algebra of $V$. The problem is to find $\max_{\| \xi \| = 1, \| \eta \| = 1}\| \xi \wedge \eta \|$ where $k, l \in \mathbb{N}$, $\forall \xi \in \wedge^{k}V, \eta \in \wedge^{l}V$. This is a problem suggested by the famous Nobel Prize winner C.N. Yang. He solved this problem for $k \leq 2$ in [1], and made the following conjecture in [2]: If $n=2m$, $k=2r$, $l=2s$, then the maximum is achieved when $\xi_{max} = \frac{\omega^{k}}{\| \omega^{k}\|}, \eta_{max} = \frac{\omega^{l}}{\| \omega^{l}\|}$, where $\omega = \Sigma_{i=1}^m e_{2i-1}\wedge e_{2i}$, and $\{e_{k}\}_{k=1}^{2m}$ is an orthonormal basis of $V$. From a physicist's point of view, this problem is just the dual version of the easier part of the well-known Beauzamy-Bombieri inequality for products of polynomials in many variables, which is discussed in [4]. Here the duality is referred to as the well-known Bose-Fermi correspondence, where we consider the skew-symmetric algebra (alternating forms) instead of the familiar symmetric algebra (polynomials in many variables). In this paper, for two cases we give estimations of the maximum of exterior products, and Yang's conjecture is answered partially under some special cases.
MADmap: A Massively Parallel Maximum-Likelihood Cosmic Microwave Background Map-Maker
Cantalupo, Christopher; Borrill, Julian; Jaffe, Andrew; Kisner, Theodore; Stompor, Radoslaw
2009-06-09T23:59:59.000Z
MADmap is a software application used to produce maximum-likelihood images of the sky from time-ordered data which include correlated noise, such as those gathered by Cosmic Microwave Background (CMB) experiments. It works efficiently on platforms ranging from small workstations to the most massively parallel supercomputers. Map-making is a critical step in the analysis of all CMB data sets, and the maximum-likelihood approach is the most accurate and widely applicable algorithm; however, it is a computationally challenging task. This challenge will only increase with the next generation of ground-based, balloon-borne and satellite CMB polarization experiments. The faintness of the B-mode signal that these experiments seek to measure requires them to gather enormous data sets. MADmap is already being run on up to O(10^11) time samples, O(10^8) pixels and O(10^4) cores, with ongoing work to scale to the next generation of data sets and supercomputers. We describe MADmap's algorithm based around a preconditioned conjugate gradient solver, fast Fourier transforms and sparse matrix operations. We highlight MADmap's ability to address problems typically encountered in the analysis of realistic CMB data sets and describe its application to simulations of the Planck and EBEX experiments. The massively parallel and distributed implementation is detailed and scaling complexities are given for the resources required. MADmap is capable of analysing the largest data sets now being collected on computing resources currently available, and we argue that, given Moore's Law, MADmap will be capable of reducing the most massive projected data sets.
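The maximum-likelihood map-making problem described here amounts to solving the normal equations (A^T N^-1 A) m = A^T N^-1 d with conjugate gradients. Below is a toy sketch with a made-up pointing matrix, white noise and small problem sizes; it is not MADmap, which treats correlated noise with FFT-based operations at vastly larger scales.

```python
# Toy maximum-likelihood map-making sketch: solve (A^T N^-1 A) m = A^T N^-1 d
# with conjugate gradients.  Pointing, noise model and sizes are made up.
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(7)
n_pix, n_samp = 500, 20000

# Pointing matrix A: each time sample observes exactly one pixel.
hit_pix = rng.integers(0, n_pix, size=n_samp)
A = sp.csr_matrix((np.ones(n_samp), (np.arange(n_samp), hit_pix)),
                  shape=(n_samp, n_pix))

true_map = rng.normal(size=n_pix)
sigma = 0.5
tod = A @ true_map + sigma * rng.normal(size=n_samp)   # time-ordered data

n_inv = 1.0 / sigma**2                                  # white noise: N^-1 is constant
rhs = A.T @ (n_inv * tod)
normal_op = LinearOperator((n_pix, n_pix),
                           matvec=lambda m: A.T @ (n_inv * (A @ m)))

map_ml, info = cg(normal_op, rhs)
print("CG converged:", info == 0,
      "| rms map error:", round(float(np.std(map_ml - true_map)), 3))
```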
Relocation impacts of a major release from SRTC
Blanchard, A.; Thompson, E.A.; Thompson, J.M.
1999-06-01T23:59:59.000Z
The relocation impacts of an accidental release, scenario 1-RD-3, are evaluated for the Savannah River Technology Center. The extent of the area potentially contaminated to a level that would result in doses exceeding the relocation protective action guide (PAG) is calculated. The maximum calculated distance downwind from the accident at which the relocation PAG is exceeded is also determined. The consequences of the particulate portion of the release are evaluated using the HOTSPOT model and an EXCEL spreadsheet. The consequences of the tritium release are evaluated using UFOTRI.
The Prediction of Extratropical Storm Tracks by the ECMWF and NCEP Ensemble Prediction Systems
Bengtsson, Lennart
The prediction of extratropical cyclones by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP) Ensemble Prediction Systems (EPS) has been investigated using an objective feature-tracking methodology.
The Prediction of Extratropical Storm Tracks by the ECMWF and NCEP Ensemble Prediction Systems
Froude, Lizzie
(2006) The prediction of extratropical cyclones by the European Centre for Medium-Range Weather Forecasts (ECMWF) and the National Centers for Environmental Prediction (NCEP) ensemble prediction systems has been investigated.
Improved Prediction of the Doppler Effect in TRISO Fuel
J. Ortensi; A.M. Ougouag
2009-05-01T23:59:59.000Z
The Doppler feedback mechanism is a major contributor to the passive safety of gas-cooled, graphite-moderated High Temperature Reactors that use fuel based on TRISO particles. It follows that the correct prediction of the magnitude and time-dependence of this feedback effect is essential to the conduct of safety analyses for these reactors. Since the effect is directly dependent on the actual temperature reached by the fuel during transients, the underlying phenomena of heat transfer and temperature rise must be correctly predicted. This paper presents an improved model for the TRISO particle and its thermal behavior during transients. The improved approach incorporates an explicit TRISO heat conduction model to better quantify the time dependence of the temperature in the various layers of the TRISO particle, including its fuel central zone. There follows a better treatment of the Doppler Effect within said fuel zone. The new model is based on a 1-D analytic solution for composite media using the Green’s function technique. The modeling improvement takes advantage of some of the physical behavior of TRISO fuel under irradiation and includes a distinctive look at the physics of the neutronic Doppler Effect. The new methodology has been implemented within the coupled R-Z nodal diffusion code CYNOD-THERMIX. The new model has been applied to the analysis of earthquakes (presented in a companion paper). In this paper, the model is applied to the control rod ejection event, as specified in the OECD PBMR-400 benchmark, but with temperature dependent thermal properties. The results obtained for this transient using the enhanced code are a considerable improvement over the predictions of the original code. The incorporation of the enhanced model shows that the Doppler Effect plays a more significant role than predicted by the original unenhanced model based on the THERMIX homogenized fuel region model. The new model shows that the overall energy generation during the rod ejection transient is significantly lower than predicted by the unenhanced model. The fuel temperature reaches a slightly higher maximum, but at no time does it approach the nominal allowable TRISO fuel temperature. The analyses with the enhanced model also show that the reactor period during the cool down is larger than previously predicted with the homogenous fuel region model.
The Economic Impact of Binghamton
Suzuki, Masatsugu
The Economic Impact of Binghamton University, FY2010 (July 1, 2009 - June 30, 2010). The report covers the university's economic impact on its home region, including Tioga County, and its overall impact on New York State in terms of economic output, jobs, and human capital.
ENVIRONMENTAL ASSESSMENT/ REGULATORY IMPACT REVIEW/
Environmental Assessment / Regulatory Impact Review / Final Regulatory Flexibility Analysis: Area and Regulatory Amendments for Bering Sea Habitat Conservation, May 2008 (lead agency office: Juneau, AK).
Social Impact Management Plans: Innovation in corporate and public policy
Franks, Daniel M., E-mail: d.franks@uq.edu.au [Centre for Social Responsibility in Mining, The University of Queensland, Sustainable Minerals Institute, St Lucia, Brisbane, Queensland 4072 (Australia)]; Vanclay, Frank, E-mail: frank.vanclay@rug.nl [Department of Cultural Geography, Faculty of Spatial Sciences, The University of Groningen, P.O. Box 800, 9700 AV Groningen (Netherlands)]
2013-11-15T23:59:59.000Z
Social Impact Assessment (SIA) has traditionally been practiced as a predictive study for the regulatory approval of major projects; however, in recent years the drivers and domain of focus for SIA have shifted. This paper details the emergence of Social Impact Management Plans (SIMPs) and undertakes an analysis of innovations in corporate and public policy that have put in place ongoing processes – assessment, management and monitoring – to better identify the nature and scope of the social impacts that might occur during implementation and to proactively respond to change across the lifecycle of developments. Four leading practice examples are analyzed. The International Finance Corporation (IFC) Performance Standards require the preparation of Environmental and Social Management Plans for all projects financed by the IFC identified as having significant environmental and social risks. Anglo American, a major resources company, has introduced a Socio-Economic Assessment Toolbox, which requires mine sites to undertake regular assessments and link these assessments with their internal management systems, monitoring activities and a Social Management Plan. In South Africa, Social and Labour Plans are submitted with an application for a mining or production right. In Queensland, Australia, Social Impact Management Plans were developed as part of an Environmental Impact Statement, which included assessment of social impacts. Collectively these initiatives, and others, are a practical realization of theoretical conceptions of SIA that include management and monitoring as core components of SIA. The paper concludes with an analysis of the implications for the practice of impact assessment, including a summary of key criteria for the design and implementation of effective SIMPs. -- Highlights: • Social impact management plans are effective strategies to manage social issues. • They are developed in partnership with regulatory agencies, investors and community. • SIMPs link assessment to ongoing management and address social and community issues. • SIMPs clarify responsibilities in the management of impacts, opportunities and risks. • SIMPs demonstrate a shift to include management as a core component of SIA practice.
A comparison of regulatory impacts to real target impacts
Ammerman, D.J.
1998-05-01T23:59:59.000Z
The purpose of this paper is to compare the severity of regulatory impacts onto an essentially rigid target with that of impacts at higher velocities onto real targets. For impacts onto the essentially rigid target, all of the kinetic energy of the package is absorbed by deformation of the package. For impacts onto real targets, the kinetic energy is absorbed by deformation of the target as well as by deformation of the package. The amount of kinetic energy absorbed by the target does not increase the severity of the impact.
The Environmental Impacts of Subsidized Crop Insurance
LaFrance, Jeffrey T.; Shimshack, J. P.; Wu, S. Y.
2001-01-01T23:59:59.000Z
Thermo-Hydro-Chemical Predictive Analysis for the Drift-Scale Heater Test
Sonnenthal, Eric L.; Spycher, Nicolas; Apps, John; Simmons, Ardyth
1998-01-01T23:59:59.000Z
"" EPAT# Risk Assessments Environmental Impact
"" EPAT# Risk Assessments Appendixes Environmental Impact Statement NESHAPS for Radionuclides for Hazardous Air Pollutants Risk Assessments Environmental Impact Statement for NESHAPS Radionuclides VOLUME 2 for Hazardous Air Pollutants EPA 520.1'1.-89-006,-2 Risk Assessments Environmental Impact Statement for NESHAPS
Campus Planning Environmental Impact Report
Mullins, Dyche
Final Campus Planning Environmental Impact Report, UCSF Mount Zion Garage. Prepared under the California Environmental Quality Act (CEQA) and the University of California procedures for implementing CEQA, following completion of a Draft Environmental Impact Report (EIR).
Müller, Jens-Dominik
Global Impact from the Heart of Northern Ireland
Making predictions in the multiverse
Ben Freivogel
2011-09-18T23:59:59.000Z
I describe reasons to think we are living in an eternally inflating multiverse where the observable "constants" of nature vary from place to place. The major obstacle to making predictions in this context is that we must regulate the infinities of eternal inflation. I review a number of proposed regulators, or measures. Recent work has ruled out a number of measures by showing that they conflict with observation, and focused attention on a few proposals. Further, several different measures have been shown to be equivalent. I describe some of the many nontrivial tests these measures will face as we learn more from theory, experiment, and observation.
Galactosynthesis Predictions at High Redshift
Ari Buchalter; Raul Jimenez; Marc Kamionkowski
2001-02-02T23:59:59.000Z
We predict the Tully-Fisher (TF) and surface-brightness--magnitude relation for disk galaxies at z=3 and discuss the origin of these scaling relations and their scatter. We show that the variation of the TF relation with redshift can be a potentially powerful discriminator of galaxy-formation models. In particular, the TF relation at high redshift might be used to break parameter degeneracies among galactosynthesis models at z=0, as well as to constrain the redshift distribution of collapsing dark-matter halos, the star-formation history and baryon fraction in the disk, and the distribution of halo spins.
Predicting the NFL using Twitter
Sinha, Shiladitya; Gimpel, Kevin; Smith, Noah A
2013-01-01T23:59:59.000Z
We study the relationship between social media output and National Football League (NFL) games, using a dataset containing messages from Twitter and NFL game statistics. Specifically, we consider tweets pertaining to specific teams and games in the NFL season and use them alongside statistical game data to build predictive models for future game outcomes (which team will win?) and sports betting outcomes (which team will win with the point spread? will the total points be over/under the line?). We experiment with several feature sets and find that simple features using large volumes of tweets can match or exceed the performance of more traditional features that use game statistics.
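As a concrete but purely illustrative sketch of the kind of modelling described above, the snippet below fits a logistic-regression winner model to a mix of tweet-volume and game-statistic features. The column names, synthetic data and model choice are our assumptions for illustration, not the paper's actual pipeline.

```python
# Illustrative sketch (not the paper's pipeline): predict game winners from
# simple tweet-volume features combined with game statistics.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
games = pd.DataFrame({
    "home_tweet_count": rng.poisson(500, 200),
    "away_tweet_count": rng.poisson(480, 200),
    "home_win_pct":     rng.uniform(0.2, 0.8, 200),
    "away_win_pct":     rng.uniform(0.2, 0.8, 200),
    "home_won":         rng.integers(0, 2, 200),   # label: did the home team win?
})

X = games[["home_tweet_count", "away_tweet_count", "home_win_pct", "away_win_pct"]]
y = games["home_won"]
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("held-out accuracy:", clf.score(X_te, y_te))
```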
Predictive Simulation | Department of Energy
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Technology's Impact on Production
Rachel Amann; Ellis Deweese; Deborah Shipman
2009-06-30T23:59:59.000Z
As part of a cooperative agreement with the United States Department of Energy (DOE) - entitled Technology's Impact on Production: Developing Environmental Solutions at the State and National Level - the Interstate Oil and Gas Compact Commission (IOGCC) has been tasked with assisting state governments in the effective, efficient, and environmentally sound regulation of the exploration and production of natural gas and crude oil, specifically in relation to orphaned and abandoned wells and wells nearing the end of productive life. Project goals include: (1) Developing (a) a model framework for prioritization and ranking of orphaned or abandoned well sites; (b) a model framework for disbursement of Energy Policy Act of 2005 funding; and (c) a research study regarding the current status of orphaned wells in the nation. (2) Researching the impact of new technologies on environmental protection from a regulatory perspective. Research will identify and document (a) state reactions to changing technology and knowledge; (b) how those reactions support state environmental conservation and public health; and (c) the impact of those reactions on oil and natural gas production. (3) Assessing emergent technology issues associated with wells nearing the end of productive life. Including: (a) location of orphaned and abandoned well sites; (b) well site remediation; (c) plugging materials; (d) plug placement; (e) the current regulatory environment; and (f) the identification of emergent technologies affecting end of life wells. New Energy Technologies - Regulating Change, is the result of research performed for Tasks 2 and 3.
Predicting low-frequency radio fluxes of known extrasolar planets
J. -M. Grießmeier; P. Zarka; H. Spreeuw
2008-06-02T23:59:59.000Z
Context. Close-in giant extrasolar planets (''Hot Jupiters'') are believed to be strong emitters in the decametric radio range. Aims. We present the expected characteristics of the low-frequency magnetospheric radio emission of all currently known extrasolar planets, including the maximum emission frequency and the expected radio flux. We also discuss the escape of exoplanetary radio emission from the vicinity of its source, which imposes additional constraints on detectability. Methods. We compare the different predictions obtained with all four existing analytical models for all currently known exoplanets. We also take care to use realistic values for all input parameters. Results. The four different models for planetary radio emission lead to very different results. The largest fluxes are found for the magnetic energy model, followed by the CME model and the kinetic energy model (for which our results are found to be much less optimistic than those of previous studies). The unipolar interaction model does not predict any observable emission for the present exoplanet census. We also give estimates for the planetary magnetic dipole moment of all currently known extrasolar planets, which will be useful for other studies. Conclusions. Our results show that observations of exoplanetary radio emission are feasible, but that the number of promising targets is not very high. The catalog of targets will be particularly useful for current and future radio observation campaigns (e.g. with the VLA, GMRT, UTR-2 and with LOFAR).
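For orientation, in models of this kind the maximum emission frequency is set by the electron cyclotron frequency in the strongest magnetic field region near the planetary surface; the relation below is the standard one, quoted here for context rather than taken from this paper:

$$ f_{c}^{\max} = \frac{e\,B_{p}^{\max}}{2\pi m_{e}} \approx 2.8\ \mathrm{MHz} \times \frac{B_{p}^{\max}}{1\ \mathrm{G}}, $$

so a Jupiter-like surface field of a few tens of gauss corresponds to a cutoff of a few tens of MHz, in the decametric range mentioned above.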
Climate Change Impacts in the Amazon. Review of scientific literature
NONE
2006-04-15T23:59:59.000Z
The Amazon's hydrological cycle is a key driver of global climate, and global climate is therefore sensitive to changes in the Amazon. Climate change threatens to substantially affect the Amazon region, which in turn is expected to alter global climate and increase the risk of biodiversity loss. In this literature review the following subjects can be distinguished: Observed Climatic Change and Variability, Predicted Climatic Change, Impacts, Forests, Freshwater, Agriculture, Health, and Sea Level Rise.
Predicting Improved Chiller Performance Through Thermodynamic Modeling
Figueroa, I. E.; Cathey, M.; Medina, M. A.; Nutter, D. W.
This paper presents two case studies in which thermodynamic modeling was used to predict improved chiller performance. The model predicted the performance (COP and total energy consumption) of water-cooled centrifugal chillers as a function...
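For context, the performance figures referred to here are the chiller coefficient of performance and the energy it implies over a load profile; these are the standard definitions, not the specifics of this paper's model:

$$ \mathrm{COP} = \frac{\dot Q_{\mathrm{evap}}}{\dot W_{\mathrm{input}}}, \qquad E = \int \frac{\dot Q_{\mathrm{load}}(t)}{\mathrm{COP}(t)}\, dt . $$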
Predicting gene function from images of cells
Jones, Thouis Raymond, 1971-
2007-01-01T23:59:59.000Z
This dissertation shows that biologically meaningful predictions can be made by analyzing images of cells. In particular, groups of related genes and their biological functions can be predicted using images from large ...
Transforms for prediction residuals in video coding
Kamışlı, Fatih
2010-01-01T23:59:59.000Z
Typically the same transform, the 2-D Discrete Cosine Transform (DCT), is used to compress both image intensities in image coding and prediction residuals in video coding. Major prediction residuals include the motion ...
A case model for predictive maintenance
Li, Jiawei, M. Eng. Massachusetts Institute of Technology
2008-01-01T23:59:59.000Z
This project is to respond to a need by Varian Semiconductor Equipment Associates, Inc. (VSEA) to help predict failure of ion implanters. Predictive maintenance would help to reduce the unscheduled downtime of ion implanters, ...
EVA: evaluation of protein structure prediction servers
Sali, Andrej
Every day, sequences of newly available protein structures in the Protein Data Bank (PDB) are sent to participating prediction servers; EVA assesses the performance of protein structure prediction servers through a battery of objective measures of prediction accuracy.
Negative Ion Photoelectron Spectroscopy Confirms the Prediction...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Negative Ion Photoelectron Spectroscopy Confirms the Prediction that (CO)5 and (CO)6 Each Has a Singlet Ground State.
Predicting risk for the appearance of melanoma.
Meyskens, Frank L Jr; Ransohoff, David F
2006-01-01T23:59:59.000Z
Alvarez, Pedro J.
Modeling the natural attenuation of benzene in groundwater impacted by ethanol-blended fuels: effect of ethanol content on the lifespan and maximum length of benzene plumes. Diego E. Gomez and Pedro J. Alvarez, 10 March 2009. A numerical model was used to evaluate how the concentration of ethanol in the released fuel affects the lifespan and maximum length of benzene plumes.
Bürger, Raimund
Impact on sludge inventory and control strategies using the Benchmark Simulation Model No. 1. The Bürger-Diehl settler model allows for more realistic predictions of the underflow sludge concentration, which is essential for tracking plant sludge inventory and for control actions based on mixed liquor suspended solids concentration predictions.
Theoretical Uncertainties in Inflationary Predictions
William H. Kinney; Antonio Riotto
2006-03-09T23:59:59.000Z
With present and future observations becoming of higher and higher quality, it is timely and necessary to investigate the most significant theoretical uncertainties in the predictions of inflation. We show that our ignorance of the entire history of the Universe, including the physics of reheating after inflation, translates to considerable errors in observationally relevant parameters. Using the inflationary flow formalism, we estimate that for a spectral index $n$ and tensor/scalar ratio $r$ in the region favored by current observational constraints, the theoretical errors are of order $\Delta n / | n - 1| \sim 0.1 - 1$ and $\Delta r /r \sim 0.1 - 1$. These errors represent the dominant theoretical uncertainties in the predictions of inflation, and are generically of the order of or larger than the projected uncertainties in future precision measurements of the Cosmic Microwave Background. We also show that the lowest-order classification of models into small field, large field, and hybrid breaks down when higher order corrections to the dynamics are included. Models can flow from one region to another.
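For reference, the observables quoted here are related to the potential slow-roll parameters at lowest order by the standard expressions (given for orientation; the paper's flow analysis works to higher order):

$$ n - 1 \simeq 2\eta - 6\epsilon, \qquad r \simeq 16\,\epsilon . $$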
Analytical determination of package response to severe impact
Ludwigsen, J.S.; Ammerman, D.J.
1995-12-31T23:59:59.000Z
One important part of radioactive material transport risk assessments is the amount of release from packages in accidents more severe than the design basis accident (US NRC 10CFR71 1995). In order to remove some of the conservatism from current risk assessments, an effort is ongoing to qualify the finite element method for predicting cask performance by comparing analytical results to test measurements of the Structural Evaluation Test Unit (SETU) cask. Comparisons of deformed shapes, strains, and accelerations were made for impact velocities of 13.4, 20.1, and 26.8 m/s (30, 45, and 60 mph). The 13.4 m/s impact corresponds to the regulatory 9 m (30 ft) free fall, and the others correspond to impacts with 2.25 and 4 times the kinetic energy of the regulatory impact. One other analysis at an impact velocity of 38.0 m/s (85 mph), or 8 times the regulatory impact kinetic energy, was also done.
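The quoted energy multiples follow directly from the velocities, since kinetic energy scales with the square of the impact speed:

$$ \frac{E}{E_{\mathrm{reg}}} = \left(\frac{v}{13.4\ \mathrm{m/s}}\right)^{2}: \qquad \left(\frac{20.1}{13.4}\right)^{2} = 2.25, \quad \left(\frac{26.8}{13.4}\right)^{2} = 4, \quad \left(\frac{38.0}{13.4}\right)^{2} \approx 8 . $$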
Colliding cascades model for earthquake prediction
2000-10-12T23:59:59.000Z
Predictive Science Academic Alliance Program | National Nuclear...
National Nuclear Security Administration (NNSA)
Online prediction and control of nonlinear stochastic systems
Online prediction and control methods for nonlinear stochastic systems, with applications to prediction and control of supply temperature in district heating systems and to prediction of power production from wind turbines.
FACILITATORY NEURAL DYNAMICS FOR PREDICTIVE EXTRAPOLATION
Choe, Yoonsuck
Facilitatory Neural Dynamics for Predictive Extrapolation. A dissertation by Hee Jin Lim, submitted to Texas A&M University (Computer Science), August 2006.
Short Specialist Review Gene structure prediction
Brendel, Volker
Short Specialist Review: Gene structure prediction in plant genomes. Volker Brendel, Iowa State University. The organization of introns and exons within most plant genes makes the problem of computational gene structure prediction distinct from (and harder than) prediction in vertebrates. A second reason is pragmatic, relating to Expressed Sequence Tag (EST) sequencing and whole-genome sequencing.
Prediction Markets Partition model of knowledge
Fiat, Amos
Computational Aspects of Prediction Markets. David M. Pennock and Rahul Sami; presented by Rami Eitan, December 5, 2012. Topics: partition model of knowledge, distributed information markets, convergence time bounds.
Prediction of Freshmen Academic Performance Iuliana Ianus
Prediction of Freshmen Academic Performance. Iuliana Ianus, Department of Statistics, Carnegie Mellon University. The goal is to improve prediction of freshman GPA based on college admission data, to better inform the decision as to whom to admit, and to develop an algorithm for making this prediction. Data for two consecutive entering classes at CMU were used.
Correlation and Prediction of Snow Water Equivalent
Standiford, Richard B.
McGurk, Bruce J.; Azuma, David L. 1992. Correlation and prediction of snow water equivalent. United States Department of Agriculture, Forest Service. The study addresses correlation and, by implication, prediction of wilderness snow data by nonwilderness sensors.
Theory and Applications of Competitive Prediction
Sheldon, Nathan D.
Theory and Applications of Competitive Prediction. Fedor Zhdanov, Computer Learning Research Centre. Abstract: Predicting the future is an important purpose of machine learning research. In online learning, predictions are given sequentially rather than all at once. People wish to make sensible decisions.
Predicting the Wild Salmon Production Using Bayesian
Myllymäki, Petri
Predicting the Wild Salmon Production Using Bayesian Networks (HIIT Technical Report 2002-7, December 22, 2002).
Structure of Turbulence in Katabatic Flows below and above the Wind-Speed Maximum
Grachev, Andrey A; Di Sabatino, Silvana; Fernando, Harindra J S; Pardyjak, Eric R; Fairall, Christopher W
2015-01-01T23:59:59.000Z
Measurements of small-scale turbulence made over the complex-terrain atmospheric boundary layer during the MATERHORN Program are used to describe the structure of turbulence in katabatic flows. Turbulent and mean meteorological data were continuously measured at multiple levels at four towers deployed along the east lower slope (2-4 deg) of Granite Mountain. The multi-level observations made during the 30-day-long MATERHORN-Fall field campaign in September-October 2012 allowed the temporal and spatial structure of katabatic flows to be studied in detail; herein we report turbulence statistics and their variations in katabatic winds. Observed vertical profiles show steep gradients near the surface, but in the layer above the slope jet the vertical variability is smaller. It is found that the vertical (normal to the slope) momentum flux and horizontal (along the slope) heat flux in a slope-following coordinate system change their sign below and above the wind maximum of a katabatic flow. The vertical momentum flux is directed...
Su-Jong Yoon; Piyush Sabharwall
2014-07-01T23:59:59.000Z
The operating temperature of advanced nuclear reactors is generally higher than that of commercial light water reactors, and thermal energy from an advanced nuclear reactor can be used for various purposes such as district heating, desalination, hydrogen production and other process heat applications. For safety reasons, the process heat facilities will be located outside the nuclear island, so this thermal energy has to be transported a fair distance. In this study, an analytical analysis was conducted to identify the maximum distance that thermal energy could be transported using various coolants by varying the pipe diameter and mass flow rate; the cost required to transport each coolant was also analyzed. The coolants analyzed are molten salts (KCl-MgCl2, LiF-NaF-KF (FLiNaK) and KF-ZrF4), helium and water. Fluoride salts are superior because of better heat transport characteristics, but chloride salts are the most economical for higher-temperature transport. For lower temperatures, water is a possible alternative to helium, because low-pressure helium requires higher pumping power, which makes the process inefficient and economically unviable for both low- and high-temperature applications.
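A rough sketch of the kind of trade-off involved is given below: for a fixed thermal duty, the required mass flow rate, pressure drop and pumping power over a pipe of given length and diameter can be estimated with the Darcy-Weisbach relation. The function name, pipe geometry and coolant property values are illustrative assumptions, not the study's data.

```python
# Rough sketch (not from the report): estimate pumping power needed to move a
# thermal duty Q over a pipe of length L with a chosen coolant.
import math

def pumping_power(Q_w, dT_K, L_m, D_m, rho, cp, mu, eta_pump=0.75):
    """Darcy-Weisbach estimate of pump power for one-way transport."""
    m_dot = Q_w / (cp * dT_K)                 # required mass flow rate [kg/s]
    area = math.pi * D_m ** 2 / 4.0
    v = m_dot / (rho * area)                  # bulk velocity [m/s]
    Re = rho * v * D_m / mu
    f = 0.316 * Re ** -0.25 if Re > 4000 else 64.0 / Re   # Blasius / laminar
    dp = f * (L_m / D_m) * 0.5 * rho * v ** 2              # pressure drop [Pa]
    return m_dot * dp / (rho * eta_pump)                   # pump power [W]

# Example: 50 MWth over 1 km through a 0.5 m pipe with a 100 K coolant temperature rise.
# Property values are order-of-magnitude placeholders.
for name, rho, cp, mu in [("pressurized water (~300 C)", 713.0, 5750.0, 8.6e-5),
                          ("FLiNaK (~700 C)", 2020.0, 1880.0, 2.9e-3)]:
    P = pumping_power(50e6, 100.0, 1000.0, 0.5, rho, cp, mu)
    print(f"{name}: ~{P/1e3:.1f} kW pumping power")
```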
Thermal modification of bottomonium spectra from QCD sum rules with the maximum entropy method
Kei Suzuki; Philipp Gubler; Kenji Morita; Makoto Oka
2012-12-03T23:59:59.000Z
The bottomonium spectral functions at finite temperature are analyzed by employing QCD sum rules with the maximum entropy method. This approach enables us to extract the spectral functions without any phenomenological parametrization, and thus to visualize deformation of the spectral functions due to temperature effects estimated from quenched lattice QCD data. As a result, it is found that \Upsilon and \eta_b survive in hot matter of temperature up to at least 2.3T_c and 2.1T_c, respectively, while \chi_{b0} and \chi_{b1} will disappear at T<2.5T_c. Furthermore, a detailed analysis of the vector channel shows that the spectral function in the region of the lowest peak at T=0 contains contributions from the excited states, \Upsilon(2S) and \Upsilon(3S), as well as the ground state \Upsilon(1S). Our results at finite T are consistent with the picture that the excited states of bottomonia dissociate at lower temperature than the ground state. Assuming this picture, we find that \Upsilon(2S) and \Upsilon(3S) disappear at T=1.5-2.0T_c.
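In maximum entropy analyses of this kind, the spectral function \rho(\omega) is obtained by maximizing a functional that balances the likelihood of the sum-rule (OPE) data against the Shannon-Jaynes entropy relative to a default model m(\omega); this is the standard setup, quoted here for orientation rather than from this paper:

$$ Q[\rho] = \alpha S[\rho] - L[\rho], \qquad S[\rho] = \int d\omega \left[\rho(\omega) - m(\omega) - \rho(\omega)\ln\frac{\rho(\omega)}{m(\omega)}\right]. $$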
Maximum-entropy calculation of end-to-end distance distribution of force stretching chains
Luru Dai; Fei Liu; Zhong-can Ou-Yang
2002-12-12T23:59:59.000Z
Using the maximum-entropy method, we calculate the end-to-end distance distribution of a force-stretched chain from the moments of the distribution, which can be obtained from the extension-force curves recorded in single-molecule experiments. If one knows the expansion of the extension through the $(n-1)$th power of the force, that is enough information to calculate the first $n$ moments of the distribution. We examine the method with three force-stretched chain models: the Gaussian chain, the freely jointed chain, and the excluded-volume chain on a two-dimensional lattice. The method reconstructs all distributions precisely. We also apply the method to force-stretched complex chain molecules in hairpin and secondary-structure conformations. We find that the distributions of homogeneous chains in the two conformations are very different: the hairpin distribution has two independent peaks, while only one peak is observed in the distribution of secondary-structure conformations. Our discussion also shows that the end-to-end distance distribution may reveal more critical physical information than the simpler extension-force curves can give.
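The reconstruction step described above can be illustrated with a generic moment-constrained maximum-entropy fit: the MaxEnt density matching a finite set of moments has the exponential-family form p(x) ∝ exp(-Σ_k λ_k x^k), with the multipliers found by minimizing the convex dual. The sketch below is our own minimal illustration, not the paper's procedure; the grid, function name and the Gaussian toy check are assumptions.

```python
# Minimal sketch: reconstruct a density on a grid from its first few moments
# by maximum entropy, via the convex dual of the constrained problem.
import numpy as np
from scipy.optimize import minimize
from scipy.special import logsumexp

def maxent_from_moments(moments, x):
    """Return the max-entropy density on grid x matching <x^k> = moments[k-1]."""
    K = len(moments)
    dx = x[1] - x[0]
    powers = np.vstack([x ** k for k in range(1, K + 1)])   # shape (K, N)

    def dual(lam):                                           # convex dual objective
        logp = -(lam @ powers)
        logZ = logsumexp(logp) + np.log(dx)
        return logZ + lam @ np.asarray(moments)

    lam = minimize(dual, np.zeros(K), method="BFGS").x
    logp = -(lam @ powers)
    logp -= logsumexp(logp) + np.log(dx)                     # normalize
    return np.exp(logp)

x = np.linspace(-5.0, 5.0, 2001)
p = maxent_from_moments([0.0, 1.0], x)      # first two moments of a unit Gaussian
print(np.sum(x ** 2 * p) * (x[1] - x[0]))   # second moment ~ 1.0
```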
Liu, Jian; Miller, William H.
2008-08-01T23:59:59.000Z
The maximum entropy analytic continuation (MEAC) method is used to extend the range of accuracy of the linearized semiclassical initial value representation (LSC-IVR)/classical Wigner approximation for real-time correlation functions. The LSC-IVR provides a very effective 'prior' for the MEAC procedure since it is very good for short times, exact for all time and temperature for harmonic potentials (even for correlation functions of nonlinear operators), and becomes exact in the classical high temperature limit. This combined MEAC+LSC/IVR approach is applied here to two highly nonlinear dynamical systems, a pure quartic potential in one dimension and liquid para-hydrogen at two thermal state points (25 K and 14 K under nearly zero external pressure). The former example shows the MEAC procedure to be a very significant enhancement of the LSC-IVR, for correlation functions of both linear and nonlinear operators, and especially at low temperature where semiclassical approximations are least accurate. For liquid para-hydrogen, the LSC-IVR is seen already to be excellent at T = 25 K, but the MEAC procedure produces a significant correction at the lower temperature (T = 14 K).
FUEL CASK IMPACT LIMITER VULNERABILITIES
Leduc, D; Jeffery England, J; Roy Rothermel, R
2009-02-09T23:59:59.000Z
Cylindrical fuel casks often have impact limiters surrounding just the ends of the cask shaft in a typical 'dumbbell' arrangement. The primary purpose of these impact limiters is to absorb energy to reduce loads on the cask structure during impacts associated with a severe accident. Impact limiters are also credited in many packages with protecting closure seals and maintaining lower peak temperatures during fire events. For this credit to be taken in safety analyses, the impact limiter attachment system must be shown to retain the impact limiter following Normal Conditions of Transport (NCT) and Hypothetical Accident Conditions (HAC) impacts. Large casks are often certified by analysis only because of the costs associated with testing. Therefore, some cask impact limiter attachment systems have not been tested in real impacts. A recent structural analysis of the T-3 Spent Fuel Containment Cask found problems with the design of the impact limiter attachment system. Assumptions in the original Safety Analysis for Packaging (SARP) concerning the loading in the attachment bolts were found to be inaccurate in certain drop orientations. This paper documents the lessons learned and their applicability to impact limiter attachment system designs.