Covariance Evaluation Methodology for Neutron Cross Sections
Herman, M.; Arcilla, R.; Mattoon, C.M.; Mughabghab, S.F.; Oblozinsky, P.; Pigni, M.; Pritychenko, B.; Sonzogni, A.A.
2008-09-01
We present the NNDC-BNL methodology for estimating neutron cross section covariances in the thermal, resolved resonance, unresolved resonance and fast neutron regions. The three key elements of the methodology are the Atlas of Neutron Resonances, the nuclear reaction code EMPIRE, and the Bayesian code implementing the Kalman filter concept. The covariance data processing, visualization and distribution capabilities are integral components of the NNDC methodology. We illustrate its application on examples including a relatively detailed evaluation of covariances for two individual nuclei and the massive production of simple covariance estimates for 307 materials. Certain peculiarities regarding the evaluation of covariances for resolved resonances and the consistency between resonance parameter uncertainties and thermal cross section uncertainties are also discussed.
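The Bayesian step of such a methodology can be illustrated by the generic Kalman-filter (generalized least-squares) update of a parameter covariance. The sketch below is a minimal, self-contained illustration with invented numbers, not the NNDC-BNL KALMAN code; `kalman_update`, the sensitivity matrix `S`, and all values are assumptions made here.

```python
import numpy as np

# Generic Kalman/GLS update of a model-parameter covariance (toy numbers).
# P: prior parameter covariance, S: sensitivities of cross sections to
# parameters, V: experimental covariance.
def kalman_update(P, S, V):
    G = S @ P @ S.T + V                        # innovation covariance
    K = P @ S.T @ np.linalg.inv(G)             # Kalman gain
    return P - K @ S @ P                       # posterior covariance

P_prior = np.diag([0.04, 0.09])                # toy prior variances
S = np.array([[1.0, 0.5],
              [0.2, 1.0]])                     # toy sensitivity matrix
V = np.diag([0.01, 0.01])                      # toy experimental covariance
P_post = kalman_update(P_prior, S, V)
```

Folding in experimental information can only shrink (or preserve) the prior parameter variances, so the diagonal of `P_post` never exceeds that of `P_prior`.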
Quality Quantification of Evaluated Cross Section Covariances
Varet, S.; Dossantos-Uzarralde, P.
2015-01-15
Presently, several methods are used to estimate the covariance matrix of evaluated nuclear cross sections. Because the resulting covariance matrices can be different according to the method used and according to the assumptions of the method, we propose a general and objective approach to quantify the quality of the covariance estimation for evaluated cross sections. The first step consists in defining an objective criterion. The second step is computation of the criterion. In this paper the Kullback-Leibler distance is proposed for the quality quantification of a covariance matrix estimation and its inverse. It is based on the distance to the true covariance matrix. A method based on the bootstrap is presented for the estimation of this criterion, which can be applied with most methods for covariance matrix estimation and without the knowledge of the true covariance matrix. The full approach is illustrated on the {sup 85}Rb nucleus evaluations and the results are then used for a discussion on scoring and Monte Carlo approaches for covariance matrix estimation of the cross section evaluations.
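For zero-mean Gaussians, the Kullback-Leibler distance used as the quality criterion has a closed form. The sketch below uses hypothetical 2x2 matrices (not the paper's {sup 85}Rb data) to show that the criterion vanishes when the estimate matches the reference covariance and is positive otherwise; the function name is chosen here for illustration.

```python
import numpy as np

# KL distance between N(0, C_est) and N(0, C_true), used as a quality
# criterion for a covariance estimate (toy illustration, not the paper's code).
def kl_gaussian(C_est, C_true):
    d = C_true.shape[0]
    Ct_inv = np.linalg.inv(C_true)
    return 0.5 * (np.trace(Ct_inv @ C_est) - d
                  + np.log(np.linalg.det(C_true) / np.linalg.det(C_est)))

C_true = np.array([[1.0, 0.3],
                   [0.3, 1.0]])
```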
Progress on Nuclear Data Covariances: AFCI-1.2 Covariance Library
Oblozinsky, P.; Mattoon, C.M.; Herman, M.; Mughabghab, S.F.; Pigni, M.T.; Talou, P.; Hale, G.M.; Kahler, A.C.; Kawano, T.; Little, R.C.; Young, P.G.
2009-09-28
Improved neutron cross section covariances were produced for 110 materials including 12 light nuclei (coolants and moderators), 78 structural materials and fission products, and 20 actinides. Improved covariances were organized into the AFCI-1.2 covariance library in 33 energy groups, from 10{sup -5} eV to 19.6 MeV. BNL contributed improved covariance data for the following materials: {sup 23}Na and {sup 55}Mn, where a more detailed evaluation was done; improvements in the major structural materials {sup 52}Cr, {sup 56}Fe and {sup 58}Ni; improved estimates for the remaining structural materials and fission products; improved covariances for 14 minor actinides; and estimates of mubar covariances for {sup 23}Na and {sup 56}Fe. LANL contributed improved covariance data for {sup 235}U and {sup 239}Pu including prompt neutron fission spectra, and a completely new evaluation for {sup 240}Pu. A new R-matrix evaluation for {sup 16}O including mubar covariances is nearing completion. BNL assembled the library and performed basic testing using improved procedures including inspection of uncertainty and correlation plots for each material. The AFCI-1.2 library was released to ANL and INL in August 2009.
Performance of internal covariance estimators for cosmic shear correlation functions
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Friedrich, O.; Seitz, S.; Eifler, T. F.; Gruen, D.
2015-12-31
Data re-sampling methods such as the delete-one jackknife are a common tool for estimating the covariance of large scale structure probes. In this paper we investigate the concepts of internal covariance estimation in the context of cosmic shear two-point statistics. We demonstrate how to use log-normal simulations of the convergence field and the corresponding shear field to carry out realistic tests of internal covariance estimators and find that most estimators such as jackknife or sub-sample covariance can reach a satisfactory compromise between bias and variance of the estimated covariance. In a forecast for the complete, 5-year DES survey we show that internally estimated covariance matrices can provide a large fraction of the true uncertainties on cosmological parameters in a 2D cosmic shear analysis. The volume inside contours of constant likelihood in the $\Omega_m$-$\sigma_8$ plane as measured with internally estimated covariance matrices is on average $\gtrsim 85\%$ of the volume derived from the true covariance matrix. The uncertainty on the parameter combination $\Sigma_8 \sim \sigma_8 \Omega_m^{0.5}$ derived from internally estimated covariances is $\sim 90\%$ of the true uncertainty.
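The delete-one jackknife tested in the paper has a compact form when the statistic is a sample mean. The sketch below uses synthetic Gaussian draws rather than shear catalogs (a shear analysis would delete sky sub-regions instead of single rows); the function name and data are assumptions made for illustration.

```python
import numpy as np

# Delete-one jackknife estimate of the covariance of a sample mean.
def jackknife_covariance(data):
    N = data.shape[0]
    total = data.sum(axis=0)
    loo = (total - data) / (N - 1)        # leave-one-out (delete-one) means
    diff = loo - loo.mean(axis=0)
    return (N - 1) / N * (diff.T @ diff)

rng = np.random.default_rng(1)
data = rng.multivariate_normal([0.0, 0.0],
                               [[1.0, 0.5],
                                [0.5, 2.0]], size=500)
C_jk = jackknife_covariance(data)
```

For the mean, the jackknife reduces exactly to the sample covariance divided by N, which is a convenient sanity check on the implementation.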
Development of covariance capabilities in EMPIRE code (Conference...
Office of Scientific and Technical Information (OSTI)
The model-based covariances can be obtained using deterministic (Kalman) or stochastic (Monte Carlo) propagation of model parameter uncertainties. We show that these two procedures ...
Covariant functional diffusion equation for Polyakov's bosonic string
Botelho, L. C. L.
1989-07-15
I write a covariant functional diffusion equation for Polyakov's bosonic string with the string's world-sheet area playing the role of proper time.
Conformal killing tensors and covariant Hamiltonian dynamics
Cariglia, M.; Gibbons, G. W.; Holten, J.-W. van; Horvathy, P. A.; Zhang, P.-M.
2014-12-15
A covariant algorithm for deriving the conserved quantities for natural Hamiltonian systems is combined with the non-relativistic framework of Eisenhart, and of Duval, in which the classical trajectories arise as geodesics in a higher dimensional space-time, realized by Brinkmann manifolds. Conserved quantities which are polynomial in the momenta can be built using time-dependent conformal Killing tensors with flux. The latter are associated with terms proportional to the Hamiltonian in the lower dimensional theory and with spectrum generating algebras for higher dimensional quantities of order 1 and 2 in the momenta. Illustrations of the general theory include the Runge-Lenz vector for planetary motion with a time-dependent gravitational constant G(t), motion in a time-dependent electromagnetic field of a certain form, quantum dots, the Hénon-Heiles and Holt systems, respectively, providing us with Killing tensors of rank that ranges from one to six.
Stadler, Alfred; Gross, Franz
2010-10-01
We provide a short overview of the Covariant Spectator Theory and its applications. The basic ideas are introduced through the example of a {phi}{sup 4}-type theory. High-precision models of the two-nucleon interaction are presented and the results of their use in calculations of properties of the two- and three-nucleon systems are discussed. A short summary of applications of this framework to other few-body systems is also presented.
Progress of Covariance Evaluation at the China Nuclear Data Center
Xu, R.; Zhang, Q.; Zhang, Y.; Liu, T.; Ge, Z.; Lu, H.; Sun, Z.; Yu, B.; Tang, G.
2015-01-15
Covariance evaluations at the China Nuclear Data Center focus on the cross sections of structural materials and actinides in the fast neutron energy range. In addition to the well-known least-squares approach, a method based on the analysis of the sources of experimental uncertainties is especially introduced to generate a covariance matrix for a particular reaction for which multiple measurements are available. The scheme of the covariance evaluation flow is presented, and an example of n+{sup 90}Zr is given to illustrate the whole procedure. It is proven that the accuracy of measurements can be properly incorporated into the covariance and the long-standing small uncertainty problem can be avoided.
Are the invariance principles really truly Lorentz covariant?
Arunasalam, V.
1994-02-01
It is shown that some sections of the invariance (or symmetry) principles such as the space reversal symmetry (or parity P) and time reversal symmetry T (of elementary particle and condensed matter physics, etc.) are not really truly Lorentz covariant. Indeed, I find that the Dirac-Wigner sense of Lorentz invariance is not in full compliance with the Einstein-Minkowski requirements of the Lorentz covariance of all physical laws (i.e., the world space Mach principle).
Low-Fidelity Covariances: Neutron Cross Section Covariance Estimates for 387 Materials
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
"Covariance data are provided for radiative capture (or (n,ch.p.) for light nuclei), elastic scattering (or total for some actinides), inelastic scattering, (n,2n) reactions, fission and nubars over the energy range from 10{sup -5} eV to 20 MeV. The library contains 387 files including almost all (383 out of 393) materials of the ENDF/B-VII.0. Absent are data for {sup 7}Li, {sup 232}Th, {sup 233,235,238}U and {sup 239}Pu as well as {sup 223,224,225,226}Ra, while {sup nat}Zn is replaced by {sup 64,66,67,68,70}Zn." [http://www.nndc.bnl.gov/lowfi/index.jsp?z=7]
Neutron Resonance Parameters and Covariance Matrix of 239Pu
Derrien, Herve; Leal, Luiz C.; Larson, Nancy M.
2008-08-01
In order to obtain the resonance parameters in a single energy range and the corresponding covariance matrix, a reevaluation of 239Pu was performed with the code SAMMY. The most recent experimental data were analyzed or reanalyzed in the energy range thermal to 2.5 keV. The normalization of the fission cross section data was reconsidered by taking into account the most recent measurements of Weston et al. and Wagemans et al. A full resonance parameter covariance matrix was generated. The method used to obtain realistic uncertainties on the average cross section calculated by SAMMY or other processing codes was examined.
Covariance matrices for use in criticality safety predictability studies
Derrien, H.; Larson, N.M.; Leal, L.C.
1997-09-01
Criticality predictability applications require as input the best available information on fissile and other nuclides. In recent years important work has been performed in the analysis of neutron transmission and cross-section data for fissile nuclei in the resonance region by using the computer code SAMMY. The code uses Bayes method (a form of generalized least squares) for sequential analyses of several sets of experimental data. Values for Reich-Moore resonance parameters, their covariances, and the derivatives with respect to the adjusted parameters (data sensitivities) are obtained. In general, the parameter file contains several thousand values and the dimension of the covariance matrices is correspondingly large. These matrices are not reported in the current evaluated data files due to their large dimensions and to the inadequacy of the file formats. The present work has two goals: the first is to calculate the covariances of group-averaged cross sections from the covariance files generated by SAMMY, because these can be more readily utilized in criticality predictability calculations. The second goal is to propose a more practical interface between SAMMY and the evaluated files. Examples are given for {sup 235}U in the popular 199- and 238-group structures, using the latest ORNL evaluation of the {sup 235}U resonance parameters.
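The group-collapse step described above follows the standard "sandwich rule" for propagating a parameter covariance through sensitivities, C_group = S C_p S^T. The sketch below uses invented sensitivities and a toy parameter covariance, not actual SAMMY output, to show how group-averaged covariances inherit symmetry and positive semidefiniteness from the parameter covariance.

```python
import numpy as np

# "Sandwich rule": propagate a resonance-parameter covariance C_p to
# group-averaged cross sections via the sensitivity matrix S, whose rows
# hold d(sigma_g)/d(p_k). All numbers are toy values for illustration.
C_p = np.array([[4.0e-4, 1.0e-4],
                [1.0e-4, 9.0e-4]])           # toy parameter covariance
S = np.array([[0.8, 0.1],
              [0.3, 0.7],
              [0.0, 1.2]])                    # 3 groups x 2 parameters
C_group = S @ C_p @ S.T                       # group-wise covariance matrix
```

The result is symmetric and positive semidefinite by construction, which is exactly the property required of a covariance file handed to downstream processing codes.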
JENDL Actinoid File 2008 and Plan of Covariance Evaluation
Iwamoto, O.; Nakagawa, T.; Otuka, N.; Chiba, S.; Okumura, K.; Chiba, G.
2008-12-15
JENDL Actinoid File 2008 (JENDL/AC-2008), which is one of JENDL special purpose files, was released in March 2008. It provides nuclear data for neutron induced nuclear reactions for actinoid nuclides from Ac (Z=89) to Fm (Z=100). The data for 62 nuclides in JENDL-3.3 were revised and newly evaluated data were added for 17 nuclides that have a half-life longer than 1 day. The energy range of incident neutrons is from 10{sup -5} eV to 20 MeV. The nuclear reaction model code CCONE was widely used for the evaluation of cross sections and energy-angular distributions of secondary neutrons in the fast energy region. Covariance data for the fission and capture cross sections and the number of neutrons per fission will be evaluated for important nuclides in JENDL/AC-2008. The evaluation methods and the preliminary results of estimated covariances are presented.
Bilinear covariants and spinor fields duality in quantum Clifford algebras
Abłamowicz, Rafał; Gonçalves, Icaro; Rocha, Roldão da
2014-10-15
Classification of quantum spinor fields according to quantum bilinear covariants is introduced in a context of quantum Clifford algebras on Minkowski spacetime. Once the bilinear covariants are expressed in terms of algebraic spinor fields, the duality between spinor and quantum spinor fields can be discussed. Thus, by endowing the underlying spacetime with an arbitrary bilinear form with an antisymmetric part in addition to a symmetric spacetime metric, quantum algebraic spinor fields and deformed bilinear covariants can be constructed. They are thus compared to the classical (non-quantum) ones. Classes of quantum spinor fields are introduced and compared with Lounesto's spinor field classification. A physical interpretation of the deformed parts and the underlying Z-grading is proposed. The existence of an arbitrary bilinear form endowing the spacetime has already been explored in the literature in the context of quantum gravity [S. W. Hawking, The unpredictability of quantum gravity, Commun. Math. Phys. 87, 395 (1982)]. Here, it is shown further to play a prominent role in the structure of Dirac, Weyl, and Majorana spinor fields, besides the most general flagpoles and flag-dipoles. We introduce a new duality between the standard and the quantum spinor fields, by showing that when Clifford algebras over vector spaces endowed with an arbitrary bilinear form are taken into account, a mixture among the classes does occur. Consequently, novel features regarding the spinor fields can be derived.
Role of Experiment Covariance in Cross Section Adjustments
Palmiotti, Giuseppe; Salvatores, M.
2014-06-01
This paper is dedicated to the memory of R. D. McKnight, who made a seminal contribution to establishing the methodology and rigorous approach used in the evaluation of the covariance of reactor physics integral experiments. His original assessment of the ZPPR experiment uncertainties and correlations has made nuclear data adjustments based on these experiments much more robust and reliable. In the present paper we show, with some numerical examples, the actual impact on an adjustment of accounting for or neglecting such correlations.
Covariance of Neutron Cross Sections for {sup 16}O through R-matrix Analysis
Kunieda, S.; Kawano, T.; Paris, M.; Hale, G.M.; Shibata, K.; Fukahori, T.
2015-01-15
Through the R-matrix analysis, neutron cross sections as well as the covariance are estimated for {sup 16}O in the resolved resonance range. Although we consider the current results to be still preliminary, we present a summary of the cross section analysis and the results of the data uncertainty/covariance, including those for the differential cross sections. It is found that the values obtained highlight consequences of nature in the theory as well as knowledge from measurements, which gives a realistic quantification of evaluated nuclear data covariances.
Extremal covariant positive operator valued measures: The case of a compact symmetry group
Carmeli, Claudio; Heinosaari, Teiko; Pellonpää, Juha-Pekka; Toigo, Alessandro
2008-06-15
Given a unitary representation U of a compact group G and a transitive G-space {omega}, we characterize the extremal elements of the convex set of all U-covariant positive operator valued measures.
PUFF-III: A Code for Processing ENDF Uncertainty Data Into Multigroup Covariance Matrices
Dunn, M.E.
2000-06-01
PUFF-III is an extension of the previous PUFF-II code that was developed in the 1970s and early 1980s. The PUFF codes process the Evaluated Nuclear Data File (ENDF) covariance data and generate multigroup covariance matrices on a user-specified energy grid structure. Unlike its predecessor, PUFF-III can process the new ENDF/B-VI data formats. In particular, PUFF-III has the capability to process the spontaneous fission covariances for fission neutron multiplicity. With regard to the covariance data in File 33 of the ENDF system, PUFF-III has the capability to process short-range variance formats, as well as the lumped reaction covariance data formats that were introduced in ENDF/B-V. In addition to the new ENDF formats, a new directory feature is now available that allows the user to obtain a detailed directory of the uncertainty information in the data files without visually inspecting the ENDF data. Following the correlation matrix calculation, PUFF-III also evaluates the eigenvalues of each correlation matrix and tests each matrix for positive definiteness. Additional new features are discussed in the manual. PUFF-III has been developed for implementation in the AMPX code system, and several modifications were incorporated to improve memory allocation tasks and input/output operations. Consequently, the resulting code has a structure that is similar to other modules in the AMPX code system. With the release of PUFF-III, a new and improved covariance processing code is available to process ENDF covariance formats through Version VI.
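The final checks described in the abstract (forming the correlation matrix, then testing it for positive definiteness via its eigenvalues) can be sketched generically. This is a toy illustration of the kind of check PUFF-III performs, not its actual Fortran implementation; the function names and matrix are assumptions.

```python
import numpy as np

# Form a correlation matrix from a covariance matrix, then test positive
# definiteness through its eigenvalues (toy illustration of the PUFF-III
# style check, not the code itself).
def correlation_from_covariance(C):
    s = np.sqrt(np.diag(C))
    return C / np.outer(s, s)

def is_positive_definite(R, tol=1e-12):
    return bool(np.all(np.linalg.eigvalsh(R) > tol))

C = np.array([[4.0, 1.2],
              [1.2, 1.0]])                    # toy covariance
R = correlation_from_covariance(C)            # unit diagonal by construction
```

A correlation matrix with an off-diagonal entry whose magnitude exceeds 1 fails this test, which is one way inconsistent uncertainty files reveal themselves.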
Review and Assessment of Neutron Cross Section and Nubar Covariances for Advanced Reactor Systems
Maslov, V.M.; Oblozinsky, P.; Herman, M.
2008-12-01
In January 2007, the National Nuclear Data Center (NNDC) produced a set of preliminary neutron covariance data for the international project 'Nuclear Data Needs for Advanced Reactor Systems'. The project was sponsored by the OECD Nuclear Energy Agency (NEA), Paris, under the Subgroup 26 of the International Working Party on Evaluation Cooperation (WPEC). These preliminary covariances are described in two recent BNL reports. The NNDC used a simplified version of the method developed by BNL and LANL that combines the recent Atlas of Neutron Resonances, the nuclear reaction model code EMPIRE and the Bayesian code KALMAN, with the experimental data used as guidance. There are numerous issues involved in these estimates of covariances and it was decided to perform an independent review and assessment of these results so that better covariances can be produced for the revised version in the future. Reviewed and assessed are uncertainties for fission, capture, elastic scattering, inelastic scattering and (n,2n) cross sections as well as prompt nubars for 15 minor actinides ({sup 233,234,236}U, {sup 237}Np, {sup 238,240,241,242}Pu, {sup 241,242m,243}Am and {sup 242,243,244,245}Cm) and 4 major actinides ({sup 232}Th, {sup 235,238}U and {sup 239}Pu). We examined available evaluations, performed comparisons with experimental data, took into account uncertainties in model parameterization, and made use of state-of-the-art nuclear reaction theory to produce the uncertainty assessment.
0{nu}{beta}{beta} decay: theoretical nuclear matrix elements and their covariances
Lisi, Eligio [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Via Orabona 4, 70126 Bari (Italy)
2009-11-09
Within the quasiparticle random phase approximation (QRPA), the covariances associated with the nuclear matrix elements (NME) of neutrinoless double beta decay (0{nu}{beta}{beta}) are estimated. It is shown that correlated NME uncertainties play an important role in the comparison of 0{nu}{beta}{beta} decay rates for different nuclei, both in the standard case of light Majorana neutrino exchange and in nonstandard physics cases.
Spagnolo, Nicolo; Sciarrino, Fabio; De Martini, Francesco
2010-09-15
We show that the quantum states generated by universal optimal quantum cloning of a single photon represent a universal set of quantum superpositions resilient to decoherence. We adopt the Bures distance as a tool to investigate the persistence of quantum coherence of these quantum states. According to this analysis, the process of universal cloning realizes a class of quantum superpositions that exhibits a covariance property in a lossy configuration over the complete set of polarization states in the Bloch sphere.
Bystroff, Christopher; Webb-Robertson, Bobbie-Jo M.
2009-05-06
Amino acid sequence probability distributions, or profiles, have been used successfully to predict secondary structure and local structure in proteins. Profile models assume the statistical independence of each position in the sequence, but the energetics of protein folding is better captured in a scoring function that is based on pairwise interactions, like a force field. I-sites motifs are short sequence/structure motifs that populate the protein structure database due to energy-driven convergent evolution. Here we show that a pairwise covariant sequence model does not predict alpha helix or beta strand significantly better overall than a profile-based model, but it does improve the prediction of certain loop motifs. The finding is best explained by considering secondary structure profiles as multivariant, all-or-none models, which subsume covariant models. Pairwise covariance is nonetheless present and energetically rational. Examples of negative design are present, where the covariances disfavor non-native structures. Measured pairwise covariances are shown to be statistically robust in cross-validation tests, as long as the amino acid alphabet is reduced to nine classes. We present an updated I-sites local structure motif library and web server that provide sequence covariance information for all types of local structure in globular proteins.
Pinto, Sergio Alexandre; Stadler, Alfred; Gross, Franz
2009-05-15
We present the first calculations of the electromagnetic form factors of {sup 3}He and {sup 3}H within the framework of the Covariant Spectator Theory (CST). This first exploratory study concentrates on the sensitivity of the form factors to the strength of the scalar meson-nucleon off-shell coupling, known from previous studies to have a strong influence on the three-body binding energy. Results presented here were obtained using the complete impulse approximation (CIA), which includes contributions of relativistic origin that appear as two-body corrections in a nonrelativistic framework, such as 'Z-graphs', but omits other two and three-body currents. We compare our results to nonrelativistic calculations augmented by relativistic corrections of O(v/c){sup 2}.
Chiral symmetry and π-π scattering in the Covariant Spectator Theory
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Biernat, Elmar P.; Peña, M. T.; Ribeiro, J. E.; Stadler, Alfred; Gross, Franz
2014-11-14
The π-π scattering amplitude calculated with a model for the quark-antiquark interaction in the framework of the Covariant Spectator Theory (CST) is shown to satisfy the Adler zero constraint imposed by chiral symmetry. The CST formalism is established in Minkowski space and our calculations are performed in momentum space. We prove that the axial-vector Ward-Takahashi identity is satisfied by our model. Then we show that, similarly to what happens within the Bethe-Salpeter formalism, application of the axial-vector Ward-Takahashi identity to the CST π-π scattering amplitude allows us to sum the intermediate quark-quark interactions to all orders. Thus, the Adler self-consistency zero for π-π scattering in the chiral limit emerges as the result of this sum.
Seifert, Michael D.; Wald, Robert M.
2007-04-15
We present a general method for the analysis of the stability of static, spherically symmetric solutions to spherically symmetric perturbations in an arbitrary diffeomorphism covariant Lagrangian field theory. Our method involves fixing the gauge and solving the linearized gravitational field equations to eliminate the metric perturbation variables in terms of the matter variables. In a wide class of cases--which include f(R) gravity, the Einstein-aether theory of Jacobson and Mattingly, and Bekenstein's TeVeS theory--the remaining perturbation equations for the matter fields are second order in time. We show how the symplectic current arising from the original Lagrangian gives rise to a symmetric bilinear form on the variables of the reduced theory. If this bilinear form is positive definite, it provides an inner product that puts the equations of motion of the reduced theory into a self-adjoint form. A variational principle can then be written down immediately, from which stability can be tested readily. We illustrate our method in the case of Einstein's equation with perfect fluid matter, thereby rederiving, in a systematic manner, Chandrasekhar's variational principle for radial oscillations of spherically symmetric stars. In a subsequent paper, we will apply our analysis to f(R) gravity, the Einstein-aether theory, and Bekenstein's TeVeS theory.
Deriving Daytime Variables From the AmeriFlux Standard Eddy Covariance Data Set
van Ingen, Catharine; Agarwal, Deborah A.; Humphrey, Marty; Li, Jie
2008-12-06
A gap-filled, quality assessed eddy covariance dataset has recently become available for the AmeriFlux network. This dataset uses standard processing and produces commonly used science variables. This shared dataset enables robust comparisons across different analyses. Of course, there are many remaining questions. One of those is how to define 'during the day', which is an important concept for many analyses. Some studies have used local time, for example 9am to 5pm; others have used thresholds on photosynthetically active radiation (PAR). A related question is how to derive quantities such as the Bowen ratio. Most studies compute the ratio of the averages of the latent heat (LE) and sensible heat (H). In this study, we use different methods of defining 'during the day' for GPP, LE, and H. We evaluate the differences between methods in two ways. First, we look at a number of statistics of GPP. Second, we look at differences in the derived Bowen ratio. Our goal is not science per se, but rather informatics in support of the science.
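The two "during the day" definitions contrasted in the abstract can be sketched side by side on synthetic data. Everything below (the fake diurnal PAR curve, the heat-flux series, the threshold value) is invented for illustration; a real analysis would use the gap-filled AmeriFlux series.

```python
import numpy as np

# Contrast a fixed local-time window with a PAR threshold as definitions of
# "during the day", and compute the Bowen ratio H/LE from daytime averages.
hours = np.arange(48) % 24                          # two synthetic days, hourly
par = np.clip(np.sin((hours - 6) / 12 * np.pi), 0, None) * 1500  # fake PAR
H = 50 + 200 * par / 1500                           # fake sensible heat (W/m^2)
LE = 100 + 250 * par / 1500                         # fake latent heat (W/m^2)

clock_day = (hours >= 9) & (hours <= 17)            # 9am-5pm definition
par_day = par > 10.0                                # PAR-threshold definition

bowen_clock = H[clock_day].mean() / LE[clock_day].mean()
bowen_par = H[par_day].mean() / LE[par_day].mean()
```

The two masks select different numbers of hours, so the derived Bowen ratio differs slightly between definitions, which is precisely the kind of sensitivity the study quantifies.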
Computation of Large Covariance Matrices by SAMMY on Graphical Processing Units and Multicore CPUs
Arbanas, Goran [ORNL]; Dunn, Michael E. [ORNL]; Wiarda, Dorothea [ORNL]
2011-01-01
Computational power of Graphical Processing Units and multicore CPUs was harnessed by the nuclear data evaluation code SAMMY to speed up computations of large Resonance Parameter Covariance Matrices (RPCMs). This was accomplished by linking SAMMY to vendor-optimized implementations of the matrix-matrix multiplication subroutine of the Basic Linear Algebra Subprograms (BLAS) library to compute the most time-consuming step. The U-235 RPCM computed previously using a triple-nested loop was re-computed using the NVIDIA implementation of the subroutine on a single Tesla Fermi Graphical Processing Unit, and also using the Intel Math Kernel Library implementation on two different multicore CPU systems. A multiplication of two matrices of dimensions 16,000 x 20,000 that had previously taken days took approximately one minute on the GPU. Similar performance was achieved on a dual six-core CPU system. The magnitude of the speed-up suggests that these, or similar, combinations of hardware and libraries may be useful for large matrix operations in SAMMY. Uniform interfaces of standard linear algebra libraries make them a promising candidate for a programming framework of a new generation of SAMMY for the emerging heterogeneous computing platforms.
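The gap between a triple-nested loop and a vendor-optimized GEMM call can be demonstrated at small scale. The sketch below (sizes scaled far down from the 16,000 x 20,000 case in the abstract) times an explicit Python triple loop against NumPy's `@`, which dispatches to an optimized BLAS; it is an illustration of the principle, not SAMMY's Fortran code path.

```python
import time
import numpy as np

# Explicit triple-nested matrix multiplication, analogous to the loop the
# abstract says was replaced by a BLAS GEMM call.
def naive_matmul(A, B):
    n, k = A.shape
    _, m = B.shape
    C = np.zeros((n, m))
    for i in range(n):
        for j in range(m):
            for l in range(k):
                C[i, j] += A[i, l] * B[l, j]
    return C

rng = np.random.default_rng(0)
A = rng.standard_normal((60, 80))
B = rng.standard_normal((80, 50))

t0 = time.perf_counter(); C_slow = naive_matmul(A, B); t_slow = time.perf_counter() - t0
t0 = time.perf_counter(); C_fast = A @ B;               t_fast = time.perf_counter() - t0
```

Even at these tiny sizes the BLAS-backed product is orders of magnitude faster, and the two results agree to floating-point precision.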
Fichtl, G.H.
1983-09-01
When designing a wind energy conversion system (WECS), it may be necessary to take into account the distribution of wind across the disc of rotation. The specific engineering applications include structural strength, fatigue, and control. This wind distribution consists of two parts, namely that associated with the mean wind profile and that associated with the turbulence velocity fluctuation field. The work reported herein is aimed at the latter, namely the distribution of turbulence velocity fluctuations across the WECS disc of rotation. A theory is developed for the two-time covariance matrix of turbulence velocity vector components for wind energy conversion system (WECS) design. The theory is developed for homogeneous and isotropic turbulence with the assumption that Taylor's hypothesis is valid. The Eulerian turbulence velocity vector field is expanded about the hub of the WECS. Formulae are developed for the turbulence velocity vector component covariance matrix following the WECS blade elements. It is shown that upon specification of the turbulence energy spectrum function and the WECS rotation rate, the two-point, two-time covariance matrix of the turbulent flow relative to the WECS blade elements is determined. This covariance matrix is represented as the sum of nonstationary and stationary contributions. Generalized power spectral methods are used to obtain two-point, double-frequency power spectral density functions for the turbulent flow following the blade elements. The Dryden turbulence model is used to demonstrate the theory. A discussion of linear system response analysis is provided to show how the double-frequency turbulence spectra might be used to calculate response spectra of a WECS to turbulent flow. Finally, the spectrum of the component of turbulence normal to the WECS disc of rotation, following the blade elements, is compared with experimental results.
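The Dryden model used in the demonstration has a simple rational spectrum. The sketch below evaluates the standard Dryden longitudinal form in spatial frequency with toy parameters; the parameter values and variable names are assumptions made here, not those of the paper's WECS demonstration.

```python
import numpy as np

# Standard one-sided Dryden longitudinal turbulence spectrum in spatial
# frequency Omega (rad/m); sigma is the turbulence intensity, L the length
# scale. Toy parameters for illustration.
def dryden_longitudinal(Omega, sigma, L):
    return sigma**2 * (2.0 * L / np.pi) / (1.0 + (L * Omega)**2)

Omega = np.linspace(0.0, 1000.0, 200001)
S = dryden_longitudinal(Omega, sigma=1.0, L=1.0)

# Trapezoid-rule integral of the spectrum over the sampled band.
dOmega = Omega[1] - Omega[0]
integral = float(np.sum((S[:-1] + S[1:]) * dOmega / 2.0))
```

Integrating the one-sided spectrum over all frequencies recovers the variance sigma^2, which is the basic consistency check tying a spectrum model back to the covariance it represents.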
Palmiotti, Giuseppe; Salvatores, Massimo; Aliberti, G.
2015-01-01
In order to provide useful feedback to evaluators, a set of criteria is established for assessing the robustness and reliability of cross section adjustments that make use of integral experiment information. Criteria are also provided for accepting the “a posteriori” cross sections, both as new “nominal” values and as “trends”. Some indications on the use of the “a posteriori” covariance matrix are given, even though more investigation is needed to settle this complex subject.
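The adjustment machinery this abstract refers to is, in its standard linear form, a generalized-least-squares (Bayesian) update of prior cross sections against integral measurements. A minimal sketch under that standard formulation (the toy numbers and the function name `gls_adjust` are illustrative, not from the paper):

```python
import numpy as np

def gls_adjust(x, C, S, y, V):
    """One generalized-least-squares adjustment step.

    x : prior cross sections, C : prior covariance,
    S : sensitivities of the integral experiments to x,
    y : measured integral values, V : experimental covariance.
    Returns the adjusted ("a posteriori") values and covariance."""
    G = C @ S.T @ np.linalg.inv(S @ C @ S.T + V)   # gain matrix
    x_post = x + G @ (y - S @ x)                   # new "nominal" values
    C_post = C - G @ S @ C                         # "a posteriori" covariance
    return x_post, C_post

# Toy example: two cross sections constrained by one integral experiment.
x = np.array([1.0, 2.0])
C = np.diag([0.04, 0.09])
S = np.array([[0.5, 0.5]])
y = np.array([1.6])
V = np.array([[0.01]])
x_post, C_post = gls_adjust(x, C, S, y, V)
```

A useful sanity check on any such adjustment is that the posterior covariance can only shrink relative to the prior (C − C_post is positive semidefinite), which is one reason the abstract cautions that acceptance criteria for the a posteriori covariance need care.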
Use and Impact of Covariance Data in the Japanese Latest Adjusted Library ADJ2010 Based on JENDL-4.0
Yokoyama, K.; Ishikawa, M.
2015-01-15
The current status of covariance applications to fast reactor analysis and design in Japan is summarized. In Japan, the covariance data are mainly used for three purposes: (1) to quantify the uncertainty of nuclear core parameters; (2) to identify important nuclides, reactions, and energy ranges which dominate the uncertainty of core parameters; and (3) to improve the accuracy of core design values by incorporating integral data such as critical experiments and power reactor operation data. For the last purpose, cross section adjustment based on the Bayesian theorem is used. After the release of JENDL-4.0, a development project for the new adjusted group-constant set ADJ2010 was started in 2010 and completed in 2013. In the present paper, the final results of ADJ2010 are briefly summarized. In addition, the adjustment results of ADJ2010 are discussed from the viewpoint of the use and impact of nuclear data covariances, focusing on {sup 239}Pu capture cross section alterations. For this purpose three kinds of indices, called “degree of mobility,” “adjustment motive force,” and “adjustment potential,” are proposed.
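Purpose (1) above, propagating a cross-section covariance to the uncertainty of a core parameter, is conventionally done with the "sandwich rule" V = S C Sᵀ. A minimal sketch with illustrative numbers (the sensitivities and covariances are invented for the example, not taken from ADJ2010):

```python
import numpy as np

# Relative sensitivities of two core parameters (e.g. keff and a
# reaction-rate ratio) to three group cross sections -- illustrative only.
S = np.array([[0.8, -0.2, 0.1],
              [0.3,  0.5, -0.4]])

# Relative covariance matrix of the three group cross sections.
C = np.array([[0.0025, 0.0010, 0.0],
              [0.0010, 0.0040, 0.0],
              [0.0,    0.0,    0.0016]])

# Sandwich rule: covariance of the core parameters.
V = S @ C @ S.T
rel_sd = np.sqrt(np.diag(V))   # relative standard deviations of the parameters
```

Inspecting which terms of S C Sᵀ dominate is exactly how purpose (2), identifying the nuclides, reactions, and energy ranges that drive the core-parameter uncertainty, is carried out in practice.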
Dobaczewski, J.; Afanasjev, A. V.; Bender, M.; Robledo, L. M.; Shi, Yue
2015-07-29
In this study, we calculate properties of the ground and excited states of nuclei in the nobelium region for proton and neutron numbers of 92 ≤ Z ≤ 104 and 144 ≤ N ≤ 156, respectively. We use three different energy-density-functional (EDF) approaches, based on covariant, Skyrme, and Gogny functionals, each with two different parameter sets. A comparative analysis of the results obtained for quasiparticle spectra, odd–even and two-particle mass staggering, and moments of inertia allows us to identify single-particle and shell effects that are characteristic to these different models and to illustrate possible systematic uncertainties related to using the EDF modelling.
Biernat, Elmar P.; Gross, Franz; Peña, M. T.; Stadler, Alfred
2015-10-26
The pion form factor is calculated in the framework of the charge-conjugation invariant covariant spectator theory. This formalism is established in Minkowski space, and the calculation is set up in momentum space. In a previous calculation we included only the leading pole coming from the spectator quark (referred to as the relativistic impulse approximation). In this study we also include the contributions from the poles of the quark which interacts with the photon and average over all poles in both the upper and lower half-planes in order to preserve charge conjugation invariance (referred to as the C-symmetric complete impulse approximation). We find that for small pion mass these contributions are significant at all values of the four-momentum transfer Q^{2} but, surprisingly, do not alter the shape obtained from the spectator poles alone.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
G. Palmiotti
2011-12-01
The ENDF/B-VII.1 library is our latest recommended evaluated nuclear data file for use in nuclear science and technology applications, and incorporates advances made in the five years since the release of ENDF/B-VII.0. These advances focus on neutron cross sections, covariances, fission product yields and decay data, and represent work by the US Cross Section Evaluation Working Group (CSEWG) in nuclear data evaluation that utilizes developments in nuclear theory, modeling, simulation, and experiment. The principal advances in the new library are: (1) An increase in the breadth of neutron reaction cross section coverage, extending from 393 nuclides to 418 nuclides; (2) Covariance uncertainty data for 185 of the most important nuclides, as documented in companion papers in this edition; (3) R-matrix analyses of neutron reactions on light nuclei, including isotopes of He, Li, and Be; (4) Resonance parameter analyses at lower energies and statistical high energy reactions at higher energies for isotopes of F, Cl, K, Ti, V, Mn, Cr, Ni, Zr and W; (5) Modifications to thermal neutron reactions on fission products (isotopes of Mo, Tc, Rh, Ag, Cs, Nd, Sm, Eu) and neutron absorber materials (Cd, Gd); (6) Improved minor actinide evaluations for isotopes of U, Np, Pu, and Am (we are not making changes to the major actinides 235,238U and 239Pu at this point, except for delayed neutron data, and instead we intend to update them after a further period of research in experiment and theory), and our adoption of JENDL-4.0 evaluations for isotopes of Cm, Bk, Cf, Es, Fm, and some other minor actinides; (7) Fission energy release evaluations; (8) Fission product yield advances for fission-spectrum neutrons and 14 MeV neutrons incident on 239Pu; and (9) A new Decay Data sublibrary. 
Integral validation testing of the ENDF/B-VII.1 library is provided for a variety of quantities: For nuclear criticality, the VII.1 library maintains the generally-good performance seen for VII.0 for a wide range of MCNP simulations of criticality benchmarks, with improved performance coming from new structural material evaluations, especially for Ti, Mn, Cr, Zr and W. For Be we see some improvements although the fast assembly data appear to be mutually inconsistent. Actinide cross section updates are also assessed through comparisons of fission and capture reaction rate measurements in critical assemblies and fast reactors. We describe the cross section evaluations that have been updated for ENDF/B-VII.1 and the measured data and calculations that motivated the changes, and therefore this paper augments the ENDF/B-VII.0 publication [1].
Greenwald, Jared; Satheeshkumar, V. H.; Wang, Anzhong
2010-12-01
We study spherically symmetric static spacetimes generally filled with an anisotropic fluid in the nonrelativistic general covariant theory of gravity. In particular, we find that the vacuum solutions are not unique, and can be expressed in terms of the U(1) gauge field A. When solar system tests are considered, severe constraints on A are obtained, which seemingly pick up the Schwarzschild solution uniquely. In contrast to other versions of the Horava-Lifshitz theory, non-singular static stars made of a perfect fluid without heat flow can be constructed, due to the coupling of the fluid with the gauge field. These include the solutions with a constant pressure. We also study the general junction conditions across the surface of a star. In general, the conditions allow the existence of a thin matter shell on the surface. When applying these conditions to the perfect fluid solutions with the vacuum ones as describing their external spacetimes, we find explicitly the matching conditions in terms of the parameters appearing in the solutions. Such matching is possible even without the presence of a thin matter shell.
Alexandre, Jean; Pasipoularides, Pavlos
2011-10-15
In this note we examine whether spherically symmetric solutions in covariant Horava-Lifshitz gravity can reproduce Newton's law in the IR limit λ → 1. We adopt the position that the auxiliary field A is independent of the space-time metric [J. Alexandre and P. Pasipoularides, Phys. Rev. D 83, 084030 (2011).][J. Greenwald, V. H. Satheeshkumar, and A. Wang, J. Cosmol. Astropart. Phys. 12 (2010) 007.], and we assume, as in [A. M. da Silva, Classical Quantum Gravity 28, 055011 (2011).], that λ is a running coupling constant. We show that under these assumptions, spherically symmetric solutions fail to restore standard Newtonian physics in the IR limit λ → 1, unless λ does not run and has the fixed value λ = 1. Finally, we comment on the Horava and Melby-Thompson approach [P. Horava and C. M. Melby-Thompson, Phys. Rev. D 82, 064027 (2010).], in which A is assumed to be part of the space-time metric in the IR.
Nuijens, Louise; Medeiros, Brian; Sandu, Irina; Ahlgrimm, Maike
2015-11-06
We present patterns of covariability between low-level cloudiness and the trade-wind boundary layer structure using long-term measurements at a site representative of dynamical regimes with moderate subsidence or weak ascent. We compare these with ECMWF’s Integrated Forecast System and 10 CMIP5 models. By using single-time step output at a single location, we find that models can produce a fairly realistic trade-wind layer structure in long-term means, but with unrealistic variability at shorter-time scales. The unrealistic variability in modeled cloudiness near the lifting condensation level (LCL) is due to stronger than observed relationships with mixed-layer relative humidity (RH) and temperature stratification at the mixed-layer top. Those relationships are weak in observations, or even of opposite sign, which can be explained by a negative feedback of convection on cloudiness. Cloudiness near cumulus tops at the tradewind inversion instead varies more pronouncedly in observations on monthly time scales, whereby larger cloudiness relates to larger surface winds and stronger trade-wind inversions. However, these parameters appear to be a prerequisite, rather than strong controlling factors on cloudiness, because they do not explain submonthly variations in cloudiness. Models underestimate the strength of these relationships and diverge in particular in their responses to large-scale vertical motion. No model stands out by reproducing the observed behavior in all respects. As a result, these findings suggest that climate models do not realistically represent the physical processes that underlie the coupling between trade-wind clouds and their environments in present-day climate, which is relevant for how we interpret modeled cloud feedbacks.
Bauge, E.
2015-01-15
The “Full model” evaluation process, used at CEA DAM DIF to evaluate nuclear data in the continuum region, makes extensive use of nuclear models implemented in the TALYS code to account for experimental data (both differential and integral) by varying the parameters of these models until a satisfactory description of the experimental data is reached. For the evaluation of the covariances associated with these evaluated data, the Backward-Forward Monte Carlo (BFMC) method was devised to mirror the “Full model” evaluation process. When coupled with the Total Monte Carlo (TMC) method via the T6 system developed by NRG Petten, the BFMC method makes it possible to use integral experiments to constrain the distribution of model parameters, and hence the distribution of derived observables and their covariance matrix. Together, TALYS, TMC, BFMC, and T6 constitute a powerful integrated tool for nuclear data evaluation that yields evaluated nuclear data and the associated covariance matrix all at once, making good use of all the available experimental information to drive the distributions of the model parameters and the derived observables.
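The common kernel of the Monte Carlo methods named above is simple: sample the model parameters, run the model for each sample, and take the covariance of the resulting observables over the ensemble. A minimal sketch of that kernel (the toy `model` function and the 5% parameter widths are assumptions for illustration; a real TMC run would call TALYS and weight samples against experiment):

```python
import numpy as np

rng = np.random.default_rng(42)

def model(theta, energies):
    """Stand-in for a reaction-model calculation: maps model
    parameters (a, b) to a cross section on an energy grid."""
    a, b = theta
    return a * np.exp(-b * energies)

energies = np.linspace(0.1, 5.0, 8)   # MeV, illustrative grid
theta0 = np.array([2.0, 0.3])         # central parameter values
n_samples = 2000

# Sample the model parameters (here: independent 5% relative widths),
# run the model for each sample, and build the observable covariance
# matrix from the ensemble.
samples = theta0 * (1 + 0.05 * rng.standard_normal((n_samples, 2)))
xs = np.array([model(t, energies) for t in samples])
cov = np.cov(xs, rowvar=False)        # covariance of the cross section
```

BFMC refines this loop by re-weighting the parameter samples according to their agreement with experimental data, so the output covariance reflects the experimental constraints rather than the prior widths alone.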
Bryan, W. A.; Newell, W. R.; Sanderson, J. H.; Langley, A. J. [Department of Physics and Astronomy, University College London, Gower Street, London WC1E 6BT (United Kingdom); Department of Physics, University of Waterloo, Waterloo, Ontario, N2L 3G1 (Canada); Central Laser Facility, Rutherford Appleton Laboratory, Chilton, Didcot, Oxon OX11 0QX (United Kingdom)
2006-11-15
The two- and three-body Coulomb explosion of carbonyl sulfide (OCS) by 790 nm, 50 fs laser pulses focused to ≈10{sup 16} W cm{sup -2} has been investigated by the three-dimensional covariance mapping technique. In a triatomic molecule, a single charge state, in this case the trication, has been observed to dissociate into two distinct energy channels. With the aid of a three-dimensional visualization technique to reveal the ionization hierarchy, evidence is presented for the existence of two sets of ionization pathways resulting from these two initial states. While one group of ions can be modeled using a classical enhanced ionization model, the second group, consisting of mainly asymmetric channels, cannot. The results provide clear evidence that an enhanced ionization approach must also be accompanied by an appreciation of the effects of excited ionic states and multielectronic processes.
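The covariance-mapping idea underlying this measurement is that fragments born in the same dissociation event fluctuate together from shot to shot, so the map C(i, j) = ⟨x_i x_j⟩ − ⟨x_i⟩⟨x_j⟩ over many shots picks out correlated channels. A one-dimensional toy version with simulated shot spectra (the bin layout and count rates are invented for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
n_shots, n_bins = 5000, 6

# Simulated shot-by-shot spectra: bins 1 and 4 receive the same
# fluctuating yield (fragments from the same dissociation events);
# all bins also carry independent background counts.
common = rng.poisson(3.0, n_shots)
spectra = rng.poisson(1.0, (n_shots, n_bins)).astype(float)
spectra[:, 1] += common
spectra[:, 4] += common

# Covariance map over shots: C_ij = <x_i x_j> - <x_i><x_j>.
cmap = np.cov(spectra, rowvar=False)
```

In the map, the (1, 4) element stands well above the statistical noise of the uncorrelated bin pairs, which is how correlated fragment channels are identified even when the averaged spectra alone show nothing.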
Chiral symmetry and π-π scattering in the Covariant Spectator Theory
Biernat, Elmar P.; Peña, M. T.; Ribeiro, J. E.; Stadler, Alfred; Gross, Franz
2014-11-14
The π-π scattering amplitude calculated with a model for the quark-antiquark interaction in the framework of the Covariant Spectator Theory (CST) is shown to satisfy the Adler zero constraint imposed by chiral symmetry. The CST formalism is established in Minkowski space and our calculations are performed in momentum space. We prove that the axial-vector Ward-Takahashi identity is satisfied by our model. Then we show that, similarly to what happens within the Bethe-Salpeter formalism, application of the axial-vector Ward-Takahashi identity to the CST π-π scattering amplitude allows us to sum the intermediate quark-quark interactions to all orders. Thus, the Adler self-consistency zero for π-π scattering in the chiral limit emerges as the result for this sum.
Mucke, M; Zhaunerchyk, V; Frasinski, L J; Squibb, R J; Siano, M; Eland, J H D; Linusson, P; Salén, P; Meulen, P v d; Thomas, R D; Larsson, M; Foucar, L; Ullrich, J; Motomura, K; Mondal, S; Ueda, K; Osipov, T; Fang, L; Murphy, B F; Berrah, N; Bostedt, C; Bozek, J D; Schorb, S; Messerschmidt, M; Glownia, J M; Cryan, J P; Coffee, R N; Takahashi, O; Wada, S; Piancastelli, M N; Richter, R; Prince, K C; Feifel, R
2015-07-01
Few-photon ionization and relaxation processes in acetylene (C_{2}H_{2}) and ethane (C_{2}H_{6}) were investigated at the linac coherent light source x-ray free electron laser (FEL) at SLAC, Stanford using a highly efficient multi-particle correlation spectroscopy technique based on a magnetic bottle. The analysis method of covariance mapping has been applied and enhanced, allowing us to identify electron pairs associated with double core hole (DCH) production and competing multiple ionization processes including Auger decay sequences. The experimental technique and the analysis procedure are discussed in the light of earlier investigations of DCH studies carried out at the same FEL and at third generation synchrotron radiation sources. In particular, we demonstrate the capability of the covariance mapping technique to disentangle the formation of molecular DCH states which is barely feasible with conventional electron spectroscopy methods.
Neudecker, D.; Talou, P.; Kawano, T.; Smith, D. L.; Capote, R.; Rising, M. E.; Kahler, A. C.
2015-08-01
We present evaluations of the prompt fission neutron spectrum (PFNS) of ²³⁹Pu induced by 500 keV neutrons, and associated covariances. In a previous evaluation by Talou et al. 2010, surprisingly low evaluated uncertainties were obtained, partly due to simplifying assumptions in the quantification of uncertainties from experiment and model. Therefore, special emphasis is placed here on a thorough uncertainty quantification of experimental data and of the Los Alamos model predicted values entering the evaluation. In addition, the Los Alamos model was extended and an evaluation technique was employed that takes into account the qualitative differences between normalized model predicted values and experimental shape data. These improvements lead to changes in the evaluated PFNS and overall larger evaluated uncertainties than in the previous work. However, these evaluated uncertainties are still smaller than those obtained in a statistical analysis using experimental information only, due to strong model correlations. Hence, suggestions to estimate model defect uncertainties are presented, which lead to more reasonable evaluated uncertainties. The calculated k_{eff} of selected criticality benchmarks obtained with these new evaluations agree with each other within their uncertainties despite the different approaches to estimate model defect uncertainties. The k_{eff} one standard deviations overlap with some of those obtained using ENDF/B-VII.1, albeit their mean values are further away from unity. Spectral indexes for the Jezebel critical assembly calculated with the newly evaluated PFNS agree with the experimental data for selected (n,γ) and (n,f) reactions, and show improvements for high-energy threshold (n,2n) reactions compared to ENDF/B-VII.1.
Covariance propagation in spectral indices
Griffin, P. J.
2015-01-09
The dosimetry community has a long history of using spectral indices to support neutron spectrum characterization and cross section validation efforts. An important aspect of this type of analysis is the proper consideration of the contribution of the spectrum uncertainty to the total uncertainty in calculated spectral indices (SIs). This study identifies deficiencies in the traditional treatment of the SI uncertainty, provides simple bounds on the spectral component of the SI uncertainty estimates, verifies that these estimates are reflected in actual applications, details a methodology that rigorously captures the spectral contribution to the uncertainty in the SI, and provides quantified examples that demonstrate the importance of properly treating the spectral contribution to the uncertainty in the SI.
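A spectral index is a ratio of spectrum-averaged reaction rates, so propagating the spectrum covariance requires the sensitivity of the ratio to each group flux (a first-order delta-method step). A minimal sketch with invented group fluxes, cross sections, and a diagonal 2% spectrum covariance (none of these numbers are from the study):

```python
import numpy as np

# Group fluxes, two dosimeter cross sections, and a spectrum covariance.
phi = np.array([1.0, 2.0, 1.5, 0.5])
sig_a = np.array([0.1, 0.4, 0.9, 1.5])   # threshold-like reaction
sig_b = np.array([2.0, 1.0, 0.5, 0.2])   # 1/v-like reaction
C_phi = 0.0004 * np.diag(phi**2)         # 2% uncorrelated flux uncertainty

ra, rb = sig_a @ phi, sig_b @ phi
si = ra / rb                              # the spectral index

# Sensitivity of the index to each group flux:
# d(si)/d(phi_k) = (sig_a_k - si * sig_b_k) / rb,
# then first-order propagation of the spectrum covariance.
g = (sig_a - si * sig_b) / rb
var_si = g @ C_phi @ g
```

Because the numerator and denominator share the same spectrum, the sensitivities partially cancel; neglecting that cancellation (or the correlations in a realistic C_phi) is exactly the kind of deficiency in traditional SI uncertainty treatments that the study addresses.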
Density-dependent covariant energy density functionals
Lalazissis, G. A.
2012-10-20
Relativistic nuclear energy density functionals are applied to the description of a variety of nuclear structure phenomena at and away from the stability line. Isoscalar monopole, isovector dipole, and isoscalar quadrupole giant resonances are calculated using the fully self-consistent relativistic quasiparticle random-phase approximation based on the relativistic Hartree-Bogoliubov model. The impact of pairing correlations on the fission barriers in heavy and superheavy nuclei is examined. The role of the pion in constructing density functionals is also investigated.
Burgess, Caitlin; Skalski, John R.
2001-05-01
Effects of oceanographic conditions, as well as effects of release-timing and release-size, on first ocean-year survival of subyearling fall chinook salmon were investigated by analyzing CWT release and recovery data from Oregon and Washington coastal hatcheries. Age-class strength was estimated using a multinomial probability likelihood which estimated first-year survival as a proportional hazards regression against ocean and release covariates. Weight-at-release and release-month were found to significantly affect first-year survival (p < 0.05), and ocean effects were therefore estimated after adjusting for weight-at-release. Negative survival trend was modeled for sea surface temperature (SST) during 11 months of the year over the study period (1970-1992). Statistically significant negative survival trends (p < 0.05) were found for SST during April, June, November and December. Strong pairwise correlations (r > 0.6) between SST in April/June, April/November and April/December suggest the significant relationships were due to one underlying process. At higher latitudes (45{sup o} and 48{sup o}N), summer upwelling (June-August) showed positive survival trend and fall (September-November) downwelling showed positive trend with survival, indicating early fall transition improved survival. At 45{sup o} and 48{sup o}, during spring, alternating survival trends with upwelling were observed between March and May, with negative trend occurring in March and May, and positive trend with survival occurring in April. In January, two distinct scenarios of improved survival were linked to upwelling conditions, indicated by (1) a significant linear model effect (p < 0.05) showing improved survival with increasing upwelling, and (2) significant bowl-shaped curvature (p < 0.05) of survival with upwelling.
The interpretation of the effects is that there was (1) significantly improved survival when downwelling conditions shifted to upwelling conditions in January (i.e., early spring transition occurred, p < 0.05), (2) improved survival during strong downwelling conditions (Bakun units < -250). Survival decreased during weak downwelling conditions (Bakun units between -180 and -100). Strong to moderately strong correlations between January upwelling and April SST (r = 0.5), June SST (r = 0.6), and the North Pacific Index (NPI) of Aleutian Low strength (r > 0.7) suggest January is a period when important effects originate and play out over ensuing months. Significant inverse trend with survival (p < 0.05) was found for Bakun indices in December, indicating strong downwelling improved survival. Higher-than-average adult return rates were observed for cohorts from brood-years 1982-1983, strong El Nino years. Individual hatcheries were found to have unique age-class strength and age-at-return characteristics.
Parameter Covariance in Neutron Time-of-Flight Analysis: Explicit Formulae
Odyniec, M.; Blair, J.
2014-12-01
We present here a method that estimates the parameter variances in a parametric model for neutron time of flight (NToF). The analytical formulae for the parameter variances, obtained independently of the calculation of parameter values from measured data, express the variances in terms of the choice, settings, and placement of the detector and the oscilloscope. Consequently, the method can serve as a tool in planning a measurement setup.
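The reason such variances can be written down before any data are taken is that, to first order, the parameter covariance of a least-squares fit depends only on the model Jacobian and the noise level: Cov(θ) ≈ σ² (JᵀJ)⁻¹. A sketch for a hypothetical Gaussian-pulse NToF waveform (the pulse model, sample grid, and noise level are assumptions, not the paper's formulae):

```python
import numpy as np

# Hypothetical NToF waveform: a Gaussian pulse of amplitude A arriving
# at time t0, sampled by the oscilloscope on a grid t with additive
# white noise of variance sigma2.
t = np.linspace(0.0, 10.0, 200)
A, t0, w, sigma2 = 1.0, 5.0, 0.8, 0.01**2

# Jacobian of the model with respect to the parameters (A, t0).
pulse = np.exp(-0.5 * ((t - t0) / w) ** 2)
J = np.column_stack([pulse,                        # d(model)/dA
                     A * pulse * (t - t0) / w**2]) # d(model)/dt0

# First-order parameter covariance, independent of any fitted values:
# Cov(theta) ~ sigma2 * (J^T J)^{-1}.
cov_theta = sigma2 * np.linalg.inv(J.T @ J)
```

Evaluating this expression for different detector placements or oscilloscope settings (which change t, sigma2, and the pulse shape) is precisely how such formulae support planning a measurement setup.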
Covariant spectator theory of np scattering: Deuteron quadrupole moment
Gross, Franz
2015-01-26
The deuteron quadrupole moment is calculated using two CST model wave functions obtained from the 2007 high-precision fits to np scattering data. Included in the calculation are a new class of isoscalar np interaction currents automatically generated by the nuclear force model used in these fits. The prediction for model WJC-1, with larger relativistic P-state components, is 2.5% smaller than the experimental result, in common with the inability of models prior to 2014 to predict this important quantity. However, model WJC-2, with very small P-state components, gives agreement to better than 1%, similar to the results obtained recently from χEFT predictions to order N3LO.
Magnetic and antimagnetic rotation in covariant density functional theory
Zhao, P. W.; Liang, H. Z.; Peng, J.; Ring, P.; Zhang, S. Q.; Meng, J.
2012-10-20
Progress on the microscopic and self-consistent description of the magnetic rotation and antimagnetic rotation phenomena in tilted-axis cranking relativistic mean-field theory based on a point-coupling interaction is briefly reviewed. In particular, the microscopic pictures of the shears mechanism in {sup 60}Ni and the two-shears-like mechanism in {sup 105}Cd are discussed.
Tuning of the nucleation field in nanowires with perpendicular...
Office of Scientific and Technical Information (OSTI)
Authors: Kimling, Judith ; Gerhardt, Theo ; Kobs, Andr ; Vogel, Andreas ; Wintz, Sebastian ; Im, Mi-Young ; Fischer, Peter ; Oepen, Hans Peter ; Merkt, Ulrich ; Meier, Guido ...
Six-Week Time Series Of Eddy Covariance CO2 Flux At Mammoth Mountain...
The site exhibits high, spatially heterogeneous CO2 emission rates. EC CO2 fluxes ranged from 218 to 3500 g m⁻² d⁻¹ (mean 1346 g m⁻² d⁻¹). Using footprint modeling, EC CO2 fluxes were...
Covariant Spectator Theory of np scattering: Deuteron magnetic moment and form factors
Gross, Franz L.
2014-06-01
The deuteron magnetic moment is calculated using two model wave functions obtained from 2007 high-precision fits to np scattering data. Included in the calculation are a new class of isoscalar np interaction currents which are automatically generated by the nuclear force model used in these fits. After normalizing the wave functions, nearly identical predictions are obtained: model WJC-1, with larger relativistic P-state components, gives 0.863(2), while model WJC-2, with very small P-state components, gives 0.864(2). These are about 1% larger than the measured value of the moment, 0.857 n.m., giving a new prediction for the size of the ρπγ exchange and other purely transverse interaction currents that are largely unconstrained by the nuclear dynamics. The physical significance of these results is discussed, and general formulae for the deuteron form factors, expressed in terms of deuteron wave functions and a new class of interaction current wave functions, are given.
Constraints on Covariant Horava-Lifshitz Gravity from frame-dragging experiment
Radicella, Ninfa; Lambiase, Gaetano; Parisi, Luca; Vilasi, Gaetano E-mail: lambiase@sa.infn.it E-mail: vilasi@sa.infn.it
2014-12-01
The effects of Horava-Lifshitz corrections to the gravito-magnetic field are analyzed. Solutions in the weak-field, slow-motion limit, referring to the motion of a satellite around the Earth, are considered. The post-Newtonian paradigm is used to evaluate constraints on the Horava-Lifshitz parameter space from current satellite and terrestrial experimental data. In particular, we focus on Gravity Probe B, LAGEOS, and the more recent LARES mission, as well as a forthcoming terrestrial project, GINGER.
Covariant energy-momentum and an uncertainty principle for general relativity
Cooperstock, F.I.; Dupre, M.J.
2013-12-15
We introduce a naturally-defined, totally invariant spacetime energy expression for general relativity incorporating the contribution from gravity. The extension links seamlessly to the action integral for the gravitational field. The demand that the general expression for arbitrary systems reduce to the Tolman integral in the case of stationary bounded distributions leads to the matter-localized Ricci integral for energy-momentum, in support of the energy localization hypothesis. The role of the observer is addressed and, as an extension of the special relativistic case, the field of observers comoving with the matter is seen to compute the intrinsic global energy of a system. The new localized energy supports the Bonnor claim that the Szekeres collapsing dust solutions are energy-conserving. It is suggested that in the extreme of strong gravity, the Heisenberg Uncertainty Principle be generalized in terms of spacetime energy-momentum. Highlights: We present a totally invariant spacetime energy expression for general relativity incorporating the contribution from gravity. The demand that the general expression reduce to the Tolman integral for stationary systems supports the Ricci integral as energy-momentum. Localized energy via the Ricci integral is consistent with the energy localization hypothesis. The new localized energy supports the Bonnor claim that the Szekeres collapsing dust solutions are energy-conserving. We suggest the Heisenberg Uncertainty Principle be generalized in terms of spacetime energy-momentum in the strong-gravity extreme.
Covariant density functional theory with two-phonon coupling in nuclei
Ring, P.; Litvinova, E.; Tselyaev, V.
2012-10-20
A full description of excited states within the framework of density functional theory requires energy-dependent self-energies. We present a new class of many-body models that allows a parameter-free description of the fragmentation of nuclear states induced by mode coupling of two-quasiparticle and two-phonon configurations. The method is applied to an investigation of low-lying dipole excitations in Sn isotopes with large neutron excess.
Eddy-Covariance and auxiliary measurements, NGEE-Barrow, 2012-2013
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Torn, Margaret; Billesbach, Dave; Raz-Yaseef, Naama
2014-03-24
The EC tower is operated as part of the Next-Generation Ecosystem Experiments-Arctic (NGEE-Arctic) at Barrow, Alaska. The tower collects flux data from the beginning of the thaw season, in early June, until conditions are completely frozen, in early November. The tower is equipped with a Gill R3-50 sonic anemometer, a LI-7700 (CH4) sensor, a LI-7500A (CO2/H2O) sensor, and radiation sensors: a Kipp and Zonen CNR-4 four-component radiometer, two LiCor LI-190 quantum sensors (PAR upwelling and downwelling), and a down-looking Apogee SI-111 infrared radiometer (surface temperature). The sensors are remotely controlled, and communication with the tower allows us to retrieve information in real time.
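Several of the records above rely on the eddy-covariance technique, in which the vertical turbulent flux of a scalar such as CO2 is the covariance, over an averaging period, of vertical wind speed and scalar concentration. A minimal sketch of that core calculation follows; it is illustrative only, not the NGEE-Arctic or Mammoth Mountain processing code, and the synthetic data and function name are assumptions:

```python
import numpy as np

def eddy_flux(w, c):
    """Flux = mean(w' * c'), where primes denote deviations from the period means."""
    w = np.asarray(w, dtype=float)
    c = np.asarray(c, dtype=float)
    return np.mean((w - w.mean()) * (c - c.mean()))

# Synthetic half-hour of 10 Hz data: upward-moving air (w > 0) carries higher
# scalar concentrations, so the computed flux is positive (net upward transport).
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.3, 10_000)                     # vertical wind fluctuations, m/s
c = 400.0 + 5.0 * w + rng.normal(0.0, 1.0, 10_000)   # scalar correlated with w
flux = eddy_flux(w, c)                               # approx. 5 * var(w), positive
```

Real processing adds coordinate rotation, density (WPL) corrections, despiking, and footprint analysis on top of this covariance step.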
Neudecker, Denise
2014-07-10
This document provides the numerical values of the evaluated prompt fission neutron spectrum of ^{239}Pu induced by 500 keV neutrons, as well as relative uncertainties and correlations. It also contains a short description of how these data were obtained and shows plots comparing the evaluated results to experimental information as well as to the corresponding ENDF/B-VII.1 evaluation.
de Haan et al. Reply: A Reply to the Comment by V. K. Ignatovich
Office of Scientific and Technical Information (OSTI)
Authors: de Haan, Victor-O.; Plomp, Jeroen; Rekveldt, Theo M.; Kraan, Wicher H.; van Well, Ad A. [1]; Dalgliesh, Robert M.; Langridge, Sean [2]. Affiliations: Department Radiation, Radionuclides and Reactors, Faculty of Applied Sciences, Delft University of Technology, Mekelweg 15, 2629 JB Delft (Netherlands); STFC, ISIS Rutherford Appleton Laboratory
Particle Energy Spectrum, Revisited from a Counting Statistics Perspective
Yuan, D., Marks, D. G., Guss, P. P.
2012-07-16
This document is a slide presentation of a new covariance estimation method for gamma-ray spectra and neutron cross sections.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Lees, J. P.; Poireau, V.; Tisserand, V.; Garra Tico, J.; Grauges, E.; Martinelli, M.; Milanes, D. A.; Palano, A.; Pappagallo, M.; Eigen, G.; et al
2012-08-07
We report measurements of partial branching fractions for inclusive charmless semileptonic B decays B̄→Xulν̄ and the determination of the Cabibbo-Kobayashi-Maskawa (CKM) matrix element |Vub|. The analysis is based on a sample of 467×10⁶ Υ(4S)→BB̄ decays recorded with the BABAR detector at the PEP-II e⁺e⁻ storage rings. We select events in which the decay of one of the B mesons is fully reconstructed and an electron or a muon signals the semileptonic decay of the other B meson. We measure partial branching fractions ΔB in several restricted regions of phase space and determine the CKM element |Vub| based on different QCD predictions. For decays with a charged lepton momentum p*l > 1.0 GeV in the B meson rest frame, we obtain ΔB = (1.80±0.13stat±0.15sys±0.02theo)×10⁻³ from a fit to the two-dimensional MX-q² distribution. Here, MX refers to the invariant mass of the final state hadron X and q² is the invariant mass squared of the charged lepton and neutrino. From this measurement we extract |Vub| = (4.33±0.24exp±0.15theo)×10⁻³ as the arithmetic average of four results obtained from four different QCD predictions of the partial rate. We separately determine partial branching fractions for B̄⁰ and B⁻ decays and derive a limit on the isospin breaking in B̄→Xulν̄ decays.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
progress in our understanding of global properties of covariant energy density function- als. ... Theoretical uncertainties in their description and their underlying sources will be ...
Combination of Evidence in Dempster-Shafer Theory (Technical...
Office of Scientific and Technical Information (OSTI)
This is a potentially valuable tool for the evaluation of risk and reliability in ... DATA COVARIANCES; PROBABILITY; RISK ASSESSMENT; RELIABILITY; MATHEMATICAL MODELS Word ...
"Title","Creator/Author","Publication Date","OSTI Identifier...
Office of Scientific and Technical Information (OSTI)
MODELS; PARTICLE MODELS; POSTULATED PARTICLES",,"The effective field theory of massive gravity has long been formulated in a generally covariant way N. Arkani-Hamed, H. Georgi,...
Unitarity check in gravitational Higgs mechanism Berezhiani,...
Office of Scientific and Technical Information (OSTI)
MODELS; PARTICLE MODELS; POSTULATED PARTICLES The effective field theory of massive gravity has long been formulated in a generally covariant way N. Arkani-Hamed, H. Georgi,...
ARM - PI Product - AERIoe Thermodynamic Profile and Cloud Retrieval...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
estimation framework, so a full error covariance of the solution is provided for each retrieval. The information content in the AERI observations on the thermodynamic profiles...
Slides Presented by Workshop Participants at the International Workshop on Nuclear Data Covariances
Office of Scientific and Technical Information (OSTI)
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khachatryan, Vardan
2015-05-14
Properties of the Higgs boson with mass near 125 GeV are measured in proton-proton collisions with the CMS experiment at the LHC. Comprehensive sets of production and decay measurements are combined. The decay channels include γγ, ZZ, WW, ττ, bb, and μμ pairs. The data samples were collected in 2011 and 2012 and correspond to integrated luminosities of up to 5.1 fb⁻¹ at 7 TeV and up to 19.7 fb⁻¹ at 8 TeV. From the high-resolution γγ and ZZ channels, the mass of the Higgs boson is determined to be 125.02 +0.26/−0.27 (stat) +0.14/−0.15 (syst) GeV. For this mass value, the event yields obtained in the different analyses tagging specific decay channels and production mechanisms are consistent with those expected for the standard model Higgs boson. The combined best-fit signal relative to the standard model expectation is 1.00 ± 0.09 (stat) +0.08/−0.07 (theo) ± 0.07 (syst) at the measured mass. The couplings of the Higgs boson are probed for deviations in magnitude from the standard model predictions in multiple ways, including searches for invisible and undetected decays. No significant deviations are found.
Khachatryan, Vardan
2015-05-01
This paper presents a measurement of the inclusive 3-jet production differential cross section at a proton-proton centre-of-mass energy of 7 TeV using data corresponding to an integrated luminosity of 5 fb$^{-1}$ collected with the CMS detector. The analysis is based on the three jets with the highest transverse momenta. The cross section is measured as a function of the invariant mass of the three jets in a range of 445-3270 GeV and in two bins of the maximum rapidity of the jets up to a value of 2. A comparison between the measurement and the prediction from perturbative QCD at next-to-leading order is performed. Within uncertainties, data and theory are in agreement. The sensitivity of the observable to parameters of the theory such as the parton distribution functions of the proton and the strong coupling constant $\\alpha_S$ is studied. A fit to all data points with 3-jet masses larger than 664 GeV gives a value of the strong coupling constant of $\\alpha_S(M_\\mathrm{Z})$ = 0.1171 $\\pm$ 0.0013 (exp) $^{+0.0073}_{-0.0047}$ (theo).
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khachatryan, Vardan
2015-06-26
The inclusive jet cross section for proton-proton collisions at a centre-of-mass energy of 7 TeV was measured by the CMS Collaboration at the LHC with data corresponding to an integrated luminosity of 5.0 fb⁻¹. The measurement covers a phase space up to 2 TeV in jet transverse momentum and 2.5 in absolute jet rapidity. The statistical precision of these data leads to stringent constraints on the parton distribution functions of the proton. The data provide important input for the gluon density at high fractions of the proton momentum and for the strong coupling constant at large energy scales. Using predictions from perturbative quantum chromodynamics at next-to-leading order, complemented with electroweak corrections, the constraining power of these data is investigated and the strong coupling constant at the Z boson mass M_Z is determined to be α_S(M_Z) = 0.1185 ± 0.0019 (exp) +0.0060/−0.0037 (theo), which is in agreement with the world average.
Observation of the diphoton decay of the Higgs boson and measurement of its properties
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khachatryan, A. M.
2014-10-15
Observation of the diphoton decay mode of the recently discovered Higgs boson and measurement of some of its properties are reported. The analysis uses the entire dataset collected by the CMS experiment in proton-proton collisions during the 2011 and 2012 LHC running periods. The data samples correspond to integrated luminosities of 5.1 fb⁻¹ at √s = 7 TeV and 19.7 fb⁻¹ at 8 TeV. A clear signal is observed in the diphoton channel at a mass close to 125 GeV with a local significance of 5.7σ, where a significance of 5.2σ is expected for the standard model Higgs boson. The mass is measured to be 124.70 ± 0.34 GeV = 124.70 ± 0.31 (stat) ± 0.15 (syst) GeV, and the best-fit signal strength relative to the standard model prediction is 1.14 +0.26/−0.23 = 1.14 ± 0.21 (stat) +0.09/−0.05 (syst) +0.13/−0.09 (theo). Additional measurements include the signal strength modifiers associated with different production mechanisms, and hypothesis tests between spin-0 and spin-2 models.
Measurement of the B0(s) - anti-B0(s) Oscillation Frequency
Abulencia, A.; Acosta, D.; Adelman, Jahred A.; Affolder, T.; Akimoto, T.; Albrow, M.G.; Ambrose, D.; Amerio, S.; Amidei, D.; Anastassov, A.; Anikeev, K.; /Taiwan, Inst. Phys. /Argonne /Barcelona, IFAE /Baylor U. /INFN, Bologna /Bologna U. /Brandeis U. /UC, Davis /UCLA /UC, San Diego /UC, Santa Barbara
2006-06-01
The authors present the first measurement of the B{sub s}{sup 0}-{bar B}{sub s}{sup 0} oscillation frequency {Delta}m{sub s}. They use 1 fb{sup -1} of data from p{bar p} collisions at {radical}s = 1.96 TeV collected with the CDF II detector at the Fermilab Tevatron. The sample contains signals of 3600 fully reconstructed hadronic B{sub s} decays and 37,000 partially reconstructed semileptonic B{sub s} decays. They measure the probability as a function of proper decay time that the B{sub s} decays with the same, or opposite, flavor as the flavor at production and they find a signal consistent with B{sub s}{sup 0}-{bar B}{sub s}{sup 0} oscillations. The probability that random fluctuations could produce a comparable signal is 0.2%. Under the hypothesis that the signal is due to B{sub s}{sup 0}-{bar B}{sub s}{sup 0} oscillations, they measure {Delta}m{sub s} = 17.31{sub -0.18}{sup +0.33}(stat.) {+-} 0.07(syst.) ps{sup -1} and determine |V{sub td}/V{sub ts}| = 0.208{sub -0.002}{sup +0.001}(exp.){sub -0.006}{sup +0.008}(theo.).
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khachatryan, Vardan
2015-05-01
This article presents a measurement of the inclusive 3-jet production differential cross section at a proton–proton centre-of-mass energy of 7 TeV using data corresponding to an integrated luminosity of 5fb–1 collected with the CMS detector. The analysis is based on the three jets with the highest transverse momenta. The cross section is measured as a function of the invariant mass of the three jets in a range of 445–3270 GeV and in two bins of the maximum rapidity of the jets up to a value of 2. A comparison between the measurement and the prediction from perturbative QCD at next-to-leadingmore » order is performed. Within uncertainties, data and theory are in agreement. The sensitivity of the observable to the strong coupling constant αS is studied. A fit to all data points with 3-jet masses larger than 664 GeV gives a value of the strong coupling constant of αS(MZ) = 0.1171 ± 0.0013(exp)+0.0073–0.0047(theo).« less
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khachatryan, Vardan
2015-05-01
This paper presents a measurement of the inclusive 3-jet production differential cross section at a proton-proton centre-of-mass energy of 7 TeV using data corresponding to an integrated luminosity of 5 fb$^{-1}$ collected with the CMS detector. The analysis is based on the three jets with the highest transverse momenta. The cross section is measured as a function of the invariant mass of the three jets in a range of 445-3270 GeV and in two bins of the maximum rapidity of the jets up to a value of 2. A comparison between the measurement and the prediction from perturbative QCD atmorenext-to-leading order is performed. Within uncertainties, data and theory are in agreement. The sensitivity of the observable to parameters of the theory such as the parton distribution functions of the proton and the strong coupling constant $\\alpha_S$ is studied. A fit to all data points with 3-jet masses larger than 664 GeV gives a value of the strong coupling constant of $\\alpha_S(M_\\mathrm{Z})$ = 0.1171 $\\pm$ 0.0013 (exp) $^{+0.0073}_{-0.0047}$ (theo).less
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khachatryan, Vardan
2014-10-27
The inclusive jet cross section for proton-proton collisions at a centre-of-mass energy of 7$~\\mathrm{TeV}$ was measured by the CMS Collaboration at the LHC with data corresponding to an integrated luminosity of 5.0$~\\mathrm{fb}^{-1}$. The measurement covers a phase space up to 2$~\\mathrm{TeV}$ in jet transverse momentum and 2.5 in absolute jet rapidity. The statistical precision of these data leads to stringent constraints on the parton distribution functions of the proton. The data provide important input for the gluon density at high fractions of the proton momentum and for the strong coupling constant at large energy scales. Using predictions from perturbative quantummorechromodynamics at next-to-leading order, complemented with electroweak corrections, the constraining power of these data is investigated and the strong coupling constant at the Z boson mass $M_{\\mathrm{Z}}$ is determined to be $\\alpha_S(M_{\\mathrm{Z}}) = 0.1185 \\pm 0.0019\\,(\\mathrm{exp})\\,^{+0.0060}_{-0.0037}\\,(\\mathrm{theo})$, which is in agreement with the world average.less
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khachatryan, Vardan
2015-05-14
Properties of the Higgs boson with mass near 125 GeV are measured in proton-proton collisions with the CMS experiment at the LHC. Comprehensive sets of production and decay measurements are combined. The decay channels include ??, ZZ, WW, ??, bb, and ?? pairs. The data samples were collected in 2011 and 2012 and correspond to integrated luminosities of up to 5.1 fb? at 7 TeV and up to 19.7 fb? at 8 TeV. From the high-resolution ?? and ZZ channels, the mass of the Higgs boson is determined to be 125.02\\,+0.26-0.27(stat)+0.14-0.15(syst) GeV. For this mass value, the event yields obtainedmorein the different analyses tagging specific decay channels and production mechanisms are consistent with those expected for the standard model Higgs boson. The combined best-fit signal relative to the standard model expectation is 1.00 0.09 (stat), +0.08 -0.07 (theo) 0.07 (syst) at the measured mass. The couplings of the Higgs boson are probed for deviations in magnitude from the standard model predictions in multiple ways, including searches for invisible and undetected decays. No significant deviations are found.less
Seasonal and Intra-annual Controls on CO2 Flux in Arctic Alaska
Oechel, Walter; Kalhori, Aram
2015-12-01
In order to advance the understanding of the patterns and controls on the carbon budget in the Arctic region, San Diego State University has maintained eddy covariance flux towers at three sites in Arctic Alaska, starting in 1997.
Method and system to estimate variables in an integrated gasification combined cycle (IGCC) plant
Kumar, Aditya; Shi, Ruijie; Dokucu, Mustafa
2013-09-17
System and method to estimate variables in an integrated gasification combined cycle (IGCC) plant are provided. The system includes a sensor suite to measure respective plant input and output variables. An extended Kalman filter (EKF) receives sensed plant input variables and includes a dynamic model to generate a plurality of plant state estimates and a covariance matrix for the state estimates. A preemptive-constraining processor is configured to preemptively constrain the state estimates and covariance matrix to be free of constraint violations. A measurement-correction processor may be configured to correct constrained state estimates and a constrained covariance matrix based on processing of sensed plant output variables. The measurement-correction processor is coupled to update the dynamic model with corrected state estimates and a corrected covariance matrix. The updated dynamic model may be configured to estimate values for at least one plant variable not originally sensed by the sensor suite.
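The extended-Kalman-filter cycle described in this record — propagate state estimates and their covariance through a dynamic model, then correct both with sensed outputs — can be sketched in its linear form. The plant matrices F, H and noise covariances Q, R below are illustrative placeholders, not the IGCC model from the patent.

```python
import numpy as np

def kf_predict(x, P, F, Q):
    """Propagate the state estimate x and covariance P through model F."""
    return F @ x, F @ P @ F.T + Q

def kf_correct(x, P, z, H, R):
    """Correct the state estimate and covariance with a measurement z."""
    S = H @ P @ H.T + R              # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)   # Kalman gain
    x = x + K @ (z - H @ x)
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

F = np.array([[1.0, 0.1], [0.0, 1.0]])   # toy plant dynamics (illustrative)
H = np.array([[1.0, 0.0]])               # only the first state is sensed
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.1]])                    # measurement noise covariance

x, P = np.zeros(2), np.eye(2)
x, P = kf_predict(x, P, F, Q)
x, P = kf_correct(x, P, np.array([1.0]), H, R)
# x now leans toward the measurement; P stays symmetric positive definite
```

The patent's preemptive-constraining step would clip or project x and P between the predict and correct calls; that step is plant-specific and omitted here.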
Statistics for characterizing data on the periphery
Theiler, James P; Hush, Donald R
2010-01-01
We introduce a class of statistics for characterizing the periphery of a distribution, and show that these statistics are particularly valuable for problems in target detection. Because so many detection algorithms are rooted in Gaussian statistics, we concentrate on ellipsoidal models of high-dimensional data distributions (that is to say: covariance matrices), but we recommend several alternatives to the sample covariance matrix that more efficiently model the periphery of a distribution, and can more effectively detect anomalous data samples.
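As context for the ellipsoidal models this abstract starts from, here is a minimal sketch of the standard Gaussian detector that scores samples by Mahalanobis distance under the sample covariance matrix. The data and the test points are invented; the authors' periphery-oriented alternatives to the sample covariance are not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 3))       # illustrative background data
mu = X.mean(axis=0)
cov = np.cov(X, rowvar=False)       # sample covariance matrix
cov_inv = np.linalg.inv(cov)

def mahalanobis_sq(x):
    """Squared Mahalanobis distance of x from the background distribution."""
    d = x - mu
    return float(d @ cov_inv @ d)

typical = np.zeros(3)               # near the bulk of the distribution
anomaly = np.array([5.0, -5.0, 5.0])  # far out on the covariance ellipsoid
# anomaly scores far higher than typical, so thresholding this statistic
# yields the basic detector the abstract builds on
```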
An Evaluation of Parametric and Nonparametric Models of Fish Population Response.
Haas, Timothy C.; Peterson, James T.; Lee, Danny C.
1999-11-01
Predicting the distribution or status of animal populations at large scales often requires the use of broad-scale information describing landforms, climate, vegetation, etc. These data, however, often consist of mixtures of continuous and categorical covariates and nonmultiplicative interactions among covariates, complicating statistical analyses. Using data from the interior Columbia River Basin, USA, we compared four methods for predicting the distribution of seven salmonid taxa using landscape information. Subwatersheds (mean size, 7800 ha) were characterized using a set of 12 covariates describing physiography, vegetation, and current land-use. The techniques included generalized logit modeling, classification trees, a nearest neighbor technique, and a modular neural network. We evaluated model performance using out-of-sample prediction accuracy via leave-one-out cross-validation and introduced a computer-intensive Monte Carlo hypothesis-testing approach for examining the statistical significance of landscape covariates with the non-parametric methods. We found the modular neural network and the nearest-neighbor techniques to be the most accurate, but they were difficult to summarize in ways that provided ecological insight. The modular neural network also required the most extensive computer resources for model fitting and hypothesis testing. The generalized logit models were readily interpretable, but were the least accurate, possibly due to nonlinear relationships and nonmultiplicative interactions among covariates. Substantial overlap among the statistically significant (P<0.05) covariates for each method suggested that each is capable of detecting similar relationships between responses and covariates. Consequently, we believe that employing one or more methods may provide greater biological insight without sacrificing prediction accuracy.
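The out-of-sample evaluation used above can be sketched as a leave-one-out loop. The 1-nearest-neighbour rule here is a simplified stand-in for the paper's four methods, and the two-class toy data are invented for illustration.

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a 1-nearest-neighbour classifier."""
    hits = 0
    for i in range(len(X)):
        train = np.delete(np.arange(len(X)), i)        # hold out sample i
        d = np.linalg.norm(X[train] - X[i], axis=1)    # distances to the rest
        hits += y[train][np.argmin(d)] == y[i]         # predict from nearest
    return hits / len(X)

# Two well-separated classes, so LOO accuracy should be perfect.
X = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],
              [5.0, 5.0], [5.1, 5.0], [5.0, 5.1]])
y = np.array([0, 0, 0, 1, 1, 1])
acc = loo_accuracy(X, y)   # → 1.0
```

Each sample is scored by a model that never saw it, which is what makes the accuracy estimate out-of-sample.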
On the Bayesian Treed Multivariate Gaussian Process with Linear Model of Coregionalization
Konomi, Bledar A.; Karagiannis, Georgios; Lin, Guang
2015-02-01
The Bayesian treed Gaussian process (BTGP) has gained popularity in recent years because it provides a straightforward mechanism for modeling non-stationary data and can alleviate computational demands by fitting models to less data. The extension of BTGP to the multivariate setting requires us to model the cross-covariance and to propose efficient algorithms that can deal with trans-dimensional MCMC moves. In this paper we extend the cross-covariance of the Bayesian treed multivariate Gaussian process (BTMGP) to that of the linear model of coregionalization (LMC) cross-covariances. Different strategies have been developed to improve the MCMC mixing and invert smaller matrices in the Bayesian inference. Moreover, we compare the proposed BTMGP with the existing multiple BTGP and BTMGP in test cases and in a multiphase flow computer experiment in a full-scale regenerator of a carbon capture unit. The BTMGP with LMC cross-covariance predicted the computer experiments better than the existing competitors. The proposed model has a wide variety of applications, such as computer experiments and environmental data. In the case of computer experiments we also develop an adaptive sampling strategy for the BTMGP with LMC cross-covariance function.
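A minimal sketch of an LMC cross-covariance of the kind extended here: the joint covariance over several outputs is a sum of Kronecker products of positive semi-definite coregionalization matrices B_j with scalar kernel matrices k_j. The rank-1 coregionalization matrices, RBF kernels, and lengthscales below are illustrative choices, not the paper's.

```python
import numpy as np

def rbf(x, lengthscale):
    """RBF kernel matrix over a 1-D set of inputs."""
    d2 = (x[:, None] - x[None, :]) ** 2
    return np.exp(-0.5 * d2 / lengthscale**2)

x = np.linspace(0.0, 1.0, 10)            # shared input locations
a1 = np.array([[1.0], [0.5]])            # illustrative loadings, 2 outputs
a2 = np.array([[0.3], [1.0]])
B = [a1 @ a1.T, a2 @ a2.T]               # rank-1 PSD coregionalization matrices

# K = sum_j kron(B_j, k_j): a (2*10) x (2*10) joint covariance whose
# off-diagonal blocks encode the cross-covariance between the outputs.
K = sum(np.kron(Bj, rbf(x, ls)) for Bj, ls in zip(B, [0.2, 0.6]))
eigs = np.linalg.eigvalsh(K)             # all >= 0 up to rounding: valid covariance
```

Because each B_j and each kernel matrix is PSD, so is each Kronecker product and hence the sum, which is what makes the LMC a valid multivariate covariance model.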
Khachatryan, Vardan
2015-07-14
A measurement of the W boson pair production cross section in proton-proton collisions at √s = 8 TeV is presented. The data were collected with the CMS detector at the LHC and correspond to an integrated luminosity of 19.4 fb⁻¹. The W⁺W⁻ candidates are selected from events with two charged leptons, electrons or muons, and large missing transverse energy. The measured W⁺W⁻ cross section is 60.1 ± 0.9 (stat) ± 3.2 (exp) ± 3.1 (theo) ± 1.6 (lumi) pb = 60.1 ± 4.8 pb, consistent with the standard model prediction. The W⁺W⁻ cross sections are also measured in two different fiducial phase space regions. The normalized differential cross section is measured as a function of kinematic variables of the final-state charged leptons and compared with several perturbative QCD predictions. Limits on anomalous gauge couplings associated with dimension-six operators are also given in the framework of an effective field theory. Finally, the corresponding 95% confidence level intervals are −5.7 < c_{WWW}/Λ² < 5.9 TeV⁻², −11.4 < c_{W}/Λ² < 5.4 TeV⁻², −29.2 < c_{B}/Λ² < 23.9 TeV⁻², in the HISZ basis.
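The total uncertainty quoted in this abstract follows from adding the independent statistical, experimental, theoretical, and luminosity components in quadrature:

```python
import math

# Uncertainty components on the W+W- cross section quoted in the abstract (pb).
components = {"stat": 0.9, "exp": 3.2, "theo": 3.1, "lumi": 1.6}

# Independent uncertainties combine in quadrature.
total = math.sqrt(sum(u**2 for u in components.values()))
print(round(total, 1))   # → 4.8, matching the quoted 60.1 ± 4.8 pb
```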
Khachatryan, V.; Sirunyan, A. M.; Tumasyan, A.; Adam, W.; Bergauer, T.; Dragicevic, M.; Erö, J.; Friedl, M.; Frühwirth, R.; Ghete, V. M.; et al
2015-04-09
Measurements of the differential and double-differential Drell–Yan cross sections in the dielectron and dimuon channels are presented. They are based on proton–proton collision data at √s = 8 TeV recorded with the CMS detector at the LHC and corresponding to an integrated luminosity of 19.7 fb⁻¹. The measured inclusive cross section in the Z peak region (60–120 GeV), obtained from the combination of the dielectron and dimuon channels, is 1138 ± 8 (exp) ± 25 (theo) ± 30 (lumi) pb, where the statistical uncertainty is negligible. The differential cross section dσ/dm in the dilepton mass range 15–2000 GeV is measured and corrected to the full phase space. The double-differential cross section d²σ/dm d|y| is also measured over the mass range 20 to 1500 GeV and absolute dilepton rapidity from 0 to 2.4. In addition, the ratios of the normalized differential cross sections measured at √s = 7 and 8 TeV are presented. These measurements are compared to the predictions of perturbative QCD at next-to-leading and next-to-next-to-leading (NNLO) orders using various sets of parton distribution functions (PDFs). The results agree with the NNLO theoretical predictions computed with FEWZ 3.1 using the CT10 NNLO and NNPDF2.1 NNLO PDFs. Furthermore, the measured double-differential cross section and ratio of normalized differential cross sections are sufficiently precise to constrain the proton PDFs.
S.; Zuranski, A.; Brownson, E.; Malik, S.; Mendez, H.; Ramirez Vargas, J. E.; Barnes, V. E.; Benedetti, D.; Bortoletto, D.; De Mattia, M.; Gutay, L.; Hu, Z.; Jha, M. K.; Jones, M.; Jung, K.; Kress, M.; Leonardo, N.; Miller, D. H.; Neumeister, N.; Radburn-Smith, B. C.; Shi, X.; Shipsey, I.; Silvers, D.; Svyatkovskiy, A.; Wang, F.; Xie, W.; Xu, L.; Zablocki, J.; Parashar, N.; Stupak, J.; Adair, A.; Akgun, B.; Ecklund, K. M.; Geurts, F. J. M.; Li, W.; Michlin, B.; Padley, B. P.; Redjimi, R.; Roberts, J.; Zabel, J.; Betchart, B.; Bodek, A.; Covarelli, R.; de Barbaro, P.; Demina, R.; Eshaq, Y.; Ferbel, T.; Garcia-Bellido, A.; Goldenzweig, P.; Han, J.; Harel, A.; Hindrichs, O.; Khukhunaishvili, A.; Korjenevski, S.; Petrillo, G.; Vishnevskiy, D.; Ciesielski, R.; Demortier, L.; Goulianos, K.; Mesropian, C.; Arora, S.; Barker, A.; Chou, J. P.; Contreras-Campana, C.; Contreras-Campana, E.; Duggan, D.; Ferencek, D.; Gershtein, Y.; Gray, R.; Halkiadakis, E.; Hidas, D.; Kaplan, S.; Lath, A.; Panwalkar, S.; Park, M.; Patel, R.; Salur, S.; Schnetzer, S.; Sheffield, D.; Somalwar, S.; Stone, R.; Thomas, S.; Thomassen, P.; Walker, M.; Rose, K.; Spanier, S.; York, A.; Bouhali, O.; Castaneda Hernandez, A.; Eusebi, R.; Flanagan, W.; Gilmore, J.; Kamon, T.; Khotilovich, V.; Krutelyov, V.; Montalvo, R.; Osipenkov, I.; Pakhotin, Y.; Perloff, A.; Roe, J.; Rose, A.; Safonov, A.; Suarez, I.; Tatarinov, A.; Ulmer, K. A.; Akchurin, N.; Cowden, C.; Damgov, J.; Dragoiu, C.; Dudero, P. R.; Faulkner, J.; Kovitanggoon, K.; Kunori, S.; Lee, S. W.; Libeiro, T.; Volobouev, I.; Appelt, E.; Delannoy, A. G.; Greene, S.; Gurrola, A.; Johns, W.; Maguire, C.; Mao, Y.; Melo, A.; Sharma, M.; Sheldon, P.; Snook, B.; Tuo, S.; Velkovska, J.; Arenton, M. W.; Boutle, S.; Cox, B.; Francis, B.; Goodell, J.; Hirosky, R.; Ledovskoy, A.; Li, H.; Lin, C.; Neu, C.; Wood, J.; Clarke, C.; Harr, R.; Karchin, P. E.; Kottachchi Kankanamge Don, C.; Lamichhane, P.; Sturdy, J.; Belknap, D. 
A.; Carlsmith, D.; Cepeda, M.; Dasu, S.; Dodd, L.; Duric, S.; Friis, E.; Hall-Wilton, R.; Herndon, M.; Herv, A.; Klabbers, P.; Lanaro, A.; Lazaridis, C.; Levine, A.; Loveless, R.; Mohapatra, A.; Ojalvo, I.; Perry, T.; Pierro, G. A.; Polese, G.; Ross, I.; Sarangi, T.; Savin, A.; Smith, W. H.; Taylor, D.; Vuosalo, C.; Woods, N.; Collaboration, The CMS
2015-04-09
Measurements of the differential and double-differential Drell-Yan cross sections in the dielectron and dimuon channels are presented. They are based on proton-proton collision data at √s = 8 TeV recorded with the CMS detector at the LHC and corresponding to an integrated luminosity of 19.7 fb⁻¹. The measured inclusive cross section in the Z peak region (60-120 GeV), obtained from the combination of the dielectron and dimuon channels, is 1138 ± 8 (exp) ± 25 (theo) ± 30 (lumi) pb, where the statistical uncertainty is negligible. The differential cross section dσ/dm in the dilepton mass range 15-2000 GeV is measured and corrected to the full phase space. The double-differential cross section d²σ/dm d|y| is also measured over the mass range 20 to 1500 GeV and absolute dilepton rapidity from 0 to 2.4. In addition, the ratios of the normalized differential cross sections measured at √s = 7 and 8 TeV are presented. These measurements are compared to the predictions of perturbative QCD at next-to-leading and next-to-next-to-leading (NNLO) orders using various sets of parton distribution functions (PDFs). The results agree with the NNLO theoretical predictions computed with FEWZ 3.1 using the CT10 NNLO and NNPDF2.1 NNLO PDFs. Furthermore, the measured double-differential cross section and ratio of normalized differential cross sections are sufficiently precise to constrain the proton PDFs.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khachatryan, Vardan
2014-06-16
Measurements are presented of the t-channel single-top-quark production cross section in proton-proton collisions at √s = 8 TeV. The results are based on a data sample corresponding to an integrated luminosity of 19.7 fb⁻¹ recorded with the CMS detector at the LHC. The cross section is measured inclusively, as well as separately for top (t) and antitop (t̄), in final states with a muon or an electron. The measured inclusive t-channel cross section is σ_{t-ch.} = 83.6 ± 2.3 (stat.) ± 7.4 (syst.) pb. The single t and t̄ cross sections are measured to be σ_{t-ch.}(t) = 53.8 ± 1.5 (stat.) ± 4.4 (syst.) pb and σ_{t-ch.}(t̄) = 27.6 ± 1.3 (stat.) ± 3.7 (syst.) pb, respectively. The measured ratio of cross sections is R_{t-ch.} = σ_{t-ch.}(t)/σ_{t-ch.}(t̄) = 1.95 ± 0.10 (stat.) ± 0.19 (syst.), in agreement with the standard model prediction. Finally, the modulus of the Cabibbo-Kobayashi-Maskawa matrix element V_{tb} is extracted and, in combination with a previous CMS result at √s = 7 TeV, a value |V_{tb}| = 0.998 ± 0.038 (exp.) ± 0.016 (theo.) is obtained.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Khachatryan, Vardan
2015-04-09
Measurements of the differential and double-differential Drell-Yan cross sections in the dielectron and dimuon channels are presented. They are based on proton-proton collision data at $\\sqrt{s}$ = 8 TeV recorded with the CMS detector at the LHC and corresponding to an integrated luminosity of 19.7 inverse femtobarns. The measured inclusive cross section in the Z peak region (60-120 GeV), obtained from the combination of the dielectron and dimuon channels, is 1138 +/- 8 (exp) +/- 25 (theo) +/- 30 (lumi) pb, where the statistical uncertainty is negligible. The differential cross section $d\\sigma/dm$ in the dilepton mass range 15 to 2000 GeV is measured and corrected to the full phase space. The double-differential cross section $d^2\\sigma/dm\\,d|y|$ is also measured over the mass range 20 to 1500 GeV and absolute dilepton rapidity from 0 to 2.4. In addition, the ratios of the normalized differential cross sections measured at $\\sqrt{s}$ = 7 and 8 TeV are presented. These measurements are compared to the predictions of perturbative QCD at next-to-leading and next-to-next-to-leading (NNLO) orders using various sets of parton distribution functions (PDFs). The results agree with the NNLO theoretical predictions computed with FEWZ 3.1 using the CT10 NNLO and NNPDF2.1 NNLO PDFs. The measured double-differential cross section and ratio of normalized differential cross sections are sufficiently precise to constrain the proton PDFs.
Khachatryan, V.; et al.,
2014-06-01
Measurements are presented of the t-channel single-top-quark production cross section in proton-proton collisions at √s = 8 TeV. The results are based on a data sample corresponding to an integrated luminosity of 19.7 fb⁻¹ recorded with the CMS detector at the LHC. The cross section is measured inclusively, as well as separately for top (t) and antitop (t̄), in final states with a muon or an electron. The measured inclusive t-channel cross section is σ_{t-ch.} = 83.6 ± 2.3 (stat.) ± 7.4 (syst.) pb. The single t and t̄ cross sections are measured to be σ_{t-ch.}(t) = 53.8 ± 1.5 (stat.) ± 4.4 (syst.) pb and σ_{t-ch.}(t̄) = 27.6 ± 1.3 (stat.) ± 3.7 (syst.) pb, respectively. The measured ratio of cross sections is R_{t-ch.} = σ_{t-ch.}(t)/σ_{t-ch.}(t̄) = 1.95 ± 0.10 (stat.) ± 0.19 (syst.), in agreement with the standard model prediction. The modulus of the Cabibbo-Kobayashi-Maskawa matrix element V_{tb} is extracted and, in combination with a previous CMS result at √s = 7 TeV, a value |V_{tb}| = 0.998 ± 0.038 (exp.) ± 0.016 (theo.) is obtained.
ACORNS: Analysis of Correlations Used in Neutron Spectrometry
Energy Science and Technology Software Center (OSTI)
1988-05-01
The program ACORNS performs a complete analysis of input covariance, relative covariance, and/or correlation matrices, used primarily in activation neutron spectrometry. These matrices must be positive definite. To check that this requirement is fulfilled, the program calculates their eigenvalues and eigenvectors. If all eigenvalues are positive, the program optionally performs a factor analysis. The user's input can either be entered manually or taken from cross section libraries generated by the code X333.
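The eigenvalue test described above is straightforward to sketch. This is a minimal illustration of the check, not ACORNS itself; the function name and the 2×2 example matrices are invented for demonstration:

```python
import numpy as np

def check_positive_definite(cov, tol=0.0):
    """Return (is_pd, eigenvalues) for a symmetric covariance/correlation matrix.

    Mirrors the test described above: the matrix is acceptable only if
    all of its eigenvalues are positive.
    """
    cov = np.asarray(cov, dtype=float)
    # eigh exploits symmetry and returns eigenvalues in ascending order
    eigvals, _ = np.linalg.eigh(cov)
    return bool(eigvals.min() > tol), eigvals

# a valid 2x2 covariance matrix (correlation 0.5)
ok, vals = check_positive_definite([[1.0, 0.5], [0.5, 1.0]])

# an invalid "correlation" matrix with |rho| > 1 is not positive definite
bad, _ = check_positive_definite([[1.0, 1.2], [1.2, 1.0]])
```

A matrix that fails this test cannot be a covariance matrix of any real random vector, which is why the check precedes the factor analysis.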
Selecta from a Life-Long Obsession with Path Integrals
Klauder, John R.
2008-06-18
The definition and interpretation of canonical, phase space path integrals has evolved over many years to achieve a form that now admits a correct and rigorous formulation, which is also covariant under canonical coordinate transformations. Such formulations involve coherent state representations, which, in their modern version, were originally introduced as an alternative tool to construct phase space path integrals. Moreover, coherent state representations lead to physical interpretations that are more natural than those afforded by more traditional representations. Suitable continuous time regularization procedures lead to a covariant phase space path integral formulation that greatly clarifies the vague phrase that canonical quantization requires Cartesian coordinates.
A comparison of spatial averaging and Cadzow's method for array wavenumber estimation
Harris, D.B.; Clark, G.A.
1989-10-31
We are concerned with resolving superimposed, correlated seismic waves with small-aperture arrays. The limited time-bandwidth product of transient seismic signals complicates the task. We examine the use of MUSIC and Cadzow's ML estimator with and without subarray averaging for resolution potential. A case study with real data favors the MUSIC algorithm and a multiple event covariance averaging scheme.
Evaluated Nuclear (reaction) Data from the Evaluated Nuclear Data File (ENDF)
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
The current version is ENDF/B-VII.0, released in 2006. Users can search ENDF via specialized interfaces, browse sub-libraries or download them as zipped files. Data plots can be generated through the Sigma interface. The ENDF web page also provides access to covariance data processing and plots. (Specialized Interface)
Visions for Data Management and Remote Collaboration on ITER
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
[Slide excerpt] Ip 0.9 MA, PIN 2.3 MW, H98 1.3 (M. Greenwald, et al., APS-DPP, November 2007): C-Mod data helps break the covariance between EFF and nn G, making extrapolation to ITER more...
Improving Dryer and Press Efficiencies Through Combustion of Hydrocarbon Emissions
Sujit Banerjee
2005-10-31
Emission control devices on dryers and presses have been legislated into the industry, and are now an integral part of the drying system. These devices consume large quantities of natural gas and electricity, and down-sizing or eliminating them will provide major energy savings. The principal strategy taken here focuses on developing process changes that should minimize (and in some cases eliminate) the need for controls. A second approach is to develop lower-cost control options. It has been shown in laboratory and full-scale work that Hazardous Air Pollutants (HAPs) emerge mainly at the end of the press cycle for particleboard, and, by extension, for other products. Hence, only the air associated with this point of the cycle need be captured and treated. A model for estimating terpene emissions in the various zones of veneer dryers has been developed. This should allow the emissions to be concentrated in some zones and minimized in others, so that some of the air could be released directly without controls. Low-cost catalysts have been developed for controlling HAPs from dryers and presses. Catalysts conventionally used for regenerative catalytic oxidizers can be used at much lower temperatures for treating press emissions. Fluidized wood ash is an especially inexpensive material for efficiently reducing formaldehyde in dryer emissions. A heat transfer model for estimating pinene emissions from hot-pressing strand for the manufacture of flakeboard has been constructed from first principles and validated. The model shows that most of the emissions originate from the 1-mm layer of wood adjoining the platen surface. Hence, a simple control option is to surface a softwood mat with a layer of hardwood prior to pressing. Fines release a disproportionately large quantity of HAPs, and it has been shown both theoretically and in full-scale work that particles smaller than 400 µm are principally responsible. Georgia-Pacific is considering green-screening their furnish at several of their mills in order to remove these particles and reduce their treatment costs.
Garcia, E. V.; Stassun, Keivan G.; Pavlovski, K.; Hensberge, H.; Gómez Maqueo Chew, Y.; Claret, A.
2014-09-01
We determine the absolute dimensions of the eclipsing binary V578 Mon, a detached system of two early B-type stars (B0V + B1V, P = 2.40848 days) in the star-forming region NGC 2244 of the Rosette Nebula. From the light curve analysis of 40 yr of photometry and the analysis of HERMES spectra, we find radii of 5.41 ± 0.04 R_⊙ and 4.29 ± 0.05 R_⊙, and temperatures of 30,000 ± 500 K and 25,750 ± 435 K, respectively. We find that our disentangled component spectra for V578 Mon agree well with previous spectral disentangling from the literature. We also reconfirm the previous spectroscopic orbit of V578 Mon, finding that masses of 14.54 ± 0.08 M_⊙ and 10.29 ± 0.06 M_⊙ are fully compatible with the new analysis. We compare the absolute dimensions to the rotating models of the Geneva and Utrecht groups and the models of the Granada group. We find that all three sets of models marginally reproduce the absolute dimensions of both stars with a common age within the uncertainty for gravity-effective temperature isochrones. However, there are some apparent age discrepancies for the corresponding mass-radius isochrones. Models with larger convective overshoot, >0.35, worked best. Combined with our previously determined apsidal motion of 0.07089^{+0.00021}_{-0.00013} deg cycle⁻¹, we compute the internal structure constants (tidal Love number) for the Newtonian and general relativistic contributions to the apsidal motion as log k_2 = -1.975 ± 0.017 and log k_2 = -3.412 ± 0.018, respectively. We find the relativistic contribution to the apsidal motion to be small, <4%. We find that the prediction of log k_{2,theo} = -2.005 ± 0.025 of the Granada models fully agrees with our observed log k_2.
Berg, J. S.
2015-05-03
The International Muon Ionization Cooling Experiment (MICE) is an experiment to demonstrate ionization cooling of a muon beam in a beamline that shares characteristics with one that might be used for a muon collider or neutrino factory. I describe a way to quantify cooling performance by examining the phase space density of muons, and determining how much that density increases. This contrasts with the more common methods that rely on the covariance matrix and compute emittances from that. I discuss why a direct measure of phase space density might be preferable to a covariance matrix method. I apply this technique to an early proposal for the MICE final step beamline. I discuss how matching impacts the measured performance.
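The covariance-based emittance that this abstract contrasts with a direct phase-space-density measure can be sketched in a few lines. The beam parameters and the cubic kick below are invented for illustration; the point is that an area-preserving nonlinear distortion leaves the true phase space density unchanged (Liouville's theorem) yet inflates the RMS emittance computed from the covariance matrix:

```python
import numpy as np

rng = np.random.default_rng(1)

# Gaussian beam in one transverse plane, coordinates (x, x')
sigma_x, sigma_xp = 2.0e-3, 1.0e-3
cov_true = np.diag([sigma_x**2, sigma_xp**2])
pts = rng.multivariate_normal([0.0, 0.0], cov_true, size=200_000)

def rms_emittance(points):
    # covariance-based emittance: sqrt of the determinant of the
    # second-moment (covariance) matrix
    return np.sqrt(np.linalg.det(np.cov(points.T)))

emit_before = rms_emittance(pts)   # close to sigma_x * sigma_xp = 2e-6

# area-preserving nonlinear kick x' -> x' + c * x^3 (c is an arbitrary
# illustrative strength); the phase space density is unchanged, but the
# second moments, and hence the RMS emittance, grow
c = 2.0e5
distorted = pts.copy()
distorted[:, 1] += c * distorted[:, 0] ** 3
emit_after = rms_emittance(distorted)
```

This is why a density-based figure of merit can report no dilution where the covariance-matrix emittance reports growth, which is the distinction the abstract draws.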
Pasti, Paolo; Tonin, Mario; Samsonov, Igor; Sorokin, Dmitri
2009-10-15
We reveal nonmanifest gauge and SO(1,5) Lorentz symmetries in the Lagrangian description of a six-dimensional free chiral field derived from the Bagger-Lambert-Gustavsson model in [P.-M. Ho and Y. Matsuo, J. High Energy Phys. 06 (2008) 105.] and make this formulation covariant with the use of a triplet of auxiliary scalar fields. We consider the coupling of this self-dual construction to gravity and its supersymmetrization. In the case of the nonlinear model of [P.-M. Ho, Y. Imamura, Y. Matsuo, and S. Shiba, J. High Energy Phys. 08 (2008) 014.] we solve the equations of motion of the gauge field, prove that its nonlinear field strength is self-dual and find a gauge-covariant form of the nonlinear action. Issues of the relation of this model to the known formulations of the M5-brane worldvolume theory are discussed.
A semiparametric spatio-temporal model for solar irradiance data
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Patrick, Joshua D.; Harvill, Jane L.; Hansen, Clifford W.
2016-03-01
Here, we evaluate semiparametric spatio-temporal models for global horizontal irradiance at high spatial and temporal resolution. These models represent the spatial domain as a lattice and are capable of predicting irradiance at lattice points, given data measured at other lattice points. Using data from a 1.2 MW PV plant located in Lanai, Hawaii, we show that a semiparametric model can be more accurate than simple interpolation between sensor locations. We investigate spatio-temporal models with separable and nonseparable covariance structures and find no evidence to support assuming a separable covariance structure. These results indicate a promising approach for modeling irradiance at high spatial resolution consistent with available ground-based measurements. Moreover, this kind of modeling may find application in design, valuation, and operation of fleets of utility-scale photovoltaic power systems.
Gerstl, S.A.W.
1980-01-01
SENSIT computes the sensitivity and uncertainty of a calculated integral response (such as a dose rate) due to input cross sections and their uncertainties. Sensitivity profiles are computed for neutron and gamma-ray reaction cross sections of standard multigroup cross section sets and for secondary energy distributions (SEDs) of multigroup scattering matrices. In the design sensitivity mode, SENSIT computes changes in an integral response due to design changes and gives the appropriate sensitivity coefficients. Cross section uncertainty analyses are performed for three types of input data uncertainties: cross-section covariance matrices for pairs of multigroup reaction cross sections, spectral shape uncertainty parameters for secondary energy distributions (integral SED uncertainties), and covariance matrices for energy-dependent response functions. For all three types of data uncertainties SENSIT computes the resulting variance and estimated standard deviation in an integral response of interest, on the basis of generalized perturbation theory. SENSIT attempts to be more comprehensive than earlier sensitivity analysis codes, such as SWANLAKE.
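The uncertainty propagation described above reduces, at first order, to the "sandwich rule" var(R) = SᵀCS, where S is the sensitivity profile and C the cross-section covariance matrix. The three-group numbers below are purely illustrative, not SENSIT input:

```python
import numpy as np

# hypothetical 3-group sensitivity profile of an integral response R
# to a reaction cross section (relative sensitivities, illustrative)
S = np.array([0.4, 1.0, 0.2])

# relative covariance matrix of the cross section: 5% uncertainty per
# group with 50% group-to-group correlation (toy values)
std = np.array([0.05, 0.05, 0.05])
corr = np.array([[1.0, 0.5, 0.5],
                 [0.5, 1.0, 0.5],
                 [0.5, 0.5, 1.0]])
C = np.outer(std, std) * corr

# first-order ("sandwich") propagation: var(R) = S^T C S
var_R = S @ C @ S
std_R = np.sqrt(var_R)   # estimated relative standard deviation of R
```

The same sandwich form applies to each of the three input-uncertainty types listed in the abstract; only the covariance matrix C changes.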
User Guide for the STAYSL PNNL Suite of Software Tools
Greenwood, Lawrence R.; Johnson, Christian D.
2013-02-27
The STAYSL PNNL software suite provides a set of tools for working with neutron activation rates measured in a nuclear fission reactor, an accelerator-based neutron source, or any neutron field to determine the neutron flux spectrum through a generalized least-squares approach. This process is referred to as neutron spectral adjustment since the preferred approach is to use measured data to adjust neutron spectra provided by neutron physics calculations. The input data consist of the reaction rates based on measured activities, an initial estimate of the neutron flux spectrum, neutron activation cross sections and their associated uncertainties (covariances), and relevant correction factors. The output consists of the adjusted neutron flux spectrum and associated covariance matrix, which is useful for neutron dosimetry and radiation damage calculations.
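A generalized least-squares adjustment of this kind can be sketched as a standard Kalman-style update. The group structure, cross sections, and uncertainties below are toy values chosen for illustration, not STAYSL PNNL's actual input format:

```python
import numpy as np

def gls_adjust(phi0, P, A, r, V):
    """Generalized least-squares adjustment of a neutron flux spectrum.

    phi0 : prior group fluxes;  P : prior flux covariance
    A    : activation cross sections (reactions x groups)
    r    : measured reaction rates;  V : their covariance
    Returns the adjusted spectrum and its posterior covariance.
    """
    S = A @ P @ A.T + V                 # innovation (rate-space) covariance
    K = P @ A.T @ np.linalg.inv(S)      # gain matrix
    phi = phi0 + K @ (r - A @ phi0)     # adjusted spectrum
    P_post = P - K @ A @ P              # posterior covariance
    return phi, P_post

# toy problem: 3 energy groups, 2 activation reactions
phi0 = np.array([1.0, 2.0, 1.0])        # prior group fluxes (calculated)
P = 0.04 * np.eye(3)                    # prior flux covariance (~20%)
A = np.array([[1.0, 0.0, 0.0],          # toy activation cross sections
              [0.0, 1.0, 1.0]])
r = np.array([1.2, 3.0])                # "measured" reaction rates
V = 1e-4 * np.eye(2)                    # measurement covariance

phi, P_post = gls_adjust(phi0, P, A, r, V)
```

As in the abstract, the output is an adjusted spectrum whose predicted reaction rates match the measurements within their uncertainties, together with a posterior covariance matrix whose variances can only shrink relative to the prior.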
A fast map-making preconditioner for regular scanning patterns
Næss, Sigurd K.; Louis, Thibaut E-mail: thibaut.louis@astro.ox.ac.uk
2014-08-01
High-resolution Maximum Likelihood map-making of the Cosmic Microwave Background is usually performed using Conjugate Gradients with a preconditioner that ignores noise correlations. We here present a new preconditioner that approximates the map noise covariance as circulant, and show that this results in a speedup of up to 400% for a realistic scanning pattern from the Atacama Cosmology Telescope. The improvement is especially large for polarized maps.
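The key property exploited by such a preconditioner is that a circulant matrix is diagonalized by the FFT, so applying its inverse costs O(n log n) instead of O(n²). The sketch below uses a small toy circulant operator, not an ACT noise model:

```python
import numpy as np

def circulant_preconditioner(first_row):
    """Return a function applying the inverse of a circulant matrix.

    The eigenvalues of a circulant matrix are the FFT of its defining
    row, so the inverse is applied entirely in Fourier space.
    """
    eig = np.fft.fft(first_row)
    def apply(v):
        return np.real(np.fft.ifft(np.fft.fft(v) / eig))
    return apply

# small symmetric, diagonally dominant circulant matrix built explicitly
# for comparison (symmetry makes row/column conventions coincide)
n = 8
row = np.array([4.0, 1.0, 0.5, 0.0, 0.0, 0.0, 0.5, 1.0])
C = np.array([[row[(j - i) % n] for j in range(n)] for i in range(n)])

M_inv = circulant_preconditioner(row)
v = np.arange(1.0, n + 1.0)
x = M_inv(v)   # for an exactly circulant matrix this inverts it exactly
```

In the map-making setting the matrix is only approximately circulant, so `M_inv` is used as a Conjugate Gradients preconditioner rather than a direct solver.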
Nuclear energy density functionals: What we can learn about/from their global performance?
Afanasjev, A. V.; Agbemava, S. E.; Ray, D.; Ring, P.
2014-10-15
A short review of recent results on the global performance of covariant energy density functionals is presented. It focuses on the accuracy of the description of physical observables of ground and excited states, as well as on related theoretical uncertainties. In addition, a global analysis of pairing properties is presented and the impact of pairing on the position of the two-neutron drip line is discussed.
Chiral Effective Field Theory in the $\\Delta$-resonance region
Vladimir Pascalutsa
2006-09-18
I discuss the problem of constructing an effective low-energy theory in the vicinity of a resonance or a bound state. The focus is on the example of the $\\Delta(1232)$, the lightest resonance in the nucleon sector. Recent developments of the chiral effective-field theory in the $\\Delta$-resonance region are briefly reviewed. I conclude with a comment on the merits of the manifestly covariant formulation of chiral EFT in the baryon sector.
Confirmation of standard error analysis techniques applied to EXAFS using simulations (Conference) | SciTech Connect
Office of Scientific and Technical Information (OSTI)
Systematic uncertainties, such as those in calculated backscattering amplitudes, crystal glitches, etc., not only limit the ultimate accuracy of the EXAFS technique, but also affect the covariance matrix representation of real parameter
AmeriFlux US-Wkg Walnut Gulch Kendall Grasslands
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Scott, Russell [United States Department of Agriculture]
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site US-Wkg Walnut Gulch Kendall Grasslands. Site Description - This site is located in a small, intensively-studied, experimental watershed within USDA-ARS's Walnut Gulch Experimental Watershed. Eddy covariance measurements of energy, water and CO2 fluxes began in the spring of 2004, though meteorological (including Bowen ratio) and hydrological measurements are available much further back.
Flat space physics from holography
Bousso, Raphael
2004-02-06
We point out that aspects of quantum mechanics can be derived from the holographic principle, using only a perturbative limit of classical general relativity. In flat space, the covariant entropy bound reduces to the Bekenstein bound. The latter does not contain Newton's constant and cannot operate via gravitational backreaction. Instead, it is protected by--and in this sense, predicts--the Heisenberg uncertainty principle.
Correlation function analysis of the COBE differential microwave radiometer sky maps
Lineweaver, C.H.
1994-08-01
The Differential Microwave Radiometer (DMR) aboard the COBE satellite has detected anisotropies in the cosmic microwave background (CMB) radiation. A two-point correlation function analysis which helped lead to this discovery is presented in detail. The results of a correlation function analysis of the two year DMR data set is presented. The first and second year data sets are compared and found to be reasonably consistent. The positive correlation for separation angles less than ~20° is robust to Galactic latitude cuts and is very stable from year to year. The Galactic latitude cut independence of the correlation function is strong evidence that the signal is not Galactic in origin. The statistical significance of the structure seen in the correlation function of the first, second and two year maps is respectively >9σ, >10σ and >18σ above the noise. The noise in the DMR sky maps is correlated at a low level. The structure of the pixel temperature covariance matrix is given. The noise covariance matrix of a DMR sky map is diagonal to an accuracy of better than 1%. For a given sky pixel, the dominant noise covariance occurs with the ring of pixels at an angular separation of 60° due to the 60° separation of the DMR horns. The mean covariance at 60° is 0.45%^{+0.18}_{-0.14} of the mean variance. The noise properties of the DMR maps are thus well approximated by the noise properties of maps made by a single-beam experiment. Previously published DMR results are not significantly affected by correlated noise.
Accounting for Incomplete Species Detection in Fish Community Monitoring
McManamay, Ryan A; Orth, Dr. Donald J; Jager, Yetta
2013-01-01
Riverine fish assemblages are heterogeneous and very difficult to characterize with a one-size-fits-all approach to sampling. Furthermore, detecting changes in fish assemblages over time requires accounting for variation in sampling designs. We present a modeling approach that permits heterogeneous sampling by accounting for site and sampling covariates (including method) in a model-based framework for estimation (versus a sampling-based framework). We snorkeled during three surveys and electrofished during a single survey in a suite of delineated habitats stratified by reach types. We developed single-species occupancy models to determine covariates influencing patch occupancy and species detection probabilities, whereas community occupancy models estimated species richness in light of incomplete detections. For most species, information-theoretic criteria showed higher support for models that included patch size and reach as covariates of occupancy. In addition, models including patch size and sampling method as covariates of detection probabilities also had higher support. Detection probability estimates for snorkeling surveys were higher for larger non-benthic species, whereas electrofishing was more effective at detecting smaller benthic species. The number of sites and sampling occasions required to accurately estimate occupancy varied among fish species. For rare benthic species, our results suggested that a higher number of occasions, and especially the addition of electrofishing, may be required to improve detection probabilities and obtain accurate occupancy estimates. Community models suggested that richness was 41% higher than the number of species actually observed, and the addition of an electrofishing survey increased estimated richness by 13%. These results can be useful to future fish assemblage monitoring efforts by informing sampling designs, such as site selection (e.g. stratifying based on patch size) and determining the effort required (e.g. the number of sites versus occasions).
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
SciTech Connect Journal Article: Unitarity check in gravitational Higgs mechanism. The effective field theory of massive gravity has long been formulated in a generally covariant way [N. Arkani-Hamed, H. Georgi, and M. D. Schwartz, Ann. Phys. (N.Y.) 305, 96 (2003).]. Using this formalism, it has been found recently that there exists a class of massive nonlinear theories that are free of the
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Harris, David B.; Gibbons, Steven J.; Rodgers, Arthur J.; Pasyanos, Michael E.
2012-05-01
In this approach, small scale-length medium perturbations not modeled in the tomographic inversion might be described as random fields, characterized by particular distribution functions (e.g., normal with specified spatial covariance). Conceivably, random field parameters (scatterer density or scale length) might themselves be the targets of tomographic inversions of the scattered wave field. As a result, such augmented models may provide processing gain through the use of probabilistic signal subspaces rather than deterministic waveforms.
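Drawing a realization of a "normal with specified spatial covariance" random field, as described above, is a short Cholesky exercise. The squared-exponential covariance, correlation length, and amplitude below are illustrative choices, not values from the study:

```python
import numpy as np

def sample_gaussian_field(x, corr_len, sigma, rng):
    """One realization of a 1-D Gaussian random field with a
    squared-exponential spatial covariance (an illustrative stand-in
    for the medium perturbations described above)."""
    # covariance matrix between all pairs of sample points
    d = x[:, None] - x[None, :]
    C = sigma**2 * np.exp(-0.5 * (d / corr_len) ** 2)
    # a small diagonal jitter keeps the Cholesky factorization stable
    L = np.linalg.cholesky(C + 1e-10 * np.eye(len(x)))
    # correlated draw: L z has covariance L L^T = C for z ~ N(0, I)
    return L @ rng.standard_normal(len(x))

rng = np.random.default_rng(3)
x = np.linspace(0.0, 10.0, 200)
field = sample_gaussian_field(x, corr_len=1.0, sigma=0.05, rng=rng)
```

Ensembles of such realizations are what would feed the probabilistic signal subspaces mentioned in the abstract, in place of a single deterministic waveform.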
Eddy Correlation Deployments Completed
In mid-March, the last of a series of new eddy covariance or "eddy correlation" (ECOR) systems was installed at the ARM Climate Research Facility's Southern Great Plains (SGP) extended facility at Cyril, Oklahoma. This completes the replacement of the original ECOR systems initiated in 2002. In all, nine new ECOR systems have been deployed, including one on the 18-meter tower at the SGP forest locale at Okmulgee, Oklahoma. …
Image Appraisal for 2D and 3D Electromagnetic Inversion
Alumbaugh, D.L.; Newman, G.A.
1999-01-28
Linearized methods are presented for appraising image resolution and parameter accuracy in images generated with two- and three-dimensional nonlinear electromagnetic inversion schemes. When direct matrix inversion is employed, the model resolution and posterior model covariance matrices can be directly calculated. A method to examine how the horizontal and vertical resolution varies spatially within the electromagnetic property image is developed by examining the columns of the model resolution matrix. Plotting the square root of the diagonal of the model covariance matrix yields an estimate of how errors in the inversion process, such as data noise and incorrect a priori assumptions about the imaged model, map into parameter error. This type of image is shown to be useful in analyzing spatial variations in the image sensitivity to the data. A method is analyzed for statistically estimating the model covariance matrix when the conjugate gradient method is employed rather than a direct inversion technique (for example, in 3D inversion). A method for calculating individual columns of the model resolution matrix using the conjugate gradient method is also developed. Examples of the image analysis techniques are provided on 2D and 3D synthetic cross-well EM data sets, as well as a field data set collected at the Lost Hills Oil Field in Central California.
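For a damped linearized inversion the two appraisal objects named above have closed forms: the resolution matrix R shows how the inversion blurs the true model, and the square root of the posterior covariance diagonal gives per-parameter errors. A minimal sketch, assuming a generic Jacobian G, damping lam, and noise level sigma (all toy values of ours, not from the paper):

```python
import numpy as np

# Hypothetical linearized forward operator G (n_data x n_model), Tikhonov
# damping lam, and data noise sigma. Names and sizes are illustrative only.
rng = np.random.default_rng(0)
G = rng.standard_normal((20, 8))
lam, sigma = 0.1, 0.05

GtG = G.T @ G
inv = np.linalg.inv(GtG + lam * np.eye(8))
R = inv @ GtG                         # model resolution matrix; columns show blurring
C_post = sigma**2 * inv               # posterior model covariance
param_err = np.sqrt(np.diag(C_post))  # per-parameter error map, as in the abstract
```

Columns of R answer the spatial-resolution question (how a point perturbation spreads), while `param_err` is the quantity the abstract suggests plotting.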
Einstein-aether theory with a Maxwell field: General formalism
Balakin, Alexander B.; Lemos, José P.S.
2014-11-15
We extend the Einstein-aether theory to include the Maxwell field in a nontrivial manner by taking into account its interaction with the time-like unit vector field characterizing the aether. We also include a generic matter term. We present a model with a Lagrangian that includes cross-terms linear and quadratic in the Maxwell tensor, linear and quadratic in the covariant derivative of the aether velocity four-vector, linear in its second covariant derivative and in the Riemann tensor. We decompose these terms with respect to the irreducible parts of the covariant derivative of the aether velocity, namely, the acceleration four-vector, the shear and vorticity tensors, and the expansion scalar. Furthermore, we discuss the influence of an aether non-uniform motion on the polarization and magnetization of the matter in such an aether environment, as well as on its dielectric and magnetic properties. The total self-consistent system of equations for the electromagnetic and the gravitational fields, and the dynamic equations for the unit vector aether field are obtained. Possible applications of this system are discussed. Based on the principles of effective field theories, we display in an appendix all the terms up to fourth order in derivative operators that can be considered in a Lagrangian that includes the metric, the electromagnetic and the aether fields.
Zanolin, M.; Vitale, S.; Makris, N.
2010-06-15
In this paper we apply to gravitational waves (GW) from the inspiral phase of binary systems a recently derived frequentist methodology to calculate analytically the error for a maximum likelihood estimate of physical parameters. We use expansions of the covariance and the bias of a maximum likelihood estimate in terms of inverse powers of the signal-to-noise ratio (SNR), where the square root of the first order in the covariance expansion is the Cramér-Rao lower bound (CRLB). We evaluate the expansions, for the first time, for GW signals in noises of GW interferometers. The examples are limited to a single, optimally oriented interferometer. We also compare the error estimates using the first two orders of the expansions with existing numerical Monte Carlo simulations. The first two orders of the covariance allow us to get error predictions closer to what is observed in numerical simulations than the CRLB. The methodology also predicts the SNR necessary to approximate the error with the CRLB and provides new insight into the relationship between waveform properties, SNR, dimension of the parameter space, and estimation errors. For example, time-of-arrival matched filtering can achieve the CRLB only if the SNR is larger than the kurtosis of the gravitational wave spectrum, and the necessary SNR is much larger if other physical parameters are also unknown.
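The first-order term the expansion reduces to at high SNR is the familiar CRLB: the Fisher matrix built from signal derivatives, inverted. A hedged sketch on a toy sinusoid in white noise (not a GW inspiral waveform; the signal model and values are ours):

```python
import numpy as np

# Toy signal s(t; A, f) = A*sin(2*pi*f*t) sampled in white noise sigma.
# Fisher matrix F_ij = (1/sigma^2) * sum_t ds/dtheta_i * ds/dtheta_j,
# and the CRLB on each parameter is sqrt(diag(F^-1)).
t = np.linspace(0, 1, 1000)
A, f, sigma = 1.0, 5.0, 0.1
dA = np.sin(2 * np.pi * f * t)                       # ds/dA
df = A * 2 * np.pi * t * np.cos(2 * np.pi * f * t)   # ds/df
D = np.stack([dA, df], axis=1)
F = D.T @ D / sigma**2                               # Fisher information matrix
crlb = np.sqrt(np.diag(np.linalg.inv(F)))            # lower bounds on std(A), std(f)
```

The paper's point is that these bounds are only reached above a waveform-dependent SNR threshold; the second-order covariance terms quantify the shortfall below it.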
Stueber, A.M.; Walter, L.M.
1992-01-01
Formation waters from carbonate reservoirs in the upper Ordovician Galena Group of the Illinois Basin have been analyzed geochemically to study the origin of salinity, chemical and isotopic evolution, and relation to paleohydrologic flow systems. These carbonate reservoirs underlie the Maquoketa Shale Group of Cincinnatian age, which forms a regional aquitard. Cl-Br relations and Na/Br-Cl/Br systematics indicate that initial brine salinity resulted from subaerial evaporation of seawater to a point not significantly beyond halite saturation. Subsequent dilution in the subsurface by meteoric waters is supported by delta D-delta O-18 covariance. Systematic relations between Sr-87/Sr-86 and 1/Sr suggest two distinct mixing events: introduction of a Sr-87 enriched fluid from a siliciclastic source, and a later event which only affected reservoir waters from the western shelf of the basin. The second mixing event is supported by covariance between Sr-87/Sr-86 and concentrations of cations and anions; covariance between Sr and O-D isotopes suggests that the event is related to meteoric water influx. Systematic geochemical relations in Ordovician Galena Group formation waters have been preserved by the overlying Maquoketa shale aquitard. Comparison with results from previous studies indicates that waters from Silurian-Devonian carbonate strata evolved in a manner similar to, yet distinct from, that of the Ordovician carbonate waters, whereas waters from Mississippian-Pennsylvanian strata that overlie the New Albany Shale Group regional aquitard are marked by fundamentally different Cl-Br-Na and Sr isotope systematics. Evolution of these geochemical formation-water regimes apparently has been influenced significantly by paleohydrologic flow systems.
Transit light curves with finite integration time: Fisher information analysis
Price, Ellen M.; Rogers, Leslie A.
2014-10-10
Kepler has revolutionized the study of transiting planets with its unprecedented photometric precision on more than 150,000 target stars. Most of the transiting planet candidates detected by Kepler have been observed as long-cadence targets with 30 minute integration times, and the upcoming Transiting Exoplanet Survey Satellite will record full frame images with a similar integration time. Integrations of 30 minutes affect the transit shape, particularly for small planets and in cases of low signal-to-noise. Using the Fisher information matrix technique, we derive analytic approximations for the variances and covariances on the transit parameters obtained from fitting light curve photometry collected with a finite integration time. We find that binning the light curve can significantly increase the uncertainties and covariances on the inferred parameters when comparing scenarios with constant total signal-to-noise (constant total integration time in the absence of read noise). Uncertainties on the transit ingress/egress time increase by a factor of 34 for Earth-size planets and 3.4 for Jupiter-size planets around Sun-like stars for integration times of 30 minutes compared to instantaneously sampled light curves. Similarly, uncertainties on the mid-transit time for Earth- and Jupiter-size planets increase by factors of 3.9 and 1.4. Uncertainties on the transit depth are largely unaffected by finite integration times. While correlations among the transit depth, ingress duration, and transit duration all increase in magnitude with longer integration times, the mid-transit time remains uncorrelated with the other parameters. We provide code in Python and Mathematica for predicting the variances and covariances at www.its.caltech.edu/~eprice.
Sample variance in weak lensing: How many simulations are required?
Petri, Andrea; May, Morgan; Haiman, Zoltan
2016-03-24
Constraining cosmology using weak gravitational lensing consists of comparing a measured feature vector of dimension Nb with its simulated counterpart. An accurate estimate of the Nb × Nb feature covariance matrix C is essential to obtain accurate parameter confidence intervals. When C is measured from a set of simulations, an important question is how large this set should be. To answer this question, we construct different ensembles of Nr realizations of the shear field, using a common randomization procedure that recycles the outputs from a smaller number Ns ≤ Nr of independent ray-tracing N-body simulations. We study parameter confidence intervals as a function of (Ns, Nr) in the range 1 ≤ Ns ≤ 200 and 1 ≤ Nr ≲ 10^5. Previous work [S. Dodelson and M. D. Schneider, Phys. Rev. D 88, 063537 (2013)] has shown that Gaussian noise in the feature vectors (from which the covariance is estimated) leads, at quadratic order, to an O(1/Nr) degradation of the parameter confidence intervals. Using a variety of lensing features measured in our simulations, including shear-shear power spectra and peak counts, we show that cubic and quartic covariance fluctuations lead to additional O(1/Nr^2) error degradation that is not negligible when Nr is only a factor of a few larger than Nb. We study the large Nr limit, and find that a single, 240 Mpc/h sized, 512^3-particle N-body simulation (Ns = 1) can be repeatedly recycled to produce as many as Nr = a few × 10^4 shear maps whose power spectra and high-significance peak counts can be treated as statistically independent. Lastly, a small number of simulations (Ns = 1 or 2) is sufficient to forecast parameter confidence intervals at percent accuracy.
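The basic operation being analyzed, estimating an Nb × Nb feature covariance from Nr realizations and inverting it, can be sketched as below. The Hartlap debiasing factor applied to the inverse is standard practice in lensing analyses generally, not something taken from this paper; all numbers are toy values:

```python
import numpy as np

# Estimate an Nb x Nb feature covariance from Nr simulated feature vectors,
# then debias its inverse with the standard Hartlap factor (Nr-Nb-2)/(Nr-1).
rng = np.random.default_rng(1)
Nb, Nr = 10, 200
true_cov = np.diag(np.linspace(1.0, 2.0, Nb))
samples = rng.multivariate_normal(np.zeros(Nb), true_cov, size=Nr)

C_hat = np.cov(samples, rowvar=False)        # sample covariance estimate
hartlap = (Nr - Nb - 2) / (Nr - 1)
precision = hartlap * np.linalg.inv(C_hat)   # debiased inverse covariance
```

The O(1/Nr) and O(1/Nr^2) degradations discussed in the abstract are the residual widening of parameter contours that survives even after this kind of first-order debiasing.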
Prenatal exposure to environmental contaminants and body composition at age 7–9 years
Delvaux, Ineke; Van Cauwenberghe, Jolijn; Den Hond, Elly; Schoeters, Greet; Govarts, Eva; Nelen, Vera; Baeyens, Willy; Van Larebeke, Nicolas; Sioen, Isabelle
2014-07-15
The study aim was to investigate the association between prenatal exposure to endocrine disrupting chemicals (EDCs) and the body composition of 7- to 9-year-old Flemish children. The subjects were 114 Flemish children (50% boys) that took part in the first Flemish Environment and Health Study (2002–2006). Cadmium, PCBs, dioxins, p,p′-DDE and HCB were analysed in cord blood/plasma. When the child reached 7–9 years, height, weight, waist circumference and skinfolds were measured. Significant associations between prenatal exposure to EDCs and indicators of body composition were only found in girls. After adjustment for confounders and covariates, a significant negative association was found in girls between prenatal cadmium exposure and weight, BMI, waist circumference (an indicator of abdominal fat) and the sum of four skinfolds (an indicator of subcutaneous fat). In contrast, a significant positive association (after adjustment for confounders/covariates) was found between prenatal p,p′-DDE exposure and waist circumference as well as waist/height ratio in girls (indicators of abdominal fat). No significant associations were found for prenatal PCB, dioxin and HCB exposure after adjustment for confounders/covariates. This study suggests a positive association between prenatal p,p′-DDE exposure and indicators of abdominal fat and a negative association between prenatal cadmium exposure and indicators of both abdominal and subcutaneous fat in girls between 7 and 9 years old.
Highlights:
• Associations between prenatal contaminant exposure and anthropometrics in children.
• Significant associations only found in girls.
• No significant associations found for prenatal PCB, dioxin and HCB exposure.
• Girls: negative association between cadmium and abdominal and subcutaneous fat.
• Girls: positive association between p,p′-DDE and indicators of abdominal fat.
Low-rank matrix decomposition and spatio-temporal sparse recovery for STAP radar
Sen, Satyabrata
2015-08-04
We develop space-time adaptive processing (STAP) methods by leveraging the advantages of sparse signal processing techniques in order to detect a slowly-moving target. We observe that the inherent sparse characteristics of a STAP problem can be formulated as the low-rankness of the clutter covariance matrix when compared to the total adaptive degrees-of-freedom, and also as the sparse interference spectrum on the spatio-temporal domain. By exploiting these sparse properties, we propose two approaches for estimating the interference covariance matrix. In the first approach, we consider a constrained matrix rank minimization problem (RMP) to decompose the sample covariance matrix into a low-rank positive semidefinite and a diagonal matrix. The solution of the RMP is obtained by applying the trace minimization technique and the singular value decomposition with a matrix shrinkage operator. Our second approach deals with the atomic norm minimization problem to recover the clutter response-vector that has a sparse support on the spatio-temporal plane. We use convex relaxation based standard sparse-recovery techniques to find the solutions. With extensive numerical examples, we demonstrate the performance of the proposed STAP approaches with respect to both ideal and practical scenarios, involving Doppler-ambiguous clutter ridges and spatial and temporal decorrelation effects. As a result, the low-rank matrix decomposition based solution requires secondary measurements as many as twice the clutter rank to attain near-ideal STAP performance, whereas the spatio-temporal sparsity based approach needs a considerably smaller number of secondary data.
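The "low-rank clutter plus diagonal noise" decomposition can be illustrated with a simplified stand-in for the paper's RMP solver: threshold the eigenvalues of the sample covariance and rebuild the matrix as a rank-r part plus a noise floor. This eigenvalue-shrinkage shortcut is our substitution, not the trace-minimization method the abstract describes; all sizes are toy values:

```python
import numpy as np

# Simulate snapshots whose covariance is low-rank clutter + noise, then
# recover a low-rank-plus-diagonal model by eigenvalue thresholding.
rng = np.random.default_rng(2)
N, r = 16, 3
A = rng.standard_normal((N, r))
R_true = A @ A.T + 0.1 * np.eye(N)            # rank-3 clutter + noise floor
snapshots = rng.multivariate_normal(np.zeros(N), R_true, size=200)
R_hat = np.cov(snapshots, rowvar=False)

w, V = np.linalg.eigh(R_hat)
noise = np.median(w)                          # crude noise-level estimate
keep = w > 3 * noise                          # shrink small eigenvalues to zero
R_lowrank = (V[:, keep] * w[keep]) @ V[:, keep].T
R_model = R_lowrank + noise * np.eye(N)       # low-rank + diagonal decomposition
```

The payoff claimed in the abstract is sample efficiency: enforcing this structure lets a near-ideal STAP filter be built from roughly twice the clutter rank in secondary data, far fewer than the unstructured sample covariance needs.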
Energy Science and Technology Software Center (OSTI)
2013-04-12
The STAYSL PNNL Suite of software provides a set of tools for working with neutron activation rates measured in a nuclear fission reactor, an accelerator-based neutron source, or any neutron field to determine the neutron flux spectrum through a generalized least-squares approach. This process is referred to as neutron spectral adjustment since the preferred approach is to use measured data to adjust neutron spectra provided by neutron physics calculations. The input data consist of the reaction rates based on measured activities, an initial estimate of the neutron flux spectrum, neutron activation cross sections and their associated uncertainties (covariances), and relevant correction factors. The output consists of the adjusted neutron flux spectrum and associated covariance matrix, which is useful for neutron dosimetry and radiation damage calculations. The software suite consists of the STAYSL PNNL, SHIELD, BCF, and NJpp Fortran codes and the SigPhi Calculator spreadsheet tool. In addition, the development of this software suite and associated data libraries used the third-party NJOY99 Fortran code (http://t2.lanl.gov/nis/codes/njoy99/). The NJOY99 and NJpp codes are used to assemble cross section and covariance input data libraries (for both SHIELD and STAYSL PNNL) from the International Reactor Dosimetry File of 2002 (IRDF-2002; http://www-nds.iaea.org/irdf2002/) developed by the Nuclear Data Section of the International Atomic Energy Agency (Vienna, Austria). The BCF, SigPhi Calculator, and SHIELD software tools are used to calculate corrected activation rates and neutron self-shielding correction factors, which are inputs to the STAYSL PNNL code.
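The generalized least-squares adjustment at the heart of this workflow has a compact linear-Gaussian form: given a prior spectrum phi0 with covariance C, a response matrix S of group cross sections, and measured rates a with covariance V, the update is a Kalman-style gain. This is a hedged sketch with toy numbers, not STAYSL code; all variable names are ours:

```python
import numpy as np

# Prior group spectrum phi0 (covariance C) adjusted to match measured
# reaction rates a_meas = S @ phi, where S holds group cross sections and
# V is the measurement covariance. Toy dimensions and values throughout.
rng = np.random.default_rng(3)
n_groups, n_reactions = 12, 4
phi0 = np.ones(n_groups)
C = 0.04 * np.eye(n_groups)                  # 20% prior std dev per group
S = np.abs(rng.standard_normal((n_reactions, n_groups)))
V = 1e-4 * np.eye(n_reactions)
a_meas = S @ (phi0 * 1.1)                    # "measured" rates, 10% high

K = C @ S.T @ np.linalg.inv(S @ C @ S.T + V)  # GLS gain
phi_adj = phi0 + K @ (a_meas - S @ phi0)      # adjusted spectrum
C_adj = C - K @ S @ C                         # adjusted (reduced) covariance
```

The adjusted covariance C_adj is the output the abstract highlights for dosimetry and radiation damage work: it records how much the measurements tightened each spectral group.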
Shukla, K. K.; Phanikumar, D. V.; Newsom, Rob K.; Kumar, Niranjan; Ratnam, Venkat; Naja, M.; Singh, Narendra
2014-03-01
A Doppler lidar was installed at Manora Peak, Nainital (29.4°N, 79.2°E; 1958 m amsl) to estimate mixing layer height for the first time, using vertical velocity variance as the basic measurement parameter, for the period September-November 2011. The mixing layer height is found to be located at ~0.57 +/- 0.1 and ~0.45 +/- 0.05 km AGL during day and nighttime, respectively. The estimation of mixing layer height shows good correlation (R > 0.8) between different instruments and with different methods. Our results show that the wavelet covariance transform is a robust method for mixing layer height estimation.
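The wavelet covariance transform mentioned above locates the mixing-layer top as the height where the profile correlates most strongly with a Haar step. A minimal sketch on a synthetic profile (the step height, dilation, and noise are toy values of ours):

```python
import numpy as np

# Toy lidar-like profile with a sharp decrease at the "mixing height";
# the Haar wavelet covariance transform peaks at that transition.
z = np.linspace(0, 2.0, 400)                    # height, km
profile = np.where(z < 0.57, 1.0, 0.2)          # step at 0.57 km
profile += 0.02 * np.random.default_rng(5).standard_normal(z.size)

a = 0.2                                         # wavelet dilation, km

def wct(b):
    """Covariance of the profile with a Haar wavelet centered at height b."""
    h = np.where(np.abs(z - b) <= a / 2, np.where(z <= b, 1.0, -1.0), 0.0)
    return np.mean(profile * h)

centers = z[50:-50]
mlh = centers[np.argmax([wct(b) for b in centers])]   # detected mixing height
```

In practice the same transform is applied to backscatter or variance profiles, and the dilation a is tuned to the expected sharpness of the entrainment zone.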
Information content of nonautonomous free fields in curved space-time
Parreira, J. E.; Nemes, M. C.; Fonseca-Romero, K. M.
2011-03-15
We show that it is possible to quantify the information content of a nonautonomous free field state in curved space-time. A covariance matrix is defined and it is shown that, for symmetric Gaussian field states, the matrix is connected to the entropy of the state. This connection is maintained throughout a quadratic nonautonomous (including possible phase transitions) evolution. Although particle-antiparticle correlations are dynamically generated, the evolution is isentropic. If the current standard cosmological model for the inflationary period is correct, in the absence of decoherence such correlations will be preserved, and could potentially lead to observable effects, allowing for a test of the model.
Path integral quantization of generalized quantum electrodynamics
Bufalo, R.; Pimentel, B. M.; Zambrano, G. E. R.
2011-02-15
In this paper, a complete covariant quantization of generalized electrodynamics is presented through the path integral approach. To this end, we first studied the Hamiltonian structure of the system following Dirac's methodology and then followed the Faddeev-Senjanovic procedure to obtain the transition amplitude. The complete propagators (Schwinger-Dyson-Fradkin equations) with the correct gauge fixing and the generalized Ward-Fradkin-Takahashi identities are also obtained. Afterwards, an explicit calculation of one-loop approximations of all Green's functions and a discussion of the obtained results are presented.
Estimation of the Dynamic States of Synchronous Machines Using an Extended Particle Filter
Zhou, Ning; Meng, Da; Lu, Shuai
2013-11-11
In this paper, an extended particle filter (PF) is proposed to estimate the dynamic states of a synchronous machine using phasor measurement unit (PMU) data. A PF propagates the mean and covariance of states via Monte Carlo simulation, is easy to implement, and can be directly applied to a non-linear system with non-Gaussian noise. The extended PF modifies a basic PF to improve robustness. Using Monte Carlo simulations with practical noise and model uncertainty considerations, the extended PF's performance is evaluated and compared with the basic PF and an extended Kalman filter (EKF). The extended PF results showed high accuracy and robustness against measurement and model noise.
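The propagate / weight / resample cycle of the basic PF the paper builds on can be sketched on a scalar random walk. This is a generic bootstrap filter on a toy model of ours, not the synchronous-machine model from the paper:

```python
import numpy as np

# Bootstrap particle filter tracking a scalar random walk from noisy
# measurements: propagate particles, weight by likelihood, resample.
rng = np.random.default_rng(4)
n_steps, n_particles = 50, 500
q, r = 0.1, 0.5                                   # process / measurement std dev

x_true = np.cumsum(rng.normal(0, q, n_steps))     # hidden state
y = x_true + rng.normal(0, r, n_steps)            # noisy measurements

particles = np.zeros(n_particles)
estimates = []
for yk in y:
    particles += rng.normal(0, q, n_particles)            # propagate
    w = np.exp(-0.5 * ((yk - particles) / r) ** 2)        # weight by likelihood
    w /= w.sum()
    idx = rng.choice(n_particles, n_particles, p=w)       # resample
    particles = particles[idx]
    estimates.append(particles.mean())
estimates = np.array(estimates)
```

Because the weighting step uses the full likelihood, the same loop handles non-Gaussian noise unchanged, which is the advantage the abstract cites over the EKF.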
Non-Gaussian inflationary shapes in G^3 theories beyond Horndeski
Fasiello, Matteo; Renaux-Petel, Sébastien
2014-10-01
We consider the possible signatures of a recently introduced class of healthy theories beyond Horndeski models on higher-order correlators of the inflationary curvature fluctuation. Despite the apparent large number and complexity of the cubic interactions, we show that the leading-order bispectrum generated by the Generalized Horndeski (also called G^3) interactions can be reduced to a linear combination of two well known k-inflationary shapes. We conjecture that said behavior is not an accident of the cubic order but a consequence dictated by the requirements on the absence of Ostrogradski instability, the general covariance and the linear dispersion relation in these theories.
Mode Coupling and the Pygmy Dipole Resonance in a Relativistic Two-Phonon Model
Litvinova, Elena; Ring, Peter; Tselyaev, Victor
2010-07-09
A new class of many-body models, based on covariant density functional theory for excited states, is presented. It allows a parameter free description of the fragmentation of nuclear states induced by mode coupling of two-quasiparticle and two-phonon configurations. As compared to earlier methods it provides a consistent and parameter free theory of the fine structure of nuclear resonances. The method is applied very successfully to investigate the newly discovered low-lying dipole excitations in Sn and Ni isotopes with large neutron excess.
Rigas, Johannes; Luetkenhaus, Norbert
2006-01-15
We consider entanglement detection for quantum-key-distribution systems that use two signal states and continuous-variable measurements. This problem can be formulated as a separability problem in a qubit-mode system. To verify entanglement, we introduce an object that combines the covariance matrix of the mode with the density matrix of the qubit. We derive necessary separability criteria for this scenario. These criteria can be readily evaluated using semidefinite programming and we apply them to the specific quantum key distribution protocol.
Neutron Reference Benchmark Field Specification: ACRR Free-Field Environment (ACRR-FF-CC-32-CL).
Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.; Vehar, David W.
2015-07-01
This report was prepared to support the International Atomic Energy Agency (IAEA) REAL-2016 activity to validate the dosimetry community's ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity free-field reference neutron benchmark field. The field is described and an "a priori" calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this "a priori" spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.
Rising, Michael Evan
2015-06-10
After a brief introduction concerning nuclear data, prompt fission neutron spectrum (PFNS) evaluations and the limited PFNS covariance data in the ENDF/B-VII library, and the important fact that cross section uncertainties ~ PFNS uncertainties, the author presents background information on the PFNS (experimental data, theoretical models, data evaluation, uncertainty quantification) and discusses the impact on certain well-known critical assemblies with regard to integral quantities, sensitivity analysis, and uncertainty propagation. He sketches recent and ongoing research and concludes with some final thoughts.
Metric redefinitions in Einstein-Aether theory
Foster, Brendan Z.
2005-08-15
'Einstein-Aether' theory, in which gravity couples to a dynamical, timelike, unit-norm vector field, provides a means for studying Lorentz violation in a generally covariant setting. Demonstrated here is the effect of a redefinition of the metric and 'aether' fields in terms of the original fields and two free parameters. The net effect is a change of the coupling constants appearing in the action. Using such a redefinition, one of the coupling constants can be set to zero, simplifying studies of solutions of the theory.
Constraining Lorentz Violation with Cosmology
Zuntz, J. A.; Ferreira, P. G.; Zlosnik, T. G.
2008-12-31
The Einstein-aether theory provides a simple, dynamical mechanism for breaking Lorentz invariance. It does so within a generally covariant context and may emerge from quantum effects in more fundamental theories. The theory leads to a preferred frame and can have distinct experimental signatures. In this Letter, we perform a comprehensive study of the cosmological effects of the Einstein-aether theory and use observational data to constrain it. Allied to previously determined consistency and experimental constraints, we find that an Einstein-aether universe can fit experimental data over a wide range of its parameter space, but requires a specific rescaling of the other cosmological densities.
Post-Newtonian parameters and constraints on Einstein-aether theory
Foster, Brendan Z.; Jacobson, Ted
2006-03-15
We analyze the observational and theoretical constraints on ''Einstein-aether theory,'' a generally covariant theory of gravity coupled to a dynamical, unit, timelike vector field that breaks local Lorentz symmetry. The results of a computation of the remaining post-Newtonian parameters are reported. These are combined with other results to determine the joint post-Newtonian, vacuum-Cerenkov, nucleosynthesis, stability, and positive-energy constraints. All of these constraints are satisfied by parameters in a large two-dimensional region in the four-dimensional parameter space defining the theory.
Precision Gas Sampling (PGS) Validation 2011-2014 Final Campaign Report
In this field campaign, we used eddy covariance towers to quantify carbon, water, and energy fluxes from a pasture and a wheat field that were converted to switchgrass. The U.S. Department of Energy is investing in switchgrass as a cellulosic bioenergy crop, but there is …
Quantum field theory in the presence of a medium: Green's function expansions
Kheirandish, Fardin; Salimi, Shahriar
2011-12-15
Starting from a Lagrangian and using functional-integration techniques, series expansions of Green's function of a real scalar field and electromagnetic field, in the presence of a medium, are obtained. The parameter of expansion in these series is the susceptibility function of the medium. Relativistic and nonrelativistic Langevin-type equations are derived. Series expansions for Lifshitz energy in finite temperature and for an arbitrary matter distribution are derived. Covariant formulations for both scalar and electromagnetic fields are introduced. Two illustrative examples are given.
Relations between health indicators and residential proximity to coal mining in West Virginia
Hendryx, M.; Ahern, M.M.
2008-04-15
We used data from a survey of 16,493 West Virginians merged with county-level coal production and other covariates to investigate the relations between health indicators and residential proximity to coal mining. Results of hierarchical analyses indicated that high levels of coal production were associated with worse adjusted health status and with higher rates of cardiopulmonary disease, chronic obstructive pulmonary disease, hypertension, lung disease, and kidney disease. Research is recommended to ascertain the mechanisms, magnitude, and consequences of a community coal-mining exposure effect.
Sigma: Web Retrieval Interface for Nuclear Reaction Data
Pritychenko, B.; Sonzogni, A.A.
2008-06-24
The authors present Sigma, a Web-rich application which provides user-friendly access in processing and plotting of the evaluated and experimental nuclear reaction data stored in the ENDF-6 and EXFOR formats. The main interface includes browsing using a periodic table and a directory tree, basic and advanced search capabilities, interactive plots of cross sections, angular distributions and spectra, comparisons between evaluated and experimental data, computations between different cross section sets. Interactive energy-angle, neutron cross section uncertainties plots and visualization of covariance matrices are under development. Sigma is publicly available at the National Nuclear Data Center website at www.nndc.bnl.gov/sigma.
Quantum field theory of classically unstable Hamiltonian dynamics
Strauss, Y.; Horwitz, L. P.; Levitan, J.; Yahalom, A.
2015-07-15
We study a class of dynamical systems for which the motions can be described in terms of geodesics on a manifold (ordinary potential models can be cast into this form by means of a conformal map). It is rigorously proven that the geodesic deviation equation of Jacobi, constructed with a second covariant derivative, is unitarily equivalent to that of a parametric harmonic oscillator, and we study the second quantization of this oscillator. The excitations of the Fock space modes correspond to the emission and absorption of quanta into the dynamical medium, thus associating unstable behavior of the dynamical system with calculable fluctuations in an ensemble with possible thermodynamic consequences.
Unitarity check in gravitational Higgs mechanism
The effective field theory of massive gravity has long been formulated in a generally covariant way [N. Arkani-Hamed, H. Georgi, and M. D. Schwartz, Ann. Phys. (N.Y.) 305, 96 (2003)]. Using this formalism, it has been found recently that there exists a class of massive nonlinear theories that are free of the Boulware-Deser ghosts, at …
Perfetti, Christopher M; Rearden, Bradley T
2014-01-01
This work introduces a new approach for calculating sensitivity coefficients for generalized neutronic responses to nuclear data uncertainties using continuous-energy Monte Carlo methods. The approach presented in this paper, known as the GEAR-MC method, allows for the calculation of generalized sensitivity coefficients for multiple responses in a single Monte Carlo calculation with no nuclear data perturbations or knowledge of nuclear covariance data. The theory behind the GEAR-MC method is presented here, and proof of principle is demonstrated by using the GEAR-MC method to calculate sensitivity coefficients for responses in several 3D, continuous-energy Monte Carlo applications.
Many-Group Cross-Section Adjustment Techniques for Boiling Water Reactor Adaptive Simulation
Jessee, Matthew Anderson
2011-01-01
Computational capability has been developed to adjust multigroup neutron cross sections, including self-shielding correction factors, to improve the fidelity of boiling water reactor (BWR) core modeling and simulation. The method involves propagating multigroup neutron cross-section uncertainties through various BWR computational models to evaluate uncertainties in key core attributes such as core k{sub eff}, nodal power distributions, thermal margins, and in-core detector readings. Uncertainty-based inverse theory methods are then employed to adjust multigroup cross sections to minimize the disagreement between BWR core modeling predictions and observed (i.e., measured) plant data. For this paper, observed plant data are virtually simulated in the form of perturbed three-dimensional nodal power distributions with the perturbations sized to represent actual discrepancies between predictions and real plant data. The major focus of this work is to efficiently propagate multigroup neutron cross-section uncertainty through BWR lattice physics and core simulator calculations. The data adjustment equations are developed using a subspace approach that exploits the ill-conditioning of the multigroup cross-section covariance matrix to minimize computation and storage burden. Tikhonov regularization is also employed to improve the conditioning of the data adjustment equations. Expressions are also provided for posterior covariance matrices of both the multigroup cross-section and core attributes uncertainties.
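The adjustment step the abstract describes is a generalized least-squares update of prior cross sections against observed core responses, with Tikhonov regularization to improve the conditioning of the solve. A minimal numpy sketch of that update, not the paper's subspace implementation; the matrix names and shapes are assumptions:

```python
import numpy as np

def adjust_cross_sections(x0, C, S, y_obs, y_calc, R, lam=1e-6):
    """Generalized least-squares data adjustment with Tikhonov regularization.

    x0     : prior multigroup cross sections, shape (n,)
    C      : prior cross-section covariance matrix, shape (n, n)
    S      : sensitivity matrix of core responses to cross sections, (m, n)
    y_obs  : observed plant responses, shape (m,)
    y_calc : responses calculated from the prior cross sections, shape (m,)
    R      : measurement covariance, shape (m, m)
    lam    : Tikhonov parameter added to improve conditioning
    """
    K = S @ C @ S.T + R + lam * np.eye(len(y_obs))  # regularized innovation covariance
    gain = C @ S.T @ np.linalg.inv(K)               # Kalman-like gain
    x_adj = x0 + gain @ (y_obs - y_calc)            # adjusted cross sections
    C_adj = C - gain @ S @ C                        # posterior covariance
    return x_adj, C_adj
```

The adjustment pulls the calculated responses toward the observations while shrinking the posterior covariance, which is the behavior the paper's expressions for the posterior matrices formalize.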
Total least squares for anomalous change detection
Theiler, James P; Matsekh, Anna M
2010-01-01
A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting the derivations of two of the most popular anomalous change detection algorithms (chronochrome and covariance equalization) in a common language, is a generalization of these algorithms with the potential for better performance.
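The core TLSQ idea can be sketched in a few lines, assuming the two coregistered images are flattened to (pixels x bands) arrays: stack the images and score each pixel by its residual energy in the trailing eigenvector subspace of the stacked covariance. This is only an illustration of the plain TLSQ variant; the per-image whitening that makes the detector coordinate invariant is omitted:

```python
import numpy as np

def tlsq_anomaly_scores(X, Y, k):
    """Total-least-squares anomalous change scores (illustrative sketch).

    X, Y : (n_pixels, d) spectra of two coregistered images
    k    : number of leading principal components treated as 'signal';
           residual energy in the trailing components is the anomaly score.
    """
    Z = np.hstack([X, Y])                 # stacked pixel space
    Z = Z - Z.mean(axis=0)                # center
    cov = np.cov(Z, rowvar=False)
    w, V = np.linalg.eigh(cov)            # eigenvalues in ascending order
    trailing = V[:, : Z.shape[1] - k]     # TLS residual subspace
    resid = Z @ trailing                  # projection onto residual subspace
    return np.sum(resid**2, axis=1)       # large score = anomalous change
```

A pixel whose relation between the two images breaks the dominant regression pattern receives a large residual, which is exactly the subspace-RX-in-stacked-space equivalence the abstract notes.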
Detecting a Lorentz-violating field in cosmology
Li Baojiu; Barrow, John D.; Mota, David F.
2008-01-15
We consider cosmology in the Einstein-Aether theory (the generally covariant theory of gravitation coupled to a dynamical timelike Lorentz-violating vector field) with a linear Ae-Lagrangian. The 3+1 spacetime splitting approach is used to derive covariant and gauge invariant perturbation equations which are valid for a general class of Lagrangians. Restricting attention to the parameter space of these theories which is consistent with local gravity experiments, we show that there are tracking behaviors for the Ae field, both in the background cosmology and at the linear perturbation level. The primordial power spectrum of scalar perturbations in this model is shown to be the same as that predicted by standard general relativity. However, the power spectrum of tensor perturbation is different from that in general relativity, but has a smaller amplitude and so cannot be detected at present. We also study the implications for late-time cosmology and find that the evolution of photon and neutrino anisotropic stresses can source the Ae field perturbation during the radiation and matter dominated epochs, and as a result the CMB and matter power spectra are modified. However, these effects are degenerate with respect to other cosmological parameters, such as neutrino masses and the bias parameter in the observed galaxy spectrum.
Nuclear Material Variance Calculation
Energy Science and Technology Software Center (OSTI)
1995-01-01
MAVARIC (Materials Accounting VARIance Calculations) is a custom spreadsheet that significantly reduces the effort required to make the variance and covariance calculations needed to determine the detection sensitivity of a materials accounting system to a loss of special nuclear material (SNM). The user is required to enter information into one of four data tables depending on the type of term in the materials balance (MB) equation. The four data tables correspond to input transfers, output transfers, and two types of inventory terms, one for nondestructive assay (NDA) measurements and one for measurements made by chemical analysis. Each data entry must contain an identification number and a short description, as well as values for the SNM concentration, the bulk mass (or solution volume), the measurement error standard deviations, and the number of measurements during an accounting period. The user must also specify the type of error model (additive or multiplicative) associated with each measurement, and possible correlations between transfer terms. Predefined spreadsheet macros are used to perform the variance and covariance calculations for each term based on the corresponding set of entries. MAVARIC has been used for sensitivity studies of chemical separation facilities, fuel processing and fabrication facilities, and gas centrifuge and laser isotope enrichment facilities.
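The per-term bookkeeping the spreadsheet automates can be illustrated with a small sketch of standard variance propagation. This is a hedged illustration of the general technique, not the actual MAVARIC macros, and the parameter names are assumptions:

```python
import math

def term_variance(bulk_mass, concentration, n_measurements,
                  sd_random, sd_systematic, model="multiplicative"):
    """Variance of one materials-balance term (illustrative sketch).

    The SNM content of the term is bulk_mass * concentration, measured
    n_measurements times.  The random error component averages down with
    repeated measurements; the systematic component is fully correlated
    across measurements and does not.
    For the multiplicative model, sd_* are relative standard deviations;
    for the additive model they are absolute standard deviations.
    """
    snm = bulk_mass * concentration
    if model == "multiplicative":
        var_random = (snm * sd_random) ** 2 / n_measurements
        var_systematic = (snm * sd_systematic) ** 2
    else:
        var_random = sd_random ** 2 / n_measurements
        var_systematic = sd_systematic ** 2
    return var_random + var_systematic

def mb_standard_deviation(term_variances):
    """Standard deviation of the materials balance, assuming uncorrelated terms."""
    return math.sqrt(sum(term_variances))
```

Correlations between transfer terms, which MAVARIC also supports, would add covariance cross-terms to the sum inside `mb_standard_deviation`.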
Cross-linked structure of network evolution
Bassett, Danielle S.; Wymbs, Nicholas F.; Grafton, Scott T.; Porter, Mason A.; CABDyN Complexity Centre, University of Oxford, Oxford, OX1 1HP ; Mucha, Peter J.; Department of Applied Physical Sciences, University of North Carolina, Chapel Hill, North Carolina 27599
2014-03-15
We study the temporal co-variation of network co-evolution via the cross-link structure of networks, for which we take advantage of the formalism of hypergraphs to map cross-link structures back to network nodes. We investigate two sets of temporal network data in detail. In a network of coupled nonlinear oscillators, hyperedges that consist of network edges with temporally co-varying weights uncover the driving co-evolution patterns of edge weight dynamics both within and between oscillator communities. In the human brain, networks that represent temporal changes in brain activity during learning exhibit early co-evolution that then settles down with practice. Subsequent decreases in hyperedge size are consistent with emergence of an autonomous subgraph whose dynamics no longer depends on other parts of the network. Our results on real and synthetic networks give a poignant demonstration of the ability of cross-link structure to uncover unexpected co-evolution attributes in both real and synthetic dynamical systems. This, in turn, illustrates the utility of analyzing cross-links for investigating the structure of temporal networks.
Rising, M. E.; Prinja, A. K.
2012-07-01
A critical neutron transport problem with random material properties is introduced. The total cross section and the average neutron multiplicity are assumed to be uncertain, characterized by the mean and variance with a log-normal distribution. The average neutron multiplicity and the total cross section are assumed to be uncorrelated, and the material properties for differing materials are also assumed to be uncorrelated. The principal component analysis method is used to decompose the covariance matrix into eigenvalues and eigenvectors, after which 'realizations' of the material properties can be computed. A simple Monte Carlo brute-force sampling of the decomposed covariance matrix is employed to obtain a benchmark result for each test problem. In order to save computational time and to characterize the moments and probability density function of the multiplication factor, the polynomial chaos expansion method is employed along with the stochastic collocation method. A Gauss-Hermite quadrature set is convolved into a multidimensional tensor product quadrature set and is successfully used to compute the polynomial chaos expansion coefficients of the multiplication factor. Finally, for a particular critical fuel pin assembly the appropriate number of random variables and polynomial expansion order are investigated. (authors)
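The covariance-decomposition sampling step can be sketched as follows. For simplicity this sketch treats the supplied mean and covariance as log-space parameters of the log-normal distribution; that convention is an assumption made here for illustration, not necessarily the paper's:

```python
import numpy as np

def sample_realizations(mean, cov, n_samples, rng):
    """Draw correlated realizations of uncertain material properties by
    eigendecomposition (principal components) of the covariance matrix.

    Gaussian samples are built in log space and exponentiated, giving
    log-normally distributed, strictly positive realizations.
    """
    w, V = np.linalg.eigh(cov)            # eigenvalues / eigenvectors
    w = np.clip(w, 0.0, None)             # guard against round-off negatives
    L = V @ np.diag(np.sqrt(w))           # maps iid normals to correlated normals
    z = rng.normal(size=(n_samples, len(mean)))
    return np.exp(np.log(mean) + z @ L.T) # log-normal realizations
```

Each row is one realization of the material properties, suitable for brute-force Monte Carlo benchmarking or for evaluation at stochastic collocation points.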
The effective field theory of dark energy
Gubitosi, Giulia; Vernizzi, Filippo; Piazza, Federico E-mail: fpiazza@apc.univ-paris7.fr
2013-02-01
We propose a universal description of dark energy and modified gravity that includes all single-field models. By extending a formalism previously applied to inflation, we consider the metric universally coupled to matter fields and we write in terms of it the most general unitary gauge action consistent with the residual unbroken symmetries of spatial diffeomorphisms. Our action is particularly suited for cosmological perturbation theory: the background evolution depends on only three operators. All other operators start at least at quadratic order in the perturbations and their effects can be studied independently and systematically. In particular, we focus on the properties of a few operators which appear in non-minimally coupled scalar-tensor gravity and galileon theories. In this context, we study the mixing between gravity and the scalar degree of freedom. We assess the quantum and classical stability, derive the speed of sound of fluctuations and the renormalization of the Newton constant. The scalar can always be de-mixed from gravity at quadratic order in the perturbations, but not necessarily through a conformal rescaling of the metric. We show how to express covariant field-operators in our formalism and give several explicit examples of dark energy and modified gravity models in our language. Finally, we discuss the relation with the covariant EFT methods recently appeared in the literature.
Hamiltonian dynamics of an exotic action for gravity in three dimensions
Escalante, Alberto; Manuel-Cabrera, J.
2014-04-15
The Hamiltonian dynamics and the canonical covariant formalism for an exotic action in three dimensions are developed. Working in the complete phase space, we report a full Hamiltonian description of the theory: the extended action, the extended Hamiltonian, the algebra among the constraints, the Dirac brackets, and the correct gauge transformations. In addition, we show that although the exotic action and tetrad gravity with a cosmological constant give rise to the same equations of motion, they are not equivalent; in fact, their corresponding Dirac brackets are quite different. Finally, we construct a gauge invariant symplectic form which in turn represents a complete Hamiltonian description of the covariant phase space. -- Highlights: We report a detailed Hamiltonian analysis for an exotic action of gravity. We show that the Palatini and exotic actions are not equivalent. The exotic action is a non-commutative theory. The fundamental gauge transformations of the theory are ?-deformed Poincaré transformations. A Lorentz and gauge invariant symplectic two-form is constructed.
Cosmic Shear Measurements with DES Science Verification Data
Becker, M. R.
2015-07-20
We present measurements of weak gravitational lensing cosmic shear two-point statistics using Dark Energy Survey Science Verification data. We demonstrate that our results are robust to the choice of shear measurement pipeline, either ngmix or im3shape, and robust to the choice of two-point statistic, including both real and Fourier-space statistics. Our results pass a suite of null tests including tests for B-mode contamination and direct tests for any dependence of the two-point functions on a set of 16 observing conditions and galaxy properties, such as seeing, airmass, galaxy color, galaxy magnitude, etc. We use a large suite of simulations to compute the covariance matrix of the cosmic shear measurements and assign statistical significance to our null tests. We find that our covariance matrix is consistent with the halo model prediction, indicating that it has the appropriate level of halo sample variance. We also compare the same jackknife procedure applied to the data and the simulations in order to search for additional sources of noise not captured by the simulations. We find no statistically significant extra sources of noise in the data. The overall detection significance with tomography for our highest source density catalog is 9.7σ. Cosmological constraints from the measurements in this work are presented in a companion paper (DES et al. 2015).
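The jackknife procedure mentioned above, applied to per-region measurements of a binned statistic, follows the standard delete-one estimator. A generic sketch of that estimator (not the DES pipeline itself; the input layout is an assumption):

```python
import numpy as np

def jackknife_covariance(samples):
    """Delete-one jackknife covariance estimate (illustrative sketch).

    samples : (n_patches, n_bins) array holding the statistic measured in
              each jackknife region (e.g. sky patch).  Returns the
              (n_bins, n_bins) covariance of the mean, including the
              standard (n - 1)/n jackknife prefactor.
    """
    n = samples.shape[0]
    total = samples.sum(axis=0)
    loo = (total - samples) / (n - 1)   # delete-one (leave-one-out) means
    diff = loo - loo.mean(axis=0)
    return (n - 1) / n * (diff.T @ diff)
```

For the simple case of a sample mean, this reduces exactly to the familiar s^2/n, which makes it a convenient cross-check against simulation-based covariances, as done in the paper.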
Mortality in Appalachian coal mining regions: the value of statistical life lost
Hendryx, M.; Ahern, M.M.
2009-07-15
We examined elevated mortality rates in Appalachian coal mining areas for 1979-2005, and estimated the corresponding value of statistical life (VSL) lost relative to the economic benefits of the coal mining industry. We compared age-adjusted mortality rates and socioeconomic conditions across four county groups: Appalachia with high levels of coal mining, Appalachia with lower mining levels, Appalachia without coal mining, and other counties in the nation. We converted mortality estimates to VSL estimates and compared the results with the economic contribution of coal mining. We also conducted a discount analysis to estimate current benefits relative to future mortality costs. The heaviest coal mining areas of Appalachia had the poorest socioeconomic conditions. Before adjusting for covariates, the number of excess annual age-adjusted deaths in coal mining areas ranged from 3,975 to 10,923, depending on years studied and comparison group. Corresponding VSL estimates ranged from $18.563 billion to $84.544 billion, with a point estimate of $50.010 billion, greater than the $8.088 billion economic contribution of coal mining. After adjusting for covariates, the number of excess annual deaths in mining areas ranged from 1,736 to 2,889, and VSL costs continued to exceed the benefits of mining. Discounting VSL costs into the future resulted in excess costs relative to benefits in seven of eight conditions, with a point estimate of $41.846 billion.
Decreases in Human Semen Quality with Age Among Healthy Men
Eskenazi, B.; Wyrobek, A.J.; Kidd, S.A.; Moore, L.; Young, S.S.; Moore, D.
2001-12-01
The objective of this report is to characterize the associations between age and semen quality among healthy active men after controlling for identified covariates. Ninety-seven healthy, nonsmoking men between 22 and 80 years without known fertility problems who worked for or retired from a large research laboratory. There was a gradual decrease in all semen parameters from 22-80 years of age. After adjusting for covariates, volume decreased 0.03 ml per year (p = 0.001); sperm concentration decreased 2.5% per year (p = 0.005); total count decreased 3.6% per year of age (p < 0.001); motility decreased 0.7% per year (P < 0.001); progressive motility decreased 3.1% per year (p < 0.001); and total progressively motile sperm decreased 4.8% per year (p < 0.001). In a group of healthy active men, semen volume, sperm concentration, total sperm count, and sperm motility decrease continuously between 22-80 years of age, with no evidence of a threshold.
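Because the adjusted trends are reported as percent-per-year declines, they compound multiplicatively with age. A small arithmetic sketch, using the reported 2.5%/yr concentration figure purely as an illustration:

```python
def projected_value(initial, percent_decline_per_year, years):
    """Compound a fixed percent-per-year decline over a span of years,
    as implied by the adjusted trends in the abstract (illustrative only)."""
    return initial * (1.0 - percent_decline_per_year / 100.0) ** years

# A 2.5%/yr decline over the 58-year span from age 22 to 80 leaves
# roughly 23% of the starting value: 0.975**58 is about 0.23.
```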
The role of optimality in characterizing CO2 seepage from geological carbon sequestration sites
Cortis, Andrea; Oldenburg, Curtis M.; Benson, Sally M.
2008-09-15
Storage of large amounts of carbon dioxide (CO{sub 2}) in deep geological formations for greenhouse gas mitigation is gaining momentum and moving from its conceptual and testing stages towards widespread application. In this work we explore various optimization strategies for characterizing surface leakage (seepage) using near-surface measurement approaches such as accumulation chambers and eddy covariance towers. Seepage characterization objectives and limitations need to be defined carefully from the outset, especially in light of large natural background variations that can mask seepage. The cost and sensitivity of seepage detection are related to four critical length scales pertaining to the size of: (1) the region that needs to be monitored; (2) the footprint of the measurement approach; (3) the main seepage zone; and (4) the region in which concentrations or fluxes are influenced by seepage. Seepage characterization objectives may include one or all of the tasks of detecting, locating, and quantifying seepage. Each of these tasks has its own optimal strategy. Detecting and locating seepage in a region in which there is no expected or preferred location for seepage, nor existing evidence for seepage, requires monitoring on a fixed grid, e.g., using eddy covariance towers. The fixed-grid approaches needed to detect seepage are expected to require large numbers of eddy covariance towers for large-scale geologic CO{sub 2} storage. Once seepage has been detected and roughly located, seepage zones and features can be optimally pinpointed through a dynamic search strategy, e.g., employing accumulation chambers and/or soil-gas sampling. Quantification of seepage rates can be done through measurements on a localized fixed grid once the seepage is pinpointed. Background measurements are essential for seepage detection in natural ecosystems.
Artificial neural networks are considered as regression models useful for distinguishing natural system behavior from anomalous behavior suggestive of CO{sub 2} seepage without need for detailed understanding of natural system processes. Because of the local extrema in CO{sub 2} fluxes and concentrations in natural systems, simple steepest-descent algorithms are not effective and evolutionary computation algorithms are proposed as a paradigm for dynamic monitoring networks to pinpoint CO{sub 2} seepage areas.
Xia, Jingfeng; Zhuang, Qianlai; Law, Beverly E.; Chen, Jiquan; Baldocchi, Dennis D.; Cook, David R.; Oren, Ram; Richardson, Andrew D.; Wharton, Sonia; Ma, Siyan; Martin, Timothy A.; Verma, Shashi B.; Suyker, Andrew E.; Scott, Russell L.; Monson, Russell K.; Litvak, Marcy; Hollinger, David Y.; Sun, Ge; Davis, Kenneth J.; Bolstad, Paul V.; Burns, Sean P.; Curtis, Peter S.; Drake, Bert G.; Falk, Matthias; Fischer, Marc L.; Foster, David R.; Gu, Lianhong; Hadley, Julian L.; Katul, Gabriel G.; Matamala, Roser; McNulty, Steve; Meyers, Tilden P.; Munger, J. William; Noormets, Asko; Oechel, Walter C.; U, Kyaw Tha Paw; Schmid, Hans Peter; Starr, Gregory; Torn, Margaret S.; Wofsy, Steven C.
2009-01-28
The quantification of carbon fluxes between the terrestrial biosphere and the atmosphere is of scientific importance and also relevant to climate-policy making. Eddy covariance flux towers provide continuous measurements of ecosystem-level exchange of carbon dioxide spanning diurnal, synoptic, seasonal, and interannual time scales. However, these measurements only represent the fluxes at the scale of the tower footprint. Here we used remotely sensed data from the Moderate Resolution Imaging Spectroradiometer (MODIS) to upscale gross primary productivity (GPP) data from eddy covariance flux towers to the continental scale. We first combined GPP and MODIS data for 42 AmeriFlux towers encompassing a wide range of ecosystem and climate types to develop a predictive GPP model using a regression tree approach. The predictive model was trained using observed GPP over the period 2000-2004, and was validated using observed GPP over the period 2005-2006 and leave-one-out cross-validation. Our model predicted GPP fairly well at the site level. We then used the model to estimate GPP for each 1 km x 1 km cell across the U.S. for each 8-day interval over the period from February 2000 to December 2006 using MODIS data. Our GPP estimates provide a spatially and temporally continuous measure of gross primary production for the U.S. that is highly constrained by eddy covariance flux data. Our study demonstrated that our empirical approach is effective for upscaling eddy flux GPP data to the continental scale and producing continuous GPP estimates across multiple biomes. With these estimates, we then examined the patterns, magnitude, and interannual variability of GPP. We estimated a gross carbon uptake between 6.91 and 7.33 Pg C yr{sup -1} for the conterminous U.S. Drought, fires, and hurricanes reduced annual GPP at regional scales and could have a significant impact on the U.S. net ecosystem carbon exchange. The sources of the interannual variability of U.S. GPP were dominated by these extreme climate events and disturbances.
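The leave-one-out validation scheme used for the GPP model can be sketched generically. The regression-tree model itself is replaced here by any user-supplied fit/predict pair, so this is only an illustration of the validation loop, not the study's code:

```python
import numpy as np

def leave_one_out_rmse(fit, predict, X, y):
    """Leave-one-out cross-validation RMSE for an arbitrary model.

    fit(X, y)        -> fitted model object
    predict(model, X)-> predictions for rows of X
    Each site (row) is held out in turn, the model is refit on the
    remainder, and the held-out prediction error is accumulated.
    """
    errs = []
    for i in range(len(y)):
        mask = np.arange(len(y)) != i          # drop site i
        model = fit(X[mask], y[mask])
        errs.append(predict(model, X[i:i + 1])[0] - y[i])
    return float(np.sqrt(np.mean(np.square(errs))))
```

With 42 towers this loop refits the model 42 times, giving an out-of-sample error estimate without sacrificing any sites to a separate hold-out set.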
TCP Final Report: Measuring the Effects of Stand Age and Soil Drainage on Boreal Forest
Michael L. Goulden
2007-05-02
This was a 6-year research project in the Canadian boreal forest that focused on using field observations to understand how boreal forest carbon balance changes during recovery from catastrophic forest fire. The project began with two overarching goals: (1) to develop techniques that would allow year-round operation of 7 eddy covariance sites in a harsh environment at a much lower cost than had previously been possible, and (2) to use these measurements to determine how carbon balance changes during secondary succession. The project ended in 2006, having accomplished its primary objectives. Key contributions to DOE during the study were: (1) Designed, tested, and demonstrated a lightweight, fully portable eddy flux system that exploits several economies of scale to allow AmeriFlux-quality measurements of CO{sub 2} exchange at many sites for a large reduction in cost (Goulden et al. 2006). (2) Added seven year-round sites to AmeriFlux, at a relatively low per-site cost, using the Eddy Covariance Mesonet approach (Goulden et al. 2006). These data are freely available on the AmeriFlux web site. (3) Tested and rejected the conventional wisdom that forests lose large amounts of carbon during the first decade after disturbance, then accumulate large amounts of carbon for {approx}several decades, and then return to steady state in old age. Rather, we found that boreal forest recovers quickly from fire and begins to accumulate carbon within {approx}5 years after disturbance. Additionally, we found no evidence that carbon accumulation declines in old stands (Goulden et al. 2006, Goulden et al. in prep). (4) Tested and rejected claims based on remote sensing observations (for example, Myneni et al. 1996 using AVHRR) that regions of boreal forest have changed markedly in the last 20 years. Rather, we assembled a much richer data set than had been used in the past (eddy covariance observations, tree rings, biomass, NPP, AVHRR, and LandSat), which we used to establish that the forests in our study region have remained largely constant over the last 20 years after accounting for the effects of stand age and succession (McMillen et al. in review).
Litvinova, E.; Ring, P.; Tselyaev, V.; Langanke, K.
2009-05-15
Theoretical studies of low-lying dipole strength in even-even spherical nuclei within the relativistic quasiparticle time blocking approximation (RQTBA) are presented. The RQTBA developed recently as an extension of the self-consistent relativistic quasiparticle random-phase approximation (RQRPA) enables one to investigate the effects of the coupling of two-quasiparticle excitations to collective vibrations within a fully consistent calculation scheme based on covariant energy density functional theory. Dipole spectra of even-even {sup 130}Sn-{sup 140}Sn and {sup 68}Ni-{sup 78}Ni isotopes calculated within both RQRPA and RQTBA show two well-separated collective structures: the higher lying giant dipole resonance and the lower lying pygmy dipole resonance, which can be identified by the different behavior of the transition densities of states in these regions.
Statistical tools for prognostics and health management of complex systems
Collins, David H; Huzurbazar, Aparna V; Anderson - Cook, Christine M
2010-01-01
Prognostics and Health Management (PHM) is increasingly important for understanding and managing today's complex systems. These systems are typically mission- or safety-critical, expensive to replace, and operate in environments where reliability and cost-effectiveness are a priority. We present background on PHM and a suite of applicable statistical tools and methods. Our primary focus is on predicting future states of the system (e.g., the probability of being operational at a future time, or the expected remaining system life) using heterogeneous data from a variety of sources. We discuss component reliability models incorporating physical understanding, condition measurements from sensors, and environmental covariates; system reliability models that allow prediction of system failure time distributions from component failure models; and the use of Bayesian techniques to incorporate expert judgments into component and system models.
Adler function and hadronic contribution to the muon g-2 in a nonlocal chiral quark model
Dorokhov, Alexander E.
2004-11-01
The behavior of the vector Adler function at spacelike momenta is studied in the framework of a covariant chiral quark model with instantonlike quark-quark interaction. This function describes the transition between the high-energy asymptotically free region of almost massless current quarks to the low-energy hadronized regime with massive constituent quarks. The model reproduces the Adler function and V-A correlator extracted from the ALEPH and OPAL data on hadronic {tau} lepton decays, transformed into the Euclidean domain via dispersion relations. The leading order contribution from the hadronic part of the photon vacuum polarization to the anomalous magnetic moment of the muon, a{sub {mu}}{sup hvp(1)}, is estimated.
Final Scientific EFNUDAT Workshop
None
2011-10-06
The Final Scientific EFNUDAT Workshop - organized by the CERN/EN-STI group on behalf of the n_TOF Collaboration - will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu Topics of interest include: data evaluation, cross section measurements, experimental techniques, uncertainties and covariances, fission properties, and current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Géraldine Jean
Effective perfect fluids in cosmology
Ballesteros, Guillermo; Bellazzini, Brando E-mail: brando.bellazzini@pd.infn.it
2013-04-01
We describe the cosmological dynamics of perfect fluids within the framework of effective field theories. The effective action is a derivative expansion whose terms are selected by the symmetry requirements on the relevant long-distance degrees of freedom, which are identified with comoving coordinates. The perfect fluid is defined by requiring invariance of the action under internal volume-preserving diffeomorphisms and general covariance. At lowest order in derivatives, the dynamics is encoded in a single function of the entropy density that characterizes the properties of the fluid, such as the equation of state and the speed of sound. This framework allows a neat simultaneous description of fluid and metric perturbations. Longitudinal fluid perturbations are closely related to the adiabatic modes, while the transverse modes mix with vector metric perturbations as a consequence of vorticity conservation. This formalism features a large flexibility which can be of practical use for higher order perturbation theory and cosmological parameter estimation.
Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.; Vehar, David W.
2015-07-01
This report was put together to support the International Atomic Energy Agency (IAEA) REAL- 2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the 44 inch Lead-Boron (LB44) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 31 integral dosimetry measurements in the neutron field are reported.
Vega, Richard Manuel; Parma, Edward J.; Griffin, Patrick J.; Vehar, David W.
2015-07-01
This report was put together to support the International Atomic Energy Agency (IAEA) REAL- 2016 activity to validate the dosimetry community’s ability to use a consistent set of activation data and to derive consistent spectral characterizations. The report captures details of integral measurements taken in the Annular Core Research Reactor (ACRR) central cavity with the Polyethylene-Lead-Graphite (PLG) bucket, reference neutron benchmark field. The field is described and an “a priori” calculated neutron spectrum is reported, based on MCNP6 calculations, and a subject matter expert (SME) based covariance matrix is given for this “a priori” spectrum. The results of 37 integral dosimetry measurements in the neutron field are reported.
Accurate Development of Thermal Neutron Scattering Cross Section Libraries
Hawari, Ayman; Dunn, Michael
2014-06-10
The objective of this project is to develop a holistic (fundamental and accurate) approach for generating thermal neutron scattering cross section libraries for a collection of important neutron moderators and reflectors. The primary components of this approach are the physical accuracy and completeness of the generated data libraries. Consequently, for the first time, thermal neutron scattering cross section data libraries will be generated that are based on accurate theoretical models, that are carefully benchmarked against experimental and computational data, and that contain complete covariance information that can be used in propagating the data uncertainties through the various components of the nuclear design and execution process. To achieve this objective, computational and experimental investigations will be performed on a carefully selected subset of materials that play a key role in all stages of the nuclear fuel cycle.
Probing particle and nuclear physics models of neutrinoless double beta decay with different nuclei
Fogli, G. L.; Rotunno, A. M. [Dipartimento Interateneo di Fisica 'Michelangelo Merlin', Via Amendola 173, 70126 Bari (Italy); Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Via Orabona 4, 70126 Bari (Italy); Lisi, E. [Istituto Nazionale di Fisica Nucleare, Sezione di Bari, Via Orabona 4, 70126 Bari (Italy)
2009-07-01
Half-life estimates for neutrinoless double beta decay depend on particle physics models for lepton-flavor violation, as well as on nuclear physics models for the structure and transitions of candidate nuclei. Different models considered in the literature can be contrasted - via prospective data - with a 'standard' scenario characterized by light Majorana neutrino exchange and by the quasiparticle random phase approximation, for which the theoretical covariance matrix has been recently estimated. We show that, assuming future half-life data in four promising nuclei ({sup 76}Ge, {sup 82}Se, {sup 130}Te, and {sup 136}Xe), the standard scenario can be distinguished from a few nonstandard physics models, while being compatible with alternative state-of-the-art nuclear calculations (at 95% C.L.). Future signals in different nuclei may thus help to discriminate at least some decay mechanisms, without being spoiled by current nuclear uncertainties. Prospects for possible improvements are also discussed.
B-mode polarization in Einstein-aether theory
Nakashima, Masahiro; Kobayashi, Tsutomu
2011-10-15
We study how the dynamical vector degree of freedom in modified gravity affects the CMB B-mode polarization in terms of the Einstein-aether theory. In this theory, vector perturbations can be generated from inflation; they can grow on superhorizon scales in the subsequent epochs and thereby leave imprints on the CMB B-mode polarization. We derive the linear perturbation equations in a covariant formalism, and compute the CMB B-mode polarization using the CAMB code modified so as to incorporate the effect of the aether vector field. We find that the amplitude of the B-mode signal from the aether field can be larger than the contribution from the inflationary gravitational waves for reasonable initial conditions and for a viable range of model parameters, in which perturbation modes propagate superluminally. We also give an analytic argument explaining the shape of the spectrum based on the tight coupling approximation.
Modifying gravity with the aether: An alternative to dark matter
Zlosnik, T. G; Ferreira, P. G; Starkman, G. D.
2007-02-15
There is evidence that Newton and Einstein's theories of gravity cannot explain the dynamics of a universe made up solely of baryons and radiation. To be able to understand the properties of galaxies, clusters of galaxies and the universe on the whole it has become commonplace to invoke the presence of dark matter. An alternative approach is to modify the gravitational field equations to accommodate observations. We propose a new class of gravitational theories in which we add a new degree of freedom, the Aether, in the form of a vector field that is coupled covariantly, but nonminimally, with the space-time metric. We explore the Newtonian and non-Newtonian limits, discuss the conditions for these theories to be consistent and explore their effect on cosmology.
Reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization
Shi, Xin; Zhao, Xiangmo; Hui, Fei; Ma, Junyan; Yang, Lan
2014-10-06
Clock synchronization in wireless sensor networks (WSNs) has been studied extensively in recent years and many protocols are put forward based on the point of statistical signal processing, which is an effective way to optimize accuracy. However, the accuracy derived from the statistical data can be improved mainly by sufficient packets exchange, which will consume the limited power resources greatly. In this paper, a reliable clock estimation using linear weighted fusion based on pairwise broadcast synchronization is proposed to optimize sync accuracy without expending additional sync packets. As a contribution, a linear weighted fusion scheme for multiple clock deviations is constructed with the collaborative sensing of clock timestamp. And the fusion weight is defined by the covariance of sync errors for different clock deviations. Extensive simulation results show that the proposed approach can achieve better performance in terms of sync overhead and sync accuracy.
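The fusion step described above, with weights defined by the error variances of the individual clock-deviation estimates, reduces in the independent-error case to classical minimum-variance (inverse-variance) weighting. A hedged sketch of that rule, with made-up offsets and variances rather than the paper's protocol data:

```python
def fuse(estimates, variances):
    """Linear weighted fusion of independent clock-deviation estimates.
    Weights are proportional to inverse error variance, which minimizes
    the variance of the fused estimate; the fused variance is
    1 / sum(1/var_i), always below the best single estimate's variance."""
    inv = [1.0 / v for v in variances]
    total = sum(inv)
    fused = sum((wi / total) * ei for wi, ei in zip(inv, estimates))
    fused_var = 1.0 / total
    return fused, fused_var

# Three hypothetical deviation estimates (microseconds) with their variances.
est, var = fuse([10.2, 9.8, 10.1], [0.04, 0.01, 0.02])
```

The most precise estimate (variance 0.01) dominates the fused value, and the fused variance is smaller than any individual one, which is the mechanism for improving accuracy without extra sync packets.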
Data Assimilation in the ADAPT Photospheric Flux Transport Model
Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; Arge, C. Nick
2015-03-17
Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.
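The ensemble Kalman update at the heart of schemes like this can be illustrated in its simplest form. The toy below is a stochastic ensemble Kalman filter update for a scalar state with a direct observation; it is an assumption-laden sketch, not the LETKF used by ADAPT (no localization, no transform formulation):

```python
import random
import statistics

def enkf_update(ensemble, obs, obs_var, rng):
    """Stochastic ensemble Kalman update for a scalar, directly observed
    state: forecast error variance P is estimated from the ensemble
    spread, the gain is K = P / (P + R), and each member assimilates a
    perturbed observation so the analysis spread stays statistically
    consistent."""
    p = statistics.variance(ensemble)
    k = p / (p + obs_var)
    return [x + k * ((obs + rng.gauss(0.0, obs_var ** 0.5)) - x)
            for x in ensemble]

rng = random.Random(0)
prior = [rng.gauss(2.0, 1.0) for _ in range(500)]       # forecast ensemble
posterior = enkf_update(prior, obs=0.0, obs_var=0.25, rng=rng)
```

The update pulls the ensemble mean toward the observation and shrinks the spread; in the full LETKF the same logic runs per grid point over the covariance-localized ensemble.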
Cosmological Ohm's law and dynamics of non-minimal electromagnetism
Hollenstein, Lukas; Jain, Rajeev Kumar; Urban, Federico R.
2013-01-01
The origin of large-scale magnetic fields in cosmic structures and the intergalactic medium is still poorly understood. We explore the effects of non-minimal couplings of electromagnetism on the cosmological evolution of currents and magnetic fields. In this context, we revisit the mildly non-linear plasma dynamics around recombination that are known to generate weak magnetic fields. We use the covariant approach to obtain a fully general and non-linear evolution equation for the plasma currents and derive a generalised Ohm law valid on large scales as well as in the presence of non-minimal couplings to cosmological (pseudo-)scalar fields. Due to the sizeable conductivity of the plasma and the stringent observational bounds on such couplings, we conclude that modifications of the standard (adiabatic) evolution of magnetic fields are severely limited in these scenarios. Even at scales well beyond a Mpc, any departure from flux freezing behaviour is inhibited.
The use of microdosimetric techniques in radiation protection measurements
Chen, J.; Hsu, H.H.; Casson, W.H.; Vasilik, D.G.
1997-01-01
A major objective of radiation protection is to determine the dose equivalent for routine radiation protection applications. As microdosimetry has developed over approximately three decades, its most important application has been in measuring radiation quality, especially in radiation fields of unknown or inadequately known energy spectra. In these radiation fields, determination of dose equivalent is not straightforward; however, the use of microdosimetric principles and techniques could solve this problem. In this paper, the authors discuss the measurement of lineal energy, a microscopic analog to linear energy transfer, and demonstrate the development and implementation of the variance-covariance method, a novel method in experimental microdosimetry. This method permits the determination of dose mean lineal energy, an essential parameter of radiation quality, in a radiation field of unknown spectrum, time-varying dose rate, and high dose rate. Real-time monitoring of changes in radiation quality can also be achieved by using microdosimetric techniques.
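The quantity the variance-covariance method targets, the dose-mean lineal energy, is defined as the ratio of the second to the first moment of the single-event lineal energy distribution. A minimal sketch of that definition on sampled event sizes (the sample values are illustrative, not measured data):

```python
def dose_mean_lineal_energy(samples):
    """Dose-mean lineal energy from sampled single-event lineal
    energies y: y_D = <y^2> / <y>, the second moment over the first
    moment of the frequency distribution f(y)."""
    m1 = sum(samples) / len(samples)
    m2 = sum(y * y for y in samples) / len(samples)
    return m2 / m1

y_d = dose_mean_lineal_energy([1.0, 2.0, 3.0, 4.0])
```

Because y_D weights events by their energy deposit, it exceeds the frequency mean (here 3.0 versus 2.5), which is why it serves as a radiation-quality parameter.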
Introduction to theory and analysis of resolved (and unresolved) neutron resonances via SAMMY
Larson, N.M.
1998-07-01
Neutron cross-section data are important for two distinct purposes: first, they provide insight into the nature of matter, thus assisting in the understanding of fundamental physics; second, they are needed for practical applications (e.g., for calculating when and how a reactor will become critical, or how much shielding is needed for storage of nuclear materials, and for medical applications). Neutron cross section data in the resolved-resonance region are generally obtained by time-of-flight experiments, which must be carefully analyzed if they are to be properly understood and utilized. In this paper, important features of the analysis process are discussed, with emphasis on the particular technique used in the analysis code SAMMY. Other features of the code are also described; these include such topics as calculation of group cross sections (including covariance matrices), generation and fitting of integral quantities, and extensions into the unresolved-resonance region and higher-energy regions.
Integral data analysis for resonance parameters determination
Larson, N.M.; Leal, L.C.; Derrien, H.
1997-09-01
Neutron time-of-flight experiments have long been used to determine resonance parameters. Those resonance parameters have then been used in calculations of integral quantities such as Maxwellian averages or resonance integrals, and results of those calculations in turn have been used as a criterion for acceptability of the resonance analysis. However, the calculations were inadequate because covariances on the parameter values were not included. In this report an effort to correct that deficiency is documented: the R-matrix analysis code SAMMY has been modified (1) to include important integral quantities directly within the resonance parameter analysis and (2) to determine the best fit to both differential (microscopic) and integral (macroscopic) data simultaneously. This modification was implemented because it is expected to have an impact on the intermediate-energy range that is important for criticality safety applications.
Uncertainty of silicon 1-MeV damage function
Danjaji, M.B.; Griffin, P.J.
1997-02-01
The electronics radiation hardness-testing community uses the ASTM E722-93 Standard Practice to define the energy dependence of the nonionizing neutron damage to silicon semiconductors. This neutron displacement damage response function is defined to be equal to the silicon displacement kerma as calculated from the ORNL Si cross-section evaluation. Experimental work has shown that observed damage ratios at various test facilities agree with the defined response function to within 5%. Here, a covariance matrix for the silicon 1-MeV neutron displacement damage function is developed. This uncertainty data will support the electronic radiation hardness-testing community and will permit silicon displacement damage sensors to be used in least squares spectrum adjustment codes.
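The least-squares adjustment codes mentioned above combine correlated measurements using exactly this kind of covariance information. As a hedged, minimal illustration of generalized least squares for a single parameter measured twice with correlated errors (2x2 covariance inverted in closed form; the numbers are invented):

```python
def gls_scalar(measurements, cov):
    """Generalized least-squares estimate of one parameter from repeated
    correlated measurements: x = (1^T C^-1 1)^-1 1^T C^-1 m.
    For the 2x2 case the inverse covariance is written out explicitly."""
    (a, b), (c, d) = cov
    det = a * d - b * c
    inv = [[d / det, -b / det],
           [-c / det, a / det]]
    w = [inv[0][0] + inv[0][1], inv[1][0] + inv[1][1]]   # row sums of C^-1
    x = sum(wi * mi for wi, mi in zip(w, measurements)) / sum(w)
    var = 1.0 / sum(w)
    return x, var

x, var = gls_scalar([10.0, 10.4], [[0.04, 0.01], [0.01, 0.09]])
```

With a positive off-diagonal term the weights differ from the naive inverse-variance ones; the estimate still lands between the two measurements, closer to the more precise one.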
Quark mass functions and pion structure in Minkowski space
Biernat, Elmer P.; Gross, Franz L.; Pena, Maria Teresa; Stadler, Alfred
2014-03-01
We present a study of the dressed quark mass function and the pion structure in Minkowski space using the Covariant Spectator Theory (CST). The quark propagators are dressed with the same kernel that describes the interaction between different quarks. We use an interaction kernel in momentum space that is a relativistic generalization of the linear confining q-qbar potential and a constant potential shift that defines the energy scale. The confining interaction has a Lorentz scalar part that is not chirally invariant by itself but decouples from the equations in the chiral limit and therefore allows the Nambu-Jona-Lasinio (NJL) mechanism to work. We adjust the parameters of our quark mass function calculated in Minkowski space to agree with lattice QCD data obtained in Euclidean space. Results of a calculation of the pion electromagnetic form factor in the relativistic impulse approximation using the same mass function are presented and compared with experimental data.
Quantum Operator Design for Lattice Baryon Spectroscopy
Lichtl, Adam
2007-09-06
A previously-proposed method of constructing spatially-extended gauge-invariant three-quark operators for use in Monte Carlo lattice QCD calculations is tested, and a methodology for using these operators to extract the energies of a large number of baryon states is developed. This work is part of a long-term project undertaken by the Lattice Hadron Physics Collaboration to carry out a first-principles calculation of the low-lying spectrum of QCD. The operators are assemblages of smeared and gauge-covariantly-displaced quark fields having a definite flavor structure. The importance of using smeared fields is dramatically demonstrated. It is found that quark field smearing greatly reduces the couplings to the unwanted high-lying short-wavelength modes, while gauge field smearing drastically reduces the statistical noise in the extended operators.
Natural star-products on symplectic manifolds and related quantum mechanical operators
Błaszak, Maciej; Domański, Ziemowit
2014-05-15
This paper considers the problem of defining natural star-products on symplectic manifolds admissible for the quantization of classical Hamiltonian systems. First, a star-product on the cotangent bundle of a Euclidean configuration space is constructed using a sequence of pairwise commuting vector fields. The connection with a covariant representation of such a star-product is also presented. Then, an extension of the construction to symplectic manifolds over flat and non-flat pseudo-Riemannian configuration spaces is discussed. Finally, a coordinate-free construction of the related quantum mechanical operators on the Hilbert space over the respective configuration space is presented. Highlights: Invariant representations of natural star-products on symplectic manifolds are considered. Star-products induced by flat and non-flat connections are investigated. Operator representations in Hilbert space of the considered star-algebras are constructed.
Electromagnetically superconducting phase of QCD vacuum induced by strong magnetic field
Chernodub, M. N. [CNRS, Laboratoire de Mathematiques et Physique Theorique, Universite Francois-Rabelais Tours, Federation Denis Poisson, Parc de Grandmont, 37200 Tours (France); Department of Physics and Astronomy, University of Gent, Krijgslaan 281, S9, B-9000 Gent (Belgium)
2011-05-23
In this talk we discuss our recent suggestion that the QCD vacuum in a sufficiently strong magnetic field (stronger than 10{sup 16} Tesla) may undergo a spontaneous transition to an electromagnetically superconducting state. The possible superconducting state is anisotropic (the vacuum exhibits superconductivity only along the axis of the uniform magnetic field) and inhomogeneous (in the transverse directions the vacuum structure shares similarity with the Abrikosov lattice of an ordinary type-II superconductor). The electromagnetic superconductivity of the QCD vacuum is suggested to occur due to emergence of specific quark-antiquark condensates which carry quantum numbers of electrically charged rho mesons. A Lorentz-covariant generalization of the London transport equations for the magnetic-field-induced superconductivity is given.
Vega, Richard Manuel; Parma, Edward J.; Naranjo, Gerald E.; Lippert, Lance L.; Vehar, David W.; Griffin, Patrick J.
2015-08-01
This document presents the facility-recommended characterization of the neutron, prompt gamma-ray, and delayed gamma-ray radiation fields in the Annular Core Research Reactor (ACRR) for the central cavity free-field environment with the 32-inch pedestal at the core centerline. The designation for this environment is ACRR-FF-CC-32-cl. The neutron, prompt gamma-ray, and delayed gamma-ray energy spectra, uncertainties, and covariance matrices are presented, as well as radial and axial neutron and gamma-ray fluence profiles within the experiment area of the cavity. Recommended constants are given to facilitate the conversion of various dosimetry readings into radiation metrics desired by experimenters. Representative pulse operations are presented with conversion examples. Acknowledgements: The authors wish to thank the Annular Core Research Reactor staff and the Radiation Metrology Laboratory staff for their support of this work. Thanks also to David Ames for his assistance in running MCNP on the Sandia parallel machines.
Chiral anomalies and zeta-function regularization
Reuter, M.
1985-03-15
The zeta-function method for regularizing determinants is used to calculate the chiral anomalies of several field-theory models. In SU(N) gauge theories without {gamma}{sub 5} couplings, the results of perturbation theory are obtained in an unambiguous manner for the full gauge theory as well as for the corresponding external-field problem. If axial-vector couplings are present, different anomalies occur for the two cases. The result for the full gauge theory is again uniquely determined; for its nongauge analog, however, ambiguities can arise. The connection between the basic path integral and the operator used to construct the heat kernel is investigated and the significance of its Hermiticity and gauge covariance are analyzed. The implications of the Wess-Zumino conditions are considered.
Final Scientific EFNUDAT Workshop
None
2011-10-06
The Final Scientific EFNUDAT Workshop, organized by the CERN/EN-STI group on behalf of the n_TOF Collaboration, will be held at CERN, Geneva (Switzerland) from 30 August to 2 September 2010 inclusive. EFNUDAT website: http://www.efnudat.eu. Topics of interest include: data evaluation; cross section measurements; experimental techniques; uncertainties and covariances; fission properties; current and future facilities. International Advisory Committee: C. Barreau (CENBG, France); T. Belgya (IKI KFKI, Hungary); E. Gonzalez (CIEMAT, Spain); F. Gunsing (CEA, France); F.-J. Hambsch (IRMM, Belgium); A. Junghans (FZD, Germany); R. Nolte (PTB, Germany); S. Pomp (TSL UU, Sweden). Workshop Organizing Committee: Enrico Chiaveri (Chairman); Marco Calviani; Samuel Andriamonje; Eric Berthoumieux; Carlos Guerrero; Roberto Losito; Vasilis Vlachoudis. Workshop Assistant: Géraldine Jean.
Problematic projection to the in-sample subspace for a kernelized anomaly detector
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Theiler, James; Grosklos, Guen
2016-03-07
We examine the properties and performance of kernelized anomaly detectors, with an emphasis on the Mahalanobis-distance-based kernel RX (KRX) algorithm. Although the detector generally performs well for high-bandwidth Gaussian kernels, it exhibits problematic (in some cases, catastrophic) performance for distances that are large compared to the bandwidth. By comparing KRX to two other anomaly detectors, we can trace the problem to a projection in feature space, which arises when a pseudoinverse is used on the covariance matrix in that feature space. Here, we show that a regularized variant of KRX overcomes this difficulty and achieves superior performance over a wide range of bandwidths.
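The regularization fix the abstract describes, replacing a pseudoinverse of the covariance with a well-conditioned inverse of C + eps*I, can be sketched for the plain (non-kernelized) RX detector in two dimensions. This is an illustrative toy with invented background points, not the authors' KRX implementation:

```python
def rx_score(data, x, eps=1e-3):
    """Mahalanobis-distance (RX-style) anomaly score using a regularized
    covariance C + eps*I instead of a pseudoinverse. 2-D case, so the
    inverse is written in closed form; eps keeps near-singular
    covariances well conditioned."""
    n = len(data)
    mean = [sum(col) / n for col in zip(*data)]
    centered = [[xi - mi for xi, mi in zip(row, mean)] for row in data]
    a = sum(r[0] * r[0] for r in centered) / n + eps   # C[0][0] + eps
    d = sum(r[1] * r[1] for r in centered) / n + eps   # C[1][1] + eps
    b = sum(r[0] * r[1] for r in centered) / n         # C[0][1]
    det = a * d - b * b
    dx, dy = x[0] - mean[0], x[1] - mean[1]
    return (d * dx * dx - 2 * b * dx * dy + a * dy * dy) / det

background = [[0.0, 0.0], [1.0, 1.0], [0.0, 1.0], [1.0, 0.0]]
score_in = rx_score(background, [0.5, 0.5])    # at the background mean
score_out = rx_score(background, [5.0, -5.0])  # far outside the cloud
```

A point at the background mean scores zero while a distant point scores high; in the kernelized setting the same regularization acts on the feature-space covariance.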
Unsaturated fractured rock characterization methods and data sets at the Apache Leap Tuff Site
Rasmussen, T.C.; Evans, D.D.; Sheets, P.J.; Blanford, J.H. [Arizona Univ., Tucson, AZ (USA). Dept. of Hydrology and Water Resources
1990-08-01
Performance assessment of high-level nuclear waste containment feasibility requires representative values of parameters as input, including parameter moments, distributional characteristics, and covariance structures between parameters. To meet this need, characterization methods and data sets for interstitial, hydraulic, pneumatic and thermal parameters for a slightly welded fractured tuff at the Apache Leap Tuff Site situated in central Arizona are reported in this document. The data sets include the influence of matric suction on measured parameters. Spatial variability is investigated by sampling along nine boreholes at regular distances. Laboratory parameter estimates for 105 core segments are provided, as well as field estimates centered on the intervals where the core segments were collected. Measurement uncertainty is estimated by repetitively testing control samples. 31 refs., 10 figs., 21 tabs.
The relationship between interannual and long-term cloud feedbacks
Zhou, Chen; Zelinka, Mark D.; Dessler, Andrew E.; Klein, Stephen A.
2015-12-11
The analyses of Coupled Model Intercomparison Project phase 5 simulations suggest that climate models with more positive cloud feedback in response to interannual climate fluctuations also have more positive cloud feedback in response to long-term global warming. Ensemble mean vertical profiles of cloud change in response to interannual and long-term surface warming are similar, and the ensemble mean cloud feedback is positive on both timescales. However, the average long-term cloud feedback is smaller than the interannual cloud feedback, likely due to differences in surface warming pattern on the two timescales. Low cloud cover (LCC) change in response to interannual and long-term global surface warming is found to be well correlated across models and explains over half of the covariance between interannual and long-term cloud feedback. In conclusion, the intermodel correlation of LCC across timescales likely results from model-specific sensitivities of LCC to sea surface warming.
Pritychenko, B.; Mughabghab, S.F.
2012-12-15
We present calculations of neutron thermal cross sections, Westcott factors, resonance integrals, Maxwellian-averaged cross sections and astrophysical reaction rates for 843 ENDF materials using data from the major evaluated nuclear libraries and European activation file. Extensive analysis of newly-evaluated neutron reaction cross sections, neutron covariances, and improvements in data processing techniques motivated us to calculate nuclear industry and neutron physics quantities, produce s-process Maxwellian-averaged cross sections and astrophysical reaction rates, systematically calculate uncertainties, and provide additional insights on currently available neutron-induced reaction data. Nuclear reaction calculations are discussed and new results are presented. Due to space limitations, the present paper contains only calculated Maxwellian-averaged cross sections and their uncertainties. The complete data sets for all results are published in the Brookhaven National Laboratory report.
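A Maxwellian-averaged cross section of the kind computed here is defined by <sigma> = (2/sqrt(pi)) (kT)^-2 ∫ sigma(E) E exp(-E/kT) dE. A minimal numerical sketch (simple quadrature, a 1/v toy cross section rather than evaluated library data) shows the definition in action; for a 1/v shape normalized at E_th the analytic answer is sqrt(E_th/kT):

```python
import math

def macs(sigma, kT, n=200000, emax_factor=40.0):
    """Maxwellian-averaged cross section by simple quadrature:
    <sigma> = (2/sqrt(pi)) * (kT)^-2 * integral_0^inf of
    sigma(E) * E * exp(-E/kT) dE, truncated at emax_factor*kT."""
    emax = emax_factor * kT
    h = emax / n
    total = 0.0
    for i in range(1, n):              # interior points; endpoints vanish
        e = i * h
        total += sigma(e) * e * math.exp(-e / kT)
    total *= h
    return (2.0 / math.sqrt(math.pi)) * total / kT ** 2

# 1/v cross section normalized to 1 barn at thermal energy E_th = 0.0253 eV;
# averaging at kT = E_th should return ~1 barn (sqrt(E_th/kT) = 1).
E_th = 0.0253
s = macs(lambda e: math.sqrt(E_th / e), kT=E_th)
```

For s-process applications the same integral is evaluated at kT of tens of keV on the evaluated sigma(E), with the covariance data propagated through it.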
Galileon gravity and its relevance to late time cosmic acceleration
Gannouji, Radouane; Sami, M.
2010-07-15
We consider the covariant Galileon gravity taking into account the third order and fourth order scalar field Lagrangians L{sub 3}({pi}) and L{sub 4}({pi}), consisting of three and four {pi}'s with four and five derivatives acting on them, respectively. The background dynamical equations are set up for the system under consideration and the stability of the self-accelerating solution is demonstrated in a general setting. We extended this study to the general case of the fifth order theory. For the spherically symmetric static background, we spell out conditions for the suppression of fifth force effects mediated by the Galileon field {pi}. We study field perturbations in the fixed background and investigate the conditions for their causal propagation. We also briefly discuss metric fluctuations and derive an evolution equation for matter perturbations in Galileon gravity.
Hamiltonian Light-Front Field Theory in a Basis Function Approach
Vary, J.P.; Honkanen, H.; Li, Jun; Maris, P.; Brodsky, S.J.; Harindranath, A.; de Teramond, G.F.; Sternberg, P.; Ng, E.G.; Yang, C.
2009-05-15
Hamiltonian light-front quantum field theory constitutes a framework for the non-perturbative solution of invariant masses and correlated parton amplitudes of self-bound systems. By choosing the light-front gauge and adopting a basis function representation, we obtain a large, sparse, Hamiltonian matrix for mass eigenstates of gauge theories that is solvable by adapting the ab initio no-core methods of nuclear many-body theory. Full covariance is recovered in the continuum limit, the infinite matrix limit. There is considerable freedom in the choice of the orthonormal and complete set of basis functions with convenience and convergence rates providing key considerations. Here, we use a two-dimensional harmonic oscillator basis for transverse modes that corresponds with eigensolutions of the soft-wall AdS/QCD model obtained from light-front holography. We outline our approach, present illustrative features of some non-interacting systems in a cavity and discuss the computational challenges.
A stochastic diffusion process for Lochner's generalized Dirichlet distribution
Bakosi, J.; Ristorcelli, J. R.
2013-10-01
The method of potential solutions of Fokker-Planck equations is used to develop a transport equation for the joint probability of N stochastic variables with Lochner’s generalized Dirichlet distribution as its asymptotic solution. Individual samples of a discrete ensemble, obtained from the system of stochastic differential equations equivalent to the Fokker-Planck equation developed here, satisfy a unit-sum constraint at all times and ensure a bounded sample space, similarly to the process developed previously for the Dirichlet distribution. Consequently, the generalized Dirichlet diffusion process may be used to represent realizations of a fluctuating ensemble of N variables subject to a conservation principle. Compared to the Dirichlet distribution and process, the additional parameters of the generalized Dirichlet distribution allow a more general class of physical processes to be modeled with a more general covariance matrix.
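The unit-sum constraint central to this construction can be seen in the standard stick-breaking sampler for the generalized Dirichlet distribution. This is a sketch of the static distribution only (not the diffusion process itself), with illustrative shape parameters:

```python
import random

def gen_dirichlet_sample(a, b, rng):
    """One draw from Lochner's generalized Dirichlet distribution via
    stick breaking: Y_i ~ Beta(a_i, b_i), X_i = Y_i * prod_{j<i}(1 - Y_j),
    with the final component taking the leftover stick, so the sample
    satisfies the unit-sum constraint by construction."""
    x, remaining = [], 1.0
    for ai, bi in zip(a, b):
        y = rng.betavariate(ai, bi)
        x.append(y * remaining)
        remaining *= 1.0 - y
    x.append(remaining)     # closes the unit-sum constraint
    return x

rng = random.Random(42)
sample = gen_dirichlet_sample([2.0, 3.0, 4.0], [5.0, 4.0, 3.0], rng)
```

Every draw lies on the simplex, mirroring how individual trajectories of the diffusion process stay within the bounded, unit-sum sample space at all times.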
Thermodynamics in variable speed of light theories
Racker, Juan [CONICET, Centro Atomico Bariloche, Avenida Bustillo 9500 (8400), San Carlos De Bariloche (Argentina); Facultad de Ciencias Astronomicas y Geofisicas, Universidad Nacional de La Plata, Paseo del Bosque S/N (1900), La Plata (Argentina); Sisterna, Pablo [Facultad de Ciencias Exactas y Naturales, Universidad Nacional de Mar del Plata, Funes 3350 (7600), Mar del Plata (Argentina); Vucetich, Hector [Facultad de Ciencias Astronomicas y Geofisicas, Universidad Nacional de La Plata, Paseo del Bosque S/N (1900), La Plata (Argentina)
2009-10-15
The perfect fluid in the context of a covariant variable speed of light theory proposed by J. Magueijo is studied. On the one hand the modified first law of thermodynamics together with a recipe to obtain equations of state are obtained. On the other hand the Newtonian limit is performed to obtain the nonrelativistic hydrostatic equilibrium equation for the theory. The results obtained are used to determine the time variation of the radius of Mercury induced by the variability of the speed of light (c), and the scalar contribution to the luminosity of white dwarfs. Using a bound for the change of that radius and combining it with an upper limit for the variation of the fine structure constant, a bound on the time variation of c is set. An independent bound is obtained from luminosity estimates for Stein 2015B.
Leclerc, Monique Y.
2014-11-17
This final report presents the main activities and results of the project “A Carbon Flux Super Site: New Insights and Innovative Atmosphere-Terrestrial Carbon Exchange Measurements and Modeling” from 10/1/2006 to 9/30/2014. It describes the new AmeriFlux tower site (Aiken) at the Savannah River Site (SC) and its instrumentation; long-term eddy-covariance, sodar, microbarograph, soil, and other measurements at the site; intensive tracer-experiment field campaigns at the Carbon Flux Super Site, SC, in 2009 and at the ARM-CF site, Lamont, OK; and experiments in Plains, GA. The main results on tracer experiments and modeling, on low-level jet characteristics and their impact on fluxes, on gravity waves and their influence on eddy fluxes, and other results are briefly described in the report.
Working Party on International Nuclear Data Evaluation Cooperation (WPEC)
Dupont, E.; Chadwick, M. B.; Danon, Y.; De Saint Jean, C.; Dunn, M.; Fischer, U.; Forrest, R. A.; Fukahori, T.; Ge, Z.; Harada, H.; Herman, M.; Igashira, M.; Ignatyuk, A.; Ishikawa, M.; Iwamoto, O.; Jacqmin, R.; Kahler, A. C.; Kawano, T.; Koning, A. J.; Leal, L.; Lee, Y. O.; McKnight, R.; McNabb, D.; Mills, R. W.; Palmiotti, G.; Plompen, A.; Salvatores, M.; Schillebeeckx, P.
2014-06-01
The OECD Nuclear Energy Agency (NEA) organizes cooperation between the major nuclear data evaluation projects in the world. Moreover, the NEA Working Party on International Nuclear Data Evaluation Cooperation (WPEC) was established to promote the exchange of information on nuclear data evaluation, measurement, nuclear model calculation, validation, and related topics, and to provide a framework for cooperative activities between the participating projects. The working party assesses nuclear data improvement needs and addresses these needs by initiating joint activities in the framework of dedicated WPEC subgroups. Studies recently completed comprise a number of works related to nuclear data covariance and associated processing issues, as well as more specific studies related to the resonance parameter representation in the unresolved resonance region, the gamma production from fission product capture reactions, the ^{235}U capture cross section, the EXFOR database, and the improvement of nuclear data for advanced reactor systems. Ongoing activities focus on the evaluation of ^{239}Pu in the resonance region, scattering angular distribution in the fast energy range, and reporting/usage of experimental data for evaluation in the resolved resonance region. New activities include two subgroups on improved fission product yield evaluation methodologies and on modern nuclear database structures. Some future activities under discussion include a pilot project for a Collaborative International Evaluated Library Organization (CIELO) and methods to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data. In addition to the above mentioned short-term task-oriented subgroups, WPEC also hosts a longer-term subgroup charged with reviewing and compiling the most important nuclear data requirements in a high priority request list (HPRL).
Burkhardt, Christoph; Wieler, Rainer; Kleine, Thorsten; Dauphas, Nicolas
2012-07-01
Progressive dissolution of the Murchison carbonaceous chondrite with acids of increasing strengths reveals large internal W isotope variations that reflect a heterogeneous distribution of s- and r-process W isotopes among the components of primitive chondrites. At least two distinct carriers of nucleosynthetic W isotope anomalies must be present, which were produced in different nucleosynthetic environments. The co-variation of {sup 182}W/{sup 184}W and {sup 183}W/{sup 184}W in the leachates follows a linear trend that is consistent with a mixing line between terrestrial W and a presumed s-process-enriched component. The composition of the s-enriched component agrees reasonably well with that predicted by the stellar model of s-process nucleosynthesis. The co-variation of {sup 182}W/{sup 184}W and {sup 183}W/{sup 184}W in the leachates provides a means for correcting the measured {sup 182}W/{sup 184}W and {sup 182}W/{sup 183}W of Ca-Al-rich inclusions (CAI) for nucleosynthetic anomalies using the isotopic variations in {sup 183}W/{sup 184}W. This new correction procedure is different from that used previously, and results in a downward shift of the initial {epsilon}{sup 182}W of CAI to -3.51 {+-} 0.10 (where {epsilon}{sup 182}W is the variation in 0.01% of the {sup 182}W/{sup 183}W ratio relative to Earth's mantle). This revision leads to Hf-W model ages of core formation in iron meteorite parent bodies that are {approx}2 Myr younger than previously calculated. The revised Hf-W model ages are consistent with CAI being the oldest solids formed in the solar system, and indicate that core formation in some planetesimals occurred within {approx}2 Myr of the beginning of the solar system.
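The epsilon notation used throughout this abstract expresses the deviation of an isotope ratio from the terrestrial standard in parts per 10^4 (units of 0.01%). A minimal sketch of the conversion, with an invented ratio chosen so the deviation is -3.5 epsilon units:

```python
def epsilon(r_sample, r_standard):
    """Epsilon notation for isotope ratios: deviation from the
    terrestrial standard in parts per 10^4, i.e. units of 0.01%."""
    return (r_sample / r_standard - 1.0) * 1.0e4

# A ratio 0.035% below the standard gives eps = -3.5, the scale on which
# the revised initial 182W/183W of CAI (-3.51 +/- 0.10) is quoted.
eps = epsilon(0.99965, 1.0)
```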
Massive graviton on arbitrary background: derivation, syzygies, applications
Bernard, Laura; Deffayet, Cédric; Strauss, Mikael von
2015-06-23
We give the detailed derivation of the fully covariant form of the quadratic action and the derived linear equations of motion for a massive graviton in an arbitrary background metric (which were presented in arXiv:1410.8302 [hep-th]). Our starting point is the de Rham-Gabadadze-Tolley (dRGT) family of ghost-free massive gravities and, using a simple model of this family, we are able to express this action and these equations of motion in terms of a single metric in which the graviton propagates, hence removing in particular the need for a “reference metric” which is present in the nonperturbative formulation. We show further how 5 covariant constraints can be obtained, including one which leads to the tracelessness of the graviton on flat space-time and removes the Boulware-Deser ghost. This last constraint involves powers and combinations of the curvature of the background metric. The 5 constraints are obtained for a background metric which is unconstrained, i.e. which does not have to obey the background field equations. We then apply these results to the case of Einstein space-times, where we show that the 5 constraints become trivial, and Friedmann-Lemaître-Robertson-Walker space-times, for which we correct in particular some results that appeared elsewhere. To reach our results, we derive several nontrivial identities, syzygies, involving the graviton field, its derivatives and the background metric curvature. These identities have their own interest. We also discover that there exist backgrounds for which the dRGT equations cannot be unambiguously linearized.
Optimizing the choice of spin-squeezed states for detecting and characterizing quantum processes
Rozema, Lee A.; Mahler, Dylan H.; Blume-Kohout, Robin; Steinberg, Aephraim M.
2014-11-07
Quantum metrology uses quantum states with no classical counterpart to measure a physical quantity with extraordinary sensitivity or precision. Most such schemes characterize a dynamical process by probing it with a specially designed quantum state. The success of such a scheme usually relies on the process belonging to a particular one-parameter family. If this assumption is violated, or if the goal is to measure more than one parameter, a different quantum state may perform better. In the most extreme case, we know nothing about the process and wish to learn everything. This requires quantum process tomography, which demands an informationally complete set of probe states. It is very convenient if this set is group covariant, i.e., each element is generated by applying an element of the quantum system's natural symmetry group to a single fixed fiducial state. In this paper, we consider metrology with 2-photon (biphoton) states and report experimental studies of different states' sensitivity to small, unknown collective SU(2) rotations [SU(2) jitter]. Maximally entangled N00N states are the most sensitive detectors of such a rotation, yet they are also among the worst at fully characterizing an a priori unknown process. We identify (and confirm experimentally) the best SU(2)-covariant set for process tomography; these states are all less entangled than the N00N state, and are characterized by the fact that they form a 2-design.
Observed drag coefficients in high winds in the near offshore of the South China Sea
Bi, Xueyan; Liu, Yangan; Gao, Zhiqiu; Liu, Feng; Song, Qingtao; Huang, Jian; Huang, Huijun; Mao, Weikang; Liu, Chunxia
2015-07-14
This paper investigates the relationships between friction velocity, 10 m drag coefficient, and 10 m wind speed using data collected at two offshore observation towers (one over the sea and the other on an island) from seven typhoon episodes in the South China Sea from 2008 to 2014. The two towers were placed in areas with different water depths along a shore-normal line. The depth of water at the tower over the sea averages about 15 m, and the depth of water near the island is about 10 m. The observed maximum 10 min average wind speed at a height of 10 m is about 32 m s⁻¹. Momentum fluxes derived from three methods (eddy covariance, inertial dissipation, and flux profile) are compared. The momentum fluxes derived from the flux profile method are larger (smaller) over the sea (on the island) than those from the other two methods. The relationship between the 10 m drag coefficient and the 10 m wind speed is examined by use of the data obtained by the eddy covariance method. The drag coefficient first decreases with increasing 10 m wind speed when the wind speeds are 5–10 m s⁻¹, then increases and reaches a peak value of 0.002 around a wind speed of 18 m s⁻¹. The drag coefficient decreases with increasing 10 m wind speed when 10 m wind speeds are 18–27 m s⁻¹. A comparison of the measurements from the two towers shows that the 10 m drag coefficient from the tower in 10 m water depth is about 40% larger than that from the tower in 15 m water depth when the 10 m wind speed is less than 10 m s⁻¹. Above this, the difference in the 10 m drag coefficients of the two towers disappears.
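The eddy-covariance estimate behind these drag coefficients can be sketched as follows; the drag coefficient follows from the friction velocity as C_D = (u*/U10)². The synthetic time series and magnitudes below are illustrative assumptions, not the tower data:

```python
import numpy as np

def drag_coefficient(u, w, U10):
    """Friction velocity and 10 m drag coefficient from the eddy
    covariance of streamwise (u) and vertical (w) wind components."""
    u_p = u - u.mean()            # turbulent fluctuation u'
    w_p = w - w.mean()            # turbulent fluctuation w'
    uw = np.mean(u_p * w_p)       # kinematic momentum flux <u'w'>
    u_star = np.sqrt(abs(uw))     # friction velocity u* (m/s)
    cd = (u_star / U10) ** 2      # C_D = (u*/U10)^2
    return u_star, cd

# Synthetic, correlated fluctuations tuned so C_D lands near the 0.002
# peak reported above at U10 = 18 m/s (illustrative only)
rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.5, 100_000)                   # vertical wind w'
u = 18.0 - 2.6 * w + rng.normal(0.0, 1.0, 100_000)  # downward momentum flux
u_star, cd = drag_coefficient(u, w, U10=18.0)
print(round(u_star, 2), round(cd, 4))  # C_D close to the reported peak
```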
DARK FLUID: A UNIFIED FRAMEWORK FOR MODIFIED NEWTONIAN DYNAMICS, DARK MATTER, AND DARK ENERGY
Zhao Hongsheng; Li Baojiu E-mail: b.li@damtp.cam.ac.u
2010-03-20
Empirical theories of dark matter (DM) like modified Newtonian dynamics (MOND) gravity and of dark energy (DE) like f(R) gravity were motivated by astronomical data. But could these theories be branches rooted in a more general and hence generic framework? Here we propose a very generic Lagrangian of such a framework based on simple dimensional analysis and covariant symmetry requirements, and explore various outcomes in a top-down fashion. The desired effects of quintessence plus cold DM particle fields or MOND-like scalar field(s) are shown to be largely achievable by one vector field only. Our framework preserves the covariant formulation of general relativity, but allows the expanding physical metric to be bent by a single new species of dark fluid flowing in spacetime. Its non-uniform stress tensor and current vector are simple functions of a vector field with variable norm, not coupled with the baryonic fluid and the four-vector potential of the photon fluid. The dark fluid framework generically branches into a continuous spectrum of theories with DE and DM effects, including the f(R) gravity, tensor-vector-scalar-like theories, Einstein-Aether, and νΛ theories as limiting cases. When the vector field degenerates into a pure scalar field, we obtain the physics for quintessence. Choices of parameters can be made to pass Big Bang nucleosynthesis, parameterized post-Newtonian, and causality constraints. In this broad setting we emphasize the non-constant dynamical field behind the cosmological constant effect, and highlight plausible corrections beyond the classical MOND predictions.
Working Party on International Nuclear Data Evaluation Cooperation (WPEC)
Giuseppe Palmiotti
2014-06-01
The OECD Nuclear Energy Agency (NEA) is organizing the cooperation between the major nuclear data evaluation projects in the world. The NEA Working Party on International Nuclear Data Evaluation Cooperation (WPEC) was established to promote the exchange of information on nuclear data evaluation, measurement, nuclear model calculation, validation, and related topics, and to provide a framework for cooperative activities between the participating projects. The working party assesses nuclear data improvement needs and addresses these needs by initiating joint activities in the framework of dedicated WPEC subgroups. Studies recently completed comprise a number of works related to nuclear data covariance and associated processing issues, as well as more specific studies related to the resonance parameter representation in the unresolved resonance region, the gamma production from fission-product capture reactions, the U-235 capture cross-section, the EXFOR database, and the improvement of nuclear data for advanced reactor systems. Ongoing activities focus on the evaluation of Pu-239 in the resonance region, scattering angular distribution in the fast energy range, and reporting/usage of experimental data for evaluation in the resolved resonance region. New activities include two new subgroups on improved fission product yield evaluation methodologies and on modern nuclear database structures. Future activities under discussion include a pilot project of a Collaborative International Evaluated Library (CIELO) and methods to provide feedback from nuclear and covariance data adjustment for improvement of nuclear data. In addition to the above mentioned short-term, task-oriented subgroups, the WPEC also hosts a longer-term subgroup charged with reviewing and compiling the most important nuclear data requirements in a high priority request list (HPRL).
Nuclei at extreme conditions. A relativistic study
Afanasjev, Anatoli
2014-11-14
The major goals of the current project were further development of covariant density functional theory (CDFT), better understanding of its features, its application to different nuclear structure and nuclear astrophysics phenomena, and training of graduate and undergraduate students. The investigations have proceeded in a number of directions which are discussed in detail in the “Accomplishments” part of this report. We have studied the role of isovector and isoscalar proton-neutron pairings in rotating nuclei; based on available experimental data, it was concluded that there is no evidence for the existence of isoscalar proton-neutron pairing. A generalized theoretical approach has been developed for pycnonuclear reaction rates in the crust of neutron stars and the interior of white dwarfs. Using this approach, an extensive database for a considerable number of pycnonuclear reactions involving stable and neutron-rich light nuclei has been created; it can be used in the future for the study of various nuclear burning phenomena in different environments. Time-odd mean fields and their manifestations in terminating states and in non-rotating and rotating nuclei have been studied in the framework of covariant density functional theory. Contrary to non-relativistic density functional theories, these fields, which are important for a proper description of nuclear systems with broken time-reversal symmetry, are uniquely defined in the CDFT framework. Hyperdeformed nuclear shapes (with semi-axis ratio 2.5:1 and larger) have been studied in the Z = 40-58 part of the nuclear chart. We strongly believe that such shapes could be studied experimentally in the future with the full-scale GRETA detector.
Dilling, Thomas J.; Bae, Kyounghwa; Paulus, Rebecca; Watkins-Bruner, Deborah; Garden, Adam S.; Forastiere, Arlene; Kian Ang, K.; Movsas, Benjamin
2011-11-01
Purpose: We investigated the impact of race, in conjunction with gender and partner status, on locoregional control (LRC) and overall survival (OS) in three head and neck trials conducted by the Radiation Therapy Oncology Group (RTOG). Methods and Materials: Patients from RTOG studies 9003, 9111, and 9703 were included. Patients were stratified by treatment arms. Covariates of interest were partner status (partnered vs. non-partnered), race (white vs. non-white), and sex (female vs. male). Chi-square testing demonstrated homogeneity across treatment arms. Hazard ratios (HRs) were used to estimate time-to-event outcomes. Unadjusted and adjusted HRs were calculated for all covariates with associated 95% confidence intervals (CIs) and p values. Results: A total of 1,736 patients were analyzed. Unpartnered males had inferior OS rates compared to partnered females (adjusted HR = 1.22, 95% CI, 1.09-1.36), partnered males (adjusted HR = 1.20, 95% CI, 1.09-1.28), and unpartnered females (adjusted HR = 1.20, 95% CI, 1.09-1.32). White females had superior OS compared with white males, non-white females, and non-white males. Non-white males had inferior OS compared to white males. Partnered whites had improved OS relative to partnered non-white, unpartnered white, and unpartnered non-white patients. Unpartnered males had inferior LRC compared to partnered males (adjusted HR = 1.26, 95% CI, 1.09-1.46) and unpartnered females (adjusted HR = 1.30, 95% CI, 1.05-1.62). White females had LRC superior to non-white males and females. White males had improved LRC compared to non-white males. Partnered whites had improved LRC compared to partnered and unpartnered non-white patients. Unpartnered whites had improved LRC compared to unpartnered non-whites. Conclusions: Race, gender, and partner status had impacts on both OS and locoregional failure, both singly and in combination.
The L_X-M relation of Clusters of Galaxies
Rykoff, E.S.; Evrard, A.E.; McKay, T.A.; Becker, M.R.; Johnston, D.E.; Koester, B.P.; Nord, B.; Rozo, E.; Sheldon, E.S.; Stanek, R.; Wechsler, R.H.
2008-05-16
We present a new measurement of the scaling relation between X-ray luminosity and total mass for 17,000 galaxy clusters in the maxBCG cluster sample. Stacking sub-samples within fixed ranges of optical richness, N200, we measure the mean 0.1-2.4 keV X-ray luminosity,
Pigni, Marco T; Francis, Matthew W; Gauld, Ian C
2015-01-01
A recent implementation of ENDF/B-VII.1 independent fission product yields and nuclear decay data identified inconsistencies in the data caused by the use of updated nuclear decay schemes in the decay sub-library that are not reflected in the legacy fission product yield data. Recent changes in the decay data sub-library, particularly the delayed neutron branching fractions, result in calculated fission product concentrations that are incompatible with the cumulative fission yields in the library, and also with experimental measurements. A comprehensive set of independent fission product yields was generated for thermal and fission spectrum neutron induced fission of 235,238U and 239,241Pu in order to provide a preliminary assessment of the updated fission product yield data consistency. These updated independent fission product yields were used in the ORIGEN code to compare calculated fission product inventories with experimentally measured inventories, with particular attention given to the noble gases. An important outcome of this work is the development of the fission product yield covariance data necessary for fission product uncertainty quantification. The evaluation methodology combines a sequential Bayesian method, to guarantee consistency between independent and cumulative yields, with the physical constraints on the independent yields. This work was motivated by the need to improve the performance of the ENDF/B-VII.1 library for stable and long-lived cumulative yields, given the inconsistency between the ENDF/B-VII.1 fission product yield and decay data sub-libraries. The revised fission product yields and the new covariance data are proposed as a revision to the fission yield data currently in ENDF/B-VII.1.
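The sequential Bayesian step that enforces consistency between yields can be sketched as a Kalman-style update in which a linear relation (e.g. a mass or charge balance) is treated as a pseudo-observation; the matrices and numbers below are illustrative toys, not the evaluated data.

```python
import numpy as np

def bayesian_constrained_update(y, P, C, d, R):
    """One Kalman-style update: treat the linear relation C y = d as an
    observation with covariance R, returning posterior mean and covariance."""
    S = C @ P @ C.T + R                     # innovation covariance
    K = P @ C.T @ np.linalg.inv(S)          # gain
    y_post = y + K @ (d - C @ y)            # posterior yields
    P_post = (np.eye(len(y)) - K @ C) @ P   # posterior covariance
    return y_post, P_post

# Toy example: three independent yields softly constrained to sum to 2.0
y0 = np.array([0.9, 0.7, 0.5])
P0 = np.diag([0.01, 0.01, 0.01])
C = np.ones((1, 3))
d = np.array([2.0])
R = np.array([[1e-6]])                      # near-hard constraint
y1, P1 = bayesian_constrained_update(y0, P0, C, d, R)
print(y1.round(3), round(y1.sum(), 3))      # sum pulled onto the constraint
```

Note that the update also introduces the negative off-diagonal covariances between yields that a sum constraint implies, which is the kind of covariance information discussed above.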
Patterns of NPP, GPP, Respiration and NEP During Boreal Forest Succession
Goulden, Michael L.; McMillan, Andrew; Winston, Greg; Rocha, Adrian; Manies, Kristen; Harden, Jennifer W.; Bond-Lamberty, Benjamin
2010-12-15
We deployed a mesonet of year-round eddy covariance towers in boreal forest stands that last burned in ~1850, ~1930, 1964, 1981, 1989, 1998, and 2003 to understand how CO2 exchange changes during secondary succession. The strategy of using multiple methods, including biometry and micrometeorology, worked well. In particular, the three independent measures of NEP during succession gave similar results. A stratified and tiered approach to deploying eddy covariance systems that combines many lightweight and portable towers with a few permanent ones is likely to maximize the science return for a fixed investment. The existing conceptual models did a good job of capturing the dominant patterns of NPP, GPP, Respiration and NEP during succession. The initial loss of carbon following disturbance was neither as protracted nor as large as predicted. This muted response reflects both the rapid regrowth of vegetation following fire and the prevalence of standing coarse woody debris following the fire, which is thought to decay slowly. In general, the patterns of forest recovery from disturbance should be expected to vary as a function of climate, ecosystem type and disturbance type. The NPP decline at the older stands appears related to increased Rauto rather than decreased GPP. The increase in Rauto in the older stands does not appear to be caused by accelerated maintenance respiration with increased biomass, and more likely involves increased allocation to fine root turnover, root metabolism, alternative forms of respiration, mycorrhizal relationships, or root exudates, possibly associated with progressive nutrient limitation. Several studies have now described a similar pattern of NEP following boreal fire, with 10-to-15 years of modest carbon loss followed by 50-to-100 years of modest carbon gain.
This trend has been sufficiently replicated and evaluated using independent techniques that it can be used to quantify the likely effects of changes in boreal fire frequency and stand age structure on regional carbon balance.
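The flux quantities above are tied together by simple budget identities (NPP = GPP − Rauto, NEP = NPP − Rhet); a toy illustration with invented fluxes in g C m⁻² yr⁻¹, not values from the chronosequence:

```python
def carbon_budget(gpp, r_auto, r_het):
    """Net primary and net ecosystem production from gross production and
    the autotrophic / heterotrophic respiration components."""
    npp = gpp - r_auto          # net primary production
    nep = npp - r_het           # net ecosystem production
    return npp, nep

# Invented fluxes only: a recently burned stand loses carbon (NEP < 0);
# an older stand with elevated Rauto has lower NPP despite higher GPP.
young = carbon_budget(gpp=800.0, r_auto=400.0, r_het=450.0)
old = carbon_budget(gpp=900.0, r_auto=650.0, r_het=200.0)
print(young, old)  # (400.0, -50.0) (250.0, 50.0)
```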
TU-F-18A-02: Iterative Image-Domain Decomposition for Dual-Energy CT
Niu, T; Dong, X; Petrongolo, M; Zhu, L
2014-06-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its material decomposition capability. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces its clinical value. Existing de-noising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. We propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. It includes the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. Performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. Results are compared to those generated using direct matrix inversion with no noise suppression, a de-noising method applied on the decomposed images, and an existing algorithm with similar formulation but with an edge-preserving regularization term. Results: On the Catphan phantom, our method retains the same spatial resolution as the CT images before decomposition while reducing the noise standard deviation of decomposed images by over 98%. The other methods either degrade spatial resolution or achieve inferior low-contrast detectability. Also, our method yields lower electron density measurement error than direct matrix inversion and reduces error variation by over 97%. On the head phantom, it reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusion: We propose an iterative image-domain decomposition method for DECT.
The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. The proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability. This work is supported by a Varian MRA grant.
Iterative image-domain decomposition for dual-energy CT
Niu, Tianye; Dong, Xue; Petrongolo, Michael; Zhu, Lei
2014-04-15
Purpose: Dual energy CT (DECT) imaging plays an important role in advanced imaging applications due to its capability of material decomposition. Direct decomposition via matrix inversion suffers from significant degradation of image signal-to-noise ratios, which reduces the clinical value of DECT. Existing denoising algorithms achieve suboptimal performance since they suppress image noise either before or after the decomposition and do not fully explore the noise statistical properties of the decomposition process. In this work, the authors propose an iterative image-domain decomposition method for noise suppression in DECT, using the full variance-covariance matrix of the decomposed images. Methods: The proposed algorithm is formulated in the form of least-square estimation with smoothness regularization. Based on the design principles of a best linear unbiased estimator, the authors include the inverse of the estimated variance-covariance matrix of the decomposed images as the penalty weight in the least-square term. The regularization term enforces the image smoothness by calculating the square sum of neighboring pixel value differences. To retain the boundary sharpness of the decomposed images, the authors detect the edges in the CT images before decomposition. These edge pixels have small weights in the calculation of the regularization term. Distinct from the existing denoising algorithms applied on the images before or after decomposition, the method has an iterative process for noise suppression, with decomposition performed in each iteration. The authors implement the proposed algorithm using a standard conjugate gradient algorithm. The method performance is evaluated using an evaluation phantom (Catphan 600) and an anthropomorphic head phantom. 
The results are compared with those generated using direct matrix inversion with no noise suppression, a denoising method applied on the decomposed images, and an existing algorithm with similar formulation as the proposed method but with an edge-preserving regularization term. Results: On the Catphan phantom, the method maintains the same spatial resolution on the decomposed images as that of the CT images before decomposition (8 pairs/cm) while significantly reducing their noise standard deviation. Compared to that obtained by the direct matrix inversion, the noise standard deviation in the images decomposed by the proposed algorithm is reduced by over 98%. Without considering the noise correlation properties in the formulation, the denoising scheme degrades the spatial resolution to 6 pairs/cm for the same level of noise suppression. Compared to the edge-preserving algorithm, the method achieves better low-contrast detectability. A quantitative study is performed on the contrast-rod slice of Catphan phantom. The proposed method achieves lower electron density measurement error as compared to that by the direct matrix inversion, and significantly reduces the error variation by over 97%. On the head phantom, the method reduces the noise standard deviation of decomposed images by over 97% without blurring the sinus structures. Conclusions: The authors propose an iterative image-domain decomposition method for DECT. The method combines noise suppression and material decomposition into an iterative process and achieves both goals simultaneously. By exploring the full variance-covariance properties of the decomposed images and utilizing the edge predetection, the proposed algorithm shows superior performance on noise suppression with high image spatial resolution and low-contrast detectability.
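The role of the decomposed-image variance-covariance matrix in the two abstracts above can be illustrated with a toy two-material model: direct inversion propagates (and anticorrelates) the CT image noise, and the resulting covariance is what the iterative method inverts to weight its least-square term. The mixing matrix and noise levels below are assumptions for illustration, not the authors' calibration.

```python
import numpy as np

# Illustrative 2x2 mixing matrix A (rows: low/high-energy CT images,
# columns: material images) and independent noise on the two CT images.
A = np.array([[1.0, 0.5],
              [0.6, 1.2]])
cov_y = np.diag([1e-4, 1e-4])       # CT-image noise covariance (assumed)

A_inv = np.linalg.inv(A)
cov_x = A_inv @ cov_y @ A_inv.T     # variance-covariance of decomposed images
# cov_x has amplified variances and a negative off-diagonal term: the
# decomposed material images are noisier and anticorrelated.

# Direct decomposition of one noisy pixel pair (true composition 0.7 / 0.3)
rng = np.random.default_rng(1)
y = A @ np.array([0.7, 0.3]) + rng.multivariate_normal([0.0, 0.0], cov_y)
x_direct = A_inv @ y

print(np.round(cov_x / 1e-4, 2))    # noise amplification factors
print(np.round(x_direct, 3))
```

The iterative method described above uses `np.linalg.inv(cov_x)` as the penalty weight in its least-square term, which is how it accounts for this anticorrelated noise rather than denoising each image independently.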
Assessment of Fission Product Cross-Section Data for Burnup Credit Applications
Leal, Luiz C; Derrien, Herve; Dunn, Michael E; Mueller, Don
2007-12-01
Past efforts by the Department of Energy (DOE), the Electric Power Research Institute (EPRI), the Nuclear Regulatory Commission (NRC), and others have provided sufficient technical information to enable the NRC to issue regulatory guidance for implementation of pressurized-water reactor (PWR) burnup credit; however, consideration of only the reactivity change due to the major actinides is recommended in the guidance. Moreover, DOE, NRC, and EPRI have noted the need for additional scientific and technical data to justify expanding PWR burnup credit to include fission product (FP) nuclides and enable burnup credit implementation for boiling-water reactor (BWR) spent nuclear fuel (SNF). The criticality safety assessment needed for burnup credit applications will utilize computational analyses of packages containing SNF with FP nuclides. Over the years, significant efforts have been devoted to the nuclear data evaluation of major isotopes pertinent to reactor applications (i.e., uranium, plutonium, etc.); however, efforts to evaluate FP cross-section data in the resonance region have been less thorough relative to actinide data. In particular, resonance region cross-section measurements with corresponding R-matrix resonance analyses have not been performed for FP nuclides. Therefore, the objective of this work is to assess the status and performance of existing FP cross-section and cross-section uncertainty data in the resonance region for use in burnup credit analyses. Recommendations for new cross-section measurements and/or evaluations are made based on the data assessment. The assessment focuses on seven primary FP isotopes (103Rh, 133Cs, 143Nd, 149Sm, 151Sm, 152Sm, and 155Gd) that impact reactivity analyses of transportation packages and two FP isotopes (153Eu and 155Eu) that impact prediction of 155Gd concentrations. 
Much of the assessment work was completed in 2005, and the assessment focused on the latest FP cross-section evaluations available in the international nuclear data community as of March 2005. The accuracy of the cross-section data was investigated by comparing existing cross-section evaluations against available measured cross-section data. When possible, benchmark calculations were also used to assess the performance of the latest FP cross-section data. Since March 2005, the U.S. and European data projects have released newer versions of their respective data files. Although there have been updates to the international data files and, to some degree, to FP data, most of the updates have involved nuclear cross-section modeling improvements at energies above the resonance region. The one exception is improved ENDF/B-VII cross-section uncertainty data, or covariance data, for gadolinium isotopes. In particular, ENDF/B-VII includes improved 155Gd resonance parameter covariance data, but they are based on previously measured resonance data. Although the new covariance data are available for 155Gd, the conclusions of the FP cross-section data assessment of this report still hold in light of the newer international cross-section data files. Based on the FP data assessment, there is judged to be a need for new total and capture cross-section measurements and corresponding cross-section evaluations, in a prioritized manner, for the nine FPs to provide the improved information and technical rigor needed for criticality safety analyses.
Final Technical Report [Carbon Data Assimilation with a Coupled Ensemble Kalman Filter
Kalnay, Eugenia
2013-08-30
We proposed (and accomplished) the development of an Ensemble Kalman Filter (EnKF) approach for the estimation of surface carbon fluxes as if they were parameters, augmenting the model with them. Our system is quite different from previous approaches, such as carbon flux inversions, 4D-Var, and EnKF with approximate background error covariance (Peters et al., 2008). We showed (using observing system simulation experiments, OSSEs) that these differences lead to a more accurate estimation of the evolving surface carbon fluxes at model grid-scale resolution. The main properties of the LETKF-C are: a) The carbon cycle LETKF is coupled with the simultaneous assimilation of the standard atmospheric variables, so that the ensemble wind transport of the CO2 provides an estimation of the carbon transport uncertainty. b) The use of an assimilation window (6 hr) much shorter than the months-long windows used in other methods. This avoids the inevitable “blurring” of the signal that takes place in long windows due to turbulent mixing, since the CO2 does not have time to mix before the next window. In this development we introduced new, advanced techniques that have since been adopted by the EnKF community (Kang, 2009; Kang et al., 2011; Kang et al., 2012). These advances include “variable localization”, which reduces sampling errors in the estimation of the forecast error covariance, more advanced adaptive multiplicative and additive inflations, and vertical localization based on the time scale of the processes. The main result has been obtained using the LETKF-C with all these advances, and assimilating simulated atmospheric CO2 observations from different observing systems (surface flask observations of CO2 but no surface carbon flux observations, total column CO2 from GoSAT/OCO-2, and upper troposphere AIRS retrievals). After a spin-up of about one month, the LETKF-C succeeded in reconstructing the true evolving surface fluxes of carbon at model grid resolution. 
When applied to the CAM3.5 model, the LETKF gave very promising results as well, although only one month is available.
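The state-augmentation idea above, estimating surface fluxes "as if they were parameters", can be sketched with a toy one-variable model and a stochastic EnKF analysis step: the unobserved flux is updated through its sampled covariance with the observed CO2. The model, numbers, and seed are illustrative, not the LETKF-C configuration.

```python
import numpy as np

rng = np.random.default_rng(2)
n_ens, true_flux = 50, 2.0

# Augmented ensemble: each member carries [CO2 concentration, unknown flux]
flux = rng.normal(1.0, 1.0, n_ens)                      # prior flux guess
co2 = 400.0 + flux + rng.normal(0.0, 0.1, n_ens)        # toy 1-step "transport"
X = np.vstack([co2, flux])                              # 2 x n_ens state matrix

H = np.array([[1.0, 0.0]])                              # observe CO2 only
r = 0.2 ** 2                                            # obs error variance
y_obs = 400.0 + true_flux                               # synthetic true obs

Xm = X.mean(axis=1, keepdims=True)
Xp = X - Xm
P = Xp @ Xp.T / (n_ens - 1)             # sampled forecast error covariance
K = P @ H.T / (H @ P @ H.T + r)         # Kalman gain (scalar observation)
perturbed = y_obs + rng.normal(0.0, 0.2, n_ens)  # stochastic EnKF obs perturbations
Xa = X + K @ (perturbed - H @ X)        # analysis ensemble

print(round(Xa[1].mean(), 2))  # flux estimate pulled toward the true value 2.0
```

The covariance between CO2 and the flux in `P` is what lets an observation of CO2 alone correct the unobserved flux, which is the mechanism the abstract describes at model grid scale.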
Machtay, Mitchell; Movsas, Benjamin; Paulus, Rebecca; Gore, Elizabeth M.; Komaki, Ritsuko; Albain, Kathy; Sause, William T.; Curran, Walter J.
2012-01-01
Purpose: Patients treated with chemoradiotherapy for locally advanced non-small-cell lung carcinoma (LA-NSCLC) were analyzed for local-regional failure (LRF) and overall survival (OS) with respect to radiotherapy dose intensity. Methods and Materials: This study combined data from seven Radiation Therapy Oncology Group (RTOG) trials in which chemoradiotherapy was used for LA-NSCLC: RTOG 88-08 (chemoradiation arm only), 90-15, 91-06, 92-04, 93-09 (nonoperative arm only), 94-10, and 98-01. The radiotherapeutic biologically effective dose (BED) received by each individual patient was calculated, as was the overall treatment time-adjusted BED (tBED) using standard formulae. Heterogeneity testing was done with chi-squared statistics, and weighted pooled hazard ratio estimates were used. Cox and Fine and Gray's proportional hazard models were used for OS and LRF, respectively, to test the associations between BED and tBED adjusted for other covariates. Results: A total of 1,356 patients were analyzed for BED (1,348 for tBED). The 2-year and 5-year OS rates were 38% and 15%, respectively. The 2-year and 5-year LRF rates were 46% and 52%, respectively. The BED (and tBED) were highly significantly associated with both OS and LRF, with or without adjustment for other covariates on multivariate analysis (p < 0.0001). A 1-Gy BED increase in radiotherapy dose intensity was statistically significantly associated with approximately 4% relative improvement in survival; this is another way of expressing the finding that the pool-adjusted hazard ratio for survival as a function of BED was 0.96. Similarly, a 1-Gy tBED increase in radiotherapy dose intensity was statistically significantly associated with approximately 3% relative improvement in local-regional control; this is another way of expressing the finding that the pool-adjusted hazard ratio as a function of tBED was 0.97. 
Conclusions: Higher radiotherapy dose intensity is associated with improved local-regional control and survival in the setting of chemoradiotherapy.
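The standard linear-quadratic formulae referred to above can be sketched as follows; the alpha/beta ratio, repopulation rate K, and kickoff time Tk below are illustrative textbook assumptions, not the trial-specific values used in the pooled analysis.

```python
# Hedged sketch: biologically effective dose (BED) and an overall-treatment-
# time-adjusted variant (tBED) from the linear-quadratic model. All constants
# are illustrative assumptions.

def bed(n_fractions, dose_per_fx, alpha_beta=10.0):
    """BED = n*d * (1 + d / (alpha/beta))."""
    total = n_fractions * dose_per_fx
    return total * (1.0 + dose_per_fx / alpha_beta)

def tbed(n_fractions, dose_per_fx, overall_time_days,
         alpha_beta=10.0, k_gy_per_day=0.6, t_k_days=28):
    """Subtract a repopulation penalty K*(T - Tk) for treatment beyond Tk days."""
    penalty = max(0.0, overall_time_days - t_k_days) * k_gy_per_day
    return bed(n_fractions, dose_per_fx, alpha_beta) - penalty

# Conventional 60 Gy in 30 fractions over 6 weeks:
print(bed(30, 2.0))        # 72.0
print(tbed(30, 2.0, 42))   # 72.0 - 14*0.6 = 63.6
```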
Final report on "Carbon Data Assimilation with a Coupled Ensemble Kalman Filter"
Kalnay, Eugenia; Kang, Ji-Sun; Fung, Inez
2014-07-23
We proposed (and accomplished) the development of an Ensemble Kalman Filter (EnKF) approach for the estimation of surface carbon fluxes as if they were parameters, augmenting the model state with them. Our system is quite different from previous approaches, such as carbon flux inversions, 4D-Var, and EnKF with approximate background error covariance (Peters et al., 2008). We showed (using observing system simulation experiments, OSSEs) that these differences lead to a more accurate estimation of the evolving surface carbon fluxes at model grid-scale resolution. The main properties of the LETKF-C are: a) The carbon cycle LETKF is coupled with the simultaneous assimilation of the standard atmospheric variables, so that the ensemble wind transport of the CO2 provides an estimation of the carbon transport uncertainty. b) The use of an assimilation window (6 hr) much shorter than the months-long windows used in other methods. This avoids the inevitable blurring of the signal that takes place in long windows due to turbulent mixing, since the CO2 does not have time to mix before the next window. In this development we introduced new, advanced techniques that have since been adopted by the EnKF community (Kang, 2009; Kang et al., 2011; Kang et al., 2012). These advances include variable localization, which reduces sampling errors in the estimation of the forecast error covariance; more advanced adaptive multiplicative and additive inflations; and vertical localization based on the time scale of the processes. The main result has been obtained using the LETKF-C with all these advances, and assimilating simulated atmospheric CO2 observations from different observing systems (surface flask observations of CO2 but no surface carbon flux observations, total column CO2 from GOSAT/OCO-2, and upper-troposphere AIRS retrievals). After a spin-up of about one month, the LETKF-C succeeded in reconstructing the true evolving surface fluxes of carbon at model grid resolution.
When applied to the CAM3.5 model, the LETKF gave very promising results as well, although only one month is available.
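The core idea, estimating a flux parameter by augmenting the model state inside an ensemble Kalman filter analysis step, can be sketched as below; the dimensions, observation operator, and toy numbers are invented, and a simple perturbed-observation EnKF stands in for the actual LETKF-C.

```python
import numpy as np

# Hedged sketch of one stochastic-EnKF analysis step with the state augmented
# by a flux-like parameter. Everything here is a toy stand-in for the LETKF-C.

rng = np.random.default_rng(0)
n_ens, n_state = 20, 3                          # last state entry = augmented flux
X = rng.normal(size=(n_state, n_ens))           # forecast ensemble (columns)
H = np.array([[1.0, 0.0, 0.0]])                 # observe first variable only
R = np.array([[0.25]])                          # observation error covariance
y = np.array([1.0])                             # observation

Xm = X.mean(axis=1, keepdims=True)
A = X - Xm                                      # ensemble anomalies
Pf = A @ A.T / (n_ens - 1)                      # sample forecast covariance
K = Pf @ H.T @ np.linalg.inv(H @ Pf @ H.T + R)  # Kalman gain

# Perturbed-observation update: each member assimilates a perturbed y;
# the unobserved flux entry is updated through its sampled covariances.
Y = y[:, None] + rng.normal(scale=np.sqrt(R[0, 0]), size=(1, n_ens))
Xa = X + K @ (Y - H @ X)
print(Xa.shape)                                 # state and flux updated jointly
```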
Kim, Yangho; Lee, Byung-Kook
2012-10-15
Introduction: The objective of this study was to evaluate associations of blood lead, cadmium, and mercury levels with estimated glomerular filtration rate in a general population of South Korean adults. Methods: This was a cross-sectional study based on data obtained in the Korean National Health and Nutrition Examination Survey (KNHANES) (2008-2010). The final analytical sample consisted of 5924 participants. Estimated glomerular filtration rate (eGFR) was calculated using the MDRD Study equation as an indicator of glomerular function. Results: In multiple linear regression analysis of log2-transformed blood lead as a continuous variable on eGFR, after adjusting for covariates including cadmium and mercury, the difference in eGFR associated with a doubling of blood lead was -2.624 mL/min per 1.73 m² (95% CI: -3.803 to -1.445). In multiple linear regression analysis using quartiles of blood lead as the independent variable, the difference in eGFR comparing participants in the highest versus the lowest quartile of blood lead was -3.835 mL/min per 1.73 m² (95% CI: -5.730 to -1.939). In multiple linear regression analyses using blood cadmium and mercury, as continuous or categorical variables, as independent variables, neither metal was a significant predictor of eGFR. Odds ratios (ORs) and 95% CI values for reduced eGFR calculated for log2-transformed blood metals and quartiles of the three metals showed similar trends after adjustment for covariates. Discussion: In this large, representative sample of South Korean adults, elevated blood lead level was consistently associated with lower eGFR and with the prevalence of reduced eGFR, even at blood lead levels below 10 μg/dL. In conclusion, elevated blood lead level was associated with lower eGFR in the Korean general population, supporting the role of lead as a risk factor for chronic kidney disease.
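For reference, the MDRD Study equation used above to compute eGFR can be sketched as follows; the 175 (IDMS-traceable) coefficient is one common variant, and omitting the race coefficient is a simplifying assumption of this sketch, not necessarily the survey's implementation.

```python
# Hedged sketch of the 4-variable MDRD Study equation for eGFR
# (mL/min per 1.73 m^2). Coefficients follow one common published variant.

def egfr_mdrd(scr_mg_dl, age_years, female):
    """eGFR = 175 * Scr^-1.154 * age^-0.203 * (0.742 if female)."""
    egfr = 175.0 * scr_mg_dl ** -1.154 * age_years ** -0.203
    return egfr * (0.742 if female else 1.0)

# Example: serum creatinine 1.0 mg/dL, age 50, male
print(round(egfr_mdrd(1.0, 50, False), 1))
```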
Lin, Steven H., E-mail: SHLin@mdanderson.org [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]; Wang, Lu [Department of Biostatistics, University of Michigan, Ann Arbor, Michigan (United States)]; Myles, Bevan [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]; Thall, Peter F. [Department of Biostatistics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]; Hofstetter, Wayne L.; Swisher, Stephen G. [Department of Thoracic Surgery, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]; Ajani, Jaffer A. [Department of Gastrointestinal Medical Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]; Cox, James D.; Komaki, Ritsuko; Liao, Zhongxing [Department of Radiation Oncology, University of Texas MD Anderson Cancer Center, Houston, Texas (United States)]
2012-12-01
Purpose: Although 3-dimensional conformal radiotherapy (3D-CRT) is the worldwide standard for the treatment of esophageal cancer, intensity modulated radiotherapy (IMRT) improves dose conformality and reduces the radiation exposure to normal tissues. We hypothesized that the dosimetric advantages of IMRT should translate to substantive benefits in clinical outcomes compared with 3D-CRT. Methods and Materials: An analysis was performed of 676 nonrandomized patients (3D-CRT, n=413; IMRT, n=263) with stage Ib-IVa (American Joint Committee on Cancer 2002) esophageal cancers treated with chemoradiotherapy at a single institution from 1998-2008. An inverse probability of treatment weighting and inclusion of propensity score (treatment probability) as a covariate were used to compare overall survival time, interval to local failure, and interval to distant metastasis, while accounting for the effects of other clinically relevant covariates. The propensity scores were estimated using logistic regression analysis. Results: A fitted multivariate inverse probability weighted-adjusted Cox model showed that the overall survival time was significantly associated with several well-known prognostic factors, along with the treatment modality (IMRT vs 3D-CRT, hazard ratio 0.72, P<.001). Compared with IMRT, 3D-CRT patients had a significantly greater risk of dying (72.6% vs 52.9%, inverse probability of treatment weighting, log-rank test, P<.0001) and of locoregional recurrence (P=.0038). No difference was seen in cancer-specific mortality (Gray's test, P=.86) or distant metastasis (P=.99) between the 2 groups. An increased cumulative incidence of cardiac death was seen in the 3D-CRT group (P=.049), but most deaths were undocumented (5-year estimate, 11.7% in 3D-CRT vs 5.4% in IMRT group, Gray's test, P=.0029). Conclusions: Overall survival, locoregional control, and noncancer-related death were significantly better after IMRT than after 3D-CRT. 
Although these results need confirmation, IMRT should be considered for the treatment of esophageal cancer.
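The inverse-probability-of-treatment weighting used above can be sketched as follows; the propensity model is a hand-rolled logistic fit by Newton's method on synthetic data, not the study's actual covariate set or software.

```python
import numpy as np

# Hedged sketch of IPTW: fit a propensity model, then weight each patient by
# the inverse probability of the treatment actually received. All data and
# the single covariate are invented.

rng = np.random.default_rng(1)
n = 500
x = rng.normal(size=n)                         # one baseline covariate
p_true = 1 / (1 + np.exp(-(0.5 + 1.0 * x)))
t = rng.binomial(1, p_true)                    # treatment indicator

# Newton-Raphson for logistic regression with intercept
X = np.column_stack([np.ones(n), x])
beta = np.zeros(2)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))
    W = p * (1 - p)
    beta += np.linalg.solve(X.T @ (W[:, None] * X), X.T @ (t - p))

e = 1 / (1 + np.exp(-X @ beta))                # estimated propensity scores
w = t / e + (1 - t) / (1 - e)                  # IPTW weights
print(round(w.mean(), 2))                      # near 2 for a binary treatment
```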
Schreiner, Kathryn Melissa; Lowry, Thomas Stephen
2013-10-01
This work was partially supported by the Sandia National Laboratories Laboratory Directed Research and Development (LDRD) fellowship program in conjunction with Texas A&M University (TAMU). The research described herein is the work of Kathryn M. Schreiner ('Katie') and her advisor, Thomas S. Bianchi, and represents a concise description of Katie's dissertation, which was submitted to the TAMU Office of Graduate Studies in May 2013 in partial fulfillment of her Doctor of Philosophy degree. High Arctic permafrost soils contain a massive amount of organic carbon, accounting for twice as much carbon as is currently stored as carbon dioxide in the atmosphere. However, with current warming trends this sink is in danger of thawing and potentially releasing large amounts of carbon as both carbon dioxide and methane into the atmosphere. It is difficult to make predictions about the future of this sink without knowing how it has reacted to past temperature and climate changes. This project investigated long-term, fine-scale particulate organic carbon (POC) delivery by the high-Arctic Colville River into Simpson's Lagoon in the near-shore Beaufort Sea. Modern POC was determined to be a mixture of three sources (riverine soils, coastal erosion, and marine). Downcore POC measurements were performed in a core close to the Colville River outlet and a core close to intense coastal erosion. Inputs of the three major sources were found to vary throughout the last two millennia and, in the Colville River core, to covary significantly with Alaskan temperature reconstructions.
Hawley, Alyse K.; Brewer, Heather M.; Norbeck, Angela D.; Pasa-Tolic, Ljiljana; Hallam, Steven J.
2014-08-05
Oxygen minimum zones (OMZs) are intrinsic water column features arising from respiratory oxygen demand during organic matter degradation in stratified marine waters. Currently OMZs are expanding due to global climate change. This expansion alters marine ecosystem function and the productivity of fisheries due to habitat compression and changes in biogeochemical cycling leading to fixed nitrogen loss and greenhouse gas production. Here we use metaproteomics to chart spatial and temporal patterns of gene expression along defined redox gradients in a seasonally anoxic fjord, Saanich Inlet, to better understand microbial community responses to OMZ expansion. The expression of metabolic pathway components for nitrification, anaerobic ammonium oxidation (anammox), denitrification and inorganic carbon fixation predominantly co-varied with abundance and distribution patterns of Thaumarchaeota, Nitrospira, Planctomycetes and SUP05/ARCTIC96BD-19 Gammaproteobacteria. Within these groups, pathways mediating inorganic carbon fixation and nitrogen and sulfur transformations were differentially expressed across the redoxcline. Nitrification and inorganic carbon fixation pathways affiliated with Thaumarchaeota dominated dysoxic waters, and denitrification, sulfur-oxidation and inorganic carbon fixation pathways affiliated with SUP05 dominated suboxic and anoxic waters. Nitrite-oxidation and anammox pathways, affiliated with Nitrospina and Planctomycetes respectively, also exhibited redox partitioning between dysoxic and suboxic waters. The differential expression of these pathways under changing water column redox conditions has quantitative implications for coupled biogeochemical cycling, linking different modes of inorganic carbon fixation with distributed nitrogen- and sulfur-based energy metabolism extensible to coastal and open ocean OMZs.
Tzvi Galchen; Mei Xu; Eberhard, W.L.
1992-11-30
This work is part of the First International Satellite Land Surface Climatology Project (ISLSCP) Field Experiment (FIFE), an international land-surface-atmosphere experiment aimed at improving the way climate models represent energy, water, heat, and carbon exchanges, and at improving the utilization of satellite-based remote sensing to monitor such parameters. Here the authors present results on Doppler lidar measurements used to measure a range of turbulence parameters in the region of the unstable planetary boundary layer (PBL). The parameters include averaged velocities, Cartesian velocities, velocity variances, parts of the covariance associated with vertical fluxes of horizontal momentum, and third moments of the vertical velocity. They explain their analysis technique, especially as it relates to error reduction of the averaged turbulence parameters derived from individual measurements with relatively large errors. The scales studied range from 150 m to 12 km. With this new diagnostic they address questions about the behavior of the convectively unstable PBL, as well as the stable layer which overlies it.
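The first- through third-moment turbulence statistics named above can be sketched from synthetic velocity records; the correlation imposed between u and w below is an invented stand-in for a real momentum flux, and real lidar retrievals require the error-reduction averaging the authors describe.

```python
import numpy as np

# Hedged sketch: variances, the <u'w'> covariance (vertical flux of horizontal
# momentum), and the third moment of vertical velocity from synthetic records.

rng = np.random.default_rng(2)
n = 4096
w = rng.normal(size=n)               # vertical velocity
u = 0.3 * w + rng.normal(size=n)     # horizontal velocity with imposed flux

up, wp = u - u.mean(), w - w.mean()  # turbulent fluctuations
var_u = np.mean(up ** 2)
var_w = np.mean(wp ** 2)
flux_uw = np.mean(up * wp)           # <u'w'> momentum-flux covariance
third_w = np.mean(wp ** 3)           # third moment of vertical velocity

print(round(flux_uw, 2))             # close to 0.3 by construction
```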
Shukla, K. K.; Phanikumar, D. V.; Kumar, Niranjan; Reddy, Kishore; Kotamarthi, Veerabhadra R.; Newsom, Rob K.; Ouarda, Taha B.
2015-10-01
In this study, we present a case study from 16 October 2011 to show the first observational evidence of the influence of short-period gravity waves on aerosol transport during daytime over the central Himalayan region. Doppler lidar data were used to address the daytime boundary layer evolution and related aerosol dynamics over the site. The mixing layer height was estimated by the wavelet covariance transform method and found to be ~0.7 km AGL. Aerosol optical depth observations during daytime revealed an asymmetry, showing clear enhancement during afternoon hours as compared to forenoon. Interestingly, Fourier and wavelet analyses of vertical velocity and attenuated backscatter showed similar 50-90 min short-period gravity wave signatures during afternoon hours. Moreover, our observations showed that gravity waves are dominant within the boundary layer, implying that the daytime boundary layer dynamics plays a vital role in transporting aerosols from the surface to the top of the boundary layer. Similar modulations are also evident in surface parameters like temperature, relative humidity and wind speed, indicating that these waves are associated with the dynamical aspects over the Himalayan region. Finally, the time evolution of range-height indicator snapshots during daytime showed strong upward velocities, especially during afternoon hours, implying that convective processes driven by short-period gravity waves play a significant role in transporting aerosols from the nearby valley region to the boundary layer top over the site. These observations also establish the importance of wave-induced daytime convective boundary layer dynamics in the lower Himalayan region.
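The wavelet covariance transform used above to estimate the mixing layer height can be sketched with a Haar wavelet on a synthetic backscatter profile; the step height, dilation, and sign convention follow one common formulation and are assumptions of this sketch.

```python
import numpy as np

# Hedged sketch of the Haar wavelet covariance transform (WCT) applied to an
# attenuated-backscatter profile with a sharp aerosol gradient at 0.7 km.

z = np.linspace(0.0, 2.0, 401)                  # height (km)
rng = np.random.default_rng(3)
backscatter = np.where(z < 0.7, 1.0, 0.2) + 0.02 * rng.normal(size=z.size)

def haar_wct(profile, z, b, a=0.2):
    """W(a, b) = (1/a) * sum profile(z) * h((z-b)/a) dz, with Haar step h."""
    h = np.where((z >= b - a / 2) & (z < b), 1.0,
                 np.where((z >= b) & (z <= b + a / 2), -1.0, 0.0))
    return np.sum(profile * h) * (z[1] - z[0]) / a

candidates = z[(z > 0.2) & (z < 1.8)]
wct = np.array([haar_wct(backscatter, z, b) for b in candidates])
mlh = candidates[np.argmax(wct)]                # height of strongest decrease
print(round(mlh, 2))                            # ~0.7 km, by construction
```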
Inflationary power asymmetry from primordial domain walls
Jazayeri, Sadra; Akrami, Yashar; Firouzjahi, Hassan; Solomon, Adam R.; Wang, Yi
2014-11-01
We study the asymmetric primordial fluctuations in a model of inflation in which translational invariance is broken by a domain wall. We calculate the corrections to the power spectrum of curvature perturbations; they are anisotropic and contain dipole, quadrupole, and higher multipoles with non-trivial scale-dependent amplitudes. Inspired by observations of these multipole asymmetries in terms of two-point correlations and variance in real space, we demonstrate that this model can explain the observed anomalous power asymmetry of the cosmic microwave background (CMB) sky, including its characteristic feature that the dipole dominates over higher multipoles. We test the viability of the model and place approximate constraints on its parameters by using observational values of dipole, quadrupole, and octopole amplitudes of the asymmetry measured by a local-variance estimator. We find that a configuration of the model in which the CMB sphere does not intersect the domain wall during inflation provides a good fit to the data. We further derive analytic expressions for the corrections to the CMB temperature covariance matrix, or angular power spectra, which can be used in future statistical analysis of the model in spherical harmonic space.
Quantum driven dissipative parametric oscillator in a blackbody radiation field
Pachón, Leonardo A.; Department of Chemistry and Center for Quantum Information and Quantum Control, Chemical Physics Theory Group, University of Toronto, Toronto, Ontario M5S 3H6; Brumer, Paul
2014-01-15
We consider the general open system problem of a charged quantum oscillator confined in a harmonic trap, whose frequency can be arbitrarily modulated in time, that interacts with both an incoherent quantized (blackbody) radiation field and an arbitrary coherent laser field. We assume that the oscillator is initially in thermodynamic equilibrium with its environment, a non-factorized initial density matrix of the system and the environment, and that at t = 0 the modulation of the frequency and the couplings to the incoherent and coherent radiation are switched on. The subsequent dynamics, induced by the presence of the blackbody radiation, the laser field, and the frequency modulation, is studied in the framework of the influence functional approach. This approach allows incorporating, in analytic closed formulae, the non-Markovian character of the oscillator-environment interaction at any temperature as well as the non-Markovian character of the blackbody radiation and its zero-point fluctuations. Expressions for the time evolution of the covariance matrix elements of the quantum fluctuations and the reduced density operator are obtained.
Monte Carlo analysis of localization errors in magnetoencephalography
Medvick, P.A.; Lewis, P.S.; Aine, C.; Flynn, E.R.
1989-01-01
In magnetoencephalography (MEG), the magnetic fields created by electrical activity in the brain are measured on the surface of the skull. To determine the location of the activity, the measured field is fit to an assumed source generator model, such as a current dipole, by minimizing chi-square. For current dipoles and other nonlinear source models, the fit is performed by an iterative least squares procedure such as the Levenberg-Marquardt algorithm. Once the fit has been computed, analysis of the resulting value of chi-square can determine whether the assumed source model is adequate to account for the measurements. If the source model is adequate, then the effect of measurement error on the fitted model parameters must be analyzed. Although these kinds of simulation studies can provide a rough idea of the effect that measurement error can be expected to have on source localization, they cannot provide detailed enough information to determine the effects that the errors in a particular measurement situation will produce. In this work, we introduce and describe the use of Monte Carlo-based techniques to analyze model fitting errors for real data. Given the details of the measurement setup and a statistical description of the measurement errors, these techniques determine the effects the errors have on the fitted model parameters. The effects can then be summarized in various ways such as parameter variances/covariances or multidimensional confidence regions. 8 refs., 3 figs.
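The Monte Carlo procedure described above can be sketched as follows; a straight-line model stands in for the nonlinear dipole fit, and the noise level is invented.

```python
import numpy as np

# Hedged sketch: given a fitted model and a statistical description of the
# measurement noise, repeatedly perturb the data, refit, and summarize the
# spread of the fitted parameters as a covariance matrix.

rng = np.random.default_rng(4)
x = np.linspace(0, 1, 50)
true = np.array([2.0, -1.0])                    # slope, intercept
sigma = 0.1
y = true[0] * x + true[1] + rng.normal(scale=sigma, size=x.size)

A = np.column_stack([x, np.ones_like(x)])
fit, *_ = np.linalg.lstsq(A, y, rcond=None)     # best-fit parameters

# Monte Carlo: refit many noise-perturbed copies of the fitted model
samples = []
for _ in range(2000):
    y_mc = A @ fit + rng.normal(scale=sigma, size=x.size)
    p, *_ = np.linalg.lstsq(A, y_mc, rcond=None)
    samples.append(p)
cov = np.cov(np.array(samples).T)               # parameter covariance matrix
print(cov.shape)                                # (2, 2)
```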
Constraint analysis for variational discrete systems
Dittrich, Bianca; Höhn, Philipp A.; Institute for Theoretical Physics, Universiteit Utrecht, Leuvenlaan 4, NL-3584 CE Utrecht
2013-09-15
A canonical formalism and constraint analysis for discrete systems subject to a variational action principle are devised. The formalism is equivalent to the covariant formulation, encompasses global and local discrete time evolution moves and naturally incorporates both constant and evolving phase spaces, the latter of which is necessary for a time-varying discretization. The different roles of constraints in the discrete setting, and the conditions under which they are first or second class and/or symmetry generators, are clarified. The (non-)preservation of constraints and of the symplectic structure is discussed; on evolving phase spaces the number of constraints at a fixed time step depends on the initial and final time steps of evolution. Moreover, the definition of observables and of a reduced phase space is provided; again, on evolving phase spaces the notion of an observable as a propagating degree of freedom requires specification of an initial and final step and crucially depends on this choice, in contrast to the continuum. However, upon restriction to translation-invariant systems, one regains the usual time step independence of canonical concepts. This analysis applies, e.g., to discrete mechanics, lattice field theory, quantum gravity models, and numerical analysis.
Meyer, Philip D.; Ye, Ming; Neuman, Shlomo P.; Rockhold, Mark L.
2008-06-01
A methodology to systematically and quantitatively assess model predictive uncertainty was applied to saturated zone uranium transport at the 300 Area of the U.S. Department of Energy Hanford Site in Washington State, USA. The methodology extends Maximum Likelihood Bayesian Model Averaging (MLBMA) to account jointly for uncertainties due to the conceptual-mathematical basis of models, model parameters, and the scenarios to which the models are applied. Conceptual uncertainty was represented by postulating four alternative models of hydrogeology and uranium adsorption. Parameter uncertainties were represented by estimation covariances resulting from the joint calibration of each model to observed heads and uranium concentration. Posterior model probability was dominated by one model. Results demonstrated the role of model complexity and fidelity to observed system behavior in determining model probabilities, as well as the impact of prior information. Two scenarios representing alternative future behavior of the Columbia River adjacent to the site were considered. Predictive simulations carried out with the calibrated models illustrated the computation of model- and scenario-averaged predictions and how results can be displayed to clearly indicate the individual contributions to predictive uncertainty of the model, parameter, and scenario uncertainties. The application demonstrated the practicability of applying a comprehensive uncertainty assessment to large-scale, detailed groundwater flow and transport modelling.
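The model-averaging step can be sketched as below; the information-criterion values, prior probabilities, and per-model predictions are invented, and the exponential weighting is a generic Bayesian-model-averaging formula rather than the exact MLBMA expressions.

```python
import numpy as np

# Hedged sketch of model-averaged prediction: posterior model weights from an
# information criterion, then a weighted mean and a total variance that adds
# between-model spread to within-model variance. All numbers are invented.

kic = np.array([100.0, 104.0, 112.0, 120.0])    # smaller = better support
prior = np.full(4, 0.25)
w = prior * np.exp(-0.5 * (kic - kic.min()))
w /= w.sum()                                    # posterior model probabilities

pred = np.array([3.0, 3.4, 2.8, 3.1])           # each model's prediction
var = np.array([0.10, 0.12, 0.08, 0.15])        # within-model variances

mean = np.sum(w * pred)
total_var = np.sum(w * (var + (pred - mean) ** 2))   # within + between
print(round(w[0], 3))                           # one model dominates, as in the study
```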
Electromagnetic structure of few-nucleon ground states
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Marcucci, Laura E.; Istituto Nazionale di Fisica Nucleare; Gross, Franz L.; Thomas Jefferson National Accelerator Facility; Peña, M. T.; Piarulli, M.; Old Dominion Univ., Norfolk, VA; Schiavilla, Rocco; Old Dominion Univ., Norfolk, VA; Sick, Ingo; et al
2016-01-08
Experimental form factors of the hydrogen and helium isotopes, extracted from an up-to-date global analysis of cross sections and polarization observables measured in elastic electron scattering from these systems, are compared to predictions obtained in three different theoretical approaches: the first is based on realistic interactions and currents, including relativistic corrections (labeled as the conventional approach); the second relies on a chiral effective field theory description of the strong and electromagnetic interactions in nuclei (labeled ChiEFT); the third utilizes a fully relativistic treatment of nuclear dynamics as implemented in the covariant spectator theory (labeled CST). For momentum transfers below Q < 5 fm⁻¹ there is satisfactory agreement between experimental data and theoretical results in all three approaches. Conversely, at Q > 5 fm⁻¹, particularly in the case of the deuteron, a relativistic treatment of the dynamics, as is done in the CST, is necessary. The experimental data on the deuteron A structure function extend to Q ~ 12 fm⁻¹, and the close agreement between these data and the CST results suggests that, even in this extreme kinematical regime, there is no evidence for new effects coming from quark and gluon degrees of freedom at short distances.
Virtuality and transverse momentum dependence of the pion distribution amplitude
Radyushkin, Anatoly V.
2016-03-08
We describe the basics of a new approach to transverse momentum dependence in hard exclusive processes. We develop it in application to the transition process γ*γ → π0 at the handbag level. Our starting point is the coordinate representation for matrix elements of operators (in the simplest case, the bilocal O(0,z)) describing a hadron with momentum p. Treated as functions of (pz) and z², they are parametrized through virtuality distribution amplitudes (VDAs) Φ(x,σ), with x being Fourier-conjugate to (pz) and σ Laplace-conjugate to z². For intervals with z⁺ = 0, we introduce the transverse momentum distribution amplitude (TMDA) ψ(x, k), and write it in terms of the VDA Φ(x,σ). The results of covariant calculations, written in terms of Φ(x,σ), are converted into expressions involving ψ(x, k). Starting with scalar toy models, we extend the analysis to the case of spin-1/2 quarks and QCD. We propose simple models for soft VDAs/TMDAs, and use them for comparison of handbag results with experimental (BaBar and BELLE) data on the pion transition form factor. Furthermore, we discuss how one can generate high-k tails from primordial soft distributions.
Data Assimilation in the ADAPT Photospheric Flux Transport Model
Hickmann, Kyle S.; Godinez, Humberto C.; Henney, Carl J.; Arge, C. Nick
2015-03-17
Global maps of the solar photospheric magnetic flux are fundamental drivers for simulations of the corona and solar wind and therefore are important predictors of geoeffective events. However, observations of the solar photosphere are only made intermittently over approximately half of the solar surface. The Air Force Data Assimilative Photospheric Flux Transport (ADAPT) model uses localized ensemble Kalman filtering techniques to adjust a set of photospheric simulations to agree with the available observations. At the same time, this information is propagated to areas of the simulation that have not been observed. ADAPT implements a local ensemble transform Kalman filter (LETKF) to accomplish data assimilation, allowing the covariance structure of the flux-transport model to influence assimilation of photosphere observations while eliminating spurious correlations between ensemble members arising from a limited ensemble size. We give a detailed account of the implementation of the LETKF into ADAPT. Advantages of the LETKF scheme over previously implemented assimilation methods are highlighted.
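Two ensemble-filter ingredients mentioned above, inflation and the suppression of spurious long-range sample covariances, can be sketched as follows; the grid, length scale, and Gaussian taper are assumptions of this sketch, not ADAPT's actual choices.

```python
import numpy as np

# Hedged sketch: multiplicative inflation of ensemble anomalies, then
# distance-based (Schur-product) localization of the sample covariance.

rng = np.random.default_rng(5)
n_grid, n_ens = 50, 10
X = rng.normal(size=(n_grid, n_ens))            # ensemble of flux maps
Xm = X.mean(axis=1, keepdims=True)

rho = 1.05                                      # multiplicative inflation factor
A = rho * (X - Xm)                              # inflated anomalies
P = A @ A.T / (n_ens - 1)                       # raw sample covariance

pos = np.arange(n_grid)[:, None]
dist = np.abs(pos - pos.T)                      # grid-point separation
L = 5.0                                         # localization length scale
taper = np.exp(-0.5 * (dist / L) ** 2)          # Gaussian taper (toy choice)
P_loc = taper * P                               # localized covariance
print(P_loc.shape)
```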
Flavour symmetry breaking in the kaon parton distribution amplitude
none,
2014-11-01
We compute the kaon's valence-quark (twist-two parton) distribution amplitude (PDA) by projecting its Poincaré-covariant Bethe-Salpeter wave function onto the light front. At a scale ζ = 2 GeV, the PDA is a broad, concave and asymmetric function, whose peak is shifted 12-16% away from its position in QCD's conformal limit. These features are a clear expression of SU(3) flavour-symmetry breaking. They show that the heavier quark in the kaon carries more of the bound state's momentum than the lighter quark and also that emergent phenomena in QCD modulate the magnitude of flavour-symmetry breaking: it is markedly smaller than one might expect based on the difference between light-quark current masses. Our results add to a body of evidence which indicates that at any energy scale accessible with existing or foreseeable facilities, a reliable guide to the interpretation of experiment requires the use of such nonperturbatively broadened PDAs in leading-order, leading-twist formulae for hard exclusive processes instead of the asymptotic PDA associated with QCD's conformal limit. We illustrate this via the ratio of kaon and pion electromagnetic form factors: using our nonperturbative PDAs in the appropriate formulae, F_K/F_π = 1.23 at spacelike Q² = 17 GeV², which compares satisfactorily with the value of 0.92(5) inferred in e⁺e⁻ annihilation at s = 17 GeV².
Modeling patterns in data using linear and related models
Engelhardt, M.E.
1996-06-01
This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models.
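The one-covariate case (a linear time trend) can be sketched with closed-form ordinary least squares; the yearly event-rate data below are invented for illustration.

```python
import numpy as np

# Hedged sketch: OLS slope/intercept for a time-trend model y = b0 + b1*year,
# plus the usual t statistic for testing the trend. Data are invented.

year = np.arange(1985, 1995, dtype=float)
rate = np.array([5.1, 4.8, 4.9, 4.2, 4.0, 3.8, 3.9, 3.3, 3.1, 2.9])

xbar, ybar = year.mean(), rate.mean()
sxx = np.sum((year - xbar) ** 2)
b1 = np.sum((year - xbar) * (rate - ybar)) / sxx   # slope
b0 = ybar - b1 * xbar                              # intercept

resid = rate - (b0 + b1 * year)
s2 = np.sum(resid ** 2) / (len(year) - 2)          # residual variance
t_slope = b1 / np.sqrt(s2 / sxx)                   # t statistic for the trend
print(round(b1, 3))                                # negative: decreasing trend
```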
Radyushkin, Anatoly V.
2014-07-01
We outline the basics of a new approach to transverse momentum dependence in hard processes. As an illustration, we consider the hard exclusive transition process gamma*gamma -> pi^0 at the handbag level. Our starting point is the coordinate representation for matrix elements of operators (in the simplest case, the bilocal O(0,z)) describing a hadron with momentum p. Treated as functions of (pz) and z^2, they are parametrized through a virtuality distribution amplitude (VDA) Phi(x, sigma), with x being Fourier-conjugate to (pz) and sigma Laplace-conjugate to z^2. For intervals with z^+ = 0, we introduce the transverse momentum distribution amplitude (TMDA) Psi(x, k_perp), and write it in terms of the VDA Phi(x, sigma). The results of covariant calculations, written in terms of Phi(x, sigma), are converted into expressions involving Psi(x, k_perp). Starting with scalar toy models, we extend the analysis to the case of spin-1/2 quarks and QCD. We propose simple models for soft VDAs/TMDAs, and use them for comparison of handbag results with experimental (BaBar and BELLE) data on the pion transition form factor. We also discuss how one can generate high-k_perp tails from primordial soft distributions.
Saririan, K.
1997-05-01
In this thesis, the author presents some works in the direction of studying quantum effects in locally supersymmetric effective field theories that appear in the low energy limit of superstring theory. After reviewing the Kaehler covariant formulation of supergravity, he shows the calculation of the divergent one-loop contribution to the effective boson Lagrangian for supergravity, including the Yang-Mills sector and the helicity-odd operators that arise from integration over fermion fields. The only restriction is on the Yang-Mills kinetic energy normalization function, which is taken diagonal in gauge indices, as in models obtained from superstrings. He then presents the full result for the divergent one-loop contribution to the effective boson Lagrangian for supergravity coupled to chiral and Yang-Mills supermultiplets. He also considers the specific case of dilaton couplings in effective supergravity Lagrangians from superstrings, for which the one-loop result is considerably simplified. He studies gaugino condensation in the presence of an intermediate mass scale in the hidden sector. S-duality is imposed as an approximate symmetry of the effective supergravity theory. Furthermore, the author includes in the Kaehler potential the renormalization of the gauge coupling and the one-loop threshold corrections at the intermediate scale. It is shown that confinement is indeed achieved. Furthermore, a new running behavior of the dilaton arises which he attributes to S-duality. He also discusses the effects of the intermediate scale, and possible phenomenological implications of this model.
Strong field effects on binary systems in Einstein-aether theory
Foster, Brendan Z.
2007-10-15
'Einstein-aether' theory is a generally covariant theory of gravity containing a dynamical preferred frame. This article continues an examination of effects on the motion of binary pulsar systems in this theory, by incorporating effects due to strong fields in the vicinity of neutron star pulsars. These effects are included through an effective approach, by treating the compact bodies as point particles with nonstandard, velocity dependent interactions parametrized by dimensionless sensitivities. Effective post-Newtonian equations of motion for the bodies and the radiation damping rate are determined. More work is needed to calculate values of the sensitivities for a given fluid source; therefore, precise constraints on the theory's coupling constants cannot yet be stated. It is shown, however, that strong field effects will be negligible given current observational uncertainties if the dimensionless couplings are less than roughly 0.1 and two conditions that match the PPN parameters to those of pure general relativity are imposed. In this case, weak field results suffice. There then exists a one-parameter family of Einstein-aether theories with 'small-enough' couplings that passes all current observational tests. No conclusion can be reached for larger couplings until the sensitivities for a given source can be calculated.
Generation of Stationary Non-Gaussian Time Histories with a Specified Cross-spectral Density
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Smallwood, David O.
1997-01-01
The paper reviews several methods for the generation of stationary realizations of sampled time histories with non-Gaussian distributions and introduces a new method which can be used to control the cross-spectral density matrix and the probability density functions (pdfs) of the multiple-input problem. Discussed first are two methods for the specialized case of matching the auto (power) spectrum, the skewness, and the kurtosis, using generalized shot noise and using polynomial functions. It is then shown that the skewness and kurtosis can also be controlled by the phase of a complex frequency-domain description of the random process. The general case of matching a target probability density function using a zero-memory nonlinear (ZMNL) function is then covered. Next, methods for generating vectors of random variables with a specified covariance matrix for a class of spherically invariant random vectors (SIRV) are discussed. Finally, the general case of matching the cross-spectral density matrix of a vector of inputs with non-Gaussian marginal distributions is presented.
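As an illustrative sketch (not taken from the paper), the ZMNL idea can be shown in a few lines: a standard-normal sample is pushed through the Gaussian CDF and then through the inverse CDF of the target distribution, producing the target marginal pdf. The unit-rate exponential target below is an arbitrary choice for demonstration.

```python
import math
import random

def zmnl_transform(gaussian_samples, target_inv_cdf):
    """Zero-memory nonlinearity: map each standard-normal sample u
    through the Gaussian CDF, then through the target inverse CDF."""
    phi = lambda u: 0.5 * (1.0 + math.erf(u / math.sqrt(2.0)))
    return [target_inv_cdf(phi(u)) for u in gaussian_samples]

random.seed(0)
u = [random.gauss(0.0, 1.0) for _ in range(20000)]
# Target: unit-rate exponential, inverse CDF = -ln(1 - p)
x = zmnl_transform(u, lambda p: -math.log(1.0 - p))
print(sum(x) / len(x))  # sample mean, close to 1 for a unit exponential
```

Because the nonlinearity is memoryless, the transformed series inherits (a distorted version of) the Gaussian input's correlation structure, which is why the paper's general method must compensate when a target cross-spectral density is also prescribed.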
Vulnerability of crops and native grasses to summer drying in the U.S. Southern Great Plains
Raz-Yaseef, Naama; Billesbach, Dave P.; Fischer, Marc L.; Biraud, Sebastien C.; Gunter, Stacey A.; Bradford, James A.; Torn, Margaret S.
2015-08-31
The Southern Great Plains are characterized by a fine-scale mixture of different land-cover types, predominantly winter-wheat and grazed pasture, with relatively small areas of other crops, native prairie, and switchgrass. Recent droughts and predictions of increased drought in the Southern Great Plains, especially during the summer months, raise concern for these ecosystems. We measured ecosystem carbon and water fluxes with eddy-covariance systems over cultivated cropland for 10 years, and over lightly grazed prairie and new switchgrass fields for 2 years each. Growing-season precipitation showed the strongest control over net carbon uptake for all ecosystems, but with a variable effect: grasses (prairie and switchgrass) needed at least 350 mm of precipitation during the growing season to become net carbon sinks, while crops needed only 100 mm. In summer, high temperatures enhanced evaporation and led to higher likelihood of dry soil conditions. Therefore, summer-growing native prairie species and switchgrass experienced more seasonal droughts than spring-growing crops. For wheat, the net reduction in carbon uptake resulted mostly from a decrease in gross primary production rather than an increase in respiration. Flux measurements suggested that management practices for crops were effective in suppressing evapotranspiration and decomposition (by harvesting and removing secondary growth), and in increasing carbon uptake (by fertilizing and conserving summer soil water). In light of future projections for wetter springs and drier and warmer summers in the Southern Great Plains, our study indicates an increased vulnerability in native ecosystems and summer crops over time.
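At their core, the eddy-covariance fluxes used in the study above are covariances between vertical wind speed and a scalar concentration. A minimal sketch with synthetic numbers (not data from this study):

```python
def eddy_flux(w, c):
    """Eddy-covariance flux estimate: mean product of the fluctuations
    w' = w - mean(w) and c' = c - mean(c) over an averaging period."""
    n = len(w)
    wbar = sum(w) / n
    cbar = sum(c) / n
    return sum((wi - wbar) * (ci - cbar) for wi, ci in zip(w, c)) / n

# Synthetic high-frequency samples: concentration co-varies with updrafts,
# so the flux comes out positive (net upward transport)
w = [0.1, -0.2, 0.3, 0.0, -0.1, 0.2]          # vertical wind (m/s)
c = [400.2, 399.9, 400.4, 400.0, 399.8, 400.3]  # CO2 (ppm)
print(eddy_flux(w, c))
```

Real systems sample at ~10-20 Hz and apply coordinate rotations and density corrections; this sketch only shows the covariance at the heart of the method.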
Revisiting Statistical Aspects of Nuclear Material Accounting
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Burr, T.; Hamada, M. S.
2013-01-01
Nuclear material accounting (NMA) is the only safeguards system whose benefits are routinely quantified. Process monitoring (PM) is another safeguards system that is increasingly used, and one challenge is how to quantify its benefit. This paper considers PM in the role of enabling frequent NMA, which is referred to as near-real-time accounting (NRTA). We quantify NRTA benefits using period-driven and data-driven testing. Period-driven testing makes a decision to alarm or not at fixed periods. Data-driven testing decides as the data arrive whether to alarm or continue testing. The difference between the period-driven and data-driven viewpoints is illustrated using one-year and two-year periods. For both one-year and two-year periods, period-driven NMA using once-per-year cumulative material unaccounted for (CUMUF) testing is compared to more frequent Shewhart and joint sequential CUSUM testing using either MUF or standardized, independently transformed MUF (SITMUF) data. We show that the data-driven viewpoint is appropriate for NRTA and that it can be used to compare safeguards effectiveness. In addition to providing period-driven and data-driven viewpoints, new features include assessing the impact of uncertainty in the estimated covariance matrix of the MUF sequence and the impact of both random and systematic measurement errors.
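A hedged sketch of the data-driven sequential testing described above, using Page's one-sided CUSUM on a standardized MUF sequence. The reference value k and threshold h below are illustrative choices, not the paper's values:

```python
def page_cusum(muf, k=0.5, h=4.0):
    """One-sided Page CUSUM on a standardized MUF sequence:
    S_t = max(0, S_{t-1} + x_t - k); alarm when S_t > h.
    Returns (alarm index or None, final statistic)."""
    s = 0.0
    for t, x in enumerate(muf):
        s = max(0.0, s + x - k)
        if s > h:
            return t, s
    return None, s

# In-control noise stays below the threshold...
quiet = [0.2, -0.1, 0.3, 0.0, -0.2, 0.1, 0.2, -0.3]
print(page_cusum(quiet))
# ...while a sustained 1.5-sigma shift (e.g., a protracted loss) alarms
shifted = quiet + [1.5] * 8
print(page_cusum(shifted))
```

The data-driven character is visible in the loop: the decision to alarm can occur at any balance period rather than only at a fixed annual closing, which is the NRTA advantage the paper quantifies.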
On the energy-momentum tensor in Moyal space
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Balasin, Herbert; Blaschke, Daniel N.; Gieres, François; Schweda, Manfred
2015-06-26
We study the properties of the energy-momentum tensor of gauge fields coupled to matter in non-commutative (Moyal) space. In general, the non-commutativity affects the usual conservation law of the tensor as well as its transformation properties (gauge covariance instead of gauge invariance). It is known that the conservation of the energy-momentum tensor can be achieved by a redefinition involving another star-product. Furthermore, for a pure gauge theory it is always possible to define a gauge invariant energy-momentum tensor by means of a Wilson line. We show that the latter two procedures are incompatible with each other if couplings of gauge fields to matter fields (scalars or fermions) are considered: the gauge invariant tensor (constructed via Wilson line) does not allow for a redefinition assuring its conservation, and vice versa the introduction of another star-product does not allow for gauge invariance by means of a Wilson line.
Sole, Claudio V.; Calvo, Felipe A.; Polo, Alfredo; Cambeiro, Mauricio; Gonzalez, Carmen; Desco, Manuel; Martinez-Monge, Rafael
2015-08-01
Purpose: To assess long-term outcomes and toxicity of intraoperative electron-beam radiation therapy (IOERT) in the management of pediatric patients with Ewing sarcomas (EWS) and rhabdomyosarcomas (RMS). Methods and Materials: Seventy-one sarcoma (EWS n=37, 52%; RMS n=34, 48%) patients underwent IOERT for primary (n=46, 65%) or locally recurrent sarcomas (n=25, 35%) from May 1983 to November 2012. Local control (LC), overall survival (OS), and disease-free survival were estimated using Kaplan-Meier methods. For survival outcomes, potential associations were assessed in univariate and multivariate analyses using the Cox proportional hazards model. Results: After a median follow-up of 72 months (range, 4-310 months), 10-year LC, disease-free survival, and OS were 74%, 57%, and 68%, respectively. In multivariate analysis after adjustment for other covariates, disease status (P=.04 and P=.05) and R1 margin status (P<.01 and P=.04) remained significantly associated with LC and OS. Nine patients (13%) reported severe chronic toxicity events (all grade 3). Conclusions: A multimodal IOERT-containing approach is a well-tolerated component of treatment for pediatric EWS and RMS patients, allowing reduction or substitution of external beam radiation exposure while maintaining high local control rates.
Normal form decomposition for Gaussian-to-Gaussian superoperators
De Palma, Giacomo; Mari, Andrea; Giovannetti, Vittorio; Holevo, Alexander S.
2015-05-15
In this paper, we explore the set of linear maps sending the set of quantum Gaussian states into itself. These maps are in general not positive, a feature which can be exploited as a test to check whether a given quantum state belongs to the convex hull of Gaussian states (if one of the considered maps sends it into a non-positive operator, the above state is certified not to belong to the set). Generalizing a result known to be valid under the assumption of complete positivity, we provide a characterization of these Gaussian-to-Gaussian (not necessarily positive) superoperators in terms of their action on the characteristic function of the inputs. For the special case of one-mode mappings, we also show that any Gaussian-to-Gaussian superoperator can be expressed as a concatenation of a phase-space dilatation, followed by the action of a completely positive Gaussian channel, possibly composed with a transposition. While a similar decomposition is shown to fail in the multi-mode scenario, we prove that it still holds at least under the further hypothesis of homogeneous action on the covariance matrix.
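The action on covariance matrices mentioned in the abstract can be made concrete. Under the standard convention (assumed here, ħ = 1, one-mode symplectic form Ω), a Gaussian map sends σ → XσXᵀ + Y and is completely positive iff Y + (i/2)(Ω − XΩXᵀ) ≥ 0. The sketch below checks that condition numerically: an attenuator channel passes, while a bare phase-space dilatation fails, matching the abstract's point that such Gaussian-to-Gaussian maps need not be (completely) positive.

```python
import numpy as np

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])  # one-mode symplectic form

def is_cp_gaussian(X, Y, tol=1e-10):
    """Check the standard complete-positivity condition for a Gaussian map
    acting on covariance matrices as sigma -> X sigma X^T + Y:
    Y + (i/2)(Omega - X Omega X^T) >= 0  (hbar = 1 convention assumed)."""
    M = Y + 0.5j * (Omega - X @ Omega @ X.T)  # Hermitian by construction
    return bool(np.min(np.linalg.eigvalsh(M)) >= -tol)

eta = 0.6
# Beam-splitter-style attenuator with the minimal added noise: CP
attenuator = is_cp_gaussian(np.sqrt(eta) * np.eye(2), 0.5 * (1 - eta) * np.eye(2))
# Bare phase-space dilatation (lambda > 1, no added noise): not CP
dilatation = is_cp_gaussian(1.5 * np.eye(2), np.zeros((2, 2)))
print(attenuator, dilatation)  # True False
```

The decomposition result of the paper says precisely that the non-CP part of a one-mode Gaussian-to-Gaussian map can be isolated into such a dilatation (possibly with a transposition), with the remainder a CP Gaussian channel.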
Receipt of Guideline-Concordant Treatment in Elderly Prostate Cancer Patients
Chen, Ronald C., E-mail: Ronald_chen@med.unc.edu [Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Sheps Center for Health Services Research, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Carpenter, William R. [Sheps Center for Health Services Research, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Gillings School of Global Public Health, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Hendrix, Laura H. [Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Bainbridge, John [Sheps Center for Health Services Research, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Wang, Andrew Z. [Department of Radiation Oncology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Nielsen, Matthew E. [Sheps Center for Health Services Research, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Lineberger Comprehensive Cancer Center, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); Department of Urology, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina (United States); and others
2014-02-01
Purpose: To examine the proportion of elderly prostate cancer patients receiving guideline-concordant treatment, using the Surveillance, Epidemiology, and End Results (SEER)-Medicare linked database. Methods and Materials: A total of 29,001 men diagnosed in 2004-2007 with localized prostate cancer, aged 66 to 79 years, were included. We characterized the proportion of men who received treatment concordant with the National Comprehensive Cancer Network guidelines, stratified by risk group and age. Logistic regression was used to examine covariates associated with receipt of guideline-concordant management. Results: Guideline concordance was 79%-89% for patients with low- or intermediate-risk disease. Among high-risk patients, 66.6% of those aged 66-69 years received guideline-concordant management, compared with 51.9% of those aged 75-79 years. Discordance was mainly due to conservative management (no treatment or hormone therapy alone). Among the subgroup of patients aged ≥76 years with no measured comorbidity, findings were similar. On multivariable analysis, older age (75-79 vs 66-69 years, odds ratio 0.51, 95% confidence interval 0.50-0.57) was associated with a lower likelihood of guideline concordance for high-risk prostate cancer, but comorbidity was not. Conclusions: There is undertreatment of elderly but healthy patients with high-risk prostate cancer, the most aggressive form of this disease.
Quantization of systems with temporally varying discretization. I. Evolving Hilbert spaces
Höhn, Philipp A.
2014-08-15
A temporally varying discretization often features in discrete gravitational systems and appears in lattice field theory models subject to a coarse graining or refining dynamics. To better understand such discretization changing dynamics in the quantum theory, a corresponding formalism for constrained variational discrete systems is constructed. While this paper focuses on global evolution moves and, for simplicity, restricts to flat configuration spaces R^N, a Paper II [P. A. Höhn, Quantization of systems with temporally varying discretization. II. Local evolution moves, J. Math. Phys., e-print http://arxiv.org/abs/arXiv:1401.7731 [gr-qc].] discusses local evolution moves. In order to link the covariant and canonical pictures, the dynamics of the quantum states is generated by propagators which satisfy the canonical constraints and are constructed using the action and group averaging projectors. This projector formalism offers a systematic method for tracing and regularizing divergences in the resulting state sums. Non-trivial coarse graining evolution moves lead to non-unitary, and thus irreversible, projections of physical Hilbert spaces and Dirac observables, such that these concepts become evolution move dependent on temporally varying discretizations. The formalism is illustrated in a toy model mimicking a creation from nothing. Subtleties arising when applying such a formalism to quantum gravity models are discussed.
GravitoMagnetic force in modified Newtonian dynamics
Exirifard, Qasem
2013-08-01
We introduce the Gauge Vector-Tensor (GVT) theory by extending the AQUAL approach to the GravitoElectroMagnetism (GEM) approximation of gravity. GVT is a generally covariant theory of gravity composed of a pseudo-Riemannian metric and two U(1) gauge connections that reproduces MOND in the limit of very weak gravitational fields while remaining consistent with Einstein-Hilbert gravity in the limit of strong and Newtonian gravitational fields. GVT also provides a simple framework to study the GEM approximation to gravity. We illustrate that the gravitomagnetic force at the edge of a galaxy can be in accord with either GVT or ΛCDM but not both. We also study the physics of the GVT theory around the gravitational saddle point of the Sun-Jupiter system. We note that a conclusive refutation of the GVT theory demands measuring either both the gravitoelectric and gravitomagnetic fields inside the Sun-Jupiter MOND window, or the gravitoelectric field inside two different solar GVT MOND windows. The GVT theory, however, would be favored by observing an anomaly in the gravitoelectric field inside a single MOND window.
Rucci, A.; Vasco, D.W.; Novali, F.
2010-04-01
Deformation in the overburden proves useful in deducing spatial and temporal changes in the volume of a producing reservoir. Based upon these changes we estimate diffusive travel times associated with the transient flow due to production, and then, as the solution of a linear inverse problem, the effective permeability of the reservoir. An advantage of an approach based upon travel times, as opposed to one based upon the amplitude of surface deformation, is that it is much less sensitive to the exact geomechanical properties of the reservoir and overburden. Inequalities constrain the inversion, under the assumption that fluid production only results in pore volume decreases within the reservoir. We apply the formulation to satellite-based estimates of deformation in the material overlying a thin gas production zone at the Krechba field in Algeria. The peak displacement after three years of gas production is approximately 0.5 cm, overlying the eastern margin of the anticlinal structure defining the gas field. Using data from 15 irregularly spaced images of range change, we calculate the diffusive travel times associated with the startup of a gas production well. The inequality constraints are incorporated into the estimates of model parameter resolution and covariance, improving the resolution by roughly 30 to 40%.
Virtuality Distributions and γγ* → π⁰ Transition at Handbag Level
Radyushkin, Anatoly V.
2015-09-01
We outline a new approach to transverse momentum dependence in hard processes, using as an example the exclusive transition γ*γ → π⁰ at the handbag level. We start with the coordinate representation for a matrix element ⟨p|O(0,z)|0⟩ of a bilocal operator O(0,z) describing a hadron with momentum p. Treated as a function of (pz) and z², it is parametrized through a virtuality distribution amplitude (VDA) Φ(x, σ), with x being Fourier-conjugate to (pz) and σ Laplace-conjugate to z². For intervals with z⁺ = 0, we introduce the transverse momentum distribution amplitude (TMDA) Ψ(x, k_⊥), and write it in terms of the VDA Φ(x, σ). The results of covariant calculations, written in terms of Φ(x, σ), are converted into expressions involving Ψ(x, k_⊥). We propose simple models for soft VDAs/TMDAs, and use them to compare handbag results with experimental (BaBar and BELLE) data on the pion transition form factor.
Oblozinsky, P.; Herman, M.; Mughabghab, S.F.
2010-10-01
This chapter describes the current status of evaluated nuclear data for nuclear technology applications. We start with evaluation procedures for neutron-induced reactions, focusing on incident energies from thermal energy up to 20 MeV, though higher energies are also mentioned. We then examine the status of evaluated neutron data for the actinides, which play a dominant role in most applications, and for coolants/moderators, structural materials and fission products. Next we discuss neutron covariance data that characterize uncertainties and correlations. We explain how modern evaluated nuclear data libraries are validated against an extensive set of integral benchmark experiments. Afterwards, we briefly examine other data of importance for nuclear technology, including fission yields, thermal neutron scattering and decay data. A description of three major evaluated nuclear data libraries is provided, including the latest version of the US library ENDF/B-VII.0, the European JEFF-3.1 and the Japanese JENDL-3.3. A brief introduction is made to current web retrieval systems that allow easy access to a vast amount of up-to-date evaluated nuclear data for nuclear technology applications.
Turner, D P; Ritts, W D; Wharton, S; Thomas, C; Monson, R; Black, T A
2009-02-26
The combination of satellite remote sensing and carbon cycle models provides an opportunity for regional- to global-scale monitoring of terrestrial gross primary production, ecosystem respiration, and net ecosystem production. FPAR (the fraction of photosynthetically active radiation absorbed by the plant canopy) is a critical input to diagnostic models; however, little is known about the relative effectiveness of FPAR products from different satellite sensors or about the sensitivity of flux estimates to different parameterization approaches. In this study, we used multiyear observations of carbon flux at four eddy covariance flux tower sites within the conifer biome to evaluate these factors. FPAR products from the MODIS and SeaWiFS sensors, and the effects of single-site vs. cross-site parameter optimization, were tested with the CFLUX model. The SeaWiFS FPAR product showed greater dynamic range across sites and resulted in slightly reduced flux estimation errors relative to the MODIS product when using cross-site optimization. With site-specific parameter optimization, the flux model was effective in capturing seasonal and interannual variation in the carbon fluxes at these sites. The cross-site prediction errors were lower when using parameters from a cross-site optimization compared to parameter sets from optimization at single sites. These results support the practice of multisite optimization within a biome for parameterization of diagnostic carbon flux models.
Spherical collapse in Galileon gravity: fifth force solutions, halo mass function and halo bias
Barreira, Alexandre; Li, Baojiu; Baugh, Carlton M.; Pascoli, Silvia
2013-11-01
We study spherical collapse in the Quartic and Quintic Covariant Galileon gravity models within the framework of the excursion set formalism. We derive the nonlinear spherically symmetric equations in the quasi-static and weak-field limits, focusing on model parameters that fit current CMB, SNIa and BAO data. We demonstrate that the equations of the Quintic model do not admit physical solutions of the fifth force in high density regions, which prevents the study of structure formation in this model. For the Quartic model, we show that the effective gravitational strength deviates from the standard value at late times (z ≲ 1), becoming larger if the density is low, but smaller if the density is high. This shows that the Vainshtein mechanism at high densities is not enough to screen all of the modifications of gravity. This makes halos that collapse at z ≲ 1 feel an overall weaker gravity, which suppresses halo formation. However, the matter density in the Quartic model is higher than in standard ΛCDM, which boosts structure formation and dominates over the effect of the weaker gravity. In the Quartic model there is a significant overabundance of high-mass halos relative to ΛCDM. Dark matter halos are also less biased than in ΛCDM, with the difference increasing appreciably with halo mass. However, our results suggest that the bias may not be small enough to fully reconcile the predicted matter power spectrum with LRG clustering data.
2D stochastic-integral models for characterizing random grain noise in titanium alloys
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Cherry, Matthew; Pilchak, Adam; Knopp, Jeremy S.; Blodgett, Mark P.
2014-02-18
We extend our previous work, in which we applied high-dimensional model representation (HDMR) and analysis of variance (ANOVA) concepts to the characterization of a metallic surface that has undergone a shot-peening treatment to reduce residual stresses and has, therefore, become a random conductivity field. That example was treated as a one-dimensional problem, because those were the only data available. In this study, we develop a more rigorous two-dimensional model for characterizing random, anisotropic grain noise in titanium alloys. Such a model is necessary if we are to accurately capture the 'clumping' of crystallites into long chains that appears during the processing of the metal into a finished product. The mathematical model starts with an application of the Karhunen-Loève (K-L) expansion for the random Euler angles that characterize the orientation of each crystallite in the sample. The random orientation of each crystallite then defines the stochastic nature of the electrical conductivity tensor of the metal. We study two possible covariances, Gaussian and double-exponential, as the kernel of the K-L integral equation, and find that of the two the double-exponential appears to match measurements more closely. Results based on data from a Ti-7Al sample will be given, and further applications of HDMR and ANOVA will be discussed.
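A hedged numerical sketch of the K-L machinery described above, in its discrete form: eigendecompose a covariance matrix built from a stationary kernel on a 1-D grid, then synthesize a random-field sample from the leading modes. The grid size, correlation length, and kernel parameters are illustrative, not the paper's values.

```python
import numpy as np

def kl_modes(points, corr_len, kernel="double-exponential"):
    """Discrete Karhunen-Loeve expansion: eigendecompose the covariance
    matrix C_ij = k(|x_i - x_j|) built from a stationary kernel."""
    d = np.abs(points[:, None] - points[None, :])
    if kernel == "double-exponential":
        C = np.exp(-d / corr_len)
    else:  # Gaussian kernel
        C = np.exp(-(d / corr_len) ** 2)
    lam, phi = np.linalg.eigh(C)
    order = np.argsort(lam)[::-1]  # largest eigenvalues first
    return lam[order], phi[:, order], C

x = np.linspace(0.0, 1.0, 50)
lam, phi, C = kl_modes(x, corr_len=0.2)

# One random-field realization: sum of modes weighted by sqrt(lambda)*xi
rng = np.random.default_rng(1)
field = phi @ (np.sqrt(np.clip(lam, 0.0, None)) * rng.standard_normal(len(x)))
print(lam[:3], field.shape)
```

The eigenvalue decay controls how many modes are needed; the slower decay of the double-exponential kernel (non-smooth at the origin) is one practical difference from the Gaussian kernel the abstract compares against.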
Parma, Edward J.,; Vehar, David W.; Lippert, Lance L.; Griffin, Patrick J.; Naranjo, Gerald E.; Luker, Spencer M.
2015-06-01
This document presents the facility-recommended characterization of the neutron, prompt gamma-ray, and delayed gamma-ray radiation fields in the Annular Core Research Reactor (ACRR) for the polyethylene-lead-graphite (PLG) bucket in the central cavity on the 32-inch pedestal at the core centerline. The designation for this environment is ACRR-PLG-CC-32-cl. The neutron, prompt gamma-ray, and delayed gamma-ray energy spectra, uncertainties, and covariance matrices are presented, as well as radial and axial neutron and gamma-ray fluence profiles within the experiment area of the bucket. Recommended constants are given to facilitate the conversion of various dosimetry readings into radiation metrics desired by experimenters. Representative pulse operations are presented with conversion examples. Acknowledgements: The authors wish to thank the Annular Core Research Reactor staff and the Radiation Metrology Laboratory staff for their support of this work. Thanks also to David Ames for his assistance in running MCNP on the Sandia parallel machines.
Daily diaries of respiratory symptoms and air pollution: Methodological issues and results
Schwartz, J.; Wypij, D.; Dockery, D.; Ware, J.; Spengler, J.; Ferris, B. Jr.; Zeger, S.
1991-01-01
Daily diaries of respiratory symptoms are a powerful technique for detecting acute effects of air pollution exposure. While conceptually simple, these diary studies can be difficult to analyze. The daily symptom rates are highly correlated, even after adjustment for covariates, and this lack of independence must be considered in the analysis. Possible approaches include the use of incidence instead of prevalence rates and autoregressive models. Heterogeneity among subjects also induces dependencies in the data. These can be addressed by stratification and by two-stage models such as those developed by Korn and Whittemore. These approaches have been applied to two data sets: a cohort of school children participating in the Harvard Six Cities Study and a cohort of student nurses in Los Angeles. Both data sets provide evidence of autocorrelation and heterogeneity. Controlling for autocorrelation corrects the precision estimates, and because diary data are usually positively autocorrelated, this leads to larger variance estimates. Controlling for heterogeneity among subjects appears to increase the effect sizes for air pollution exposure. Preliminary results indicate associations between sulfur dioxide and cough incidence in children and between nitrogen dioxide and phlegm incidence in student nurses.
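The point above, that positive autocorrelation inflates the variance of estimates, can be illustrated with a simple large-sample AR(1)-style correction to the variance of a sample mean, Var(x̄) ≈ (s²/n)(1+ρ)/(1−ρ). This is a sketch with a synthetic symptom-rate series, not the two-stage models of Korn and Whittemore discussed in the abstract.

```python
def lag1_autocorr(x):
    """Sample lag-1 autocorrelation of a series."""
    n = len(x)
    m = sum(x) / n
    var = sum((v - m) ** 2 for v in x) / n
    cov1 = sum((x[i] - m) * (x[i + 1] - m) for i in range(n - 1)) / n
    return cov1 / var

def mean_variance_ar1(x):
    """Variance of the sample mean, inflated for lag-1 autocorrelation
    via the AR(1) large-sample factor (1 + rho) / (1 - rho)."""
    n = len(x)
    m = sum(x) / n
    s2 = sum((v - m) ** 2 for v in x) / (n - 1)
    rho = lag1_autocorr(x)
    return (s2 / n) * (1 + rho) / (1 - rho), rho

# A persistent (positively autocorrelated) daily symptom-rate series
series = [0.10, 0.12, 0.15, 0.14, 0.11, 0.09, 0.08, 0.10, 0.13, 0.16,
          0.17, 0.15, 0.12, 0.10, 0.09, 0.11, 0.14, 0.15, 0.13, 0.12]
v_adj, rho = mean_variance_ar1(series)
v_naive = v_adj * (1 - rho) / (1 + rho)  # what ignoring correlation gives
print(rho, v_adj > v_naive)
```

With ρ > 0 the adjusted variance exceeds the naive one, which is exactly why the abstract notes that correcting for autocorrelation in diary data "leads to larger variance estimates."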
Konomi, Bledar A.; Karagiannis, Georgios; Sarkar, Avik; Sun, Xin; Lin, Guang
2014-05-16
Computer experiments (numerical simulations) are widely used in scientific research to study and predict the behavior of complex systems, which usually have responses consisting of a set of distinct outputs. The computational cost of high-resolution simulations is often prohibitive, making parametric studies at different input values impractical. To overcome these difficulties we develop a Bayesian treed multivariate Gaussian process (BTMGP) as an extension of the Bayesian treed Gaussian process (BTGP) in order to model and evaluate a multivariate process. A suitable choice of covariance function and prior distributions facilitates the different Markov chain Monte Carlo (MCMC) moves. We utilize this model to sequentially sample the input space for the most informative values, taking into account model uncertainty and expertise gained. A simulation study demonstrates the use of the proposed method and compares it with alternative approaches. We apply the sequential sampling technique and the BTMGP to model the multiphase flow in a full-scale regenerator of a carbon capture unit. The application presented in this paper is an important tool for research into carbon dioxide emissions from thermal power plants.
Argonne Terrestrial Carbon Cycle Data from Batavia Prairie and Agricultural Sites
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Matamala, Roser [ANL; Jastrow, Julie D.; Lesht, Barry [ANL; Cook, David [ANL; Pekour, Mikhail [ANL; Gonzalez-Meler, Miquel A. [University of Illinois at Chicago
Carbon dioxide fluxes and stocks in terrestrial ecosystems are key measurements needed to constrain quantification of regional carbon sinks and sources and the mechanisms controlling them. This information is required to produce a sound carbon budget for North America. This project examines CO2 and energy fluxes from agricultural land and from restored tallgrass prairie to compare their carbon sequestration potentials. The study integrates eddy covariance measurements with biometric measurements of plant and soil carbon stocks for two systems in northeastern Illinois: 1) long-term cultivated land in corn-soybean rotation with conventional tillage, and 2) a 15 year-old restored prairie that represents a long-term application of CRP conversion of cultivated land to native vegetation. The study contributes to the North American Carbon Program (NACP) by providing information on the magnitude and distribution of carbon stocks and the processes that control carbon dynamics in cultivated and CRP-restored land in the Midwest. The prairie site has been functioning since October 2004 and the agricultural site since July 2005. (From http://www.atmos.anl.gov/FERMI/index.html)
The relationship between interannual and long-term cloud feedbacks
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Zhou, Chen; Zelinka, Mark D.; Dessler, Andrew E.; Klein, Stephen A.
2015-12-11
The analyses of Coupled Model Intercomparison Project phase 5 simulations suggest that climate models with more positive cloud feedback in response to interannual climate fluctuations also have more positive cloud feedback in response to long-term global warming. Ensemble mean vertical profiles of cloud change in response to interannual and long-term surface warming are similar, and the ensemble mean cloud feedback is positive on both timescales. However, the average long-term cloud feedback is smaller than the interannual cloud feedback, likely due to differences in the surface warming pattern on the two timescales. Low cloud cover (LCC) change in response to interannual and long-term global surface warming is found to be well correlated across models and explains over half of the covariance between interannual and long-term cloud feedback. In conclusion, the intermodel correlation of LCC across timescales likely results from model-specific sensitivities of LCC to sea surface warming.
Interpretation of the MEG-MUSIC scan in biomagnetic source localization
Mosher, J.C.; Lewis, P.S.; Leahy, R.M.
1993-09-01
MEG-MUSIC is a new approach to MEG source localization. MEG-MUSIC is based on a spatio-temporal source model in which the observed biomagnetic fields are generated by a small number of current dipole sources with fixed positions/orientations and varying strengths. From the spatial covariance matrix of the observed fields, a signal subspace can be identified. The rank of this subspace is equal to the number of elemental sources present. This signal subspace is used in a projection metric that scans the three-dimensional head volume. Given a perfect signal subspace estimate and a perfect forward model, the metric will peak at unity at each dipole location. In practice, the signal subspace estimate is contaminated by noise, which in turn yields MUSIC peaks that are less than unity. Previously we examined the lower bounds on localization error, independent of the choice of localization procedure. In this paper, we analyze the effects of noise and temporal coherence on the signal subspace estimate and the resulting effects on the MEG-MUSIC peaks.
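A minimal numerical sketch of the MUSIC projection metric described above, with a synthetic 8-sensor array and a hypothetical source topography (all data simulated, not MEG recordings, and the lead-field vectors are arbitrary stand-ins for a forward model):

```python
import numpy as np

def music_metric(R, lead_fields, n_sources):
    """MUSIC scan: project each candidate lead-field vector a onto the
    signal subspace spanned by the top eigenvectors of the spatial
    covariance R; metric = ||P_s a||^2 / ||a||^2, near 1 at true sources."""
    lam, U = np.linalg.eigh(R)
    Us = U[:, np.argsort(lam)[::-1][:n_sources]]  # signal-subspace basis
    Ps = Us @ Us.T                                # projector onto it
    return [float(a @ Ps @ a / (a @ a)) for a in lead_fields]

rng = np.random.default_rng(0)
n_sensors, n_times = 8, 200
a_true = rng.standard_normal(n_sensors)   # topography of one dipole source
s = rng.standard_normal(n_times)          # its varying strength over time
data = np.outer(a_true, s) + 0.05 * rng.standard_normal((n_sensors, n_times))
R = data @ data.T / n_times               # spatial covariance of the fields

a_other = rng.standard_normal(n_sensors)  # a non-source scan location
m_true, m_other = music_metric(R, [a_true, a_other], n_sources=1)
print(m_true, m_other)
```

As the abstract notes, with noise in the covariance estimate the peak at the true location falls short of unity, while off-source locations score much lower.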
AllamehZadeh, Mostafa
2011-12-15
A Quadratic Neural Networks (QNNs) model has been developed for the seismic source classification problem at regional distances, using ARMA coefficients determined by Artificial Neural Networks (ANNs). We have devised a supervised neural system to discriminate between earthquakes and chemical explosions, with filter coefficients obtained from windowed P-wave phase spectra (15 s). First, we preprocess the recorded signals to cancel out instrumental and attenuation site effects and to obtain a compact representation of the seismic records. Second, we use a QNNs system to obtain ARMA coefficients for feature extraction in the discrimination problem. The derived coefficients are then applied to the neural system for training and classification. In this study, we explore the possibility of using single-station three-component (3C) covariance matrix traces from a priori known explosion sites (learning) to automatically recognize subsequent explosions from the same site. The results show that this feature extraction gives the best classifier for seismic signals and performs significantly better than other classification methods. The tested events include 36 chemical explosions at the Semipalatinsk test site in Kazakhstan and 61 earthquakes (mb = 5.0-6.5) recorded by the Iranian National Seismic Network (INSN). Fully correct (100%) decisions were obtained between site explosions and some of the non-site events. This approach to event discrimination is very flexible, since several 3C stations can be combined.
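The single-station 3C covariance features mentioned above can be illustrated with a minimal sketch, assuming synthetic data: the 3x3 covariance of the three components summarizes particle-motion polarization, and its eigenvalue ratios are the kind of quantity a classifier could consume. The polarization vector and noise level are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical 3-component (Z, N, E) record: rectilinear motion polarized
# along one direction, plus background noise.
n = 2000
polarization = np.array([0.8, 0.5, 0.33])
polarization /= np.linalg.norm(polarization)
source = rng.standard_normal(n)
signal = np.outer(polarization, source) + 0.1 * rng.standard_normal((3, n))

# Covariance-matrix features: the 3x3 covariance of the three components;
# eigenvalue ratios summarize polarization (candidate inputs for a classifier).
C = np.cov(signal)
eigvals = np.sort(np.linalg.eigvalsh(C))[::-1]
rectilinearity = 1.0 - (eigvals[1] + eigvals[2]) / (2.0 * eigvals[0])
```

Strongly rectilinear motion (rectilinearity near 1) is one physically interpretable feature; a real system would feed several such features, or the ARMA coefficients themselves, into the neural classifier.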
Couplings between Chern-Simons gravities and 2p-branes
Miskovic, Olivera; Zanelli, Jorge
2009-08-15
The interaction between Chern-Simons (CS) theories and localized external sources (2p-branes) is analyzed. This interaction generalizes the minimal coupling between a point charge (0-brane) and a gauge connection. The external currents that define the 2p-branes are covariantly constant (D-2p-1)-forms coupled to (2p-1) CS forms. The general expression for the sources--charged with respect to the corresponding gauge algebra--is presented, focusing on two special cases: 0-branes and (D-3)-branes. In any dimension, 0-branes are constructed as topological defects produced by a surface deficit of the (D-2)-sphere in anti-de Sitter space, and they are not constant-curvature spaces for D>3. They correspond to dimensionally continued black holes with negative mass. On the other hand, in the case of CS (super)gravities, the (D-3)-branes are naked conical singularities (topological defects) obtained by identification of points with a Killing vector. In 2+1 dimensions, extremal spinning branes of this type are Bogomol'nyi-Prasad-Sommerfield states. Stable (D-3)-branes are shown to exist in higher dimensions as well. Classical field equations are also discussed; in the presence of sources there is a large number of inequivalent and disconnected sectors in solution space.
Information geometry of Gaussian channels
Monras, Alex; Illuminati, Fabrizio
2010-06-15
We define a local Riemannian metric tensor in the manifold of Gaussian channels and the distance that it induces. We adopt an information-geometric approach and define a metric derived from the Bures-Fisher metric for quantum states. The resulting metric inherits several desirable properties from the Bures-Fisher metric and is operationally motivated by distinguishability considerations: It serves as an upper bound to the attainable quantum Fisher information for the channel parameters using Gaussian states, under generic constraints on the physically available resources. Our approach naturally includes the use of entangled Gaussian probe states. We prove that the metric enjoys some desirable properties like stability and covariance. As a by-product, we also obtain some general results in Gaussian channel estimation that are the continuous-variable analogs of previously known results in finite dimensions. We prove that optimal probe states are always pure and bounded in the number of ancillary modes, even in the presence of constraints on the reduced state input in the channel. This has experimental and computational implications. It limits the complexity of optimal experimental setups for channel estimation and reduces the computational requirements for the evaluation of the metric: Indeed, we construct a converging algorithm for its computation. We provide explicit formulas for computing the multiparametric quantum Fisher information for dissipative channels probed with arbitrary Gaussian states and provide the optimal observables for the estimation of the channel parameters (e.g., bath couplings, squeezing, and temperature).
McCuller, Lee Patrick
2015-12-01
The Holometer is designed to test for a Planck diffractive-scaling uncertainty in long-baseline position measurements due to an underlying noncommutative geometry, normalized to relate black hole entropy bounds of the Holographic principle to the now-finite number of position states. The experiment overlaps two independent 40 meter optical Michelson interferometers to detect the proposed uncertainty as a common broadband length fluctuation. 150 hours of instrument cross-correlation data are analyzed to test the prediction of a correlated noise magnitude of $7\times10^{−21}$ m/$\sqrt{\rm Hz}$ with an effective bandwidth of 750 kHz. The interferometers each have a quantum-limited sensitivity of $2.5\times 10^{−18}$ m/$\sqrt{\rm Hz}$, but their correlation, with a time-bandwidth product of $4\times 10^{11}$, digs between the noise floors in search of the covarying geometric jitter. The data exclude the tested model at 5 standard deviations. This exclusion is defended through analysis of the calibration methods for the instrument, as well as further sub-shot-noise characterization of the optical systems to limit spurious background correlations from undermining the signal.
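The cross-correlation strategy described above can be illustrated with a toy model: two channels share a tiny common fluctuation buried under independent noise roughly 20x larger in amplitude, and the averaged cross power converges to the common power while each auto power stays at the much higher noise floor. All numbers here are illustrative, not the instrument's.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two channels: a small shared signal plus large independent noise in each.
n = 4_000_000
common = 0.05 * rng.standard_normal(n)        # common signal, variance 0.0025
a = common + rng.standard_normal(n)           # channel A: + independent noise
b = common + rng.standard_normal(n)           # channel B: + independent noise

cross_power = float(np.mean(a * b))           # converges to the common power
auto_power = float(np.mean(a * a))            # dominated by the noise floor
```

The residual scatter of the cross estimate shrinks with the number of averaged samples, which is the sense in which a large time-bandwidth product "digs between the noise floors."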
Validation of MCNP6.1 for Criticality Safety of Pu-Metal, -Solution, and -Oxide Systems
Kiedrowski, Brian C.; Conlin, Jeremy Lloyd; Favorite, Jeffrey A.; Kahler, III, Albert C.; Kersting, Alyssa R.; Parsons, Donald K.; Walker, Jessie L.
2014-05-13
Guidance is offered to the Los Alamos National Laboratory Nuclear Criticality Safety division towards developing an Upper Subcritical Limit (USL) for MCNP6.1 calculations with ENDF/B-VII.1 nuclear data for three classes of problems: Pu-metal, -solution, and -oxide systems. A benchmark suite containing 1,086 benchmarks is prepared, and a sensitivity/uncertainty (S/U) method with a generalized linear least squares (GLLS) data adjustment is used to reject outliers, bringing the total to 959 usable benchmarks. For each class of problem, S/U methods are used to select relevant experimental benchmarks, and the calculational margin is computed using extreme value theory. A portion of the margin of subcriticality is defined considering both a detection limit for errors in codes and data and uncertainty/variability in the nuclear data library. The latter employs S/U methods with a GLLS data adjustment to find representative nuclear data covariances constrained by integral experiments, which are then used to compute uncertainties in k_{eff} from nuclear data. The USLs for the classes of problems are as follows: Pu metal, 0.980; Pu solutions, 0.973; dry Pu oxides, 0.978; dilute Pu oxide-water mixes, 0.970; and intermediate-spectrum Pu oxide-water mixes, 0.953.
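The step of propagating nuclear-data covariances into a k_eff uncertainty is conventionally done with the "sandwich rule" of S/U analysis: var(k) = S C S^T, with S the vector of relative sensitivities and C the relative covariance matrix of the data. The sketch below uses invented numbers purely to show the mechanics, not values from the report.

```python
import numpy as np

# Sandwich rule: variance of k_eff induced by nuclear-data covariances.
# S holds relative sensitivities (dk/k per dp/p) for 3 hypothetical parameters;
# C is their relative covariance matrix. Numbers are illustrative only.
S = np.array([0.30, -0.10, 0.05])
C = np.array([[0.0004, 0.0001, 0.0],
              [0.0001, 0.0009, 0.0],
              [0.0,    0.0,    0.0001]])

var_k = float(S @ C @ S)       # S C S^T for a 1-D sensitivity vector
sigma_k = var_k ** 0.5         # one-sigma uncertainty in k_eff from data
```

The off-diagonal terms of C matter: here the positive correlation between the first two parameters partially cancels against their opposite-signed sensitivities.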
Higher coronary heart disease and heart attack morbidity in Appalachian coal mining regions
Hendryx, M.; Zullig, K.J.
2009-11-15
This study analyzes the U.S. 2006 Behavioral Risk Factor Surveillance System survey data (N = 235,783) to test whether self-reported cardiovascular disease rates are higher in Appalachian coal mining counties compared to other counties after control for other risks. Dependent variables include self-reported measures of ever (1) being diagnosed with cardiovascular disease (CVD) or with a specific form of CVD including (2) stroke, (3) heart attack, or (4) angina or coronary heart disease (CHD). Independent variables included coal mining, smoking, BMI, drinking, physician supply, diabetes co-morbidity, age, race/ethnicity, education, income, and others. SUDAAN Multilog models were estimated, and odds ratios tested for coal mining effects. After control for covariates, people in Appalachian coal mining areas reported significantly higher risk of CVD (OR = 1.22, 95% CI = 1.14-1.30), angina or CHD (OR = 1.29, 95% CI = 1.19-1.39) and heart attack (OR = 1.19, 95% CI = 1.10-1.30). Effects were present for both men and women. Cardiovascular diseases have been linked to both air and water contamination in ways consistent with toxicants found in coal and coal processing. Future research is indicated to assess air and water quality in coal mining communities in Appalachia, with corresponding environmental programs and standards established as indicated.
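The odds-ratio-with-confidence-interval reporting above follows a standard recipe that is easy to sketch from a 2x2 table (the counts below are hypothetical, not the survey data): OR = ad/bc, with a Wald 95% interval built on the log scale.

```python
import math

# Illustrative 2x2 table (invented counts): rows are exposed (coal-mining
# county) vs unexposed; columns are cases vs non-cases of a diagnosis.
a, b = 300, 1700     # exposed: cases, non-cases
c, d = 250, 1750     # unexposed: cases, non-cases

odds_ratio = (a * d) / (b * c)
se_log_or = math.sqrt(1/a + 1/b + 1/c + 1/d)          # SE of log(OR)
lo = math.exp(math.log(odds_ratio) - 1.96 * se_log_or)
hi = math.exp(math.log(odds_ratio) + 1.96 * se_log_or)
```

The paper's models additionally adjust for covariates (smoking, BMI, age, and so on), which a crude 2x2 table cannot do; that adjustment is what multivariable logistic regression provides.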
McClay, Joseph L.; Adkins, Daniel E.; Isern, Nancy G.; O'Connell, Thomas M.; Wooten, Jan B.; Zedler, Barbara K.; Dasika, Madhukar S.; Webb, B. T.; Webb-Robertson, Bobbie-Jo M.; Pounds, Joel G.; Murrelle, Edward L.; Leppert, Mark F.; van den Oord, Edwin J.
2010-06-04
Chronic obstructive pulmonary disease (COPD), characterized by chronic airflow limitation, is a serious and growing public health concern. The major environmental risk factor for COPD is tobacco smoking, but the biological mechanisms underlying COPD are not well understood. In this study, we used proton nuclear magnetic resonance (1H-NMR) spectroscopy to identify and quantify metabolites associated with lung function in COPD. Plasma and urine were collected from 197 adults with COPD and from 195 adults without COPD. Samples were assayed using a 600 MHz NMR spectrometer, and the resulting spectra were analyzed against quantitative spirometric measures of lung function. After correcting for false discoveries and adjusting for covariates (sex, age, smoking) several spectral regions in urine were found to be significantly associated with baseline lung function. These regions correspond to the metabolites trigonelline, hippurate and formate. Concentrations of each metabolite, standardized to urinary creatinine, were associated with baseline lung function (minimum p-value = 0.0002 for trigonelline). No significant associations were found with plasma metabolites. Two of the three urinary metabolites positively associated with baseline lung function, i.e. hippurate and formate, are often related to gut microflora. This suggests that the microbiome composition is variable between individuals with different lung function. Alternatively, the nature and origins of all three associated metabolites may reflect lifestyle differences affecting overall health. Our results will require replication and validation, but demonstrate the utility of NMR metabolomics as a screening tool for identifying novel biomarkers of lung disease or disease risk.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Tang, Shuaiqi; Zhang, Minghua; Xie, Shaocheng
2016-01-05
Large-scale atmospheric forcing data can greatly impact the simulations of atmospheric process models including Large Eddy Simulations (LES), Cloud Resolving Models (CRMs) and Single-Column Models (SCMs), and impact the development of physical parameterizations in global climate models. This study describes the development of an ensemble variationally constrained objective analysis of atmospheric large-scale forcing data and its application to evaluate the cloud biases in the Community Atmospheric Model (CAM5). Sensitivities of the variational objective analysis to background data, error covariance matrix and constraint variables are described and used to quantify the uncertainties in the large-scale forcing data. Application of the ensemble forcing in the CAM5 SCM during the March 2000 intensive operational period (IOP) at the Southern Great Plains (SGP) of the Atmospheric Radiation Measurement (ARM) program shows systematic biases in the model simulations that cannot be explained by the uncertainty of large-scale forcing data, which points to the deficiencies of physical parameterizations. The SCM is shown to overestimate high clouds and underestimate low clouds. These biases are found to also exist in the global simulation of CAM5 when it is compared with satellite data.
BHR equations re-derived with immiscible particle effects
Schwarzkopf, John Dennis; Horwitz, Jeremy A.
2015-05-01
Compressible and variable density turbulent flows with dispersed phase effects are found in many applications ranging from combustion to cloud formation. These types of flows are among the most challenging to simulate. While the exact equations governing a system of particles and fluid are known, computational resources limit the scale and detail that can be simulated in this type of problem. Therefore, a common method is to simulate averaged versions of the flow equations, which still capture the salient physics and are relatively less computationally expensive. Besnard developed such a model for variable density miscible turbulence, where ensemble-averaging was applied to the flow equations to yield a set of filtered equations. Besnard further derived transport equations for the Reynolds stresses, the turbulent mass flux, and the density-specific volume covariance, to help close the filtered momentum and continuity equations. We re-derive the exact BHR closure equations, which include integral terms owing to immiscible effects. Physical interpretations of the additional terms are proposed along with simple models. The goal of this work is to extend the BHR model to allow for the simulation of turbulent flows where an immiscible dispersed phase is non-trivially coupled with the carrier phase.
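The second moments named in this abstract can be made concrete with a small sketch, assuming synthetic fluctuating fields: the turbulent mass flux a = ⟨ρ'u'⟩/⟨ρ⟩ and the density-specific-volume covariance b = -⟨ρ'v'⟩ with v = 1/ρ. The distributions and couplings below are invented; the point is only how the moments are formed.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic ensemble of density and velocity fluctuations standing in for
# realizations of a variable-density turbulent flow.
n = 200_000
rho = 1.0 + 0.3 * rng.standard_normal(n)
rho = np.clip(rho, 0.2, None)                  # keep density positive
u = 0.5 * (rho - rho.mean()) + 0.1 * rng.standard_normal(n)  # correlated velocity

v = 1.0 / rho                                  # specific volume
rho_mean = rho.mean()
a = np.mean((rho - rho_mean) * (u - u.mean())) / rho_mean   # turbulent mass flux
b = -np.mean((rho - rho_mean) * (v - v.mean()))             # rho-v covariance
```

Because 1/ρ decreases monotonically in ρ, their covariance is negative, so b is nonnegative; b vanishes only when density fluctuations vanish, which is why it serves as a measure of variable-density effects in BHR-type closures.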
Oil industry investment and research as portfolio choices
Helfat, C.E.
1985-01-01
The Tobin-Markowitz portfolio selection model is used to test two hypotheses: (1) the oil price increase of 1973-74 altered the structure of oil industry risks and returns in favor of certain types of research and investment; (2) the altered structure of risks and their correlations affected the allocation of funds to capital investment and research and development in the oil industry. To test these hypotheses, the efficient frontiers of investment and R and D projects for a representative firm in the oil industry are derived empirically, pre-embargo and post-embargo. In deriving the efficient frontiers, the Tobin-Markowitz model is altered to account for an asset whose supply to the industry is fixed and whose price is determined endogenously from the portfolio selection model itself. This asset is an offshore oil tract. The government fixes the supply of offshore oil tracts to the industry, for which the firms submit sealed bids. Because the returns to investment in offshore oil covary with the returns to other types of industry investment and R and D, firms determine the price to bid for a tract in conjunction with the allocation of funds to all of the firm's projects. Both the actual expenditure shares by the industry and those predicted by the model showed an increased share of the portfolio devoted to offshore oil investment and a decreased share to other projects after the embargo.
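The Markowitz step underlying the efficient frontier can be sketched in closed form: minimize portfolio variance w'Σw subject to a target expected return and weights summing to one, solved with Lagrange multipliers. The returns and covariance matrix below are invented stand-ins for project types, not the paper's estimates.

```python
import numpy as np

# Illustrative expected returns and covariance for three project types
# (capital investment, R&D, offshore tracts). Numbers are hypothetical.
mu = np.array([0.08, 0.12, 0.10])
Sigma = np.array([[0.04, 0.01, 0.02],
                  [0.01, 0.09, 0.01],
                  [0.02, 0.01, 0.05]])
target = 0.10

# Minimize w' Sigma w  s.t.  mu'w = target and 1'w = 1.
# Stationarity gives w = Sigma^{-1} (lam1*mu + lam2*1); the constraints fix
# the multipliers via a 2x2 linear system.
ones = np.ones(3)
Si_mu = np.linalg.solve(Sigma, mu)
Si_1 = np.linalg.solve(Sigma, ones)
A = np.array([[mu @ Si_mu, mu @ Si_1],
              [ones @ Si_mu, ones @ Si_1]])
lam = np.linalg.solve(A, np.array([target, 1.0]))
w = lam[0] * Si_mu + lam[1] * Si_1            # minimum-variance weights

var_opt = float(w @ Sigma @ w)
w_alt = np.array([0.5, 0.5, 0.0])             # another portfolio meeting both constraints
var_alt = float(w_alt @ Sigma @ w_alt)
```

Tracing `target` over a range of values sweeps out the efficient frontier; the paper's modification makes the price of the fixed-supply asset (the offshore tract) endogenous to this optimization.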
Three-dimensional stationary cyclic symmetric Einstein-Maxwell solutions; black holes
Garcia, Alberto A.
2009-09-15
From a general metric for stationary cyclic symmetric gravitational fields coupled to Maxwell electromagnetic fields within (2 + 1)-dimensional gravity, the uniqueness of wide families of exact solutions is established. Among them are all uniform electromagnetic solutions possessing electromagnetic fields with vanishing covariant derivatives, all fields having constant electromagnetic invariants F{sub μν}F{sup μν} and T{sub μν}T{sup μν}, the whole classes of hybrid electromagnetic solutions, and also wide classes of stationary solutions derived for a third-order nonlinear key equation. Certain of these families can be thought of as black hole solutions. For the most general set of Einstein-Maxwell equations, reducible to three nonlinear equations for the three unknown functions, two new classes of solutions--having an anti-de Sitter spinning metric limit--are derived. The relationship of various families with solutions reported by different authors has been established. Among the classes of solutions with cosmological constant, a relevant place is occupied by the electrostatic and magnetostatic Peldan solutions, the stationary uniform and spinning Clement classes, the constant electromagnetic invariant branches with the particular Kamata-Koikawa solution, the hybrid cyclic symmetric stationary black hole fields, and the no less important solutions generated via SL(2,R) transformations, where the Clement spinning charged solution, the Martinez-Teitelboim-Zanelli black hole solution, and the Dias-Lemos metric merit mention.
Optimizing the choice of spin-squeezed states for detecting and characterizing quantum processes
Rozema, Lee A.; Mahler, Dylan H.; Blume-Kohout, Robin; Steinberg, Aephraim M.
2014-11-07
Quantum metrology uses quantum states with no classical counterpart to measure a physical quantity with extraordinary sensitivity or precision. Most such schemes characterize a dynamical process by probing it with a specially designed quantum state. The success of such a scheme usually relies on the process belonging to a particular one-parameter family. If this assumption is violated, or if the goal is to measure more than one parameter, a different quantum state may perform better. In the most extreme case, we know nothing about the process and wish to learn everything. This requires quantum process tomography, which demands an informationally complete set of probe states. It is very convenient if this set is group covariant—i.e., each element is generated by applying an element of the quantum system’s natural symmetry group to a single fixed fiducial state. In this paper, we consider metrology with 2-photon (“biphoton”) states and report experimental studies of different states’ sensitivity to small, unknown collective SU(2) rotations [“SU(2) jitter”]. Maximally entangled N00N states are the most sensitive detectors of such a rotation, yet they are also among the worst at fully characterizing an a priori unknown process. We identify (and confirm experimentally) the best SU(2)-covariant set for process tomography; these states are all less entangled than the N00N state, and are characterized by the fact that they form a 2-design.
Precision Gas Sampling (PGS) Validation 2011-2014 Final Campaign Report
Tom, M. S.; Fischer, M. L.; Biraud, S. C.; Billesbach, D.
2016-01-01
In this field campaign, we used eddy covariance towers to quantify carbon, water, and energy fluxes from a pasture and a wheat field that were converted to switchgrass. The U.S. Department of Energy is investing in switchgrass as a cellulosic bioenergy crop, but there is little data available that could be used to develop or test land surface model representations of the crop. This campaign was a collaboration between Lawrence Berkeley National Laboratory and the U.S. Department of Agriculture Agricultural Research Service. Unfortunately, in 2011, Oklahoma had one of the most severe droughts on record, and the crop in one of the switchgrass fields experienced almost complete die-off. The crop was replanted, but subsequent drought conditions prevented its establishment. Then, in April 2012, a large tornado demolished the instruments at our site in Woodward, Oklahoma. These two events meant that we have some interesting data on land response to extreme weather; however, we were not able to collect continuous data for annual sums as originally intended. We did observe that, because of the drought, the net ecosystem exchange of CO_{2} was much lower in 2011 than in 2010. Concomitantly, sensible heat fluxes increased and latent heat fluxes decreased. These conditions would have large consequences for land surface forcing of convection. Data from all years were submitted to the Atmospheric Radiation Measurement Climate Research Facility Data Archive, and the sites were registered in AmeriFlux.
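The eddy covariance principle behind the flux towers in this campaign reduces to a covariance: the vertical turbulent flux of a scalar is the time-averaged product of fluctuations in vertical wind speed w and scalar concentration c. The sketch below uses synthetic 10 Hz data with invented magnitudes; a negative CO2 flux indicates net uptake by the surface.

```python
import numpy as np

rng = np.random.default_rng(4)

# Synthetic 30-minute averaging interval at 10 Hz. Updrafts (positive w) are
# made to carry CO2-depleted air, mimicking daytime photosynthetic uptake.
n = 18_000
updraft = rng.standard_normal(n)
w = 0.3 * updraft                                   # vertical wind fluctuations (m/s)
c = 400.0 - 0.5 * updraft + 0.2 * rng.standard_normal(n)  # CO2 (ppm)

# Eddy covariance flux: covariance of the fluctuating parts.
w_prime = w - w.mean()
c_prime = c - c.mean()
flux = float(np.mean(w_prime * c_prime))            # negative -> net uptake
```

In practice the raw covariance is followed by corrections (coordinate rotation, density/WPL corrections, spectral corrections) before being reported as a flux; those are omitted here.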
Integral quantizations with two basic examples
Bergeron, H.; Gazeau, J.P.
2014-05-15
The paper concerns integral quantization, a procedure based on operator-valued measures and resolution of the identity. We insist on covariance properties in the important case where group representation theory is involved. We also insist on the inherent probabilistic aspects of this classical-quantum map. The approach includes and generalizes coherent state quantization. Two applications based on group representation are carried out. The first concerns the Weyl-Heisenberg group and the Euclidean plane viewed as the corresponding phase space. We show that a world of quantizations exists which yield the canonical commutation rule and the usual quantum spectrum of the harmonic oscillator. The second concerns the affine group of the real line and gives rise to an interesting regularization of the dilation origin in the half-plane viewed as the corresponding phase space. Highlights: an original approach to quantization based on (positive) operator-valued measures; includes Berezin-Klauder-Toeplitz and Weyl-Wigner quantizations; infinitely many such quantizations produce the canonical commutation rule; the set of objects to be quantized is enlarged to include singular functions or distributions; illuminating examples are given, like the quantum angle and affine or wavelet quantization.
A model-free temperature-dependent conformational study of n-pentane in nematic liquid crystals
Burnell, E. Elliott; Weber, Adrian C. J.; Dong, Ronald Y.; Meerts, W. Leo; Lange, Cornelis A. de
2015-01-14
The proton NMR spectra of n-pentane orientationally ordered in two nematic liquid-crystal solvents are studied over a wide temperature range and analysed using the covariance matrix adaptation evolution strategy. Since alkanes possess small electrostatic moments, their anisotropic intermolecular interactions are dominated by short-range size-and-shape effects. As we assumed for n-butane, the anisotropic energy parameters of each n-pentane conformer are taken to be proportional to those of ethane and propane, independent of temperature. The observed temperature dependence of the n-pentane dipolar couplings allows a model-free separation between conformer degrees of order and conformer probabilities, which cannot be achieved at a single temperature. In this way, 13 anisotropic energy parameters for n-pentane (two for trans-trans, tt; five for trans-gauche, tg; and three for each of gauche{sub +}gauche{sub +}, pp, and gauche{sub +}gauche{sub −}, pm), the isotropic trans-gauche energy difference E{sub tg} and its temperature coefficient E{sub tg}{sup ′} are obtained. The value obtained for the extra energy associated with the proximity of the two methyl groups in the gauche{sub +}gauche{sub −} conformers (the pentane effect) is sensitive to minute details of other assumptions and is thus fixed in the calculations. Conformer populations are affected by the environment. In particular, anisotropic interactions increase the trans probability in the ordered phase.
Cho decomposition of electrically charged one-half monopole
Ng, Ban-Loong; Teh, Rosy; Wong, Khai-Ming
2014-03-05
Recently we carried out some work on the Cho decomposition of the electrically neutral, finite energy one-half monopole solution of the SU(2) Yang-Mills-Higgs field theory. In this paper, we perform the decomposition of the electrically charged solution using the same numerical procedure. The gauge potential of the one-half dyon solution is decomposed into Abelian and non-Abelian components. The semi-infinite string singularity in the gauge potential is a contribution of the Higgs field and hence topological in nature. The string singularity cannot be cancelled by the non-Abelian components of the gauge potential. However, the string singularity is integrable and the energy of the solution is finite. By decomposing the magnetic fields and covariant derivatives of the Higgs field into three isospin space directions, we are able to provide conclusive evidence that the constructed one-half dyon is certainly a non-BPS solution, even in the limit of vanishing Higgs self-coupling constant and electric charge. Furthermore, we find that the time component of the gauge potential is parallel to the Higgs field in isospace only at large distances; elsewhere they are non-parallel.
EXTENSION OF THE NUCLEAR REACTION MODEL CODE EMPIRE TO ACTINIDES NUCLEAR DATA EVALUATION.
CAPOTE,R.; SIN, M.; TRKOV, A.; HERMAN, M.; CARLSON, B.V.; OBLOZINSKY, P.
2007-04-22
Recent extensions and improvements of the EMPIRE code system are outlined. They add new capabilities to the code, such as prompt fission neutron spectra calculations using Hauser-Feshbach plus pre-equilibrium pre-fission spectra, cross section covariance matrix calculations by the Monte Carlo method, fitting of optical model parameters, an extended set of optical model potentials including new dispersive coupled channel potentials, parity-dependent level densities, and transmission through numerically defined fission barriers. These features, along with improved and validated ENDF formatting, exclusive/inclusive spectra, and recoils make the current EMPIRE release a complete and well validated tool for evaluation of nuclear data at incident energies above the resonance region. The current EMPIRE release has been used in evaluations of neutron induced reaction files for {sup 232}Th and {sup 231,233}Pa nuclei in the fast neutron region at the IAEA. Triple-humped fission barriers and exclusive pre-fission neutron spectra were considered for the fission data evaluation. Total, fission, capture, and neutron emission cross sections, average resonance parameters, and angular distributions of neutron scattering are in excellent agreement with the available experimental data.
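The Monte Carlo approach to covariance generation mentioned above has a simple skeleton: perturb the model parameters within their assumed uncertainties, recompute the cross section at each energy, and take the sample covariance of the results. The "model" below is a toy stand-in, not EMPIRE, and all numbers are invented.

```python
import numpy as np

rng = np.random.default_rng(5)

energies = np.array([1.0, 2.0, 5.0, 10.0])     # MeV, illustrative grid

def toy_cross_section(E, depth, radius):
    # Arbitrary smooth two-parameter model standing in for a reaction code.
    return depth * np.exp(-E / 10.0) + radius / E

# Sample parameters from their assumed (relative) uncertainties and collect
# the resulting cross sections.
n_samples = 5000
samples = np.empty((n_samples, energies.size))
for i in range(n_samples):
    depth = 1.0 * (1 + 0.05 * rng.standard_normal())   # 5% uncertain
    radius = 0.5 * (1 + 0.03 * rng.standard_normal())  # 3% uncertain
    samples[i] = toy_cross_section(energies, depth, radius)

cov = np.cov(samples, rowvar=False)            # energy-energy covariance matrix
sd = np.sqrt(np.diag(cov))
corr = cov / np.outer(sd, sd)                  # correlation matrix
```

Because only two parameters drive all four energies, the resulting covariance is strongly correlated and (numerically) rank-2, a typical feature of model-based covariances.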
Lagerlöf, Jakob H.; Kindblom, Jon; Bernhardt, Peter
2014-09-15
Purpose: To construct a Monte Carlo (MC)-based simulation model for analyzing the dependence of tumor oxygen distribution on different variables related to tumor vasculature [blood velocity, vessel-to-vessel proximity (vessel proximity), and inflowing oxygen partial pressure (pO{sub 2})]. Methods: A voxel-based tissue model containing parallel capillaries with square cross-sections (sides of 10 μm) was constructed. Green's function was used for diffusion calculations and Michaelis-Menten's kinetics to manage oxygen consumption. The model was tuned to approximately reproduce the oxygenational status of a renal carcinoma; the depth oxygenation curves (DOC) were fitted with an analytical expression to facilitate rapid MC simulations of tumor oxygen distribution. DOCs were simulated with three variables at three settings each (blood velocity, vessel proximity, and inflowing pO{sub 2}), which resulted in 27 combinations of conditions. To create a model that simulated variable oxygen distributions, the oxygen tension at a specific point was randomly sampled with trilinear interpolation in the dataset from the first simulation. Six correlations between blood velocity, vessel proximity, and inflowing pO{sub 2} were hypothesized. Variable models with correlated parameters were compared to each other and to a nonvariable, DOC-based model to evaluate the differences in simulated oxygen distributions and tumor radiosensitivities for different tumor sizes. Results: For tumors with radii ranging from 5 to 30 mm, the nonvariable DOC model tended to generate normal or log-normal oxygen distributions, with a cut-off at zero. The pO{sub 2} distributions simulated with the six-variable DOC models were quite different from the distributions generated with the nonvariable DOC model; in the former case the variable models simulated oxygen distributions that were more similar to in vivo results found in the literature. 
For larger tumors, the oxygen distributions became truncated in the lower end, due to anoxia, but smaller tumors showed undisturbed oxygen distributions. The six different models with correlated parameters generated three classes of oxygen distributions. The first was a hypothetical, negative covariance between vessel proximity and pO{sub 2} (VPO-C scenario); the second was a hypothetical positive covariance between vessel proximity and pO{sub 2} (VPO+C scenario); and the third was the hypothesis of no correlation between vessel proximity and pO{sub 2} (UP scenario). The VPO-C scenario produced a distinctly different oxygen distribution than the two other scenarios. The shape of the VPO-C scenario was similar to that of the nonvariable DOC model, and the larger the tumor, the greater the similarity between the two models. For all simulations, the mean oxygen tension decreased and the hypoxic fraction increased with tumor size. The absorbed dose required for definitive tumor control was highest for the VPO+C scenario, followed by the UP and VPO-C scenarios. Conclusions: A novel MC algorithm was presented which simulated oxygen distributions and radiation response for various biological parameter values. The analysis showed that the VPO-C scenario generated a clearly different oxygen distribution from the VPO+C scenario; the former exhibited a lower hypoxic fraction and higher radiosensitivity. In future studies, this modeling approach might be valuable for qualitative analyses of factors that affect oxygen distribution as well as analyses of specific experimental and clinical situations.
Fares, Silvano; Schnitzhofer, Ralf; Xiaoyan, Jiang; Guenther, Alex B.; Hansel, Armin; Loreto, Francesco
2013-09-04
Most vascular plant species, especially trees, emit biogenic volatile organic compounds (BVOC). Global estimates of BVOC emissions from plants range from 1 to 1.5 Pg C yr{sup −1}. Mediterranean forest trees have been described as high BVOC emitters, with emission depending primarily on light and temperature, and therefore being promoted by the warm Mediterranean climate. In the presence of sufficient sunlight and nitrogen oxides (NOx), the oxidation of BVOCs can lead to the formation of tropospheric ozone, a greenhouse gas with detrimental effects on plant health, crop yields, and human health. BVOCs are also precursors for aerosol formation, accounting for a significant fraction of secondary organic aerosol (SOA) produced in the atmosphere. The presidential Estate of Castelporziano covers an area of about 6000 ha located 25 km SW from the center of Rome, Italy (Figure 1) and hosts representative forest ecosystems typical of Mediterranean areas: holm oak forests, pine forests, dune vegetation, and mixed oak and pine forests. Between 1995 and 2011, three intensive field campaigns were carried out on Mediterranean-type ecosystems inside the Estate. These campaigns were aimed at measuring BVOC emissions and environmental parameters, to improve the formulation of basal emission factors (BEFs), that is, standardized emissions at 30 °C and 1000 μmol m{sup −2} s{sup −1} of photosynthetically active radiation (PAR). BEFs are key input parameters of emission models. The first campaign in Castelporziano was a pioneering integrated study on biogenic emissions (1993-1996). BVOC fluxes from different forest ecosystems were mainly investigated using plant and leaf enclosures connected to adsorption tubes, followed by GC-MS analysis in the laboratory. This allowed a first screening of Mediterranean species with respect to their BVOC emission potential, environmental control, and emission algorithms.
In particular, deciduous oak species revealed high isoprene emissions (Quercus frainetto, Quercus petraea, Quercus pubescens), while evergreen oaks, for example the holm oak Quercus ilex, emitted monoterpenes only. Differences in constitutive emission patterns discovered in Castelporziano supplied basic information to discriminate oak biodiversity in subsequent studies. Ten years later, a second experimental campaign took place in spring and summer 2007 on a dune-shrubland experimental site. In this campaign, the use of a proton transfer reaction mass spectrometer (PTR-MS) provided the fast BVOC observations necessary for quasi-real-time flux measurements using Disjunct Eddy Covariance. This allowed for the first time continuous measurements and BEF calculation at the canopy level. Finally, in September 2011 a third campaign was performed with the aim of further characterizing and improving estimates of BVOC fluxes from mixed Mediterranean forests dominated by a mixed holm oak and stone pine forest, using for the first time a proton transfer reaction time-of-flight mass spectrometer (PTR-TOF-MS). In contrast to the standard quadrupole PTR-MS, which can only measure one m/z ratio at a discrete time, and is thus inadequate for quantifying fluxes of more than a handful of compounds simultaneously, PTR-TOF-MS allowed simultaneous measurements (10 Hz) of fluxes of all BVOCs at the canopy level by Eddy Covariance. In this work, we reviewed BEFs from previous campaigns in Castelporziano and calculated new BEFs from the campaign based on PTR-TOF-MS analysis. The new BEFs were used to parametrize the Model of Emissions of Gases and Aerosols from Nature (MEGAN v2.1).
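Standardizing a measured flux to a BEF is typically done by dividing out light and temperature activity factors of a Guenther-style algorithm. The sketch below uses the commonly cited G93-form parameter values; treat the constants, the field conditions, and the measured flux as assumptions for illustration, not values from these campaigns.

```python
import math

# G93-style activity factors (parameter values as commonly cited; assumptions).
R = 8.314                     # J mol-1 K-1
ALPHA, C_L1 = 0.0027, 1.066   # light-response constants
C_T1, C_T2 = 95000.0, 230000.0
T_S, T_M = 303.0, 314.0       # standard and optimum temperatures (K)

def light_factor(par):
    # par in umol m-2 s-1; ~1 at the basal condition of 1000
    return ALPHA * C_L1 * par / math.sqrt(1.0 + (ALPHA * par) ** 2)

def temperature_factor(t_k):
    x = math.exp(C_T1 * (t_k - T_S) / (R * T_S * t_k))
    y = 1.0 + math.exp(C_T2 * (t_k - T_M) / (R * T_S * t_k))
    return x / y

# Standardize a hypothetical measured isoprene flux to basal conditions
# (30 degC, 1000 umol m-2 s-1 PAR):
measured_flux = 8.0           # e.g. nmol m-2 s-1 under field conditions (invented)
par, t_k = 1500.0, 298.0      # field light and temperature (invented)
bef = measured_flux / (light_factor(par) * temperature_factor(t_k))
```

Inverting the same factors with modeled light and temperature is how an emission model such as MEGAN turns BEFs back into time-resolved emission estimates.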
Dimensionality and noise in energy selective x-ray imaging
Alvarez, Robert E.
2013-11-15
Purpose: To develop and test a method to quantify the effect of dimensionality on the noise in energy selective x-ray imaging. Methods: The Cramér-Rao lower bound (CRLB), a universal lower limit on the covariance of any unbiased estimator, is used to quantify the noise. It is shown that increasing dimensionality always increases the variance or, at best, leaves it unchanged. An analytic formula for the increase in variance in an energy selective x-ray system is derived. The formula is used to gain insight into the dependence of the increase in variance on the properties of the additional basis functions, the measurement noise covariance, and the source spectrum. The formula is also used with computer simulations to quantify the dependence of the additional variance on these factors. Simulated images of an object with three materials are used to demonstrate the trade-off of increased information with dimensionality and noise. The images are computed from energy selective data with a maximum likelihood estimator. Results: The increase in variance depends most importantly on the dimension and on the properties of the additional basis functions. With the attenuation coefficients of cortical bone, soft tissue, and adipose tissue as the basis functions, the increase in variance of the bone component from two to three dimensions is a factor of 1.4 × 10{sup 3}. For the soft tissue component, it is 2.7 × 10{sup 4}. If the attenuation coefficient of a high-atomic-number contrast agent is used as the third basis function, there is only a slight increase in the variance from two to three basis functions: factors of 1.03 and 7.4 for the bone and soft tissue components, respectively. The changes in spectrum shape with beam hardening also have a substantial effect. They increase the variance by a factor of approximately 200 for the bone component and 220 for the soft tissue component as the soft tissue object thickness increases from 1 to 30 cm.
Decreasing the energy resolution of the detectors markedly increases the variance of the bone component with three-dimension processing, by approximately a factor of 25 as the resolution decreases from 100 to 3 bins. With two-dimension processing, the increase for adipose tissue is a factor of two; with the contrast agent as the third material, the increase is also a factor of two for both components, for either two- or three-dimension processing. The simulated images show that a maximum likelihood estimator can be used to process energy selective x-ray data to produce images with noise close to the CRLB. Conclusions: The method presented can be used to compute the effects of the object attenuation coefficients and the x-ray system properties on the relationship of dimensionality and noise in energy selective x-ray imaging systems.
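The CRLB argument above can be sketched numerically. For a linearized measurement model y = A·a + noise with noise covariance C, any unbiased estimate of the basis-set coefficients a satisfies Cov(â) ≥ (AᵀC⁻¹A)⁻¹; dropping a basis function can only shrink (or preserve) the bound for the remaining components. The basis matrix here is random and illustrative, not the attenuation coefficients from the paper:

```python
import numpy as np

# Sketch: CRLB for basis-coefficient estimation in an energy selective system.
# A3's columns stand in for three basis functions sampled at the measurement
# energies; the numbers are illustrative, not the paper's attenuation data.
rng = np.random.default_rng(0)

def crlb(A, C):
    """Cramer-Rao lower bound for unbiased estimates of a in y = A @ a + n."""
    F = A.T @ np.linalg.inv(C) @ A      # Fisher information matrix
    return np.linalg.inv(F)

n_meas = 5
C = np.eye(n_meas) * 1e-4               # measurement noise covariance
A3 = rng.random((n_meas, 3))            # three basis functions
A2 = A3[:, :2]                          # drop the third -> two dimensions

var2 = crlb(A2, C)[0, 0]                # bound on variance of first component
var3 = crlb(A3, C)[0, 0]
# var3 >= var2: adding a basis function never reduces the variance bound
# of a component shared by both models.
```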
Perturbation theory in light-cone quantization
Langnau, A.
1992-01-01
A thorough investigation of light-cone properties which are characteristic for higher dimensions is very important. The easiest way of addressing these issues is by first analyzing the perturbative structure of light-cone field theories. Perturbative studies are no substitute for an analysis of problems related to a nonperturbative approach. However, in order to lay the groundwork for upcoming nonperturbative studies, it is indispensable to validate the renormalization methods at the perturbative level, i.e., to gain control over the perturbative treatment first. A clear understanding of divergences in perturbation theory, as well as their numerical treatment, is a necessary first step towards formulating such a program. The first objective of this dissertation is to clarify this issue, at least at second and fourth order in perturbation theory. The work in this dissertation can provide guidance for the choice of counterterms in Discrete Light-Cone Quantization or the Tamm-Dancoff approach. A second objective of this work is the study of light-cone perturbation theory as a competitive tool for conducting perturbative Feynman diagram calculations. Feynman perturbation theory has become the most practical tool for computing cross sections and other physical properties in high energy physics. Although this standard covariant method has been applied to a great range of problems, computations beyond one-loop corrections are very difficult. Because of the algebraic complexity of Feynman calculations in higher-order perturbation theory, it is desirable to automate Feynman diagram calculations so that algebraic manipulation programs can carry out almost the entire calculation. This thesis presents a step in this direction. The technique we elaborate on here is known as light-cone perturbation theory.
Multi-time wave functions for quantum field theory
Petrat, Sren; Tumulka, Roderich
2014-06-15
Multi-time wave functions such as ψ(t{sub 1},x{sub 1},…,t{sub N},x{sub N}) have one time variable t{sub j} for each particle. This type of wave function arises as a relativistic generalization of the wave function ψ(t,x{sub 1},…,x{sub N}) of non-relativistic quantum mechanics. We show here how a quantum field theory can be formulated in terms of multi-time wave functions. We mainly consider a particular quantum field theory that features particle creation and annihilation. Starting from the particle-position representation of state vectors in Fock space, we introduce multi-time wave functions with a variable number of time variables, set up multi-time evolution equations, and show that they are consistent. Moreover, we discuss the relation of the multi-time wave function to two other representations, the Tomonaga-Schwinger representation and the Heisenberg picture in terms of operator-valued fields on spacetime. In a certain sense and under natural assumptions, we find that all three representations are equivalent; yet, we point out that the multi-time formulation has several technical and conceptual advantages. -- Highlights: Multi-time wave functions are manifestly Lorentz-covariant objects. We develop consistent multi-time equations with interaction for quantum field theory. We discuss in detail a particular model with particle creation and annihilation. We show how multi-time wave functions are related to the Tomonaga-Schwinger approach. We show that they have a simple representation in terms of operator-valued fields.
Ten Brinke, JoAnn
1995-08-01
Volatile organic compounds (VOCs) are suspected to contribute significantly to ''Sick Building Syndrome'' (SBS), a complex of subchronic symptoms that occur during occupancy of the building in question and, in general, decrease away from it. A new approach takes into account individual VOC potencies, as well as the highly correlated nature of the complex VOC mixtures found indoors. The new VOC metrics are statistically significant predictors of symptom outcomes from the California Healthy Buildings Study data. Multivariate logistic regression analyses were used to test the hypothesis that a summary measure of the VOC mixture, other risk factors, and covariates for each worker will lead to better prediction of symptom outcome. VOC metrics based on animal irritancy measures and principal component analysis had the most influence in the prediction of eye, dermal, and nasal symptoms. After adjustment, a water-based paints and solvents source was found to be associated with dermal and eye irritation. The more typical VOC exposure metrics used in prior analyses (total VOC (TVOC), or the sum of individually identified VOCs ({Sigma}VOC{sub i})) were not useful in symptom prediction in the adjusted model. Also not useful were three other VOC metrics that took potency into account but did not adjust for the highly correlated nature of the data set or for the presence of VOCs that were not measured. High TVOC values (2-7 mg m{sup -3}) due to the presence of liquid-process photocopiers observed in several study spaces significantly influenced symptoms. Analyses without the high TVOC values reduced, but did not eliminate, the ability of the VOC exposure metric based on irritancy and principal component analysis to explain symptom outcome.
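The metric construction described above, potency-weighted concentrations summarized by principal components, can be sketched as follows. All numbers (concentrations, irritancy weights) are synthetic illustrations, not the California Healthy Buildings Study data:

```python
import numpy as np

# Sketch of a potency-weighted, PCA-based summary metric for a highly
# correlated VOC mixture.  Data and weights are invented for illustration.
rng = np.random.default_rng(1)
n_workers, n_vocs = 200, 10
common = rng.normal(size=(n_workers, 1))          # shared indoor source
log_conc = common + 0.3 * rng.normal(size=(n_workers, n_vocs))  # correlated
irritancy = rng.uniform(0.5, 2.0, size=n_vocs)    # hypothetical potency weights

X = log_conc * irritancy                # potency-weighted exposures
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pc_scores = Xc @ Vt.T                   # worker-level summary metrics

# With a strongly correlated mixture, the first component carries most of the
# joint exposure variance and can serve as the summary covariate in a
# subsequent logistic regression.
explained = pc_scores.var(axis=0) / Xc.var(axis=0).sum()
```

Because the rotation is orthogonal, the component variances sum to the total weighted-exposure variance; using only the leading scores is what sidesteps the collinearity that defeats per-compound metrics.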
Constrained Generalized Supersymmetries
Toppan, Francesco; Kuznetsova, Zhanna
2005-10-17
We present a classification of admissible types of constraint (hermitian, holomorphic, with reality conditions on the bosonic sectors, etc.) for generalized supersymmetries in the presence of complex spinors. A generalized supersymmetry algebra involving n-component real spinors Q_a is given by the anticommutators {Q_a, Q_b} = Z_ab, where the matrix Z appearing on the r.h.s. is the most general symmetric matrix. A complex generalized supersymmetry algebra is expressed in terms of complex spinors Q_a and their complex conjugates Q*_a. The most general (saturated-r.h.s.) algebra is in this case given by {Q_a, Q_b} = P_ab, {Q*_a, Q*_b} = P*_ab, {Q_a, Q*_b} = R_ab, where the matrix P_ab is symmetric, while R_ab is hermitian. The bosonic right-hand side can be expressed in terms of the rank-k totally antisymmetric tensors, Z_ab = Σ_k (Cγ_[μ1…μk])_ab P^[μ1…μk]. The decomposition in terms of antisymmetric tensors for any space-time up to dimension D = 13 is presented. Real-type, complex-type, and quaternionic-type space-times are classified. Any restriction on the saturated bosonic generators that allows all possible combinations of these tensors is in principle admissible under the Lorentz-covariance requirement. We investigate division-algebra constraints and their influence on physical models. Higher-spin theory models are presented as examples of the applications of such models.
Martin, Spencer; Rodrigues, George; Department of Epidemiology Patil, Nikhilesh; Bauman, Glenn; Department of Radiation Oncology, London Regional Cancer Program, London ; D'Souza, David; Sexton, Tracy; Palma, David; Louie, Alexander V.; Khalvati, Farzad; Tizhoosh, Hamid R.; Segasist Technologies, Toronto, Ontario ; Gaede, Stewart
2013-01-01
Purpose: To perform a rigorous technological assessment and statistical validation of a software technology for anatomic delineation of the prostate on MRI datasets. Methods and Materials: A 3-phase validation strategy was used. Phase I consisted of anatomic atlas building using 100 prostate cancer MRI datasets to provide training data for the segmentation algorithms. In phase II, 2 experts contoured 15 new MRI prostate cancer cases using 3 approaches (manual, N points, and region of interest). In phase III, 5 new physicians with variable MRI prostate contouring experience segmented the same 15 phase II datasets using 3 approaches: manual, N points with no editing, and full autosegmentation with user editing allowed. Statistical analyses for the time and accuracy (Dice similarity coefficient) endpoints used traditional descriptive statistics, analysis of variance, analysis of covariance, and the pooled Student t test. Results: In phase I, the average (SD) total and per-slice contouring times for the 2 physicians were 228 (75) and 17 (3.5) seconds, and 209 (65) and 15 (3.9) seconds, respectively. In phase II, statistically significant differences in physician contouring time were observed based on physician, type of contouring, and case sequence. The N points strategy resulted in superior segmentation accuracy when initial autosegmented contours were compared with final contours. In phase III, statistically significant differences in contouring time were again observed based on physician, type of contouring, and case sequence. The average relative time savings for N points and autosegmentation were 49% and 27%, respectively, compared with manual contouring. The N points and autosegmentation strategies resulted in average Dice values of 0.89 and 0.88, respectively. Pre- and postedited autosegmented contours demonstrated a higher average Dice similarity coefficient of 0.94. Conclusion: The software provided robust contours with minimal editing required.
Observed time savings were seen for all physicians irrespective of experience level and baseline manual contouring speed.
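The accuracy endpoint above is the Dice similarity coefficient, twice the overlap of two segmentations divided by their combined size. A minimal sketch on toy binary masks (not the study's contours):

```python
import numpy as np

# Dice similarity coefficient between two binary segmentation masks.
# The masks below are toy rectangles standing in for prostate contours.
def dice(a, b):
    """2*|A intersect B| / (|A| + |B|); defined as 1.0 for two empty masks."""
    a, b = np.asarray(a, bool), np.asarray(b, bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

auto = np.zeros((100, 100), bool)
auto[20:80, 20:80] = True            # autosegmented contour
edit = np.zeros((100, 100), bool)
edit[22:80, 20:82] = True            # slightly edited contour

score = dice(auto, edit)             # high overlap -> Dice near 1
```

Identical masks give a Dice of exactly 1; the ~0.88-0.94 values reported above correspond to small boundary disagreements like the two-pixel shifts in this example.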
Evaluation of three lidar scanning strategies for turbulence measurements
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Newman, J. F.; Klein, P. M.; Wharton, S.; Sathe, A.; Bonin, T. A.; Chilson, P. B.; Muschinski, A.
2015-11-24
Several errors occur when a traditional Doppler-beam swinging (DBS) or velocity-azimuth display (VAD) strategy is used to measure turbulence with a lidar. To mitigate some of these errors, a scanning strategy was recently developed which employs six beam positions to independently estimate the u, v, and w velocity variances and covariances. In order to assess the ability of these different scanning techniques to measure turbulence, a Halo scanning lidar, a WindCube v2 pulsed lidar, and a ZephIR continuous-wave lidar were deployed at field sites in Oklahoma and Colorado with collocated sonic anemometers. Results indicate that the six-beam strategy mitigates some of the errors caused by VAD and DBS scans, but the strategy is strongly affected by errors in the variance measured at the different beam positions. The ZephIR and WindCube lidars overestimated horizontal variance values by over 60% under unstable conditions as a result of variance contamination, where additional variance components contaminate the true value of the variance. A correction method was developed for the WindCube lidar that uses variance calculated from the vertical beam position to reduce variance contamination in the u and v variance components. The correction method reduced WindCube variance estimates by over 20% at both the Oklahoma and Colorado sites under unstable conditions, when variance contamination is largest. This correction method can be easily applied to other lidars that contain a vertical beam position and is a promising method for accurately estimating turbulence with commercially available lidars.
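The vertical-beam correction idea can be sketched with the radial-velocity geometry: a tilted beam measures v_r = u·sinθ + w·cosθ, so its variance mixes u- and w-variance, and the vertical beam supplies the w-variance to subtract. The zenith angle and variance values below are illustrative assumptions, not the WindCube geometry constants:

```python
import math

# Sketch of vertical-beam variance correction for a tilted lidar beam.
# theta and the variances are assumed toy values, not instrument constants.
THETA = math.radians(28.0)              # assumed beam zenith angle

def radial_variance(var_u, var_w, cov_uw):
    """Variance of radial velocity for a beam tilted in the u-w plane."""
    s, c = math.sin(THETA), math.cos(THETA)
    return s * s * var_u + c * c * var_w + 2 * s * c * cov_uw

def corrected_u_variance(var_radial, var_w_vertical, cov_uw=0.0):
    """Remove the w-variance contamination using the vertical-beam variance."""
    s, c = math.sin(THETA), math.cos(THETA)
    return (var_radial - c * c * var_w_vertical - 2 * s * c * cov_uw) / (s * s)

# True var_u = 1.0, var_w = 0.5: the naive projection is biased high,
# while the vertical-beam correction recovers the true u-variance.
vr = radial_variance(1.0, 0.5, 0.0)
naive = vr / math.sin(THETA) ** 2
fixed = corrected_u_variance(vr, 0.5)
```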
Dipole anisotropy of galaxy distribution: Does the CMB rest frame exist in the local universe?
Itoh, Yousuke; Yahata, Kazuhiro; Takada, Masahiro
2010-08-15
The peculiar motion of the Earth causes a dipole anisotropy modulation in the distant galaxy distribution due to the aberration effect. However, the amplitude and angular direction of the effect are not necessarily the same as those of the cosmic microwave background (CMB) dipole anisotropy, due to the growth of cosmic structures. In other words, exploring the aberration effect may give us a clue to horizon-scale physics, perhaps related to the cosmic acceleration. In this paper we develop a method to explore the dipole angular modulation from pixelized galaxy data on the sky, properly taking into account the covariances due to shot noise and the intrinsic galaxy clustering contamination as well as the partial sky coverage. We applied the method to galaxy catalogs constructed from the Sloan Digital Sky Survey Data Release 6 data. After constructing four galaxy catalogs that differ in the ranges of magnitudes and photometric redshifts to study possible systematics, we found that the sample most robust against systematics indicates no dipole anisotropy in the galaxy distribution. This finding is consistent with the expectation from the concordance {Lambda}-dominated cold dark matter model. Finally, we argue that an almost full-sky galaxy survey such as the Large Synoptic Survey Telescope may allow a significant detection of the aberration effect of the CMB dipole, with the precision to constrain the angular direction to {approx}20 deg in radius. Assuming a hypothetical Large Synoptic Survey Telescope galaxy survey, we find that this method can confirm or reject the result implied by a stacked analysis of the kinetic Sunyaev-Zel'dovich effect of X-ray luminous clusters in Kashlinsky et al. (2008, 2009) if the implied cosmic bulk flow does not extend out to the horizon.
Wobb, Jessica L.; Chen, Peter Y.; Shah, Chirag; Moran, Meena S.; Shaitelman, Simona F.; Vicini, Frank A.; Beitsch, Peter
2015-02-01
Purpose: To develop a nomogram taking into account clinicopathologic features to predict locoregional recurrence (LRR) in patients treated with accelerated partial-breast irradiation (APBI) for early-stage breast cancer. Methods and Materials: A total of 2000 breasts (1990 women) were treated with APBI at William Beaumont Hospital (n=551) or on the American Society of Breast Surgeons MammoSite Registry Trial (n=1449). Techniques included multiplanar interstitial catheters (n=98), balloon-based brachytherapy (n=1689), and 3-dimensional conformal radiation therapy (n=213). Clinicopathologic variables were gathered prospectively. A nomogram was formulated utilizing the Cox proportional hazards regression model to predict for LRR. This was validated by generating a bias-corrected index and cross-validated with a concordance index. Results: Median follow-up was 5.5 years (range, 0.9-18.3 years). Of the 2000 cases, 435 were excluded because of missing data. Univariate analysis found that age <50 years, pre-/perimenopausal status, close/positive margins, estrogen receptor negativity, and high grade were associated with a higher frequency of LRR. These 5 independent covariates were used to create adjusted estimates, weighting each on a scale of 0-100. The total score is identified on a points scale to obtain the probability of an LRR over the study period. The model demonstrated good concordance for predicting LRR, with a concordance index of 0.641. Conclusions: The formulation of a practical, easy-to-use nomogram for calculating the risk of LRR in patients undergoing APBI will help guide the appropriate selection of patients for off-protocol utilization of APBI.
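The points-scale construction underlying such a nomogram can be sketched simply: each covariate's Cox coefficient is rescaled so the largest effect spans 0-100 points, and a patient's total points index the predicted LRR probability. The coefficients below are invented for illustration, not the fitted model:

```python
# Sketch of a nomogram points scale built from Cox model coefficients.
# The log hazard ratios below are hypothetical placeholders.
coefs = {
    "age<50": 0.9,
    "pre/perimenopausal": 0.6,
    "close/positive margins": 0.7,
    "ER negative": 0.8,
    "high grade": 0.5,
}
max_coef = max(coefs.values())
# Rescale so the strongest covariate is worth 100 points.
points = {k: 100.0 * v / max_coef for k, v in coefs.items()}

def total_points(present):
    """Sum the points for the risk factors present in a given patient."""
    return sum(points[k] for k in present)
```

In a real nomogram the total-points axis is then mapped to baseline-hazard-derived LRR probabilities; that final lookup table comes from the fitted survival model and is omitted here.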
Zhang, H.; Huang, J.; Ma, J.; Chen, W.; Ouyang, L.; Wang, J.
2014-06-15
Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information to a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, we systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam on-board CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 mAs to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are non-zero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second- order neighbors are about 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level as compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. Conclusion: Noise is correlated among nearest neighboring detector bins of CBCT projection data. 
An accurate noise model of CBCT projection data can improve the performance of the statistics-based projection restoration algorithm for low-dose CBCT.
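The neighbor-correlation estimate described above can be sketched from repeated measurements: correlate each detector bin's fluctuations with those of its lag-1 and lag-2 neighbors and average. Synthetic correlated noise stands in for the 500 repeated projections:

```python
import numpy as np

# Sketch: estimate noise correlation between neighboring detector bins from
# repeated projections.  A small smoothing kernel mimics detector crosstalk;
# the data are synthetic, not TrueBeam measurements.
rng = np.random.default_rng(2)
n_repeats, n_bins = 500, 64
white = rng.normal(size=(n_repeats, n_bins + 2))
proj = 0.1 * white[:, :-2] + white[:, 1:-1] + 0.1 * white[:, 2:]

def neighbor_correlation(data, lag):
    """Mean correlation coefficient between bins separated by `lag`."""
    d = data - data.mean(axis=0)
    num = (d[:, :-lag] * d[:, lag:]).mean(axis=0)
    den = d.std(axis=0)[:-lag] * d.std(axis=0)[lag:]
    return float((num / den).mean())

r1 = neighbor_correlation(proj, 1)   # first-order neighbors
r2 = neighbor_correlation(proj, 2)   # second-order neighbors, much weaker
```

These off-diagonal correlations are what populate the covariance matrix of the PWLS criterion; with the kernel above, r1 lands near 0.2 and r2 near 0.01, qualitatively matching the first-order/second-order ordering reported.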
$$|V_{ub}|$$ from $$B\to\pi\ell\nu$$
Bailey, Jon A.; et al.
2015-07-23
We present a lattice-QCD calculation of the B → πℓν semileptonic form factors and a new determination of the CKM matrix element |Vub|. We use the MILC asqtad (2+1)-flavor lattice configurations at four lattice spacings and light-quark masses down to 1/20 of the physical strange-quark mass. We extrapolate the lattice form factors to the continuum using staggered chiral perturbation theory in the hard-pion and SU(2) limits. We employ a model-independent z parametrization to extrapolate our lattice form factors from large-recoil momentum to the full kinematic range. We introduce a new functional method to propagate information from the chiral-continuum extrapolation to the z expansion. We present our results together with a complete systematic error budget, including a covariance matrix to enable the combination of our form factors with other lattice-QCD and experimental results. To obtain |Vub|, we simultaneously fit the experimental data for the B → πℓν differential decay rate obtained by the BABAR and Belle collaborations together with our lattice form-factor results. We find |Vub| = (3.72 ± 0.16) × 10{sup -3}, where the error is from the combined fit to lattice plus experiments and includes all sources of uncertainty. Our form-factor results bring the QCD error on |Vub| to the same level as the experimental error. We also provide results for the B → πℓν vector and scalar form factors obtained from the combined lattice and experiment fit, which are more precisely determined than from our lattice-QCD calculation alone. Lastly, these results can be used in other phenomenological applications and to test other approaches to QCD.
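The model-independent z parametrization mentioned above rests on a conformal map that takes the semileptonic q² range into a small disc |z| < 1, so a form factor becomes a short power series in z. A minimal sketch, using approximate meson masses in GeV and a common "optimal" t0 convention (assumptions for illustration, not necessarily the paper's choices):

```python
import math

# Sketch of the z(q^2) conformal map used in form-factor z expansions.
# Masses (GeV) are approximate; t0 is the common symmetric-range choice.
M_B, M_PI = 5.2796, 0.1396
T_PLUS = (M_B + M_PI) ** 2
# This t0 equals t+*(1 - sqrt(1 - t-/t+)) and makes |z| symmetric over
# the physical range 0 <= q^2 <= (M_B - M_PI)^2.
T0 = (M_B + M_PI) * (math.sqrt(M_B) - math.sqrt(M_PI)) ** 2

def z(q2):
    """Map q^2 in the semileptonic region to the unit disc."""
    a = math.sqrt(T_PLUS - q2)
    b = math.sqrt(T_PLUS - T0)
    return (a - b) / (a + b)

# A form factor is then modeled as f(q^2) ~ sum_n b_n * z(q^2)**n with only
# a few coefficients b_n, which is what gets fit jointly to lattice and
# experimental data.
```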
Grotjahn, Richard; Black, Robert; Leung, Ruby; Wehner, Michael F.; Barlow, Mathew; Bosilovich, Michael; Gershunov, Alexander; Gutowski, Jr., William J.; Gyakum, John R.; Katz, Richard W.; et al
2015-05-22
This paper reviews research approaches and open questions regarding data, statistical analyses, dynamics, modeling efforts, and trends in relation to temperature extremes. Our specific focus is on extreme events of short duration (roughly less than 5 days) that affect parts of North America. These events are associated with large-scale meteorological patterns (LSMPs). Methods used to define extreme-event statistics and to identify and connect LSMPs to extreme temperatures are presented. Recent advances in statistical techniques can connect LSMPs to extreme temperatures through appropriately defined covariates that supplement more straightforward analyses. A wide array of LSMPs, ranging from synoptic to planetary-scale phenomena, have been implicated as contributors to extreme temperature events. Current knowledge about the physical nature of these contributions and the dynamical mechanisms leading to the implicated LSMPs is incomplete. There is a pressing need for (a) systematic study of the physics of LSMP life cycles and (b) comprehensive model assessment of LSMP-extreme temperature event linkages and LSMP behavior. Generally, climate models capture the observed heat waves and cold air outbreaks with some fidelity. However, they overestimate warm-wave frequency, underestimate cold-air-outbreak frequency, and underestimate the collective influence of low-frequency modes on temperature extremes. Climate models have been used to investigate past changes and project future trends in extreme temperatures. Overall, modeling studies have identified important mechanisms, such as the effects of large-scale circulation anomalies and land-atmosphere interactions, on changes in extreme temperatures. However, few studies have examined changes in LSMPs specifically to understand the role of LSMPs in past and future extreme temperature changes.
Even though LSMPs are resolvable by global and regional climate models, they are not necessarily well simulated, so more research is needed to understand the limitations of climate models and improve model skill in simulating extreme temperatures and their associated LSMPs. The paper concludes with unresolved issues and research questions.
Wahlgren, Thomas; Levitt, Seymour; Kowalski, Jan; Nilsson, Sten; Brandberg, Yvonne
2011-11-15
Purpose: To determine the impact of pretreatment comorbidity on late health-related quality of life (HRQoL) scores after patients have undergone combined radiotherapy for prostate cancer, including high-dose rate brachytherapy boost and hormonal deprivation therapy. Methods and Materials: Results from the European Organization for Research and Treatment of Cancer QLQ-C30 questionnaire survey of 158 patients 5 years or more after completion of therapy were used from consecutively accrued subjects treated with curative radiotherapy at our institution, with no signs of disease at the time of questionnaire completion. HRQoL scores were compared with the Charlson combined comorbidity index (CCI), using analysis of covariance and multivariate regression models together with pretreatment factors including tumor stage, tumor grade, pretreatment prostate-specific antigen level, neoadjuvant hormonal treatment, diabetes status, cardiovascular status, and age and Charlson score as separate variables or the composite CCI. Results: An inverse correlation between the two HRQoL domains, long-term global health (QL) and physical function (PF) scores, and the CCI score was observed, indicating an impact of comorbidity in these function areas. Selected pretreatment factors poorly explained the variation in functional HRQoL in the multivariate models; however, a statistically significant impact was found for the CCI (with QL and PF scores) and the presence of diabetes (with QL and emotional function). Cognitive function and social function were not statistically significantly predicted by any of the pretreatment factors. Conclusions: The CCI proved to be valid in this context, but it seems useful mainly in predicting long-term QL and PF scores. Of the other variables investigated, diabetes had more impact than cardiovascular morbidity on HRQoL outcomes in prostate cancer.
Flow units from integrated WFT and NMR data
Kasap, E.; Altunbay, M.; Georgi, D.
1997-08-01
Reliable and continuous permeability profiles are vital as both hard and soft data required for delineating reservoir architecture. They can improve the vertical resolution of seismic data, well-to-well stratigraphic correlations, and kriging between the well locations. In conditional simulations, permeability profiles are imposed as the conditioning data. Variograms, covariance functions and other geostatistical indicators are more reliable when based on good quality permeability data. Nuclear Magnetic Resonance (NMR) logging and Wireline Formation Tests (WFT) separately generate a wealth of information, and their synthesis extends the value of this information further by providing continuous and accurate permeability profiles without increasing the cost. NMR and WFT data present a unique combination because WFTs provide discrete, in situ permeability based on fluid-flow, whilst NMR responds to the fluids in the pore space and yields effective porosity, pore-size distribution, bound and moveable fluid saturations, and permeability. The NMR permeability is derived from the T{sub 2}-distribution data. Several equations have been proposed to transform T{sub 2} data to permeability. Regardless of the transform model used, the NMR-derived permeabilities depend on interpretation parameters that may be rock specific. The objective of this study is to integrate WFT permeabilities with NMR-derived, T{sub 2} distribution-based permeabilities and thereby arrive at core quality, continuously measured permeability profiles. We outlined the procedures to integrate NMR and WFT data and applied the procedure to a field case. Finally, this study advocates the use of hydraulic unit concepts to extend the WFT-NMR derived, core quality permeabilities to uncored intervals or uncored wells.
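One widely used T2-to-permeability transform of the kind discussed above is the SDR model, k = C·φ⁴·T2lm², whose constant C is exactly the rock-specific interpretation parameter the WFT data can recalibrate. A minimal sketch, with C set to a frequently quoted sandstone default (an assumption, not a value from this study):

```python
# Sketch of a common NMR T2-to-permeability transform (SDR model) and of
# recalibrating its constant against a discrete WFT permeability point.
# C_SDR = 4.0 is a frequently quoted sandstone default, not a fitted value.
C_SDR = 4.0   # gives k in mD when T2lm is in ms and phi is a fraction

def sdr_permeability(phi, t2lm_ms, c=C_SDR):
    """Permeability (mD) from NMR effective porosity and log-mean T2."""
    return c * phi ** 4 * t2lm_ms ** 2

def calibrate_c(phi, t2lm_ms, k_wft):
    """Back out the rock-specific constant from an in situ WFT permeability."""
    return k_wft / (phi ** 4 * t2lm_ms ** 2)

# Example: phi = 0.25, T2lm = 100 ms gives 156.25 mD with the default C;
# a WFT point at the same depth pins down C for the surrounding interval.
```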
Stessin, Alexander M.; Sison, Cristina; Nieto, Jaime; Raifu, Muri; Li, Baoqing
2013-03-01
Purpose: The aim of this study was to examine the effect of postoperative radiation therapy (RT) on cause-specific survival in patients with meningeal hemangiopericytomas. Methods and Materials: The Surveillance, Epidemiology, and End Results database from 1990-2008 was queried for cases of surgically resected central nervous system hemangiopericytoma. Patient demographics, tumor location, and extent of resection were included in the analysis as covariates. The Kaplan-Meier product-limit method was used to analyze cause-specific survival. A Cox proportional hazards regression analysis was conducted to determine which factors were associated with cause-specific survival. Results: The mean follow-up time is 7.9 years (95 months). There were 76 patients included in the analysis, of these, 38 (50%) underwent gross total resection (GTR), whereas the other half underwent subtotal resection (STR). Postoperative RT was administered to 42% (16/38) of the patients in the GTR group and 50% (19/38) in the STR group. The 1-year, 10-year, and 20-year cause-specific survival rates were 99%, 75%, and 43%, respectively. On multivariate analysis, postoperative RT was associated with significantly better survival (HR = 0.269, 95% CI 0.084-0.862; P=.027), in particular for patients who underwent STR (HR = 0.088, 95% CI: 0.015-0.528; P<.008). Conclusions: In the absence of large prospective trials, the current clinical decision-making of hemangiopericytoma is mostly based on retrospective data. We recommend that postoperative RT be considered after subtotal resection for patients who could tolerate it. Based on the current literature, the practical approach is to deliver limited field RT to doses of 50-60 Gy while respecting the normal tissue tolerance. Further investigations are clearly needed to determine the optimal therapeutic strategy.
Vuichard, N.
2015-07-13
In this study, exchanges of carbon, water, and energy between the land surface and the atmosphere are monitored by the eddy covariance technique at the ecosystem level. Currently, the FLUXNET database contains more than 500 registered sites, and up to 250 of them share data (free fair-use data set). Many modelling groups use the FLUXNET data set for evaluating ecosystem models' performance, but this requires uninterrupted time series for the meteorological variables used as input. Because original in situ data often contain gaps, from very short (a few hours) up to relatively long (some months) ones, we develop a new and robust method for filling the gaps in meteorological data measured at the site level. Our approach has the benefit of making use of continuous data available globally (ERA-Interim) at a high temporal resolution, spanning from 1989 to today. These data are, however, not measured at the site level, and for this reason a method to downscale and correct the ERA-Interim data is needed. We apply this method to the level 4 data (L4) from the La Thuile collection, freely available after registration under a fair-use policy. The performance of the developed method varies across sites and is also a function of the meteorological variable. On average over all sites, applying the bias correction method to the ERA-Interim data reduced the mismatch with the in situ data by 10 to 36%, depending on the meteorological variable considered. In comparison to the internal variability of the in situ data, the root mean square error (RMSE) between the in situ data and the unbiased ERA-I (ERA-Interim) data remains relatively large (on average over all sites, from 27 to 76% of the standard deviation of the in situ data, depending on the meteorological variable considered). The performance of the method remains poor for the wind speed field, in particular regarding its capacity to conserve a standard deviation similar to the one measured at FLUXNET stations.
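The core debiasing step can be sketched as fitting a linear map from the reanalysis series to the site measurements on their overlap and applying it everywhere. The series below are synthetic, and a real implementation also handles diurnal and seasonal cycles separately:

```python
import numpy as np

# Sketch: bias-correct a reanalysis series against in situ measurements by
# linear regression on the overlap period.  Synthetic data; the actual
# method treats diurnal/seasonal structure and each variable separately.
rng = np.random.default_rng(3)
truth = 15 + 8 * np.sin(np.linspace(0, 20, 2000))          # site temperature
era = 0.8 * truth + 3.0 + rng.normal(0, 1.0, truth.size)   # biased reanalysis

slope, intercept = np.polyfit(era, truth, 1)   # fit on the overlap
era_corrected = slope * era + intercept        # apply everywhere (incl. gaps)

def rmse(a, b):
    return float(np.sqrt(np.mean((a - b) ** 2)))

before = rmse(era, truth)            # mismatch of raw reanalysis
after = rmse(era_corrected, truth)   # reduced after debiasing
```

Gap-filling then simply substitutes `era_corrected` wherever the in situ record is missing, which is why the residual RMSE relative to the site data's own variability is the relevant quality measure.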
Vulnerability of crops and native grasses to summer drying in the U.S. Southern Great Plains
Raz-Yaseef, Naama; Billesbach, Dave P.; Fischer, Marc L.; Biraud, Sebastien C.; Gunter, Stacey A.; Bradford, James A.; Torn, Margaret S.
2015-08-31
The Southern Great Plains are characterized by a fine-scale mixture of different land-cover types, predominantly winter-wheat and grazed pasture, with relatively small areas of other crops, native prairie, and switchgrass. Recent droughts and predictions of increased drought in the Southern Great Plains, especially during the summer months, raise concern for these ecosystems. We measured ecosystem carbon and water fluxes with eddy-covariance systems over cultivated cropland for 10 years, and over lightly grazed prairie and new switchgrass fields for 2 years each. Growing-season precipitation showed the strongest control over net carbon uptake for all ecosystems, but with a variable effect: grasses (prairie and switchgrass) needed at least 350 mm of precipitation during the growing season to become net carbon sinks, while crops needed only 100 mm. In summer, high temperatures enhanced evaporation and led to a higher likelihood of dry soil conditions. Therefore, summer-growing native prairie species and switchgrass experienced more seasonal droughts than spring-growing crops. For wheat, the net reduction in carbon uptake resulted mostly from a decrease in gross primary production rather than an increase in respiration. Flux measurements suggested that management practices for crops were effective in suppressing evapotranspiration and decomposition (by harvesting and removing secondary growth), and in increasing carbon uptake (by fertilizing and conserving summer soil water). In light of future projections for wetter springs and drier and warmer summers in the Southern Great Plains, our study indicates an increased vulnerability in native ecosystems and summer crops over time.
Kim, Hyun S.; Czuczman, Gregory J.; Nicholson, Wanda K.; Pham, Luu D.; Richman, Jeffrey M.
2008-11-15
The purpose of this study was to assess the presence and severity of pain levels during 24 h after uterine fibroid embolization (UFE) for symptomatic leiomyomata and compare the effectiveness and adverse effects of morphine patient-controlled analgesia (PCA) versus fentanyl PCA. We carried out a prospective, nonrandomized study of 200 consecutive women who received UFE and morphine or fentanyl PCA after UFE. Pain perception levels were obtained on a 0-10 scale for the 24-h period after UFE. Linear regression methods were used to determine pain trends and differences in pain trends between the two groups and the association between pain scores and patient covariates. One hundred eighty-five patients (92.5%) reported greater-than-baseline pain after UFE, and 198 patients (99%) required IV opioid PCA. One hundred thirty-six patients (68.0%) developed nausea during the 24-h period. Seventy-two patients (36%) received morphine PCA and 128 (64%) received fentanyl PCA, without demographic differences. The mean dose of morphine used was 33.8 ± 26.7 mg, while the mean dose of fentanyl was 698.7 ± 537.4 µg. Using this regimen, patients who received morphine PCA had significantly lower pain levels than those who received fentanyl PCA (p < 0.0001). We conclude that patients develop pain requiring IV opioid PCA within 24 h after UFE. Morphine PCA is more effective in reducing post-uterine artery embolization pain than fentanyl PCA. Nausea is a significant adverse effect of opioid PCA.
Shirvani, Shervin M.; Jiang, Jing; Chang, Joe Y.; Welsh, James W.; Gomez, Daniel R.; Swisher, Stephen; Buchholz, Thomas A.; Smith, Benjamin D.
2012-12-01
Purpose: The incidence of early-stage non-small cell lung cancer (NSCLC) among older adults is expected to increase because of demographic trends and computed tomography-based screening; yet, optimal treatment in the elderly remains controversial. Using the Surveillance, Epidemiology, and End Results (SEER)-Medicare cohort spanning 2001-2007, we compared survival outcomes associated with 5 strategies used in contemporary practice: lobectomy, sublobar resection, conventional radiation therapy, stereotactic ablative radiation therapy (SABR), and observation. Methods and Materials: Treatment strategy and covariates were determined in 10,923 patients aged ≥66 years with stage IA-IB NSCLC. Cox regression, adjusted for patient and tumor factors, compared overall and disease-specific survival for the 5 strategies. In a second exploratory analysis, propensity-score matching was used for comparison of SABR with other options. Results: The median age was 75 years, and 29% had moderate to severe comorbidities. Treatment distribution was lobectomy (59%), sublobar resection (11.7%), conventional radiation (14.8%), observation (12.6%), and SABR (1.1%). In Cox regression analysis with a median follow-up time of 3.2 years, SABR was associated with the lowest risk of death within 6 months of diagnosis (hazard ratio [HR] 0.48; 95% confidence interval [CI] 0.38-0.63; referent is lobectomy). After 6 months, lobectomy was associated with the best overall and disease-specific survival. In the propensity-score matched analysis, survival after SABR was similar to that after lobectomy (HR 0.71; 95% CI 0.45-1.12; referent is SABR). Conventional radiation and observation were associated with poor outcomes in all analyses. Conclusions: In this population-based experience, lobectomy was associated with the best long-term outcomes in fit elderly patients with early-stage NSCLC.
Exploratory analysis of SABR early adopters suggests efficacy comparable with that of surgery in select populations. Evaluation of these therapies in randomized trials is urgently needed.
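The propensity-score matched comparison of SABR with the other strategies can be sketched as follows (a toy, from-scratch illustration on synthetic data; the study's actual covariates, propensity model, and matching algorithm are not reproduced here):

```python
import numpy as np

def propensity_scores(X, treated, iters=500, lr=0.5):
    """Logistic regression P(treatment | covariates) fitted by plain
    gradient ascent; a from-scratch stand-in for the propensity model."""
    Xb = np.hstack([np.ones((len(X), 1)), X])          # add intercept
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))
        w += lr * Xb.T @ (treated - p) / len(treated)  # ascent step
    return 1.0 / (1.0 + np.exp(-Xb @ w))

def match_one_to_one(ps, treated):
    """Greedy 1:1 nearest-neighbour matching on the propensity score,
    without replacement."""
    controls = list(np.where(~treated)[0])
    pairs = {}
    for i in np.where(treated)[0]:
        j = min(controls, key=lambda k: abs(ps[k] - ps[i]))
        pairs[i] = j
        controls.remove(j)
    return pairs

# Synthetic cohort: two covariates (e.g. age, comorbidity score), ~20% "SABR".
rng = np.random.default_rng(3)
X = rng.normal(size=(200, 2))
treated = rng.random(200) < 0.2
ps = propensity_scores(X, treated.astype(float))
pairs = match_one_to_one(ps, treated)
```

Matching each treated patient to the comparison patient with the closest propensity score balances the measured covariates between the two groups before outcomes are compared.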
Showalter, Timothy N.; Hegarty, Sarah E.; Rabinowitz, Carol; Maio, Vittorio; Hyslop, Terry; Dicker, Adam P.; Louis, Daniel Z.
2015-03-15
Purpose: Although the likelihood of radiation-related adverse events influences treatment decisions regarding radiation therapy after prostatectomy for eligible patients, the data available to inform decisions are limited. This study was designed to evaluate the genitourinary, gastrointestinal, and sexual adverse events associated with postprostatectomy radiation therapy and to assess the influence of radiation timing on the risk of adverse events. Methods: The Regione Emilia-Romagna Italian Longitudinal Health Care Utilization Database was queried to identify a cohort of men who received radical prostatectomy for prostate cancer during 2003 to 2009, including patients who received postprostatectomy radiation therapy. Patients with prior radiation therapy were excluded. Outcome measures were genitourinary, gastrointestinal, and sexual adverse events after prostatectomy. Rates of adverse events were compared between the cohorts who did and did not receive postoperative radiation therapy. Multivariable Cox proportional hazards models were developed for each class of adverse events, including models with radiation therapy as a time-varying covariate. Results: A total of 9876 men were included in the analyses: 2176 (22%) who received radiation therapy and 7700 (78%) treated with prostatectomy alone. In multivariable Cox proportional hazards models, the additional exposure to radiation therapy after prostatectomy was associated with increased rates of gastrointestinal (rate ratio [RR] 1.81; 95% confidence interval [CI] 1.44-2.27; P<.001) and urinary nonincontinence events (RR 1.83; 95% CI 1.83-2.80; P<.001) but not urinary incontinence events or erectile dysfunction. The addition of the time from prostatectomy to radiation therapy interaction term was not significant for any of the adverse event outcomes (P>.1 for all outcomes). Conclusion: Radiation therapy after prostatectomy is associated with an increase in gastrointestinal and genitourinary adverse events. 
However, the timing of radiation therapy did not influence the risk of radiation therapy–associated adverse events in this cohort, which contradicts the commonly held clinical tenet that delaying radiation therapy reduces the risk of adverse events.
Final Scientific Report for ER41087
Hiller, John R.
2013-08-23
The primary focus of the work was the development of methods for the nonperturbative solution of quantum chromodynamics (QCD) in a form that yields wave functions for the eigenstates, from which hadronic properties can be computed. The principal approach was to use a light-front Hamiltonian formulation. In light-front coordinates, t+z/c plays the role of time, with t the ordinary time, z a space direction, and c the speed of light. This leads to a relativistic formulation that retains useful characteristics of nonrelativistic treatments. A bound state of many constituents can be represented by wave functions that define probabilities for each possible arrangement of internal momenta. These functions satisfy integral equations that can be approximated numerically to yield a matrix representation. The matrix problem can be solved by iterative methods. The approximate wave functions can then be used to compute properties of the bound state. Methods have been developed for model theories and gauge theories, including quantum electrodynamics and theories that are supersymmetric. The work has required the development of new numerical algorithms and computer codes for singular integral equations and eigenvalue problems. A key aspect of the work is the construction of practical procedures for nonperturbative regularization and renormalization. Two methods of regularization have been studied. One is the addition of heavy Pauli-Villars (PV) particles to the Lagrangian, with their metrics and couplings tuned to provide the necessary cancellations in the regularization. The other method of regularization is the addition of supersymmetric partners, to extend a theory to a supersymmetric form. The supersymmetric theories were solved by the supersymmetric discrete light-cone quantization (SDLCQ) method.
The most significant accomplishments of the project were the SDLCQ calculation of direct evidence for a Maldacena duality conjecture, construction of a practical light-front quantization for QED in an arbitrary covariant gauge, and invention of the light-front coupled-cluster method, designed to eliminate the need for Fock-space truncations.
Gu, Lianhong; Van Gorsel, Eva; Leuning, Ray; Delpierre, Nicolas; Black, Andy; Chen, Baozhang; Munger, J. William; Wofsy, Steve; Aubinet, M.
2009-11-01
Micrometeorological measurements of nighttime ecosystem respiration can be systematically biased when stable atmospheric conditions lead to drainage flows associated with decoupling of air flow above and within plant canopies. The associated horizontal and vertical advective fluxes cannot be measured using instrumentation on the single towers typically used at micrometeorological sites. A common approach to minimize bias is to use a threshold in friction velocity, u*, to exclude periods when advection is assumed to be important, but this is problematic in situations when in-canopy flows are decoupled from the flow above. Using data from 25 flux stations in a wide variety of forest ecosystems globally, we examine the generality of a novel approach to estimating nocturnal respiration developed by van Gorsel et al. (van Gorsel, E., Leuning, R., Cleugh, H.A., Keith, H., Suni, T., 2007. Nocturnal carbon efflux: reconciliation of eddy covariance and chamber measurements using an alternative to the u*-threshold filtering technique. Tellus 59B, 397-403). The approach is based on the assumption that advection is small relative to the vertical turbulent flux (FC) and change in storage (FS) of CO2 in the few hours after sundown. The sum of FC and FS reaches a maximum during this period, which is used to derive a temperature response function for ecosystem respiration. Measured hourly soil temperatures are then used with this function to estimate respiration, R_Rmax. The new approach yielded excellent agreement with (1) independent measurements using respiration chambers, (2) estimates using ecosystem light-response curves of FC + FS extrapolated to zero light, R_LRC, and (3) a detailed process-based forest ecosystem model, R_cast. At most sites respiration rates estimated using the u*-filter, R_ust, were smaller than R_Rmax and R_LRC.
Agreement of our approach with independent measurements indicates that R_Rmax provides an excellent estimate of nighttime ecosystem respiration.
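The central step, deriving a temperature-response function from the post-sundown maxima of FC + FS and then driving it with measured soil temperature, can be sketched with a simple Q10 model (an assumed functional form for illustration only; van Gorsel et al. may use a different parameterization):

```python
import numpy as np

def fit_q10(temp_c, resp):
    """Fit R(T) = R10 * Q10**((T - 10) / 10) by log-linear least squares.
    Taking logs makes the model linear in (T - 10)/10, so an ordinary
    polyfit recovers ln(R10) and ln(Q10)."""
    z = (np.asarray(temp_c) - 10.0) / 10.0
    slope, intercept = np.polyfit(z, np.log(resp), 1)
    return np.exp(intercept), np.exp(slope)   # (R10, Q10)

# Synthetic soil temperatures (deg C) and respiration following Q10 = 2, R10 = 3.
T = np.linspace(2.0, 25.0, 40)
R = 3.0 * 2.0 ** ((T - 10.0) / 10.0)
r10, q10 = fit_q10(T, R)
```

Once fitted on the post-sundown window, the same function evaluated on hourly soil temperature yields the whole-night respiration estimate.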
The Gravitational Potential near the Sun from SEGUE K-Dwarf Kinematics
Zhang Lan; Liu Chao; Zhao Gang; Rix, Hans-Walter; Van de Ven, Glenn; Bovy, Jo
2013-08-01
To constrain the Galactic gravitational potential near the Sun (≈1.5 kpc), we derive and model the spatial and velocity distributions for a sample of 9000 K-dwarfs with spectra from SDSS/SEGUE, which yield radial velocities and abundances ([Fe/H] and [α/Fe]). We first derive the spatial density distribution for three abundance-selected sub-populations of stars accounting for the survey's selection function. The vertical profiles of these sub-populations are simple exponentials and their vertical dispersion profile is nearly isothermal. To model these data, we apply the 'vertical' Jeans equation, which relates the observable tracer number density and vertical velocity dispersion to the gravitational potential or vertical force. We explore a number of functional forms for the vertical force law, fit the dispersion and density profiles of all abundance-selected sub-populations simultaneously in the same potential, and explore all parameter covariances using a Markov Chain Monte Carlo technique. Our fits constrain the disk mass scale height to ≲300 pc and the total surface mass density to be 67 ± 6 M_Sun pc^-2 at |z| = 1.0 kpc, of which the contribution from all stars is 42 ± 5 M_Sun pc^-2 (assuming a contribution from cold gas of 13 M_Sun pc^-2). We find significant constraints on the local dark matter density of 0.0065 ± 0.0023 M_Sun pc^-3 (0.25 ± 0.09 GeV cm^-3). Together with recent experiments this firms up the best estimate of 0.0075 ± 0.0021 M_Sun pc^-3 (0.28 ± 0.08 GeV cm^-3), consistent with global fits of approximately round dark matter halos to kinematic data in the outskirts of the Galaxy.
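The 'vertical' Jeans equation used in this analysis relates the tracer density ν(z) and vertical velocity dispersion σ_z(z) to the vertical force K_z; in the plane-parallel, tilt-free form commonly adopted for such work it reads:

```latex
% Vertical Jeans equation: tracer observables -> vertical force K_z
\frac{1}{\nu}\,\frac{\partial}{\partial z}\!\left(\nu\,\sigma_z^{2}\right)
  = -\frac{\partial \Phi}{\partial z} \equiv K_z(z),
\qquad
\Sigma(|z|) = \frac{\lvert K_z(z)\rvert}{2\pi G}
```

where Σ(|z|) is the surface mass density enclosed within |z| in the plane-parallel approximation; fitting parameterized forms of K_z to the measured ν and σ_z profiles yields the surface-density constraints quoted above.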
Jasoni, Richard L; Larsen, Jessica D; Lyles, Brad F.; Healey, John M; Cooper, Clay A; Hershey, Ronald L; Lefebre, Karen J
2013-04-01
Pahute Mesa is a groundwater recharge area at the Nevada National Security Site. Because underground nuclear testing was conducted at Pahute Mesa, groundwater recharge may transport radionuclides from underground test sites downward to the water table; the amount of groundwater recharge is also an important component of contaminant transport models. To estimate the amount of groundwater recharge at Pahute Mesa, an INFIL3.0 recharge-runoff model is being developed. Two eddy covariance (EC) stations were installed on Pahute Mesa to estimate evapotranspiration (ET) to support the groundwater recharge modeling project. This data report describes the methods that were used to estimate ET and collect meteorological data. Evapotranspiration was estimated for two predominant plant communities on Pahute Mesa; one site was located in a sagebrush plant community, the other site in a pinyon pine/juniper community. Annual ET was estimated to be 310 ± 13.9 mm for the sagebrush site and 347 ± 15.9 mm for the pinyon pine/juniper site (March 26, 2011 to March 26, 2012). Annual precipitation measured with unheated tipping bucket rain gauges was 179 mm at the sagebrush site and 159 mm at the pinyon pine/juniper site. Annual precipitation measured with bulk precipitation gauges was 222 mm at the sagebrush site and 227 mm at the pinyon pine/juniper site (March 21, 2011 to March 28, 2012). A comparison of tipping bucket versus bulk precipitation data showed that total precipitation measured by the tipping bucket rain gauges was 17 to 20 percent lower than the bulk precipitation gauges. These differences were most likely the result of the unheated tipping bucket precipitation gauges not measuring frozen precipitation as accurately as the bulk precipitation gauges.
In this one-year study, ET exceeded precipitation at both study sites because estimates of ET included precipitation that fell during the winter of 2010-2011 prior to EC instrumentation and the precipitation gauges started collecting data in March 2011.
Derrien, H.; Harvey, J.A.; Larson, N.M.; Leal, L.C.; Wright, R.Q.
2000-05-01
The average ²³⁵U neutron total cross sections were obtained in the energy range 2 keV to 330 keV from high-resolution transmission measurements of a 0.033 atom/b sample [1]. The experimental data were corrected for the contribution of isotope impurities and for resonance self-shielding effects in the sample. The results are in very good agreement with the experimental data of Poenitz et al. [4] in the energy range 40 keV to 330 keV and are the only available accurate experimental data in the energy range 2 keV to 40 keV. ENDF/B-VI evaluated data are 1.7% larger. The SAMMY/FITACS code [2] was used for a statistical model analysis of the total cross section, selected fission cross sections and data in the energy range 2 keV to 200 keV. SAMMY/FITACS is an extended version of SAMMY which allows consistent analysis of the experimental data in the resolved and unresolved resonance regions. The Reich-Moore resonance parameters were obtained [3] from SAMMY Bayesian fits of high-resolution experimental neutron transmission and partial cross section data below 2.25 keV, and the corresponding average parameters and covariance data were used in the present work as input for the statistical model analysis of the high energy range of the experimental data. The result of the analysis shows that the average resonance parameters obtained from the analysis of the unresolved resonance region are consistent with those obtained in the resolved energy region. Another important result is that the ENDF/B-VI capture cross section could be too small by more than 10% in the energy range 10 keV to 200 keV.
Ray, Jaideep; Lee, Jina; Lefantzi, Sophia; Yadav, Vineet; Michalak, Anna M.; van Bloemen Waanders, Bart Gustaaf; McKenna, Sean Andrew
2013-09-01
The estimation of fossil-fuel CO2 emissions (ffCO2) from limited ground-based and satellite measurements of CO2 concentrations will form a key component of the monitoring of treaties aimed at the abatement of greenhouse gas emissions. The limited nature of the measured data leads to a severely underdetermined estimation problem. If the estimation is performed at fine spatial resolutions, it can also be computationally expensive. In order to enable such estimations, advances are needed in the spatial representation of ffCO2 emissions, scalable inversion algorithms and the identification of observables to measure. To that end, we investigate parsimonious spatial parameterizations of ffCO2 emissions which can be used in atmospheric inversions. We devise and test three random field models, based on wavelets, Gaussian kernels and covariance structures derived from easily observed proxies of human activity. In doing so, we constructed a novel inversion algorithm, based on compressive sensing and sparse reconstruction, to perform the estimation. We also address scalable ensemble Kalman filters as an inversion mechanism and quantify the impact of the Gaussian assumptions inherent in them. We find that the assumption does not impact the estimates of mean ffCO2 source strengths appreciably, but a comparison with Markov chain Monte Carlo estimates shows significant differences in the variance of the source strengths. Finally, we study whether the very different spatial natures of biogenic and ffCO2 emissions can be used to estimate them, in a disaggregated fashion, solely from CO2 concentration measurements, without extra information from products of incomplete combustion, e.g., CO. We find that this is possible during the winter months, though the errors can be as large as 50%.
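The compressive-sensing inversion can be illustrated with a minimal sparse-reconstruction sketch (iterative soft-thresholding for an l1-regularized least-squares problem on synthetic data; the paper's wavelet and kernel parameterizations, and its actual transport operator, are not shown):

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Iterative soft-thresholding (ISTA) for
    min_x 0.5*||A x - y||^2 + lam*||x||_1."""
    x = np.zeros(A.shape[1])
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    for _ in range(iters):
        g = A.T @ (A @ x - y)              # gradient of the smooth term
        x = x - g / L                      # gradient step
        x = np.sign(x) * np.maximum(np.abs(x) - lam / L, 0.0)  # shrinkage
    return x

# Toy problem: recover a 5-sparse "emission" vector from 40 noiseless
# random measurements of 100 unknowns (all sizes are illustrative).
rng = np.random.default_rng(1)
A = rng.normal(size=(40, 100)) / np.sqrt(40)
x_true = np.zeros(100)
x_true[[3, 17, 42, 60, 88]] = [5.0, -4.0, 3.0, 6.0, -2.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.01, iters=2000)
support = set(np.argsort(np.abs(x_hat))[-5:])
```

With far fewer measurements than unknowns, the l1 penalty selects the few active sources, which is the property the underdetermined ffCO2 problem exploits.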
Kordilla, Jannes; Pan, Wenxiao; Tartakovsky, Alexandre M.
2014-12-14
We propose a novel Smoothed Particle Hydrodynamics (SPH) discretization of the fully coupled Landau-Lifshitz-Navier-Stokes (LLNS) and advection-diffusion equations. The accuracy of the SPH solution of the LLNS equations is demonstrated by comparing the scaling of velocity variance and the self-diffusion coefficient with kinetic temperature and particle mass obtained from the SPH simulations and analytical solutions. The spatial covariances of pressure and velocity fluctuations are found to be in good agreement with theoretical models. To validate the accuracy of the SPH method for the coupled LLNS and advection-diffusion equations, we simulate the interface between two miscible fluids. We study the formation of so-called giant fluctuations of the front between light and heavy fluids with and without gravity, where the light fluid lies on top of the heavy fluid. We find that the power spectra of the simulated concentration field are in good agreement with experiments and analytical solutions. In the absence of gravity the power spectra decay as the -4 power of the wavenumber, except for small wavenumbers, which diverge from this power-law behavior due to the effect of finite domain size. Gravity suppresses the fluctuations, resulting in a much weaker dependence of the power spectra on the wavenumber. Finally, the model is used to study the effect of thermal fluctuations on the Rayleigh-Taylor instability, the unstable dynamics of the front between a heavy fluid overlying a light fluid. The front dynamics is shown to agree well with the analytical solutions.
Ortega, John; Turnipseed, A.; Guenther, Alex B.; Karl, Thomas G.; Day, D. A.; Gochis, David; Huffman, J. A.; Prenni, Anthony J.; Levin, E. J.; Kreidenweis, Sonia M.; DeMott, Paul J.; Tobo, Y.; Patton, E. G.; Hodzic, Alma; Cui, Y. Y.; Harley, P.; Hornbrook, R. S.; Apel, E. C.; Monson, Russell K.; Eller, A. S.; Greenberg, J. P.; Barth, Mary; Campuzano-Jost, Pedro; Palm, B. B.; Jiminez, J. L.; Aiken, A. C.; Dubey, Manvendra K.; Geron, Chris; Offenberg, J.; Ryan, M. G.; Fornwalt, Paula J.; Pryor, S. C.; Keutsch, Frank N.; DiGangi, J. P.; Chan, A. W.; Goldstein, Allen H.; Wolfe, G. M.; Kim, S.; Kaser, L.; Schnitzhofer, R.; Hansel, A.; Cantrell, Chris; Mauldin, R. L.; Smith, James N.
2014-01-01
The Bio-hydro-atmosphere interactions of Energy, Aerosols, Carbon, H2O, Organics & Nitrogen (BEACHON) project seeks to understand the feedbacks and interrelationships between hydrology, biogenic emissions, carbon assimilation, aerosol properties, clouds and associated feedbacks within water-limited ecosystems. The Manitou Experimental Forest Observatory (MEFO) was established in 2008 by the National Center for Atmospheric Research to address many of the BEACHON research objectives, and it now provides a fixed field site with significant infrastructure. MEFO is a mountainous, semi-arid ponderosa pine-dominated forest site that is normally dominated by clean continental air but is periodically influenced by anthropogenic sources from Colorado Front Range cities. This article summarizes the past and ongoing research activities at the site, and highlights some of the significant findings that have resulted from these measurements. These activities include soil property measurements; hydrological studies; measurements of high-frequency turbulence parameters; eddy covariance flux measurements of water, energy, aerosols and carbon dioxide through the canopy; determination of biogenic and anthropogenic volatile organic compound emissions and their influence on regional atmospheric chemistry; aerosol number and mass distributions; chemical speciation of aerosol particles; characterization of ice and cloud condensation nuclei; trace gas measurements; and model simulations using coupled chemistry and meteorology. In addition to various long-term continuous measurements, three focused measurement campaigns with state-of-the-art instrumentation have taken place since the site was established, and two of these studies are the subjects of this special issue: BEACHON-ROCS (Rocky Mountain Organic Carbon Study, 2010) and BEACHON-RoMBAS (Rocky Mountain Biogenic Aerosol Study, 2011).
Jin, Cui; Xiao, Xiangming; Wagle, Pradeep; Griffis, Timothy; Dong, Jinwei; Wu, Chaoyang; Qin, Yuanwei; Cook, David R.
2015-11-01
Satellite-based Production Efficiency Models (PEMs) often require meteorological reanalysis data such as the North America Regional Reanalysis (NARR) by the National Centers for Environmental Prediction (NCEP) as model inputs to simulate Gross Primary Production (GPP) at regional and global scales. This study first evaluated the accuracies of air temperature (T_NARR) and downward shortwave radiation (R_NARR) of the NARR by comparing with in-situ meteorological measurements at 37 AmeriFlux non-crop eddy flux sites, then used one PEM, the Vegetation Photosynthesis Model (VPM), to simulate 8-day mean GPP (GPP_VPM) at seven AmeriFlux crop sites, and investigated the uncertainties in GPP_VPM from climate inputs as compared with eddy covariance-based GPP (GPP_EC). Results showed that T_NARR agreed well with in-situ measurements; R_NARR, however, was positively biased. An empirical linear correction was applied to R_NARR, and significantly reduced the relative error of R_NARR by ~25% for crop site-years. Overall, GPP_VPM calculated from the in-situ (GPP_VPM(EC)), original (GPP_VPM(NARR)) and adjusted NARR (GPP_VPM(adjNARR)) climate data tracked the seasonality of GPP_EC well, albeit with different degrees of biases. GPP_VPM(EC) showed a good match with GPP_EC for maize (Zea mays L.), but was slightly underestimated for soybean (Glycine max L.). Replacing the in-situ climate data with the NARR resulted in a significant overestimation of GPP_VPM(NARR) (18.4/29.6% for irrigated/rainfed maize and 12.7/12.5% for irrigated/rainfed soybean). GPP_VPM(adjNARR) showed a good agreement with GPP_VPM(EC) for both crops due to the reduction in the bias of R_NARR. The results imply that the bias of R_NARR introduced significant uncertainties into the PEM-based GPP estimates, suggesting that more accurate surface radiation datasets are needed to estimate primary production of terrestrial ecosystems at regional and global scales.
Xiao, Jingfeng; Zhuang, Qianlai; Baldocchi, Dennis D.; Law, Beverly E.; Richardson, Andrew D.; Chen, Jiquan; Oren, Ram; Starr, Gregory; Noormets, Asko; Ma, Siyan; Verma, Shashi B.; Wharton, Sonia; Wofsy, Steven C.; Bolstad, Paul V.; Burns, Sean P.; Cook, David R.; Curtis, Peter S.; Drake, Bert G.; Falk, Matthias; Fischer, Marc L.; Foster, David R.; Gu, Lianhong; Hadley, Julian L.; Hollinger, David Y.; Katul, Gabriel G.; Litvak, Marcy; Martin, Timothy A.; Matamala, Roser; McNulty, Steve; Meyers, Tilden P.; Monson, Russell K.; Munger, J. William; Oechel, Walter C.; U, Kyaw Tha Paw; Schmid, Hans Peter; Scott, Russell L.; Sun, Ge; Suyker, Andrew E.; Torn, Margaret S.
2009-03-06
Eddy covariance flux towers provide continuous measurements of net ecosystem carbon exchange (NEE) for a wide range of climate and biome types. However, these measurements only represent the carbon fluxes at the scale of the tower footprint. To quantify the net exchange of carbon dioxide between the terrestrial biosphere and the atmosphere for regions or continents, flux tower measurements need to be extrapolated to these large areas. Here we used remotely-sensed data from the Moderate Resolution Imaging Spectrometer (MODIS) instrument on board NASA's Terra satellite to scale up AmeriFlux NEE measurements to the continental scale. We first combined MODIS and AmeriFlux data for representative U.S. ecosystems to develop a predictive NEE model using a regression tree approach. The predictive model was trained and validated using NEE data over the periods 2000-2004 and 2005-2006, respectively. We found that the model predicted NEE reasonably well at the site level. We then applied the model to the continental scale and estimated NEE for each 1 km x 1 km cell across the conterminous U.S. for each 8-day period in 2005 using spatially-explicit MODIS data. The model generally captured the expected spatial and seasonal patterns of NEE. Our study demonstrated that our empirical approach is effective for scaling up eddy flux NEE measurements to the continental scale and producing wall-to-wall NEE estimates across multiple biomes. Our estimates may provide an independent dataset from simulations with biogeochemical models and inverse modeling approaches for examining the spatiotemporal patterns of NEE and constraining terrestrial carbon budgets for large areas.
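The regression-tree idea behind the upscaling, recursively splitting on remote-sensing predictors and predicting leaf means, can be reduced to a single-split sketch (a toy "stump" on synthetic data; the study used full multi-predictor regression trees on MODIS inputs):

```python
import numpy as np

def best_split(x, y):
    """Exhaustive search for the single threshold on one predictor that
    minimizes the two-leaf sum of squared errors -- the core operation
    a regression tree repeats recursively on each resulting subset."""
    order = np.argsort(x)
    xs, ys = x[order], y[order]
    best_sse, best_thr = np.inf, None
    for i in range(1, len(xs)):
        if xs[i] == xs[i - 1]:
            continue                        # no valid threshold between ties
        left, right = ys[:i], ys[i:]
        sse = ((left - left.mean()) ** 2).sum() + ((right - right.mean()) ** 2).sum()
        if sse < best_sse:
            best_sse, best_thr = sse, 0.5 * (xs[i] + xs[i - 1])
    return best_thr

# Toy data: "NEE" switches regime when a greenness index crosses 0.5.
rng = np.random.default_rng(2)
evi = rng.uniform(0.0, 1.0, 200)
nee = np.where(evi > 0.5, -2.0, 1.0)   # carbon sink when green, source otherwise
threshold = best_split(evi, nee)
```

A full tree applies this split search recursively over many predictors; the fitted tree can then be evaluated on wall-to-wall satellite inputs to map NEE beyond the tower footprints.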
Schwalm, C.R.; Williams, C.A.; Schaefer, K.; Anderson, R.; Arain, M.A.; Baker, I.; Black, T.A.; Chen, G.; Ciais, P.; Davis, K. J.; Desai, A. R.; Dietze, M.; Dragoni, D.; Fischer, M.L.; Flanagan, L.B.; Grant, R.F.; Gu, L.; Hollinger, D.; Izaurralde, R.C.; Kucharik, C.; Lafleur, P.M.; Law, B.E.; Li, L.; Li, Z.; Liu, S.; Lokupitiya, E.; Luo, Y.; Ma, S.; Margolis, H.; Matamala, R.; McCaughey, H.; Monson, R. K.; Oechel, W. C.; Peng, C.; Poulter, B.; Price, D.T.; Riciutto, D.M.; Riley, W.J.; Sahoo, A.K.; Sprintsin, M.; Sun, J.; Tian, H.; Tonitto, C.; Verbeeck, H.; Verma, S.B.
2011-06-01
Our current understanding of terrestrial carbon processes is represented in various models used to integrate and scale measurements of CO₂ exchange from remote sensing and other spatiotemporal data. Yet assessments are rarely conducted to determine how well models simulate carbon processes across vegetation types and environmental conditions. Using standardized data from the North American Carbon Program we compare observed and simulated monthly CO₂ exchange from 44 eddy covariance flux towers in North America and 22 terrestrial biosphere models. The analysis period spans ≈220 site-years, 10 biomes, and includes two large-scale drought events, providing a natural experiment to evaluate model skill as a function of drought and seasonality. We evaluate models' ability to simulate the seasonal cycle of CO₂ exchange using multiple model skill metrics and analyze links between model characteristics, site history, and model skill. Overall model performance was poor; the difference between observations and simulations was ≈10 times observational uncertainty, with forested ecosystems better predicted than nonforested. Model-data agreement was highest in summer and in temperate evergreen forests. In contrast, model performance declined in spring and fall, especially in ecosystems with large deciduous components, and in dry periods during the growing season. Models used across multiple biomes and sites, the mean model ensemble, and a model using assimilated parameter values showed high consistency with observations. Models with the highest skill across all biomes all used prescribed canopy phenology, calculated NEE as the difference between GPP and ecosystem respiration, and did not use a daily time step.
Xiao, Jingfeng; Zhuang, Qianlai; Baldocchi, Dennis D.; Bolstad, Paul V.; Burns, Sean P.; Chen, Jiquan; Cook, David R.; Curtis, Peter S.; Drake, Bert G.; Foster, David R.; Gu, Lianhong; Hadley, Julian L.; Hollinger, David Y.; Katul, Gabriel G.; Law, Beverly E.; Litvak, Marcy; Ma, Siyan; Martin, Timothy A.; Matamala, Roser; McNulty, Steve; Meyers, Tilden P.; Monson, Russell K.; Munger, J. William; Noormets, Asko; Oechel, Walter C.; Oren, Ram; Richardson, Andrew D.; Schmid, Hans Peter; Scott, Russell L.; Starr, Gregory; Sun, Ge; Suyker, Andrew E.; Torn, Margaret S.; Paw, Kyaw; Verma, Shashi B.; Wharton, Sonia; Wofsy, Steven C.
2008-10-01
Eddy covariance flux towers provide continuous measurements of net ecosystem carbon exchange (NEE) for a wide range of climate and biome types. However, these measurements only represent the carbon fluxes at the scale of the tower footprint. To quantify the net exchange of carbon dioxide between the terrestrial biosphere and the atmosphere for regions or continents, flux tower measurements need to be extrapolated to these large areas. Here we used remotely sensed data from the Moderate Resolution Imaging Spectrometer (MODIS) instrument on board the National Aeronautics and Space Administration's (NASA) Terra satellite to scale up AmeriFlux NEE measurements to the continental scale. We first combined MODIS and AmeriFlux data for representative U.S. ecosystems to develop a predictive NEE model using a modified regression tree approach. The predictive model was trained and validated using eddy flux NEE data over the periods 2000-2004 and 2005-2006, respectively. We found that the model predicted NEE well (r = 0.73, p < 0.001). We then applied the model to the continental scale and estimated NEE for each 1 km x 1 km cell across the conterminous U.S. for each 8-day interval in 2005 using spatially explicit MODIS data. The model generally captured the expected spatial and seasonal patterns of NEE as determined from measurements and the literature. Our study demonstrated that our empirical approach is effective for scaling up eddy flux NEE measurements to the continental scale and producing wall-to-wall NEE estimates across multiple biomes. Our estimates may provide an independent dataset from simulations with biogeochemical models and inverse modeling approaches for examining the spatiotemporal patterns of NEE and constraining terrestrial carbon budgets over large areas.
Development of High Throughput Process for Constructing 454 Titanium and Illumina Libraries
Deshpande, Shweta; Hack, Christopher; Tang, Eric; Malfatti, Stephanie; Ewing, Aren; Lucas, Susan; Cheng, Jan-Fang
2010-05-28
We have developed two processes with the Biomek FX robot to construct 454 Titanium and Illumina libraries in order to meet increasing library demands. All modifications to the library construction steps were made to adapt the entire processes to the 96-well plate format. The key modifications include shearing of DNA with the Covaris E210 and enzymatic reaction cleanup and fragment size selection with SPRI beads and magnetic plate holders. The construction of 96 Titanium libraries takes about 8 hours from sheared DNA to ssDNA recovery. The processing of 96 Illumina libraries takes less time than the Titanium library process. Although both processes still require manual transfer of plates from the robot to other workstations such as thermocyclers, these robotic processes represent roughly a 12- to 24-fold increase in library capacity compared to the manual processes. To enable the sequencing of many libraries in parallel, we have also developed sets of molecular barcodes for both library types. The requirements for the 454 library barcodes are 10 bases, 40-60% GC, no consecutive identical bases, and at least a 3-base difference between barcodes. We used 96 of the resulting 270 barcodes to construct libraries and pooled them to test the ability to assign reads accurately to the right samples. When allowing 1 base error in the 10-base barcodes, we could assign 99.6% of the total reads, and 100% of them were uniquely assigned. As for the Illumina barcodes, the requirements are 4 bases, balanced GC, and at least a 2-base difference between barcodes. We have begun to assess the ability to assign reads after pooling different numbers of libraries. We will discuss the progress and the challenges of these scale-up processes.
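The 454 barcode design rules quoted above (10 bases, 40-60% GC, no consecutive identical bases, at least a 3-base pairwise difference) are simple to state as code. The following sketch checks a candidate set against those rules; it is an illustration of the constraints only, not the authors' design pipeline.

```python
from itertools import combinations

def gc_fraction(bc):
    return sum(b in "GC" for b in bc) / len(bc)

def has_consecutive_repeat(bc):
    # The "no consecutive same base" rule.
    return any(a == b for a, b in zip(bc, bc[1:]))

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def valid_454_set(barcodes):
    """Check a candidate 454 barcode set: 10 bases, 40-60% GC,
    no consecutive identical bases, pairwise Hamming distance >= 3."""
    for bc in barcodes:
        if len(bc) != 10 or not 0.4 <= gc_fraction(bc) <= 0.6:
            return False
        if has_consecutive_repeat(bc):
            return False
    return all(hamming(a, b) >= 3 for a, b in combinations(barcodes, 2))
```

For example, `valid_454_set(["ACGTACGTAC", "AGCTAGCTAG"])` passes all four rules, while a pair differing at a single position fails the distance check.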
Harlim, John; Mahdi, Adam; Majda, Andrew J.
2014-01-15
A central issue in contemporary science is the development of nonlinear data-driven statistical-dynamical models for time series of noisy partial observations from nature or a complex model. It has been established recently that ad hoc quadratic multi-level regression models can have finite-time blow-up of statistical solutions and/or pathological behavior of their invariant measure. Recently, a new class of physics-constrained nonlinear regression models was developed to ameliorate this pathological behavior. Here a new finite ensemble Kalman filtering algorithm is developed for estimating the state, the linear and nonlinear model coefficients, and the model and observation noise covariances from available partial noisy observations of the state. Several stringent tests and applications of the method are developed here. In the most complex application, the perfect model has 57 degrees of freedom involving a zonal (east-west) jet, two topographic Rossby waves, and 54 nonlinearly interacting Rossby waves; the perfect model has significant non-Gaussian statistics in the zonal jet with blocked and unblocked regimes and a non-Gaussian skewed distribution due to interaction with the other 56 modes. We only observe the zonal jet contaminated by noise and apply the ensemble filter algorithm for estimation. Numerically, we find that a three-dimensional nonlinear stochastic model with one level of memory mimics the statistical effect of the other 56 modes on the zonal jet in an accurate fashion, including the skewed non-Gaussian distribution and autocorrelation decay. On the other hand, a similar stochastic model with zero memory levels fails to capture the crucial non-Gaussian behavior of the zonal jet from the perfect 57-mode model.
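The authors' algorithm jointly estimates states, model coefficients, and noise covariances; a full reproduction is well beyond a short example. The toy sketch below shows only the basic perturbed-observation ensemble Kalman analysis step for a scalar state, with invented numbers, to illustrate the filtering building block the abstract refers to.

```python
import random

random.seed(42)

def enkf_update(ensemble, y_obs, obs_var):
    """One perturbed-observation EnKF analysis step for a scalar state."""
    n = len(ensemble)
    mean = sum(ensemble) / n
    p = sum((x - mean) ** 2 for x in ensemble) / (n - 1)  # sample variance
    k = p / (p + obs_var)                                  # Kalman gain
    # Each member assimilates a perturbed copy of the observation.
    return ([x + k * (y_obs + random.gauss(0.0, obs_var ** 0.5) - x)
             for x in ensemble], k)

# Prior ensemble centered far from the observation; the update pulls it in.
prior = [random.gauss(0.0, 2.0) for _ in range(200)]
y_obs, obs_var = 5.0, 1.0
posterior, gain = enkf_update(prior, y_obs, obs_var)
```

In the paper's setting the state vector is augmented with the unknown model coefficients and noise parameters, so the same update simultaneously corrects states and parameters; that augmentation is not shown here.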
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U.S. Department of Energy laboratories.
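The hierarchical models and priors in this work are application-specific, but the Poisson-likelihood MCMC building block can be illustrated with a generic random-walk Metropolis sampler for a single count rate. The data and tuning parameters below are invented, and a flat prior stands in for the paper's tailored priors.

```python
import math
import random

random.seed(1)

counts = [9, 11, 10, 8, 12, 10, 9, 11, 10, 10]  # synthetic photon counts

def log_post(lam):
    """Log posterior: Poisson likelihood times a flat prior on lam > 0."""
    if lam <= 0:
        return float("-inf")
    return sum(k * math.log(lam) - lam - math.lgamma(k + 1) for k in counts)

def metropolis(n_samples, lam0=5.0, step=0.5):
    """Random-walk Metropolis over the rate parameter lam."""
    lam, lp = lam0, log_post(lam0)
    out = []
    for _ in range(n_samples):
        prop = lam + random.gauss(0.0, step)
        lp_prop = log_post(prop)
        if random.random() < math.exp(min(0.0, lp_prop - lp)):
            lam, lp = prop, lp_prop     # accept
        out.append(lam)
    return out

samples = metropolis(5000)
post_mean = sum(samples[1000:]) / len(samples[1000:])  # discard burn-in
```

The paper's samplers operate on high-dimensional image unknowns with structured priors (edge-localizing, smoothing, Wishart-based), which require far more sophisticated proposals than this scalar random walk.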
PROSPECTS FOR PROBING THE SPACETIME OF Sgr A* WITH PULSARS
Liu, K.; Wex, N.; Kramer, M.; Cordes, J. M.; Lazio, T. J. W.
2012-03-01
The discovery of radio pulsars in compact orbits around Sgr A* would allow an unprecedented and detailed investigation of the spacetime of this supermassive black hole. This paper shows that pulsar timing, including that of a single pulsar, has the potential to provide novel tests of general relativity, in particular its cosmic censorship conjecture and no-hair theorem for rotating black holes. These experiments can be performed by timing observations with 100 μs precision, achievable with the Square Kilometre Array for a normal pulsar at frequencies above 15 GHz. Based on the standard pulsar timing technique, we develop a method that allows the determination of the mass, spin, and quadrupole moment of Sgr A*, and provides a consistent covariance analysis of the measurement errors. Furthermore, we test this method in detailed mock data simulations. It seems likely that only for orbital periods below ≈0.3 yr is there the possibility of having negligible external perturbations. For such orbits, we expect a ≈10⁻³ test of the frame dragging and a ≈10⁻² test of the no-hair theorem within five years, if Sgr A* is spinning rapidly. Our method is also capable of identifying perturbations caused by distributed mass around Sgr A*, thus providing high confidence in these gravity tests. Our analysis is not affected by uncertainties in our knowledge of the distance to the Galactic center, R₀. A combination of pulsar timing with the astrometric results of stellar orbits would greatly improve the measurement precision of R₀.
Smith, Steven G.; Skalski, John R.; Schelechte, J. Warren
1994-12-01
Program SURPH is the culmination of several years of research to develop a comprehensive computer program to analyze survival studies of fish and wildlife populations. Development of this software was motivated by the advent of PIT-tag (Passive Integrated Transponder) technology, which permits the detection of salmonid smolts as they pass through hydroelectric facilities on the Snake and Columbia Rivers in the Pacific Northwest. Repeated detections of individually tagged smolts and analysis of their capture histories permit estimates of downriver survival probabilities. Eventual installation of detection facilities at adult fish ladders will also permit estimation of ocean survival and upstream survival of returning salmon using the statistical methods incorporated in SURPH.1. However, the utility of SURPH.1 extends well beyond the analysis of salmonid tagging studies. Release-recapture and radiotelemetry studies of a wide range of terrestrial and aquatic species have been analyzed using SURPH.1 to estimate discrete-time survival probabilities and investigate survival relationships. The interactive computing environment of SURPH.1 was specifically developed to allow researchers to investigate the relationship between survival and capture processes and environmental, experimental and individual-based covariates. Program SURPH.1 represents a significant advancement in the ability of ecologists to investigate the interplay among morphologic, genetic, environmental and anthropogenic factors in the survival of wild species. It is hoped that this better understanding of risk factors affecting survival will lead to greater appreciation of the intricacies of nature and to improvements in the management of wild resources. This technical report is an introduction to SURPH.1 and provides a user guide for both the UNIX and MS-Windows® applications of the SURPH software.
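SURPH's models are far richer than any short example, but the core release-recapture likelihood can be sketched. For a single release with two downstream detection sites, each capture history has a probability built from reach survival (φ) and detection (p) parameters, with the final reach's survival and detection confounded as a single product (λ), as in standard Cormack-Jolly-Seber theory. The counts below are synthetic and the grid-search MLE is purely illustrative, not SURPH's estimation machinery.

```python
import math
from itertools import product

# Synthetic release of 1000 tagged fish; capture histories over two
# downstream detection sites: (1,1), (1,0), (0,1), (0,0).
counts = {(1, 1): 240, (1, 0): 160, (0, 1): 240, (0, 0): 360}

def cell_probs(phi1, p1, lam):
    """phi1: survival to site 1; p1: detection at site 1;
    lam: confounded product of last-reach survival and detection."""
    return {(1, 1): phi1 * p1 * lam,
            (1, 0): phi1 * p1 * (1 - lam),
            (0, 1): phi1 * (1 - p1) * lam,
            (0, 0): 1 - phi1 * p1 - phi1 * (1 - p1) * lam}

def loglik(phi1, p1, lam):
    probs = cell_probs(phi1, p1, lam)
    return sum(n * math.log(probs[h]) for h, n in counts.items())

# Crude grid-search MLE over (phi1, p1, lam).
grid = [i * 0.05 for i in range(1, 20)]
best = max(product(grid, grid, grid), key=lambda t: loglik(*t))
```

With these counts the multinomial likelihood is maximized at φ₁ = 0.8, p₁ = 0.5, λ = 0.6, the values used to generate the expected cell frequencies.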
Smith, Benjamin D.; Smith, Grace L.; Roberts, Kenneth B.; Buchholz, Thomas A.
2009-08-01
Purpose: In 2007, Medicare implemented the Physician Quality Reporting Initiative (PQRI), which provides financial incentives to physicians who report their performance on certain quality measures. PQRI measure no. 74 recommends radiotherapy for patients treated with conservative surgery (CS) for invasive breast cancer. As a first step in evaluating the potential impact of this measure, we assessed baseline use of radiotherapy among women diagnosed with invasive breast cancer before implementation of PQRI. Methods and Materials: Using the SEER-Medicare data set, we identified women aged 66-70 diagnosed with invasive breast cancer and treated with CS between 2000 and 2002. Treatment with radiotherapy was determined using SEER and claims data. Multivariate logistic regression tested whether receipt of radiotherapy varied significantly across clinical, pathologic, and treatment covariates. Results: Of 3,674 patients, 94% (3,445) received radiotherapy. In adjusted analysis, the presence of comorbid illness (odds ratio [OR] 1.69; 95% confidence interval [CI], 1.19-2.42) and unmarried marital status (OR 1.65; 95% CI, 1.22-2.20) were associated with omission of radiotherapy. In contrast, receipt of chemotherapy was protective against omission of radiotherapy (OR 0.25; 95% CI, 0.16-0.38). Race and geographic region did not correlate with radiotherapy utilization. Conclusions: Utilization of radiotherapy following CS was high for patients treated before institution of PQRI, suggesting that at most 6% of patients could benefit from measure no. 74. Further research is needed to determine whether institution of PQRI will affect radiotherapy utilization.
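As a reminder of the mechanics behind the reported odds ratios, here is the textbook 2x2-table odds ratio with a Woolf (log-OR) confidence interval. The counts are hypothetical stand-ins, not SEER-Medicare data, and the study's adjusted ORs come from multivariate logistic regression rather than a raw 2x2 table.

```python
import math

def odds_ratio_ci(a, b, c, d, z=1.96):
    """Odds ratio and Woolf 95% CI from a 2x2 table:
    a/b = events/non-events with the factor, c/d = without it."""
    or_ = (a * d) / (b * c)
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)   # SE of log(OR)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical: 30/100 omissions with comorbidity vs 10/100 without.
or_, lo, hi = odds_ratio_ci(30, 70, 10, 90)
```

A CI whose lower bound exceeds 1 (as for comorbidity and unmarried status in the abstract) indicates an association with omission at the 5% level.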
Aghamousa, Amir; Shafieloo, Arman; Arjunwadkar, Mihir; Souradeep, Tarun E-mail: shafieloo@kasi.re.kr E-mail: tarun@iucaa.ernet.in
2015-02-01
Estimation of the angular power spectrum is one of the important steps in Cosmic Microwave Background (CMB) data analysis. Here, we present a nonparametric estimate of the temperature angular power spectrum for the Planck 2013 CMB data. The method implemented in this work is model-independent, and allows the data, rather than the model, to dictate the fit. Since one of the main targets of our analysis is to test the consistency of the ΛCDM model with Planck 2013 data, we use the nuisance parameters associated with the best-fit ΛCDM angular power spectrum to remove foreground contributions from the data at multipoles ℓ ≥50. We thus obtain a combined angular power spectrum data set together with the full covariance matrix, appropriately weighted over frequency channels. Our subsequent nonparametric analysis resolves six peaks (and five dips) up to ℓ ∼1850 in the temperature angular power spectrum. We present uncertainties in the peak/dip locations and heights at the 95% confidence level. We further show how these reflect the harmonicity of acoustic peaks, and can be used for acoustic scale estimation. Based on this nonparametric formalism, we found the best-fit ΛCDM model to be at 36% confidence distance from the center of the nonparametric confidence set—this is considerably larger than the confidence distance (9%) derived earlier from a similar analysis of the WMAP 7-year data. Another interesting result of our analysis is that at low multipoles, the Planck data do not suggest any upturn, contrary to the expectation based on the integrated Sachs-Wolfe contribution in the best-fit ΛCDM cosmology.
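The nonparametric fit and confidence set are beyond a short sketch, but the peak/dip counting the paper reports can be illustrated with a simple local-extremum scan over a sampled curve. The two-bump "spectrum" below is synthetic, not Planck data, and real peak finding on a noisy spectrum needs smoothing and uncertainty estimates the paper provides.

```python
import math

def peaks_and_dips(y):
    """Indices of strict local maxima and minima of a sampled curve."""
    peaks = [i for i in range(1, len(y) - 1) if y[i - 1] < y[i] > y[i + 1]]
    dips = [i for i in range(1, len(y) - 1) if y[i - 1] > y[i] < y[i + 1]]
    return peaks, dips

# Toy "spectrum": two Gaussian bumps with one dip in between.
xs = [i / 10 for i in range(101)]
ys = [math.exp(-(x - 3) ** 2) + 0.8 * math.exp(-(x - 7) ** 2) for x in xs]
peaks, dips = peaks_and_dips(ys)
```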
Frederik Reitsma; Gerhard Strydom; Bismark Tyobeka; Kostadin Ivanov
2012-10-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The uncertainties in the HTGR analysis tools are today typically assessed with sensitivity analysis, and a few important input uncertainties (typically based on a PIRT process) are then varied in the analysis to find a spread in the parameter of importance. However, one wishes to apply a more fundamental approach to determine the predictive capability and accuracies of coupled neutronics/thermal-hydraulics and depletion simulations used for reactor design and safety assessment. Today there is a broader acceptance of the use of uncertainty analysis even in safety studies, and in some cases regulators have accepted it as a replacement for the traditional conservative analysis. Finally, there is also a renewed focus on supplying reliable covariance data (nuclear data uncertainties) that can then be used in uncertainty methods. Uncertainty and sensitivity studies are therefore becoming an essential component of any significant effort in data and simulation improvement. In order to address uncertainty in analysis and methods in the HTGR community, the IAEA launched a Coordinated Research Project (CRP) on HTGR Uncertainty Analysis in Modelling early in 2012. The project is built on the experience of the OECD/NEA Light Water Reactor (LWR) Uncertainty Analysis in Best-Estimate Modelling (UAM) benchmark activity, but focuses specifically on the peculiarities of HTGR designs and their simulation requirements. Two benchmark problems were defined, with the prismatic type represented by the MHTGR-350 design from General Atomics (GA), while a 250 MW modular pebble bed design, similar to the INET (China) and indirect-cycle PBMR (South Africa) designs, is also included. The paper presents more detail on the benchmark cases, the specific phases and tasks, and the latest status and plans.
Wu Shuangqing
2009-10-15
We continue to investigate the separability of massive field equations for spin-0 and spin-1/2 charged particles in the general, nonextremal, rotating, charged, Chong-Cvetic-Lue-Pope black holes with two independent angular momenta and a nonzero cosmological constant in minimal D=5 gauged supergravity theory. We show that the complex Klein-Gordon equation and the modified Dirac equation with the inclusion of an extra counterterm can be separated by variables into purely radial and purely angular parts in this general Einstein-Maxwell-Chern-Simons background spacetime. A second-order symmetry operator that commutes with the complex Laplacian operator is constructed from the separated solutions and expressed compactly in terms of a rank-2 Staeckel-Killing tensor which admits a simple diagonal form in the chosen pentad one-forms so that it can be understood as the square of a rank-3 totally antisymmetric tensor. A first-order symmetry operator that commutes with the modified Dirac operator is expressed in terms of a rank-3 generalized Killing-Yano tensor and its covariant derivative. The Hodge dual of this generalized Killing-Yano tensor is a generalized principal conformal Killing-Yano tensor of rank-2, which can generate a 'tower' of generalized (conformal) Killing-Yano and Staeckel-Killing tensors that are responsible for the whole hidden symmetries of this general, rotating, charged, Kerr-anti-de Sitter black hole geometry. In addition, the first laws of black hole thermodynamics have been generalized to the case that the cosmological constant can be viewed as a thermodynamical variable.
Instantaneous spatially local projective measurements are consistent in a relativistic quantum field
Lin, Shih-Yuin
2012-12-15
Suppose the postulate of measurement in quantum mechanics can be extended to quantum field theory; then a local projective measurement at some moment on an object locally coupled with a relativistic quantum field will result in a projection or collapse of the wavefunctional of the combined system defined on the whole time-slice associated with the very moment of the measurement, if the relevant degrees of freedom have nonzero correlations. This implies that the wavefunctionals in the same Hamiltonian system but defined in different reference frames would collapse on different time-slices passing through the same local event where the measurement was done. Are these post-measurement states consistent with each other? We illustrate that the quantum states of the Raine-Sciama-Grove detector-field system started with the same initial Gaussian state defined on the same initial time-slice, then collapsed by the measurements on the pointlike detectors on different time-slices in different frames, will evolve to the same state of the combined system up to a coordinate transformation when compared on the same final time-slice. Such consistency is guaranteed by the spatial locality of interactions and the general covariance in a relativistic system, together with the spatial locality of measurements and the linearity of quantum dynamics in its quantum theory. Highlights: • Spatially local quantum measurements in detector-field models are studied. • Local quantum measurement collapses the wavefunctional on the whole time-slice. • In different frames wavefunctionals of a field would collapse on different time-slices. • States collapsed by the same measurement will be consistent on the same final slice.
Motion of small bodies in classical field theory
Gralla, Samuel E. [Enrico Fermi Institute and Department of Physics, University of Chicago, 5640 S. Ellis Avenue, Chicago, Illinois 60637 (United States)]
2010-04-15
I show how prior work with R. Wald on geodesic motion in general relativity can be generalized to classical field theories of a metric and other tensor fields on four-dimensional spacetime that (1) are second-order and (2) follow from a diffeomorphism-covariant Lagrangian. The approach is to consider a one-parameter-family of solutions to the field equations satisfying certain assumptions designed to reflect the existence of a body whose size, mass, and various charges are simultaneously scaled to zero. (That such solutions exist places a further restriction on the class of theories to which our results apply.) Assumptions are made only on the spacetime region outside of the body, so that the results apply independent of the body's composition (and, e.g., black holes are allowed). The worldline 'left behind' by the shrinking, disappearing body is interpreted as its lowest-order motion. An equation for this worldline follows from the 'Bianchi identity' for the theory, without use of any properties of the field equations beyond their being second-order. The form of the force law for a theory therefore depends only on the ranks of its various tensor fields; the detailed properties of the field equations are relevant only for determining the charges for a particular body (which are the ''monopoles'' of its exterior fields in a suitable limiting sense). I explicitly derive the force law (and mass-evolution law) in the case of scalar and vector fields, and give the recipe in the higher-rank case. Note that the vector force law is quite complicated, simplifying to the Lorentz force law only in the presence of the Maxwell gauge symmetry. Example applications of the results are the motion of 'chameleon' bodies beyond the Newtonian limit, and the motion of bodies in (classical) non-Abelian gauge theory. I also make some comments on the role that scaling plays in the appearance of universality in the motion of bodies.
Elementary wideband timing of radio pulsars
Pennucci, Timothy T.; Demorest, Paul B.; Ransom, Scott M. E-mail: pdemores@nrao.edu
2014-08-01
We present an algorithm for the simultaneous measurement of a pulse time-of-arrival (TOA) and dispersion measure (DM) from folded wideband pulsar data. We extend the prescription from Taylor's 1992 work to accommodate a general two-dimensional template 'portrait', the alignment of which can be used to measure a pulse phase and DM. We show that there is a dedispersion reference frequency that removes the covariance between these two quantities and note that the recovered pulse profile scaling amplitudes can provide useful information. We experiment with pulse modeling by using a Gaussian-component scheme that allows for independent component evolution with frequency, a 'fiducial component', and the inclusion of scattering. We showcase the algorithm using our publicly available code on three years of wideband data from the bright millisecond pulsar J1824−2452A (M28A) from the Green Bank Telescope, and a suite of Monte Carlo analyses validates the algorithm. By using a simple model portrait of M28A, we obtain DM trends comparable to those measured by more standard methods, with improved TOA and DM precisions by factors of a few. Measurements from our algorithm will yield precisions at least as good as those from traditional techniques, but are prone to fewer systematic effects and are free of ad hoc parameters. A broad application of this new method for dispersion measure tracking with modern large-bandwidth observing systems should improve the timing residuals for pulsar timing array experiments, such as the North American Nanohertz Observatory for Gravitational Waves.
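The covariance-removing dedispersion reference frequency has a simple interpretation in a least-squares fit of the delay model t(f) = t₀ + K·DM/f², where K is the usual dispersion constant: choosing 1/f_ref² as the weighted mean of 1/f_i² makes the phase and DM columns of the design matrix orthogonal, so their estimates decorrelate. The channel frequencies below are invented, and this sketch fits delays rather than the paper's full 2-D template alignment.

```python
# Dispersive delay: t(f) = t0 + K * DM / f^2 (f in MHz).
K = 4.148808e3  # dispersion constant, MHz^2 pc^-1 cm^3 s

freqs = [730.0 + 6.25 * i for i in range(64)]   # hypothetical channels, MHz
w = [1.0] * len(freqs)                          # equal weights for simplicity

inv_f2 = [1.0 / f ** 2 for f in freqs]
mean_inv_f2 = sum(wi * x for wi, x in zip(w, inv_f2)) / sum(w)
f_ref = mean_inv_f2 ** -0.5                     # covariance-removing frequency

# Reparametrized model t(f) = t0' + K * DM * (1/f^2 - 1/f_ref^2):
# the off-diagonal element of the normal matrix vanishes by construction.
col2 = [K * (x - mean_inv_f2) for x in inv_f2]
offdiag = sum(wi * 1.0 * c for wi, c in zip(w, col2))
```

With unequal weights the same recipe applies with the weighted mean, which is why the reference frequency depends on the band's sensitivity profile rather than being, say, the band center.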
B → Dℓν form factors at nonzero recoil and |Vcb| from 2+1-flavor lattice QCD
Bailey, Jon A.
2015-08-10
We present the first unquenched lattice-QCD calculation of the hadronic form factors for the exclusive decay B¯→Dℓν¯ at nonzero recoil. We carry out numerical simulations on 14 ensembles of gauge-field configurations generated with 2+1 flavors of asqtad-improved staggered sea quarks. The ensembles encompass a wide range of lattice spacings (approximately 0.045 to 0.12 fm) and ratios of light (up and down) to strange sea-quark masses ranging from 0.05 to 0.4. For the b and c valence quarks we use improved Wilson fermions with the Fermilab interpretation, while for the light valence quarks we use asqtad-improved staggered fermions. We extrapolate our results to the physical point using rooted staggered heavy-light meson chiral perturbation theory. We then parametrize the form factors and extend them to the full kinematic range using model-independent functions based on analyticity and unitarity. We present our final results for f+(q2) and f0(q2), including statistical and systematic errors, as coefficients of a series in the variable z and the covariance matrix between these coefficients. We then fit the lattice form-factor data jointly with the experimentally measured differential decay rate from BABAR to determine the CKM matrix element, |Vcb| = (39.6 ± 1.7(QCD+exp) ± 0.2(QED)) × 10⁻³. As a byproduct of the joint fit we obtain the form factors with improved precision at large recoil. In conclusion, we use them to update our calculation of the ratio R(D) in the Standard Model, which yields R(D) = 0.299(11).
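The "series in the variable z" refers to the standard conformal mapping used in model-independent form-factor parametrizations. The sketch below evaluates that mapping for B→D kinematics; the meson masses are approximate PDG-style values inserted for illustration, and the choice t₀ = t₋ (mapping zero recoil to z = 0) is one common convention, not necessarily the paper's.

```python
import math

# Standard z mapping:
# z(q2) = (sqrt(t+ - q2) - sqrt(t+ - t0)) / (sqrt(t+ - q2) + sqrt(t+ - t0)),
# with t+ = (mB + mD)^2. Masses in GeV (approximate, for illustration).
mB, mD = 5.27963, 1.86484
t_plus = (mB + mD) ** 2
t_minus = (mB - mD) ** 2          # zero-recoil point, q2_max

def z(q2, t0):
    a = math.sqrt(t_plus - q2)
    b = math.sqrt(t_plus - t0)
    return (a - b) / (a + b)

t0 = t_minus                      # map zero recoil to z = 0
```

Over the whole semileptonic range 0 ≤ q² ≤ t₋ the variable z stays small (of order a few percent), which is why a short truncated series in z suffices to describe the form factors.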
Matter-enhanced transition probabilities in quantum field theory
Ishikawa, Kenzo; Tobita, Yutaka
2014-05-15
Relativistic quantum field theory is the unique theory that combines relativity and quantum theory and is invariant under the Poincaré transformation. The ground state, the vacuum, is a singlet, and one-particle states transform as elements of an irreducible representation of the group. The covariant one-particle states are momentum eigenstates expressed by plane waves and extended in space. Although the S-matrix defined with such initial and final states respects these symmetries and applies to isolated states, the out-going states for the amplitude of an event detected at a finite-time interval T in experiments are expressed by microscopic states that interact with, and are surrounded by, matter in detectors, and are not plane waves. These matter-induced effects modify the probabilities observed in realistic situations. The transition amplitudes and probabilities of the events are studied with the S-matrix, S[T], that satisfies the boundary condition at T. Using S[T], finite-size corrections of the form 1/T are found. The corrections to Fermi's golden rule become larger than the original values in some situations for light particles. They break Lorentz invariance even in the high energy region of short de Broglie wavelengths. Highlights: • S-matrix S[T] for the finite-time interval in relativistic field theory. • S[T] satisfies the boundary condition and gives corrections of 1/T. • The large corrections for light particles break Lorentz invariance. • The corrections have implications for neutrino experiments.
Aerosol remote sensing in polar regions
Tomasi, Claudio; Kokhanovsky, Alexander A.; Lupi, Angelo; Ritter, Christoph; Smirnov, Alexander; O'Neill, Norman T.; Stone, Robert S.; Holben, Brent N.; Nyeki, Stephan; Mazzola, Mauro; Lanconelli, Christian; Vitale, Vito; Stebel, Kerstin; Aaltonen, Veijo; de Leeuw, Gerrit; Rodriguez, Edith; Herber, Andreas B.; Radionov, Vladimir F.; Zielinski, Tymon; Petelski, Tomasz; Sakerin, Sergey M.; Kabanov, Dmitry M.; Xue, Yong; Mei, Linlu; Istomina, Larysa; Wagener, Richard; McArthur, Bruce; Sobolewski, Piotr S.; Kivi, Rigel; Courcoux, Yann; Larouche, Pierre; Broccardo, Stephen; Piketh, Stuart J.
2015-01-01
Multi-year sets of ground-based sun-photometer measurements conducted at 12 Arctic sites and 9 Antarctic sites were examined to determine daily mean values of aerosol optical thickness τ(λ) at visible and near-infrared wavelengths, from which best-fit values of the Ångström exponent α were calculated. Analysing these data, the monthly mean values of τ(0.50 μm) and α and the relative frequency histograms of the daily mean values of both parameters were determined for winter-spring and summer-autumn in the Arctic and for austral summer in Antarctica. The Arctic and Antarctic covariance plots of the seasonal median values of α versus τ(0.50 μm) showed: (i) a considerable increase in τ(0.50 μm) for the Arctic aerosol from summer to winter-spring, without marked changes in α; and (ii) a marked increase in τ(0.50 μm) passing from the Antarctic Plateau to coastal sites, whereas α decreased considerably due to the larger fraction of sea-salt aerosol. Good agreement was found when comparing ground-based sun-photometer measurements of τ(λ) and α at Arctic and Antarctic coastal sites with Microtops measurements conducted during numerous AERONET/MAN cruises from 2006 to 2013 in three Arctic Ocean sectors and in coastal and off-shore regions of the Southern Atlantic, Pacific, and Indian Oceans, and the Antarctic Peninsula. Lidar measurements were also examined to characterise vertical profiles of the aerosol backscattering coefficient measured throughout the year at Ny-Ålesund. Satellite-based MODIS, MISR, and AATSR retrievals of τ(λ) over large parts of the oceanic polar regions during spring and summer were in close agreement with ship-borne and coastal ground-based sun-photometer measurements. An overview of the chemical composition of mode particles is also presented, based on in-situ measurements at Arctic and Antarctic sites.
Fourteen log-normal aerosol number size-distributions were defined to represent the average features of nuclei, accumulation and coarse mode particles for Arctic haze, summer background aerosol, Asian dust and boreal forest fire smoke, and for various background austral summer aerosol types at coastal and high-altitude Antarctic sites. The main columnar aerosol optical characteristics were determined for all 14 particle modes, based on in-situ measurements of the scattering and absorption coefficients. Diurnally averaged direct aerosol-induced radiative forcing and efficiency were calculated for a set of multimodal aerosol extinction models, using various Bidirectional Reflectance Distribution Function models over vegetation-covered, oceanic and snow-covered surfaces. These gave a reliable measure of the pronounced effects of aerosols on the radiation balance of the surface-atmosphere system over polar regions.
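The Ångström exponent α used throughout this study is, by definition, the negative slope of log τ versus log λ under the power law τ(λ) = β·λ^(−α). A minimal least-squares fit recovering α and the turbidity β from synthetic spectral optical thicknesses (invented values, not the study's measurements):

```python
import math

def angstrom_fit(wavelengths_um, taus):
    """Least-squares fit of log(tau) = log(beta) - alpha * log(lambda),
    returning the Angstrom exponent alpha and turbidity beta."""
    xs = [math.log(w) for w in wavelengths_um]
    ys = [math.log(t) for t in taus]
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    alpha = -slope
    beta = math.exp(my + alpha * mx)
    return alpha, beta

# Synthetic spectral AOT following tau = 0.08 * lambda^-1.3 exactly.
wls = [0.380, 0.440, 0.500, 0.675, 0.870]
taus = [0.08 * w ** -1.3 for w in wls]
alpha, beta = angstrom_fit(wls, taus)
```

On real sun-photometer data the fit is performed over the instrument's wavelength set and the residual scatter reflects deviations from the pure power law.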
Strydom, Gerhard; Bostelmann, F.
2015-09-01
The continued development of High Temperature Gas Cooled Reactors (HTGRs) requires verification of HTGR design and safety features with reliable high fidelity physics models and robust, efficient, and accurate codes. The predictive capability of coupled neutronics/thermal-hydraulics and depletion simulations for reactor design and safety analysis can be assessed with sensitivity analysis (SA) and uncertainty analysis (UA) methods. Uncertainty originates from errors in physical data, manufacturing uncertainties, modelling and computational algorithms. (The interested reader is referred to the large body of published SA and UA literature for a more complete overview of the various types of uncertainties, methodologies and results obtained). SA is helpful for ranking the various sources of uncertainty and error in the results of core analyses. SA and UA are required to address cost, safety, and licensing needs and should be applied to all aspects of reactor multi-physics simulation. SA and UA can guide experimental, modelling, and algorithm research and development. Current SA and UA rely either on derivative-based methods such as stochastic sampling methods or on generalized perturbation theory to obtain sensitivity coefficients. Neither approach addresses all needs. In order to benefit from recent advances in modelling and simulation and the availability of new covariance data (nuclear data uncertainties) extensive sensitivity and uncertainty studies are needed for quantification of the impact of different sources of uncertainties on the design and safety parameters of HTGRs. Only a parallel effort in advanced simulation and in nuclear data improvement will be able to provide designers with more robust and well validated calculation tools to meet design target accuracies. 
In February 2009, the Technical Working Group on Gas-Cooled Reactors (TWG-GCR) of the International Atomic Energy Agency (IAEA) recommended that the proposed Coordinated Research Program (CRP) on the HTGR Uncertainty Analysis in Modelling (UAM) be implemented. This CRP is a continuation of the previous IAEA and Organization for Economic Co-operation and Development (OECD)/Nuclear Energy Agency (NEA) international activities on Verification and Validation (V&V) of available analytical capabilities for HTGR simulation for design and safety evaluations. Within the framework of these activities different numerical and experimental benchmark problems were performed and insight was gained about specific physics phenomena and the adequacy of analysis methods.
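The "stochastic sampling methods" mentioned above can be sketched generically: draw samples of the uncertain inputs, run the model on each sample, and summarize the output spread. The model and uncertainty values below are invented stand-ins for illustration, not HTGR data or any benchmark specification.

```python
import random

random.seed(7)

def propagate(model, nominal, rel_sigma, n=2000):
    """Stochastic-sampling uncertainty propagation: perturb each input
    with an independent Gaussian relative uncertainty, collect outputs."""
    outs = []
    for _ in range(n):
        sample = {k: v * (1.0 + random.gauss(0.0, rel_sigma[k]))
                  for k, v in nominal.items()}
        outs.append(model(sample))
    mean = sum(outs) / n
    var = sum((o - mean) ** 2 for o in outs) / (n - 1)
    return mean, var ** 0.5

# Toy "core response": a smooth function of two uncertain inputs
# (hypothetical stand-ins for a cross section and a thermal parameter).
def model(p):
    return p["sigma_a"] * 2.0 + p["k_cond"] ** 0.5

nominal = {"sigma_a": 1.0, "k_cond": 4.0}
rel_sigma = {"sigma_a": 0.05, "k_cond": 0.02}
mean, std = propagate(model, nominal, rel_sigma)
```

Real UAM exercises sample correlated nuclear data using covariance matrices rather than independent Gaussians, and propagate through full coupled core simulators; only the sampling skeleton is shown here.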
Noise correlation in CBCT projection data and its application for noise reduction in low-dose CBCT
Zhang, Hua; Ouyang, Luo; Wang, Jing E-mail: jing.wang@utsouthwestern.edu; Ma, Jianhua E-mail: jing.wang@utsouthwestern.edu; Huang, Jing; Chen, Wufan
2014-03-15
Purpose: To study the noise correlation properties of cone-beam CT (CBCT) projection data and to incorporate the noise correlation information into a statistics-based projection restoration algorithm for noise reduction in low-dose CBCT. Methods: In this study, the authors systematically investigated the noise correlation properties among detector bins of CBCT projection data by analyzing repeated projection measurements. The measurements were performed on a TrueBeam onboard CBCT imaging system with a 4030CB flat panel detector. An anthropomorphic male pelvis phantom was used to acquire 500 repeated projection data at six different dose levels from 0.1 to 1.6 mAs per projection at three fixed angles. To minimize the influence of the lag effect, lag correction was performed on the consecutively acquired projection data. The noise correlation coefficient between detector bin pairs was calculated from the corrected projection data. The noise correlation among CBCT projection data was then incorporated into the covariance matrix of the penalized weighted least-squares (PWLS) criterion for noise reduction of low-dose CBCT. Results: The analyses of the repeated measurements show that noise correlation coefficients are nonzero between the nearest neighboring bins of CBCT projection data. The average noise correlation coefficients for the first- and second-order neighbors are 0.20 and 0.06, respectively. The noise correlation coefficients are independent of the dose level. Reconstruction of the pelvis phantom shows that the PWLS criterion with consideration of noise correlation (PWLS-Cor) results in a lower noise level as compared to the PWLS criterion without considering the noise correlation (PWLS-Dia) at the matched resolution. At the 2.0 mm resolution level in the axial-plane noise-resolution tradeoff analysis, the noise level of the PWLS-Cor reconstruction is 6.3% lower than that of the PWLS-Dia reconstruction.
Conclusions: Noise is correlated among nearest neighboring detector bins of CBCT projection data. An accurate noise model of CBCT projection data can improve the performance of the statistics-based projection restoration algorithm for low-dose CBCT.
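A minimal sketch of how such correlations could enter a PWLS-style restoration: the covariance matrix is given off-diagonal bands using the first- and second-neighbour coefficients reported above (0.20 and 0.06), and a 1-D closed-form PWLS smoother with that covariance (PWLS-Cor) is set against its diagonal counterpart (PWLS-Dia). This is a toy stand-in for the paper's iterative 3-D algorithm; the profile and noise level are synthetic:

```python
import numpy as np

def banded_covariance(n, sigma2, r1=0.20, r2=0.06):
    """Covariance with the first/second-neighbour correlations
    reported for the CBCT projection data (0.20 and 0.06)."""
    C = sigma2 * np.eye(n)
    for i in range(n - 1):
        C[i, i + 1] = C[i + 1, i] = sigma2 * r1
    for i in range(n - 2):
        C[i, i + 2] = C[i + 2, i] = sigma2 * r2
    return C

def pwls_restore(y, C, beta=1.0):
    """Closed-form PWLS for a 1-D line of detector bins:
    minimize (y-x)^T C^{-1} (y-x) + beta * x^T L x with a
    second-difference roughness penalty L."""
    n = len(y)
    D = np.diff(np.eye(n), 2, axis=0)     # second-difference operator
    L = D.T @ D
    Cinv = np.linalg.inv(C)
    return np.linalg.solve(Cinv + beta * L, Cinv @ y)

rng = np.random.default_rng(0)
truth = np.sin(np.linspace(0, np.pi, 64))
y = truth + 0.1 * rng.standard_normal(64)         # noisy 'projection'
x_cor = pwls_restore(y, banded_covariance(64, 0.1**2))   # PWLS-Cor
x_dia = pwls_restore(y, 0.1**2 * np.eye(64))             # PWLS-Dia
```

Because the fidelity term is nonnegative, the PWLS minimizer is guaranteed to be no rougher than the raw data, which is the smoothing behaviour the penalty buys.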
Hu, Zhi; Huang, Ge; Sadanandam, Anguraj; Gu, Shenda; Lenburg, Marc E; Pai, Melody; Bayani, Nora; Blakely, Eleanor A; Gray, Joe W; Mao, Jian-Hua
2010-06-25
Introduction: HJURP (Holliday Junction Recognition Protein) is a newly discovered gene reported to function at centromeres and to interact with CENPA. However, its role in tumor development remains largely unknown. The goal of this study was to investigate the clinical significance of HJURP in breast cancer and its correlation with radiotherapeutic outcome. Methods: We measured HJURP expression level in human breast cancer cell lines and primary breast cancers by Western blot and/or by Affymetrix Microarray, and determined its associations with clinical variables using standard statistical methods. Validation was performed with the use of published microarray data. We assessed cell growth and apoptosis of breast cancer cells after radiation using high-content image analysis. Results: HJURP was expressed at a higher level in breast cancer than in normal breast tissue. HJURP mRNA levels were significantly associated with estrogen receptor (ER), progesterone receptor (PR), Scarff-Bloom-Richardson (SBR) grade, age and Ki67 proliferation indices, but not with pathologic stage, ERBB2, tumor size, or lymph node status. Higher HJURP mRNA levels were significantly associated with decreased disease-free and overall survival. HJURP mRNA levels predicted the prognosis better than Ki67 proliferation indices. In a multivariate Cox proportional-hazard regression, including clinical variables as covariates, HJURP mRNA levels remained an independent prognostic factor for disease-free and overall survival. In addition, HJURP mRNA levels were an independent prognostic factor over molecular subtypes (normal like, luminal, Erbb2 and basal). Poor clinical outcomes among patients with high HJURP expression were validated in five additional breast cancer cohorts. Furthermore, the patients with high HJURP levels were much more sensitive to radiotherapy.
In vitro studies in breast cancer cell lines showed that cells with high HJURP levels were more sensitive to radiation treatment and had a higher rate of apoptosis than those with low levels. Knock down of HJURP in human breast cancer cells using shRNA reduced the sensitivity to radiation treatment. HJURP mRNA levels were significantly correlated with CENPA mRNA levels. Conclusions: HJURP mRNA level is a prognostic factor for disease-free and overall survival in patients with breast cancer and is a predictive biomarker for sensitivity to radiotherapy.
Validity of Five Satellite-Based Latent Heat Flux Algorithms for Semi-arid Ecosystems
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Feng, Fei; Chen, Jiquan; Li, Xianglan; Yao, Yunjun; Liang, Shunlin; Liu, Meng; Zhang, Nannan; Guo, Yang; Yu, Jian; Sun, Minmin
2015-12-09
Accurate estimation of latent heat flux (LE) is critical in characterizing semiarid ecosystems. Many LE algorithms have been developed during the past few decades. However, the algorithms have not been directly compared, particularly over global semiarid ecosystems. In this paper, we evaluated the performance of five LE models over semiarid ecosystems such as grassland, shrub, and savanna using the Fluxnet dataset of 68 eddy covariance (EC) sites during the period 2000–2009. We also used a modern-era retrospective analysis for research and applications (MERRA) dataset; the Normalized Difference Vegetation Index (NDVI) and Fractional Photosynthetically Active Radiation (FPAR) from the moderate resolution imaging spectroradiometer (MODIS) products; the leaf area index (LAI) from the global land surface satellite (GLASS) products; and the digital elevation model (DEM) from the shuttle radar topography mission (SRTM30) dataset to generate LE at regional scale during the period 2003–2006. The models were the moderate resolution imaging spectroradiometer LE (MOD16) algorithm, the revised remote sensing based Penman–Monteith LE algorithm (RRS), the Priestley–Taylor LE algorithm of the Jet Propulsion Laboratory (PT-JPL), the modified satellite-based Priestley–Taylor LE algorithm (MS-PT), and the semi-empirical Penman LE algorithm (UMD). Direct comparison with ground-measured LE showed the PT-JPL and MS-PT algorithms had relatively high performance over semiarid ecosystems, with the coefficient of determination (R2) ranging from 0.6 to 0.8 and root mean squared error (RMSE) of approximately 20 W/m2. Empirical parameters in the algorithm structures of MOD16 and RRS, and calibrated coefficients of the UMD algorithm, may be the cause of the reduced performance of these LE algorithms, with R2 ranging from 0.5 to 0.7 and RMSE ranging from 20 to 35 W/m2 for MOD16, RRS and UMD.
Sensitivity analysis showed that radiation and vegetation terms were the dominant variables affecting LE fluxes in global semiarid ecosystems.
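The two scores used above to rank the LE algorithms, the coefficient of determination (R2) and root mean squared error (RMSE), can be computed as follows. The flux values below are invented for illustration, not Fluxnet data:

```python
import numpy as np

def r2_rmse(obs, pred):
    """R2 (1 - SS_res/SS_tot against observed values) and RMSE,
    as used for validating modelled latent heat flux."""
    obs, pred = np.asarray(obs, float), np.asarray(pred, float)
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    rmse = float(np.sqrt(np.mean((obs - pred) ** 2)))
    return 1.0 - ss_res / ss_tot, rmse

# Illustrative half-hourly LE values (W/m2) only.
obs  = [110.0, 150.0, 90.0, 200.0, 170.0]   # tower-measured LE
pred = [120.0, 140.0, 100.0, 190.0, 160.0]  # algorithm estimate
r2, rmse = r2_rmse(obs, pred)
```

An R2 near 0.6–0.8 with RMSE around 20 W/m2 corresponds to the "relatively high performance" band reported for PT-JPL and MS-PT.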
Hanson, Paul J; Amthor, Jeffrey S; Wullschleger, Stan D; Wilson, K.; Grant, Robert F.; Hartley, Anne; Hui, D.; Hunt Jr., E. Raymond; Johnson, Dale W.; Kimball, John S.; King, Anthony Wayne; Luo, Yiqi; McNulty, Steven G.; Sun, G.; Thornton, Peter; Wang, S.; Williams, M.; Baldocchi, D. D.; Cushman, Robert Michael
2004-01-01
Models represent our primary method for integration of small-scale, process-level phenomena into a comprehensive description of forest-stand or ecosystem function. They also represent a key method for testing hypotheses about the response of forest ecosystems to multiple changing environmental conditions. This paper describes the evaluation of 13 stand-level models varying in their spatial, mechanistic, and temporal complexity for their ability to capture intra- and interannual components of the water and carbon cycle for an upland, oak-dominated forest of eastern Tennessee. Comparisons between model simulations and observations were conducted for hourly, daily, and annual time steps. Data for the comparisons were obtained from a wide range of methods, including eddy covariance, sapflow, chamber-based soil respiration, biometric estimates of stand-level net primary production and growth, and soil water content by time or frequency domain reflectometry. Response surfaces of carbon and water flux as a function of environmental drivers, and a variety of goodness-of-fit statistics (bias, absolute bias, and model efficiency) were used to judge model performance. A single model did not consistently perform the best at all time steps or for all variables considered. Intermodel comparisons showed good agreement for water cycle fluxes, but considerable disagreement among models for predicted carbon fluxes. The mean of all model outputs, however, was nearly always the best fit to the observations. Not surprisingly, models missing key forest components or processes, such as roots or modeled soil water content, were unable to provide accurate predictions of ecosystem responses to short-term drought phenomena. Nevertheless, an inability to correctly capture short-term physiological processes under drought was not necessarily an indicator of poor annual water and carbon budget simulations.
This is possible because droughts in the subject ecosystem were of short duration and therefore had a small cumulative impact. Models using hourly time steps and detailed mechanistic processes, and having a realistic spatial representation of the forest ecosystem provided the best predictions of observed data. Predictive ability of all models deteriorated under drought conditions, suggesting that further work is needed to evaluate and improve ecosystem model performance under unusual conditions, such as drought, that are a common focus of environmental change discussions.
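The goodness-of-fit statistics named above (bias, absolute bias, and model efficiency) can be sketched as below. "Model efficiency" is taken here in the common Nash–Sutcliffe sense (1 is a perfect fit; 0 means no better than the observed mean), which may differ in detail from the paper's exact definition; the series are invented:

```python
import numpy as np

def fit_stats(obs, mod):
    """Bias, mean absolute bias, and Nash-Sutcliffe-style model
    efficiency for comparing simulated and observed fluxes."""
    obs, mod = np.asarray(obs, float), np.asarray(mod, float)
    bias = float(np.mean(mod - obs))
    abs_bias = float(np.mean(np.abs(mod - obs)))
    eff = 1.0 - np.sum((obs - mod) ** 2) / np.sum((obs - obs.mean()) ** 2)
    return bias, abs_bias, float(eff)

# Illustrative daily flux-like series, not the Tennessee data.
obs = [1.0, 2.0, 3.0, 4.0, 5.0]
mod = [1.5, 1.8, 3.2, 4.1, 4.6]
bias, abs_bias, eff = fit_stats(obs, mod)
```

Computing these statistics separately per time step (hourly, daily, annual) reproduces the kind of multi-scale scoring used in the intercomparison.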
Wharton, S; Schroeder, M; Bible, K; Falk, M; Paw U, K T
2009-02-23
This study examines how stand age affects ecosystem mass and energy exchange response to seasonal drought in three adjacent Douglas-fir (Pseudotsuga menziesii (Mirb.) Franco) forests. The sites include two early seral stands (ES) (0-15 years old) and an old-growth (OG) (≈450-500 years old) forest in the Wind River Experimental Forest, Washington, USA. We use eddy covariance flux measurements of carbon dioxide (F_NEE), latent energy (λE) and sensible heat (H) to derive the evapotranspiration rate (E_T), Bowen ratio (β), water use efficiency (WUE), canopy conductance (G_c), the Priestley-Taylor coefficient (α) and a canopy decoupling factor (Ω). The canopy and bulk parameters are examined to see how ecophysiological responses to water stress, including changes in available soil water (θ_r) and vapor pressure deficit (δe), differ between the two forest successional stages. Despite very different rainfall patterns in 2006 and 2007, we observed distinct successional-stage relationships of E_T, α, and G_c to δe and θ_r during both years. The largest stand differences were (1) higher morning G_c (>10 mm s−1) at the OG forest, coinciding with higher CO2 uptake (F_NEE = −9 to −6 μmol m−2 s−1), but a strong negative response in G_c to moderate δe later in the day and a subsequent reduction in E_T, and (2) higher E_T at the ES stands because midday canopy conductance did not decrease until very low water availability levels (<30%) were reached at the end of the summer. Our results suggest that early seral stands are more likely than mature forests to experience declines in production if the summer drought becomes longer or intensifies, because water-conserving ecophysiological responses were observed only at the very end of the seasonal drought period in the youngest stands.
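Two of the derived bulk parameters named above, the Bowen ratio (β = H/λE) and the Priestley-Taylor coefficient (α = observed λE over equilibrium λE), can be sketched from half-hourly fluxes. This is a generic formulation (FAO-56 saturation-vapour-pressure slope, fixed psychrometric constant), not necessarily the paper's exact one, and the flux numbers are invented:

```python
import math

GAMMA = 0.066  # psychrometric constant, kPa/degC (near sea level)

def slope_svp(t_c):
    """Slope of the saturation vapour-pressure curve (kPa/degC),
    FAO-56 form."""
    es = 0.6108 * math.exp(17.27 * t_c / (t_c + 237.3))
    return 4098.0 * es / (t_c + 237.3) ** 2

def bulk_parameters(H, LE, rn_minus_g, t_c):
    """Bowen ratio and Priestley-Taylor alpha from eddy-covariance
    fluxes (W/m2) and available energy Rn - G (W/m2)."""
    beta = H / LE
    s = slope_svp(t_c)
    le_eq = (s / (s + GAMMA)) * rn_minus_g   # equilibrium latent energy
    alpha = LE / le_eq
    return beta, alpha

beta, alpha = bulk_parameters(H=150.0, LE=200.0, rn_minus_g=400.0, t_c=20.0)
```

Values of α well below the canonical 1.26 are the signature of the water-stress behaviour contrasted between the stands.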
CALiPER Exploratory Study: Accounting for Uncertainty in Lumen Measurements
Bergman, Rolf; Paget, Maria L.; Richman, Eric E.
2011-03-31
With a well-defined and shared understanding of uncertainty in lumen measurements, testing laboratories can better evaluate their processes, contributing to greater consistency and credibility of lighting testing, a key component of the U.S. Department of Energy (DOE) Commercially Available LED Product Evaluation and Reporting (CALiPER) program. Reliable lighting testing is a crucial underlying factor contributing toward the success of many energy-efficient lighting efforts, such as the DOE GATEWAY demonstrations, Lighting Facts Label, ENERGY STAR energy efficient lighting programs, and many others. Uncertainty in measurements is inherent to all testing methodologies, including photometric and other lighting-related testing. Uncertainty exists for all equipment, processes, and systems of measurement, in individual as well as combined ways. A major issue with testing and the resulting accuracy of the tests is the uncertainty of the complete process. Individual equipment uncertainties are typically identified, but their relative value in practice and their combined value with other equipment and processes in the same test are elusive concepts, particularly for complex types of testing such as photometry. The total combined uncertainty of a measurement result is important for repeatable and comparative measurements for light emitting diode (LED) products in comparison with other technologies as well as competing products. This study provides a detailed and step-by-step method for determining uncertainty in lumen measurements, working closely with related standards efforts and key industry experts. This report uses the structure proposed in the Guide to the Expression of Uncertainty in Measurement (GUM) for evaluating and expressing uncertainty in measurements.
The steps of the procedure are described, and a spreadsheet format adapted for integrating sphere and goniophotometric uncertainty measurements is provided for entering parameters, ordering the information, calculating intermediate values and, finally, obtaining expanded uncertainties. Using this basis and examining each step of the photometric measurement and calibration methods, mathematical uncertainty models are developed. Determination of estimated values of input variables is discussed. Guidance is provided for the evaluation of the standard uncertainties of each input estimate, the covariances associated with input estimates, and the calculation of the measurement result. With this basis, the combined uncertainty of the measurement results and, finally, the expanded uncertainty can be determined.
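The GUM combination step described above is the law of propagation of uncertainty: the combined variance is the sensitivity-weighted quadratic form over the covariance matrix of the input estimates, and the expanded uncertainty multiplies by a coverage factor k. The sketch below uses invented relative uncertainties, not CALiPER values:

```python
import numpy as np

def combined_uncertainty(c, U, k=2.0):
    """GUM law of propagation: u_c^2 = c^T U c, where c holds the
    sensitivity coefficients (partial derivatives of the measurand)
    and U is the covariance matrix of the input estimates. The
    expanded uncertainty uses coverage factor k (k=2 for ~95%)."""
    c = np.asarray(c, float)
    U = np.asarray(U, float)
    u_c = float(np.sqrt(c @ U @ c))
    return u_c, k * u_c

# Hypothetical lumen measurement: two correlated inputs (e.g.
# detector response and sphere calibration, correlation 0.5) plus
# one independent input; all numbers illustrative only.
c = [1.0, 1.0, 1.0]                          # relative sensitivities
U = np.array([[0.010**2, 0.5 * 0.010 * 0.008, 0.0],
              [0.5 * 0.010 * 0.008, 0.008**2, 0.0],
              [0.0, 0.0, 0.005**2]])         # relative covariances
u_c, u_exp = combined_uncertainty(c, U)
```

Ignoring the off-diagonal covariance terms here would understate the combined uncertainty, which is why the report treats covariances between input estimates explicitly.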
McAleer, Mary Frances; Moughan, Jennifer M.S.; Byhardt, Roger W.; Cox, James D.; Sause, William T.; Komaki, Ritsuko
2010-03-01
Purpose: Induction chemotherapy (ICT) improves survival compared with radiotherapy (RT) alone in locally advanced non-small-cell lung cancer (LANSCLC) patients with good prognostic factors. Concurrent chemoradiotherapy (CCRT) is superior to ICT followed by RT. The question arises whether ICT response predicts the outcome of patients subsequently treated with CCRT or RT. Methods and Materials: Between 1988 and 1992, 194 LANSCLC patients were treated prospectively with ICT (two cycles of vinblastine and cisplatin) and then CCRT (cisplatin plus 63 Gy for 7 weeks) in the Radiation Therapy Oncology Group 8804 trial (n = 30) or with ICT and then RT (60 Gy/6 wk) in the Radiation Therapy Oncology Group 8808 trial (n = 164). Of the 194 patients, 183 were evaluable and 141 had undergone a postinduction assessment. The overall survival (OS) of those with complete remission (CR) or partial remission (PR) was compared with that of patients with stable disease (SD) or progressive disease (PD) after ICT. Results: Of the 141 patients, 6, 30, 99, and 6 had CR, PR, SD, and PD, respectively. The log-rank test showed a significant difference (p <0.0001) in OS when the response groups were compared (CR/PR vs. SD/PD). On univariate and multivariate analyses, a trend was seen toward a response to ICT with OS (p = 0.097 and p = 0.06, respectively). A squamous histologic type was associated with worse OS on univariate and multivariate analyses (p = 0.031 and p = 0.018, respectively). SD/PD plus a squamous histologic type had a hazard ratio of 2.25 vs. CR/PR plus a nonsquamous histologic type (p = 0.007) on covariate analysis. Conclusion: The response to ICT was associated with a significant survival difference when the response groups were compared. A response to ICT showed a trend toward, but was not predictive of, improved OS in LANSCLC patients. Patients with SD/PD after ICT and a squamous histologic type had the poorest OS.
These data suggest that patients with squamous LANSCLC might benefit from immediate RT or CCRT.
Miles, Edward F.; Nelson, John W.; Alkaissi, Ali K.; Das, Shiva; Clough, Robert W.; Broadwater, Gloria; Anscher, Mitchell S.; Chino, Junzo P.; Oleson, James R.
2010-05-01
Purpose: To assess the correlation of postimplant dosimetric quantifiers with biochemical control of prostate cancer after low-dose rate brachytherapy. Methods and Materials: The biologically effective dose (BED), dose in gray (Gy) to 90% of the prostate (D90), and percent volume of the prostate receiving 100% of the prescription dose (V100) were calculated from the postimplant dose-volume histogram for 140 patients undergoing low-dose rate prostate brachytherapy from 1997 to 2003 at Durham Regional Hospital and the Durham VA Medical Center (Durham, NC). Results: The median follow-up was 50 months. There was a 7% biochemical failure rate (10 of 140), and 91% of patients (127 of 140) were alive at last clinical follow-up. The median BED was 148 Gy (range, 46-218 Gy). The median D90 was 139 Gy (range, 45-203 Gy). The median V100 was 85% (range, 44-100%). The overall 5-year biochemical relapse-free survival (bRFS) rate was 90.1%. On univariate Cox proportional hazards modeling, no pretreatment characteristic (Gleason score sum, age, baseline prostate-specific antigen, or clinical stage) was predictive of bRFS. The BED, D90, and V100 were all highly correlated (Pearson coefficients >92%), and all were strongly correlated with bRFS. Using the Youden method, we identified the following cut points for predicting freedom from biochemical failure: D90 >= 110 Gy, V100 >= 74%, and BED >= 115 Gy. None of the covariates significantly predicted overall survival. Conclusions: We observed significant correlation between BED, D90, and V100 with bRFS. The BED is at least as predictive of bRFS as D90 or V100. Dosimetric quantifiers that account for heterogeneity in tumor location and dose distribution, tumor repopulation, and survival probability of tumor clonogens should be investigated.
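The Youden method used above picks the threshold that maximises J(t) = sensitivity + specificity - 1. A toy version of that cut-point search, with an invented D90/outcome dataset (not the Durham cohort):

```python
def youden_cutpoint(values, success):
    """Youden's J for a 'dose >= t predicts biochemical control'
    rule, maximised over the observed dose values. `success` is 1
    for biochemical control, 0 for failure."""
    best_t, best_j = None, -1.0
    pos = [v for v, s in zip(values, success) if s]      # controlled
    neg = [v for v, s in zip(values, success) if not s]  # failed
    for t in sorted(set(values)):
        sens = sum(v >= t for v in pos) / len(pos)
        spec = sum(v < t for v in neg) / len(neg)
        j = sens + spec - 1.0
        if j > best_j:
            best_t, best_j = t, j
    return best_t, best_j

# Hypothetical D90 values (Gy) and control outcomes.
d90     = [95, 100, 105, 118, 125, 130, 140, 150]
control = [0,   0,   1,   1,   1,   1,   1,   1]
t, j = youden_cutpoint(d90, control)
```

In this contrived example the classes separate perfectly, so J reaches 1.0; on real data the maximising threshold trades sensitivity against specificity.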
The Phylogenetic Signature Underlying ATP Synthase c-Ring Compliance
Pandini, Alessandro; Kleinjung, Jens; Taylor, Willie R.; Junge, Wolfgang; Khan, Shahid
2015-09-01
The proton-driven ATP synthase (F_{O}F_{1}) comprises two rotary, stepping motors (F_{O} and F_{1}) coupled by an elastic power transmission. The elastic compliance resides in the rotor module that includes the membrane-embedded F_{O} c-ring. Proton transport by F_{O} is firmly coupled to the rotation of the c-ring relative to other F_{O} subunits (ab_{2}), and it drives ATP synthesis. We used a computational method to investigate the contribution of the c-ring to the total elastic compliance. We performed principal component analysis of conformational ensembles built using distance constraints from the bovine mitochondrial c-ring X-ray structure. Angular rotary twist, the dominant ring motion, was estimated to show that the c-ring accounted in part for the measured compliance. Ring rotation was entrained to rotation of the external helix within each hairpin-shaped c-subunit in the ring. Ensembles of monomers and dimers extracted from complete c-rings showed that the coupling between collective ring and individual subunit motions was independent of the size of the c-ring, which varies between organisms. Molecular determinants were identified by covariance analysis of residue coevolution and structural-alphabet-based local dynamics correlations. The residue coevolution gave a readout of subunit architecture. The dynamic couplings revealed that the hinge for both ring and subunit helix rotations was constructed from the proton-binding site and the adjacent glycine motif (IB-GGGG) in the midmembrane plane. IB-GGGG motifs were linked by long-range couplings across the ring, while intrasubunit couplings connected the motif to the conserved cytoplasmic loop and adjacent segments. The correlation with principal collective motions shows that the couplings underlie both ring rotary and bending motions. Noncontact couplings between IB-GGGG motifs matched the coevolution signal as well as contact couplings.
The residue coevolution reflects the physiological importance of the dynamics that may link proton transfer to ring compliance.
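Principal component analysis of a conformational ensemble, the step used above to identify rotary twist as the dominant motion, amounts to an eigendecomposition of the coordinate covariance matrix. The sketch below uses a synthetic ensemble with one dominant collective coordinate, not the c-ring ensembles of the paper:

```python
import numpy as np

def principal_components(X):
    """PCA of an ensemble: rows are ensemble members, columns are
    coordinates. Returns the fraction of total variance carried by
    each component and the component directions."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)          # coordinate covariance
    evals, evecs = np.linalg.eigh(C)
    order = np.argsort(evals)[::-1]       # sort descending
    evals = evals[order]
    return evals / evals.sum(), evecs[:, order]

# Synthetic ensemble: 200 'conformations' dominated by one
# collective mode along [3, 3, 0.5], plus small noise.
rng = np.random.default_rng(1)
t = rng.standard_normal(200)
X = np.outer(t, [3.0, 3.0, 0.5]) + 0.1 * rng.standard_normal((200, 3))
frac, vecs = principal_components(X)
```

The first component's variance fraction plays the role of quantifying how dominant the collective (here, "twist-like") motion is relative to the rest.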
Cao, Y; Li, R; Chi, Z
2014-06-01
Purpose: To compare the performances of four commercial treatment planning systems (TPSs) used for intensity-modulated radiotherapy (IMRT). Methods: Ten patients with nasopharyngeal (4 cases), esophageal (3 cases) and cervical (3 cases) cancer were randomly selected from a 3-month IMRT plan pool at one radiotherapy center. For each patient, four IMRT plans were newly generated using the four commercial TPSs (Corvus, Monaco, Pinnacle and Xio), and then verified with Matrixx (two-dimensional array/IBA Company) on a Varian 23EX accelerator. A pass rate (PR) calculated from the Gamma index by OmniPro IMRT 1.5 software was evaluated at five plan verification standards (1%/1mm, 2%/2mm, 3%/3mm, 4%/4mm and 5%/5mm) for each treatment plan. Overall and multiple pairwise comparisons of PRs were statistically conducted by analysis of variance (ANOVA) F and LSD tests among the four TPSs. Results: Overall significant (p<0.05) differences of PRs were found among the four TPSs, with F test values of 3.8 (p=0.02), 21.1 (p<0.01), 14.0 (p<0.01) and 8.3 (p<0.01) at the standards of 1%/1mm to 4%/4mm, respectively, except at the 5%/5mm standard (F=2.6, p=0.06). All means (standard deviations) of PRs at 3%/3mm, 94.3 ± 3.3 (Corvus), 98.8 ± 0.8 (Monaco), 97.5 ± 1.7 (Pinnacle) and 98.4 ± 1.0 (Xio), were above 90% and met the clinical requirement. Multiple pairwise comparisons did not demonstrate a consistently low or high pattern for any TPS. Conclusion: Matrixx dose verification results show that the validation pass rates of Monaco and Xio plans are relatively higher than those of the other two; the Pinnacle plan shows a slightly higher pass rate than the Corvus plan; the lowest pass rate was achieved by the Corvus plan among these four TPSs.
Burns, S. P.; Blanken, P. D.; Turnipseed, A. A.; Monson, R. K.
2015-06-16
Precipitation changes the physical and biological characteristics of an ecosystem. Using a precipitation-based conditional sampling technique and a 14 year dataset from a 25 m micrometeorological tower in a high-elevation subalpine forest, we examined how warm-season precipitation affected the above-canopy diel cycle of wind and turbulence, net radiation Rnet, ecosystem eddy covariance fluxes (sensible heat H, latent heat LE, and CO2 net ecosystem exchange NEE) and vertical profiles of scalars (air temperature Ta, specific humidity q, and CO2 dry mole fraction χc). This analysis allowed us to examine how precipitation modified these variables from hourly (i.e., the diel cycle) to multi-day time-scales (i.e., typical of a weather-system frontal passage). During mid-day we found: (i) even though precipitation caused mean changes on the order of 50–70% to Rnet, H, and LE, the surface energy balance (SEB) was relatively insensitive to precipitation with mid-day closure values ranging between 70 and 80%, and (ii) compared to a typical dry day, a day following a rainy day was characterized by increased ecosystem uptake of CO2 (NEE increased by ≈ 10%), enhanced evaporative cooling (mid-day LE increased by ≈ 30 W m−2), and a smaller amount of sensible heat transfer (mid-day H decreased by ≈ 70 W m−2). Based on the mean diel cycle, the evaporative contribution to total evapotranspiration was, on average, around 6% in dry conditions and 20% in wet conditions. Furthermore, increased LE lasted at least 18 h following a rain event. At night, precipitation (and accompanying clouds) reduced Rnet and increased LE. Any effect of precipitation on the nocturnal SEB closure and NEE was overshadowed by atmospheric phenomena such as horizontal advection and decoupling that create measurement difficulties. Above-canopy mean χc during wet conditions was found to be about 2–3 μmol mol−1 larger than χc on dry days.
This difference was fairly constant over the full diel cycle, suggesting that it was due to synoptic weather patterns (different air masses and/or effects of barometric pressure). In the evening hours during wet conditions, weakly stable conditions resulted in smaller vertical χc differences compared to those in dry conditions. Finally, the effect of clouds on the timing and magnitude of daytime ecosystem fluxes is described.
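The precipitation-based conditional sampling used above amounts to compositing each variable into separate mean diel cycles for wet and dry periods. A minimal numpy sketch, with a synthetic two-day record standing in for the 14 year tower dataset:

```python
import numpy as np

def composite_diel(flux, hour, wet):
    """Average a flux into 24-value mean diel cycles, separately
    for wet and dry periods (precipitation-based conditional
    sampling)."""
    flux, hour, wet = map(np.asarray, (flux, hour, wet))
    diel = {}
    for label, mask in (("dry", ~wet), ("wet", wet)):
        diel[label] = np.array(
            [flux[(hour == h) & mask].mean() for h in range(24)])
    return diel

# Two synthetic days of hourly LE (W/m2): day 2 is flagged 'wet'
# and made uniformly more evaporative, mimicking the post-rain LE
# enhancement reported above.
hour = np.tile(np.arange(24), 2)
dry_day = 200.0 * np.clip(np.sin((np.arange(24) - 6) * np.pi / 12), 0, None)
flux = np.concatenate([dry_day, dry_day + 30.0])
wet = np.repeat([False, True], 24)
diel = composite_diel(flux, hour, wet)
```

With many days in each class, the composite wet-minus-dry difference at each hour is what supports statements like "mid-day LE increased by ≈ 30 W m−2".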
AmeriFlux US-Bar Bartlett Experimental Forest
Richardson, Andrew
2016-01-01
This is the AmeriFlux version of the carbon flux data for the site US-Bar Bartlett Experimental Forest. Site Description - The Bartlett Experimental Forest (44°17′ N, 71°3′ W) is located within the White Mountains National Forest in north-central New Hampshire, USA. The 1050 ha forest extends across an elevational range from 200 to 900 m a.s.l. It was established in 1931 and is managed by the USDA Forest Service Northeastern Research Station in Durham, NH. The climate is humid continental with short, cool summers (mean July temperature, 19.8 °C) and long, cold winters (mean January temperature, −9.8 °C). Annual precipitation averages 130 cm and is distributed evenly throughout the year. Soils are developed from glacial till and are predominantly shallow, well-drained spodosols. At low- to mid-elevation, vegetation is dominated by northern hardwoods (American beech, Fagus grandifolia; sugar maple, Acer saccharum; yellow birch, Betula alleghaniensis; with some red maple, Acer rubrum and paper birch, Betula papyrifera). Conifers (eastern hemlock, Tsuga canadensis; eastern white pine, Pinus strobus; red spruce, Picea rubens) are occasionally found intermixed with the more abundant deciduous species but are generally confined to the highest (red spruce) and lowest (hemlock and pine) elevations. In 2003, the site was adopted as a NASA North American Carbon Program (NACP) Tier-2 field research and validation site. A 26.5 m high tower was installed in a low-elevation northern hardwood stand in November 2003 for the purpose of making eddy covariance measurements of the forest–atmosphere exchange of CO2, H2O and radiant energy. Continuous flux and meteorological measurements began in January 2004 and are ongoing. Average canopy height in the vicinity of the tower is approximately 20–22 m. In the tower footprint, the forest is predominantly classified into red maple, sugar maple, and American beech forest types.
Leaf area index in the vicinity of the tower is 3.6 as measured by seasonal litterfall collection, and 4.5 as measured by the optically based Li-Cor LAI-2000 instrument. Further site information: http://www.fs.fed.us/ne/durham/4155/bartlett.htm
Burns, S. P.; Blanken, P. D.; Turnipseed, A. A.; Hu, J.; Monson, R. K.
2015-12-15
Precipitation changes the physical and biological characteristics of an ecosystem. Using a precipitation-based conditional sampling technique and a 14 year data set from a 25 m micrometeorological tower in a high-elevation subalpine forest, we examined how warm-season precipitation affected the above-canopy diel cycle of wind and turbulence, net radiation Rnet, ecosystem eddy covariance fluxes (sensible heat H, latent heat LE, and CO2 net ecosystem exchange NEE) and vertical profiles of scalars (air temperature Ta, specific humidity q, and CO2 dry mole fraction χc). This analysis allowed us to examine how precipitation modified these variables from hourly (i.e., the diel cycle) to multi-day time-scales (i.e., typical of a weather-system frontal passage). During mid-day we found the following: (i) even though precipitation caused mean changes on the order of 50–70 % to Rnet, H, and LE, the surface energy balance (SEB) was relatively insensitive to precipitation with mid-day closure values ranging between 90 and 110 %, and (ii) compared to a typical dry day, a day following a rainy day was characterized by increased ecosystem uptake of CO2 (NEE increased by ≈ 10 %), enhanced evaporative cooling (mid-day LE increased by ≈ 30 W m−2), and a smaller amount of sensible heat transfer (mid-day H decreased by ≈ 70 W m−2). Based on the mean diel cycle, the evaporative contribution to total evapotranspiration was, on average, around 6 % in dry conditions and between 15 and 25 % in partially wet conditions. Furthermore, increased LE lasted at least 18 h following a rain event. At night, even though precipitation (and accompanying clouds) reduced the magnitude of Rnet, LE increased from ≈ 10 to over 20 W m−2 due to increased evaporation. Any effect of precipitation on the nocturnal SEB closure and NEE was overshadowed by atmospheric phenomena such as horizontal advection and decoupling that create measurement difficulties.
Above-canopy mean χc during wet conditions was found to be about 2–3 μmol mol−1 larger than χc on dry days. This difference was fairly constant over the full diel cycle, suggesting that it was due to synoptic weather patterns (different air masses and/or effects of barometric pressure). Finally, the effect of clouds on the timing and magnitude of daytime ecosystem fluxes is described.
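The mid-day surface energy balance closure reported above is the ratio of the turbulent fluxes to the available radiative energy. A minimal sketch of that calculation (all numbers hypothetical, in W m−2; the storage and soil heat flux terms of the full budget are folded into a single optional `g`):

```python
def seb_closure(h, le, rnet, g=0.0):
    """Surface energy balance closure ratio: (H + LE) / (Rnet - G).

    h, le : sensible and latent heat fluxes (W m^-2)
    rnet  : net radiation (W m^-2)
    g     : ground heat flux and storage terms (W m^-2), optional
    """
    available = rnet - g
    return (h + le) / available

# Hypothetical mid-day values for a dry summer day
ratio = seb_closure(h=250.0, le=150.0, rnet=420.0)
```

A closure ratio between 0.90 and 1.10, as reported for this site, means the measured turbulent fluxes account for roughly all of the available energy.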
Rutter, Charles E.; Chagpar, Anees B.; Evans, Suzanne B.
2014-10-01
Objectives: Radiation therapy for left-sided breast cancer has been associated with an elevated risk of cardiac mortality, based on studies that predate computed tomography-based treatment planning. This study assessed the impact of tumor laterality on overall survival (OS) in a large cohort treated with modern techniques, to indirectly determine whether left-sided treatment remains associated with increased cardiac mortality. Methods and Materials: Patients treated for breast cancer with breast conserving surgery and adjuvant external beam radiation therapy were identified in the National Cancer Database, and OS was compared based on tumor laterality using Kaplan-Meier analysis. Separate analyses were performed for noninvasive and invasive carcinoma and for breast-only and breast plus regional nodal radiation therapy. Multivariate regression analysis of OS was performed with demographic, pathologic, and treatment variables as covariates to adjust for factors associated with breast cancer–specific survival. Results: We identified 344,831 patients whose cancer was diagnosed from 1998 to 2006 with a median follow-up time of 6.04 years (range, 0-14.17 years). Clinical, tumor, and treatment characteristics were similar between laterality groups. Regional nodal radiation was used in 14.2% of invasive cancers. No OS difference was noted based on tumor laterality for patients treated with breast-only (hazard ratio [HR] 0.984, P=.132) and breast plus regional nodal radiation therapy (HR 1.001, P=.957). In multivariate analysis including potential confounders, OS was identical between left- and right-sided cancers (HR 1.002, P=.874). No significant OS difference by laterality was observed when analyses were restricted to patients with at least 10 years of follow-up (n=27,725), both in patients treated with breast-only (HR 0.955, P=.368) and breast plus regional nodal radiation therapy (HR 0.859, P=.155).
Conclusions: Radiation therapy for left-sided breast cancer does not appear to increase the risk of death in this national database relative to right-sided tumors. Consequently, radiation therapy–induced cardiac disease may be less prominent than previously demonstrated.
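The survival comparisons above rest on Kaplan-Meier estimates. As an illustration only (not the authors' code), a self-contained Kaplan-Meier estimator that handles both observed deaths and censored follow-up:

```python
def kaplan_meier(times, events):
    """Kaplan-Meier survival estimate.

    times  : follow-up time for each patient
    events : 1 if the event (e.g. death) was observed, 0 if censored
    Returns a list of (time, survival probability) at each event time.
    """
    data = sorted(zip(times, events))
    n_at_risk = len(data)
    s = 1.0
    curve = []
    i = 0
    while i < len(data):
        t = data[i][0]
        deaths = 0
        removed = 0
        # group all subjects whose follow-up ends at time t
        while i < len(data) and data[i][0] == t:
            deaths += data[i][1]
            removed += 1
            i += 1
        if deaths:
            s *= 1.0 - deaths / n_at_risk
            curve.append((t, s))
        n_at_risk -= removed
    return curve
```

Hazard ratios such as those quoted (HR 1.002, P=.874) come from Cox proportional-hazards regression on top of this kind of survival data, which is beyond a few lines but follows the same (time, event) inputs.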
Fried, David V.; Tucker, Susan L.; Zhou, Shouhao; Liao, Zhongxing; Mawlawi, Osama; Ibbott, Geoffrey; Court, Laurence E.
2014-11-15
Purpose: To determine whether pretreatment CT texture features can improve patient risk stratification beyond conventional prognostic factors (CPFs) in stage III non-small cell lung cancer (NSCLC). Methods and Materials: We retrospectively reviewed 91 cases with stage III NSCLC treated with definitive chemoradiation therapy. All patients underwent pretreatment diagnostic contrast enhanced computed tomography (CE-CT) followed by 4-dimensional CT (4D-CT) for treatment simulation. We used the average-CT and expiratory (T50-CT) images from the 4D-CT along with the CE-CT for texture extraction. Histogram, gradient, co-occurrence, gray tone difference, and filtration-based techniques were used for texture feature extraction. Penalized Cox regression implementing cross-validation was used for covariate selection and modeling. Models incorporating texture features from the 3 image types and CPFs were compared to models incorporating CPFs alone for overall survival (OS), local-regional control (LRC), and freedom from distant metastases (FFDM). Predictive Kaplan-Meier curves were generated using leave-one-out cross-validation. Patients were stratified based on whether their predicted outcome was above or below the median. Reproducibility of texture features was evaluated using test-retest scans from independent patients and quantified using concordance correlation coefficients (CCC). We compared models incorporating the reproducibility seen on test-retest scans to our original models and determined the classification reproducibility. Results: Models incorporating both texture features and CPFs demonstrated a significant improvement in risk stratification compared to models using CPFs alone for OS (P=.046), LRC (P=.01), and FFDM (P=.005). The average CCCs were 0.89, 0.91, and 0.67 for texture features extracted from the average-CT, T50-CT, and CE-CT, respectively.
Incorporating reproducibility within our models yielded 80.4% (±3.7% SD), 78.3% (±4.0% SD), and 78.8% (±3.9% SD) classification reproducibility in terms of OS, LRC, and FFDM, respectively. Conclusions: Pretreatment tumor texture may provide prognostic information beyond that obtained from CPFs. Models incorporating feature reproducibility achieved classification rates of ∼80%. External validation would be required to establish texture as a prognostic factor.
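Test-retest reproducibility is quantified here with the concordance correlation coefficient. A sketch of Lin's CCC, which penalizes both poor correlation and systematic offsets between the two scans (example data hypothetical):

```python
def ccc(x, y):
    """Lin's concordance correlation coefficient between paired measurements.

    Uses population (1/n) variances; returns 1.0 for perfect agreement and
    less than the Pearson correlation whenever there is a mean or scale shift.
    """
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    sx2 = sum((v - mx) ** 2 for v in x) / n
    sy2 = sum((v - my) ** 2 for v in y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y)) / n
    return 2.0 * sxy / (sx2 + sy2 + (mx - my) ** 2)

# A constant offset between test and retest lowers CCC even though
# the Pearson correlation would be exactly 1.
agreement = ccc([1.0, 2.0, 3.0], [2.0, 3.0, 4.0])
```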
Hamilton, Sarah Nicole; Tyldesley, Scott; Li, Dongdong; Olson, Robert; McBride, Mary
2015-04-01
Purpose: This study was undertaken to determine whether there was an increased risk of second malignancies (SM), particularly lung cancer, in early stage breast cancer patients treated with the addition of nodal fields to breast and/or chest wall radiation therapy (RT). Materials and Methods: Subjects were stage I/II female breast cancer patients 20 to 79 years of age, diagnosed between 1989 and 2005 and treated with adjuvant RT at our institution. Patients were included if they survived and did not have SM within 3 years of diagnosis. Standardized incidence ratios (SIR) with 95% confidence intervals (CI) were calculated to compare SM incidence to cancer incidence in the general sex- and age-matched populations. Secondary malignancy risks in patients treated with local RT (LRT) to the breast/chest wall were compared to those in patients treated with locoregional RT (LRRT) to the breast/chest wall and regional nodes, using multivariate regression analysis (MVA) to account for covariates. Results: The cohort included 12,836 patients with a median follow-up of 8.4 years. LRRT was used in 18% of patients. The SIR comparing patients treated with LRT to the general population was 1.29 (CI: 1.21-1.38). No statistically significant increased incidence of in-field malignancies (SIR, 1.04; CI: 0.87-1.23) and lung cancers (SIR, 1.06; CI: 0.88-1.26) was detected. The SIR comparing patients treated with LRRT to the general population was 1.39 (CI: 1.17-1.64). No statistically significant increased incidence of in-field malignancies (SIR, 1.26; CI: 0.77-1.94) and lung cancers (SIR, 1.27; CI: 0.76-1.98) was detected. On MVA comparing LRRT to LRT, the adjusted hazard ratio was 1.20 for in-field malignancies (CI: 0.68-2.16) and 1.26 for lung cancer (CI: 0.67-2.36). The excess attributable risk (EAR) to regional RT was 3.1 per 10,000 person years (CI: −8.7 to 9.9). 
Conclusions: No statistically significant increased risk of second malignancy was detected after LRRT relative to that for LRT. The upper limit of the EAR was approximately 1% at 10 years.
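The SIRs above are observed-to-expected case ratios with Poisson confidence intervals. A sketch using Byar's approximation for the CI (counts hypothetical; the authors' exact CI method may differ):

```python
import math

def sir_byar(observed, expected, z=1.96):
    """Standardized incidence ratio O/E with Byar's approximate 95% CI.

    observed : observed case count in the cohort
    expected : expected count from sex- and age-matched population rates
    """
    sir = observed / expected
    lower = (observed / expected) * (
        1 - 1 / (9 * observed) - z / (3 * math.sqrt(observed))
    ) ** 3
    op1 = observed + 1
    upper = (op1 / expected) * (
        1 - 1 / (9 * op1) + z / (3 * math.sqrt(op1))
    ) ** 3
    return sir, lower, upper

# Hypothetical example: 100 second malignancies observed vs 90 expected
sir, lo, hi = sir_byar(100, 90.0)
```

An SIR whose CI excludes 1.0 (e.g. the 1.29, CI 1.21-1.38 reported for LRT patients vs the general population) indicates a statistically significant excess incidence.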
The Phylogenetic Signature Underlying ATP Synthase c-Ring Compliance
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pandini, Alessandro; Kleinjung, Jens; Taylor, Willie R.; Junge, Wolfgang; Khan, Shahid
2015-09-01
The proton-driven ATP synthase (FOF1) comprises two rotary, stepping motors (FO and F1) coupled by an elastic power transmission. The elastic compliance resides in the rotor module that includes the membrane-embedded FO c-ring. Proton transport by FO is firmly coupled to the rotation of the c-ring relative to other FO subunits (ab2). It drives ATP synthesis. We used a computational method to investigate the contribution of the c-ring to the total elastic compliance. We performed principal component analysis of conformational ensembles built using distance constraints from the bovine mitochondrial c-ring x-ray structure. Angular rotary twist, the dominant ring motion, was estimated to show that the c-ring accounted in part for the measured compliance. Ring rotation was entrained to rotation of the external helix within each hairpin-shaped c-subunit in the ring. Ensembles of monomers and dimers extracted from complete c-rings showed that the coupling between collective ring and individual subunit motions was independent of the size of the c-ring, which varies between organisms. Molecular determinants were identified by covariance analysis of residue coevolution and structural-alphabet-based local dynamics correlations. The residue coevolution gave a readout of subunit architecture. The dynamic couplings revealed that the hinge for both ring and subunit helix rotations was constructed from the proton-binding site and the adjacent glycine motif (IB-GGGG) in the midmembrane plane. IB-GGGG motifs were linked by long-range couplings across the ring, while intrasubunit couplings connected the motif to the conserved cytoplasmic loop and adjacent segments. The correlation with principal collective motions shows that the couplings underlie both ring rotary and bending motions. Noncontact couplings between IB-GGGG motifs matched the coevolution signal as well as contact couplings.
The residue coevolution reflects the physiological importance of the dynamics that may link proton transfer to ring compliance.
Assessment of model estimates of land-atmosphere CO2 exchange across Northern Eurasia
Rawlins, M. A.; McGuire, A. D.; Kimball, J. S.; Dass, P.; Lawrence, D.; Burke, E.; Chen, X.; Delire, C.; Koven, C.; MacDougall, A.; et al
2015-07-28
A warming climate is altering land-atmosphere exchanges of carbon, with a potential for increased vegetation productivity as well as the mobilization of permafrost soil carbon stores. Here we investigate land-atmosphere carbon dioxide (CO2) cycling through analysis of net ecosystem productivity (NEP) and its component fluxes of gross primary productivity (GPP) and ecosystem respiration (ER) and soil carbon residence time, simulated by a set of land surface models (LSMs) over a region spanning the drainage basin of Northern Eurasia. The retrospective simulations cover the period 1960–2009 at 0.5° resolution, which is a scale common among many global carbon and climate model simulations. Model performance benchmarks were drawn from comparisons against both observed CO2 fluxes derived from site-based eddy covariance measurements as well as regional-scale GPP estimates based on satellite remote-sensing data. The site-based comparisons depict a tendency for overestimates in GPP and ER for several of the models, particularly at the two sites to the south. For several models the spatial pattern in GPP explains less than half the variance in the MODIS MOD17 GPP product. Across the models NEP increases by as little as 0.01 to as much as 0.79 g C m⁻² yr⁻², equivalent to 3 to 340 % of the respective model means, over the analysis period. For the multimodel average the increase is 135 % of the mean from the first to last 10 years of record (1960–1969 vs. 2000–2009), with a weakening CO2 sink over the latter decades. Vegetation net primary productivity increased by 8 to 30 % from the first to last 10 years, contributing to soil carbon storage gains. The range in regional mean NEP among the group is twice the multimodel mean, indicative of the uncertainty in CO2 sink strength. The models simulate that inputs to the soil carbon pool exceeded losses, resulting in a net soil carbon gain amid a decrease in residence time.
Our analysis points to improvements in model elements controlling vegetation productivity and soil respiration as being needed for reducing uncertainty in land-atmosphere CO2 exchange. These advances will require collection of new field data on vegetation and soil dynamics, the development of benchmarking data sets from measurements and remote-sensing observations, and investments in future model development and intercomparison studies.
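Trends reported as a percentage of the model mean (e.g., NEP increases equivalent to 3 to 340 % of the respective model means) can be computed from an ordinary least-squares slope over the analysis period. A minimal sketch with hypothetical values:

```python
def ols_slope(years, values):
    """Ordinary least-squares trend (units of values per year)."""
    n = len(years)
    my = sum(years) / n
    mv = sum(values) / n
    num = sum((y - my) * (v - mv) for y, v in zip(years, values))
    den = sum((y - my) ** 2 for y in years)
    return num / den

def trend_percent_of_mean(years, values):
    """Total change over the period (slope * span) as % of the mean value."""
    slope = ols_slope(years, values)
    mean = sum(values) / len(values)
    return 100.0 * slope * (years[-1] - years[0]) / mean

# Hypothetical 4-year NEP series (g C m^-2 yr^-1)
pct = trend_percent_of_mean([0, 1, 2, 3], [10.0, 12.0, 14.0, 16.0])
```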
Near-Surface CO2 Monitoring And Analysis To Detect Hidden Geothermal Systems
Lewicki, Jennifer L.; Oldenburg, Curtis M.
2005-01-19
"Hidden" geothermal systems are systems devoid of obvious surface hydrothermal manifestations. Emissions of moderate-to-low solubility gases may be one of the primary near-surface signals from these systems. We investigate the potential for CO2 detection and monitoring below and above ground in the near-surface environment as an approach to exploration targeting hidden geothermal systems. We focus on CO2 because it is the dominant noncondensible gas species in most geothermal systems and has moderate solubility in water. We carried out numerical simulations of a CO2 migration scenario to calculate the magnitude of expected fluxes and concentrations. Our results show that CO2 concentrations can reach high levels in the shallow subsurface even for relatively low geothermal source CO2 fluxes. However, once CO2 seeps out of the ground into the atmospheric surface layer, winds are effective at dispersing CO2 seepage. In natural ecological systems in the absence of geothermal gas emissions, near-surface CO2 fluxes and concentrations are predominantly controlled by CO2 uptake by photosynthesis, production by root respiration, microbial decomposition of soil/subsoil organic matter, groundwater degassing, and exchange with the atmosphere. Available technologies for monitoring CO2 in the near-surface environment include the infrared gas analyzer, the accumulation chamber method, the eddy covariance method, hyperspectral imaging, and light detection and ranging. To meet the challenge of detecting potentially small-magnitude geothermal CO2 emissions within the natural background variability of CO2, we propose an approach that integrates available detection and monitoring techniques with statistical analysis and modeling strategies. The proposed monitoring plan initially focuses on rapid, economical, reliable measurements of CO2 subsurface concentrations and surface fluxes and statistical analysis of the collected data.
Based on this analysis, areas with a high probability of containing geothermal CO2 anomalies can be further sampled and analyzed using more expensive chemical and isotopic methods. Integrated analysis of all measurements will determine definitively if CO2 derived from a deep geothermal source is present, and if so, the spatial extent of the anomaly. The suitability of further geophysical measurements, installation of deep wells, and geochemical analyses of deep fluids can then be determined based on the results of the near-surface CO2 monitoring program.
Regional Ecosystem-Atmosphere CO2 Exchange Via Atmospheric Budgets
Davis, K.J.; Richardson, S.J.; Miles, N.L.
2007-03-07
Inversions of atmospheric CO2 mixing ratio measurements to determine CO2 sources and sinks are typically limited to coarse spatial and temporal resolution. This limits our ability to evaluate efforts to upscale chamber- and stand-level CO2 flux measurements to regional scales, where coherent climate and ecosystem mechanisms govern the carbon cycle. As a step towards the goal of implementing atmospheric budget or inversion methodology on a regional scale, a network of five relatively inexpensive CO2 mixing ratio measurement systems was deployed on towers in northern Wisconsin. Four systems were distributed on a circle of roughly 150-km radius, surrounding one centrally located system at the WLEF tower near Park Falls, WI. All measurements were taken at a height of 76 m AGL. The systems used single-cell infrared CO2 analyzers (Licor, model LI-820) rather than the significantly more costly two-cell models, and were calibrated every two hours using four samples known to within 0.2 ppm CO2. Tests prior to deployment in which the systems sampled the same air indicate the precision of the systems to be better than 0.3 ppm and the accuracy, based on the difference between the daily mean of one system and a co-located NOAA-ESRL system, is consistently better than 0.3 ppm. We demonstrate the utility of the network in two ways. First, we interpret regional CO2 differences using a Lagrangian parcel approach. The difference in the CO2 mixing ratios across the network is at least 2–3 ppm, which is large compared to the accuracy and precision of the systems. Fluxes estimated assuming Lagrangian parcel transport are of the same sign and magnitude as eddy-covariance flux measurements at the centrally-located WLEF tower. These results indicate that the network will be useful in a full inversion model. Second, we present a case study involving a frontal passage through the region.
The progression of a front across the network is evident; changes as large as four ppm in one minute are captured. Influence functions, derived using a Lagrangian Particle Dispersion model driven by the CSU Regional Atmospheric Modeling System and nudged to NCEP reanalysis meteorological fields, are used to determine source regions for the towers. The influence functions are combined with satellite vegetation observations to interpret the observed trends in CO2 concentration. Full inversions will combine these elements in a more formal analytic framework.
Riley, W. J.; Biraud, S.C.; Torn, M.S.; Fischer, M.L.; Billesbach, D.P.; Berry, J.A.
2009-08-15
Characterizing net ecosystem exchanges (NEE) of CO{sub 2} and sensible and latent heat fluxes in heterogeneous landscapes is difficult, yet critical given expected changes in climate and land use. We report here a measurement and modeling study designed to improve our understanding of surface to atmosphere gas exchanges under very heterogeneous land cover in the mostly agricultural U.S. Southern Great Plains (SGP). We combined three years of site-level, eddy covariance measurements in several of the dominant land cover types with regional-scale climate data from the distributed Mesonet stations and Next Generation Weather Radar precipitation measurements to calibrate a land surface model of trace gas and energy exchanges (isotope-enabled land surface model (ISOLSM)). Yearly variations in vegetation cover distributions were estimated from Moderate Resolution Imaging Spectroradiometer normalized difference vegetation index and compared to regional and subregional vegetation cover type estimates from the U.S. Department of Agriculture census. We first applied ISOLSM at a 250 m spatial scale to account for vegetation cover type and leaf area variations that occur on hundred meter scales. Because of computational constraints, we developed a subsampling scheme within 10 km 'macrocells' to perform these high-resolution simulations. We estimate that the Atmospheric Radiation Measurement Climate Research Facility SGP region net CO{sub 2} exchange with the local atmosphere was -240, -340, and -270 gC m{sup -2} yr{sup -1} (positive toward the atmosphere) in 2003, 2004, and 2005, respectively, with large seasonal variations. We also performed simulations using two scaling approaches at resolutions of 10, 30, 60, and 90 km. The scaling approach applied in current land surface models led to regional NEE biases of up to 50 and 20% in weekly and annual estimates, respectively. 
An important factor in causing these biases was the complex leaf area index (LAI) distribution within cover types. Biases in predicted weekly average regional latent heat fluxes were smaller than for NEE, but larger than for either ecosystem respiration or assimilation alone. However, spatial and diurnal variations of hundreds of W m{sup -2} in latent heat fluxes were common. We conclude that, in this heterogeneous system, characterizing vegetation cover type and LAI at the scale of spatial variation are necessary for accurate estimates of bottom-up, regional NEE and surface energy fluxes.
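The scaling bias described above arises because fluxes respond nonlinearly to leaf area index, so a flux computed from a coarse-scale mean LAI differs from the mean of the fine-scale fluxes (Jensen's inequality). A toy illustration with a hypothetical saturating GPP-LAI response (not ISOLSM itself):

```python
import math

def gpp_of_lai(lai, gpp_max=10.0, k=0.5):
    """Hypothetical Beer's-law-style saturating response of GPP to LAI:
    GPP = GPP_max * (1 - exp(-k * LAI))."""
    return gpp_max * (1.0 - math.exp(-k * lai))

def aggregation_bias(lai_values):
    """Compare the coarse-scale estimate f(mean(LAI)) with the
    true area-average flux mean(f(LAI)) over heterogeneous pixels."""
    mean_lai = sum(lai_values) / len(lai_values)
    coarse = gpp_of_lai(mean_lai)
    fine = sum(gpp_of_lai(x) for x in lai_values) / len(lai_values)
    return coarse, fine

# Two sub-pixels with very different LAI (sparse grass vs dense crop)
coarse, fine = aggregation_bias([0.5, 6.0])
```

Because the response is concave, the coarse estimate exceeds the true average, which is the direction of bias one gets by running a land surface model on aggregated vegetation properties.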
Xie, Xianjun; Wang, Yanxin; Ellis, Andre; Liu, Chongxuan; Duan, Mengyu; Li, Junxia
2014-11-01
Arsenic (As)-contaminated aquifer sediments from Datong basin, China have been analyzed to infer the provenance and depositional environment related to As distribution in the aquifer sediments. The As content in the sediments ranged from 2.45 to 27.38 mg/kg with an average value of 9.54 mg/kg, which is comparable to the average value in modern unconsolidated sediments. However, minor variation in As concentration with depth has been observed in the core. There was a significant correlation between As and both Fe and Al, which was attributed to the adsorption or co-precipitation of As onto/with Fe oxides/hydroxides and/or Fe-coated clay minerals. Post-Archean Australian Shale (PAAS)-normalized REE patterns of sediment samples along the borehole were constant, and the sediments had a notably restricted range of La{sub N}/Yb{sub N} ratios from 0.7 to 1.0. These results suggested that the provenance of the Datong basin remained similar throughout the whole depositional period. The analysis of major geochemical compositions confirmed that all core sediments were from the same sedimentary source and experienced significant sedimentary recycling. The co-variation of As, V/Al, Ni/Al and chemical index of alteration (CIA) values in the sediments along the borehole suggested that As distribution in the sediments was primarily controlled by weathering processes. The calculated CIA values of the sediments along the borehole indicate that relatively strong chemical weathering occurred during the deposition of sediments at depths of ~35 to 88 m, corresponding to the depth at which high As groundwater was observed at the site. Strong chemical weathering favored the deposition of Fe-bearing minerals including poorly crystalline and crystalline Fe oxide mineral phases and concomitant co-precipitation of As with these minerals in the sediments.
Subsequent reductive dissolution of As-bearing poorly crystalline and crystalline Fe oxides would result in the enrichment of As in groundwater. In general, chemical weathering during the deposition of the sediments governed the co-accumulation of Fe oxides and As in the aquifer sediments, and the subsequent reductive dissolution of Fe oxides/hydroxides is the mechanism of As enrichment in the groundwater of the Datong basin.
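The chemical index of alteration (CIA) used above is a standard molar-ratio calculation from major-oxide weight percents. A sketch (example composition hypothetical; CaO* is the silicate-bound CaO and is assumed here to be already corrected for carbonate and apatite):

```python
# Molecular weights (g/mol) of the oxides entering the CIA
MW = {"Al2O3": 101.96, "CaO": 56.08, "Na2O": 61.98, "K2O": 94.20}

def cia(wt_percent):
    """Chemical Index of Alteration (Nesbitt & Young):
    CIA = 100 * Al2O3 / (Al2O3 + CaO* + Na2O + K2O) in molar proportions.
    Higher values indicate more intense chemical weathering."""
    mol = {oxide: wt_percent[oxide] / MW[oxide] for oxide in MW}
    return 100.0 * mol["Al2O3"] / (
        mol["Al2O3"] + mol["CaO"] + mol["Na2O"] + mol["K2O"]
    )

# Hypothetical unweathered upper-crust-like composition (wt %)
fresh = cia({"Al2O3": 15.4, "CaO": 3.6, "Na2O": 3.3, "K2O": 2.8})
```

Fresh feldspar-bearing rock sits near CIA ≈ 50, while complete loss of Ca, Na and K drives the index toward 100.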
Exposure to mercury among Spanish preschool children: Trend from birth to age four
Llop, Sabrina; Murcia, Mario; Aguinagalde, Xabier; Vioque, Jesus; Rebagliato, Marisa; Iñiguez, Carmen; Lopez-Espinosa, Maria-Jose; Amurrio, Ascensión; María Navarrete-Muñoz, Eva; and others
2014-07-15
The purpose of this study is to describe the total hair mercury concentrations and their determinants in preschool Spanish children, as well as to explore the trend in mercury exposure from birth to the age of four. This evolution has been scarcely studied in other birth cohort studies. The study population was 580 four-year-old children participating in the INMA (i.e. Childhood and Environment) birth cohort study in Valencia (2008–2009). Total mercury concentration at age four was measured in hair samples by atomic absorption spectrometry. Fish consumption and other covariates were obtained by questionnaire. Multivariate linear regression models were conducted in order to explore the association between mercury exposure and fish consumption, socio-demographic characteristics and prenatal exposure to mercury. The geometric mean was 1.10 µg/g (95% CI: 1.02, 1.19). Nineteen percent of children had mercury concentrations above the equivalent to the Provisional Tolerable Weekly Intake proposed by WHO. Mercury concentration was associated with increasing maternal age, fish consumption and cord blood mercury levels, as well as decreasing parity. Children whose mothers worked had higher mercury levels than those with nonworking mothers. Swordfish, lean fish and canned fish were the fish categories most associated with hair mercury concentrations. We observed a decreasing trend in mercury concentrations between birth and age four. In conclusion, the children participating in this study had high hair mercury concentrations compared with children from other European countries in reported studies, and similar to those from countries with high fish consumption. The INMA study design allows the evaluation of the exposure to mercury longitudinally and enables this information to be used for biomonitoring purposes and dietary recommendations. - Highlights: • The geometric mean of hair Hg concentrations was 1.10 µg/g. • 19% of children had Hg concentrations above the RfD proposed by the WHO.
• Hair Hg concentrations in children increased as a function of total fish intake. • Swordfish, lean fish and canned fish were the most related to Hg concentrations. • There was a decrease in Hg concentrations from birth to age four.
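The summary statistic reported above (geometric mean with a 95% CI) is computed on log-transformed concentrations, which is standard for right-skewed biomarker data. A minimal sketch:

```python
import math

def geometric_mean_ci(values, z=1.96):
    """Geometric mean and approximate 95% CI via the log transform.

    The CI is exp(mean(log x) +/- z * SE(log x)), a normal approximation
    on the log scale; values must be strictly positive.
    """
    logs = [math.log(v) for v in values]
    n = len(logs)
    m = sum(logs) / n
    var = sum((x - m) ** 2 for x in logs) / (n - 1)
    se = math.sqrt(var / n)
    return math.exp(m), math.exp(m - z * se), math.exp(m + z * se)

# Hypothetical hair Hg concentrations (ug/g)
gm, lo, hi = geometric_mean_ci([0.4, 0.9, 1.1, 1.5, 2.8])
```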
Vuichard, N.; Papale, D.
2015-07-13
In this study, exchanges of carbon, water and energy between the land surface and the atmosphere are monitored by the eddy covariance technique at the ecosystem level. Currently, the FLUXNET database contains more than 500 registered sites, and up to 250 of them share data (free fair-use data set). Many modelling groups use the FLUXNET data set for evaluating ecosystem models' performance, but this requires uninterrupted time series for the meteorological variables used as input. Because original in situ data often contain gaps, from very short (few hours) up to relatively long (some months) ones, we develop a new and robust method for filling the gaps in meteorological data measured at site level. Our approach has the benefit of making use of continuous data available globally (ERA-Interim) and a high temporal resolution spanning from 1989 to today. These data are, however, not measured at site level, and for this reason a method to downscale and correct the ERA-Interim data is needed. We apply this method to the level 4 data (L4) from the La Thuile collection, freely available after registration under a fair-use policy. The performance of the developed method varies across sites and is also a function of the meteorological variable. On average over all sites, applying the bias correction method to the ERA-Interim data reduced the mismatch with the in situ data by 10 to 36 %, depending on the meteorological variable considered. In comparison to the internal variability of the in situ data, the root mean square error (RMSE) between the in situ data and the unbiased ERA-I (ERA-Interim) data remains relatively large (on average over all sites, from 27 to 76 % of the standard deviation of in situ data, depending on the meteorological variable considered). The performance of the method remains poor for the wind speed field, in particular regarding its capacity to conserve a standard deviation similar to the one measured at FLUXNET stations.
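A simple mean-and-variance bias correction of reanalysis values toward site measurements, in the spirit of (but much simpler than) the downscaling method described above. In practice the correction would be fitted on an overlap period and applied to gap periods; here it is fitted and evaluated on the same hypothetical series only to show the mechanics:

```python
def fit_bias_correction(site, reanalysis):
    """Fit a linear correction mapping reanalysis onto the site
    distribution (matching mean and standard deviation):
    corrected = a * x + b."""
    n = len(site)
    ms, mr = sum(site) / n, sum(reanalysis) / n
    ss = (sum((v - ms) ** 2 for v in site) / n) ** 0.5
    sr = (sum((v - mr) ** 2 for v in reanalysis) / n) ** 0.5
    a = ss / sr
    b = ms - a * mr
    return lambda x: a * x + b

def rmse(obs, pred):
    """Root mean square error between two equal-length series."""
    return (sum((o - p) ** 2 for o, p in zip(obs, pred)) / len(obs)) ** 0.5

# Hypothetical site temperatures vs a warm-biased, too-smooth reanalysis
site = [10.0, 12.0, 14.0, 16.0]
rean = [20.0, 21.0, 22.0, 23.0]
correct = fit_bias_correction(site, rean)
corrected = [correct(x) for x in rean]
```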
Law, Beverly E.; Thomas, Christoph K.
2011-09-20
This is the final technical report containing a summary of all findings with regard to the following objectives of the project: (1) To quantify and understand the effects of wildfire on carbon storage and the exchanges of energy, CO2, and water vapor in a chronosequence of ponderosa pine (disturbance gradient); (2) To investigate the effects of seasonal and interannual variation in climate on carbon storage and the exchanges of energy, CO2, and water vapor in mature conifer forests in two climate zones: mesic 40-yr old Douglas-fir and semi-arid 60-yr old ponderosa pine (climate gradient); (3) To reduce uncertainty in estimates of CO2 feedbacks to the atmosphere by providing an improved model formulation for existing biosphere-atmosphere models; and (4) To provide high quality data for AmeriFlux and the NACP on micrometeorology, meteorology, and biology of these systems. Objective (1): A study integrating satellite remote sensing, AmeriFlux data, and field surveys in a simulation modeling framework estimated that the pyrogenic carbon emissions, tree mortality, and net carbon exchange associated with four large wildfires that burned ~50,000 hectares in 2002-2003 were equivalent to 2.4% of Oregon statewide anthropogenic carbon emissions over the same two-year period. Most emissions were from the combustion of the forest floor and understory vegetation, and only about 1% of live tree mass was combusted on average. Objective (2): A study of multi-year flux records across a chronosequence of ponderosa pine forests yielded that the net carbon uptake is over three times greater at a mature pine forest compared with young pine. The larger leaf area and wetter and cooler soils of the mature forest mainly caused this effect. 
A study analyzing seven years of carbon and water dynamics showed that interannual and seasonal variability of net carbon exchange was primarily related to variability in growing season length, which was a linear function of plant-available soil moisture in spring and early summer. A multi-year drought (2001-2003) led to a significant reduction of net ecosystem exchange due to carry-over effects in soil moisture and carbohydrate reserves in plant tissue. In the same forest, the interannual variability in the rate at which carbon is lost from the soil and forest floor is considerable and related to the variability in tree growth as much as to the variability in soil climatic conditions. Objective (3): Flux data from the mature ponderosa pine site support a physical basis for filtering nighttime data with friction velocity above the canopy. An analysis of wind fields and heat transport in the subcanopy at the mesic 40-year old Douglas-fir site showed that the non-linear structure and behavior of spatial temperature gradients and the flow field require enhanced sensor networks to estimate advective fluxes in the subcanopy and to close the surface energy balance in forests. Reliable estimates for flux uncertainties are needed to improve model validation and data assimilation in process-based carbon models, inverse modeling studies and model-data synthesis, where the uncertainties may be as important as the fluxes themselves. An analysis of the time scale dependence of the random and flux sampling error showed that the additional flux obtained by increasing the perturbation timescale beyond about 10 minutes is dominated by random sampling error, and therefore little confidence can be placed in its value. Artificial correlation between gross ecosystem productivity (GEP) and ecosystem respiration (Re) is a consequence of flux partitioning of eddy covariance flux data when GEP is computed as the difference between NEE and computed daytime Re (e.g.
using nighttime Re extrapolated into daytime using soil or air temperatures). Tower data must be adequately spatially averaged before comparison to gridded model output, as the time variability of both is inherently different. The eddy-covariance data collected at the mature ponderosa pine site and the mesic Douglas-fir site were used to develop and evaluate a new method to extra
National Geo-Database for Biofuel Simulations and Regional Analysis
Izaurralde, Roberto C.; Zhang, Xuesong; Sahajpal, Ritvik; Manowitz, David H.
2012-04-01
The goal of this project undertaken by GLBRC (Great Lakes Bioenergy Research Center) Area 4 (Sustainability) modelers is to develop a national capability to model feedstock supply, ethanol production, and biogeochemical impacts of cellulosic biofuels. The results of this project contribute to sustainability goals of the GLBRC; i.e. to contribute to developing a sustainable bioenergy economy: one that is profitable to farmers and refiners, acceptable to society, and environmentally sound. A sustainable bioenergy economy will also contribute, in a fundamental way, to meeting national objectives on energy security and climate mitigation. The specific objectives of this study are to: (1) develop a spatially explicit national geodatabase for conducting biofuel simulation studies; (2) model biomass productivity and associated environmental impacts of annual cellulosic feedstocks; (3) simulate production of perennial biomass feedstocks grown on marginal lands; and (4) locate possible sites for the establishment of cellulosic ethanol biorefineries. To address the first objective, we developed SENGBEM (Spatially Explicit National Geodatabase for Biofuel and Environmental Modeling), a 60-m resolution geodatabase of the conterminous USA containing data on: (1) climate, (2) soils, (3) topography, (4) hydrography, (5) land cover/ land use (LCLU), and (6) ancillary data (e.g., road networks, federal and state lands, national and state parks, etc.). A unique feature of SENGBEM is its 2008-2010 crop rotation data, a crucially important component for simulating productivity and biogeochemical cycles as well as land-use changes associated with biofuel cropping. We used the EPIC (Environmental Policy Integrated Climate) model to simulate biomass productivity and environmental impacts of annual and perennial cellulosic feedstocks across much of the USA on both croplands and marginal lands. 
We used data from LTER and eddy-covariance experiments within the study region to test the performance of EPIC and, when necessary, improve its parameterization. We investigated three scenarios. In the first, we simulated a historical (current) baseline scenario composed mainly of corn-, soybean-, and wheat-based rotations as grown on existing croplands east of the Rocky Mountains in 30 states. In the second scenario, we simulated a modified baseline in which we harvested corn and wheat residues to supply feedstocks to potential cellulosic ethanol biorefineries distributed within the study area. In the third scenario, we simulated the productivity of perennial cropping systems such as switchgrass or perennial mixtures grown on either marginal or Conservation Reserve Program (CRP) lands. In all cases we evaluated the environmental impacts (e.g., soil carbon changes, soil erosion, nitrate leaching, etc.) associated with the practices. In summary, we have reported on the development of a spatially explicit national geodatabase for conducting biofuel simulation studies and provided initial simulation results on the potential of annual and perennial cropping systems to serve as feedstocks for the production of cellulosic ethanol. To accomplish this, we employed sophisticated spatial analysis methods in combination with the process-based biogeochemical model EPIC. This work provided the opportunity to test the hypothesis that marginal lands can serve as sources of cellulosic feedstocks and thus help avoid potential conflicts between bioenergy and food production systems. This work, we believe, opens the door for further analysis of the characteristics of cellulosic feedstocks as major contributors to the development of a sustainable bioenergy economy.
Ecological interactions between metals and microbes that impact bioremediation
Allan Konopka; Cindy Nakatsu
2004-03-17
Distinct microbial communities have been found in contaminated soils that varied in their concentrations of Pb, Cr, and aromatic compounds. It is difficult to distinguish among these effects because the contaminants co-occur and their concentrations are highly correlated. Microcosms were constructed in which either Pb{sup +2} or CrO{sub 4}{sup -2} was added at levels that produced modest or severe acute effects (50% or 90% reduction in activity). We previously reported on changes in microbial activity and broad patterns of bacterial community composition. Those results showed that addition of an organic energy source selected for a relatively small number of phylotypes, and that the addition of Pb or Cr(VI) modulated the community response. We sequenced dominant phylotypes from microcosms amended with xylene and Cr(VI) and from those amended with glucose only. In both cases, the dominant selected phylotypes were diverse. We found a number of distinct Arthrobacter strains, as well as several Pseudomonas spp. In addition, the high-GC-content bands belonged to members of the genera Nocardioides and Rhodococcus. The focus of the amended-microcosm work has now shifted to anaerobic processes. The reduction of Cr(VI) to Cr(III) as a detoxification mechanism is of particular interest, as is the specific role of particular physiological groups of anaerobes in mediating Cr(VI) detoxification. The correlation between microbial activity, community structure, and metal level has been analyzed on 150-mg soil samples collected at spatial scales of <1, 5, 15, and 50 cm. There was no correlation between metal content and activity level: soils <1 cm apart could differ 10-fold in activity and 7-fold in extractable Pb and Cr. Therefore, we turned to geostatistical analysis. There was spatial periodicity, which likely reflects the heterogeneous distribution of active microbes and metal contaminants. Variograms indicated that the range of spatial dependence was up to 20 cm.
To visualize the spatial relationships between the primary variate (activity) and its covariates (lead and chromium content), block kriging was used. The kriging maps suggest that zones of increased metal concentration coincide with zones of decreased microbial metabolic activity. Cr(VI)-resistant bacteria have been isolated from two contaminated sites. Most isolates are Arthrobacter, Rhodococcus, or Pseudomonas spp. A chrA gene has been cloned from Arthrobacter strain CR15, isolated from Cannelton, MI. PCR primers have been designed against conserved motifs identified from 8 chrA sequences. Of the 96 Cr-resistant isolates from Cannelton, 85% gave a positive reaction with these primers. In contrast, none of the 38 isolates from Seymour, IN was positive. Therefore, at least for the culturable community, a particular resistance determinant appears to be widespread at one geographical site but rare or absent at another. The phylogenetic relatedness of the Arthrobacter strains is being evaluated via the distribution of repetitive elements as well as genome-wide restriction fragment analysis. Work to date on the latter suggests that these Arthrobacter genomes are small (<2.5 Mbp). Gene-capture experiments demonstrated that chromate-sensitive Gram-negative bacterial strains could acquire resistance from Cr-contaminated soil; however, the frequency of transfer is low (10{sup -6}-10{sup -8}). The genetic diversity of the acquired chromate resistance mechanism is being assessed.
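The geostatistical workflow described above starts from an empirical variogram before any kriging is done. The sketch below is a generic empirical semivariogram estimator, not the authors' code; the toy coordinates, values, and lag bins are hypothetical.

```python
import numpy as np

def empirical_semivariogram(coords, values, lag_edges):
    """Empirical semivariogram: gamma(h) = half the mean squared difference
    between sample pairs whose separation distance falls in each lag bin."""
    coords = np.asarray(coords, dtype=float)
    values = np.asarray(values, dtype=float)
    n = len(values)
    # pairwise separation distances and squared value differences
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    sq = (values[:, None] - values[None, :]) ** 2
    iu = np.triu_indices(n, k=1)          # count each pair once
    d, sq = d[iu], sq[iu]
    gammas = []
    for lo, hi in zip(lag_edges[:-1], lag_edges[1:]):
        mask = (d >= lo) & (d < hi)
        gammas.append(0.5 * sq[mask].mean() if mask.any() else np.nan)
    return np.array(gammas)
```

The lag at which the curve levels off (the range) corresponds to the "range of spatial dependence" the abstract reports as up to 20 cm.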
Liu, M. L.; Rajagopalan, K.; Chung, S. H.; Jiang, X.; Harrison, J. H.; Nergui, T.; Guenther, Alex B.; Miller, C.; Reyes, J.; Tague, C. L.; Choate, J. S.; Salathe, E.; Stockle, Claudio O.; Adam, J. C.
2014-05-16
Regional climate change impact (CCI) studies have widely involved downscaling and bias-correcting (BC) Global Climate Model (GCM)-projected climate for driving land surface models. However, BC may cause uncertainties in projecting hydrologic and biogeochemical responses to future climate due to the impaired spatiotemporal covariance of climate variables and a breakdown of physical conservation principles. Here we quantify the impact of BC on simulated climate-driven changes in water variables (evapotranspiration, ET; runoff; snow water equivalent, SWE; and water demand for irrigation), crop yield, biogenic volatile organic compound (BVOC) and nitric oxide (NO) emissions, and dissolved inorganic nitrogen (DIN) export over the Pacific Northwest (PNW) region. We also quantify the impacts on net primary production (NPP) over a small watershed in the region (HJ Andrews). Simulation results from the coupled ECHAM5/MPI-OM model with the A1B emission scenario were first dynamically downscaled to 12-km resolution with the WRF model. A quantile-mapping-based statistical downscaling model was then used to downscale them to 1/16th-degree resolution daily climate data over historical and future periods. Two climate data series were generated, one with bias correction (BC) and one without (NBC). Impact models were then applied to estimate hydrologic and biogeochemical responses to both the BC and NBC meteorological datasets. These impact models include a macro-scale hydrologic model (VIC), a coupled cropping system model (VIC-CropSyst), an ecohydrologic model (RHESSys), a biogenic emissions model (MEGAN), and a nutrient export model (Global-NEWS). Results demonstrate that the BC and NBC climate data provide consistent estimates of the climate-driven changes in water fluxes (ET, runoff, and water demand), VOC (isoprene and monoterpene) and NO emissions, mean crop yield, and river DIN export over the PNW domain.
However, significant differences arise in projected SWE, dryland crop yield, and ET at HJ Andrews between the BC and NBC data. Even though BC post-processing has no significant impact on most of the studied variables when the PNW is taken as a whole, its effects have large spatial variation and some local areas are substantially influenced. In addition, there are months during which BC and NBC post-processing produce significant differences in projected changes, such as summer runoff. Factor-controlled simulations indicate that BC post-processing of precipitation and of temperature both contribute substantially to these differences at regional scales. We conclude that there are trade-offs between using BC climate data for offline CCI studies and using modeled climate data directly. These trade-offs should be considered when designing integrated modeling frameworks for specific applications; e.g., BC may be more important when considering impacts on reservoir operations in mountainous watersheds than when investigating impacts on biogenic emissions and air quality (where VOCs are a primary indicator).
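The quantile-mapping step underlying the BC data series can be sketched with an empirical transfer function; this is a minimal generic illustration, not the specific statistical downscaling model used in the study.

```python
import numpy as np

def quantile_map(model_hist, obs_hist, model_fut):
    """Empirical quantile mapping: locate each future model value's quantile
    in the historical model distribution, then replace it with the observed
    value at that same quantile."""
    model_hist = np.sort(np.asarray(model_hist, dtype=float))
    obs_hist = np.sort(np.asarray(obs_hist, dtype=float))
    # quantile of each future value within the historical model CDF
    fut_q = np.interp(model_fut, model_hist, np.linspace(0, 1, len(model_hist)))
    # map those quantiles onto the observed distribution
    return np.interp(fut_q, np.linspace(0, 1, len(obs_hist)), obs_hist)
```

Because each variable is corrected independently against its own observed distribution, the cross-variable covariance the abstract warns about (e.g., between precipitation and temperature) is not preserved by this transformation.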
Criticality Safety Validation of Scale 6.1
Marshall, William B. J.; Rearden, Bradley T.
2011-11-01
The computational bias of criticality safety computer codes must be established through validation of the codes against critical experiments. A large collection of suitable experiments has been vetted by the International Criticality Safety Benchmark Experiment Program (ICSBEP) and made available in the International Handbook of Evaluated Criticality Safety Benchmark Experiments (IHECSBE). More than 350 cases from this reference have been prepared and reviewed within the Verified, Archived Library of Inputs and Data (VALID) maintained by the Reactor and Nuclear Systems Division at Oak Ridge National Laboratory. The performance of the KENO V.a and KENO-VI Monte Carlo codes within the Scale 6.1 code system, with ENDF/B-VII.0 cross-section data in 238-group and continuous-energy form, is assessed using the VALID models of benchmark experiments. The TSUNAMI tools for sensitivity and uncertainty analysis are used to examine some systems further in an attempt to identify potential causes of unexpected results. The critical experiments available for validation of the KENO V.a code cover eight broad categories of systems. These systems use a range of fissile materials, including a range of uranium enrichments, various plutonium isotopic vectors, and mixed uranium-plutonium oxides. The physical form of the fissile material also varies and is represented as metal, solutions, or arrays of rods or plates in a water moderator. The neutron energy spectra of the systems also vary and cover both fast and thermal spectra. Over 300 of the cases utilize the KENO V.a code. The critical experiments available for the validation of the KENO-VI code cover three broad categories of systems. The fissile materials in these systems include high- and intermediate-enrichment uranium and mixed uranium/plutonium oxides. The physical form of the fissile material is either metal or rod arrays in water.
As with KENO V.a, both fast and thermal neutron energy spectra are represented in the systems considered. The results indicate generally good performance of both the KENO V.a and KENO-VI codes across the range of systems analyzed. The bias of calculated k{sub eff} from expected values is less than 0.9% {Delta}k in all cases. All eight categories of experiments show biases of less than 0.5% {Delta}k in KENO V.a with the exception of intermediate enrichment metal systems using the 238-group library. The continuous energy library generally manifests lower biases than the multi-group data. The KENO-VI results show slightly larger biases, though this may primarily be the result of modeling systems with more geometric complexity, which are more difficult to describe accurately, even with a generalized geometry code like KENO-VI. Several additional conclusions can be drawn from the results of this validation effort. These conclusions include that the TSUNAMI tools can be used successfully to explain the cause of aberrant results, that some evaluations in the IHECSBE should be updated to provide more rigorous expected k{sub eff} values and uncertainties, and that potential cross-section errors can be identified by detailed review of the results of this validation. It also appears that the overall cross-section uncertainty as quantified through the Scale covariance library is overestimated. Overall, the KENO V.a and KENO-VI codes are shown to provide consistent, low bias results for a wide range of physical systems of potential interest in criticality safety applications.
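A validation bias like the {Delta}k figures quoted above is, in essence, an average deviation of calculated k{sub eff} from benchmark values. The sketch below assumes a simple inverse-variance weighting, which is one common convention in criticality safety validation, not necessarily the procedure used in this report.

```python
import numpy as np

def keff_bias(k_calc, k_expected, sigma):
    """Inverse-variance-weighted validation bias: weighted mean of
    (k_calc - k_expected) over a set of benchmark cases, with weights
    1/sigma**2 from the combined benchmark/calculation uncertainties."""
    k_calc = np.asarray(k_calc, dtype=float)
    k_expected = np.asarray(k_expected, dtype=float)
    sigma = np.asarray(sigma, dtype=float)
    w = 1.0 / sigma**2
    return np.sum(w * (k_calc - k_expected)) / np.sum(w)
```

A negative bias means the code underpredicts k{sub eff} on average for that category of experiments, which is why biases are reported per system category rather than pooled across dissimilar systems.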
Aerosol remote sensing in polar regions
Tomasi, Claudio; Kokhanovsky, Alexander A.; Lupi, Angelo; Ritter, Christoph; Smirnov, Alexander; O'Neill, Norman T.; Stone, Robert S.; Holben, Brent N.; Nyeki, Stephan; Mazzola, Mauro; Lanconelli, Christian; Vitale, Vito; Stebel, Kerstin; Aaltonen, Veijo; de Leeuw, Gerrit; Rodriguez, Edith; Herber, Andreas B.; Radionov, Vladimir F.; Zielinski, Tymon; Petelski, Tomasz; Sakerin, Sergey M.; Kabanov, Dmitry M.; Xue, Yong; Mei, Linlu; Istomina, Larysa; Wagener, Richard; McArthur, Bruce; Sobolewski, Piotr S.; Kivi, Rigel; Courcoux, Yann; Larouche, Pierre; Broccardo, Stephen; Piketh, Stuart J.
2015-01-01
Multi-year sets of ground-based sun-photometer measurements conducted at 12 Arctic sites and 9 Antarctic sites were examined to determine daily mean values of aerosol optical thickness τ(λ) at visible and near-infrared wavelengths, from which best-fit values of Ångström's exponent α were calculated. Analysing these data, the monthly mean values of τ(0.50 μm) and α and the relative frequency histograms of the daily mean values of both parameters were determined for winter–spring and summer–autumn in the Arctic and for austral summer in Antarctica. The Arctic and Antarctic covariance plots of the seasonal median values of α versus τ(0.50 μm) showed: (i) a considerable increase in τ(0.50 μm) for the Arctic aerosol from summer to winter–spring, without marked changes in α; and (ii) a marked increase in τ(0.50 μm) passing from the Antarctic Plateau to coastal sites, whereas α decreased considerably due to the larger fraction of sea-salt aerosol. Good agreement was found when comparing ground-based sun-photometer measurements of τ(λ) and α at Arctic and Antarctic coastal sites with Microtops measurements conducted during numerous AERONET/MAN cruises from 2006 to 2013 in three Arctic Ocean sectors and in coastal and off-shore regions of the Southern Atlantic, Pacific, and Indian Oceans, and the Antarctic Peninsula. Lidar measurements were also examined to characterise vertical profiles of the aerosol backscattering coefficient measured throughout the year at Ny-Ålesund. Satellite-based MODIS, MISR, and AATSR retrievals of τ(λ) over large parts of the oceanic polar regions during spring and summer were in close agreement with ship-borne and coastal ground-based sun-photometer measurements. An overview of the chemical composition of mode particles is also presented, based on in-situ measurements at Arctic and Antarctic sites. 
Fourteen log-normal aerosol number size-distributions were defined to represent the average features of nuclei, accumulation and coarse mode particles for Arctic haze, summer background aerosol, Asian dust and boreal forest fire smoke, and for various background austral summer aerosol types at coastal and high-altitude Antarctic sites. The main columnar aerosol optical characteristics were determined for all 14 particle modes, based on in-situ measurements of the scattering and absorption coefficients. Diurnally averaged direct aerosol-induced radiative forcing and efficiency were calculated for a set of multimodal aerosol extinction models, using various Bidirectional Reflectance Distribution Function models over vegetation-covered, oceanic and snow-covered surfaces. These gave a reliable measure of the pronounced effects of aerosols on the radiation balance of the surface–atmosphere system over polar regions.
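The best-fit Ångström exponent α mentioned above follows from the power law τ(λ) = β·λ^(−α), fitted in log-log space across the measured wavelengths; a minimal sketch (not the authors' processing code):

```python
import numpy as np

def angstrom_exponent(wavelengths_um, tau):
    """Best-fit Angstrom exponent alpha from tau(lambda) = beta * lambda**(-alpha),
    via least squares on ln(tau) = ln(beta) - alpha * ln(lambda)."""
    x = np.log(np.asarray(wavelengths_um, dtype=float))
    y = np.log(np.asarray(tau, dtype=float))
    slope, _intercept = np.polyfit(x, y, 1)
    return -slope  # alpha is the negative of the log-log slope
```

Larger α indicates smaller particles, which is why the abstract reads the drop in α at Antarctic coastal sites as a signature of coarse sea-salt aerosol.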
Switchgrass Biofuel Research: Carbon Sequestration and Life Cycle Analysis; Final Report
Liska, Adam J; Suyker, Andrew E; Arkebauer, Timothy J; Pelton, Matthew; Fang, Xiao Xue
2013-12-20
Soil emissions have been inadequately characterized in life cycle assessment of biofuels (see section 3.2.3). This project measures the net differences in field-level greenhouse gas emissions (CO2, N2O, and CH4) due to corn residue removal for cellulosic ethanol production. Gas measurements are then incorporated into life cycle assessment of the final biofuel product to determine whether it is in compliance with federal greenhouse gas emissions standards for biofuels (Renewable Fuel Standard 2, RFS2). The field measurements have been conducted over three years on two quarter-section, production-scale, irrigated corn fields (both roughly 50 hectares; fields of this size are necessary for reproducible eddy covariance flux measurements of CO2, while chamber measurements are used to determine N2O and CH4 emissions). Due to a large hail storm in 2010, emissions from residue could not be separated from the total CO2 flux in 2011. This led us to develop soil organic carbon (SOC) modeling techniques to estimate changes in CO2 emissions from residue removal. Modeling has predicted emissions of CO2 from oxidation of SOC that are consistent (<12%) with 9 years of CO2 flux measurements at the two production field sites, and the modeling is also consistent with other field measurements (Liska et al., submitted). The model was then used to estimate the average change in SOC and CO2 emissions from nine years of simulated residue removal (6 Mg biomass per hectare per year) at the sites; a loss of 0.43 Mg C ha{sup -1} yr{sup -1} resulted. The model was then used to estimate SOC changes over 10 years across Nebraska using supercomputing, based on 61 million 30 x 30 m grid cells to account for regional variability in initial SOC, crop yield, and temperature; an average loss of 0.47 Mg C ha{sup -1} yr{sup -1} resulted.
When these CO2 emissions are included in simple life cycle assessment calculations, emissions from cellulosic ethanol made from crop residue exceed the level required to achieve the mandated 60% reduction relative to gasoline (Liska, in press). These approaches are both technically effective and economically feasible, and this work has been extensively peer reviewed. The key data provided for biofuel producers are the relative changes in SOC and CO2 emission rates, which can be incorporated into LCA models in units of g CO2-equivalent per megajoule of biofuel; this information can be used directly by the EPA in RFS2 standards. All ecosystem models are based on limited selected data. The model that we use is supported by the research at our field sites; these large production-scale field sites, covering nearly a square mile, have also resulted in over 50 research publications over the last 10 years. To extend the model across the larger region, direct field measurements of soil, crop yields (annually), and temperature (monthly averages) are used via geospatial databases. Such dedicated field sites are the basis of advanced scientific understanding of greenhouse gas fluxes in modern agriculture, which is why they have been extensively supported by the USDA, NASA, and many other government agencies (nearly $10 million in research support over the last 12 years for these field sites); we plan to further validate the SOC model with data from other regional field sites, contingent on funding. Continuation of this research would increase confidence in the understanding of residue removal and net CO2 emissions by quantifying these changes for building accurate models (more information can reduce the uncertainty in these processes); we believe these are unique experiments. This work quantifies primarily one factor in the life cycle (CO2 emissions from soil carbon), but the results in section 3 below address other factors.
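The conversion from a soil-carbon loss to a per-megajoule emission intensity can be sketched as follows. The 0.43 Mg C per hectare per year loss and the 6 Mg per hectare per year residue harvest come from the report; the ethanol yield and energy content below are illustrative round numbers, not values from the study.

```python
# Molar-mass ratio for converting carbon mass to CO2 mass
C_TO_CO2 = 44.0 / 12.0

def soc_emission_intensity(soc_loss_mg_c_ha_yr, residue_mg_ha_yr,
                           etoh_l_per_mg=330.0, mj_per_l=21.2):
    """g CO2 per MJ of ethanol attributable to soil-carbon oxidation.
    etoh_l_per_mg (L ethanol per Mg biomass) and mj_per_l (lower heating
    value of ethanol) are assumed illustrative values, not study data."""
    co2_g_per_ha_yr = soc_loss_mg_c_ha_yr * C_TO_CO2 * 1e6
    mj_per_ha_yr = residue_mg_ha_yr * etoh_l_per_mg * mj_per_l
    return co2_g_per_ha_yr / mj_per_ha_yr
```

Under these assumptions the soil-carbon term alone contributes tens of g CO2-equivalent per MJ, which is why its inclusion can push the fuel past the RFS2 60%-reduction threshold.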
Physics Division annual report 2004.
Glover, J.
2006-04-06
This report highlights the research performed in 2004 in the Physics Division of Argonne National Laboratory. The Division's programs include the operation of ATLAS as a national user facility, nuclear structure and reaction research, nuclear theory, medium-energy nuclear research, and accelerator research and development. The intellectual challenges of this research represent some of the most fundamental challenges in modern science, shaping our understanding of both tiny objects at the center of the atom and some of the largest structures in the universe. A great strength of these efforts is the critical interplay of theory and experiment. Notable results in research at ATLAS include a measurement of the charge radius of He-6 in an atom trap and its explanation by ab-initio calculations of nuclear structure. Precise mass measurements on critical waiting-point nuclei in the rapid-proton-capture process set the time scale for this important path in nucleosynthesis. An abrupt fall-off was identified in the subbarrier fusion of several heavy-ion systems. ATLAS operated for 5559 hours of research in FY2004 while achieving 96% efficiency of beam delivery for experiments. In Medium Energy Physics, substantial progress was made on a long-term experiment to search for the violation of time-reversal invariance using trapped Ra atoms. New results from HERMES reveal the influence of quark angular momentum. Experiments at JLAB search for evidence of color transparency in rho-meson production and study the EMC effect in helium isotopes. New theoretical results include a Poincare-covariant description of baryons as composites of confined quarks and non-point-like diquarks. Green's function Monte Carlo techniques give accurate descriptions of the excited states of light nuclei, and these techniques have been extended to scattering states for astrophysics studies. A theoretical description of the phenomenon of proton radioactivity has been extended to triaxial nuclei.
Argonne continues to lead in the development and exploitation of the new technical concepts that will truly make RIA, in the words of NSAC, ''the world-leading facility for research in nuclear structure and nuclear astrophysics''. The performance standards for new classes of superconducting cavities continue to increase. Driver-linac transients and faults have been analyzed to understand reliability issues and failure modes. Liquid-lithium targets were shown to successfully survive the full-power deposition of a RIA beam. Our science and our technology continue to point the way to this major advance. It is a tremendously exciting time in science, for RIA holds the keys to unlocking important secrets of nature. The work described here shows how far we have come and makes clear that we know the path to meet these intellectual challenges. The great progress that has been made in meeting the exciting intellectual challenges of modern nuclear physics reflects the talents and dedication of the Physics Division staff and the visitors, guests, and students who bring so much to the research.
Jansen, Erik
2013-08-10
The consistent wind resource in the Great Plains of North America has encouraged the development of wind energy facilities across this region. In the Texas Panhandle, a high-quality wind resource is only one factor that has led to the expansion of wind energy development. Other factors include federal tax incentives and the availability of subsidies. Moreover, the state Renewable Portfolio Standard (RPS), mandating production of 10,000 megawatts of renewable energy in the state by 2025, has contributed to a favorable regulatory and permitting environment (State Energy Conservation Office 2010). Considering the current rate of development, the RPS will be met in coming years (American Wind Energy Association 2011), and the rate of development is likely to continue. To meet increased energy demands in the face of a chronically constrained transmission grid, Texas has developed a comprehensive plan that organizes and prioritizes new transmission systems in high-quality wind resource areas called Competitive Renewable Energy Zones (CREZ). The CREZ plan provides developers a solution to transmission constraints and unlocks large areas of undeveloped wind resource. In the northern Texas Panhandle, there are two CREZs that are classified as Class 3 wind (Class 5 is the highest) and range from 862,725 to 1,772,328 ha in size (Public Utility Commission of Texas 2008). Grassland bird populations have declined more than any other bird group in North America (Peterjohn and Sauer 1999, Sauer et al. 2004). Loss of grassland habitat to agricultural development has been the greatest contributor to the decline of grassland bird populations, but development of non-renewable (i.e., oil, coal, and gas) and renewable (i.e., wind, solar, biomass, and geothermal) energy sources has contributed to the decline as well (Pimentel et al. 2002, Maybe and Paul 2007).
The effects of wind energy development on declining grassland bird populations have become an area of extensive research, as we attempt to understand and minimize potential impacts of a growing energy sector on declining bird populations. Based on data from post-construction fatality surveys, two grassland bird groups have been the specific focus of research: passerines (the songbird guild) and raptors (birds of prey). The effects of wind energy development on these two groups of birds, both of conservation concern, have been examined over the last decade. The primary focus of this research has been on mortality resulting from collision with wind turbines (Kuvlesky et al. 2007). Most studies merely quantify post-construction fatality levels (e.g., Erickson et al. 2002), while very few provide a comparison with bird populations prior to development through a Before-After-Control-Impact (BACI) study design. Before-After-Control-Impact studies provide powerful evidence of avian/wind energy relationships (Anderson et al. 1999). Despite repeated calls for these types of studies (Anderson et al. 1999, Madders and Whitfield 2006, Kuvlesky et al. 2007), few have been conducted in North America. Although several European researchers (Larsson 1994, de Lucas et al. 2007) have used BACI designs to examine whether wind facilities modified raptor behavior, there is a scarcity of BACI data on avian-wind energy relationships in North American grassland ecosystems. Fewer than a handful of studies in the entire United States, let alone the southern shortgrass prairie ecosystem, incorporate preconstruction data to form the baseline for post-construction impact estimates (Johnson et al. 2000, Erickson et al. 2002). Although declines in grassland bird populations are well-documented (Peterjohn and Sauer 1999, Sauer et al.
2004), the causal mechanisms affecting the decline of grassland birds with increasing wind energy development in the southern shortgrass prairie are not well understood (Kuvlesky et al. 2007, Maybe and Paul 2007). Several factors may affect bird populations when wind turbines are constructed in areas with high bird densities (de Lucas et al. 2007). Habitat fragmentation, noise from turbines, physical movement of turbine blades, and increased vehicle traffic have been suggested as causes of decreased density of nesting grassland birds in Minnesota (Leddy et al. 1999), Oklahoma (O’Connell and Piorkowski 2006), and South Dakota (Shaffer and Johnson 2008). Similarly, constructing turbines in areas where flight patterns place birds at heights similar to those of the turbine blades increases the potential for collisions (Johnson et al. 2000, Hoover 2002). Raptor fatalities have been associated with topographic features such as ridges, saddles, and rims where birds use updrafts from prevailing winds (Erickson et al. 2000, Johnson et al. 2000, Barrios and Rodriquez 2004, Hoover and Morrison 2005). Thus, wind energy development can result in indirect (e.g., habitat avoidance, decreased nest success) and direct (e.g., collision fatalities) impacts to bird populations (Anderson et al. 1999). Directly quantifying the level of potential impacts (e.g., estimated fatalities per megawatt-hour) from wind energy development is beyond the scope of this study. Instead, I aim to quantify density and occupancy for obligate grassland songbirds and flight behavior for raptors, predict where impacts may occur, and provide management recommendations to minimize potential impacts.
The United States Department of Energy (DOE), through the Office of Energy Efficiency and Renewable Energy, contracted Texas Tech University to investigate grassland bird patterns of occurrence in the anticipated CREZ in support of DOE’s 20% Wind Energy by 2030 initiative. In cooperation with Iberdrola Renewables, Inc., studies initiated by Wulff (2010) at Texas Tech University were continued at an area proposed for wind energy development and at a separate reference site unassociated with wind energy development. I focused on four primary objectives, and this thesis is accordingly organized in four chapters that address grassland bird density, grassland bird occupancy, and raptor flight patterns, and finally summarize species diversity and composition. The following chapters use formatting from the Journal of Wildlife Management guidelines (Block et al. 2011), with modifications as required by the Texas Tech University Graduate School. 1) I estimate pre-construction bird density patterns using methods that adjust for imperfect detection. I used a distance sampling protocol that accounts for incomplete detection in the field, where birds are present but not detected (Buckland et al. 2001). I improved density estimates with hierarchical distance sampling models, a modeling technique that incorporates the detection process along with environmental covariates that further influence bird density (Royle et al. 2004, Royle and Dorazio 2008). Covariates included road density and current oil and gas infrastructure, to determine the relationship between existing energy development and bird density patterns. Further, I used remote sensing techniques and vegetation field data to investigate how landcover characteristics influenced bird density patterns. I focused species-specific analyses on obligate grassland birds with >70 detections per season, namely grasshopper sparrow (Ammodramus savannarum) and horned lark (Eremophila alpestris).
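Distance sampling adjusts counts for imperfect detection via a fitted detection function g(x) of perpendicular distance x. A common choice, though not necessarily the model selected in this thesis, is the half-normal, sketched below.

```python
import numpy as np

def halfnormal_detection(distances_m, sigma_m):
    """Half-normal detection function g(x) = exp(-x^2 / (2*sigma^2)),
    the probability of detecting a bird at perpendicular distance x;
    sigma is the fitted scale parameter."""
    x = np.asarray(distances_m, dtype=float)
    return np.exp(-x**2 / (2.0 * sigma_m**2))

def avg_detection_prob(truncation_m, sigma_m, n=10000):
    """Average detection probability within the truncation distance,
    assuming uniformly distributed distances (line-transect geometry);
    density estimates divide raw counts by this probability."""
    x = np.linspace(0.0, truncation_m, n)
    return halfnormal_detection(x, sigma_m).mean()
```

Dividing a raw count by the average detection probability is what corrects density estimates for birds that were present but not detected.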
Chapter II focuses on hierarchical models that describe relationships between grassland bird density and anthropogenic and landscape features. 2) A large number of bird detections (>70) is needed to estimate density using distance sampling, and collecting that many detections is often not feasible, particularly for cryptic species or species that naturally occur at low densities (Buckland et al. 2001). Occupancy models require far fewer data and are often used as a surrogate for bird abundance when there are fewer detections (MacKenzie and Nichols 2004). I used occupancy models that account for imperfect detection and incorporate species abundance to improve estimates of occurrence probability (Royle 2004). I focused species-specific analyses on grassland birds with few detections: Cassin’s sparrow (Peucaea cassinii), eastern meadowlark (Sturnella magna), and upland sandpiper (Bartramia longicauda). Chapter III uses a multi-season dynamic site occupancy model that incorporates bird abundance to better estimate occurrence probability. 3) Given the topographic relief of the study sites, the proposed design of the wind facility, and its location within the central U.S. migratory corridor, I expanded the study to investigate raptor abundance and flight behavior (Hoover 2002, Miller 2008). I developed a new survey technique that improved the accuracy of raptor flight height estimates and compared seasonal counts and flight heights at the plateau rim and areas further inland. I used counts and flight behaviors to calculate species-specific collision risk indices for raptors based on topographic features. I focused species-specific analyses on raptors with the highest counts: American kestrel (Falco sparverius), northern harrier (Circus cyaneus), red-tailed hawk (Buteo jamaicensis), Swainson’s hawk (Buteo swainsoni), and turkey vulture (Cathartes aura). 
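Occupancy models that let abundance drive detection (the Royle-Nichols formulation) can be illustrated with a minimal likelihood sketch. The key idea: if N birds occupy a site and each is detected independently with probability r per survey, the per-survey detection probability is 1 - (1 - r)^N, so detection heterogeneity across sites carries information about abundance. The data and parameter values below are hypothetical:

```python
import math

def royle_nichols_nll(lam, r, detections, n_surveys, max_n=50):
    """Negative log-likelihood of a Royle-Nichols-style model.

    Site abundance N ~ Poisson(lam); each of N birds is detected with
    probability r per survey, so the per-survey detection probability
    at a site with N birds is p_N = 1 - (1 - r)**N. `detections` holds
    the number of surveys (out of n_surveys) with a detection per site.
    The latent N is marginalized out by summing over 0..max_n.
    """
    nll = 0.0
    for y in detections:
        like = 0.0
        for n in range(max_n):
            pois = math.exp(-lam) * lam ** n / math.factorial(n)
            p = 1.0 - (1.0 - r) ** n
            binom = (math.comb(n_surveys, y)
                     * p ** y * (1 - p) ** (n_surveys - y))
            like += pois * binom
        nll -= math.log(like)
    return nll

def implied_occupancy(lam):
    """Occupancy implied by the abundance model: psi = P(N > 0)."""
    return 1.0 - math.exp(-lam)
```

Minimizing `royle_nichols_nll` over (lam, r) jointly estimates mean abundance and individual detectability; the occupancy probability then follows from lam rather than being estimated separately.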
Chapter IV describes patterns of seasonal raptor abundance and flight behavior and how topography modulates collision risk with proposed wind turbines. 4) Finally, for completeness, in Chapter V I summarize morning point count data for all species and provide estimates of relative composition and species diversity using the Shannon-Wiener diversity index (Shannon and Weaver 1949).
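The Shannon-Wiener index referenced above is H' = -Σ p_i ln(p_i), where p_i is the proportion of individuals belonging to species i. A minimal computation (the counts are made up for illustration):

```python
import math

def shannon_wiener(counts):
    """Shannon-Wiener diversity index H' = -sum(p_i * ln p_i),
    where p_i is the fraction of all individuals in species i.
    Zero counts contribute nothing (lim p->0 of p ln p is 0)."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total)
                for c in counts if c > 0)
```

For S equally abundant species H' equals ln(S), its maximum; a community dominated by one species drives H' toward 0.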
Molina, Luisa T.; Molina, Mario J.; Volkamer, Rainer; de Foy, Benjamin; Lei, Wenfang; Zavala, Miguel; Velasco, Erik
2008-10-31
This project was one of three collaborating grants funded by DOE/ASP to characterize fine particulate matter (PM) and secondary PM precursors in the Mexico City Metropolitan Area (MCMA) during the MILAGRO Campaign. The overall effort of MCMA-2006, one of the four components, focused on i) examination of the primary emissions of fine particles and precursor gases leading to photochemical production of atmospheric oxidants and secondary aerosol particles; ii) measurement and analysis of secondary oxidants and secondary fine PM production, with particular emphasis on secondary organic aerosol (SOA); and iii) evaluation of the photochemical and meteorological processes characteristic of the Mexico City Basin. The collaborative teams pursued these goals through three main tasks: i) analysis of fine PM and secondary PM precursor gaseous species data taken during the MCMA-2002/2003 campaigns and preparation of publications; ii) planning of the MILAGRO Campaign and deployment of instruments around the MCMA; and iii) analysis of MCMA-2006 data and publication preparation. The measurement phase of the MILAGRO Campaign was successfully completed in March 2006 with excellent participation from the international scientific community and outstanding cooperation from Mexican government agencies and institutions. The project reported here was led by the Massachusetts Institute of Technology/Molina Center for Energy and the Environment (MIT/MCE2) team and coordinated with DOE/ASP-funded collaborators at Aerodyne Research Inc., the University of Colorado at Boulder, and Montana State University. To date, 24 papers documenting the findings from this project have been published. The results have significantly improved our understanding of the meteorological and photochemical processes contributing to the formation of ozone, secondary aerosols, and other pollutants. 
Key findings from MCMA-2003 include a vastly improved speciated emissions inventory for on-road vehicles: MCMA motor vehicles produce abundant amounts of primary PM, elemental carbon, particle-bound polycyclic aromatic hydrocarbons, carbon monoxide, and a wide range of air toxics. Other findings include demonstration of the feasibility of using eddy covariance techniques to measure fluxes of volatile organic compounds in an urban core, providing a valuable tool for validating local emissions inventories; a much better understanding of the sources and atmospheric loadings of volatile organic compounds; the first spectroscopic detection of glyoxal in the atmosphere; a unique analysis of the high fraction of ambient formaldehyde from primary emission sources; characterization of ozone formation and its sensitivity to VOCs and NOx; a much more extensive knowledge of the composition, size distribution, and atmospheric mass loadings of both primary and secondary fine PM, including the fact that the rate of MCMA SOA production greatly exceeded that predicted by current atmospheric models; evaluation of significant errors that can arise from standard air quality monitors for O3 and NO2; and the implementation of an innovative Markov Chain Monte Carlo method for inorganic aerosol modeling as a powerful tool to analyze aerosol data and predict gas-phase concentrations where these are unavailable. During the MILAGRO Campaign the collaborative team utilized a combination of central fixed sites and a mobile laboratory deployed to representative urban and boundary sites throughout the MCMA to measure trace gases and fine particles. Analysis of the extensive 2006 data sets has confirmed the key findings from MCMA-2002/2003; in addition, MCMA-2006 provided more detailed gas and aerosol chemistry and wider regional-scale coverage. 
Key results include an updated 2006 emissions inventory; extension of the flux system to measure fluxes of fine particles; better understanding of the sources and apportionment of aerosols, including contributions from biomass burning and industrial sources; a comprehensive evaluation of metal-containing particles in a complex urban environment; identification of a close correlation between the rate of SOA production and “odd oxygen” (O3 + NO2), and between primary organic PM and CO in the urban plume; and a more sophisticated understanding of the relationship between ozone formation and ozone precursors: while ozone production in the urban area is VOC-limited, the response is mostly NOx-limited in the surrounding mountains. Comparison of the findings from 2003 and 2006 also confirms that VOC levels decreased during the three-year period, while NOx levels remained about the same. The results from the 2002/2003 and 2006 campaigns have been presented at international conferences and communicated to Mexican government officials. In addition, a large number of graduate students and post-doctoral associates were involved in the project. All data sets and publications are available to the scientific community.
Davidson, E.A.; Dail, D.B.; Hollinger, D.; Scott, N.; Richardson, A.
2012-08-02
Forests provide wildlife habitat, water and air purification, climate moderation, and timber and nontimber products. Concern about climate change has put forests in the limelight as sinks of atmospheric carbon. The C stored in global vegetation, mostly in forests, is nearly equivalent to the amount present in atmospheric CO2. Both voluntary and government-mandated carbon trading markets are being developed and debated, some of which include C sequestration resulting from forest management as a possible tradeable commodity. However, uncertainties regarding sources of variation in sequestration rates, validation, and leakage remain significant challenges for devising strategies to include forest management in C markets. Hence, the need for scientifically based information on C sequestration by forest management has never been greater. The consequences of forest management for the US carbon budget are large, because about two-thirds of the ~300 million hectare US forest resource is classified as 'commercial forest.' In most C accounting budgets, forest harvesting is considered to cause a net release of C from the terrestrial biosphere to the atmosphere. However, forest management practices could be designed to meet the multiple goals of providing wood and paper products, creating economic returns from natural resources, and sequestering C from the atmosphere. 
The shelterwood harvest strategy, which removes about 30% of the basal area of the overstory trees in each of three successive harvests spread over thirty years as part of a stand rotation of 60-100 years, may improve net C sequestration compared to clear-cutting because: (1) the average C stored on the land surface over a rotation increases; (2) harvesting only overstory trees means that a larger fraction of the harvested logs can be used for long-lived sawtimber products, compared to the greater pulp fraction resulting from clearcutting; and (3) the shelterwood cut encourages growth of subcanopy trees by opening up the forest canopy to increased light penetration. Decomposition of onsite harvest slash and of wastes created during timber processing releases CO2 to the atmosphere, thus offsetting some of the C sequestered in vegetation. Decomposition of soil C and dead roots may also be temporarily stimulated by increased light penetration and warming of the forest floor. Quantification of these processes and their net effect is needed. We began studying C sequestration in a planned shelterwood harvest at the Howland Forest in central Maine in 2000. The harvest was carried out in 2002 by the International Paper Corporation, which assisted us in tracking the fates of harvest products (Scott et al., 2004, Environmental Management 33: S9-S22). Here we present the results of intensive on-site studies of the decay of harvest slash, soil respiration, growth of the remaining trees, and net ecosystem exchange (NEE) of CO2 during the first six years following the harvest. These results are combined with calculations of C in persisting off-site harvest products to estimate the net C consequences to date of this commercial shelterwood harvest operation. 
Tower-based eddy covariance is an ideal method for this study, as it integrates all C fluxes in and out of the forest over a large 'footprint' area and can reveal how the net C flux, as well as gross primary productivity and respiration, change following harvest. Because the size of this experiment precludes large-scale replication, we used a paired-airshed approach, similar to classic large-scale paired-watershed experiments. Measurements of biomass and C fluxes in control and treatment stands were compared during a pre-treatment calibration period, and divergence from pre-treatment relationships between the two sites was measured after the harvest treatment.
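The core of the eddy covariance method is Reynolds decomposition: the vertical flux of a scalar is the time-averaged product of the fluctuations of vertical wind speed and scalar concentration about their period means. A minimal sketch of that covariance calculation follows; real flux processing adds coordinate rotation, density (WPL) corrections, and quality control, none of which are shown here:

```python
def eddy_covariance_flux(w, c):
    """Flux as the covariance of vertical wind speed w (m/s) and
    scalar concentration c over an averaging period:
    F = mean(w' * c'), where primes denote deviations from the
    period means (Reynolds averaging)."""
    n = len(w)
    w_bar = sum(w) / n
    c_bar = sum(c) / n
    return sum((wi - w_bar) * (ci - c_bar)
               for wi, ci in zip(w, c)) / n
```

With c in CO2 density units (e.g., mg m^-3), the result is a CO2 flux (mg m^-2 s^-1); a negative value over a forest indicates net uptake.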
Sandercock, Brett K.
2013-05-22
Executive Summary 1. We investigated the impacts of wind power development on the demography, movements, and population genetics of Greater Prairie-Chickens (Tympanuchus cupido) at three sites in north-central and eastern Kansas over a 7-year period. Only 1 of the 3 sites was developed for wind power: the 201-MW Meridian Way Wind Power Facility at the Smoky Hills site in north-central Kansas. Our project report is based on population data for prairie chickens collected during a 2-year preconstruction period (2007-2008), a 3-year postconstruction period (2009-2011), and one final year of lek surveys (2012). Where relevant, we present preconstruction data from our field studies at reference sites in the northern Flint Hills (2007-2009) and southern Flint Hills (2006-2008). 2. We addressed seven potential impacts of wind power development on prairie chickens: lek attendance, mating behavior, use of breeding habitat, fecundity rates, natal dispersal, survival rates, and population numbers. Our analyses of pre- and postconstruction impacts are based on an analysis of covariance design in which we modeled population performance as a function of treatment period, distance to the eventual or actual site of the nearest wind turbine, and the interaction of these factors. Our demographic and movement data from the 6-year study period at the Smoky Hills site included 23 lek sites, 251 radio-marked females monitored for 287 bird-years, and 264 nesting attempts. Our genetic data were based on genotypes of 1,760 females, males, and chicks that were screened with a set of 27 microsatellite markers optimized in the lab. 3. In our analyses of lek attendance, the annual probability of lek persistence during the preconstruction period was ~0.9. During the postconstruction period, distance to the nearest turbine did not have a significant effect on the probability of lek persistence. 
However, the probability of lek persistence increased from 0.69 at 0 m to 0.89 at 30 km from turbines, and most abandoned lek sites were located <5 km from turbines. Probability of lek persistence was significantly related to habitat and to the number of males: leks had a higher probability of persistence in grasslands than in agricultural fields, and persistence increased from ~0.2 for leks of 5 males to >0.9 for leks of 10 or more males. Large leks in grasslands should be a higher priority for conservation. Overall, wind power development had a weak effect on the annual probability of lek persistence. 4. We used molecular methods to investigate the mating behavior of prairie chickens. The prevailing view for lek-mating grouse is that females mate once to fertilize the clutch and that conspecific nest parasitism is rare. We found evidence that females mate multiple times to fertilize the clutch (8-18% of broods, 4-38% of chicks) and will parasitize the nests of other females during egg-laying (~17% of nests). Rates of multiple mating and nest parasitism were highest in the fragmented landscapes at the Smoky Hills field site and lower at the Flint Hills field sites. Comparisons of the pre- and postconstruction periods showed that wind energy development did not affect the mating behaviors of prairie chickens. 5. We examined use of breeding habitats by radio-marked females and conducted separate analyses for nest site selection and for movements of females not attending nests or broods. The landscape was a mix of native prairie and agricultural habitats, and nest site selection was not random: females preferred to nest in grasslands. Nests tended to be closer to turbines during the postconstruction period, and there was no evidence of behavioral avoidance of turbines by females during nest site selection. 
Movements of females not attending nests or broods showed that females crossed the site of the wind power development at higher rates during the preconstruction period (20%) than the postconstruction period (11%), and that movements away from turbines were more frequent during the postconstruction period. Thus, wind power development appears to affect movements in breeding habitats but not nest site selection.
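The pre/post analysis of covariance described in this summary can be sketched as a logistic model with a treatment-by-distance interaction. The coefficients below are illustrative only, chosen to reproduce the persistence probabilities reported above (~0.9 preconstruction; 0.69 at 0 m and 0.89 at 30 km postconstruction); they are not the authors' fitted values:

```python
import math

def persistence_prob(beta, post, dist_km):
    """ANCOVA-style logistic model for lek persistence:
    logit(p) = b0 + b1*post + b2*dist + b3*post*dist.
    `post` is 0 (preconstruction) or 1 (postconstruction); `dist_km`
    is distance to the eventual or actual nearest turbine. The
    interaction coefficient b3 asks whether the distance effect
    changes after construction -- the key question in the design.
    """
    b0, b1, b2, b3 = beta
    eta = b0 + b1 * post + b2 * dist_km + b3 * post * dist_km
    return 1.0 / (1.0 + math.exp(-eta))

# Illustrative coefficients (not fitted values) reproducing the
# reported probabilities: ~0.9 preconstruction at any distance;
# 0.69 at 0 km and 0.89 at 30 km postconstruction.
beta = (2.197, -1.397, 0.0, 0.0430)
```

A significant positive b3 with b2 near zero corresponds to the reported pattern: distance mattered only after the turbines were built.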
Strategies for Detecting Hidden Geothermal Systems by Near-Surface Gas Monitoring
Lewicki, Jennifer L.; Oldenburg, Curtis M.
2004-12-15
'Hidden' geothermal systems are those systems above which hydrothermal surface features (e.g., hot springs, fumaroles, elevated ground temperatures, hydrothermal alteration) are lacking. Emissions of moderate- to low-solubility gases (e.g., CO2, CH4, He) may be one of the primary near-surface signals from these systems. Detection of anomalous gas emissions related to hidden geothermal systems may therefore be an important tool for discovering new geothermal resources. This study investigates the potential for CO2 detection and monitoring in the subsurface and above ground in the near-surface environment to serve as a tool to discover hidden geothermal systems. We focus the investigation on CO2 because of (1) its abundance in geothermal systems, (2) its moderate solubility in water, and (3) the wide range of technologies available to monitor CO2 in the near-surface environment. However, monitoring the near-surface environment for CO2 derived from hidden geothermal reservoirs is complicated by the large variation in CO2 fluxes and concentrations arising from natural biological and hydrologic processes. In the near-surface environment, the flow and transport of CO2 at high concentrations will be controlled by its high density, low viscosity, and high solubility in water relative to air. Numerical simulations of CO2 migration show that CO2 concentrations can reach very high levels in the shallow subsurface even for relatively low geothermal source CO2 fluxes. However, once CO2 seeps out of the ground into the atmospheric surface layer, surface winds are effective at dispersing the seepage. In natural ecological systems without geothermal gas emissions, near-surface CO2 fluxes and concentrations are primarily controlled by CO2 uptake by photosynthesis, production by root respiration and microbial decomposition of soil/subsoil organic matter, groundwater degassing, and exchange with the atmosphere. 
Available technologies for monitoring CO2 in the near-surface environment include (1) the infrared gas analyzer (IRGA) for measurement of concentrations at point locations, (2) the accumulation chamber (AC) method for measuring soil CO2 fluxes at point locations, (3) the eddy covariance (EC) method for measuring net CO2 flux over a given area, (4) hyperspectral imaging of vegetative stress resulting from elevated CO2 concentrations, and (5) light detection and ranging (LIDAR) that can measure CO2 concentrations over an integrated path. Technologies currently in developmental stages that have the potential to be used for CO2 monitoring include tunable lasers for long distance integrated concentration measurements and micro-electronic mechanical systems (MEMS) that can make widespread point measurements. To address the challenge of detecting potentially small-magnitude geothermal CO2 emissions within the natural background variability of CO2, we propose an approach that integrates available detection and monitoring methodologies with statistical analysis and modeling strategies. Within the area targeted for geothermal exploration, point measurements of soil CO2 fluxes and concentrations using the AC method and a portable IRGA, respectively, and measurements of net surface flux using EC should be made. Also, the natural spatial and temporal variability of surface CO2 fluxes and subsurface CO2 concentrations should be quantified within a background area with similar geologic, climatic, and ecosystem characteristics to the area targeted for geothermal exploration. Statistical analyses of data collected from both areas should be used to guide sampling strategy, discern spatial patterns that may be indicative of geothermal CO2 emissions, and assess the presence (or absence) of geothermal CO2 within the natural background variability with a desired confidence level. 
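One simple version of the statistical screening step proposed above is to flag fluxes that exceed the background distribution at a chosen confidence level. This sketch assumes a roughly normal background (field CO2 fluxes are often log-normal, in which case the same test should be applied to log-transformed values); the flux values are hypothetical:

```python
import math

def flag_anomalies(background, measured, confidence_z=2.33):
    """Flag measured fluxes exceeding the background mean by
    confidence_z sample standard deviations (z = 2.33 corresponds
    to roughly one-sided 99% confidence under normality)."""
    n = len(background)
    mu = sum(background) / n
    sd = math.sqrt(sum((x - mu) ** 2 for x in background) / (n - 1))
    threshold = mu + confidence_z * sd
    return [x > threshold for x in measured]
```

In practice the background sample would come from the geologically and ecologically similar reference area described in the text, so that biologic variability is characterized before any flux in the exploration target is declared anomalous.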
Once measured CO2 concentrations and fluxes have been determined with high confidence to be of anomalous geothermal origin, more expensive vertical subsurface gas sampling and chemical and isotopic analyses can be undertaken. Integrated analysis of all measurements will determine definitively whether CO2 derived from a deep geothermal source is present and, if so, the spatial extent of the anomaly. The appropriateness of further geophysical measurements, installation of deep wells, and geochemical analyses of deep fluids can then be decided based on the results of the near-surface CO2 monitoring program.