Normal Basis Multiplication Algorithms for GF(2^n) (Full Version)
International Association for Cryptologic Research (IACR)
Normal Basis Multiplication Algorithms for GF(2^n) (Full Version). Haining Fan, Duo Liu and Yiqi Dai. fan_haining@yahoo.com Abstract - In this paper, we propose a new normal basis multiplication algorithm for GF(2^n). This algorithm can be used to design not only fast software algorithms but also low
On the relation between the MXL family of algorithms and Gröbner basis algorithms
International Association for Cryptologic Research (IACR)
On the relation between the MXL family of algorithms and Gröbner basis algorithms. Martin R ... Solving (PoSSo) problem. The most efficient known algorithms reduce the Gröbner basis computation, on which a new family of algorithms is based (MXL, MXL2 and MXL3). By studying and describing
Decision Trees: More Theoretical Justification for Practical Algorithms
Fiat, Amos
Decision Trees: More Theoretical Justification for Practical Algorithms. Amos Fiat and Dmitry Pechyony ({fiat,pechyony}@tau.ac.il). Abstract: We study impurity-based decision tree algorithms such as CART, C4.5, etc., so as to better understand their theoretical underpinnings. We consider such algorithms on special forms of functions
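The impurity measures that CART and C4.5 rank splits by are simple to state. As a quick illustration (not code from the paper), the Gini and entropy impurities of a node with given class counts are:

```python
# Gini impurity (CART's criterion) and entropy (the measure behind C4.5).
# A toy illustration, not code from the paper above.
import math

def gini(counts):
    """Gini impurity of a node with the given class counts."""
    total = sum(counts)
    if total == 0:
        return 0.0
    return 1.0 - sum((c / total) ** 2 for c in counts)

def entropy(counts):
    """Information entropy in bits of the class distribution."""
    total = sum(counts)
    ps = [c / total for c in counts if c > 0]
    return -sum(p * math.log2(p) for p in ps)

# A pure node has zero impurity; a 50/50 node is maximally impure.
print(gini([10, 0]), gini([5, 5]))  # 0.0 0.5
```

A split is then scored by the impurity decrease it produces, weighted by child-node sizes.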
A theoretical basis for the Harmonic Balance Method
García-Saldaña, Johanna D.
2012-01-01
The Harmonic Balance method provides a heuristic approach for finding truncated Fourier series as an approximation to the periodic solutions of ordinary differential equations. Another natural way of obtaining this type of approximation is to apply numerical methods. In this paper we recover the pioneering results of Stokes and Urabe that provide a theoretical basis for proving that near these truncated series, however they have been obtained, there are actual periodic solutions of the equation. We restrict our attention to one-dimensional non-autonomous ordinary differential equations and apply the results obtained to a couple of concrete examples coming from planar autonomous systems.
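A minimal sketch of the Harmonic Balance idea on a one-dimensional non-autonomous equation of the kind the abstract mentions. The toy equation x' + x = cos t is my own choice, not one of the paper's examples; with a single harmonic the ansatz x = a cos t + b sin t already satisfies the equation exactly, so matching the cos and sin coefficients recovers the periodic solution:

```python
# First-harmonic balance for x'(t) + x(t) = cos(t): substitute the ansatz
# x = a*cos(t) + b*sin(t) and match the cos and sin Fourier coefficients.
# Illustrative toy equation, not taken from the paper.
import numpy as np

# x' + x = (a + b)*cos(t) + (b - a)*sin(t)  must equal  1*cos(t) + 0*sin(t)
A = np.array([[1.0, 1.0],    # cos-coefficient equation: a + b = 1
              [-1.0, 1.0]])  # sin-coefficient equation: -a + b = 0
rhs = np.array([1.0, 0.0])
a, b = np.linalg.solve(A, rhs)
print(a, b)  # 0.5 0.5 -- here the truncated series is the exact periodic solution
```

For nonlinear equations the same coefficient-matching produces a nonlinear algebraic system, and the Stokes/Urabe results bound how far the true periodic orbit can be from the truncated series.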
Theoretical Analysis and Efficient Algorithms for Crowdsourcing
Li, Hongwei
2015-01-01
algorithm to solve it efficiently. Empirical results on both ... to solve problems efficiently by online game players. For ... algorithm is proposed to efficiently infer the true labels
Crawford, T. Daniel
The balance between theoretical method and basis set quality: A systematic study of equilibrium ... the best balance between theoretical method and basis set quality. This "balance" was evident
THE PERFORMANCE OF QUEUING THEORETIC VIDEO ON DEMAND ALGORITHMS
THE PERFORMANCE OF QUEUING THEORETIC VIDEO ON DEMAND ALGORITHMS. BOURAS C.(1)(2), GAROFALAKIS J.(1) ... Greece. KEYWORDS: Video On Demand (VOD), Performance of Algorithms, Simulation, Modeling. ABSTRACT: Video On Demand ... on state-of-the-art technologies is Video On Demand (VOD). A Video On Demand system provides on demand
Theoretical Basis for the Design of a DWPF Evacuated Canister
Routt, K.R.
2001-09-17
This report provides the theoretical bases for use of an evacuated canister for draining a glass melter. Design recommendations are also presented to ensure satisfactory performance in future tests of the concept.
Centrifuge Permeameter for Unsaturated Soils. I: Theoretical Basis and Experimental Developments
Zornberg, Jorge G.
Centrifuge Permeameter for Unsaturated Soils. I: Theoretical Basis and Experimental Developments. Jorge G. Zornberg, M.ASCE; and John S. McCartney, A.M.ASCE. Abstract: A new centrifuge permeameter ... the centrifuge permeameter for concurrent determination of the soil-water retention curve (SWRC) and hydraulic
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.; Gosink, Luke J.; Anderson, Richard M.; Hays, Spencer E.; Tardiff, Mark F.
2013-07-01
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm’s risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material, and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper, we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
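The risk-minimization framing above can be sketched with a deliberately synthetic model: Gaussian alarm scores and made-up prior and cost numbers (none of these values come from the paper). The threshold minimizing expected loss weighs both error types at once rather than fixing one rate:

```python
# Pick the alarm threshold minimizing risk = expected loss over both error
# types, given a threat prior and error costs. All numbers are illustrative
# assumptions; the score model is a synthetic Gaussian stand-in.
import math

def norm_cdf(x, mu, sigma):
    return 0.5 * (1.0 + math.erf((x - mu) / (sigma * math.sqrt(2.0))))

P_THREAT = 1e-3       # assumed prior that a vehicle carries illicit material
COST_FP = 1.0         # cost of an unnecessary secondary screening
COST_FN = 10_000.0    # cost of missing a real source

def risk(threshold):
    p_fp = 1.0 - norm_cdf(threshold, 0.0, 1.0)  # benign score N(0,1) alarms
    p_fn = norm_cdf(threshold, 3.0, 1.0)        # threat score N(3,1) missed
    return (1 - P_THREAT) * p_fp * COST_FP + P_THREAT * p_fn * COST_FN

thresholds = [t / 10.0 for t in range(-20, 60)]
best = min(thresholds, key=risk)
print(best, risk(best))
```

Changing the prior or the cost ratio moves the optimal threshold, which is exactly the sensitivity the framework lets decision makers explore.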
Storjohann, Arne
a given integer lattice basis b1, b2, ..., bn ∈ Z^n into a reduced basis. The cost of L³ reduction ... product. The L³ reduction algorithm presented in [12] guarantees to return a basis with ... initial vector for Integer Lattice Basis Reduction. Arne Storjohann, Eidgenössische Technische Hochschule, CH-8092 Zürich
Vazirani, Umesh
CS294-4: Fourier Transforms and Theoretical Computer Science, Spring 1999. Lecture 11: Quantum ... of the major quantum algorithms. We will discuss the following algorithms: computation of the Fourier Transform ... We consider two kinds of Fourier Transforms: the Fourier Transform over Z_N
The Back and Forth Nudging algorithm for data assimilation problems: theoretical results on
Boyer, Edmond
The Back and Forth Nudging algorithm for data assimilation problems: theoretical results ... consider the back and forth nudging algorithm that has been introduced for data assimilation purposes ... of the system can then be seen as a control vector [LDT86]. Finally, the basic idea of stochastic methods
Algorithmic Tamper-Proof (ATP) Security: Theoretical Foundations for Security Against
Lysyanskaya, Anna
Algorithmic Tamper-Proof (ATP) Security: Theoretical Foundations for Security Against Hardware Tampering. Rosario Gennaro, Anna Lysyanskaya, Tal Malkin, Silvio Micali, and Tal Rabin. IBM T ... under (feasible) attacks that tamper with the secret key. In this paper we propose a theoretical
Algorithmic Tamper-Proof (ATP) Security : Theoretical Foundations for Security Against
Malkin, Tal
Algorithmic Tamper-Proof (ATP) Security: Theoretical Foundations for Security Against Hardware Tampering. Rosario Gennaro, Anna Lysyanskaya, Tal Malkin, Silvio Micali, and Tal Rabin. IBM T ... may completely break under (feasible) attacks that tamper with the secret key. In this paper we
Pedram, Massoud
A Game-Theoretic Price Determination Algorithm for Utility Companies Serving a Community in Smart ... cooperative utility companies who have incentives to maximize their own profits. The energy price competition forms an n-person game among utility companies where one's price strategy will affect the payoffs of others
A new chemo-evolutionary population synthesis model for early-type galaxies. I: Theoretical basis
A. Vazdekis; E. Casuso; R. F. Peletier; J. E. Beckman
1996-05-17
We have developed a new stellar population synthesis model designed to study early-type galaxies. It provides optical and near-infrared colors, and line indices for 25 absorption lines. It can synthesize single age, single metallicity stellar populations or follow the galaxy through its evolution from an initial gas cloud to the present time. The model incorporates the new isochrones of the Padova group and the latest stellar spectral libraries. The model has been extensively compared with previous ones in the literature to establish its accuracy as well as the accuracy of models of this kind in general. Using the evolutionary version of the model we find that we cannot fit the most metal-rich elliptical galaxies if we keep the IMF constant and do not allow infall of gas. We do however reproduce the results of Arimoto & Yoshii (1986) for the evolution of the gas, and produce colors, and, for the first time with this type of model, absorption line-strengths. It is in fact possible to fit the data for the elliptical galaxies by varying the IMF with time. Our numerical model is in good broad agreement with the analytical 'simple model'. In the present paper we describe the model, and compare a few key observables with new data for three early-type standard galaxies. However the data, as well as our fits, will be discussed in much more detail in a second paper (Vazdekis et al. 1996), where some conclusions will be drawn about elliptical galaxies on the basis of this model.
Two Software Normal Basis Multiplication Algorithms for GF(2^n). Haining Fan and Yiqi Dai
International Association for Cryptologic Research (IACR)
attention for efficient implementation. For portability as well as for price reasons, it is often ... is that the size of lookup tables of Algorithm 1 is larger than that of the RH algorithm. The total number
Quinn, M.J.
1983-01-01
The problem of developing efficient algorithms and data structures to solve graph theoretic problems on tightly-coupled MIMD computers is addressed. Several approaches to parallelizing a serial algorithm are examined. A technique is developed which allows the prediction of the expected execution time of some kinds of parallel algorithms. This technique can be used to determine which parallel algorithm is best for a particular application. Two parallel approximate algorithms for the Euclidean traveling salesman problem are designed and analyzed. The algorithms are parallelizations of the farthest-insertion heuristic and Karp's partitioning algorithm. Software lockout, the delay of processes due to contention for shared data structures, can be a significant hindrance to obtaining satisfactory speedup. Using the tactics of indirection and replication, new data structures are devised which can reduce the severity of software lockout. Finally, an upper bound to the speedup of parallel branch-and-bound algorithms which use the best-bound search strategy is determined.
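The thesis develops a contention-aware prediction technique; as a generic point of comparison only, the standard back-of-the-envelope bound on parallel speedup when a fraction of the work is serial (Amdahl's law, not the thesis's method) looks like this:

```python
# Amdahl's law: upper bound on speedup with p processors when a fraction s
# of the work is inherently serial. A generic illustration, not the
# contention-aware prediction technique developed in the thesis.

def amdahl_speedup(serial_fraction, processors):
    return 1.0 / (serial_fraction + (1.0 - serial_fraction) / processors)

for p in (2, 8, 64):
    print(p, round(amdahl_speedup(0.1, p), 2))
```

With 10% serial work the speedup can never exceed 10, however many processors are added; software lockout effectively inflates the serial fraction further.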
Masci, Frank
IEEE TRANSACTIONS ON SIGNAL PROCESSING, VOL. 1, NO. 1, JANUARY 2012. An Information Theoretic Algorithm for Finding Periodicities in Stellar Light Curves. Pablo Huijse, Student Member, IEEE, Pablo A. Estévez*, Senior Member, IEEE, Pavlos Protopapas, Pablo Zegers, Senior Member, IEEE, and José C. Príncipe, Fellow
Impact of Fading Wireless Channel on the Performance of Game Theoretic Power Control Algorithms. mhayajneh@uaeu.ac.ae; Chaouki Abdallah, EECE Building 112, University of New Mexico, Albuquerque, NM 87131 ... study the existence and uniqueness of Nash equilibrium (NE), and the social desirability of NE
A theoretical analysis of a pattern recognition algorithm for bank failure prediction
Prieto Orlando, Rodrigo Javier
1994-01-01
This thesis describes a theoretical analysis and a series of empirical tests of a pattern recognition based Early Warning System for bank failure prediction. The theoretical analysis centers on the binarization, feature ...
Vincenzo Tamma
2015-05-18
We describe a novel analogue algorithm that allows the simultaneous factorization of an exponential number of large integers with a polynomial number of experimental runs. It is the interference-induced periodicity of "factoring" interferograms measured at the output of an analogue computer that allows the selection of the factors of each integer [1,2,3,4]. At the present stage the algorithm manifests an exponential scaling which may be overcome by an extension of this method to correlated qubits emerging from nth-order quantum correlation measurements. We describe the conditions for a generic physical system to compute such an analogue algorithm. A particular example given by an "optical computer" based on optical interference will be addressed in the second paper of this series [5].
Tamir, Tami
In the last few years the idea of electrically powered cars has turned into reality. This old idea, from ... Battery Utilization in Electric Vehicles: Theoretical Analysis and an Almost Optimal Online ... current demands in electric vehicles. When serving a demand, the current allocation might be split
Min Liang; Li Yang
2012-05-10
Public-key cryptosystems for quantum messages are considered from two aspects: public-key encryption and public-key authentication. Firstly, we propose a general construction of quantum public-key encryption scheme, and then construct an information-theoretic secure instance. Then, we propose a quantum public-key authentication scheme, which can protect the integrity of quantum messages. This scheme can both encrypt and authenticate quantum messages. It is information-theoretic secure with regard to encryption, and the success probability of tampering decreases exponentially with the security parameter with regard to authentication. Compared with classical public-key cryptosystems, one private-key in our schemes corresponds to an exponential number of public-keys, and every quantum public-key used by the sender is an unknown quantum state to the sender.
Coplan, Kevin P.
1984-01-01
An algorithm is presented for game-tree searching that is shown under fairly general but formally specifiable conditions to be more sparing of computational resource than classical alpha-beta minimax. The algorithm was ...
Nikolova, Evdokia Velinova
2009-01-01
Classical algorithms from theoretical computer science arise time and again in practice. However, practical situations typically do not fit precisely into the traditional theoretical models. Additional necessary components ...
Mintert, James R.; Davis, Ernest E.; Dhuyvetter, Kevin C.; Bevers, Stan
1999-06-23
explains how livestock basis is computed, outlines an approach to developing a history of local basis levels, and discusses how historical basis data can be used to forecast basis....
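The computation the publication describes is the standard one: basis is the local cash price minus the nearby futures price, and a simple forecast averages the same period's basis over past years. A sketch with made-up illustrative prices (not figures from the publication):

```python
# Livestock basis = local cash price - nearby futures price ($/cwt).
# A common simple forecast is the multi-year average of the same period's
# basis. All prices below are made-up illustrative numbers.

def basis(cash, futures):
    return cash - futures

# hypothetical historical October basis for fed cattle ($/cwt)
history = [-1.50, -0.75, -1.25, -1.00]
forecast = sum(history) / len(history)

# expected local price implied by a $68/cwt futures quote
expected_cash = 68.00 + forecast
print(round(forecast, 3), round(expected_cash, 2))  # -1.125 66.88
```

Adding futures price + forecast basis converts a hedgeable futures quote into an expected local cash price, which is the use the publication outlines.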
Paris-Sud XI, Université de
Discrete Mathematics and Theoretical Computer Science, DMTCS vol. 14:1, 2012, 147-158. A linear time ... in H such that each edge of H appears in this sequence exactly once and v_{i-1}, v_i ∈ e_i, v_{i-1} ≠ v_i ... Discrete Mathematics and Theoretical Computer Science 14, 1 (2012) 147-158. Zbigniew Lonc and Pawel Naroski
Alasdair Macleod
2007-08-23
MOND is a phenomenological theory with no apparent physical justification which seems to undermine some of the basic principles that underpin established theoretical physics. It is nevertheless remarkably successful over its sphere of application and this suggests MOND may have some physical basis. It is shown here that two simple axioms pertaining to fundamental principles will reproduce the characteristic behaviour of MOND, though the axioms are in conflict with general relativistic cosmology.
Theoretical Analysis and Efficient Algorithms for Crowdsourcing
Li, Hongwei
2015-01-01
Tail and Concentration Inequalities. Lecture Notes ... which satisfies the inequality trivially. Next, we focus ... Old and new concentration inequalities, Chapter 2 in Complex
Sharkey, Keeper L.; Adamowicz, Ludwik; Department of Physics, University of Arizona, Tucson, Arizona 85721
2014-05-07
An algorithm for quantum-mechanical nonrelativistic variational calculations of L = 0 and M = 0 states of atoms with an arbitrary number of s electrons and with three p electrons has been implemented and tested in the calculations of the ground ⁴S state of the nitrogen atom. The spatial part of the wave function is expanded in terms of all-electron explicitly correlated Gaussian functions with the appropriate pre-exponential Cartesian angular factors for states with the L = 0 and M = 0 symmetry. The algorithm includes formulas for calculating the Hamiltonian and overlap matrix elements, as well as formulas for calculating the analytic energy gradient determined with respect to the Gaussian exponential parameters. The gradient is used in the variational optimization of these parameters. The Hamiltonian used in the approach is obtained by rigorously separating the center-of-mass motion from the laboratory-frame all-particle Hamiltonian, and thus it explicitly depends on the finite mass of the nucleus. With that, the mass effect on the total ground-state energy is determined.
Ghelli, Giorgio
Basi di dati: Funzionalità, Progettazione, Interrogazione (Databases: functionality, design, querying). Giorgio Ghelli. Topics: functionality and use of DBMSs; design of a database; querying a database. References: A. Albano, G. Ghelli, R. Orsini, Basi di Dati Relazionali e
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2007-07-11
The Guide assists DOE/NNSA field elements and operating contractors in identifying and analyzing hazards at facilities and sites to provide the technical planning basis for emergency management programs. Supersedes DOE G 151.1-1, Volume 2.
Michele Mosca
2008-08-04
This article surveys the state of the art in quantum computer algorithms, including both black-box and non-black-box results. It is infeasible to detail all the known quantum algorithms, so a representative sample is given. This includes a summary of the early quantum algorithms, a description of the Abelian Hidden Subgroup algorithms (including Shor's factoring and discrete logarithm algorithms), quantum searching and amplitude amplification, quantum algorithms for simulating quantum mechanical systems, several non-trivial generalizations of the Abelian Hidden Subgroup Problem (and related techniques), the quantum walk paradigm for quantum algorithms, the paradigm of adiabatic algorithms, a family of "topological" algorithms, and algorithms for quantum tasks which cannot be done by a classical computer, followed by a discussion.
Algorithms for active learning
Hsu, Daniel Joseph
2010-01-01
6.2 Algorithms ... 6.2.1 CAL algorithm ... IWAL-CAL algorithm ...
Radioactive Waste Management Basis
Perkins, B K
2009-06-03
The purpose of this Radioactive Waste Management Basis is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
Euclid's Algorithm, Gauss' Elimination and Buchberger's Algorithm
International Association for Cryptologic Research (IACR)
Euclid's Algorithm, Gauss' Elimination and Buchberger's Algorithm. Shaohua Zhang, School of Mathematics, Shandong University, Jinan, Shandong, 250100, PRC. Abstract: It is known that Euclid's algorithm, Gauss' elimination and Buchberger's algorithm play important roles in algorithmic number theory
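Of the three algorithms the abstract names, Euclid's is short enough to state in full; Gaussian elimination and Buchberger's algorithm generalize the same remainder-reduction idea to linear systems and polynomial ideals respectively:

```python
# Euclid's algorithm: repeatedly replace the pair (a, b) by (b, a mod b)
# until the remainder vanishes; the last nonzero value is the gcd.

def gcd(a, b):
    while b:
        a, b = b, a % b
    return abs(a)

print(gcd(252, 105))  # 21
```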
Quantum Public-Key Encryption with Information Theoretic Security
Jiangyou Pan; Li Yang
2012-02-20
We propose a definition for the information theoretic security of a quantum public-key encryption scheme, and present bit-oriented and two-bit-oriented encryption schemes satisfying our security definition via the introduction of a new public-key algorithm structure. We extend the scheme to a multi-bit-oriented one, and conjecture that it is also information theoretically secure, depending directly on the structure of our new algorithm.
Topics in Approximation Algorithms
Khare, Monik
2012-01-01
Hybrid Algorithm ... Empirical study of algorithms for packing and covering ... 2.3.1 CPLEX algorithms ...
Papalaskari, Mary-Angela
Time efficiency · Space efficiency · Optimality · Approaches ... of Algorithms - Lecture 2: Theoretical analysis of time efficiency ... and Analysis of Algorithms - Lecture 2: Empirical analysis of time efficiency
Control algorithms for dynamic attenuators
Hsieh, Scott S., E-mail: sshsieh@stanford.edu [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Electrical Engineering, Stanford University, Stanford, California 94305 (United States)]; Pelc, Norbert J. [Department of Radiology, Stanford University, Stanford, California 94305 and Department of Bioengineering, Stanford University, Stanford, California 94305 (United States)]
2014-06-15
Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen.
Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Conclusions: Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.
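The iterated WMV idea in the abstract can be sketched in one dimension under a simplified model that is my own assumption (per-view variance v_i / f_i for fluence f_i, and a fixed total fluence standing in for the dose limit). Minimizing the weighted mean variance then has the closed form f_i ∝ sqrt(w_i v_i), and iteratively shifting weight onto the highest-variance views drives the peak variance down:

```python
# Toy iterated weighted-mean-variance (WMV) minimization for peak variance.
# Assumed model (not from the paper): view i has variance v_i / f_i, and the
# fluence budget sum(f_i) is fixed. For fixed weights w, minimizing
# sum(w_i * v_i / f_i) gives f_i proportional to sqrt(w_i * v_i).
import math

v = [1.0, 4.0, 9.0]   # illustrative per-view attenuation factors
BUDGET = 3.0          # total fluence (stands in for the dose limit)

def wmv_alloc(w):
    raw = [math.sqrt(wi * vi) for wi, vi in zip(w, v)]
    s = sum(raw)
    return [BUDGET * r / s for r in raw]

w = [1.0] * len(v)
for _ in range(60):
    f = wmv_alloc(w)
    var = [vi / fi for vi, fi in zip(v, f)]
    w = [wi * vr for wi, vr in zip(w, var)]  # pile weight on high-variance views

peak_uniform = max(vi / (BUDGET / len(v)) for vi in v)
print(round(max(var), 4), peak_uniform)  # 4.6667 9.0
```

In this toy the iteration converges to equalized per-view variance (sum(v)/BUDGET = 14/3), well below the peak variance of a uniform fluence split.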
Wang, Kunpeng; Chai, Yi [College of Automation, Chongqing University, Chongqing 400044 (China)]; Su, Chunxiao [Research Center of Laser Fusion, CAEP, P. O. Box 919-983, Mianyang 621900 (China)]
2013-08-15
In this paper, we consider the problem of extracting the desired signals from noisy measurements. This is a classical problem of signal recovery which is of paramount importance in inertial confinement fusion. To accomplish this task, we develop a tractable algorithm based on continuous basis pursuit and reweighted ℓ1-minimization. By modeling the observed signals as superposition of scaled, time-shifted copies of a theoretical waveform, structured noise, and unstructured noise on a finite time interval, a sparse optimization problem is obtained. We propose to solve this problem through an iterative procedure that alternates between convex optimization to estimate the amplitude, and local optimization to estimate the dictionary. The performance of the method was evaluated both numerically and experimentally. Numerically, we recovered theoretical signals embedded in increasing amounts of unstructured noise and compared the results with those obtained through popular denoising methods. We also applied the proposed method to a set of actual experimental data acquired from the Shenguang-II laser whose energy was below the detector noise-equivalent energy. Both simulation and experiments show that the proposed method improves the signal recovery performance and extends the dynamic detection range of detectors.
Basis Token Consistency A Practical Mechanism for Strong Web Cache Consistency
call "Basis Token Consistency" or BTC; when implemented at the server, this mechanism allows any ... between the BTC algorithm and the use of the Time-To-Live (TTL) heuristic. This research was supported
Algorithms and Experiments: The New (and Old) Methodology
Moret, Bernard
Algorithms and Experiments: The New (and Old) Methodology. Bernard M.E. Moret, Department of Computer ... twenty years have seen enormous progress in the design of algorithms, but little of it has been put into practice. Because many recently developed algorithms are hard to characterize theoretically and have large
Priority Algorithms for Graph Optimization Problems Allan Borodin
Larsen, Kim Skak
Priority Algorithms for Graph Optimization Problems. Allan Borodin, University of Toronto ... of priority or "greedy-like" algorithms as initiated in [10] and as extended to graph theoretic problems ... there are several natural input formulations for a given problem and we show that priority algorithm bounds
Algorithms for builder guidelines
Balcomb, J.D.; Lekov, A.B.
1989-06-01
The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets are also presented. 5 refs., 3 tabs.
A Complexity Analysis of a Jacobi Method for Lattice Basis Reduction
Qiao, Sanzheng
A Complexity Analysis of a Jacobi Method for Lattice Basis Reduction. Zhaofei Tian, Department ... the Jacobi method introduced by S. Qiao [23], and show that it has the same complexity as the LLL algorithm. Our experimental results show that the Jacobi method outperforms the LLL algorithm in not only
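Both LLL and the Jacobi method target higher dimensions, but their two-dimensional special case, Lagrange/Gauss reduction, is compact enough to show the core operation (size-reduce one basis vector against the other, swap, repeat):

```python
# Lagrange/Gauss reduction of a 2-D integer lattice basis: the 2-D special
# case that LLL (and the Jacobi method discussed above) generalize.

def lagrange_reduce(b1, b2):
    """Return a reduced basis (shortest vector first) of the lattice."""
    dot = lambda u, w: u[0] * w[0] + u[1] * w[1]
    while True:
        if dot(b1, b1) > dot(b2, b2):
            b1, b2 = b2, b1                      # keep the shorter vector first
        mu = round(dot(b1, b2) / dot(b1, b1))    # size-reduction coefficient
        if mu == 0:
            return b1, b2
        b2 = (b2[0] - mu * b1[0], b2[1] - mu * b1[1])

s, t = lagrange_reduce((1, 1), (3, 4))
print(s, t)  # a shortest lattice vector comes first
```

The swaps and integer size-reductions preserve the lattice (the basis determinant is unchanged up to sign), which the tests below check.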
Milk Futures, Options and Basis
Haigh, Michael; Stockton, Matthew; Anderson, David P.; Schwart Jr., Robert B.
2001-10-12
The milk futures and options market enables producers and processors to manage price risk. This publication explains hedging, margin accounts, basis and how to track it, and other fundamentals of the futures and options market....
Broader source: Energy.gov [DOE]
CRAD for Safety Basis (SB). Criteria Review and Approach Documents (CRADs) that can be used to conduct a well-organized and thorough assessment of elements of safety and health programs.
Accelerating Majorization Algorithms
Jan de Leeuw
2011-01-01
incomplete data via the EM algorithm. Journal of the Royal ... Accelerating Majorization Algorithms. Jan de Leeuw. Abstract: ... construction of majorization algorithms and their rate of
Accelerating Majorization Algorithms
Leeuw, Jan de
2008-01-01
…incomplete data via the EM algorithm. Journal of the Royal… Abstract: …construction of majorization algorithms and their rate of…
KIRKPATRICK, BONNIE
2011-01-01
Contents (fragment): 3.2.1 The Peeling Algorithm and Elston… Algorithm; 4 Algorithms for Inference; 4.1 Gibbs…
A Theoretical and Algorithmic Characterization of Bulge Knees
2015-05-29
…the Pareto front) and bulge knee, to the best of our knowledge, is the only… …magnitudes (stress-versus-displacement trade-off that is inherent in engineering)…
Compulsory Elective Theoretical Physics
Dutz, Hartmut
Compulsory Elective Theoretical Physics (physics606 or, if done previously, 1 module out of physics751, physics754, physics755, physics760, physics7501), 7 cp; Specialization (at least 24 cp out of physics61a, -61b, -61c and/or physics62a, -62b, -62c), 24 cp; Elective Advanced Lectures (at least 18 cp out…
An algorithm for constrained one-step inversion of spectral CT data
Barber, Rina Foygel; Schmidt, Taly Gilat; Pan, Xiaochuan
2015-01-01
We develop a primal-dual algorithm that allows for one-step inversion of spectral CT transmission photon counts data to a basis map decomposition. The algorithm allows for image constraints to be enforced on the basis maps during the inversion. The derivation of the algorithm makes use of a local upper bounding quadratic approximation to generate descent steps for non-convex spectral CT data discrepancy terms, combined with a new convex-concave optimization algorithm. Convergence of the algorithm is demonstrated on simulated spectral CT data. Simulations with noise and anthropomorphic phantoms show examples of how to employ the constrained one-step algorithm for spectral CT data.
Sampling-based algorithms for optimal motion planning
Karaman, Sertac
During the last decade, sampling-based path planning algorithms, such as probabilistic roadmaps (PRM) and rapidly exploring random trees (RRT), have been shown to work well in practice and possess theoretical guarantees ...
Zheng, Chunmiao
…of uncertainties in the hydraulic conductivity (K) field. Both methodologies couple a genetic algorithm (GA)… …on the basis of those potential monitoring wells that are most frequently selected by the individual designs…
Hanford Generic Interim Safety Basis
Lavender, J.C.
1994-09-09
The purpose of this document is to identify WHC programs and requirements that are an integral part of the authorization basis for nuclear facilities that are generic to all WHC-managed facilities. The purpose of these programs is to implement the DOE Orders, as WHC becomes contractually obligated to implement them. The Hanford Generic ISB focuses on the institutional controls and safety requirements identified in DOE Order 5480.23, Nuclear Safety Analysis Reports.
Mathematical methods of theoretical physics
Karl Svozil
2015-07-01
Course material for mathematical methods of theoretical physics intended for an undergraduate audience.
Protein Folding Challenge and Theoretical Computer Science Somenath Biswas
Biswas, Somenath
Department of Computer… …the chain of amino acids that defines a protein. The protein folding problem is: given a sequence of amino… …to use an efficient algorithm to carry out protein folding. The atoms in a protein molecule attract each…
Theoretical Computer Science 00 (2010) 1–14, Procedia Computer Science
2010-01-01
…An abort wastes all computation of a transaction and might happen right before its completion. A waiting… "Bounds on Contention Management Algorithms," Johannes Schneider and Roger Wattenhofer, {jschneid, wattenhofer}@tik.ee.ethz.ch
Gravitational lens modeling with basis sets
Birrer, Simon; Refregier, Alexandre
2015-01-01
We present a strong lensing modeling technique based on versatile basis sets for the lens and source planes. Our method uses high performance Monte Carlo algorithms, allows for an adaptive build up of complexity and bridges the gap between parametric and pixel based reconstruction methods. We apply our method to a HST image of the strong lens system RXJ1131-1231 and show that our method finds a reliable solution and is able to detect substructure in the lens and source planes simultaneously. Using mock data we show that our method is sensitive to sub-clumps with masses four orders of magnitude smaller than the main lens, which corresponds to about $10^8 M_{\\odot}$, without prior knowledge on the position and mass of the sub-clump. The modelling approach is flexible and maximises automation to facilitate the analysis of the large number of strong lensing systems expected in upcoming wide field surveys. The resulting search for dark sub-clumps in these systems, without mass-to-light priors, offers promise for p...
THEORETICAL MODELING AND COMPUTATIONAL SIMULATION OF ROBUST CONTROL FOR MARS AIRCRAFT
Oh, Seyool
2014-05-31
The focus of this dissertation is the development of control system design algorithms for autonomous operation of an aircraft in the Martian atmosphere. This research will show theoretical modeling and computational ...
A polynomial-time Nash equilibrium algorithm for repeated games
Littman, Michael L.
…of theoretical and practical interest. The computational complexity of finding a Nash equilibrium for a one… …a Nash equilibrium for an average-payoff repeated bimatrix game, and presents a polynomial-time algorithm…
Satisfiability of logic programming based on radial basis function neural networks
Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)
2014-07-10
In this paper, we propose a new technique for testing the satisfiability of propositional logic programs and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we build radial basis function neural networks to represent propositional logic formulas that have exactly three variables in each clause. We use the prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm determines the hidden parameters (the centers and the widths). The mean of the sum-squared-error function measures the performance of the two algorithms. We apply the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to many problems, such as electronic circuits and other NP-complete problems.
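The training pipeline this abstract outlines (clustering to place the hidden centers, then solving for the output weights) can be sketched on a toy regression task. Everything below is illustrative: the data, the width heuristic, and the use of plain least squares in place of the prey-predator step are my assumptions, not the paper's logic-programming setup.

```python
import numpy as np

def kmeans_1d(x, k, iters=50, seed=0):
    """Plain 1-D K-means: returns k sorted cluster centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(x, size=k, replace=False)
    for _ in range(iters):
        labels = np.argmin(np.abs(x[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):          # skip empty clusters
                centers[j] = x[labels == j].mean()
    return np.sort(centers)

def rbf_features(x, centers, width):
    """Gaussian RBF design matrix, one column per hidden unit."""
    return np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width ** 2))

# Toy regression target
x = np.linspace(0.0, 2 * np.pi, 200)
y = np.sin(x)

centers = kmeans_1d(x, k=10)                  # K-means picks the centers
width = (centers.max() - centers.min()) / 10  # a simple width heuristic
Phi = rbf_features(x, centers, width)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # output weights by least squares
mse = np.mean((Phi @ w - y) ** 2)
```

The same two-stage structure (unsupervised hidden layer, linear output solve) is what makes RBF networks cheap to train compared with full backpropagation.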
Approximation Algorithms for Covering Problems
Koufogiannakis, Christos
2009-01-01
Contents (fragment): 1.3.1 Sequential Algorithms; …Distributed 2-approximation algorithm for CMIP2 (Alg. …); 2 Sequential Algorithm; 2.1 The Greedy Algorithm for Monotone…
Algorithmic Gauss-Manin Connection Algorithms to Compute Hodge-theoretic Invariants
Schulze, Mathias
Contents (fragment): Singular library linalg.lib; A.2 Singular library gaussman… Abstract: …are considered to be equal and form an (equivalence) class. This leads to a classification problem, a form being an object in this class. The concept of invariants serves to approach classification…
…time algorithms for computing reduced-dimension models for uncertain systems. Here we present algorithms that compute lower-dimensional realizations of an uncertain system, and compare their theoretical and com… …of the computational difficulties in handling more realistic systems. The uncertain system representation…
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Feller, D; Schuchardt, Karen L.; Didier, Brett T.; Elsethagen, Todd; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared; Li, Jun
The Basis Set Exchange (BSE) provides a web-based user interface for downloading and uploading Gaussian-type (GTO) basis sets, including effective core potentials (ECPs), from the EMSL Basis Set Library. It provides an improved user interface and capabilities over its predecessor, the EMSL Basis Set Order Form, for exploring the contents of the EMSL Basis Set Library. The popular Basis Set Order Form and underlying Basis Set Library were originally developed by Dr. David Feller and have been available from the EMSL webpages since 1994. BSE not only allows downloading of the more than 200 Basis sets in various formats; it allows users to annotate existing sets and to upload new sets. (Specialized Interface)
Internal dosimetry technical basis manual
Not Available
1990-12-20
The internal dosimetry program at the Savannah River Site (SRS) consists of radiation protection programs and activities used to detect and evaluate intakes of radioactive material by radiation workers. Examples of such programs are: air monitoring; surface contamination monitoring; personal contamination surveys; radiobioassay; and dose assessment. The objectives of the internal dosimetry program are to demonstrate that the workplace is under control and that workers are not being exposed to radioactive material, and to detect and assess inadvertent intakes in the workplace. The Savannah River Site Internal Dosimetry Technical Basis Manual (TBM) is intended to provide a technical and philosophical discussion of the radiobioassay and dose assessment aspects of the internal dosimetry program. Detailed information on air, surface, and personal contamination surveillance programs is not given in this manual except for how these programs interface with routine and special bioassay programs.
Common basis for cellular motility
Henry G. Zot; Javier E. Hasbun; Nguyen Van Minh
2015-10-31
Motility is characteristic of life, but a common basis for movement has remained to be identified. Diverse systems in motion shift between two states depending on interactions that turnover at the rate of an applied cycle of force. Although one phase of the force cycle terminates the decay of the most recent state, continuation of the cycle of force regenerates the original decay process in a recursive cycle. By completing a cycle, kinetic energy is transformed into probability of sustaining the most recent state and the system gains a frame of reference for discrete transitions having static rather than time-dependent probability. The probability of completing a recursive cycle is computed with a Markov chain comprised of two equilibrium states and a kinetic intermediate. Given rate constants for the reactions, a random walk reproduces bias and recurrence times of walking motor molecules and bacterial flagellar switching with unrivaled fidelity.
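The two-equilibrium-states-plus-intermediate Markov chain described here can be illustrated with a small random walk; the states and transition probabilities below are invented for the sketch. By Kac's formula, the mean recurrence time of a state is the reciprocal of its long-run occupancy, which the simulation estimates.

```python
import random

# Two equilibrium states A and B with a kinetic intermediate I
# (structure from the abstract; the numbers are illustrative).
P = {
    'A': [('A', 0.6), ('I', 0.4)],
    'I': [('A', 0.3), ('B', 0.7)],
    'B': [('B', 0.5), ('I', 0.5)],
}

def step(state, rng):
    """Sample the next state from the transition distribution of `state`."""
    r, acc = rng.random(), 0.0
    for nxt, p in P[state]:
        acc += p
        if r < acc:
            return nxt
    return P[state][-1][0]      # guard against float round-off

def simulate(steps, seed=1):
    rng = random.Random(seed)
    state, counts = 'A', {'A': 0, 'I': 0, 'B': 0}
    for _ in range(steps):
        counts[state] += 1
        state = step(state, rng)
    return counts

counts = simulate(200_000)
# Occupancies should approach the stationary distribution
# (pi_A, pi_I, pi_B) = (0.75, 1.0, 1.4) / 3.15; recurrence times are 1/pi.
```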
A LOGICAL INVERTED TAXONOMY OF SORTING ALGORITHMS S.M. Merritt K.K. Lau
Lau, Kung-Kiu
School of Computer Science… …taxonomy of sorting algorithms, a high-level, top-down, conceptually simple and symmetric categorization… …taxonomy of sorting algorithms. This provides a logical basis for the inverted taxonomy and expands…
The Static Universe Hypothesis: Theoretical Basis and Observational Tests of the Hypothesis
Thomas B. Andrews
2001-09-07
From the axiom of the unrestricted repeatability of all experiments, Bondi and Gold argued that the universe is in a stable, self-perpetuating equilibrium state. This concept generalizes the usual cosmological principle to the perfect cosmological principle in which the universe looks the same from any location at any time. Consequently, I hypothesize that the universe is static and in an equilibrium state (non-evolving). New physics is proposed based on the concept that the universe is a pure wave system. Based on the new physics and assuming a static universe, processes are derived for the Hubble redshift and the cosmic background radiation field. Then, following the scientific method, I test deductions of the static universe hypothesis using precise observational data primarily from the Hubble Space Telescope. Applying four different global tests of the space-time metric, I find that the observational data consistently fits the static universe model. The observational data also show that the average absolute magnitudes and physical radii of first-rank elliptical galaxies have not changed over the last 5 to 15 billion years. Because the static universe hypothesis is a logical deduction from the perfect cosmological principle and the hypothesis is confirmed by the observational data, I conclude that the universe is static and in an equilibrium state.
Boyer, Edmond
…for example, the long-term use of groundwater heat pumps for air conditioning of homes or buildings can induce… …and hydrogeological background. The presence of organic pollutants in the aquifer can amplify these phenomena… …and/or the well productivity, (ii) an inappropriate temperature for the use of groundwater heat pumps for air…
Cooling algorithms based on the 3-bit majority
Phillip Kaye
2007-05-15
Algorithmic cooling is a potentially important technique for making scalable NMR quantum computation feasible in practice. Given the constraints imposed by this approach to quantum computing, the most likely cooling algorithms to be practicable are those based on simple reversible polarization compression (RPC) operations acting locally on small numbers of bits. Several different algorithms using 2- and 3-bit RPC operations have appeared in the literature, and these are the algorithms I consider in this note. Specifically, I show that the RPC operation used in all these algorithms is essentially a majority vote of 3 bits, and prove the optimality of the best such algorithm. I go on to derive some theoretical bounds on the performance of these algorithms under some specific assumptions about errors.
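The central observation, that the RPC step is a 3-bit majority vote, can be checked exactly: three independent bits of bias ε majority-vote to a single bit of bias (3ε − ε³)/2, which exceeds ε for 0 < ε < 1. A small enumeration (my own sketch, not code from the paper) confirms the closed form:

```python
from itertools import product

def majority_bias(eps):
    """Bias of the majority of three independent bits, each 0 w.p. (1+eps)/2."""
    p0 = (1 + eps) / 2
    p_maj = 0.0
    for bits in product([0, 1], repeat=3):
        pr = 1.0
        for b in bits:
            pr *= p0 if b == 0 else 1 - p0
        if sum(bits) <= 1:          # at most one 1 -> majority is 0
            p_maj += pr
    return 2 * p_maj - 1            # convert probability back to bias

eps = 0.1
boosted = majority_bias(eps)        # closed form: (3*eps - eps**3) / 2
```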
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider
2008-03-01
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules—23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert
2007-04-01
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 26 cost modules—24 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, and high-level waste.
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider
2009-12-01
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules—23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.
Splitting Algorithms for Convex Optimization and Applications to Sparse Matrix Factorization
Rong, Rong
2013-01-01
Contents (fragment): …Algorithms; Splitting Algorithms; Proximal Point Algorithm…
Algorithms and Problem Solving Introduction
Razak, Saquib
Unit 16: Algorithms and Problem Solving. Topics: Introduction; What is an Algorithm?; Algorithm Properties; Example; Exercises. What is an Algorithm? An algorithm… The algorithm must be general, that is, it should solve the problem for all possible input sets to the problem.
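A compact example of the properties these notes list (definite steps, finiteness, correctness, generality): binary search works for every sorted input list, not just one example. The choice of binary search is my own illustration, not one of the unit's exercises.

```python
def binary_search(items, target):
    """Return the index of target in the sorted list items, or -1 if absent."""
    lo, hi = 0, len(items) - 1
    while lo <= hi:                 # finiteness: the range shrinks every pass
        mid = (lo + hi) // 2        # definiteness: each step is unambiguous
        if items[mid] == target:
            return mid
        if items[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1                       # generality: handles any sorted input

idx = binary_search([2, 3, 5, 7, 11, 13], 7)   # -> 3
```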
Efficient Algebraic Representations for Throughput-Oriented Algorithms
McKinlay, Christopher E.
2013-01-01
Contents (fragment): …Algorithm; …Algorithm; Throughput-Oriented Algorithm Design; Multilinear…
Algorithms and Software for PCR Primer Design
Huang, Yu-Ting
2015-01-01
Contents (fragment): 5.3.3 Algorithm; 5.2.4 Algorithm; …clique problems and MCDPD; Algorithm 1…
Rubinfeld, Ronitt
Sublinear time algorithms represent a new paradigm in computing, where an algorithm must give some sort of an answer after inspecting only a very small portion of the input. We discuss the types of answers that one can ...
Algorithms for strategic agents
Weinberg, S. Matthew (Seth Matthew)
2014-01-01
In traditional algorithm design, no incentives come into play: the input is given, and your algorithm must produce a correct output. How much harder is it to solve the same problem when the input is not given directly, but ...
Indigenous Algorithms, Organizations, and Rationality
Leaf, Murray
2008-01-01
Indigenous Optimizing Algorithm. Mathematical Anthropology… …the use of maximizing algorithms in behavior is a crucial… …the knowledge, rules, and algorithms that they apply. If we…
Variational Algorithms for Marginal MAP
Liu, Q; Ihler, A
2013-01-01
Reference fragments: A. L. Yuille, "CCCP algorithms to minimize the Bethe…," 2004; "A tutorial on MM algorithms," The American Statistician; "…time approximation algorithms for the Ising model," SIAM…
Energy Science and Technology Software Center (OSTI)
002651IBMPC00 Algorithm for Accounting for the Interactions of Multiple Renewable Energy Technologies in Estimation of Annual Performance
Physical Algorithms Roger Wattenhofer
Computer Engineering and Networks Laboratory (TIK), ETH Zurich. …to an ICALP 2010 invited talk, intending to encourage research in physical algorithms. The area of physical algorithms deals with networked systems of active agents. These agents have access to limited information…
James R. Chelikowsky
2009-03-31
The work reported here took place at the University of Minnesota from September 15, 2003 to November 14, 2005. This funding resulted in 10 invited articles or book chapters, 37 articles in refereed journals and 13 invited talks. The funding helped train 5 PhD students. The research supported by this grant focused on developing theoretical methods for predicting and understanding the properties of matter at the nanoscale. Within this regime, new phenomena occur that are characteristic of neither the atomic limit, nor the crystalline limit. Moreover, this regime is crucial for understanding the emergence of macroscopic properties such as ferromagnetism. For example, elemental Fe clusters possess magnetic moments that reside between the atomic and crystalline limits, but the transition from the atomic to the crystalline limit is not a simple interpolation between the two size regimes. To capitalize properly on predicting such phenomena in this transition regime, a deeper understanding of the electronic, magnetic and structural properties of matter is required, e.g., electron correlation effects are enhanced within this size regime and the surface of a confined system must be explicitly included. A key element of our research involved the construction of new algorithms to address problems peculiar to the nanoscale. Typically, one would like to consider systems with thousands of atoms or more, e.g., a silicon nanocrystal that is 7 nm in diameter would contain over 10,000 atoms. Previous ab initio methods could address systems with hundreds of atoms whereas empirical methods can routinely handle hundreds of thousands of atoms (or more). However, these empirical methods often rely on ad hoc assumptions and lack incorporation of structural and electronic degrees of freedom. The key theoretical ingredients in our work involved the use of ab initio pseudopotentials and density functional approaches. 
The key numerical ingredients involved the implementation of algorithms for solving the Kohn-Sham equation without the use of an explicit basis, i.e., a real space grid. We invented algorithms for a solution of the Kohn-Sham equation based on Chebyshev 'subspace filtering'. Our filtering algorithms dramatically enhanced our ability to explore systems with thousands of atoms, i.e., we examined silicon quantum dots with approximately 11,000 atoms (or 40,000 electrons). We applied this algorithm to a number of nanoscale systems to examine the role of quantum confinement on electronic and magnetic properties: (1) Doping of nanocrystals and nanowires, including both magnetic and non-magnetic dopants and the role of self-purification; (2) Optical excitations and electronic properties of nanocrystals; (3) Intrinsic defects in nanostructures; and (4) The emergence of ferromagnetism from atoms to crystals.
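The subspace-filtering idea can be illustrated at toy scale: a Chebyshev polynomial applied to a matrix damps eigencomponents inside a chosen interval and amplifies those below it, steering a random vector toward the wanted eigenspace. The diagonal stand-in Hamiltonian and all parameters below are illustrative, not the report's real-space Kohn-Sham solver.

```python
import numpy as np

rng = np.random.default_rng(0)
n, deg = 200, 12
H = np.diag(np.linspace(0.0, 1.0, n))   # stand-in "Hamiltonian"
a, b = 0.2, 1.0                         # spectrum interval to suppress
e, c = (b - a) / 2, (b + a) / 2         # half-width and center of [a, b]

def cheb_filter(H, v, deg, c, e):
    """Apply T_deg((H - c I)/e) to v via the Chebyshev three-term recurrence."""
    y_prev = v
    y = (H @ v - c * v) / e
    for _ in range(2, deg + 1):
        y_next = 2.0 * (H @ y - c * y) / e - y_prev
        y_prev, y = y, y_next
    return y

v = rng.standard_normal(n)              # random start vector
y = cheb_filter(H, v, deg, c, e)
y /= np.linalg.norm(y)
# Fraction of the filtered vector living on eigenvalues below a = 0.2:
low_weight = np.linalg.norm(y[: int(0.2 * n)])
```

Because |T_deg| stays bounded by 1 on [a, b] but grows like cosh outside it, a modest polynomial degree already concentrates the vector almost entirely on the low end of the spectrum, which is what makes repeated filter-then-orthogonalize sweeps effective.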
THEORETICAL PHYSICS Faculty of Physics
Pachucki, Krzysztof
Institute of Theoretical Physics, Faculty of Physics, Warsaw University, 1998–1999 (Warsaw 2000). …Division of Field Theory and Statistical Physics; RG Division of General Relativity and Gravitation; MP Division… Address: Hoza 69, PL 00-681 Warsaw, Poland. Phone: (+48 22) 628 33 96. Fax: …
Game Theoretical Snapshots Sergiu Hart
Hart, Sergiu
Game Theoretical Snapshots, Sergiu Hart, June 2015. Center for the Study of Rationality, Dept. of Mathematics and Dept. of Economics, The Hebrew University of Jerusalem. hart@huji.ac.il, http://www.ma.huji.ac.il/hart
Authorization basis for the 209-E Building
TIFFANY, M.S.
1999-02-23
This Authorization Basis document is one of three documents that constitute the Authorization Basis for the 209-E Building. Per the U.S. Department of Energy, Richland Operations Office (RL) letter 98-WSD-074, this document, the 209-E Building Preliminary Hazards Analysis (WHC-SD-WM-TI-789), and the 209-E Building Safety Evaluation Report (97-WSD-074) constitute the Authorization Basis for the 209-E Building. This Authorization Basis and the associated controls and safety programs will remain in place until safety documentation addressing deactivation of the 209-E Building is developed by the contractor and approved by RL.
Recent Theoretical Results for Advanced Thermoelectric Materials...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Recent Theoretical Results for Advanced Thermoelectric Materials: transport theory and first-principles calculations…
Algorithms for Quantum Computers
Jamie Smith; Michele Mosca
2010-01-07
This paper surveys the field of quantum computer algorithms. It gives a taste of both the breadth and the depth of the known algorithms for quantum computers, focusing on some of the more recent results. It begins with a brief review of quantum Fourier transform based algorithms, followed by quantum searching and some of its early generalizations. It continues with a more in-depth description of two more recent developments: algorithms developed in the quantum walk paradigm, followed by tensor network evaluation algorithms (which include approximating the Tutte polynomial).
Relation between the XL algorithm and Gröbner bases algorithms
International Association for Cryptologic Research (IACR)
Makoto Sugita, Mitsuru Kawazoe. …the XL algorithm and Gröbner bases algorithms. The XL algorithm was proposed to be a more efficient algorithm to solve a system of equations with a special assumption, without trying to calculate a whole Gr…
Optimizing qubit Hamiltonian parameter estimation algorithms using PSO
Alexandr Sergeevich; Stephen D. Bartlett
2012-06-18
We develop qubit Hamiltonian single parameter estimation techniques using a Bayesian approach. The algorithms considered are restricted to projective measurements in a fixed basis, and are derived under the assumption that the qubit measurement is much slower than the characteristic qubit evolution. We optimize a non-adaptive algorithm using particle swarm optimization (PSO) and compare with a previously-developed locally-optimal scheme.
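As a sketch of the optimization layer only (not the Bayesian estimation itself), a minimal particle swarm optimizer in one dimension looks like this; all coefficients are textbook defaults, not the values used in the paper:

```python
import random

def pso_minimize(f, lo, hi, n_particles=20, iters=100, seed=0):
    """Minimal 1-D particle swarm optimization: returns the best point found."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                 # inertia, cognitive, social weights
    xs = [rng.uniform(lo, hi) for _ in range(n_particles)]
    vs = [0.0] * n_particles
    pbest = list(xs)                          # each particle's best position
    gbest = min(xs, key=f)                    # swarm-wide best position
    for _ in range(iters):
        for i in range(n_particles):
            r1, r2 = rng.random(), rng.random()
            vs[i] = (w * vs[i]
                     + c1 * r1 * (pbest[i] - xs[i])
                     + c2 * r2 * (gbest - xs[i]))
            xs[i] = min(hi, max(lo, xs[i] + vs[i]))   # clamp to the box
            if f(xs[i]) < f(pbest[i]):
                pbest[i] = xs[i]
            if f(xs[i]) < f(gbest):
                gbest = xs[i]
    return gbest

best = pso_minimize(lambda x: (x - 3.0) ** 2, -10.0, 10.0)
```

In a non-adaptive estimation scheme like the one described, the quantity being minimized would be an expected estimation error over the fixed measurement schedule rather than this toy quadratic.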
Section Summary: Properties of Algorithms
Properties of Algorithms; Algorithms for Searching and Sorting; Greedy Algorithms; Halting Problem. Problems and Algorithms: In many… This procedure is called an algorithm. Definition: An algorithm…
Landscape Engineering: removing local traps in the chopped random basis optimization
Niklas Rach; Matthias M. Müller; Tommaso Calarco; Simone Montangero
2015-06-15
In quantum optimal control theory the success of an optimization algorithm is highly influenced by how the figure of merit to be optimized behaves as a function of the control field, i.e. by the control landscape. Constraints on the control field introduce local minima in the landscape --traps-- which might prevent an efficient solution of the optimal control problem. The Chopped Random Basis (CRAB) optimal control algorithm is constructed to improve the optimization efficiency by introducing an expansion of the control field onto a truncated basis, that is, it works with a limited control field bandwidth. We study the influence of traps on the success probability of CRAB and extend the original algorithm to engineer the landscape in order to eliminate the traps; we demonstrate that this development exploits the advantages of both (unconstrained) gradient algorithms and of truncated basis methods. Finally, we characterize the behavior of the extended CRAB under additional constraints and show that for reasonable constraints the convergence properties are still maintained.
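The basis-expansion ingredient of CRAB can be sketched as follows: the control field is written as a handful of randomized Fourier components, so any search happens in a low-dimensional, bandwidth-limited space. Here a least-squares projection onto such a basis stands in for the actual optimal-control loop; the pulse shape, frequencies, and sizes are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
T, n_t, n_c = 1.0, 400, 5
t = np.linspace(0.0, T, n_t)

# CRAB-style randomized harmonics: near-integer frequencies with jitter
freqs = (np.arange(1, n_c + 1) + rng.uniform(-0.5, 0.5, n_c)) * 2 * np.pi / T

# Truncated basis: n_c sines and n_c cosines -> a 2*n_c-dimensional search space
basis = np.concatenate([np.sin(np.outer(t, freqs)),
                        np.cos(np.outer(t, freqs))], axis=1)

target = np.tanh(5 * (t - T / 2))       # an arbitrary desired pulse shape
coef, *_ = np.linalg.lstsq(basis, target, rcond=None)
field = basis @ coef                    # best bandwidth-limited control field
err = np.sqrt(np.mean((field - target) ** 2))
```

In CRAB proper the coefficients are tuned against a quantum figure of merit with a gradient-free search; the projection above only shows how few parameters a bandwidth-limited field needs.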
McLachlan, Geoff
IEEE Transactions on Neural Networks, vol. 15, no. 3, May 2004. "Using the EM Algorithm to Train Neural Networks: Misconceptions and a New Algorithm for Multiclass Classification," Shu-Kay Ng and Geoffrey McLachlan. …in recent years as the basis for various algorithms in application areas of neural networks such as pat…
CRAD, Safety Basis - Los Alamos National Laboratory Waste Characteriza...
Office of Environmental Management (EM)
CRAD, Safety Basis - Los Alamos National Laboratory Waste Characterization, Reduction, and Repackaging Facility
Mathematics: The Basis for Quantitative Knowledge
Trevors, J. T.; Saier, M. H.
2010-01-01
DOI 10.1007/s11270-009-0300-9. …the inference that mathematics has underpinned virtually all… …in future research. Mathematics can be considered the…
Scientific Basis for Bacterial TMDLs in Georgia
Radcliffe, David
…and Natural Resources, University of Georgia, Athens, GA. Atlanta, Georgia, June 2006. …Advisory Committee as part of the Georgia Statewide Water Planning process. www.gadnr.org/gswp/Documents/info…
Adaptive Basis Sampling for Smoothing Splines
Zhang, Nan
2015-08-03
…However, the high computational cost of smoothing splines for large data sets has hindered their wide application. We develop a new method, named adaptive basis sampling, for efficient computation of smoothing splines in super-large samples. Generally, a…
Rossi, Tuomas P; Sakko, Arto; Puska, Martti J; Nieminen, Risto M
2015-01-01
We present an approach for generating local numerical basis sets of improving accuracy for first-principles nanoplasmonics simulations within time-dependent density functional theory. The method is demonstrated for copper, silver, and gold nanoparticles that are of experimental interest but computationally demanding due to the semi-core d-electrons that affect their plasmonic response. The basis sets are constructed by augmenting numerical atomic orbital basis sets by truncated Gaussian-type orbitals generated by the completeness-optimization scheme, which is applied to the photoabsorption spectra of homoatomic metal atom dimers. We obtain basis sets of improving accuracy up to the complete basis set limit and demonstrate that the performance of the basis sets transfers to simulations of larger nanoparticles and nanoalloys as well as to calculations with various exchange-correlation functionals. This work promotes the use of the local basis set approach of controllable accuracy in first-principles nanoplasmon...
Fast Algorithms for High Frequency Interconnect Modeling in VLSI Circuits and Packages
Yi, Yang
2011-02-22
…The algorithm is accelerated by approximating magnetic-charge effects and by modeling currents with a solenoidal basis. The relative error of the algorithm with respect to the commercial tool is below 3%, while the speed is up to an order of magnitude faster. 3) Since…
A Probability Analysis for Candidate-Based Frequent Itemset Algorithms
Van Gucht, Dirk
University of Antwerp, Middelheimlaan 1, 2020 Antwerp, Belgium, nele.dexters@ua.ac.be; Paul W. Purdom, Indiana University. …of candidates, which is an important step in frequent itemset mining algorithms, from a theoretical point… …and failure (a candidate that is infrequent). For a selection of candidate-based frequent itemset mining algo…
Critical Review of Theoretical Models for Anomalous Effects (Cold Fusion) in Deuterated Metals
Chechin, V A; Rabinowitz, M; Kim, Y E
1994-01-01
We briefly summarize the reported anomalous effects in deuterated metals at ambient temperature, commonly known as "Cold Fusion" (CF), with an emphasis on important experiments as well as the theoretical basis for the opposition to interpreting them as cold fusion. Then we critically examine more than 25 theoretical models for CF, including unusual nuclear and exotic chemical hypotheses. We conclude that they do not explain the data.
Call for Papers: 5th Workshop on Algorithm Engineering (WAE 2001)
Brodal, Gerth Stølting
Call for Papers, 5th Workshop on Algorithm Engineering (WAE 2001), BRICS, University of Aarhus, Denmark, August 28-30, 2001. Scope: The Workshop on Algorithm Engineering covers research in all aspects of ... future research. WAE 2001 is sponsored by BRICS and EATCS (the European Association for Theoretical
Williams, P.T.
1993-09-01
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H¹ Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
Theoretical Aspects of Particle Production
B. R. Webber
1999-12-17
These lectures describe some of the latest data on particle production in high-energy collisions and compare them with theoretical calculations and models based on QCD. The main topics covered are: fragmentation functions and factorization, small-x fragmentation, hadronization models, differences between quark and gluon fragmentation, current and target fragmentation in deep inelastic scattering, and heavy quark fragmentation.
THEORETICAL BIOLOGY FORUM 105 · 2/2012, Pisa · Roma, Fabrizio Serra Editore, MMXII. Authorization of the Court of Pisa no. 13 of 14 May 2012; previously registered with the Court of Genova. Fabrizio Serra editore®, Casella postale n. 1, succursale n. 8, I-56123 Pisa. Uffici di Pisa: Via Santa ...
Theoretical Perspectives on Protein Folding
Thirumalai, Devarajan
Theoretical Perspectives on Protein Folding. D. Thirumalai, Edward P. O'Brien, Greg Morrison, ... Understanding how monomeric proteins fold under in vitro conditions is crucial to describing their functions ... much remains to be done to solve the protein folding problem in the broadest sense. Annu. Rev. Biophys. 2010, 159.
Combinatorial Phylogenetics of Reconstruction Algorithms
Kleinman, Aaron Douglas
2012-01-01
... and A. Spillner. Consistency of the Neighbor-Net algorithm. Algorithms for Molecular Biology, 2:8, 2007. [10] P.D. Gusfield. Efficient algorithms for inferring evolutionary ...
Algorithms for Greechie Diagrams
Brendan D. McKay; Norman D. Megill; Mladen Pavicic
2001-01-21
We give a new algorithm for generating Greechie diagrams with an arbitrarily chosen number of atoms or blocks (with 2, 3, 4, ... atoms) and provide a computer program for generating the diagrams. The results show that the previous algorithm does not produce every diagram and that it is at least 100,000 times slower. We also provide an algorithm and programs for checking whether Greechie diagrams satisfy equations defining varieties of orthomodular lattices, and give examples from Hilbert lattices. At the end we discuss some additional characteristics of Greechie diagrams.
Algorithms incorporating concurrency and caching
Fineman, Jeremy T
2009-01-01
This thesis describes provably good algorithms for modern large-scale computer systems, including today's multicores. Designing efficient algorithms for these systems involves overcoming many challenges, including concurrency ...
Optimized Algorithms Boost Combustion Research
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Optimized Algorithms Boost Combustion Research: Methane Flame Simulations Run 6x Faster on NERSC's Hopper Supercomputer. November 25,...
Nagurney, Anna
... Transformation. Appears in Transportation Research D 11, 171-190. With Dmytro Matsypura. Slide topics: Electric Power Supply Chain Markets; Transmission Service Providers; Introduction; The Model; The Algorithm; Modeling of Electric Power Supply Chain Networks with Fuel ...
Benkart, Georgia
2008-01-01
This article contains an investigation of the equitable basis for the Lie algebra sl_2. Denoting this basis by {x,y,z}, we have [x,y] = 2x + 2y, [y,z] = 2y + 2z, [z,x] = 2z + 2x. One focus of our study is the group of automorphisms G generated by exp(ad x*), exp(ad y*), exp(ad z*), where {x*,y*,z*} is the basis for sl_2 dual to {x,y,z} with respect to the trace form (u,v) = tr(uv). We show that G is isomorphic to the modular group PSL_2(Z). Another focus of our investigation is the lattice L = Zx + Zy + Zz. We prove that the orbit G(x) equals {u in L | (u,u) = 2}. We determine the precise relationship between (i) the group G, (ii) the group of automorphisms for sl_2 that preserve L, (iii) the group of automorphisms and antiautomorphisms for sl_2 that preserve L, and (iv) the group of isometries for (,) that preserve L. We obtain analogous results for the lattice L* = Zx* + Zy* + Zz*. Relative to the equitable basis, the matrix of the trace form is a Cartan matrix of hyperbolic type; consequently, we identify the equitable ...
PRELIMINARY SELECTION OF MGR DESIGN BASIS EVENTS
J.A. Kappes
1999-09-16
The purpose of this analysis is to identify the preliminary design basis events (DBEs) for consideration in the design of the Monitored Geologic Repository (MGR). For external events and natural phenomena (e.g., earthquake), the objective is to identify those initiating events that the MGR will be designed to withstand. Design criteria will ensure that radiological release scenarios resulting from these initiating events are beyond design basis (i.e., have a scenario frequency less than once per million years). For internal (i.e., human-induced and random equipment failures) events, the objective is to identify credible event sequences that result in bounding radiological releases. These sequences will be used to establish design basis criteria for MGR structures, systems, and components (SSCs) in order to prevent or mitigate radiological releases. The safety strategy presented in this analysis for preventing or mitigating DBEs is based on the preclosure safety strategy outlined in ''Strategy to Mitigate Preclosure Offsite Exposure'' (CRWMS M&O 1998f). DBE analysis is necessary to provide feedback and requirements to the design process, and also to demonstrate compliance with proposed 10 CFR 63 (Dyer 1999b) requirements. DBE analysis is also required to identify and classify the SSCs that are important to safety (ITS).
Neural Basis & Technical What are ERPs?
Coulson, Seana
Slide fragments: Neural Basis & Technical Details. What are ERPs? How neurons communicate; EEG invented 1928 by Hans Berger; early recording set-up (human subject); EEG monitors alertness. ERPs are formed by averaging EEG time-locked to the onset of stimuli that require cognitive ...
CRAD, Facility Safety- Nuclear Facility Safety Basis
Broader source: Energy.gov [DOE]
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) that can be used for assessment of a contractor's Nuclear Facility Safety Basis.
Review and Approval of Nuclear Facility Safety Basis and Safety Design Basis Documents
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2014-12-19
This Standard describes a framework and the criteria to be used for approval of (1) safety basis documents, as required by 10 Code of Federal Regulation (C.F.R.) 830, Nuclear Safety Management, and (2) safety design basis documents, as required by Department of Energy (DOE) Standard (STD)-1189-2008, Integration of Safety into the Design Process.
Theoretical issues in Spheromak research
Cohen, R. H.; Hooper, E. B.; LoDestro, L. L.; Mattor, N.; Pearlstein, L. D.; Ryutov, D. D.
1997-04-01
This report summarizes the state of theoretical knowledge of several physics issues important to the spheromak. It was prepared as part of the preparation for the Sustained Spheromak Physics Experiment (SSPX), which addresses these goals: energy confinement and the physics which determines it; the physics of transition from a short-pulsed experiment, in which the equilibrium and stability are determined by a conducting wall ("flux conserver"), to one in which the equilibrium is supported by external coils. Physics is examined in this report in four important areas. The status of present theoretical understanding is reviewed, physics which needs to be addressed more fully is identified, and tools which are available or require more development are described. Specifically, the topics include: MHD equilibrium and design, review of MHD stability, spheromak dynamo, and edge plasma in spheromaks.
Decision-Theoretic User Interface Generation Krzysztof Z. Gajos and Daniel S. Weld
Wobbrock, Jacob O.
Decision-Theoretic User Interface Generation Krzysztof Z. Gajos and Daniel S. Weld Department of Computer Science and Engineering University of Washington Seattle, WA 98195, USA {kgajos,weld interfaces and developing efficient algorithms for their automatic generation (Gajos & Weld, 2004; Gajos et
The matrix square root from a new functional perspective: theoretical results and
Meini, Beatrice
The matrix square root from a new functional perspective: theoretical results and computational issues. Beatrice Meini. Abstract: We give a new characterization of the matrix square root and a new algorithm for its computation. We show how the matrix square root is related to the constant block coe...
Adaptive Radial Basis Function Detector for Beamforming
Chen, Sheng
... the theoretical linear minimum bit error rate benchmarker, when supporting four users with the aid of two receive ... outperforms the L-MMSE one and is capable of operating in hostile rank-deficient scenarios. However, digital communication signal detection can be viewed as a classification problem [14]-[16], where the receiver detector ...
Randomized Algorithms with Splitting: Why the Classic Randomized Algorithms
Del Moral, Pierre
Randomized Algorithms with Splitting: Why the Classic Randomized Algorithms Do Not Work and How ... Abstract: We show that the original classic randomized algorithms for approximate counting in NP ... simultaneously multiple Markov chains. We present several algorithms of the combined version, which we ...
The theoretical significance of G
T. Damour
1999-01-22
The quantization of gravity, and its unification with the other interactions, is one of the greatest challenges of theoretical physics. Current ideas suggest that the value of G might be related to the other fundamental constants of physics, and that gravity might be richer than the standard Newton-Einstein description. This gives added significance to measurements of G and to Cavendish-type experiments.
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
Perkó, Zoltán; Gilli, Luca; Lathouwers, Danny; Kloosterman, Jan Leen
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time, since the high order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems.
These tests show consistently good performance, both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well-known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
Theoretical and Experimental Analysis of a Two-Stage System for Classification
Sperduti, Alessandro
Theoretical and Experimental Analysis of a Two-Stage System for Classification Nicola Giusti a popular approach to multicategory classification tasks: a two-stage system based on a first (global to the recognition of handwritten digits. In one system, the first classifier is a fuzzy basis functions network
Structural basis for the antibody neutralization of Herpes simplex...
Office of Scientific and Technical Information (OSTI)
Structural basis for the antibody neutralization of Herpes simplex virus.
Technical Cost Modeling - Life Cycle Analysis Basis for Program...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Technical Cost Modeling - Life Cycle Analysis Basis for Program Focus. Polymer Composites Research in the LM Materials Program. Overview...
Nuclear Safety Basis Program Review Overview and Management Oversight...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Nuclear Safety Basis Program Review Overview and Management Oversight Standard Review Plan. This...
ORISE: The Medical Basis for Radiation-Accident Preparedness...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
The Medical Basis for Radiation-Accident Preparedness: Medical Management. Proceedings of the Fifth International REACTS Symposium on the Medical Basis for Radiation-Accident...
Assessing Beyond Design Basis Seismic Events and Implications...
Office of Environmental Management (EM)
Assessing Beyond Design Basis Seismic Events and Implications on Seismic Risk. September 19, 2012...
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
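As background for the 1/2-approximation guarantee mentioned in the abstract, the classic sort-based greedy heuristic can be sketched as follows. This is a hypothetical illustration, not the paper's locally dominant multithreaded algorithm; both, however, carry the same 1/2 worst-case bound: every skipped edge shares an endpoint with an already-chosen edge of at least equal weight.

```python
# Hypothetical sketch of the classic greedy 1/2-approximation for weighted
# matching: repeatedly take the heaviest edge whose endpoints are both free.
def greedy_matching(edges):
    """edges: iterable of (weight, u, v) tuples; returns a list of matched pairs."""
    matched = set()   # vertices already covered by the matching
    matching = []
    for w, u, v in sorted(edges, reverse=True):  # heaviest edge first
        if u not in matched and v not in matched:
            matching.append((u, v))
            matched.update((u, v))
    return matching
```

The sort makes this O(m log m); the paper's contribution is avoiding exactly this inherently sequential global ordering while keeping the same approximation quality.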
Radioactive Waste Management Basis, April 2006
Perkins, B K
2011-08-31
This Radioactive Waste Management Basis (RWMB) documents radioactive waste management practices adopted at Lawrence Livermore National Laboratory (LLNL) pursuant to Department of Energy (DOE) Order 435.1, Radioactive Waste Management. The purpose of this Radioactive Waste Management Basis is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
TECHNICAL BASIS DOCUMENT FOR NATURAL EVENT HAZARDS
KRIPPS, L.J.
2006-07-31
This technical basis document was developed to support the documented safety analysis (DSA) and describes the risk binning process and the technical basis for assigning risk bins for natural event hazard (NEH)-initiated accidents. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls.
Chopped random-basis quantum optimization
Tommaso Caneva; Tommaso Calarco; Simone Montangero
2011-08-22
In this work we describe in detail the "Chopped RAndom Basis" (CRAB) optimal control technique recently introduced to optimize t-DMRG simulations [arXiv:1003.3750]. Here we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
Safety Basis Information System | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Theoretical studies of combustion dynamics
Bowman, J.M. [Emory Univ., Atlanta, GA (United States)
1993-12-01
The basic objectives of this research program are to develop and apply theoretical techniques to fundamental dynamical processes of importance in gas-phase combustion. There are two major areas currently supported by this grant. One is reactive scattering of diatom-diatom systems, and the other is the dynamics of complex formation and decay based on L{sup 2} methods. In all of these studies, the authors focus on systems that are of interest experimentally, and for which potential energy surfaces based, at least in part, on ab initio calculations are available.
Ricci, Laura
Outline: 1. Introduction; 2. Epidemic virus diffusion: models; 3. Epidemic algorithms; 4. Gossip algorithms.
Tiled QR factorization algorithms
Bouwmeester, Henricus; Langou, Julien; Robert, Yves
2011-01-01
This work revisits existing algorithms for the QR factorization of rectangular matrices composed of p-by-q tiles, where p >= q. Within this framework, we study the critical paths and performance of algorithms such as Sameh and Kuck, Modi and Clarke, Greedy, and those found within PLASMA. Although neither Modi and Clarke nor Greedy is optimal, both are shown to be asymptotically optimal for all matrices of size p = q^2 f(q), where f is any function such that f(q) tends to 0 as q tends to infinity. This novel and important complexity result applies to all matrices where p and q are proportional, p = λq, with λ >= 1, thereby encompassing many important situations in practice (least squares). We provide an extensive set of experiments that show the superiority of the new algorithms for tall matrices.
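The tiled algorithms above all build on the dense Householder QR factorization A = QR. As a minimal illustrative sketch (not the tiled PLASMA variants themselves), one reflector per column zeros the subdiagonal entries:

```python
# Minimal dense Householder QR sketch, assuming real input given as a list
# of rows. One reflector H = I - 2 v v^T is applied per column.
def householder_qr(A):
    m, n = len(A), len(A[0])
    R = [row[:] for row in A]
    # M accumulates the product of reflectors (i.e. Q^T).
    M = [[float(i == j) for j in range(m)] for i in range(m)]
    for k in range(min(m, n)):
        # Householder vector v zeroing R[k+1:, k].
        x = [R[i][k] for i in range(k, m)]
        norm_x = sum(t * t for t in x) ** 0.5
        if norm_x == 0.0:
            continue
        v = x[:]
        v[0] += norm_x if x[0] >= 0 else -norm_x  # sign avoids cancellation
        norm_v = sum(t * t for t in v) ** 0.5
        v = [t / norm_v for t in v]
        # Left-apply H to rows k..m-1 of R and of the accumulator M.
        for T, cols in ((R, n), (M, m)):
            for j in range(cols):
                s = 2.0 * sum(v[i] * T[k + i][j] for i in range(m - k))
                for i in range(m - k):
                    T[k + i][j] -= s * v[i]
    # A = Q R with Q = M^T.
    Q = [[M[j][i] for j in range(m)] for i in range(m)]
    return Q, R
```

The tiled algorithms studied in the paper reorganize exactly this elimination into block updates on p-by-q tiles so that independent tile kernels can run in parallel.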
Theoretical perspectives on strange physics
Ellis, J.
1983-04-01
Kaons are heavy enough to have an interesting range of decay modes available to them, and light enough to be produced in sufficient numbers to explore rare modes with satisfying statistics. Kaons and their decays have provided at least two major breakthroughs in our knowledge of fundamental physics. They have revealed to us CP violation, and their lack of flavor-changing neutral interactions warned us to expect charm. In addition, K⁰–anti-K⁰ mixing has provided us with one of our most elegant and sensitive laboratories for testing quantum mechanics. There is every reason to expect that future generations of kaon experiments with intense sources would add further to our knowledge of fundamental physics. This talk attempts to set future kaon experiments in a general theoretical context, and indicate how they may bear upon fundamental theoretical issues. A survey of different experiments which would be done with an Intense Medium Energy Source of Strangeness, including rare K decays, probes of the nature of CP violation, μ decays, hyperon decays, and neutrino physics, is given. (WHK)
Theoretical Perspectives on Protein Folding
D. Thirumalai; Edward P. O'Brien; Greg Morrison; Changbong Hyeon
2010-07-18
Understanding how monomeric proteins fold under in vitro conditions is crucial to describing their functions in the cellular context. Significant advances both in theory and experiments have resulted in a conceptual framework for describing the folding mechanisms of globular proteins. The experimental data and theoretical methods have revealed the multifaceted character of proteins. Proteins exhibit universal features that can be determined using only the number of amino acid residues (N) and polymer concepts. The sizes of proteins in the denatured and folded states, cooperativity of the folding transition, dispersions in the melting temperatures at the residue level, and time scales of folding are to a large extent determined by N. The consequences of finite N especially on how individual residues order upon folding depends on the topology of the folded states. Such intricate details can be predicted using the Molecular Transfer Model that combines simulations with measured transfer free energies of protein building blocks from water to the desired concentration of the denaturant. By watching one molecule fold at a time, using single molecule methods, the validity of the theoretically anticipated heterogeneity in the folding routes, and the N-dependent time scales for the three stages in the approach to the native state have been established. Despite the successes of theory, of which only a few examples are documented here, we conclude that much remains to be done to solve the "protein folding problem" in the broadest sense.
Quantum Algorithms for Unit Group and principal ideal problem
Hong Wang; Zhi Ma
2010-09-01
Computing the unit group and solving the principal ideal problem for a number field are two of the main tasks in computational algebraic number theory. This paper proposes efficient quantum algorithms for these two problems when the number field has constant degree. We improve on the algorithms proposed by Hallgren by using a period function which is not one-to-one on its fundamental period. Furthermore, given access to a function which encodes the lattice, a new method to compute the basis of an unknown real-valued lattice is presented.
Inner model theoretic geology Gunter Fuchs
Schindler, Ralf
Inner model theoretic geology. Gunter Fuchs, Ralf Schindler. November 18, 2014. Abstract: One of the basic concepts of set theoretic geology is the mantle of a model of set theory V: it is the intersection ... in what was dubbed Set Theoretic Geology in that paper. One of the main results of [FHR] was that any ...
Incentives and Internet Algorithms
Feigenbaum, Joan
Incentives and Internet Algorithms. Joan Feigenbaum, Yale University, http://www.cs.yale.edu/~jf, and Scott ... Slide topics: game theory and Internet design; game theory and the Internet (long history of work in networking); how to cope with selfishness?; Internet architecture: robust scalability; how to build large and robust systems?
Quantum algorithms for algebraic problems
Andrew M. Childs; Wim van Dam
2008-12-02
Quantum computers can execute algorithms that dramatically outperform classical computation. As the best-known example, Shor discovered an efficient quantum algorithm for factoring integers, whereas factoring appears to be difficult for classical computers. Understanding what other computational problems can be solved significantly faster using quantum algorithms is one of the major challenges in the theory of quantum computation, and such algorithms motivate the formidable task of building a large-scale quantum computer. This article reviews the current state of quantum algorithms, focusing on algorithms with superpolynomial speedup over classical computation, and in particular, on problems with an algebraic flavor.
Technical basis for internal dosimetry at Hanford
Sula, M.J.; Carbaugh, E.H.; Bihl, D.E.
1989-04-01
The Hanford Internal Dosimetry Program, administered by Pacific Northwest Laboratory for the US Department of Energy, provides routine bioassay monitoring for employees who are potentially exposed to radionuclides in the workplace. This report presents the technical basis for routine bioassay monitoring and the assessment of internal dose at Hanford. The radionuclides of concern include tritium, corrosion products (⁵⁸Co, ⁶⁰Co, ⁵⁴Mn, and ⁵⁹Fe), strontium, cesium, iodine, europium, uranium, plutonium, and americium. Sections on each of these radionuclides discuss the sources and characteristics; dosimetry; bioassay measurements and monitoring; dose measurement, assessment, and mitigation; and bioassay follow-up treatment. 64 refs., 42 figs., 118 tabs.
Technical basis for internal dosimetry at Hanford
Sula, M.J.; Carbaugh, E.H.; Bihl, D.E.
1991-07-01
The Hanford Internal Dosimetry Program, administered by Pacific Northwest Laboratory for the US Department of Energy, provides routine bioassay monitoring for employees who are potentially exposed to radionuclides in the workplace. This report presents the technical basis for routine bioassay monitoring and the assessment of internal dose at Hanford. The radionuclides of concern include tritium, corrosion products (⁵⁸Co, ⁶⁰Co, ⁵⁴Mn, and ⁵⁹Fe), strontium, cesium, iodine, europium, uranium, plutonium, and americium. Sections on each of these radionuclides discuss the sources and characteristics; dosimetry; bioassay measurements and monitoring; dose measurement, assessment, and mitigation; and bioassay follow-up treatment. 78 refs., 35 figs., 115 tabs.
NDRPProtocolTechBasisCompiled020705.doc
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Structural Basis for Activation of Cholera Toxin
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Technical Basis for PNNL Beryllium Inventory
Johnson, Michelle Lynn
2014-07-09
The Department of Energy (DOE) issued Title 10 of the Code of Federal Regulations Part 850, “Chronic Beryllium Disease Prevention Program” (the Beryllium Rule) in 1999 and required full compliance by no later than January 7, 2002. The Beryllium Rule requires the development of a baseline beryllium inventory of the locations of beryllium operations and other locations of potential beryllium contamination at DOE facilities. The baseline beryllium inventory is also required to identify workers exposed or potentially exposed to beryllium at those locations. Prior to DOE issuing 10 CFR 850, Pacific Northwest National Laboratory (PNNL) had documented the beryllium characterization and worker exposure potential for multiple facilities in compliance with DOE’s 1997 Notice 440.1, “Interim Chronic Beryllium Disease.” After DOE’s issuance of 10 CFR 850, PNNL developed an implementation plan to be compliant by 2002. In 2014, an internal self-assessment (ITS #E-00748) of PNNL’s Chronic Beryllium Disease Prevention Program (CBDPP) identified several deficiencies. One deficiency is that the technical basis for establishing the baseline beryllium inventory when the Beryllium Rule was implemented was either not documented or not retrievable. In addition, the beryllium inventory itself had not been adequately documented and maintained since PNNL established its own CBDPP, separate from Hanford Site’s program. This document reconstructs PNNL’s baseline beryllium inventory as it would have existed when it achieved compliance with the Beryllium Rule in 2001 and provides the technical basis for the baseline beryllium inventory.
Algorithms for Next-Generation High-Throughput Sequencing Technologies
Kao, Wei-Chun
2011-01-01
The optimization problem Genetic Algorithm
Giménez, Domingo
Slides: the optimization problem; genetic algorithms; particle swarm optimization; experimental results for time-power optimization; conclusions. META, October 27-31, 2014.
A. Khan; B. Yoshimura; J. K. Freericks
2015-08-11
One of the challenges with quantum simulation in ion traps is that the effective spin-spin exchange couplings are not uniform across the lattice. This can be particularly important in Penning trap realizations, where the presence of an ellipsoidal boundary at the edge of the trap leads to dislocations in the crystal. By adding an additional anharmonic potential to better control inter-ion spacing, and a triangular shaped rotating wall potential to reduce the appearance of dislocations, one can achieve better uniformity of the ionic positions. In this work, we calculate the axial phonon frequencies and the spin-spin interactions driven by a spin-dependent optical dipole force, and discuss what effects the more uniform ion spacing has on the spin simulation properties of Penning trap quantum simulators. Indeed, we find the spin-spin interactions behave more like a power law for a wide range of parameters.
Graph Algorithms in the Internet Age
Stanton, Isabelle Lesley
2012-01-01
A Flexible Reservation Algorithm for Advance Network Provisioning
Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex
2010-04-12
Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems such as ESnet's OSCARS establish secure virtual circuits with guaranteed bandwidth for a specified length of time. However, users currently cannot inquire about bandwidth availability, nor receive alternative suggestions when reservation requests fail. In general, the number of reservation options grows exponentially with the number of nodes n and the current reservation commitments. We present a novel approach for path finding in time-dependent networks that takes advantage of user-provided parameters of total volume and time constraints, producing options for earliest completion and shortest duration. The theoretical complexity is only O(n^2 r^2) in the worst case, where r is the number of reservations in the desired time interval. We have implemented our algorithm and developed efficient methodologies for incorporation into network reservation frameworks. Performance measurements confirm the theoretical predictions.
High-performance combinatorial algorithms
Pinar, Ali
2003-01-01
Algorithm FIRE -- Feynman Integral REduction
A. V. Smirnov
2008-08-02
The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as the Laporta algorithm, the s-bases algorithm, region-bases, and explicit integration over loop momenta when possible. Currently it is being used in complicated three-loop calculations.
Multipartite entanglement in quantum algorithms
D. Bruß; C. Macchiavello
2010-07-23
We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyse the multipartite entanglement properties in the Deutsch-Jozsa, Grover and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.
Axioms, algorithms and Hilbert's Entscheidungsproblem
Lecture slides by Jan Stovicek, Department of Mathematical Sciences, NTNU (www.ntnu.no), September 9th, 2008. Outline: the Decision Problem; formal languages and theories; incompleteness; undecidability.
Unique Aspects and Scientific Challenges - Theoretical Physics...
Office of Science (SC) Website
have been achieved recently due to improved computing technology and more efficient algorithms, but further improvement will be required to confront future Intensity Frontier...
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Research in Theoretical Particle Physics
Feldman, Hume A; Marfatia, Danny
2014-09-24
This document is the final report on activity supported under DOE Grant Number DE-FG02-13ER42024. The report covers the period July 15, 2013 – March 31, 2014. Faculty supported by the grant during the period were Danny Marfatia (1.0 FTE) and Hume Feldman (1% FTE). The grant partly supported University of Hawaii students, David Yaylali and Keita Fukushima, who are supervised by Jason Kumar. Both students are expected to graduate with Ph.D. degrees in 2014. Yaylali will be joining the University of Arizona theory group in Fall 2014 with a 3-year postdoctoral appointment under Keith Dienes. The group’s research covered topics subsumed under the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. Many theoretical results related to the Standard Model and models of new physics were published during the reporting period. The report contains brief project descriptions in Section 1. Sections 2 and 3 lists published and submitted work, respectively. Sections 4 and 5 summarize group activity including conferences, workshops and professional presentations.
Theoretical models for Bump Cepheids
G. Bono; V. Castellani; M. Marconi
2002-01-08
We present the results of a theoretical investigation aimed at testing whether full amplitude, nonlinear, convective models account for the I-band light curves of Bump Cepheids in the Large Magellanic Cloud (LMC). We selected two objects from the OGLE sample that show a well-defined bump along the decreasing (short-period) and the rising (long-period) branch respectively. We find that current models do reproduce the luminosity variation over the entire pulsation cycle if the adopted stellar mass is roughly 15 % smaller than predicted by evolutionary models that neglect both mass loss and convective core overshooting. Moreover, we find that the fit to the light curve of the long-period Cepheid located close to the cool edge of the instability strip requires an increase in the mixing length from 1.5 to 1.8 Hp. This suggests an increase in the efficiency of the convective transport when moving toward cooler effective temperatures. Current pulsation calculations supply a LMC distance modulus ranging from 18.48 to 18.58 mag.
Random Search Algorithms Zelda B. Zabinsky
Del Moral , Pierre
Zelda B. Zabinsky, April 5, 2009. Abstract: random search algorithms with convergence results in probability. Random search algorithms include simulated annealing, tabu search, genetic algorithms, evolutionary programming, particle swarm optimization, and ant colony optimization.
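As an editorial illustration of one of the random search algorithms this entry lists, here is a minimal simulated annealing sketch. It is not from the cited work; the quadratic objective, Gaussian neighborhood, and geometric cooling schedule are arbitrary illustrative choices.

```python
import math
import random

def simulated_annealing(f, x0, neighbor, t0=1.0, cooling=0.95, steps=2000, seed=0):
    """Minimize f by simulated annealing: always accept improving moves,
    accept worsening moves with probability exp(-delta/T), cool T geometrically."""
    rng = random.Random(seed)
    x, fx = x0, f(x0)
    best, fbest = x, fx
    t = t0
    for _ in range(steps):
        y = neighbor(x, rng)
        fy = f(y)
        if fy <= fx or rng.random() < math.exp(-(fy - fx) / t):
            x, fx = y, fy
            if fx < fbest:
                best, fbest = x, fx
        t *= cooling
    return best, fbest

# Toy objective f(x) = (x - 3)^2 with a small Gaussian step as the neighborhood.
best, fbest = simulated_annealing(
    f=lambda x: (x - 3.0) ** 2,
    x0=10.0,
    neighbor=lambda x, rng: x + rng.gauss(0, 0.5),
)
```

With a fast cooling schedule the run degenerates into random descent late on, which is fine for a smooth one-dimensional objective; convergence "in probability", as the abstract puts it, is exactly the kind of guarantee such schedules trade away.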
Authorization basis status report (miscellaneous TWRS facilities, tanks and components)
Stickney, R.G.
1998-04-29
This report presents the results of a systematic evaluation conducted to identify miscellaneous TWRS facilities, tanks, and components with potentially needed authorization basis upgrades. It provides the authorization basis upgrade plan for the miscellaneous TWRS facilities, tanks, and components identified.
Office of Nuclear Safety Basis and Facility Design
Broader source: Energy.gov [DOE]
The Office of Nuclear Safety Basis & Facility Design establishes safety basis and facility design requirements and expectations related to analysis and design of nuclear facilities to ensure protection of workers and the public from the hazards associated with nuclear operations.
CRAD, Integrated Safety Basis and Engineering Design Review ...
Broader source: Energy.gov (indexed) [DOE]
Integrated Safety Basis and Engineering Design Review - August 20, 2014 (EA CRAD 31-4, Rev. 0) CRAD, Integrated Safety Basis and Engineering Design Review - August 20, 2014 (EA...
Nonlinear adaptive control using radial basis function approximants
Petersen, Jerry Lee
1993-01-01
The purpose of this research is to present an adaptive control strategy using the radial basis function approximation method. Surface approximation methods using radial basis function approximants will first be discussed. ...
Theoretical and Computational Neuroscience Gustavo Deco
Lambert, Patrik
(Germany: DAAD, Boehringer Foundation, Volkswagen Foundation) - Johan Larsson (Sweden: Generalitat, UPF). EU: Plasticity of cross-modal integration (Volkswagen Foundation). Computational and Theoretical Neuroscience.
Catalyst by Design - Theoretical, Nanostructural, and Experimental...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Emission Treatment Catalyst Catalyst by Design - Theoretical, Nanostructural, and Experimental Studies of Emission Treatment Catalyst Poster presented at the 16th Directions in...
Catalyst by Design - Theoretical, Nanostructural, and Experimental...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Oxidation Catalyst for Diesel Engine Emission Treatment Catalyst by Design - Theoretical, Nanostructural, and Experimental Studies of Oxidation Catalyst for Diesel Engine Emission...
New algorithms for adaptive optics point-spread function reconstruction
Eric Gendron; Yann Clénet; Thierry Fusco; Gérard Rousset
2006-06-28
Context. The knowledge of the point-spread function compensated by adaptive optics is of prime importance in several image restoration techniques such as deconvolution and astrometric/photometric algorithms. Wavefront-related data from the adaptive optics real-time computer can be used to accurately estimate the point-spread function in adaptive optics observations. The only point-spread function reconstruction algorithm implemented on an astronomical adaptive optics system makes use of particular functions, named $U_{ij}$. These $U_{ij}$ functions are derived from the mirror modes, and their number is proportional to the square of the number of mirror modes. Aims. We present here two new algorithms for point-spread function reconstruction that aim at suppressing the use of these $U_{ij}$ functions to avoid the storage of a large amount of data and to shorten the computation time of this PSF reconstruction. Methods. Both algorithms take advantage of the eigendecomposition of the residual parallel phase covariance matrix. In the first algorithm, the use of a basis in which the latter matrix is diagonal reduces the number of $U_{ij}$ functions to the number of mirror modes. In the second algorithm, this eigendecomposition is used to compute phase screens that follow the same statistics as the residual parallel phase covariance matrix, and thus suppress the need for these $U_{ij}$ functions. Results. Our algorithms dramatically reduce the number of $U_{ij}$ functions to be computed for the point-spread function reconstruction. Adaptive optics simulations show that both algorithms reconstruct the point-spread function accurately.
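The core idea of the second algorithm, generating random screens whose statistics match a given covariance matrix via its eigendecomposition, can be sketched in miniature. The 2x2 covariance below is a hypothetical stand-in for the residual phase covariance matrix; this is an editorial illustration, not the authors' code.

```python
import math
import random

def eig2x2_sym(a, b, c):
    """Eigendecomposition of the symmetric 2x2 matrix [[a, b], [b, c]]:
    returns eigenvalues and unit-norm eigenvectors (assumes b != 0;
    a diagonal matrix needs no rotation)."""
    half_tr = (a + c) / 2.0
    disc = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    lams = (half_tr + disc, half_tr - disc)
    vecs = []
    for lam in lams:
        # (A - lam*I) v = 0  =>  v is proportional to (b, lam - a)
        vx, vy = b, lam - a
        n = math.hypot(vx, vy)
        vecs.append((vx / n, vy / n))
    return lams, vecs

def sample_with_covariance(cov, n, seed=1):
    """Draw n zero-mean Gaussian pairs with covariance cov, using
    x = sum_i sqrt(lambda_i) * z_i * v_i with z_i standard normal."""
    a, b, c = cov[0][0], cov[0][1], cov[1][1]
    lams, vecs = eig2x2_sym(a, b, c)
    rng = random.Random(seed)
    pts = []
    for _ in range(n):
        z = [rng.gauss(0.0, math.sqrt(max(lam, 0.0))) for lam in lams]
        x = z[0] * vecs[0][0] + z[1] * vecs[1][0]
        y = z[0] * vecs[0][1] + z[1] * vecs[1][1]
        pts.append((x, y))
    return pts

cov = [[2.0, 0.8], [0.8, 1.0]]
pts = sample_with_covariance(cov, 20000)
m = len(pts)
cxx = sum(x * x for x, _ in pts) / m
cxy = sum(x * y for x, y in pts) / m
cyy = sum(y * y for _, y in pts) / m
```

The empirical covariance of the samples matches the target to sampling error, which is the property the reconstruction exploits at full scale.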
5. Greedy and other efficient optimization algorithms
Keil, David M.
Lecture notes: CSCI 347 Analysis of Algorithms, David M. Keil, Framingham State University. 5. Greedy and other fast optimization algorithms. 1. When the next step is easy ...
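To ground the "greedy" label for readers skimming these entries, the classic activity-selection problem makes a compact example: sorting by finish time and taking each compatible activity is provably optimal. This is an editorial sketch, not taken from the cited notes.

```python
def select_activities(intervals):
    """Classic greedy schedule: sort by finish time, then take each
    activity whose start is at or after the last chosen finish."""
    chosen = []
    last_finish = float("-inf")
    for start, finish in sorted(intervals, key=lambda it: it[1]):
        if start >= last_finish:
            chosen.append((start, finish))
            last_finish = finish
    return chosen

# A standard textbook instance; the optimum is four activities.
acts = [(1, 4), (3, 5), (0, 6), (5, 7), (3, 9), (5, 9),
        (6, 10), (8, 11), (8, 12), (2, 14), (12, 16)]
picked = select_activities(acts)
```

The "next step is easy" phrasing in the notes is exactly this: the locally earliest-finishing compatible activity is always safe to commit to.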
Zhou, Ping
2008-01-01
Electrical Engineering (Communication Theory and Systems).
Game-theoretic learning algorithm for a spatial coverage problem Ketan Savla and Emilio Frazzoli
Savla, Ketan
... utility based on "time spent alone" at the next target location, showing that the Nash equilibria of the game correspond ... of particular interest is the generation of efficient cooperative strategies for several mobile agents ... to complete the task, or the fuel/energy expenditure. A related problem has been investigated as the Weapon ...
PARFUME Theory and Model basis Report
Darrell L. Knudson; Gregory K Miller; G.K. Miller; D.A. Petti; J.T. Maki; D.L. Knudson
2009-09-01
The success of gas reactors depends upon the safety and quality of the coated particle fuel. The fuel performance modeling code PARFUME simulates the mechanical, thermal and physico-chemical behavior of fuel particles during irradiation. This report documents the theory and material properties behind various capabilities of the code, which include: 1) various options for calculating CO production and fission product gas release, 2) an analytical solution for stresses in the coating layers that accounts for irradiation-induced creep and swelling of the pyrocarbon layers, 3) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 4) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, and kernel migration (or amoeba effect), 5) two independent methods for determining particle failure probabilities, 6) a model for calculating release-to-birth (R/B) ratios of gaseous fission products that accounts for particle failures and uranium contamination in the fuel matrix, and 7) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. The accident condition entails diffusion of fission products through the particle coating layers and through the fuel matrix to the coolant boundary. This document represents the initial version of the PARFUME Theory and Model Basis Report. More detailed descriptions will be provided in future revisions.
Kramer, Peter
Theoretical Framework for Microscopic Osmotic Phenomena. Paul J. Atzberger, Department of Mathematics, University of California, Santa Barbara (2007). The basic ingredients of osmotic pressure are a solvent fluid with a soluble molecular ...
Multisensor data fusion algorithm development
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
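The wavelet fusion rule described above (merge in the transform domain, keeping the stronger detail content from each source) can be sketched in one dimension with a single-level Haar transform; the 2-D discrete wavelet transform used in the report applies the same pairwise idea along rows and columns. The signals and the max-magnitude fusion rule here are illustrative assumptions, not the report's exact algorithm.

```python
def haar_forward(sig):
    """Single-level Haar transform: (approximation, detail) coefficients.
    Assumes an even-length signal."""
    approx = [(sig[i] + sig[i + 1]) / 2.0 for i in range(0, len(sig), 2)]
    detail = [(sig[i] - sig[i + 1]) / 2.0 for i in range(0, len(sig), 2)]
    return approx, detail

def haar_inverse(approx, detail):
    """Exact inverse of haar_forward."""
    out = []
    for a, d in zip(approx, detail):
        out.extend([a + d, a - d])
    return out

def fuse(sig_a, sig_b):
    """Fusion rule: average the approximations, keep the larger-magnitude
    detail coefficient (details carry the sharp features each sensor saw)."""
    aa, da = haar_forward(sig_a)
    ab, db = haar_forward(sig_b)
    approx = [(x + y) / 2.0 for x, y in zip(aa, ab)]
    detail = [x if abs(x) >= abs(y) else y for x, y in zip(da, db)]
    return haar_inverse(approx, detail)

# Each "sensor" sees a different sharp edge; the fused signal keeps both.
a = [1.0, 1.0, 4.0, 0.0, 2.0, 2.0, 2.0, 2.0]
b = [1.0, 1.0, 2.0, 2.0, 0.0, 4.0, 2.0, 2.0]
fused = fuse(a, b)
```

Preserving detail coefficients rather than blending pixels is what lets wavelet fusion keep spectral/spatial information that intensity-modulation methods smear.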
Nonextensive lattice gauge theories: algorithms and methods
Rafael B. Frigori
2014-04-26
High-energy phenomena presenting strong dynamical correlations, long-range interactions and microscopic memory effects are well described by nonextensive versions of the canonical Boltzmann-Gibbs statistical mechanics. After a brief theoretical review, we introduce a class of generalized heat-bath algorithms that enable Monte Carlo lattice simulations of gauge fields on the nonextensive statistical ensemble of Tsallis. The algorithmic performance is evaluated as a function of the Tsallis parameter q in equilibrium and nonequilibrium setups. Then, we revisit short-time dynamic techniques, which in contrast to usual simulations in equilibrium present negligible finite-size effects and no critical slowing down. As an application, we investigate the short-time critical behaviour of the nonextensive hot Yang-Mills theory at q-values obtained from heavy-ion collision experiments. Our results imply that, when the equivalence of statistical ensembles is obeyed, the long-standing universality arguments relating gauge theories and spin systems hold also for the nonextensive framework.
Non adiabatic quantum search algorithms
A. Perez; A. Romanelli
2007-06-08
We present two new continuous-time quantum search algorithms similar to the adiabatic search algorithm, but without an adiabatic evolution. We find that both algorithms work for a wide range of values of the parameters of the Hamiltonian, and one of them has the additional feature that, for times larger than a characteristic one, it converges to a state which can be close to the searched state.
Selected Items in Jet Algorithms
Giuseppe Bozzi
2008-08-06
I provide a very brief overview of recent developments in jet algorithms, mostly focusing on the issue of infrared-safety.
Algorithms for dynamical overlap fermions
Stefan Schaefer
2006-09-28
An overview of the current status of algorithmic approaches to dynamical overlap fermions is given. In particular the issue of changing the topological sector is discussed.
A New Numerical Algorithm for Thermoacoustic and Photoacoustic Tomography with Variable Sound Speed
Qian, Jianliang; Uhlmann, Gunther; Zhao, Hongkai
2011-01-01
We present a new algorithm for reconstructing an unknown source in Thermoacoustic and Photoacoustic Tomography based on the recent advances in understanding the theoretical nature of the problem. We work with variable sound speeds that might be also discontinuous across some surface. The latter problem arises in brain imaging. The new algorithm is based on an explicit formula in the form of a Neumann series. We present numerical examples with non-trapping, trapping and piecewise smooth speeds, as well as examples with data on a part of the boundary. These numerical examples demonstrate the robust performance of the new algorithm.
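The Neumann-series form of the reconstruction, x = sum_k A^k b as an expansion of (I - A)^{-1} b, can be illustrated on a toy linear system. The matrix below is a generic contraction chosen for illustration, not the actual thermoacoustic operator.

```python
def matvec(a, x):
    """Dense matrix-vector product on plain lists."""
    return [sum(a[i][j] * x[j] for j in range(len(x))) for i in range(len(a))]

def neumann_solve(a, b, terms=60):
    """Approximate x = (I - A)^-1 b by the truncated Neumann series
    sum_k A^k b, valid when the spectral radius of A is below 1."""
    x = list(b)
    term = list(b)
    for _ in range(terms):
        term = matvec(a, term)  # term becomes A^k b
        x = [xi + ti for xi, ti in zip(x, term)]
    return x

# A contraction (entries small enough that the series converges).
A = [[0.2, 0.1], [0.05, 0.3]]
b = [1.0, 2.0]
x = neumann_solve(A, b)

# Residual of (I - A) x = b; note (I - A) x = x - A x.
r = [bi - (xi - ai) for bi, xi, ai in zip(b, x, matvec(A, x))]
```

The geometric decay of the terms is why truncating the series gives the robust convergence the numerical examples in the paper report, provided the trapping condition keeps the "spectral radius below 1" analogue in force.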
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2010-01-01
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since its inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving significant changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. 
Maintenance and distribution of controlled hard copies of the manual by PNNL was discontinued beginning with Revision 0.2. Revision Log: Rev. 0 (2/25/2005) Major revision and expansion. Rev. 0.1 (3/12/2007) Updated Chapters 5, 6 and 9 to reflect change in default ring calibration factor used in HEDP dose calculation software. Factor changed from 1.5 to 2.0 beginning January 1, 2007. Pages on which changes were made are as follows: 5.23, 5.69, 5.78, 5.80, 5.82, 6.3, 6.5, 6.29, and 9.2. Rev 0.2 (8/28/2009) Updated Chapters 3, 5, 6, 8 and 9. Chapters 6 and 8 were significantly expanded. References in the Preface and Chapters 1, 2, 4, and 7 were updated to reflect updates to DOE documents. Approved by HPDAC on 6/2/2009. Rev 1.0 (1/1/2010) Major revision. Updated all chapters to reflect the Hanford site wide implementation on January 1, 2010 of new DOE requirements for occupational radiation protection. The new requirements are given in the June 8, 2007 amendment to 10 CFR 835 Occupational Radiation Protection (Federal Register, June 8, 2007. Title 10 Part 835. U.S., Code of Federal Regulations, Vol. 72, No. 110, 31904-31941). Revision 1.0 to the manual replaces ICRP 26 dosimetry concepts and terminology with ICRP 60 dosimetry concepts and terminology and replaces external dose conversion factors from ICRP 51 with those from ICRP 74 for use in measurement of operational quantities with dosimeters. Descriptions of dose algorithms and dosimeter response characteristics, and field performance were updated to reflect changes in the neutron quality factors used in the measurement of operational quantities.
Algorithms for Supporting Compiled Communication
Yuan, Xin
Xin Yuan, Rami Melhem, Rajiv Gupta. We present an experimental compiler, ESUIF, that supports compiled communication for High ... algorithms used in ESUIF. We further demonstrate the effectiveness of compiled communication on all-optical ...
A Panoply of Quantum Algorithms
Bartholomew Furrow
2006-06-15
We create a variety of new quantum algorithms that use Grover's algorithm and similar techniques to give polynomial speedups over their classical counterparts. We begin by introducing a set of tools that carefully minimize the impact of errors on running time; those tools provide us with speedups to already-published quantum algorithms, such as improving Durr, Heiligman, Hoyer and Mhalla's algorithm for single-source shortest paths [quant-ph/0401091] by a factor of lg N. The algorithms we construct from scratch have a range of speedups, from O(E)->O(sqrt(VE lg V)) speedups in graph theory to an O(N^3)->O(N^2) speedup in dynamic programming.
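Since this entry builds everything on Grover's algorithm, a tiny classical statevector simulation of its two steps (oracle phase flip, then reflection about the mean) may help fix ideas. The 16-item search space and marked index are arbitrary; this is an editorial illustration only.

```python
import math

def grover(n_items, marked, iterations=None):
    """Statevector simulation of Grover search over n_items basis states,
    with real amplitudes (sufficient for this algorithm)."""
    amp = [1.0 / math.sqrt(n_items)] * n_items
    if iterations is None:
        # The standard near-optimal iteration count, about (pi/4) * sqrt(N).
        iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    for _ in range(iterations):
        amp[marked] = -amp[marked]        # oracle: phase-flip the marked state
        mean = sum(amp) / n_items         # diffusion: reflect about the mean
        amp = [2 * mean - a for a in amp]
    return amp

amp = grover(16, marked=3)
p_marked = amp[3] ** 2
```

After roughly (pi/4)·sqrt(N) iterations the marked amplitude dominates, which is the quadratic speedup the entry's graph and dynamic-programming results inherit.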
Algorithms for Computational Solvent Mapping of Proteins Tamas Kortvelyesi,1,2
Vajda, Sandor
... are ranked on the basis of their average free energies. To understand the relative importance of these factors, we developed alternative algorithms that use the DOCK and GRAMM programs for the initial search. An interesting approach to mapping is the MCSS method, which optimizes the free energy of numerous ligand ...
Recursive Dynamics Algorithms for Serial, Parallel, and Closed-chain Multibody Systems
Saha, Subir Kumar
Subir Kumar Saha, Department of Mechanical Engineering, IIT Delhi, Hauz Khas, New Delhi 110 016. ... Wehage and Haug (1982), Kamman and Huston (1984), Angeles and Lee (1988), Saha and Angeles (1991), ... and Saha (1997), which are the basis for the development of the recursive dynamics algorithms proposed ...
Theoretical and Experimental Evaluation of Chemical Reactivity
Wang, Qingsheng
2011-10-21
... theoretical and experimental methods. Methylcyclopentadiene (MCP) and Hydroxylamine (HA) are selected as representatives of unsaturated hydrocarbons and self-reacting chemicals, respectively. Chemical reactivity of MCP, including isomerization, dimerization...
Theoretical spectra of terrestrial exoplanet surfaces
Hu, Renyu
We investigate spectra of airless rocky exoplanets with a theoretical framework that self-consistently treats reflection and thermal emission. We find that a silicate surface on an exoplanet is spectroscopically detectable ...
Technical Planning Basis - DOE Directives, Delegations, and Requiremen...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
2, Technical Planning Basis by David Freshwater Functional areas: Defense Nuclear Facility Safety and Health Requirement, Safety and Security, The Guide assists DOENNSA field...
Protocol for Enhanced Evaluations of Beyond Design Basis Events...
Office of Environmental Management (EM)
Protocol for Enhanced Evaluations of Beyond Design Basis Events Supporting Implementation of Operating Experience Report 2013-01 Protocol for Enhanced Evaluations of Beyond Design...
Call for Papers 9th Annual European Symposium on Algorithms --ESA 2001
Brodal, Gerth Střlting
Call for Papers: 9th Annual European Symposium on Algorithms -- ESA 2001. BRICS, University of Aarhus, Denmark, August 28--31, 2001. Scope: the Symposium covers research in the use, design, and analysis of efficient algorithms. ESA 2001 is sponsored by BRICS and EATCS (the European Association for Theoretical Computer Science).
Dhar, Deepak
Sumedha and Deepak Dhar, Department of Theoretical Physics, Tata Institute of Fundamental ... algorithm for linear and branched polymers. There is a qualitative difference in the efficiency in these two ... for linear polymers, but as exp(cn^α) for branched (undirected and directed) polymers, where 0 ...
McEwen, Jason
... Blackett Laboratory, Prince Consort Road, London SW7 2AZ, U.K.; Department of Mathematics, Imperial College London, London SW7 2AZ, U.K. (Dated: August 20, 2013). A number of theoretically well ... to establish its validity. The approximation is implemented using a modular algorithm, designed to avoid ...
Local algorithms for graph partitioning and finding dense subgraphs
Andersen, Reid
2007-01-01
Reflections for quantum query algorithms
Ben W. Reichardt
2010-05-10
We show that any boolean function can be evaluated optimally by a quantum query algorithm that alternates a certain fixed, input-independent reflection with a second reflection that coherently queries the input string. Originally introduced for solving the unstructured search problem, this two-reflections structure is therefore a universal feature of quantum algorithms. Our proof goes via the general adversary bound, a semi-definite program (SDP) that lower-bounds the quantum query complexity of a function. By a quantum algorithm for evaluating span programs, this lower bound is known to be tight up to a sub-logarithmic factor. The extra factor comes from converting a continuous-time query algorithm into a discrete-query algorithm. We give a direct and simplified quantum algorithm based on the dual SDP, with a bounded-error query complexity that matches the general adversary bound. Therefore, the general adversary lower bound is tight; it is in fact an SDP for quantum query complexity. This implies that the quantum query complexity of the composition f(g,...,g) of two boolean functions f and g matches the product of the query complexities of f and g, without a logarithmic factor for error reduction. It further shows that span programs are equivalent to quantum query algorithms.
Sensor Networks: Distributed Algorithms Reloaded or Revolutions?
Sensor Networks: Distributed Algorithms Reloaded or Revolutions? Roger Wattenhofer. This paper wants to motivate the distributed algorithms community to study sensor networks. We discuss why ... a sensor network essentially is a database. The distributed algorithms community should join ...
Efficient Algorithms for High Dimensional Data Mining
Rakthanmanon, Thanawin
2012-01-01
... Resolution QRS Detection Algorithm for Sparsely Sampled ECG ...; Shamlo, 2011, A disk-aware algorithm for time series motif ...; J. M. Kleinberg, 1997, Two algorithms for nearest-neighbor ...
End of semester project Global Optimization algorithms
Dreyfuss, Pierre
End of semester project: Global Optimization algorithms. Ecole Polytechnique de l'Université de Nice. Contents include: II. Simulated annealing algorithm (SA); 2. Principle, algorithm and choice of parameters.
Minimally entangled typical thermal state algorithms
Stoudenmire, E. M.; White, Steven R.
2010-01-01
... and the algorithm continued by defining R_3 ... in the order indicated, this algorithm for multiplying MPOs scales ... Minimally entangled typical thermal state algorithms, E. M. Stoudenmire and ...
Comparison between Traditional Neural Networks and Radial Basis Function Networks
Wilamowski, Bogdan Maciej
Comparison between Traditional Neural Networks and Radial Basis Function Networks. Tiantian Xie, Hao ... traditional neural networks and radial basis function (RBF) networks, both of which ... The neural network architectures are analyzed and compared based on four different examples.
A Jacobi Method for Lattice Basis Reduction Sanzheng Qiao
Qiao, Sanzheng
A Jacobi Method for Lattice Basis Reduction. Sanzheng Qiao, Department of Computing and Software, Mc... ... decoding has been successfully used in wireless communications. In this paper, we propose a Jacobi method for lattice basis reduction. The Jacobi method is attractive because it is inherently parallel; thus high ...
Emergence of a measurement basis in atom-photon scattering
Yinnon Glickman; Shlomi Kotler; Nitzan Akerman; Roee Ozeri
2012-06-18
The process of quantum measurement has been a long-standing source of debate. A measurement is postulated to collapse a wavefunction onto one of the states of a predetermined set - the measurement basis. The origin of this basis is not specified within quantum mechanics. According to the theory of decoherence, a measurement basis is singled out by the nature of the coupling of a quantum system to its environment. Here we show how a measurement basis emerges in the evolution of the electronic spin of a single trapped atomic ion due to spontaneous photon scattering. Using quantum process tomography we visualize the projection of all spin directions onto this basis as a photon is scattered. These basis spin states are found to be aligned with the scattered photon propagation direction. In accordance with decoherence theory, they are subjected to a minimal increase in entropy due to the photon scattering, while orthogonal states become fully mixed and their entropy is maximally increased. Moreover, we show that detection of the scattered photon polarization measures the spin state of the ion, in the emerging basis, with high fidelity. Lastly, we show that while photon scattering entangles all superpositions of pointer states with the scattered photon polarization, the measurement-basis states themselves remain classically correlated with it. Our findings show that photon scattering by atomic spin superpositions fulfils all the requirements of a quantum measurement process.
A Direct Manipulation Language for Explaining Algorithms
Scott, Jeremy
Instructors typically explain algorithms in computer science by tracing their behavior, often on blackboards, sometimes with algorithm visualizations. Using blackboards can be tedious because they do not facilitate ...
Fast Computation Algorithm for Discrete Resonances among Gravity Waves
Elena Kartashova
2006-05-25
Traditionally, resonant interactions among short waves with large real wave-numbers were described statistically, and only a small domain in spectral space with integer wave-numbers - discrete resonances - had to be studied separately in resonators. Numerical simulations of the last few years showed unambiguously the existence of some discrete effects in the short-wave part of the wave spectrum. The newly presented model of laminated turbulence explains the appearance of these effects theoretically, thus posing a novel problem: the construction of fast algorithms for computing solutions of resonance conditions with integer wave-numbers of order $10^3$ and more. An example of such an algorithm for 4-wave interactions of gravity waves is given. Its generalization to different types of waves is briefly discussed.
Real-time algorithm for robust coincidence search
Petrovic, T.; Vencelj, M.; Lipoglavsek, M.; Gajevic, J.; Pelicon, P.
2012-10-20
In in-beam γ-ray spectroscopy experiments, we often look for coincident detection events. Among every N events detected, coincidence search is naively of complexity O(N²). When we limit the approximate width of the coincidence search window, the complexity can be reduced to O(N), permitting the implementation of the algorithm in real-time measurements, carried out indefinitely. We have built an algorithm to find simultaneous events between two detection channels. The algorithm was tested in an experiment where coincidences between X and γ rays detected in two HPGe detectors were observed in the decay of ⁶¹Cu. Functioning of the algorithm was validated by comparing the calculated experimental branching ratio for EC decay with the theoretical calculation for 3 selected γ-ray energies of the ⁶¹Cu decay. Our research opened a question on the validity of the adopted value of the total angular momentum of the 656 keV state (J^π = 1/2^-) in ⁶¹Ni.
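The windowed O(N) search described in this abstract can be sketched as a linear two-pointer scan over time-sorted event lists. This is a hedged illustration, not the authors' implementation; the function name, interface, and data are assumptions.

```python
# Hedged sketch of a linear-time coincidence search between two
# detection channels; the interface and data are illustrative, not
# the authors' implementation.

def coincidences(ch_a, ch_b, window):
    """Return (i, j) index pairs with |ch_a[i] - ch_b[j]| <= window.

    Both inputs must be time-sorted. The pointer j only moves forward,
    so for a narrow window the scan is linear rather than the naive
    O(N^2) comparison of all event pairs.
    """
    pairs = []
    j = 0
    for i, ta in enumerate(ch_a):
        # Skip channel-B events that are too old to ever match again.
        while j < len(ch_b) and ch_b[j] < ta - window:
            j += 1
        # Collect every B event inside the window around ta.
        k = j
        while k < len(ch_b) and ch_b[k] <= ta + window:
            pairs.append((i, k))
            k += 1
    return pairs

print(coincidences([1.0, 5.0, 9.0], [1.2, 4.0, 9.1], 0.5))  # [(0, 0), (2, 2)]
```

Because j never retreats past the window's left edge, the scan touches each event a bounded number of times when the window is narrow, which is what makes indefinite real-time operation feasible.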
Theoretical stellar models for old galactic clusters
V. Castellani; S. Degl'Innocenti; M. Marconi
1998-12-05
We present new evolutionary stellar models suitable for old Population I clusters, discussing both the consequences of the most recent improvements in the input physics and the effect of element diffusion within the stellar structures. Theoretical cluster isochrones are presented, covering the range of ages from 1 to 9 Gyr for four selected choices of the metallicity: Z = 0.007, 0.010, 0.015 and 0.020. Theoretical uncertainties in the efficiency of superadiabatic convection are discussed in some detail. Isochrone fitting to the CM diagrams of the two well-observed galactic clusters NGC2420 and M67 indicates that a mixing length parameter alpha = 1.9 appears adequate for reproducing the observed color of cool giant stars. The problems in matching theoretical predictions to the observed slope of MS stars are discussed.
Imaging algorithms in radio interferometry
R. J. Sault; T. A. Oosterloo
2007-01-08
The paper reviews progress in imaging in radio interferometry for the period 1993-1996. Unlike an optical telescope, the basic measurements of a radio interferometer (correlations between antennas) are indirectly related to a sky brightness image. In a real sense, algorithms and computers are the lenses of a radio interferometer. In the last 20 years, whereas interferometer hardware advances have resulted in improvements of a factor of a few, algorithm and computer advances have resulted in orders of magnitude improvement in image quality. Developing these algorithms has been a fruitful and comparatively inexpensive method of improving the performance of existing telescopes, and has made some newer telescopes possible. In this paper, we review recent developments in the algorithms used in the imaging part of the reduction process. What constitutes an `imaging algorithm'? Whereas once there was a steady `forward' progression in the reduction process of editing, calibrating, transforming and, finally, deconvolving, this is no longer true. The introduction of techniques such as self-calibration, and algorithms that go directly from visibilities to final images, have made the dividing lines less clear. Although we briefly consider self-calibration, for the purposes of this paper calibration issues are generally excluded. Most attention will be directed to the steps which form final images from the calibrated visibilities.
Theoretical studies of chemical reaction dynamics
Schatz, G.C. [Argonne National Laboratory, IL (United States)
1993-12-01
This collaborative program with the Theoretical Chemistry Group at Argonne involves theoretical studies of gas-phase chemical reactions and related energy transfer and photodissociation processes. Many of the reactions studied are of direct relevance to combustion; others are selected because they provide important examples of special dynamical processes, or are of relevance to experimental measurements. Both classical trajectory and quantum reactive scattering methods are used for these studies, and the types of information determined range from thermal rate constants to state-to-state differential cross sections.
The pointer basis and the feedback stabilization of quantum systems
L. Li; A. Chia; H. M. Wiseman
2014-11-19
The dynamics for an open quantum system can be `unravelled' in infinitely many ways, depending on how the environment is monitored, yielding different sorts of conditioned states, evolving stochastically. In the case of ideal monitoring these states are pure, and the set of states for a given monitoring forms a basis (which is overcomplete in general) for the system. It has been argued elsewhere [D. Atkins et al., Europhys. Lett. 69, 163 (2005)] that the `pointer basis' as introduced by Zurek and Paz [Phys. Rev. Lett 70, 1187(1993)], should be identified with the unravelling-induced basis which decoheres most slowly. Here we show the applicability of this concept of pointer basis to the problem of state stabilization for quantum systems. In particular we prove that for linear Gaussian quantum systems, if the feedback control is assumed to be strong compared to the decoherence of the pointer basis, then the system can be stabilized in one of the pointer basis states with a fidelity close to one (the infidelity varies inversely with the control strength). Moreover, if the aim of the feedback is to maximize the fidelity of the unconditioned system state with a pure state that is one of its conditioned states, then the optimal unravelling for stabilizing the system in this way is that which induces the pointer basis for the conditioned states. We illustrate these results with a model system: quantum Brownian motion. We show that even if the feedback control strength is comparable to the decoherence, the optimal unravelling still induces a basis very close to the pointer basis. However if the feedback control is weak compared to the decoherence, this is not the case.
Queuing Theoretic and Information Theoretic Capacity of Energy Harvesting Sensor Nodes
Sharma, Vinod
... devices are solar cells, wind turbines and piezo-electric cells, which extract energy from the environment. Queuing Theoretic and Information Theoretic Capacity of Energy Harvesting Sensor Nodes. Vinod Sharma ...; DRDO Bangalore, India; Email: rajesh81r@gmail.com. Abstract: Energy harvesting sensor networks provide ...
Theoretical Physics | U.S. DOE Office of Science (SC)
Office of Science (SC) Website
Experimental and Theoretical Investigation of Lubricant and Additive...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Experimental and Theoretical Investigation of Lubricant and Additive Effects on Engine Friction
Modified Theoretical Minimum Emittance Lattice for an Electron...
Office of Scientific and Technical Information (OSTI)
Modified Theoretical Minimum Emittance Lattice for an Electron Storage Ring with Extreme-Low Emittance
Research in theoretical nuclear and neutrino physics. Final report...
Office of Scientific and Technical Information (OSTI)
Theoretical Synthesis of Mixed Materials for CO2 Capture Applications...
Office of Scientific and Technical Information (OSTI)
History and Contributions of Theoretical Computer Science
Selman, Alan
History and Contributions of Theoretical Computer Science. John E. Savage, Department of Computer Science, Brown University, Providence, RI 02912, savage@cs.brown.edu; Alan L. Selman, Department of Computer Science, ...@cse.buffalo.edu; Carl Smith, Department of Computer Science, University of Maryland, College Park, MD 20741, smith...
Hydraulic Geometry: Empirical Investigations and Theoretical Approaches
Eaton, Brett
Hydraulic Geometry: Empirical Investigations and Theoretical Approaches. B.C. Eaton, Department of Geography, The University of British Columbia, 1984 West Mall, Vancouver, BC, V6T 1Z2. Abstract: Hydraulic ... One approach to hydraulic geometry considers temporal changes at a single location due to variations ...
Chicago Journal of Theoretical Computer Science
Erickson, Jeff
Chicago Journal of Theoretical Computer Science. The MIT Press, Volume 1999, Article 8: Lower Bounds for Linear Satisfiability Problems. ISSN 1073-0486. MIT Press Journals, Five Cambridge Center, Cambridge, MA 02142-1493 USA; (617)253-2889; journals-orders@mit.edu, journals-info@mit.edu. Published one article at a time.
Chicago Journal of Theoretical Computer Science
Fenner, Stephen
Chicago Journal of Theoretical Computer Science. The MIT Press, Volume 1999, Article 2: Complements of Multivalued Functions.
Theoretical Studies in Elementary Particle Physics
Collins, John C.; Roiban, Radu S
2013-04-01
This final report summarizes work at Penn State University from June 1, 1990 to April 30, 2012. The work was in theoretical elementary particle physics. Many new results in perturbative QCD, in string theory, and in related areas were obtained, with a substantial impact on the experimental program.
Chicago Journal of Theoretical Computer Science
Pudlák, Pavel
Chicago Journal of Theoretical Computer Science. The MIT Press, Volume 1999, Article 11: Satisfiability Coding Lemma.
Chicago Journal of Theoretical Computer Science
Mahajan, Meena
Chicago Journal of Theoretical Computer Science. The MIT Press, Volume 1997, Article 5, 31 December 1997.
Chicago Journal of Theoretical Computer Science
Agrawal, Manindra
Chicago Journal of Theoretical Computer Science. The MIT Press, Volume 1997, Article 5, 31 December 1997.
Chicago Journal of Theoretical Computer Science
Gouda, Mohamed G.
Chicago Journal of Theoretical Computer Science. The MIT Press, Volume 1997, Article 3, 4 November 1997.
Chicago Journal of Theoretical Computer Science
Ta-Shma, Amnon
Chicago Journal of Theoretical Computer Science. MIT Press, Volume 1995, Article 1, 30 June 1995.
Chicago Journal of Theoretical Computer Science
Kozen, Dexter
Chicago Journal of Theoretical Computer Science. The MIT Press, Volume 1995, Article 3, 20 September 1995.
Designing a Computational Geometry Algorithms Library \\Lambda
Waldmann, Uwe
for Advanced School on Algorithmic Foundations of Geographic Information Systems, CISM, Udine, Italy, September
9. Genetic Algorithms 9.1 Introduction
Cambridge, University of
9. Genetic Algorithms. 9.1 Introduction. The concept of evolution is prevalent in most biological ... to computational optimisation methods using "genetic algorithms" [50]. 9.2 Neural Networks and Genetic Algorithms ... with the function f being non-linear. Genetic algorithms (GAs) are one possible method of solving such a problem.
A Reduced Basis Element Approach for the Reynolds Lubrication Equation
A Reduced Basis Element Approach for the Reynolds Lubrication Equation (Solving the Reynolds ...). Contents include: 2.1 Introduction of the application, background setting; 2.2 Reynolds Lubrication Equation.
Refined error estimates for matrix-valued radial basis functions
Fuselier, Edward J., Jr.
2007-09-17
Radial basis functions (RBFs) are probably best known for their applications to scattered data problems. Until the 1990s, RBF theory only involved functions that were scalar-valued. Matrix-valued RBFs were subsequently ...
Preconditioned solenoidal basis method for incompressible fluid flows
Wang, Xue
2006-04-12
This thesis presents a preconditioned solenoidal basis method to solve the algebraic system arising from the linearization and discretization of primitive variable formulations of Navier-Stokes equations for incompressible ...
The biomechanical basis of evolutionary change in a territorial display
Ord, Terry
The biomechanical basis of evolutionary change in a territorial display. Terry J. Ord, David C. ... on Puerto Rico. 5. Our study shows how the biomechanics of a social signal can have important implications ...
Advanced Test Reactor Design Basis Reconstitution Project Issue Resolution Process
Steven D. Winter; Gregg L. Sharp; William E. Kohn; Richard T. McCracken
2007-05-01
The Advanced Test Reactor (ATR) Design Basis Reconstitution Program (DBRP) is a structured assessment and reconstitution of the design basis for the ATR. The DBRP is designed to establish and document the ties between the Document Safety Analysis (DSA), design basis, and actual system configurations. Where the DBRP assessment team cannot establish a link between these three major elements, a gap is identified. Resolutions to identified gaps represent configuration management and design basis recovery actions. The proposed paper discusses the process being applied to define, evaluate, report, and address gaps that are identified through the ATR DBRP. Design basis verification may be performed or required for a nuclear facility safety basis on various levels. The process is applicable to large-scale design basis reconstitution efforts, such as the ATR DBRP, or may be scaled for application on smaller projects. The concepts are applicable to long-term maintenance of a nuclear facility safety basis and recovery of degraded safety basis components. The ATR DBRP assessment team has observed numerous examples where a clear and accurate link between the DSA, design basis, and actual system configuration was not immediately identifiable in supporting documentation. As a result, a systematic approach to effectively document, prioritize, and evaluate each observation is required. The DBRP issue resolution process provides direction for consistent identification, documentation, categorization, and evaluation, and where applicable, entry into the determination process for a potential inadequacy in the safety analysis (PISA). The issue resolution process is a key element for execution of the DBRP. Application of the process facilitates collection, assessment, and reporting of issues identified by the DBRP team. Application of the process results in an organized database of safety basis gaps and prioritized corrective action planning and resolution. 
The DBRP team follows the ATR DBRP issue resolution process, which provides a method for the team to promptly sort and prioritize questions and issues between those that can be addressed as a normal part of the reconstitution project and those that are to be handled as PISAs. Presentation of the DBRP issue resolution process provides an example for similar activities that may be required at other facilities within the Department of Energy complex.
The Oblique Basis Method from an Engineering Point of View
V. G. Gueorguiev
2012-10-16
The oblique basis method is reviewed from an engineering point of view related to vibration and control theory. Examples are used to demonstrate and relate the oblique basis in nuclear physics to the equivalent mathematical problems in vibration theory. The mathematical techniques, such as principal coordinates and root locus, used by vibration and control theory engineers are shown to be relevant to the Richardson-Gaudin pairing-like problems in nuclear physics.
Computing single step operators of logic programming in radial basis function neural networks
Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)
2014-07-10
Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of a logic program is defined as a function T_P: I → I. Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programs in radial basis function neural networks. To do this, we proposed a new technique to generate training data sets for single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operator). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
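The single-step operator described in this abstract can be sketched directly, outside any neural network, as the immediate consequence operator of a logic program iterated to its fixed point. This hedged sketch restricts to definite (negation-free) clauses and does not reproduce the paper's RBF-network encoding; the toy program is an illustrative assumption.

```python
# Sketch of the single-step (immediate consequence) operator T_P,
# iterated to its least fixed point. Restricted to definite
# (negation-free) clauses; the paper's RBF encoding is not shown.

def t_p(program, interpretation):
    """One application of T_P: the set of heads whose body atoms all hold."""
    return {head for head, body in program if body <= interpretation}

def least_fixed_point(program):
    """Iterate T_P from the empty interpretation until it stabilizes."""
    current = set()
    while True:
        nxt = t_p(program, current)
        if nxt == current:
            return current
        current = nxt

# Clauses as (head, set-of-body-atoms): p., q :- p., r :- p, q., s :- t.
program = [("p", set()), ("q", {"p"}), ("r", {"p", "q"}), ("s", {"t"})]
print(sorted(least_fixed_point(program)))  # ['p', 'q', 'r']
```

For definite programs T_P is monotone, so repeated application from the empty interpretation climbs to the least fixed point, the "steady state" the recurrent network is trained to reach.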
Papalaskari, Mary-Angela
CSC 8301: Design and Analysis of Algorithms. Lecture 1: Algorithms: Overview. Next time: Principles of the analysis of algorithms (2.1, 2.2).
INDEX TO ALGORITHMS AND THEOREMS: Algorithm 5.1.1C, 591-592.
Pratt, Vaughan
APPENDIX C: INDEX TO ALGORITHMS AND THEOREMS. Algorithm 5.1.1C, 591-592. Theorem 5.1.2A, 26. Theorem ..., 54. Theorem 5.1.4C, 55. Algorithm 5.1.4D, 50. Theorem 5.1.4D, 57. Algorithm 5.1.4G, 69. Algorithm 5.1.4H, 612. Theorem 5.1.4H, 60. Algorithm 5.1.4I, 49-50. Algorithm 5.1.4P, 70. Algorithm 5.1.4Q, 614. Algorithm 5.1.4S ...
INDEX TO ALGORITHMS AND THEOREMS Algorithm 1.1E, 2, 4.
Pratt, Vaughan
APPENDIX C: INDEX TO ALGORITHMS AND THEOREMS. Algorithm 1.1E, 2, 4. Algorithm 1.1F, 466. Algorithm 1.2.1E, 13-14. Algorithm 1.2.1I, 11-12. Algorithm 1.2.2E, 470. Algorithm 1.2.2L, 26. Law 1.2.4A, 40. Law ..., 81-82. Theorem 1.2.10A, 101. Algorithm 1.2.10M, 96. Theorem 1.2.11.3A, 119. Algorithm 1.3.2E, 160 ...
A preliminary evaluation of a speed threshold incident detection algorithm
Kolb, Stephanie Lang
1996-01-01
and California algorithm #8, using fuzzy logic to evaluate the new algorithm's effectiveness in detecting incidents on freeways. To test these algorithms, real data from TransGuide were run through the algorithms. Algorithm outputs were compared with CCTV (closed...
Leaky LMS Algorithm: Convergence of tap-weight error modes dependent on ...
Santhanam, Balu
Leaky LMS Algorithm: convergence of tap-weight error modes dependent ... Stability and convergence time issues are of concern for ill-conditioned inputs. ... cost. Block LMS Algorithm: uses type-I polyphase components of the input u[n] ...
Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium
M Weimar
1998-12-10
This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for, and estimate of, the level of savings that can be obtained from a fixed-price contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-price contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D provides the summary of the approach for the DOE-Richland Operations Office (RL) estimate of the M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal that indicates that the current approach is still better than the alternative.
An Information-Theoretic Approach to PMU Placement in Electric Power Systems
Li, Qiao; Weng, Yang; Negi, Rohit; Franchetti, Franz; Ilic, Marija D
2012-01-01
This paper presents an information-theoretic approach to address the phasor measurement unit (PMU) placement problem in electric power systems. Different from the conventional 'topological observability' based approaches, this paper advocates a much more refined, information-theoretic criterion, namely the mutual information (MI) between the PMU measurements and the power system states. The proposed MI criterion can not only include the full system observability as a special case, but also can rigorously model the remaining uncertainties in the power system states with PMU measurements, so as to generate highly informative PMU configurations. Further, the MI criterion can facilitate robust PMU placement by explicitly modeling probabilistic PMU outages. We propose a greedy PMU placement algorithm, and show that it achieves an approximation ratio of (1-1/e) for any PMU placement budget. We further show that the performance is the best that one can achieve in practice, in the sense that it is NP-hard to achieve ...
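The greedy placement with its (1 - 1/e) approximation guarantee applies to any monotone submodular objective. As a hedged sketch, a toy coverage function stands in below for the paper's mutual-information criterion; all names and data are illustrative assumptions.

```python
# Hedged sketch of greedy selection for a monotone submodular
# objective, which carries the (1 - 1/e) guarantee cited above. A toy
# coverage function stands in for the mutual-information criterion.

def greedy_select(candidates, budget, f):
    """Repeatedly add the candidate with the largest marginal gain."""
    chosen = []
    for _ in range(budget):
        best = max((c for c in candidates if c not in chosen),
                   key=lambda c: f(chosen + [c]) - f(chosen))
        chosen.append(best)
    return chosen

# Buses observed by a PMU at each candidate location (toy objective).
coverage = {"bus1": {1, 2, 3}, "bus2": {3, 4}, "bus3": {5, 6, 7, 8}}

def f(placement):
    """Number of distinct buses observed by the chosen PMUs."""
    observed = set()
    for c in placement:
        observed |= coverage[c]
    return len(observed)

print(greedy_select(list(coverage), 2, f))  # ['bus3', 'bus1']
```

The guarantee rests only on monotonicity and diminishing marginal gains of f, which is why the same loop serves both the coverage stand-in and an MI objective.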
Optimisation of Quantum Evolution Algorithms
Apoorva Patel
2015-03-04
Given a quantum Hamiltonian and its evolution time, the corresponding unitary evolution operator can be constructed in many different ways, corresponding to different trajectories between the desired end-points. A choice among these trajectories can then be made to obtain the best computational complexity and control over errors. As an explicit example, Grover's quantum search algorithm is described as a Hamiltonian evolution problem. It is shown that the computational complexity has a power-law dependence on error when a straightforward Lie-Trotter discretisation formula is used, and it becomes logarithmic in error when reflection operators are used. The exponential change in error control is striking, and can be used to improve many importance sampling methods. The key concept is to make the evolution steps as large as possible while obeying the constraints of the problem. In particular, we can understand why overrelaxation algorithms are superior to small step size algorithms.
Quantum Chaos and Quantum Algorithms
Daniel Braun
2001-10-05
It was recently shown (quant-ph/9909074) that parasitic random interactions between the qubits in a quantum computer can induce quantum chaos and call into question the operability of a quantum computer. In this work I investigate whether the interactions between the qubits introduced with the intention of operating the quantum computer may already lead to quantum chaos. The analysis focuses on two well-known quantum algorithms, namely Grover's search algorithm and the quantum Fourier transform. I show that in both cases the same very unusual combination of signatures from chaotic and from integrable dynamics arises.
Implications of Theoretical Ideas Regarding Cold Fusion
Afsar Abbas
1995-03-29
A number of theoretical ideas have been floated to explain the so-called cold fusion phenomenon. I look at a large subset of these and study further physical implications of the concepts involved. I suggest that these can be tested by other independent physical means. Because of their significance, experimentalists are urged to look for these signatures. The results in turn will be important for a better understanding, and hence control, of the cold fusion phenomenon.
Algorithmic Aspects of Risk Management
Gehani, Ashish
Algorithmic Aspects of Risk Management. Ashish Gehani (SRI International), Lee Zaniewski and K. Subramani (West Virginia University). Abstract: Risk analysis has been used to manage the security of systems ... configuration. This allows risk management to occur in real time and reduces the window of exposure to attack.
Algorithmic + Geometric characterization of CAR
Gill, Richard D.
Algorithmic + Geometric characterization of CAR (Coarsening at Random). Richard Gill, Utrecht. ... 3-door problem: X = door with car behind; Y = two doors still closed = {your first choice, the other door left closed}.
GEET DUGGAL Algorithms for Determining
Relationship to Gene Regulation. Final Public Oral Examination, Doctor of Philosophy. Recent genome sequencing ... Analyses from them have shown that the 3D structure of DNA may be closely linked to genome function ... the 3D structure of DNA and genome function on the scale of the whole genome. Specifically, we designed algorithms ...
Adaptive protection algorithm and system
Hedrick, Paul (Pittsburgh, PA); Toms, Helen L. (Irwin, PA); Miller, Roger M. (Mars, PA)
2009-04-28
An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
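The rank-then-set-points flow described in this abstract can be sketched as follows; the rank rule and the 1.25x margin factor are illustrative assumptions, not the patented system's actual logic:

```python
# Hypothetical sketch: rank circuit breakers by the power traced through
# each, then derive a trip set point from the observed flow. The 1.25x
# margin is an assumed placeholder, not the patent's rule.

def assign_ranks(breaker_flows):
    """Highest traced flow gets rank 1, next gets rank 2, and so on."""
    ordered = sorted(breaker_flows, key=breaker_flows.get, reverse=True)
    return {name: rank for rank, name in enumerate(ordered, start=1)}

def trip_set_points(breaker_flows, margin=1.25):
    """Map each breaker to (rank, trip threshold in kW)."""
    ranks = assign_ranks(breaker_flows)
    return {name: (ranks[name], flow * margin)
            for name, flow in breaker_flows.items()}

points = trip_set_points({"feeder_a": 400.0, "feeder_b": 150.0, "main": 900.0})
```

The breaker names and flow values are invented purely for the example.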
Digital Video: Scanline Algorithms
that exploits simplifications in digital filtering and memory access. The geometric transformations a special class of geometric transformation techniques that operate only along rows and columns. The purpose be resampled independently of the other. Separable algorithms spatially transform 2-D images by decomposing
R. Guerraoui 1 Distributed algorithms
Guerraoui, Rachid
Best-effort broadcast (beb). Events — Request: <bebBroadcast, m>; Indication: <bebDeliver, src, m>. Properties: BEB1 (Validity): if pi and pj are correct, then every message broadcast by pi is eventually delivered by pj. BEB2 (No duplication): No message
Algorithmic Thermodynamics John C. Baez
Cortes, Corinna
Algorithmic Thermodynamics John C. Baez Department of Mathematics, University of California in statistical mechanics. This viewpoint allows us to apply many techniques developed for use in thermodynamics and chemical potential. We derive an analogue of the fundamental thermodynamic relation dE = TdS - PdV + Âµd
Hierarchical Correctness Proofs Distributed Algorithms
Tuttle, Mark R.
distributed networks. With this model we are able to construct modular, hierarchical correct- ness proofs these messages and process variables can be extremely di cult, and the resulting proofs of correct- ness of the full algorithm's correct- ness. Some time ago, we began to consider this approach of proof by re nement
Emergence of the pointer basis through the dynamics of correlations
M. F. Cornelio; O. Jiménez Farías; F. F. Fanchini; I. Frerot; G. H. Aguilar; M. O. Hor-Meyll; M. C. de Oliveira; S. P. Walborn; A. O. Caldeira; P. H. Souto Ribeiro
2012-10-04
We use the classical correlation between a quantum system being measured and its measurement apparatus to analyze the amount of information being retrieved in a quantum measurement process. Accounting for decoherence of the apparatus, we show that these correlations may exhibit a sudden transition from a decay regime to a constant level. This transition characterizes a non-asymptotic emergence of the pointer basis, while the system and apparatus can still be quantum correlated. We provide a formalization of the concept of the emergence of a pointer basis in an apparatus subject to decoherence. This emergence of the pointer basis, as distinct from the quantum-to-classical transition, is demonstrated in an experiment with polarization-entangled photon pairs.
Goddard III, William A.
Mechanism of Atmospheric Photooxidation of Aromatics: A Theoretical Study Jean M. Andino, James N, California 91125 ReceiVed: October 3, 1995; In Final Form: December 13, 1995X The mechanisms of atmospheric-31G(d,p) basis set to study the intermediate structures. Full mechanisms for the OH
Resilient Control Systems Practical Metrics Basis for Defining Mission Impact
Craig G. Rieger
2014-08-01
"Resilience” describes how systems operate at an acceptable level of normalcy despite disturbances or threats. In this paper we first consider the cognitive, cyber-physical interdependencies inherent in critical infrastructure systems and how resilience differs from reliability to mitigate these risks. Terminology and metrics basis are provided to integrate the cognitive, cyber-physical aspects that should be considered when defining solutions for resilience. A practical approach is taken to roll this metrics basis up to system integrity and business case metrics that establish “proper operation” and “impact.” A notional chemical processing plant is the use case for demonstrating how the system integrity metrics can be applied to establish performance, and
The Functional Requirements and Design Basis for Information Barriers
Fuller, James L.
2012-05-01
This report summarizes the results of the Information Barrier Working Group workshop held at Sandia National Laboratory in Albuquerque, NM, February 2-4, 1999. This workshop was convened to establish the functional requirements associated with warhead radiation signature information barriers, to identify the major design elements of any such system or approach, and to identify a design basis for each of these major elements. Such information forms the general design basis to be used in designing, fabricating, and evaluating the complete integrated systems developed for specific purposes.
Basis invariant measure of CP-violation and renormalization
A. Hohenegger; A. Kartavtsev
2014-12-27
We analyze, in the context of a simple toy model, for which renormalization schemes the CP-properties of bare Lagrangian and its finite part coincide. We show that this is the case for the minimal subtraction and on-shell schemes. The CP-properties of the theory can then be characterized by CP-odd basis invariants expressed in terms of renormalized masses and couplings. For the minimal subtraction scheme we furthermore show that in CP-conserving theories the CP-odd basis invariants are zero at any scale but are not renormalization group invariant in CP-violating ones.
D. Gulo; O. Alexejev
1999-03-12
Theoretical calculations of the electrical conductivity and electroosmotic transfer as functions of the disperse phase volume fraction and non-dissolving boundary layer thickness were provided on the basis of the cell theory of electroosmosis for the limiting case of large degree of electric double layers overlapping in interparticle space. The obtained results are in qualitative agreement with the experimental data and describe the main features of the latter
On Learning Algorithms for Nash Equilibria
Daskalakis, Constantinos
Can learning algorithms find a Nash equilibrium? This is a natural question for several reasons. Learning algorithms resemble the behavior of players in many naturally arising games, and thus results on the convergence or ...
Convergence Conditions for Variational Inequality Algorithms
Magnanti, Thomas L.
Within the extensive variational inequality literature, researchers have developed many algorithms. Depending upon the problem setting, these algorithms ensure the convergence of (i) the entire sequence of iterates, (ii) ...
The bidimensionality theory and its algorithmic applications
Hajiaghayi, MohammadTaghi
2005-01-01
Our newly developing theory of bidimensional graph problems provides general techniques for designing efficient fixed-parameter algorithms and approximation algorithms for NP- hard graph problems in broad classes of graphs. ...
Bayesian Algorithmic Mechanism Design [Extended Abstract
Hartline, Jason D.
Bayesian Algorithmic Mechanism Design [Extended Abstract] Jason D. Hartline Northwestern, Canada blucier@cs.toronto.edu ABSTRACT The principal problem in algorithmic mechanism design approach for designing incen- tive compatible mechanisms, namely that of Vickrey, Clarke, and Groves
Learning Motor Skills: From Algorithms to Robot
Learning Motor Skills: From Algorithms to Robot Experiments (German: Erlernen Motorischer Fähigkeiten: Von Algorithmen zu Roboter-Experimenten). Submitted for the academic degree of Doktor-Ingenieur (Dr.-Ing.).
The double-beta decay: Theoretical challenges
Horoi, Mihai
2012-11-20
Neutrinoless double beta decay is a unique process that could reveal physics beyond the Standard Model of particle physics: namely, if observed, it would prove that neutrinos are Majorana particles. In addition, it could provide information regarding the neutrino masses and their hierarchy, provided that reliable nuclear matrix elements can be obtained. The two-neutrino double beta decay is an associated process that is allowed by the Standard Model, and it has been observed for about ten nuclei. The present contribution gives a brief review of the theoretical challenges associated with these two processes, emphasizing the reliable calculation of the associated nuclear matrix elements.
Algorithms for Constrained Route Planning in Road Networks
Rice, Michael Norris
2013-01-01
2.2 Graph Search Algorithms · an Efficient Algorithm · 4.6.4 Restriction · An O(r)-Approximation Algorithm for GTSPP
Algorithms for testing fault-tolerance of sequenced jobs
Chrobak, Marek; Hurand, Mathilde; Sgall, Ji?í
2009-01-01
5th European Symposium on Algorithms (ESA) (pp. 296–307). Real-time systems · Algorithms. 1 Introduction: Ghosh et al. … fault-tolerance testing algorithm, under the restriction
Algorithms for tandem mass spectrometry-based proteomics
Frank, Ari Michael
2008-01-01
4. MS-Clustering Algorithm · C. De Novo Sequencing Algorithm · 2. The RankBoost Algorithm (Freund et al., 2003)
Approximation Algorithms for the Fault-Tolerant Facility Placement Problem
Yan, Li
2013-01-01
5.2 Algorithm ECHS with Ratio · 5.3 Algorithm EBGS with Ratio · Formulation · 2.1.3 Approximation Algorithms · 2.1.4 Bifactor
Canister Storage Building (CSB) Design Basis Accident Analysis Documentation
CROWE, R.D.; PIEPHO, M.G.
2000-03-23
This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report''. All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
Cold Vacuum Drying (CVD) Facility Design Basis Accident Analysis Documentation
PIEPHO, M.G.
1999-10-20
This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report, ''Cold Vacuum Drying Facility Final Safety Analysis Report (FSAR).'' All assumptions, parameters and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR.
Basis and Lattice Polarization Mechanisms for Light Transmission
Brolo, Alexandre G.
Basis and Lattice Polarization Mechanisms for Light Transmission through Nanohole Arrays in a Metal light transmission through double-hole and elliptical nanohole arrays in a thin gold film plasmon waves and the evanescent transmission through the nanoholes. Both of these effects need
Solar Power Tower Design Basis Document, Revision 0
ZAVOICO,ALEXIS B.
2001-07-01
This report contains the design basis for a generic molten-salt solar power tower. A solar power tower uses a field of tracking mirrors (heliostats) that redirect sunlight onto a centrally located receiver mounted on top of a tower, which absorbs the concentrated sunlight. Molten nitrate salt, pumped from a tank at ground level, absorbs the sunlight, heating it to 565 °C. The heated salt flows back to ground level into another tank where it is stored, then is pumped through a steam generator to produce steam and make electricity. This report establishes a set of criteria upon which the next generation of solar power towers will be designed. The report contains detailed criteria for each of the major systems: Collector System, Receiver System, Thermal Storage System, Steam Generator System, Master Control System, and Electric Heat Tracing System. The Electric Power Generation System and Balance of Plant discussions are limited to interface requirements. This design basis builds on the extensive experience gained from the Solar Two project and includes potential design innovations that will improve reliability and lower technical risk. This design basis document is a living document and contains several areas that require trade studies and design analysis to fully complete the design basis. Project- and site-specific conditions and requirements will also resolve open To-Be-Determined issues.
NEAT-IGERT Proposal C. THEMATIC BASIS FOR GROUP EFFORT
Islam, M. Saif
NEAT-IGERT Proposal C. THEMATIC BASIS FOR GROUP EFFORT The last decade has seen immense progress the research and teaching interests of fourteen investigators in seven different departments ranging from, to the actual structure and management of the group. The Ph.D.'s from this program will be well poised to embark
Final Report History and Basis of NESHAPs and Subpart W
a Rulemaking to Modify the NESHAP Subpart W Standard for Radon Emissions from Operating Uranium Mills (40 CFR a National Emission Standard for Hazardous Air Pollutant (NESHAP) for radon emissions from operating uraniumFinal Report History and Basis of NESHAPs and Subpart W Prepared by S. Cohen & Associates 1608
Group Non-negative Basis Pursuit for Automatic Music Transcription
Plumbley, Mark
Group Non-negative Basis Pursuit for Automatic Music Transcription Ken O'Hanlon1 , Mark D. Plumbley1 {keno, Mark.Plumbley}@eecs.qmul.ac.uk Centre for Digital Music, Queen Mary University of London for AMT (O'Hanlon et al.). Group sparsity considers that certain groups of atoms tend to be ac- tive
CRAD, Safety Basis- Idaho Accelerated Retrieval Project Phase II
Broader source: Energy.gov [DOE]
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for a February 2006 Commencement of Operations assessment of the Safety Basis at the Idaho Accelerated Retrieval Project Phase II.
Financing industrial boiler projects on a non-recourse basis
Anderson, C.
1995-09-01
Techniques for the financing of industrial boiler projects on a non-recourse basis are outlined. The following topics are discussed: types of projects; why non-recourse (off-balance sheet) financing; the down side; construction lenders requirements; and term lender/subdebt requirements.
Evaluating Las Vegas Algorithms ---Pitfalls and Remedies
Hoos, Holger H.
cism regarding the empirical testing of algorithms (Hooker, 1994; Hooker, 1996; McGeoch, 1996). It has
An implicit numerical algorithm for general relativistic hydrodynamics
A. Hujeirat
2008-01-09
An implicit numerical algorithm for general relativistic hydrodynamics. This article has been replaced by arXiv:0801.1017.
An algorithm for minimization of quantum cost
Anindita Banerjee; Anirban Pathak
2010-04-09
A new algorithm for minimization of the quantum cost of quantum circuits has been designed. The quantum costs of several quantum circuits of particular interest (e.g., circuits for EPR, quantum teleportation, the Shor code and various quantum arithmetic operations) are computed using the proposed algorithm. The quantum costs obtained using the proposed algorithm are compared with existing results, and it is found that the algorithm produces the minimum quantum cost in all cases.
A Fast Algorithm for Nonstationary Delay Estimation
So, Hing-Cheung
to the explicit time delay estimator (ETDE) algorithm 4] but it is more computationally e cient and provides more
Energy Management in Microgrids: Algorithms and System
Shi, Wenbo
2015-01-01
study the supply-demand balancing problem in microgrids under more realistic conditions and pro- pose algorithms for microgrid
Direct photons ~basis for characterizing heavy ion collisions~
Takao Sakaguchi
2008-07-30
After years of experimental and theoretical effort, direct photons have become a strong and reliable tool for establishing the basic characteristics of the hot and dense matter produced in heavy ion collisions. The recent direct photon measurements are reviewed and a future prospect is given.
Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"
Petzold, Linda R.
2012-10-25
Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fun- damental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for ac- celerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
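The SSA referenced above is well documented in the literature; a minimal sketch for the single irreversible reaction A → B (illustrative rate and population, not the project's multiscale solvers) looks like:

```python
import math
import random

def ssa_decay(n_a, k, t_end, seed=0):
    """Gillespie SSA for the single reaction A -> B with rate constant k.

    Each step: propensity a = k * n_A; the waiting time to the next
    reaction event is exponentially distributed with mean 1/a.
    """
    rng = random.Random(seed)
    t = 0.0
    while n_a > 0:
        a = k * n_a                              # total propensity
        t += -math.log(1.0 - rng.random()) / a   # sample Exp(a) waiting time
        if t > t_end:
            break
        n_a -= 1                                 # fire the (only) reaction
    return n_a

# For k = 1, the mean of n_A at t = 1 is 1000 * exp(-1), roughly 368.
remaining = ssa_decay(n_a=1000, k=1.0, t_end=1.0)
```

With more reaction channels one would also sample *which* reaction fires, in proportion to its share of the total propensity.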
Tuning bandit algorithms in stochastic environments
Szepesvari, Csaba
Tuning bandit algorithms in stochastic environments Jean-Yves Audibert1 and R´emi Munos2 and Csaba@cs.ualberta.ca Abstract. Algorithms based on upper-confidence bounds for balancing exploration and exploitation a variant of the basic algorithm for the stochastic, multi-armed bandit problem that takes into account
PARALLEL EVOLUTIONARY ALGORITHMS FOR UAV PATH PLANNING
PARALLEL EVOLUTIONARY ALGORITHMS FOR UAV PATH PLANNING Dong Jia Post-Doctoral Research Associate vehicles (UAVs). Premature convergence prevents evolutionary-based algorithms from reaching global optimal. To overcome this problem, this paper presents a framework of parallel evolutionary algorithms for UAV path
Algorithms Demands and Bounds Applications of Flow
Kabanets, Valentine
2/28/2014 1 Algorithms Demands and Bounds Applications of Flow Networks Design and Analysis of Algorithms Andrei Bulatov Algorithms Demands and Bounds 12-2 Lower Bounds The problem can be generalized) capacities (ii) demands (iii) lower bounds A circulation f is feasible if (Capacity condition) For each e E
Generalized URV Subspace Tracking LMS Algorithm 1
Boley, Daniel
Generalized URV Subspace Tracking LMS Algorithm 1 S. Hosur and A. H. Tew k and D. Boley Dept The convergence rate of the Least Mean Squares (LMS) algorithm is poor whenever the adaptive lter input auto-correlation matrix is ill-conditioned. In this paper we propose a new LMS algorithm to alleviate this problem
The Observer Algorithm for Visibility Approximation
Doherty, Patrick
, with dif- ferent view ranges and grid cell sizes. By changing the size of the grid cells that the algorithm or more sentries while moving to a goal position. Algorithms for finding a covert paths in the presence of stationary and moving sentries has been devised by [5] [6]. An approximate visibility algorithm was devised
Partitioned algorithms for maximum likelihood and
Smyth, Gordon K.
Partitioned algorithms for maximum likelihood and other nonlinear estimation Gordon K. Smyth There are a variety of methods in the literature which seek to make iterative estimation algorithms more manageable by breaking the iterations into a greater number of simpler or faster steps. Those algorithms which deal
Total Algorithms \\Lambda Gerard Tel y
Utrecht, Universiteit
Total Algorithms \\Lambda Gerard Tel y Department of Computer Science, University of Utrecht, P and February 1993 Abstract We define the notion of total algorithms for networks of processes. A total algorithm enforces that a ``decision'' is taken by a subset of the processes, and that participation of all
Distributed QR Factorization Based on Randomized Algorithms
Zemen, Thomas
Distributed QR Factorization Based on Randomized Algorithms Hana Strakov´a1 , Wilfried N. Gansterer of Algorithms Hana.Strakova@univie.ac.at, Wilfried.Gansterer@univie.ac.at 2 Forschungszentrum Telekommunication Wien, Austria Thomas.Zemen@ftw.at Abstract. Most parallel algorithms for matrix computations assume
Minimum-Flip Supertrees: Complexity and Algorithms
Sanderson, Michael J.
Minimum-Flip Supertrees: Complexity and Algorithms Duhong Chen, Oliver Eulenstein, David Ferna that it is fixed-parameter tractable and give approximation algorithms for special cases. Index Terms assembled from all species in the study. Because the conventional algorithms to solve these problems
Algorithms and Theory of Computation Handbook, Second
Algorithms and Theory of Computation Handbook, Second Edition CRC PRESS Boca Raton Ann Arbor London Parameterized Algorithms 1 Rodney G. Downey and Catherine McCartin School of Mathematical and Computing Sciences.2 The Main Idea . . . . . . . . . . . . . . . . . . . . . . . . . . . . 2 1.3 Practical FPT Algorithms
Finding Algorithms in Scientific Articles Sumit Bhatia
Giles, C. Lee
Finding Algorithms in Scientific Articles Sumit Bhatia , Prasenjit Mitra and C. Lee Giles,giles}@ist.psu.edu ABSTRACT Algorithms are an integral part of computer science literature. How- ever, none of the current search engines offer specialized algorithm search facility. We describe a vertical search engine
Algorithms in pure mathematics G. Stroth
Stroth, Gernot
Algorithms in pure mathematics G. Stroth 1 Introduction In this article, we will discuss algorithmic group theory from the point of view of pure, and where one might be surprised that there is no algorithmic solution. The two most developed areas
Study of Proposed Internet Congestion Control Algorithms*
Study of Proposed Internet Congestion Control Algorithms* Kevin L. Mills, NIST (joint work with D Y Algorithms Mills et al. Innovations in Measurement Science More information @ http;Study of Proposed Internet Congestion Control Algorithms Mills et al. OutlineOutline Technical
Expander Graph Arguments for Message Passing Algorithms
Burshtein, David
Expander Graph Arguments for Message Passing Algorithms David Burshtein and Gadi Miller Dept arguments may be used to prove that message passing algorithms can correct a linear number of erroneous a message passing algorithm has corrected a sufficiently large fraction of the errors, it will eventually
Voronoi Particle Merging Algorithm for PIC Codes
Luu, Phuc T; Pukhov, A
2015-01-01
We present a new particle-merging algorithm for the particle-in-cell method. Based on the concept of the Voronoi diagram, the algorithm partitions the phase space into smaller subsets, which consist only of particles that are in close proximity to each other in the phase space. We show the performance of our algorithm in the case of a magnetic shower.
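A toy version of such a merge step, grouping particles by nearest centroid in a 2-D (x, p) phase space and replacing each group by one macro-particle, might look like the following; the centroid list is an assumed input, and the paper's actual Voronoi partitioning is more elaborate:

```python
# Toy phase-space merge: each particle is (weight, x, p). Particles are
# grouped by nearest centroid; each group becomes one macro-particle with
# the group's total weight and weighted-mean position/momentum, so total
# weight and total momentum (sum of weight * p) are conserved.

def nearest(point, centroids):
    return min(range(len(centroids)),
               key=lambda i: (point[0] - centroids[i][0]) ** 2
                           + (point[1] - centroids[i][1]) ** 2)

def merge(particles, centroids):
    cells = {}
    for w, x, p in particles:
        cells.setdefault(nearest((x, p), centroids), []).append((w, x, p))
    merged = []
    for group in cells.values():
        wtot = sum(w for w, _, _ in group)
        merged.append((wtot,
                       sum(w * x for w, x, _ in group) / wtot,
                       sum(w * p for w, _, p in group) / wtot))
    return merged

merged = merge([(1.0, 0.0, 0.0), (1.0, 0.2, 0.1), (2.0, 5.0, 5.0)],
               centroids=[(0.0, 0.0), (5.0, 5.0)])
```

Grouping by proximity in phase space is what keeps the merged distribution close to the original one.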
Algorithmic proof of Barnette's Conjecture
I. Cahit
2009-04-22
In this paper we give an algorithmic proof of Barnette's long-standing conjecture (1969) that every 3-connected bipartite cubic planar graph is Hamiltonian. Our method is quite different from the known approaches; it relies on the operation of opening disjoint chambers by using spiral-chain-like movement of the outer-cycle elastic-sticky edges of the cubic planar graph. In fact we show that, for the Hamiltonicity of a Barnette graph, a single chamber or a double chamber with a bridge face is enough to transform the problem into finding a specific Hamiltonian path in the reduced cubic bipartite graph. In the last part of the paper we demonstrate that, if the given cubic planar graph is non-Hamiltonian, then the algorithm which constructs the spiral-chain (or double-spiral-chain) chambers shows that there exists a cycle through all but one vertex, i.e., an (n-1)-vertex cycle.
Theoretical Framework for Microscopic Osmotic Phenomena
P. J. Atzberger; P. R. Kramer
2009-10-29
The basic ingredients of osmotic pressure are a solvent fluid with a soluble molecular species which is restricted to a chamber by a boundary which is permeable to the solvent fluid but impermeable to the solute molecules. For macroscopic systems at equilibrium, the osmotic pressure is given by the classical van't Hoff Law, which states that the pressure is proportional to the product of the temperature and the difference of the solute concentrations inside and outside the chamber. For microscopic systems the diameter of the chamber may be comparable to the length-scale associated with the solute-wall interactions or solute molecular interactions. In each of these cases, the assumptions underlying the classical van't Hoff Law may no longer hold. In this paper we develop a general theoretical framework which captures corrections to the classical theory for the osmotic pressure under more general relationships between the size of the chamber and the interaction length scales. We also show that notions of osmotic pressure based on the hydrostatic pressure of the fluid and the mechanical pressure on the bounding walls of the chamber must be distinguished for microscopic systems. To demonstrate how the theoretical framework can be applied, numerical results are presented for the osmotic pressure associated with a polymer of N monomers confined in a spherical chamber as the bond strength is varied.
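For reference, the classical van't Hoff law cited above is Pi = (c_in - c_out) R T; a one-line numerical check with invented illustrative values:

```python
R = 8.314  # gas constant, J/(mol K)

def vant_hoff_pressure(c_in, c_out, temperature):
    """Classical van't Hoff osmotic pressure (Pa) for concentrations in
    mol/m^3 and temperature in K: Pi = (c_in - c_out) * R * T."""
    return (c_in - c_out) * R * temperature

# Illustrative numbers: 100 mol/m^3 excess solute at 300 K.
pi = vant_hoff_pressure(c_in=100.0, c_out=0.0, temperature=300.0)
```

The corrections developed in the paper modify this expression when the chamber size approaches the interaction length scales.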
Santhanam, Balu
LMS Algorithm: Motivation. Only a single realization of observations; delay in tap-weight adjustment. Simplicity: real-time applications possible. LMS Algorithm. Use instantaneous estimates for statistics: filter output, estimation error, tap-weight update.
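The three steps on the slide (filter output, estimation error, tap-weight update) translate directly into code; below is a minimal NumPy sketch identifying a hypothetical 2-tap system, not the course's own implementation:

```python
import numpy as np

def lms(x, d, n_taps=4, mu=0.05):
    """Least Mean Squares adaptive filter.

    Per sample: y = w . u (filter output), e = d - y (estimation error),
    w <- w + mu * e * u (tap-weight update from instantaneous estimates).
    """
    w = np.zeros(n_taps)
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # tap-input vector [x[n], x[n-1], ...]
        y = w @ u                          # filter output
        e = d[n] - y                       # estimation error
        w = w + mu * e * u                 # tap-weight update
    return w

rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
d = np.convolve(x, [0.5, -0.3])[:len(x)]   # hypothetical unknown 2-tap system
w = lms(x, d)                              # w should approach [0.5, -0.3, 0, 0]
```

The instantaneous gradient estimate is exactly what makes the algorithm simple enough for real-time use, at the cost of gradient noise.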
Improved algorithms for reaction path following: Higher-order implicit algorithms
Schlegel, H. Bernhard
Improved algorithms for reaction path following: Higher-order implicit algorithms Carlos Gonzaleza (Received 13May 1991;accepted17June 1991) Eight new algorithms for reaction path following are presented or if accurate propertiessuch ascurvature and frequenciesare needed.3*4 Numerous algorithms exist for following
The SIGACT Theoretical Computer Science Genealogy: Preliminary Report
Parberry, Ian
The SIGACT Theoretical Computer Science Genealogy: Preliminary Report Ian Parberry Department The SIGACT Theoretical Computer Science Genealogy, which lists information on earned doctoral degrees of the Computer Science Genealogy lists information on earned doctoral degrees (thesis ad- viser, university
Experimental and theoretical study of horizontal-axis wind turbines
Anderson, Michael Broughton
1981-10-20
An experimental and theoretical study of horizontal-axis wind turbines is undertaken. The theoretical analyses cover the four major areas of aerodynamics, turbulence, aeroelasticity and blade optimisation. Existing aerodynamic theories based...
Bicriteria Optimization of Technological Parameters in Algorithm for Designing Magnetic Composites
Krzysztof Sokalski; Barbara ?lusarek; Bartosz Jankowski; Marek Przybylski
2015-07-27
A novel algorithm for designing values of technological parameters for the production of Soft Magnetic Composites (SMC) has been created. These parameters are the following magnitudes: hardening temperature $T$ and compaction pressure $p$. They enable us to optimize power losses and induction. The advantage of the presented algorithm lies in its bicriteria optimization. Scaling and the notion of a pseudo-state equation play the crucial role in the presented algorithm; on their basis, mathematical models of the power losses and induction have been created. The model parameters have been calculated on the basis of the power-loss characteristics and hysteresis loops. The created optimization system has been applied to specimens of Somaloy 500. The obtained output consists of a finite set of feasible solutions. In order to select a unique solution, an example of an additional criterion has been formulated.
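The bicriteria selection described above (minimize power losses, maximize induction, then apply an additional criterion to the feasible set) can be illustrated with a generic Pareto-front filter; the candidate values below are invented and this is not the paper's scaling-based model:

```python
# Generic bicriteria helper: keep candidates not dominated by any other
# candidate (lower-or-equal losses AND higher-or-equal induction), then
# apply an extra criterion to pick a unique solution. Tuples are
# (losses, induction, label) with made-up values.

def pareto_front(candidates):
    return [c for c in candidates
            if not any(o != c and o[0] <= c[0] and o[1] >= c[1]
                       for o in candidates)]

cands = [(10.0, 1.2, "A"), (12.0, 1.5, "B"), (9.0, 1.1, "C"), (12.5, 1.4, "E")]
front = pareto_front(cands)              # "E" is dominated by "B"
best = max(front, key=lambda c: c[1])    # additional criterion: max induction
```

The "finite set of feasible solutions" in the abstract corresponds to the Pareto front here; the extra criterion then singles out one point.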
Testing Algorithms for Finite Temperature Lattice QCD
M. Cheng; M. A. Clark; C. Jung; R. D. Mawhinney
2006-08-23
We discuss recent algorithmic improvements in simulating finite temperature QCD on a lattice. In particular, the Rational Hybrid Monte Carlo (RHMC) algorithm is employed to generate lattice configurations for 2+1 flavor QCD. Unlike the Hybrid R algorithm, RHMC is reversible, admitting a Metropolis accept/reject step that eliminates the $\mathcal{O}(\delta t^2)$ errors inherent in the R algorithm. We also employ several algorithmic speed-ups, including multiple time scales, the use of a more efficient numerical integrator, and Hasenbusch pre-conditioning of the fermion force.
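The accept/reject step that makes RHMC exact is the standard Metropolis test, accepting a proposed configuration with probability min(1, exp(-dH)); a toy 1-D illustration (not lattice QCD) sampling the action H = x^2/2:

```python
import math
import random

def metropolis_step(x, rng, step=0.5):
    """One Metropolis update for the toy action H(x) = x^2 / 2."""
    x_new = x + rng.gauss(0.0, step)              # proposal
    d_h = 0.5 * x_new ** 2 - 0.5 * x ** 2         # change in "energy"
    if rng.random() < min(1.0, math.exp(-d_h)):   # accept with prob min(1, e^-dH)
        return x_new
    return x                                      # reject: keep old configuration

rng = random.Random(1)
x, samples = 0.0, []
for _ in range(20000):
    x = metropolis_step(x, rng)
    samples.append(x)

mean_x2 = sum(v * v for v in samples) / len(samples)  # exact answer: <x^2> = 1
```

Because acceptance depends only on the energy difference, integration-step errors in the proposal do not bias the sampled distribution, which is the point of adding this step to RHMC.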
CRAD, Safety Basis Upgrade Review (DOE-STD-3009-2014) - May 15...
Office of Environmental Management (EM)
1) provides objectives, criteria, and approaches for establishing and maintaining the safety basis at nuclear facilities. CRAD, Safety Basis Upgrade Review (DOE-STD-3009-2014)...
Distributed Approaches for Determination of Reconfiguration Algorithm Termination
Lai, Hong-jian
Distributed Approaches for Determination of Reconfiguration Algorithm Termination Pinak Tulpule architecture was used as globally shared memory structure for detection of algorithm termination. This paper of algorithm termination. Keywords--autonomous agent-based reconfiguration, dis- tributed algorithms, shipboard
Final Report: Sublinear Algorithms for In-situ and In-transit Data Analysis at Exascale.
Bennett, Janine Camille; Pinar, Ali; Seshadhri, C.; Thompson, David; Salloum, Maher; Bhagatwala, Ankit; Chen, Jacqueline H.
2015-09-01
Post-Moore's law scaling is creating a disruptive shift in simulation workflows, as saving the entirety of raw data to persistent storage becomes expensive. We are moving away from a post-process centric data analysis paradigm towards a concurrent analysis framework, in which raw simulation data is processed as it is computed. Algorithms must adapt to machines with extreme concurrency, low communication bandwidth, and high memory latency, while operating within the time constraints prescribed by the simulation. Furthermore, input parameters are often data dependent and cannot always be prescribed. The study of sublinear algorithms is a recent development in theoretical computer science and discrete mathematics that has significant potential to provide solutions for these challenges. The approaches of sublinear algorithms address the fundamental mathematical problem of understanding global features of a data set using limited resources. These theoretical ideas align with practical challenges of in-situ and in-transit computation where vast amounts of data must be processed under severe communication and memory constraints. This report details key advancements made in applying sublinear algorithms in-situ to identify features of interest and to enable adaptive workflows over the course of a three year LDRD. Prior to this LDRD, there was no precedent in applying sublinear techniques to large-scale, physics based simulations. This project has definitively demonstrated their efficacy at mitigating high performance computing challenges and highlighted the rich potential for follow-on research opportunities in this space.
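As one concrete instance of the sublinear/streaming paradigm invoked here, reservoir sampling maintains a uniform k-element sample of a stream in O(k) memory regardless of stream length; it is a textbook primitive, not one of the report's in-situ algorithms:

```python
import random

def reservoir_sample(stream, k, seed=0):
    """Uniform sample of k items from an arbitrarily long stream, O(k) memory."""
    rng = random.Random(seed)
    sample = []
    for i, item in enumerate(stream):
        if i < k:
            sample.append(item)        # fill the reservoir first
        else:
            j = rng.randrange(i + 1)   # item i survives with probability k/(i+1)
            if j < k:
                sample[j] = item
    return sample

sample = reservoir_sample(range(10 ** 6), k=32)
```

The memory bound is independent of the stream length, which is exactly the property needed when simulation output cannot be stored in full.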
Game Theoretic Methods for the Smart Grid
Saad, Walid; Poor, H Vincent; Başar, Tamer
2012-01-01
The future smart grid is envisioned as a large-scale cyber-physical system encompassing advanced power, communications, control, and computing technologies. In order to accommodate these technologies, it will have to build on solid mathematical tools that can ensure an efficient and robust operation of such heterogeneous and large-scale cyber-physical systems. In this context, this paper is an overview of the potential of applying game theory for addressing relevant and timely open problems in three emerging areas that pertain to the smart grid: micro-grid systems, demand-side management, and communications. In each area, the state-of-the-art contributions are gathered and a systematic treatment, using game theory, of some of the most relevant problems for future power systems is provided. Future opportunities for adopting game theoretic methodologies in the transition from legacy systems toward smart and intelligent grids are also discussed. In a nutshell, this article provides a comprehensive account of the...
Design-Load Basis for LANL Structures, Systems, and Components
I. Cuesta
2004-09-01
This document supports the recommendations in the Los Alamos National Laboratory (LANL) Engineering Standard Manual (ESM), Chapter 5 (Structural), providing the basis for the loads, analysis procedures, and codes to be used in the ESM. It also provides the justification for eliminating certain loads from consideration in design, and evidence that the design basis loads are appropriate and consistent with the graded approach required by the Department of Energy (DOE) nuclear safety management regulations, 10 CFR Part 830. This document focuses on (1) the primary and secondary natural phenomena hazards listed in DOE-G-420.1-2, Appendix C, (2) additional loads not related to natural phenomena hazards, and (3) the design loads on structures during construction.
Basis for NGNP Reactor Design Down-Selection
L.E. Demick
2011-11-01
The purpose of this paper is to identify the extent of technology development, design, and licensing maturity anticipated to be required to credibly identify differences that could make practical a technical choice between the prismatic and pebble bed reactor designs. This paper does not address a business decision based on the economics, business model, and resulting business case, since these will vary with the reactor application. The selection of the type of reactor, the module ratings, the number of modules, the configuration of the balance of plant, and other design selections will be made on the basis of optimizing the Business Case for the application. These are not decisions that can be made on a generic basis.
The SU(3) Algebra in a Cyclic Basis
P. F. Harrison; R. Krishnan; W. G. Scott
2014-07-31
With the couplings between the eight gluons constrained by the structure constants of the su(3) algebra in QCD, one would expect that there should exist a special basis (or set of bases) for the algebra wherein, unlike in a Cartan-Weyl basis, {\\em all} gluons interact identically (cyclically) with each other, explicitly on an equal footing. We report here particular such bases, which we have found in a computer search, and we indicate associated $3 \\times 3$ representations. We conjecture that essentially all cyclic bases for su(3) may be obtained from these by appropriate circulant transformations, and that cyclic bases may also exist for other su(n), n>3.
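Any such basis search starts from the su(3) structure constants. As a hedged illustration (not the authors' search code), the f_abc can be extracted numerically from the standard Gell-Mann matrices:

```python
import numpy as np

# The eight Gell-Mann matrices, the standard su(3) basis;
# index 0 is unused so that l[a] matches the usual lambda_a numbering.
l = np.zeros((9, 3, 3), dtype=complex)
l[1][0, 1] = l[1][1, 0] = 1
l[2][0, 1] = -1j; l[2][1, 0] = 1j
l[3][0, 0] = 1; l[3][1, 1] = -1
l[4][0, 2] = l[4][2, 0] = 1
l[5][0, 2] = -1j; l[5][2, 0] = 1j
l[6][1, 2] = l[6][2, 1] = 1
l[7][1, 2] = -1j; l[7][2, 1] = 1j
l[8][0, 0] = l[8][1, 1] = 1 / np.sqrt(3); l[8][2, 2] = -2 / np.sqrt(3)

def f(a, b, c):
    # Structure constants from [l_a, l_b] = 2i f_abc l_c, extracted via
    # the trace orthogonality Tr(l_a l_b) = 2 delta_ab:
    # f_abc = Tr([l_a, l_b] l_c) / (4i).
    comm = l[a] @ l[b] - l[b] @ l[a]
    return (np.trace(comm @ l[c]) / 4j).real
```

This reproduces the tabulated values (f_123 = 1, f_458 = sqrt(3)/2, f_147 = 1/2, and total antisymmetry), against which any candidate cyclic basis would be checked after a change of basis.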
on the complexity of some hierarchical structured matrix algorithms
2012-05-17
matrix algorithms, in terms of hierarchically semiseparable (HSS) matrices. ... We perform detailed complexity analysis for some typical HSS algorithms, with.
The Neural Basis of Financial Risk-Taking* Supplementary Material
Kuhnen, Camelia M.; Knutson, Brian
… In each block, a rational risk-neutral agent should pick stock i if he/she expects to receive a dividend D_i, where I_{t-1} = {D_t^i | i ∈ {Stock T, Stock R, Bond C}} is the information set up to trial t-1 …
Hero, Alfred O.
Slide outline (2015): Background; Basic methods; Model-based algorithms; Model-free algorithms; RTI; Perspectives; Conclusions. Coates (McGill).
Theoretical crystallography with the Advanced Visualization System
Younkin, C.R.; Thornton, E.N.; Nicholas, J.B.; Jones, D.R.; Hess, A.C.
1993-05-01
Space is an Application Visualization System (AVS) graphics module designed for crystallographic and molecular research. The program can handle molecules, two-dimensional periodic systems, and three-dimensional periodic systems, all referred to in the paper as models. Using several methods, the user can select atoms, groups of atoms, or entire molecules. Selections can be moved, copied, deleted, and merged. An important feature of Space is the crystallography component. The program allows the user to generate the unit cell from the asymmetric unit, manipulate the unit cell, and replicate it in three dimensions. Space includes the Buerger reduction algorithm which determines the asymmetric unit and the space group of highest symmetry of an input unit cell. Space also allows the user to display planes in the lattice based on Miller indices, and to cleave the crystal to expose the surface. The user can display important precalculated volumetric data in Space, such as electron densities and electrostatic surfaces. With a variety of methods, Space can compute the electrostatic potential of any chemical system based on input point charges.
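Space's lattice-plane display is driven by Miller indices; as a small illustration of that geometry (a hypothetical helper, not a function of the AVS module, and valid for cubic lattices only):

```python
import math

def cubic_d_spacing(a, h, k, l):
    # Interplanar spacing for a cubic lattice with cell edge a:
    # d_hkl = a / sqrt(h^2 + k^2 + l^2).
    # Larger index sums mean more closely spaced plane families.
    return a / math.sqrt(h * h + k * k + l * l)

print(cubic_d_spacing(4.0, 2, 0, 0))  # 2.0
```

Non-cubic cells need the full metric-tensor formula, which is what a general crystallography package applies.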
Theoretical Model for Nanoporous Carbon Supercapacitors
Sumpter, Bobby G; Meunier, Vincent; Huang, Jingsong
2008-01-01
The unprecedented anomalous increase in capacitance of nanoporous carbon supercapacitors at pore sizes smaller than 1 nm [Science 2006, 313, 1760] challenges the long-held presumption that pores smaller than the size of solvated electrolyte ions do not contribute to energy storage. We propose a heuristic model to replace the commonly used model for an electric double-layer capacitor (EDLC) on the basis of an electric double-cylinder capacitor (EDCC) for mesopores (2-50 nm pore size), which becomes an electric wire-in-cylinder capacitor (EWCC) for micropores (< 2 nm pore size). Our analysis of the available experimental data in the micropore regime is confirmed by first-principles density functional theory calculations and reveals significant curvature effects for carbon capacitance. The EDCC (and/or EWCC) model allows the supercapacitor properties to be correlated with pore size, specific surface area, Debye length, electrolyte concentration and dielectric constant, and solute ion size. The new model not only explains the experimental data, but also offers a practical direction for the optimization of the properties of carbon supercapacitors through experiments.
Laminated Wave Turbulence: Generic Algorithms II
Elena Kartashova; Alexey Kartashov
2006-11-17
The model of laminated wave turbulence puts forth a novel computational problem - construction of fast algorithms for finding exact solutions of Diophantine equations in integers of order $10^{12}$ and more. The equations to be solved in integers are resonant conditions for nonlinearly interacting waves and their form is defined by the wave dispersion. It is established that for the most common dispersion as an arbitrary function of a wave-vector length two different generic algorithms are necessary: (1) one-class-case algorithm for waves interacting through scales, and (2) two-class-case algorithm for waves interacting through phases. In our previous paper we described the one-class-case generic algorithm and in our present paper we present the two-class-case generic algorithm.
Theoretical Tools for Large Scale Structure
J. R. Bond; L. Kofman; D. Pogosyan; J. Wadsley
1998-10-06
We review the main theoretical aspects of the structure formation paradigm which impinge upon wide angle surveys: the early universe generation of gravitational metric fluctuations from quantum noise in scalar inflaton fields; the well understood and computed linear regime of CMB anisotropy and large scale structure (LSS) generation; the weakly nonlinear regime, where higher order perturbation theory works well, and where the cosmic web picture operates, describing an interconnected LSS of clusters bridged by filaments, with membranes as the intrafilament webbing. Current CMB+LSS data favour the simplest inflation-based $\\Lambda$CDM models, with a primordial spectral index within about 5% of scale invariant and $\\Omega_\\Lambda \\approx 2/3$, similar to that inferred from SNIa observations, and with open CDM models strongly disfavoured. The attack on the nonlinear regime with a variety of N-body and gas codes is described, as are the excursion set and peak-patch semianalytic approaches to object collapse. The ingredients are mixed together in an illustrative gasdynamical simulation of dense supercluster formation.
Theoretical priors on modified growth parametrisations
Song, Yong-Seon; Hollenstein, Lukas; Caldera-Cabral, Gabriela; Koyama, Kazuya E-mail: Lukas.Hollenstein@unige.ch E-mail: Kazuya.Koyama@port.ac.uk
2010-04-01
Next generation surveys will observe the large-scale structure of the Universe with unprecedented accuracy. This will enable us to test the relationships between matter over-densities, the curvature perturbation and the Newtonian potential. Any large-distance modification of gravity or exotic nature of dark energy modifies these relationships as compared to those predicted in the standard smooth dark energy model based on General Relativity. In linear theory of structure growth such modifications are often parameterised by two functions of space and time that enter the relation of the curvature perturbation to, first, the matter over-density, and second, the Newtonian potential. We investigate the predictions for these functions in Brans-Dicke theory, clustering dark energy models and interacting dark energy models. We find that each theory has a distinct path in the parameter space of modified growth. Understanding these theoretical priors on the parameterisations of modified growth is essential to reveal the nature of cosmic acceleration with the help of upcoming observations of structure formation.
RELEASE OF DRIED RADIOACTIVE WASTE MATERIALS TECHNICAL BASIS DOCUMENT
KOZLOWSKI, S.D.
2007-05-30
This technical basis document was developed to support RPP-23429, Preliminary Documented Safety Analysis for the Demonstration Bulk Vitrification System (PDSA) and RPP-23479, Preliminary Documented Safety Analysis for the Contact-Handled Transuranic Mixed (CH-TRUM) Waste Facility. The main document describes the risk binning process and the technical basis for assigning risk bins to the representative accidents involving the release of dried radioactive waste materials from the Demonstration Bulk Vitrification System (DBVS) and to the associated represented hazardous conditions. Appendices D through F provide the technical basis for assigning risk bins to the representative dried waste release accident and associated represented hazardous conditions for the Contact-Handled Transuranic Mixed (CH-TRUM) Waste Packaging Unit (WPU). The risk binning process uses an evaluation of the frequency and consequence of a given representative accident or represented hazardous condition to determine the need for safety structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls. A representative accident or a represented hazardous condition is assigned to a risk bin based on the potential radiological and toxicological consequences to the public and the collocated worker. Note that the risk binning process is not applied to facility workers because credible hazardous conditions with the potential for significant facility worker consequences are considered for safety-significant SSCs and/or TSR-level controls regardless of their estimated frequency. The controls for protection of the facility workers are described in RPP-23429 and RPP-23479. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses, as described below.
Algorithm for a microfluidic assembly line
Tobias M. Schneider; Shreyas Mandre; Michael P. Brenner
2011-01-19
Microfluidic technology has revolutionized the control of flows at small scales giving rise to new possibilities for assembling complex structures on the microscale. We analyze different possible algorithms for assembling arbitrary structures, and demonstrate that a sequential assembly algorithm can manufacture arbitrary 3D structures from identical constituents. We illustrate the algorithm by showing that a modified Hele-Shaw cell with 7 controlled flowrates can be designed to construct the entire English alphabet from particles that irreversibly stick to each other.
The Bender-Dunne basis operators as Hilbert space operators
Bunao, Joseph; Galapon, Eric A. E-mail: eric.galapon@upd.edu.ph
2014-02-15
The Bender-Dunne basis operators, $T_{-m,n} = 2^{-n} \sum_{k=0}^{n} \binom{n}{k} q^{k} p^{-m} q^{n-k}$, where q and p are the position and momentum operators, respectively, are formal integral operators in position representation on the entire real line R for positive integers n and m. We show, by explicit construction of a dense domain, that the operators $T_{-m,n}$ are densely defined operators in the Hilbert space $L^{2}(\mathbb{R})$.
Interim safety basis for fuel supply shutdown facility
Brehm, J.R.; Deobald, T.L.; Benecke, M.W.; Remaize, J.A.
1995-05-23
This ISB, in conjunction with the new TSRs, will provide the required basis for interim operation or restrictions on interim operations and administrative controls for the Facility until a SAR is prepared in accordance with the new requirements. It is concluded that the risks associated with the current operational mode of the Facility (uranium closure, cleanup, and transition activities required for permanent closure) are within Risk Acceptance Guidelines. The Facility is classified as a Moderate Hazard Facility because of the potential for an unmitigated fire associated with the uranium storage buildings.
Structural Basis for the Promiscuous Biosynthetic Prenylation of Aromatic
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Design and Analysis of Algorithms Course Page
Design and Analysis of Algorithms. TTR 3:05- 4:25, IC 109. OFFICE HOURS: Wed 11-12 or by appointment (Rm: Skiles, 116).
Parallel GPU Algorithms for Mechanical CAD
Krishnamurthy, Adarsh
2010-01-01
University of California, Berkeley, Mechanical Engineering Department. Parallel GPU Algorithms for Mechanical CAD, by Adarsh Krishnamurthy; a dissertation for the degree of Doctor of Philosophy in Engineering - Mechanical Engineering in the …
Algorithmic Cooling in Liquid State NMR
Yosi Atia; Yuval Elias; Tal Mor; Yossi Weinstein
2015-08-05
Algorithmic cooling is a method that employs thermalization to increase qubit purification level, namely it reduces the qubit-system's entropy. We utilized gradient ascent pulse engineering (GRAPE), an optimal control algorithm, to implement algorithmic cooling in liquid state nuclear magnetic resonance. Various cooling algorithms were applied onto the three qubits of $^{13}$C$_2$-trichloroethylene, cooling the system beyond Shannon's entropy bound in several different ways. In particular, in one experiment a carbon qubit was cooled by a factor of 4.61. This work is a step towards potentially integrating tools of NMR quantum computing into in vivo magnetic resonance spectroscopy.
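The entropy-compression idea behind algorithmic cooling can be sketched classically. This toy enumeration assumes the standard 3-bit compression step from the algorithmic-cooling literature (not the GRAPE/NMR implementation used in the experiment): when two scratch bits agree, their shared value is moved onto the target bit, boosting its polarization bias.

```python
from itertools import product

def compression_bias(eps):
    # Enumerate the 8 outcomes of three independent bits with bias eps,
    # i.e. P(bit = 0) = (1 + eps)/2. When b1 == b2, their common value
    # is moved onto the target bit b0; otherwise b0 is left alone.
    # Returns the target bit's new bias.
    p0 = (1 + eps) / 2
    bias = 0.0
    for bits in product([0, 1], repeat=3):
        p = 1.0
        for b in bits:
            p *= p0 if b == 0 else 1 - p0
        b0, b1, b2 = bits
        out = b1 if b1 == b2 else b0  # conditional move / swap
        bias += p * (1 if out == 0 else -1)
    return bias
```

The result matches the closed form (3*eps - eps**3)/2, i.e. a boost of roughly 3/2 for small bias, which iterated over fresh scratch bits is what pushes one qubit beyond Shannon's bound on closed-system compression.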
High-Performance Engineering Optimization: Applications, Algorithms...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
High-Performance Engineering Optimization: Applications, Algorithms, and Adoption Event Sponsor: Mathematics and Computer Science Division Start Date: Aug 19 2015 - 10:30am...
LO, NLO, LO* and jet algorithms
J. Huston
2010-01-14
The impact of NLO corrections, and in particular, the role of jet algorithms, is examined for a variety of processes at the Tevatron and LHC.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2005-02-25
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. Rev. 0 marks the first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database.
A New Basis of Geoscience: Whole-Earth Decompression Dynamics
J. Marvin Herndon
2013-07-04
Neither plate tectonics nor Earth expansion theory is sufficient to provide a basis for understanding geoscience. Each theory is incomplete and possesses problematic elements, but both have served as stepping stones to a more fundamental and inclusive geoscience theory that I call Whole-Earth Decompression Dynamics (WEDD). WEDD begins with and is the consequence of our planet's early formation as a Jupiter-like gas giant and permits deduction of: (1) Earth's internal composition, structure, and highly-reduced oxidation state; (2) Core formation without whole-planet melting; (3) Powerful new internal energy sources - proto-planetary energy of compression and georeactor nuclear fission energy; (4) Georeactor geomagnetic field generation; (5) Mechanism for heat emplacement at the base of the crust resulting in the crustal geothermal gradient; (6) Decompression driven geodynamics that accounts for the myriad of observations attributed to plate tectonics without requiring physically-impossible mantle convection; and (7) A mechanism for fold-mountain formation that does not necessarily require plate collision. The latter obviates the necessity to assume supercontinent cycles. Here, I review the principles of Whole-Earth Decompression Dynamics and describe a new underlying basis for geoscience and geology.
An efficient basis set representation for calculating electrons in molecules
Jeremiah R. Jones; Francois-Henry Rouet; Keith V. Lawler; Eugene Vecharynski; Khaled Z. Ibrahim; Samuel Williams; Brant Abeln; Chao Yang; Daniel J. Haxton; C. William McCurdy; Xiaoye S. Li; Thomas N. Rescigno
2015-07-13
The method of McCurdy, Baertschy, and Rescigno, J. Phys. B, 37, R137 (2004) is generalized to obtain a straightforward, surprisingly accurate, and scalable numerical representation for calculating the electronic wave functions of molecules. It uses a basis set of product sinc functions arrayed on a Cartesian grid, and yields 1 kcal/mol precision for valence transition energies with a grid resolution of approximately 0.1 bohr. The Coulomb matrix elements are replaced with matrix elements obtained from the kinetic energy operator. A resolution-of-the-identity approximation renders the primitive one- and two-electron matrix elements diagonal; in other words, the Coulomb operator is local with respect to the grid indices. The calculation of contracted two-electron matrix elements among orbitals requires only O(N log(N)) multiplication operations, not O(N^4), where N is the number of basis functions; N = n^3 on cubic grids. The representation not only is numerically expedient, but also produces energies and properties superior to those calculated variationally. Absolute energies, absorption cross sections, transition energies, and ionization potentials are reported for one- (He^+, H_2^+ ), two- (H_2, He), ten- (CH_4) and 56-electron (C_8H_8) systems.
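A minimal 1D sketch of a grid-based sinc representation (the standard Colbert-Miller sinc-DVR kinetic matrix, assumed here as a stand-in for the paper's 3D product-sinc basis):

```python
import numpy as np

def sinc_dvr_hamiltonian(n, dx, potential):
    # Colbert-Miller sinc-DVR on a uniform 1D grid (hbar = m = 1):
    # T_ii = pi^2 / (6 dx^2); T_ij = (-1)^(i-j) / (dx^2 (i-j)^2), i != j.
    # The potential is diagonal at the grid points, as in any DVR.
    i = np.arange(n)
    d = i[:, None] - i[None, :]
    d2 = np.where(d == 0, 1, d).astype(float) ** 2
    T = np.where(d == 0, np.pi ** 2 / 6.0, (-1.0) ** d / d2) / dx ** 2
    x = (i - (n - 1) / 2.0) * dx
    return T + np.diag(potential(x)), x

# Harmonic oscillator check: exact levels are 0.5, 1.5, 2.5, ...
H, x = sinc_dvr_hamiltonian(64, 0.2, lambda x: 0.5 * x ** 2)
```

Diagonalizing H recovers the exact oscillator spectrum to high accuracy at this modest grid resolution, illustrating the combination of a diagonal potential with a dense but structured kinetic matrix that the paper's O(N log N) machinery exploits in 3D.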
Jeongho Bang; Seung-Woo Lee; Chang-Woo Lee; Hyunseok Jeong
2014-09-17
We propose a quantum algorithm to obtain the lowest eigenstate of any Hamiltonian simulated by a quantum computer. The proposed algorithm begins with an arbitrary initial state of the simulated system. A finite series of transforms is iteratively applied to the initial state assisted with an ancillary qubit. The fraction of the lowest eigenstate in the initial state is then amplified up to $\\simeq 1$. We prove in the theoretical analysis that our algorithm can faithfully work for any arbitrary Hamiltonian. Numerical analyses are also carried out. We first provide a numerical proof-of-principle demonstration with a simple Hamiltonian in order to compare our scheme with the so-called "Demon-like algorithmic cooling (DLAC)", recently proposed in [Nature Photonics 8, 113 (2014)]. The result shows a good agreement with our theoretical analysis, exhibiting behavior comparable to the best "cooling" with the DLAC method. We then consider a random Hamiltonian model for further analysis of our algorithm. By numerical simulations, we show that the total number $n_c$ of iterations is proportional to $\\simeq {\\cal O}(D^{-1}\\epsilon^{-0.19})$, where $D$ is the difference between the two lowest eigenvalues, and $\\epsilon$ is an error defined as the probability that the finally obtained system state is in an unexpected (i.e. not the lowest) eigenstate.
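A classical analogue of the amplification idea (a sketch only, not the quantum algorithm itself): power iteration on a shifted matrix amplifies the lowest-eigenstate fraction of an arbitrary start vector.

```python
import numpy as np

def lowest_eigenpair(H, iters=500, seed=0):
    # Power iteration on M = s*I - H with s above H's largest eigenvalue:
    # M's dominant eigenvector is H's lowest eigenvector, so repeatedly
    # applying M amplifies the lowest-eigenstate component of any start.
    rng = np.random.default_rng(seed)
    s = np.linalg.norm(H, ord=2) + 1.0   # spectral norm bounds max eigenvalue
    M = s * np.eye(H.shape[0]) - H
    v = rng.standard_normal(H.shape[0])
    for _ in range(iters):
        v = M @ v
        v /= np.linalg.norm(v)
    return v @ H @ v, v
```

As in the paper's scaling, convergence slows as the gap D between the two lowest eigenvalues shrinks, since the amplification ratio per step approaches 1.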
A Game-Theoretical Dynamic Model for Electricity Markets
Oct 6, 2010 ... Abstract: We present a game-theoretical dynamic model for competitive electricity markets. We demonstrate that the model can be used to ...
Improvements of Nuclear Data and Its Uncertainties by Theoretical...
Office of Scientific and Technical Information (OSTI)
Improvements of Nuclear Data and Its Uncertainties by Theoretical Modeling Talou, Patrick Los Alamos National Laboratory; Nazarewicz, Witold University of Tennessee, Knoxville,...
Improvements of Nuclear Data and Its Uncertainties by Theoretical...
Office of Scientific and Technical Information (OSTI)
Improvements of Nuclear Data and Its Uncertainties by Theoretical Modeling Citation Details In-Document Search Title: Improvements of Nuclear Data and Its Uncertainties by...
Improvements to Nuclear Data and Its Uncertainties by Theoretical...
Office of Scientific and Technical Information (OSTI)
Technical Report: Improvements to Nuclear Data and Its Uncertainties by Theoretical Modeling Citation Details In-Document Search Title: Improvements to Nuclear Data and Its...
Theoretical/Computational Tools for Energy-Relevant Catalysis...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Theoretical/Computational Tools for Energy-Relevant Catalysis FWP. Project Description: Project Leader(s): James Evans, Mark Gordon. Principal Investigators: James Evans, Mark Gordon...
Theoretical overview on top pair production and single top production
Stefan Weinzierl
2012-01-19
In this talk I will give an overview on theoretical aspects of top quark physics. The focus lies on top pair production and single top production.
Karlický, František; Otyepka, Michal; 10.1063/1.4736998
2012-01-01
DFT calculations of the electronic structure of graphane and stoichiometrically halogenated graphene derivatives (fluorographene and other analogous graphene halides) show (i) localized orbital basis sets can be successfully and effectively used for such 2D materials; (ii) several functionals predict that the band gap of graphane is greater than that of fluorographene, whereas HSE06 gives the opposite trend; (iii) the HSE06 functional predicts quite good values of band gaps w.r.t. benchmark theoretical and experimental data; (iv) the zero band gap of graphene is opened by hydrogenation and halogenation and strongly depends on the chemical composition of mixed graphene halides; (v) the stability of graphene halides decreases sharply with increasing size of the halogen atom - fluorographene is stable, whereas graphene iodide spontaneously decomposes. In terms of band gap and stability, the C2FBr and C2HBr derivatives seem to be promising materials, e.g., for (opto)electronics applications, because their band gaps a...
When are two algorithms the same?
Andreas Blass; Nachum Dershowitz; Yuri Gurevich
2008-11-05
People usually regard algorithms as more abstract than the programs that implement them. The natural way to formalize this idea is that algorithms are equivalence classes of programs with respect to a suitable equivalence relation. We argue that no such equivalence relation exists.
Algorithmic cooling and scalable NMR quantum computers
Mor, Tal
P. Oscar Boykin, Tal Mor, Vwani … Algorithmic cooling (via polarization heat bath) is a powerful method for obtaining a large number of highly polarized (quantum) bits; algorithmic cooling cleans dirty bits beyond Shannon's bound on data compression.
Energy Aware Algorithmic Engineering Swapnoneel Roy
Rudra,, Atri
Swapnoneel Roy, School of Computing, University of North Florida; akshat.verma@in.ibm.com. Abstract: In this work, we argue that energy management should be a guiding … are simple and do not aid in the design of energy-efficient algorithms. In this work, we conducted a large number …
Enhancing Smart Home Algorithms Using Temporal Relations
Cook, Diane J.
Vikramaditya R. Jakkula and Diane J. Cook, School of Electrical Engineering and Computer Science. Abstract: Smart homes offer a potential benefit … improves the performance of these algorithms and thus enhances the ability of smart homes to monitor …
On the Existence of certain Quantum Algorithms
Bjoern Grohmann
2009-04-11
We investigate the question if quantum algorithms exist that compute the maximum of a set of conjugated elements of a given number field in quantum polynomial time. We will relate the existence of these algorithms for a certain family of number fields to an open conjecture from elementary number theory.
Note on Integer Factoring Algorithms II
N. A. Carella
2007-02-08
This note introduces a new class of integer factoring algorithms. Two versions of this method will be described, deterministic and probabilistic. These algorithms are practical, and can factor large classes of balanced integers N = pq, p < q < 2p in superpolynomial time. Further, an extension of the Fermat factoring method is proposed.
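The classical Fermat method that the note extends can be sketched as follows; it factors balanced N = pq (p < q < 2p) quickly because the search starts at ceil(sqrt(N)) and succeeds at a = (p + q)/2, which lies close to sqrt(N) when p and q are balanced:

```python
import math

def fermat_factor(n):
    # Fermat's method for odd composite n: find a such that a^2 - n is a
    # perfect square b^2; then n = (a - b)(a + b).
    a = math.isqrt(n)
    if a * a < n:
        a += 1
    while True:
        b2 = a * a - n
        b = math.isqrt(b2)
        if b * b == b2:
            return a - b, a + b
        a += 1

print(fermat_factor(10403))  # 10403 = 101 * 103
```

For unbalanced n the loop runs for about (p + q)/2 - sqrt(n) iterations, which is why the note's superpolynomial bound is restricted to the balanced class.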
A heuristic algorithm for graph isomorphism
Torres Navarro, Luz
1999-01-01
polynomial time algorithm O(n?), ISO-MT, that seems to solve the graph isomorphism decision problem correctly for all classes of graphs. Our algorithm is extremely useful from the practical point of view since counter examples (pairs of graphs for which our...
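ISO-MT itself is not reproduced in the abstract. As a hedged stand-in, the standard color-refinement (1-WL) heuristic illustrates the invariant-based approach such heuristics build on, including the failure mode the thesis alludes to: equal invariants are inconclusive, so counterexample pairs are what matter.

```python
from collections import Counter

def color_refinement(adj):
    # 1-dimensional Weisfeiler-Leman: repeatedly refine vertex colors by
    # the multiset of neighbor colors until the partition stabilizes,
    # then return the histogram of stable colors (a graph invariant).
    colors = {v: 0 for v in adj}
    while True:
        sig = {v: (colors[v], tuple(sorted(colors[u] for u in adj[v])))
               for v in adj}
        relabel = {s: i for i, s in enumerate(sorted(set(sig.values())))}
        new = {v: relabel[sig[v]] for v in adj}
        if new == colors:
            return Counter(colors.values())
        colors = new

def maybe_isomorphic(adj1, adj2):
    # Necessary condition only: different stable histograms prove
    # non-isomorphism; equal histograms are inconclusive.
    return color_refinement(adj1) == color_refinement(adj2)
```

For example, a triangle and a 3-vertex path are separated immediately, while regular graph pairs are exactly where such refinement heuristics need strengthening.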
Algorithmic Aspects of Proportional Symbol Maps
Utrecht, Universiteit
Sergio Cabello, Herman Haverkort, Marc van Kreveld. Technical report CS-2008-008, www.cs.uu.nl, ISSN 0924-3275. Abstract: Proportional symbol maps visualize numerical data associated with point locations by placing …
Quadruped Gait Learning Using Cyclic Genetic Algorithms
Hickey, Timothy J.
… and in particular, genetic algorithms, have previously been used to develop gaits for legged (primarily hexapod) robots. In a previous work, Parker made use of cyclic genetic algorithms to develop walking gaits for a hexapod robot [5]. Each of the six legs of this hexapod robot could only move vertically and horizontally, and the number …
Stochastic Search for Signal Processing Algorithm Optimization
Bryan Singer and Manuela Veloso, May … We address the complex task of signal processing optimization. We first introduce and discuss the complexities of this domain. In general, a single signal processing algorithm can be represented by a very …
Communication and Computation in Distributed CSP Algorithms
Krishnamachari, Bhaskar
César Fernández, Ramón Béjar … in the context of networked distributed systems. In order to study the performance of Distributed CSP (DisCSP) algorithms, we consider two complete DisCSP algorithms: asynchronous backtracking (ABT) and asynchronous weak commitment …
Virtual Scanning Algorithm for Road Network Surveillance
Jeong, Jaehoon "Paul"
Jaehoon Jeong, Student Member, IEEE; Yu Gu … a VIrtual Scanning Algorithm (VISA), tailored and optimized for road network surveillance. Our design … roadways and 2) the road network maps are normally known. We guarantee the detection of moving targets …
Improvements of the local bosonic algorithm
B. Jegerlehner
1996-12-15
We report on several improvements of the local bosonic algorithm proposed by M. Luescher. We find that preconditioning and over-relaxation works very well. A detailed comparison between the bosonic and the Kramers-algorithms shows comparable performance for the physical situation examined.
Pole Placement Algorithms for Symmetric Realisations
Moore, John Barratt
Pole Placement Algorithms for Symmetric Realisations. Robert Mahony, Uwe Helmke, John Moore. A numerical algorithm for determining optimal output feedback gains for the pole placement task is well defined even when an exact solution to the pole placement task does not exist. Thus, the proposed
Theoretical & Experimental Studies of Elementary Particles
McFarland, Kevin
2012-10-04
Abstract High energy physics has been one of the signature research programs at the University of Rochester for over 60 years. The group has made leading contributions to experimental discoveries at accelerators and in cosmic rays and has played major roles in developing the theoretical framework that gives us our ``standard model'' of fundamental interactions today. This award from the Department of Energy funded a major portion of that research for more than 20 years. During this time, highlights of the supported work included the discovery of the top quark at the Fermilab Tevatron, the completion of a broad program of physics measurements that verified the electroweak unified theory, the measurement of three generations of neutrino flavor oscillations, and the first observation of a ``Higgs like'' boson at the Large Hadron Collider. The work has resulted in more than 2000 publications over the period of the grant. The principal investigators supported on this grant have been recognized as leaders in the field of elementary particle physics by their peers through numerous awards and leadership positions. Most notable among them is the APS W.K.H. Panofsky Prize awarded to Arie Bodek in 2004, the J.J. Sakurai Prizes awarded to Susumu Okubo and C. Richard Hagen in 2005 and 2010, respectively, the Wigner medal awarded to Susumu Okubo in 2006, and five principal investigators (Das, Demina, McFarland, Orr, Tipton) who received Department of Energy Outstanding Junior Investigator awards during the period of this grant. The University of Rochester Department of Physics and Astronomy, which houses the research group, provides primary salary support for the faculty and has waived most tuition costs for graduate students during the period of this grant. The group also benefits significantly from technical support and infrastructure available at the University which supports the work. 
The research work of the group has provided educational opportunities for graduate students, undergraduate students and high school students and teachers. Seventy-two graduate students received a Ph.D. in physics for research supported by this grant.
Theoretical Studies of Hydrogen Storage Alloys.
Jonsson, Hannes
2012-03-22
Theoretical calculations were carried out to search for lightweight alloys that can be used to reversibly store hydrogen in mobile applications, such as automobiles. Our primary focus was on magnesium based alloys. While MgH{sub 2} is in many respects a promising hydrogen storage material, there are two serious problems which need to be solved in order to make it useful: (i) the binding energy of the hydrogen atoms in the hydride is too large, causing the release temperature to be too high, and (ii) the diffusion of hydrogen through the hydride is so slow that loading of hydrogen into the metal takes much too long. In the first year of the project, we found that the addition of ca. 15% of aluminum decreases the binding energy to the hydrogen to the target value of 0.25 eV which corresponds to release of 1 bar hydrogen gas at 100 degrees C. Also, the addition of ca. 15% of transition metal atoms, such as Ti or V, reduces the formation energy of interstitial H-atoms making the diffusion of H-atoms through the hydride more than ten orders of magnitude faster at room temperature. In the second year of the project, several calculations of alloys of magnesium with various other transition metals were carried out and systematic trends in stability, hydrogen binding energy and diffusivity established. Some calculations of ternary alloys and their hydrides were also carried out, for example of Mg{sub 6}AlTiH{sub 16}. It was found that the binding energy reduction due to the addition of aluminum and increased diffusivity due to the addition of a transition metal are both effective at the same time. This material would in principle work well for hydrogen storage but it is, unfortunately, unstable with respect to phase separation. A search was made for a ternary alloy of this type where both the alloy and the corresponding hydride are stable. Promising results were obtained by including Zn in the alloy.
Theoretical Description of the Fission Process
Witold Nazarewicz
2009-10-25
Advanced theoretical methods and high-performance computers may finally unlock the secrets of nuclear fission, a fundamental nuclear decay that is of great relevance to society. In this work, we studied the phenomenon of spontaneous fission using the symmetry-unrestricted nuclear density functional theory (DFT). Our results show that many observed properties of fissioning nuclei can be explained in terms of pathways in multidimensional collective space corresponding to different geometries of fission products. From the calculated collective potential and collective mass, we estimated spontaneous fission half-lives, and good agreement with experimental data was found. We also predicted a new phenomenon of trimodal spontaneous fission for some transfermium isotopes. Our calculations demonstrate that fission barriers of excited superheavy nuclei vary rapidly with particle number, pointing to the importance of shell effects even at large excitation energies. The results are consistent with recent experiments where superheavy elements were created by bombarding an actinide target with 48-calcium; yet even at high excitation energies, sizable fission barriers remained. Not only does this reveal clues about the conditions for creating new elements, it also provides a wider context for understanding other types of fission. Understanding of the fission process is crucial for many areas of science and technology. Fission governs existence of many transuranium elements, including the predicted long-lived superheavy species. In nuclear astrophysics, fission influences the formation of heavy elements on the final stages of the r-process in a very high neutron density environment. Fission applications are numerous. Improved understanding of the fission process will enable scientists to enhance the safety and reliability of the nation’s nuclear stockpile and nuclear reactors. 
The deployment of a fleet of safe and efficient advanced reactors, which will also minimize radiotoxic waste and be proliferation-resistant, is a goal for the advanced nuclear fuel cycles program. While in the past the design, construction, and operation of reactors were supported through empirical trials, this new phase in nuclear energy production is expected to heavily rely on advanced modeling and simulation capabilities.
Halverson, Thomas, E-mail: tom.halverson@ttu.edu; Poirier, Bill [Department of Chemistry and Biochemistry and Department of Physics, Texas Tech University, P.O. Box 41061, Lubbock, Texas 79409-1061 (United States)
2014-05-28
“Exact” quantum dynamics calculations of vibrational spectra are performed for two molecular systems of widely varying dimensionality (P{sub 2}O and CH{sub 2}NH), using a momentum-symmetrized Gaussian basis. This basis has been previously shown to defeat exponential scaling of computational cost with system dimensionality. The calculations were performed using the new “SWITCHBLADE” black-box code, which utilizes both dimensionally independent algorithms and massive parallelization to compute very large numbers of eigenstates for any fourth-order force field potential, in a single calculation. For both molecules considered here, many thousands of vibrationally excited states were computed, to at least an “intermediate” level of accuracy (tens of wavenumbers). Future modifications to increase the accuracy to “spectroscopic” levels, along with other potential future improvements of the new code, are also discussed.
Discrimination of Unitary Transformations and Quantum Algorithms
David Collins
2008-11-09
Quantum algorithms are typically understood in terms of the evolution of a multi-qubit quantum system under a prescribed sequence of unitary transformations. The input to the algorithm prescribes some of the unitary transformations in the sequence with others remaining fixed. For oracle query algorithms, the input determines the oracle unitary transformation. Such algorithms can be regarded as devices for discriminating amongst a set of unitary transformations. The question arises: "Given a set of known oracle unitary transformations, to what extent is it possible to discriminate amongst them?" We investigate this for the Deutsch-Jozsa problem. The task of discriminating amongst the admissible oracle unitary transformations results in an exhaustive collection of algorithms which can solve the problem with certainty.
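The discrimination view of the Deutsch-Jozsa problem is easiest to see in the single-qubit (Deutsch) case: one query to the oracle unitary suffices to tell the two constant functions apart from the two balanced ones with certainty. A minimal state-vector sketch, assuming nothing beyond the standard circuit (function names are illustrative, not from the paper):

```python
import numpy as np

def deutsch(f):
    """Deutsch's algorithm (the n=1 Deutsch-Jozsa case): one oracle query
    discriminates the four oracle unitaries U_f, f: {0,1} -> {0,1},
    into 'constant' vs 'balanced'."""
    H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)
    H2 = np.kron(H, H)
    # oracle unitary: |x, y> -> |x, y XOR f(x)>, basis index = 2*x + y
    U = np.zeros((4, 4))
    for x in (0, 1):
        for y in (0, 1):
            U[2 * x + (y ^ f(x)), 2 * x + y] = 1.0
    psi = np.zeros(4)
    psi[0b01] = 1.0                          # start in |0>|1>
    psi = np.kron(H, np.eye(2)) @ U @ H2 @ psi
    p1 = psi[2] ** 2 + psi[3] ** 2           # prob. first qubit reads 1
    return "balanced" if p1 > 0.5 else "constant"
```

A classical algorithm needs two queries to f; the quantum circuit distinguishes the admissible oracle unitaries in one.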
2003 Special issue Statistical efficiency of adaptive algorithms
Widrow, Bernard
2003 Special issue: Statistical efficiency of adaptive algorithms. Bernard Widrow, Max Kamenetsky. Serra Mall, Stanford, CA 94305, USA. Abstract: The statistical efficiency of a learning algorithm applied ... Two gradient descent adaptive algorithms are compared, the LMS algorithm and the LMS/Newton algorithm. LMS
EM Algorithms from a Non-Stochastic Perspective Charles Byrne
Byrne, Charles
EM Algorithms from a Non-Stochastic Perspective. Charles Byrne (Charles Byrne@uml.edu), Department ... The EM algorithm is not a single algorithm, but a template for the construction of iterative algorithms. Though known as a method for estimating parameters in statistics, the essence of the EM algorithm is not stochastic
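Since EM is a template rather than a single algorithm, each model instantiates it differently. A standard textbook instance (not from Byrne's paper; the two-coin mixture and all names here are illustrative) alternates an expectation step over the hidden variable with a maximization step for the parameters:

```python
def em_two_coins(samples, n_flips, p_init=(0.6, 0.5), iters=50):
    """EM for a mixture of two biased coins: each sample is the number of
    heads in n_flips tosses of one of two coins, coin identity hidden."""
    pA, pB = p_init
    for _ in range(iters):
        # E-step: posterior responsibility of coin A for each sample
        numA_h = numA_t = numB_h = numB_t = 0.0
        for h in samples:
            t = n_flips - h
            likA = (pA ** h) * ((1 - pA) ** t)
            likB = (pB ** h) * ((1 - pB) ** t)
            wA = likA / (likA + likB)
            numA_h += wA * h
            numA_t += wA * t
            numB_h += (1 - wA) * h
            numB_t += (1 - wA) * t
        # M-step: re-estimate each coin's head probability
        pA = numA_h / (numA_h + numA_t)
        pB = numB_h / (numB_h + numB_t)
    return pA, pB
```

On well-separated data such as `[9, 8, 9, 1, 2, 1]` heads out of 10 flips, the iterates converge to one coin near the high group's mean and one near the low group's.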
A Visualization System for Correctness Proofs of Graph Algorithms
Metaxas, Takis
A Visualization System for Correctness Proofs of Graph Algorithms. P.A. Gloor, D.B. Johnson, F. Makedon, P. Metaxas. Feb. 28, 1993. Running head: Proof Visualization of Graph Algorithms. ... correctness proofs of graph algorithms. The system has been demonstrated for a greedy algorithm, Prim's algorithm
Sketching, streaming, and sub-linear space algorithms
Reif, Rafael
Sketching, streaming, and sub-linear space algorithms. Piotr Indyk, MIT (currently at Rice U). ... algorithms are approximate. We assume a worst-case input stream; adversaries do exist. General algorithms ... Modular composition. Randomized algorithms OK (often necessary); randomness in the algorithm
QR-like Algorithms --- An Overview of Convergence Theory and Practice
QR-like Algorithms --- An Overview of Convergence Theory and Practice. David S. Watkins. Abstract: The family of GR algorithms is discussed. This includes the standard and multishift QR and LR algorithms, the Hamiltonian QR algorithm, divide-and-conquer algorithms such as the matrix sign function method, and many
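The standard member of the GR family can be sketched in a few lines. A minimal unshifted QR iteration (illustrative only; practical codes add Hessenberg reduction, shifts, and deflation, which are exactly the convergence topics Watkins surveys):

```python
import numpy as np

def qr_algorithm(A, iters=200):
    """Unshifted QR iteration: factor A_k = Q_k R_k, set A_{k+1} = R_k Q_k.

    Each step is the similarity transform A_{k+1} = Q_k^T A_k Q_k, so the
    eigenvalues are preserved; for matrices with real eigenvalues of
    distinct modulus the iterates approach upper-triangular form with the
    eigenvalues on the diagonal."""
    A = np.array(A, dtype=float)
    for _ in range(iters):
        Q, R = np.linalg.qr(A)
        A = R @ Q
    return np.sort(np.diag(A))
```

For the symmetric matrix [[2, 1], [1, 2]] (eigenvalues 1 and 3) the iteration converges linearly with ratio |1/3| per step.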
Electronic structure basis for the titanic magnetoresistance in WTe2
Pletikosic, I.; Ali, Mazhar N.; Fedorov, A. V.; Cava, R. J.; Valla, T.
2014-11-19
The electronic structure basis of the extremely large magnetoresistance in layered non-magnetic tungsten ditelluride has been investigated by angle-resolved photoelectron spectroscopy. Hole and electron pockets of approximately the same size were found at the Fermi level, suggesting that carrier compensation should be considered the primary source of the effect. The material exhibits a highly anisotropic, quasi one-dimensional Fermi surface from which the pronounced anisotropy of the magnetoresistance follows. A change in the Fermi surface with temperature was found and a high-density-of-states band that may take over conduction at higher temperatures and cause the observed turn-on behavior of the magnetoresistance in WTe2 was identified.
Waves in Open Systems via Bi-orthogonal Basis
P. T. Leung; W. -M. Suen; C. P. Sun; K. Young
1999-03-08
Dissipative quantum systems are sometimes phenomenologically described in terms of a non-hermitian hamiltonian $H$, with different left and right eigenvectors forming a bi-orthogonal basis. It is shown that the dynamics of waves in open systems can be cast exactly into this form, thus providing a well-founded realization of the phenomenological description and at the same time placing these open systems into a well-known framework. The formalism leads to a generalization of norms and inner products for open systems, which in contrast to earlier works is finite without the need for regularization. The inner product allows transcription of much of the formalism for conservative systems, including perturbation theory and second-quantization.
Spices form the basis of food pairing in Indian cuisine
Jain, Anupam; Bagler, Ganesh
2015-01-01
Culinary practices are influenced by climate, culture, history and geography. Molecular composition of recipes in a cuisine reveals patterns in food preferences. Indian cuisine encompasses a number of diverse sub-cuisines separated by geographies, climates and cultures. Its culinary system has a long history of health-centric dietary practices focused on disease prevention and promotion of health. We study food pairing in recipes of Indian cuisine to show that, in contrast to positive food pairing reported in some Western cuisines, Indian cuisine has a strong signature of negative food pairing; more the extent of flavor sharing between any two ingredients, lesser their co-occurrence. This feature is independent of recipe size and is not explained by ingredient category-based recipe constitution alone. Ingredient frequency emerged as the dominant factor specifying the characteristic flavor sharing pattern of the cuisine. Spices, individually and as a category, form the basis of ingredient composition in Indian...
A New Basis of Geoscience: Whole-Earth Decompression Dynamics
Herndon, J Marvin
2013-01-01
Neither plate tectonics nor Earth expansion theory is sufficient to provide a basis for understanding geoscience. Each theory is incomplete and possesses problematic elements, but both have served as stepping stones to a more fundamental and inclusive geoscience theory that I call Whole-Earth Decompression Dynamics (WEDD). WEDD begins with and is the consequence of our planet's early formation as a Jupiter-like gas giant and permits deduction of:(1) Earth's internal composition, structure, and highly-reduced oxidation state; (2) Core formation without whole-planet melting; (3) Powerful new internal energy sources - proto-planetary energy of compression and georeactor nuclear fission energy; (4) Georeactor geomagnetic field generation; (5) Mechanism for heat emplacement at the base of the crust resulting in the crustal geothermal gradient; (6) Decompression driven geodynamics that accounts for the myriad of observations attributed to plate tectonics without requiring physically-impossible mantle convection, an...
Electronic structure basis for the titanic magnetoresistance in WTe2
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pletikosic, I. [Princeton Univ., NJ (United States); Brookhaven National Lab. (BNL), Upton, NY (United States); Ali, Mazhar N. [Princeton Univ., NJ (United States); Fedorov, A. V. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cava, R. J. [Princeton Univ., NJ (United States); Valla, T. [Brookhaven National Lab. (BNL), Upton, NY (United States)
2014-11-01
The electronic structure basis of the extremely large magnetoresistance in layered non-magnetic tungsten ditelluride has been investigated by angle-resolved photoelectron spectroscopy. Hole and electron pockets of approximately the same size were found at the Fermi level, suggesting that carrier compensation should be considered the primary source of the effect. The material exhibits a highly anisotropic, quasi one-dimensional Fermi surface from which the pronounced anisotropy of the magnetoresistance follows. A change in the Fermi surface with temperature was found and a high-density-of-states band that may take over conduction at higher temperatures and cause the observed turn-on behavior of the magnetoresistance in WTe2 was identified.
Draft Geologic Disposal Requirements Basis for STAD Specification
Ilgen, Anastasia G.; Bryan, Charles R.; Hardin, Ernest
2015-03-25
This document provides the basis for requirements in the current version of Performance Specification for Standardized Transportation, Aging, and Disposal Canister Systems, (FCRD-NFST-2014-0000579) that are driven by storage and geologic disposal considerations. Performance requirements for the Standardized Transportation, Aging, and Disposal (STAD) canister are given in Section 3.1 of that report. Here, the requirements are reviewed and the rationale for each provided. Note that, while FCRD-NFST-2014-0000579 provides performance specifications for other components of the STAD storage system (e.g. storage overpack, transfer and transportation casks, and others), these have no impact on the canister performance during disposal, and are not discussed here.
Electronic structure basis for the extraordinary magnetoresistance in WTe2
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pletikosić, I.; Ali, Mazhar N.; Fedorov, A. V.; Cava, R. J.; Valla, T.
2014-11-19
The electronic structure basis of the extremely large magnetoresistance in layered non-magnetic tungsten ditelluride has been investigated by angle-resolved photoelectron spectroscopy. Hole and electron pockets of approximately the same size were found at the Fermi level, suggesting that carrier compensation should be considered the primary source of the effect. The material exhibits a highly anisotropic, quasi one-dimensional Fermi surface from which the pronounced anisotropy of the magnetoresistance follows. A change in the Fermi surface with temperature was found and a high-density-of-states band that may take over conduction at higher temperatures and cause the observed turn-on behavior of the magnetoresistance in WTe2 was identified.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2009-08-28
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document.
Center for Theoretical Biological Physics University of California, San Diego
Collar, Juan I.
Center for Theoretical Biological Physics, University of California, San Diego. CHEMOTAXIS TO GO ... physicists with the Center for Theoretical Biological Physics at the University of California, San Diego. CTBP is a consortium of researchers from UCSD, The Salk Institute
Summary of activities in 2009 1. THEORETICAL NUCLEAR PHYSICS GROUP
Sano, Masaki
Summary of activities in 2009. 1. THEORETICAL NUCLEAR PHYSICS GROUP. The subjects are divided into three major categories: Nuclear Structure Physics, Quantum Hadron Physics and High Energy Hadron Physics. In the Nuclear Structure group (T. Otsuka and N
Towards Cinematic Hypertext: A Theoretical and Empirical Investigation
Towards Cinematic Hypertext: A Theoretical and Empirical Investigation. Tech Report kmi-04-6. Submission February 2004; PhD awarded March 2004. Knowledge Media Institute. ... elements of these with new theoretical insights, to investigate a fourth paradigm referred to as Cinematic
Theoretical insights into multibandgap hybrid perovskites for photovoltaic applications
Theoretical insights into multibandgap hybrid perovskites for photovoltaic applications. J. Even ... We study theoretically the crystalline phases of one of the hybrids relevant for photovoltaic applications, namely CH3NH3..., evidencing inversion of band edge states. Keywords: Photovoltaic, Hybrid perovskite, density functional
Chemical Organization Theory as a Theoretical Base for Chemical Computing
Dittrich, Peter
Chemical Organization Theory as a Theoretical Base for Chemical Computing. Naoki Matsumaru, Florian ..., D-07743 Jena, Germany. http://www.minet.uni-jena.de/csb/ Submitted 14 November 2005. In chemical computing ... for programming chemical systems, a theoretical method to cope with that emergent behavior is desired
Inverting Radon Transforms: The Group-Theoretic Approach. François Rouvière
Vallette, Bruno
INVERTING RADON TRANSFORMS: THE GROUP-THEORETIC APPROACH. François Rouvière. Abstract ... of various inversion formulas from the literature on Radon transforms, obtained by group-theoretic tools such as invariant differential operators and harmonic analysis. We introduce a general concept of shifted Radon
GRAPH THEORETIC APPROACHES TO INJECTIVITY IN CHEMICAL REACTION SYSTEMS
Craciun, Gheorghe
GRAPH THEORETIC APPROACHES TO INJECTIVITY IN CHEMICAL REACTION SYSTEMS. Murad Banaji and Gheorghe Craciun. ... algebraic and graph theoretic conditions for injectivity of chemical reaction systems. After developing ... the possibility of multiple equilibria in the systems in question. Key words: Chemical reactions; Injectivity; SR
The growth of business firms: Theoretical framework and empirical evidence
Buldyrev, Sergey
The growth of business firms: Theoretical framework and empirical evidence. Dongfeng Fu, Fabio ... We model business firms as classes ... the distribution Pg(g) of business-firm growth rates. The model predicts that Pg(g) is exponential in the central part at all levels of aggregation studied.
Advanced Fuel Cycle Economic Tools, Algorithms, and Methodologies
David E. Shropshire
2009-05-01
The Advanced Fuel Cycle Initiative (AFCI) Systems Analysis supports engineering economic analyses and trade-studies, and requires a requisite reference cost basis to support adequate analysis rigor. In this regard, the AFCI program has created a reference set of economic documentation. The documentation consists of the “Advanced Fuel Cycle (AFC) Cost Basis” report (Shropshire, et al. 2007), “AFCI Economic Analysis” report, and the “AFCI Economic Tools, Algorithms, and Methodologies Report.” Together, these documents provide the reference cost basis, cost modeling basis, and methodologies needed to support AFCI economic analysis. The application of the reference cost data in the cost and econometric systems analysis models will be supported by this report. These methodologies include: the energy/environment/economic evaluation of nuclear technology penetration in the energy market—domestic and internationally—and impacts on AFCI facility deployment, uranium resource modeling to inform the front-end fuel cycle costs, facility first-of-a-kind to nth-of-a-kind learning with application to deployment of AFCI facilities, cost tradeoffs to meet nuclear non-proliferation requirements, and international nuclear facility supply/demand analysis. The economic analysis will be performed using two cost models. VISION.ECON will be used to evaluate and compare costs under dynamic conditions, consistent with the cases and analysis performed by the AFCI Systems Analysis team. Generation IV Excel Calculations of Nuclear Systems (G4-ECONS) will provide static (snapshot-in-time) cost analysis and will provide a check on the dynamic results. In future analysis, additional AFCI measures may be developed to show the value of AFCI in closing the fuel cycle. 
Comparisons can show AFCI in terms of reduced global proliferation (e.g., reduction in enrichment), greater sustainability through preservation of a natural resource (e.g., reduction in uranium ore depletion), value from weaning the U.S. from energy imports (e.g., measures of energy self-sufficiency), and minimization of future high level waste (HLW) repositories world-wide.
Advanced algorithms for information science
Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.
1998-12-31
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.
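The length-scale/frequency decomposition underlying wavelet coding is easiest to see with the Haar wavelet. A minimal sketch (illustrative, not the LANL code): one level splits a signal into a coarse average band and a detail band, and the step is exactly invertible and energy-preserving:

```python
import math

def haar_step(x):
    """One level of the orthonormal Haar wavelet transform: pairwise
    averages form the coarse (low-frequency) band, pairwise differences
    the detail (high-frequency) band. Assumes len(x) is even."""
    s = 1 / math.sqrt(2)
    avg = [(a + b) * s for a, b in zip(x[::2], x[1::2])]
    det = [(a - b) * s for a, b in zip(x[::2], x[1::2])]
    return avg, det

def haar_inverse(avg, det):
    """Exact inverse of haar_step: interleave the reconstructed pairs."""
    s = 1 / math.sqrt(2)
    out = []
    for a, d in zip(avg, det):
        out += [(a + d) * s, (a - d) * s]
    return out
```

Compression comes from quantizing or discarding small detail coefficients; applying `haar_step` recursively to the average band yields the multi-scale decomposition the report alludes to.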
An Efficient Rescaled Perceptron Algorithm for Conic Systems
Vempala, Santosh
The classical perceptron algorithm is an elementary row-action/relaxation algorithm for solving a homogeneous linear inequality system Ax > 0. A natural condition measure associated with this algorithm is the Euclidean ...
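The classical row-action scheme that the rescaled variant builds on can be sketched directly. A minimal sketch for Ax > 0 (illustrative names only; the paper's contribution is a periodic rescaling phase, not shown here, that improves the dependence on the condition measure):

```python
import numpy as np

def perceptron_feasibility(A, max_iters=10000):
    """Classical perceptron relaxation for the homogeneous system A x > 0.

    Whenever some row a_i has a_i . x <= 0, correct x by adding the
    normalized violated row. When the system has margin rho > 0, the
    number of updates is bounded by O(1/rho^2)."""
    A = np.asarray(A, dtype=float)
    x = np.zeros(A.shape[1])
    for _ in range(max_iters):
        viol = A @ x <= 0
        if not viol.any():
            return x                      # every row satisfies a_i . x > 0
        i = int(np.argmax(viol))          # index of first violated row
        x = x + A[i] / np.linalg.norm(A[i])
    return None  # no strictly feasible point found within the budget
```

On a well-conditioned instance the loop terminates after a handful of corrections.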
Statistical algorithms in the study of mammalian DNA methylation
Singer, Meromit
2012-01-01
non-overlapping CCGIs: the algorithm ... running time ... the result of the CCGI algorithm ... nodes marked along the
Two Strategies to Speed up Connected Component Labeling Algorithms
Wu, Kesheng; Otoo, Ekow; Suzuki, Kenji
2008-01-01
“... but not linear set union algorithm,” J. ACM, vol. 22, no. 2; “... analysis of set union algorithms,” J. ACM, vol. 31, no. 2; “An improved equivalence algorithm,” Commun. ACM, vol. 7, no. ...
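The set-union citations above point at the standard two-pass labeling scheme: a first pass assigns provisional labels and records equivalences in a union-find structure, and a second pass collapses each label to its root. A minimal 4-connectivity sketch (illustrative, not the authors' optimized algorithms):

```python
import numpy as np

def label_components(img):
    """Two-pass connected-component labeling (4-connectivity) of a binary
    image, using union-find with path compression to merge the
    equivalent provisional labels recorded in the first pass."""
    img = np.asarray(img)
    parent = {}

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    def union(a, b):
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[max(ra, rb)] = min(ra, rb)

    labels = np.zeros(img.shape, dtype=int)
    next_label = 1
    h, w = img.shape
    for r in range(h):
        for c in range(w):
            if not img[r, c]:
                continue
            up = labels[r - 1, c] if r > 0 else 0
            left = labels[r, c - 1] if c > 0 else 0
            if up == 0 and left == 0:
                parent[next_label] = next_label
                labels[r, c] = next_label
                next_label += 1
            else:
                labels[r, c] = min(l for l in (up, left) if l > 0)
                if up and left:
                    union(up, left)
    # second pass: replace every label by its equivalence-class root
    for r in range(h):
        for c in range(w):
            if labels[r, c]:
                labels[r, c] = find(labels[r, c])
    return labels
```

The strategies in the paper speed up exactly these two ingredients: how equivalences are recorded and how the flattening pass is organized.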
Comparison of generality based algorithm variants for automatic taxonomy generation
Madnick, Stuart E.
We compare a family of algorithms for the automatic generation of taxonomies by adapting the Heymann algorithm in various ways. The core algorithm determines the generality of terms and iteratively inserts them in a growing ...
An Alternative to Gillespie's Algorithm for Simulating Chemical Reactions
Troina, Angelo
An Alternative to Gillespie's Algorithm for Simulating Chemical Reactions. Roberto Barbuti, Andrea ... We introduce a probabilistic algorithm for the simulation of chemical reactions, which can be used ... evolution of chemical reactive systems described by Gillespie. Moreover, we use our algorithm
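For reference, the baseline the proposed algorithm is an alternative to is Gillespie's direct method: draw the waiting time from the total propensity, then choose which reaction fires in proportion to its propensity. A minimal sketch (illustrative names, not the authors' algorithm):

```python
import random

def gillespie(rates, stoich, state, t_max, seed=0):
    """Gillespie's direct-method stochastic simulation.

    rates:  list of propensity functions, each mapping state -> float
    stoich: list of state-change vectors, one per reaction
    Returns the trajectory as a list of (time, state) pairs."""
    rng = random.Random(seed)
    t = 0.0
    state = list(state)
    traj = [(0.0, tuple(state))]
    while t < t_max:
        props = [f(state) for f in rates]
        total = sum(props)
        if total == 0:
            break                               # no reaction can fire
        t += rng.expovariate(total)             # time to next reaction
        r = rng.uniform(0, total)               # pick the firing reaction
        for j, p in enumerate(props):
            r -= p
            if r <= 0 or j == len(props) - 1:
                state = [s + d for s, d in zip(state, stoich[j])]
                break
        traj.append((t, tuple(state)))
    return traj

# example: irreversible decay A -> (nothing) with rate constant k = 1.0
traj = gillespie(rates=[lambda s: 1.0 * s[0]],
                 stoich=[(-1,)], state=[50], t_max=100.0)
```

Each event costs a pass over all propensities, which is the per-step cost that alternative formulations try to reduce.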
A Note on the Finite Element Method with Singular Basis Functions
Kaneko, Hideaki
finite element analysis that incorporates singular element functions. A need for introducing some singular elements as part of basis functions in certain finite element analysis arises out ... A Note on the Finite Element Method with Singular Basis
Theoretical Investigation of Charge Transfer between N^{6+} and atomic Hydrogen
Wu, Y. [University of Georgia, Athens, GA; Stancil, P C [University of Georgia, Athens, GA; Liebermann, H. P. [Bergische Universitaet Wuppertal, Germany; Funke, P. [Bergische Universitaet Wuppertal, Germany; Rai, S. N. [Bergische Universitaet Wuppertal, Germany; Buenker, R. J. [Bergische Universitaet Wuppertal, Germany; Schultz, David Robert [ORNL; Hui, Yawei [ORNL; Draganic, Ilija N [ORNL; Havener, Charles C [ORNL
2011-01-01
Charge transfer due to collisions of ground-state N{sup 6+}(1s{sup 2} S) with atomic hydrogen has been investigated theoretically using the quantum-mechanical molecular-orbital close-coupling (QMOCC) method, in which the adiabatic potentials and nonadiabatic couplings were obtained using the multireference single- and double-excitation configuration-interaction (MRDCI) approach. Total, n-, l-, and S-resolved cross sections have been obtained for energies between 10 meV/u and 10 keV/u. The QMOCC results were compared to available experimental and theoretical data as well as to merged-beams measurements and atomic-orbital close-coupling and classical trajectory Monte Carlo calculations. The accuracy of the QMOCC charge-transfer cross sections was found to be sensitive to the accuracy of the adiabatic potentials and couplings. Consequently, we developed a method to optimize the atomic basis sets used in the MRDCI calculations for highly charged ions. Since cross sections, especially those that are state selective, are necessary input for x-ray emission simulation of heliospheric and Martian exospheric spectra arising from solar wind ion-neutral gas collisions, a recommended set of state-selective cross sections, based on our evaluation of the calculations and measurements, is provided.
Improved Sampling Algorithms in Lattice QCD
Gambhir, Arjun Singh
2015-01-01
Reverse Monte Carlo (RMC) is an algorithm that incorporates stochastic modification of the action as part of the process that updates the fields in a Monte Carlo simulation. Such update moves have the potential of lowering or eliminating potential barriers that may cause inefficiencies in exploring the field configuration space. The highly successful Cluster algorithms for spin systems can be derived from the RMC framework. In this work we apply RMC ideas to pure gauge theory, aiming to alleviate the critical slowing down observed in the topological charge evolution as well as other long distance observables. We present various formulations of the basic idea and report on our numerical experiments with these algorithms.
QCDLAB: Designing Lattice QCD Algorithms with MATLAB
Artan Borici
2006-10-09
This paper introduces QCDLAB, a design and research tool for lattice QCD algorithms. The tool, a collection of MATLAB functions, is based on a "small-code" and a "minutes-run-time" algorithmic design philosophy. The present version uses the Schwinger model on the lattice, a great simplification which shares many features and algorithms with lattice QCD. A typical computing project using QCDLAB is characterised by short codes, short run times, and the ability to make substantial changes in a few seconds. QCDLAB 1.0 can be downloaded from the QCDLAB project homepage http://phys.fshn.edu.al/qcdlab.html.
Recent Developments in Dual Lattice Algorithms
J. Wade Cherrington
2008-10-02
We review recent progress in numerical simulations with dually transformed SU(2) LGT, starting with a discussion of explicit dual amplitudes and algorithms for SU(2) pure Yang Mills in D=3 and D=4. In the D=3 case, we discuss results that validate the dual algorithm against conventional simulations. We also review how a local, exact dynamical fermion algorithm can naturally be incorporated into the dual framework. We conclude with an outlook for this technique and a look at some of the current challenges we've encountered with this method, specifically critical slowing down and the sign problem.
An Overview of LISA Data Analysis Algorithms
Edward K. Porter
2009-10-02
The development of search algorithms for gravitational wave sources in the LISA data stream is currently a very active area of research. It has become clear that not only does difficulty lie in searching for the individual sources, but in the case of galactic binaries, evaluating the fidelity of resolved sources also turns out to be a major challenge in itself. In this article we review the current status of developed algorithms for galactic binary, non-spinning supermassive black hole binary and extreme mass ratio inspiral sources. While covering the vast majority of algorithms, we will highlight those that represent the state of the art in terms of speed and accuracy.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2007-03-12
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. Rev. 0 marks the first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database. Revision numbers that are whole numbers reflect major revisions typically involving changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. Revision Log: Rev. 0 (2/25/2005) Major revision and expansion. Rev. 
0.1 (3/12/2007) Minor revision. Updated Chapters 5, 6 and 9 to reflect change in default ring calibration factor used in HEDP dose calculation software. Factor changed from 1.5 to 2.0 beginning January 1, 2007. Pages on which changes were made are as follows: 5.23, 5.69, 5.78, 5.80, 5.82, 6.3, 6.5, 6.29, 9.2.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2011-04-04
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at the U.S. Department of Energy (DOE) Hanford site. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with requirements of 10 CFR 835, the DOE Laboratory Accreditation Program, the DOE Richland Operations Office, DOE Office of River Protection, DOE Pacific Northwest Office of Science, and Hanford’s DOE contractors. The dosimetry system is operated by the Pacific Northwest National Laboratory (PNNL) Hanford External Dosimetry Program which provides dosimetry services to PNNL and all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since its inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving significant changes to all chapters in the document. 
Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. Maintenance and distribution of controlled hard copies of the manual by PNNL was discontinued beginning with Revision 0.2.
Structural basis for the antibody neutralization of Herpes simplex virus
Lee, Cheng-Chung; Lin, Li-Ling [Academia Sinica, Taipei 115, Taiwan (China)]; Chan, Woan-Eng [Development Center for Biotechnology, New Taipei City 221, Taiwan (China)]; Ko, Tzu-Ping [Academia Sinica, Taipei 115, Taiwan (China)]; Lai, Jiann-Shiun [Development Center for Biotechnology, New Taipei City 221, Taiwan (China); Ministry of Economic Affairs, Taipei 100, Taiwan (China)]; Wang, Andrew H.-J., E-mail: ahjwang@gate.sinica.edu.tw [Academia Sinica, Taipei 115, Taiwan (China); Taipei Medical University, Taipei 110, Taiwan (China)]
2013-10-01
The gD–E317-Fab complex crystal revealed the conformational epitope of human mAb E317 on HSV gD, providing a molecular basis for understanding the viral neutralization mechanism. Glycoprotein D (gD) of Herpes simplex virus (HSV) binds to a host cell surface receptor, which is required to trigger membrane fusion for virion entry into the host cell. gD has become a validated anti-HSV target for therapeutic antibody development. The highly inhibitory human monoclonal antibody E317 (mAb E317) was previously raised against HSV gD for viral neutralization. To understand the structural basis of antibody neutralization, crystals of the gD ectodomain bound to the E317 Fab domain were obtained. The structure of the complex reveals that E317 interacts with gD mainly through the heavy chain, which covers a large area for epitope recognition on gD, with a flexible N-terminal and C-terminal conformation. The epitope core structure maps to the external surface of gD, corresponding to the binding sites of two receptors, herpesvirus entry mediator (HVEM) and nectin-1, which mediate HSV infection. E317 directly recognizes the gD–nectin-1 interface and occludes the HVEM contact site of gD to block its binding to either receptor. The binding of E317 to gD also prohibits the formation of the N-terminal hairpin of gD for HVEM recognition. The major E317-binding site on gD overlaps with either the nectin-1-binding residues or the neutralizing antigenic sites identified thus far (Tyr38, Asp215, Arg222 and Phe223). The epitopes of gD for E317 binding are highly conserved between two types of human herpesvirus (HSV-1 and HSV-2). This study enables the virus-neutralizing epitopes to be correlated with the receptor-binding regions. The results further strengthen the previously demonstrated therapeutic and diagnostic potential of the E317 antibody.
Risø-R-Report: Grid fault and design-basis for wind turbines - Final report
Anca D. Hansen, Nicolaos ... Wind Energy Division, Risø-R-1714(EN), January 2010. ... "Grid fault and design-basis for wind turbines". The objective of this project has been to assess and analyze the consequences ...
Martin, A; Venkatesan, Dr V Prasanna
2011-01-01
Today, in every organization, financial analysis provides the basis for understanding and evaluating the results of business operations and for assessing how well the business is doing. It allows the organization to control operational activities primarily related to corporate finance. One way of doing this is through bankruptcy prediction. This paper develops an ontological model from the financial information of an organization by analyzing the semantics of its financial statements. One of the best-known bankruptcy prediction models is the Altman Z-score model, which uses financial ratios to predict bankruptcy. From the financial ontological model, the relations between financial data are discovered using a data mining algorithm. By combining the financial domain ontological model with an association rule mining algorithm and the Z-score model, a new business intelligence model is developed to predict bankruptcy.
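The Z-score mentioned above has a closed form. The sketch below uses the weights from Altman's original 1968 model for public manufacturing firms and the conventional cut-off zones; the ontology and rule-mining components of the paper are not reproduced here.

```python
# Altman's original (1968) Z-score for public manufacturing firms.
# A minimal sketch; weights and cut-offs are the classic published values.

def altman_z_score(working_capital, retained_earnings, ebit,
                   market_value_equity, sales, total_assets,
                   total_liabilities):
    """Z = 1.2*A + 1.4*B + 3.3*C + 0.6*D + 1.0*E, where each term
    is a financial ratio scaled by its empirically fitted weight."""
    a = working_capital / total_assets           # liquidity
    b = retained_earnings / total_assets         # cumulative profitability
    c = ebit / total_assets                      # operating efficiency
    d = market_value_equity / total_liabilities  # leverage
    e = sales / total_assets                     # asset turnover
    return 1.2 * a + 1.4 * b + 3.3 * c + 0.6 * d + 1.0 * e

def zone(z):
    # Conventional cut-offs: z > 2.99 "safe", z < 1.81 "distress".
    if z > 2.99:
        return "safe"
    if z < 1.81:
        return "distress"
    return "grey"
```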
A Cone Jet-Finding Algorithm for Heavy-Ion Collisions at LHC Energies
S-L Blyth; M J Horner; T Awes; T Cormier; H Gray; J L Klay; S R Klein; M van Leeuwen; A Morsch; G Odyniec; A Pavlinov
2006-09-15
Standard jet finding techniques used in elementary particle collisions have not been successful in the high track density of heavy-ion collisions. This paper describes a modified cone-type jet finding algorithm developed for the complex environment of heavy-ion collisions. The primary modification to the algorithm is the evaluation and subtraction of the large background energy, arising from uncorrelated soft hadrons, in each collision. A detailed analysis of the background energy and its event-by-event fluctuations has been performed on simulated data, and a method developed to estimate the background energy inside the jet cone from the measured energy outside the cone on an event-by-event basis. The algorithm has been tested using Monte-Carlo simulations of Pb+Pb collisions at $\sqrt{s}=5.5$ TeV for the ALICE detector at the LHC. The algorithm can reconstruct jets with a transverse energy of 50 GeV and above with an energy resolution of $\sim 30\%$.
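As an illustration only, here is a toy seeded-cone finder with event-by-event background subtraction in the spirit described above. It is not the ALICE algorithm: it uses a simplified per-tower background estimate (mean transverse energy of towers outside the cone) rather than an area-based one, and all names are hypothetical.

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Distance in the (eta, phi) plane, with phi wrapped to [0, pi]."""
    dphi = abs(phi1 - phi2)
    if dphi > math.pi:
        dphi = 2 * math.pi - dphi
    return math.hypot(eta1 - eta2, dphi)

def cone_jet_et(towers, R=0.7):
    """towers: list of (eta, phi, Et) calorimeter towers.
    Returns the background-subtracted cone energy around the
    highest-Et seed tower (toy model, single cone)."""
    seed = max(towers, key=lambda t: t[2])
    inside = [t for t in towers
              if delta_r(t[0], t[1], seed[0], seed[1]) < R]
    outside = [t for t in towers if t not in inside]
    raw_et = sum(t[2] for t in inside)
    # Estimate the per-tower background from towers outside the cone,
    # then subtract it for each tower inside the cone.
    bkg_per_tower = (sum(t[2] for t in outside) / len(outside)
                     if outside else 0.0)
    return raw_et - bkg_per_tower * len(inside)
```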
Structural basis of substrate discrimination and integrin binding by autotaxin
Hausmann, Jens; Kamtekar, Satwik; Christodoulou, Evangelos; Day, Jacqueline E.; Wu, Tao; Fulkerson, Zachary; Albers, Harald M.H.G.; van Meeteren, Laurens A.; Houben, Anna J.S.; van Zeijl, Leonie; Jansen, Silvia; Andries, Maria; Hall, Troii; Pegg, Lyle E.; Benson, Timothy E.; Kasiem, Mobien; Harlos, Karl; Vander Kooi, Craig W.; Smyth, Susan S.; Ovaa, Huib; Bollen, Mathieu; Morris, Andrew J.; Moolenaar, Wouter H.; Perrakis, Anastassis (Pfizer); (Leuven); (Oxford); (NCI-Netherlands); (Kentucky)
2013-09-25
Autotaxin (ATX, also known as ectonucleotide pyrophosphatase/phosphodiesterase-2, ENPP2) is a secreted lysophospholipase D that generates the lipid mediator lysophosphatidic acid (LPA), a mitogen and chemoattractant for many cell types. ATX-LPA signaling is involved in various pathologies including tumor progression and inflammation. However, the molecular basis of substrate recognition and catalysis by ATX and the mechanism by which it interacts with target cells are unclear. Here, we present the crystal structure of ATX, alone and in complex with a small-molecule inhibitor. We have identified a hydrophobic lipid-binding pocket and mapped key residues for catalysis and selection between nucleotide and phospholipid substrates. We have shown that ATX interacts with cell-surface integrins through its N-terminal somatomedin B-like domains, using an atypical mechanism. Our results define determinants of substrate discrimination by the ENPP family, suggest how ATX promotes localized LPA signaling and suggest new approaches for targeting ATX with small-molecule therapeutic agents.
Climate Change: The Physical Basis and Latest Results
None
2011-10-06
The 2007 Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) concludes: "Warming in the climate system is unequivocal." Without the contribution of Physics to climate science over many decades, such a statement would not have been possible. Experimental physics enables us to read climate archives such as polar ice cores and so provides the context for the current changes. For example, today the concentration of CO2 in the atmosphere, the second most important greenhouse gas, is 28% higher than any time during the last 800,000 years. Classical fluid mechanics and numerical mathematics are the basis of climate models from which estimates of future climate change are obtained. But major instabilities and surprises in the Earth System are still unknown. These are also to be considered when the climatic consequences of proposals for geo-engineering are estimated. Only Physics will permit us to further improve our understanding in order to provide the foundation for policy decisions facing the global climate change challenge.
Benchmarking Derivative-Free Optimization Algorithms
2008-05-13
... has encouraged a new wave of theory and algorithms. ... the solver that delivers the largest reduction within a given computational budget. ... cost per iteration. ... expensive optimization problems that arise in DOE's SciDAC initiative.
A modified convective/stratiform partitioning algorithm
Listemaa, Steven Alan
1998-01-01
... by using different radar reflectivity-rain rate relationships. Several authors have developed their own convective/stratiform partitioning schemes, but each had its limitations. An algorithm has been developed which partitions precipitating systems...
Large scale prediction models and algorithms
Monsch, Matthieu (Matthieu Frederic)
2013-01-01
Over 90% of the data available across the world has been produced over the last two years, and the trend is increasing. It has therefore become paramount to develop algorithms which are able to scale to very high dimensions. ...
An algorithmic approach to social networks
Liben-Nowell, David
2005-01-01
Social networks consist of a set of individuals and some form of social relationship that ties the individuals together. In this thesis, we use algorithmic techniques to study three aspects of social networks: (1) we analyze ...
A Saturation Algorithm for Homogeneous Binomial Ideals
Mehta, Shashank K
A Saturation Algorithm for Homogeneous Binomial Ideals. Deepanjan Kesh and Shashank K. Mehta, Indian ... An earlier attempt at computation in smaller rings is by Kesh and Mehta [7], which also requires the computation of one Gröbner basis.
Greedy Algorithms Slides by Kevin Wayne.
Kosecka, Jana
Applications: telephone, electrical, hydraulic, TV cable, computer, road networks. Approximation algorithms for NP-hard problems. ... At each step, add the cheapest edge e to T that has exactly one endpoint in T. Remark: all three ...
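The step quoted in these slides ("add the cheapest edge e to T that has exactly one endpoint in T") is the greedy rule of Prim's minimum-spanning-tree algorithm; a minimal heap-based sketch:

```python
import heapq

def prim_mst(n, edges):
    """Greedy MST (Prim): grow a tree T from vertex 0; at each step add
    the cheapest edge with exactly one endpoint in T.
    edges: list of (u, v, weight) for an undirected connected graph.
    Returns the total weight of the minimum spanning tree."""
    adj = {u: [] for u in range(n)}
    for u, v, w in edges:
        adj[u].append((w, v))
        adj[v].append((w, u))
    in_tree = [False] * n
    heap = [(0, 0)]          # (weight, vertex); weight 0 seeds vertex 0
    total = 0
    while heap:
        w, u = heapq.heappop(heap)
        if in_tree[u]:
            continue         # stale entry: u already joined the tree
        in_tree[u] = True
        total += w
        for wv, v in adj[u]:
            if not in_tree[v]:
                heapq.heappush(heap, (wv, v))
    return total
```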
Quantum algorithms for hidden nonlinear structures
Andrew M. Childs; Leonard J. Schulman; Umesh V. Vazirani
2007-05-21
Attempts to find new quantum algorithms that outperform classical computation have focused primarily on the nonabelian hidden subgroup problem, which generalizes the central problem solved by Shor's factoring algorithm. We suggest an alternative generalization, namely to problems of finding hidden nonlinear structures over finite fields. We give examples of two such problems that can be solved efficiently by a quantum computer, but not by a classical computer. We also give some positive results on the quantum query complexity of finding hidden nonlinear structures.
Quantum heuristic algorithm for traveling salesman problem
Jeongho Bang; Seokwon Yoo; James Lim; Junghee Ryu; Changhyoup Lee; Jinhyoung Lee
2012-11-06
We propose a quantum heuristic algorithm to solve the traveling salesman problem by generalizing Grover search. Sufficient conditions are derived under which the probability of finding the tours with extremal costs is greatly enhanced, reaching almost unity; these conditions are shown to be characterized by statistical properties of the tour costs. In particular, for a Gaussian distribution of the tours along the cost we show that the quantum algorithm exhibits a quadratic speedup over its classical counterpart, similarly to Grover search.
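For reference, plain Grover search (which the paper generalizes) can be simulated classically with real amplitudes. The sketch below shows the quadratic-speedup mechanism: after roughly $(\pi/4)\sqrt{N}$ oracle calls, the probability of the marked item approaches unity. Function names are illustrative, not from the paper.

```python
import math

def grover_success_probability(n_items, marked, iterations=None):
    """Simulate Grover search over n_items basis states with one marked
    item, starting from the uniform superposition. Real amplitudes
    suffice because the oracle and diffusion operators are real."""
    if iterations is None:
        # standard choice: about (pi/4) * sqrt(N) iterations
        iterations = int(round(math.pi / 4 * math.sqrt(n_items)))
    amp = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amp[marked] = -amp[marked]           # oracle: flip marked phase
        mean = sum(amp) / n_items            # diffusion: inversion
        amp = [2 * mean - a for a in amp]    # about the mean
    return amp[marked] ** 2                  # success probability
```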
TECHNICAL BASIS FOR VENTILATION REQUIREMENTS IN TANK FARMS OPERATING SPECIFICATIONS DOCUMENTS
BERGLIN, E J
2003-06-23
This report provides the technical basis for high-efficiency particulate air (HEPA) filters for Hanford tank farm ventilation systems (sometimes known as heating, ventilation and air conditioning [HVAC]) to support limits defined in Process Engineering Operating Specification Documents (OSDs). This technical basis includes a review of the older technical basis and provides clarifications, as necessary, of the justification for revised limits. This document provides an updated technical basis for tank farm ventilation systems related to the OSDs for double-shell tanks (DSTs), single-shell tanks (SSTs), double-contained receiver tanks (DCRTs), catch tanks, and various other miscellaneous facilities.
Realization of a scalable Shor algorithm
Thomas Monz; Daniel Nigg; Esteban A. Martinez; Matthias F. Brandl; Philipp Schindler; Richard Rines; Shannon X. Wang; Isaac L. Chuang; Rainer Blatt
2015-07-31
Quantum computers are able to outperform classical algorithms. This was long recognized by the visionary Richard Feynman, who pointed out in the 1980s that quantum mechanical problems are better solved with quantum machines. It was only in 1994 that Peter Shor came up with an algorithm that is able to calculate the prime factors of a large number vastly more efficiently than is known to be possible with a classical computer. This paradigmatic algorithm stimulated the flourishing research in quantum information processing and the quest for an actual implementation of a quantum computer. Over the last fifteen years, using skillful optimizations, several instances of a Shor algorithm have been implemented on various platforms and clearly proved the feasibility of quantum factoring. For general scalability, though, a different approach has to be pursued. Here, we report the realization of a fully scalable Shor algorithm as proposed by Kitaev. For this, we demonstrate factoring the number fifteen by effectively employing and controlling seven qubits and four "cache qubits", together with the implementation of generalized arithmetic operations known as modular multipliers. The scalable algorithm has been realized with an ion-trap quantum computer exhibiting success probabilities in excess of 90%.
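The number-theoretic core of Shor's algorithm is simple to state: the quantum computer's job is the order-finding step, and the rest is classical post-processing. A hedged sketch, brute-forcing the order classically (which of course forfeits the quantum speedup):

```python
import math

def order(a, n):
    """Multiplicative order of a modulo n, found by brute force here;
    this is the step a quantum computer performs efficiently."""
    r, x = 1, a % n
    while x != 1:
        x = (x * a) % n
        r += 1
    return r

def shor_factor(n, a):
    """Classical post-processing of Shor's algorithm: given a base a
    coprime to n with even order r and a^(r/2) != -1 (mod n), the
    factors are gcd(a^(r/2) +/- 1, n)."""
    assert math.gcd(a, n) == 1
    r = order(a, n)
    if r % 2 == 1:
        return None                 # odd order: retry with another base
    y = pow(a, r // 2, n)
    if y == n - 1:
        return None                 # trivial root: retry with another base
    return sorted((math.gcd(y - 1, n), math.gcd(y + 1, n)))
```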
A New Algorithm for Linear Programming
Dhananjay P. Mehendale
2015-03-28
In this paper we propose two types of new algorithms for linear programming: the first uses algebraic methods, while the second uses geometric methods. The first type is based on treating the objective function as a parameter. In this method, we form a matrix from the coefficients of the system of equations consisting of the objective equation and the equations obtained from the constraint inequalities by introducing slack/surplus variables. We obtain the reduced row echelon form of this matrix, which contains only one unknown parameter, namely the objective function itself. We analyse this matrix in reduced row echelon form and develop a clear-cut method to find the optimal solution for the problem at hand, if and when it exists. The entire optimization process can thus be developed through proper analysis of this matrix. The second type of algorithms that we propose for linear programming is inspired by geometrical considerations. These algorithms pursue the common aim of approaching closer and closer to the centroid, or some centrally located interior point, to speed up the process of reaching an optimal solution. We then show that the algebraic method developed for linear programming extends naturally to non-linear and integer programming problems, using the technique of Gröbner bases and methods for solving linear Diophantine equations, respectively.
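The reduced row echelon form the algebraic method relies on is standard Gauss-Jordan elimination; a minimal exact-arithmetic sketch (the general building block, not the paper's full optimization procedure):

```python
from fractions import Fraction

def rref(matrix):
    """Reduced row echelon form over exact rationals (Gauss-Jordan).
    matrix: list of rows of numbers; returns new rows of Fractions."""
    m = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    for col in range(cols):
        # find a row with a nonzero entry in this column
        pr = next((r for r in range(pivot_row, rows) if m[r][col] != 0),
                  None)
        if pr is None:
            continue                       # no pivot in this column
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        piv = m[pivot_row][col]
        m[pivot_row] = [x / piv for x in m[pivot_row]]   # scale pivot to 1
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]         # eliminate above and below
                m[r] = [x - factor * y
                        for x, y in zip(m[r], m[pivot_row])]
        pivot_row += 1
        if pivot_row == rows:
            break
    return m
```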
Training a Large Scale Classifier with the Quantum Adiabatic Algorithm
Hartmut Neven; Vasil S. Denchev; Geordie Rose; William G. Macready
2009-12-04
In a previous publication we proposed discrete global optimization as a method to train a strong binary classifier constructed as a thresholded sum over weak classifiers. Our motivation was to cast the training of a classifier into a format amenable to solution by the quantum adiabatic algorithm. Applying adiabatic quantum computing (AQC) promises to yield solutions that are superior to those which can be achieved with classical heuristic solvers. Interestingly, we found that by using heuristic solvers to obtain approximate solutions we could already gain an advantage over the standard method AdaBoost. In this communication we generalize the baseline method to large scale classifier training. By large scale we mean that either the cardinality of the dictionary of candidate weak classifiers or the number of weak learners used in the strong classifier exceeds the number of variables that can be handled effectively in a single global optimization. For such situations we propose an iterative and piecewise approach in which a subset of weak classifiers is selected in each iteration via global optimization. The strong classifier is then constructed by concatenating the subsets of weak classifiers. We show in numerical studies that the generalized method again successfully competes with AdaBoost. We also provide theoretical arguments as to why the proposed optimization method, which not only minimizes the empirical loss but also adds L0-norm regularization, is superior to versions of boosting that only minimize the empirical loss. By conducting a Quantum Monte Carlo simulation we gather evidence that the quantum adiabatic algorithm is able to handle a generic training problem efficiently.
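As a classical stand-in for the global optimization step described above, the sketch below selects a subset of weak classifiers by exhaustively minimizing empirical loss plus an L0 penalty, with the strong classifier formed as the sign of the summed weak outputs. All names are illustrative; the paper's actual solvers are the quantum adiabatic algorithm and classical heuristics, not brute force.

```python
from itertools import combinations

def select_weak_classifiers(weak_outputs, labels, lam=0.1, max_size=3):
    """Exhaustive subset selection: pick the subset S of weak classifiers
    minimizing (empirical 0/1 loss) + lam * |S| (an L0 penalty).
    weak_outputs[j][i] in {-1, +1} is weak classifier j on sample i;
    the strong classifier is sign(sum over selected weak outputs)."""
    n = len(labels)
    best, best_cost = (), float("inf")
    for k in range(1, max_size + 1):
        for subset in combinations(range(len(weak_outputs)), k):
            errs = 0
            for i in range(n):
                s = sum(weak_outputs[j][i] for j in subset)
                pred = 1 if s >= 0 else -1
                errs += pred != labels[i]
            cost = errs / n + lam * k        # loss plus L0 regularizer
            if cost < best_cost:
                best, best_cost = subset, cost
    return best, best_cost
```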
Meeting Shannon: Information-Theoretic Thinking in Engineering and Science
Goyal, Vivek K
Meeting Shannon: Information-Theoretic Thinking in Engineering and Science. Lav R. Varshney, Laboratory for Information and Decision Systems and Research Laboratory of Electronics, Massachusetts Institute of Technology. ... universe for deducing fundamental limits, influences the cognitive processes of information theorists ...
Neutron-Antineutron Oscillations: Theoretical Status and Experimental Prospects
Phillips, D. G.; Snow, W. M.; Babu, K.; Banerjee, S.; Baxter, D. V.; Berezhiani, Z.; Bergevin, M.; Bhattacharya, S.; Brooijmans, G.; Castellanos, L.; et al.,
2014-10-04
This paper summarizes the relevant theoretical developments, outlines some ideas to improve experimental searches for free neutron-antineutron oscillations, and suggests avenues for future improvement in the experimental sensitivity.
Software Enabled Virtually Variable Displacement Pumps -Theoretical and Experimental Studies
Li, Perry Y.
Software Enabled Virtually Variable Displacement Pumps - Theoretical and Experimental Studies. ... the functional equivalent of a variable displacement pump. This approach combines a fixed displacement pump with valve control, without many of the shortcomings of commercially available variable displacement pumps ...
Physica Scripta An International Journal for Experimental and Theoretical Physics
Stancil, Phillip C.
... universe [3]. D and T are also the fuel in a fusion device. In the core of a fusion plasma, the hydrogen ... detachment phenomenon is closely related to volume recombination in the cold divertor [4-11]. The theoretical ...
Learning by Game-Building in Theoretical Computer Science Education
Hutchins-Korte, Laura
2008-01-01
It has been suggested that theoretical computer science (TCS) suffers more than average from a lack of intrinsic motivation. The reasons provided in the literature include the difficulty of the subject, lack of relevance ...
An Axiomatisation of Computationally Adequate Domain Theoretic Models of FPC
Fiore, Marcelo P; Plotkin, Gordon
1994-01-01
Categorical models of the metalanguage FPC (a type theory with sums, products, exponentials and recursive types) are defined. Then, domain-theoretic models of FPC are axiomatised and a wide subclass of them —the ...
Shrink fit effects on rotordynamic stability: experimental and theoretical study
Jafri, Syed Muhammad Mohsin
2007-09-17
This dissertation presents an experimental and theoretical study of subsynchronous rotordynamic instability in rotors caused by interference and shrink fit interfaces. The experimental studies show the presence of strong ...
Theoretical/best practice energy use in metalcasting operations
Schifo, J. F.; Radia, J. T.
2004-05-01
This study determined the theoretical minimum energy requirements of melting processes for all ferrous and nonferrous engineering alloys. The report also details the best-practice energy consumption for the industry.
Soil and Water Assessment Tool Theoretical Documentation Version 2009
Neitsch, S.L.; Arnold, J.G.; Kiniry, J.R.; Williams, J.R.
2011-01-01
Theoretical Documentation, Version 2009, Soil & Water Assessment Tool. TR-406, College of Agriculture and Life Sciences, 2011.
Solid electrolytes for battery applications a theoretical perspective a
Holzwarth, Natalie
Solid electrolytes for battery applications: a theoretical perspective. Natalie Holzwarth, USA. Outline: introduction and motivation for solid electrolytes; what can computation do for this project?; specific examples (LiPON, thiophosphates, other solid electrolytes); suggestions for collaboration.
Theoretical Minimum Energy Use of a Building HVAC System
Tanskyi, O.
2011-01-01
This paper investigates the theoretical minimum energy use required by the HVAC system in a particular code-compliant office building. This limit might be viewed as the "Carnot efficiency" of an HVAC system. It assumes that all ventilation and air...
Theoretical Aspects of Liquid Crystals and Liquid Crystalline Polymers
Feng, James J.
Theoretical Aspects of Liquid Crystals and Liquid Crystalline Polymers. James J. Feng, Department of ..., Vancouver, British Columbia, Canada. ... theories and molecular theories separately. In addition, a theory for liquid crystalline materials has ... INTRODUCTION: Liquid crystallinity refers to an intermediate state ...
Safety evaluation of MHTGR licensing basis accident scenarios
Kroeger, P.G.
1989-04-01
The safety potential of the Modular High-Temperature Gas Reactor (MHTGR) was evaluated, based on the Preliminary Safety Information Document (PSID), as submitted by the US Department of Energy to the US Nuclear Regulatory Commission. The relevant reactor safety codes were extended for this purpose and applied to this new reactor concept, searching primarily for potential accident scenarios that might lead to fuel failures due to excessive core temperatures and/or to vessel damage, due to excessive vessel temperatures. The design basis accident scenario leading to the highest vessel temperatures is the depressurized core heatup scenario without any forced cooling and with decay heat rejection to the passive Reactor Cavity Cooling System (RCCS). This scenario was evaluated, including numerous parametric variations of input parameters, like material properties and decay heat. It was found that significant safety margins exist, but that high confidence levels in the core effective thermal conductivity, the reactor vessel and RCCS thermal emissivities and the decay heat function are required to maintain this safety margin. Severe accident extensions of this depressurized core heatup scenario included the cases of complete RCCS failure, cases of massive air ingress, core heatup without scram and cases of degraded RCCS performance due to absorbing gases in the reactor cavity. Except for no-scram scenarios extending beyond 100 hr, the fuel never reached the limiting temperature of 1600°C, below which measurable fuel failures are not expected. In some of the scenarios, excessive vessel and concrete temperatures could lead to investment losses but are not expected to lead to any source term beyond that from the circulating inventory. 19 refs., 56 figs., 11 tabs.
An online algorithm for constrained POMDPs
Undurti, Aditya
This work seeks to address the problem of planning in the presence of uncertainty and constraints. Such problems arise in many situations, including the basis of this work, which involves planning for a team of first ...
An introduction to genetic algorithms for neural networks
Cambridge, University of
Richard Kemp. 1 Introduction: Once a neural ... can use a genetic algorithm to try and solve the problem. What are genetic algorithms? Genetic algorithms (GAs) are search algorithms based on the mechanics of natural selection and genetics as observed ...
Shortest Path Discovery Problems: A Framework, Algorithms and Experimental Results
Szepesvari, Csaba
Csaba Szepesvari. ... characterize some common properties of sound SPD algorithms, propose a particular algorithm that is shown ... of the approach, whereas the proposed algorithm is shown to yield a substantial speed-up of the recognition ...
Title of dissertation: Advanced Lagrangian Simulation Algorithms for Magnetized
Anlage, Steven
ABSTRACT. Advanced Lagrangian Simulation Algorithms for Magnetized Plasma ... algorithms for problems such as these are derived and implemented. The algorithms are tested for multiple ... (toroidal). The advances presented here address two major shortcomings of conventional gyrokinetic PIC algorithms ...
Bounds on Contention Management Algorithms
Johannes Schneider, Roger Wattenhofer. ... algorithms for contention management in transactional memory: the deterministic algorithm CommitRounds and the randomized algorithm RandomizedRounds. Our randomized algorithm is efficient: in some notorious problem ...
KH Computational Physics-2015 Basic Numerical Algorithms Ordinary differential equations
Gustafsson, Torgny
The set ... at certain points xl. Kristjan Haule, 2015. ... general-purpose routine. Numerov's algorithm: y'' = f(t) y(t) (for the Schroedinger equation). Verlet algorithm: y'' = ...
Evaluation of Jet Algorithms in the Search for Sources of
Erdmann, Martin
Evaluation of Jet Algorithms in the Search for Sources of Ultra-High-Energy Cosmic Rays. Contents: ... Energy-Correlations; 2 Jet Algorithms; 2.1 Jet Algorithms in High-Energy Physics; 2.2 The SISCone Jet Algorithm; 2.2.1 The Search for Stable Cones ...
On the Statistical Efficiency of LMS Algorithms Bernard Widrow
Widrow, Bernard
Bernard Widrow, ISL, Department of Electrical ... performance. In this work, two gradient descent adaptive algorithms are compared: the LMS algorithm and the LMS/Newton algorithm. LMS is simple and practical, and is used in many applications worldwide. LMS ...
Staging Dynamic Programming Algorithms Kedar Swadi Walid Taha Oleg Kiselyov
Taha, Walid
Applications of dynamic programming (DP) algorithms are numerous, and include genetic engineering ...
Performance Evaluation of Binary Negative-Exponential Backoff Algorithm
Lee, Tae-Jin
... algorithm in the presence of transmission error and compare the performance of the DCF with the BEB. In [6], the results showed that the BNEB had better performance than the DCF with the BEB algorithm ... and compare the throughput of the DCF with the BNEB algorithm to that with the BEB algorithm. Finally, we conclude ...
Algorithms for first-order model checking Daniel Kral' (Warwick)
Banaji,. Murad
Meta-algorithms for deciding ... time algorithm of Frick and Grohe, which applies to graphs with locally bounded tree-width. In this talk, we first survey commonly applied techniques to design FPT algorithms for FO properties. We ...
Data requirements of reverse-engineering algorithms
Just, Winfried
... vastly underdetermined. It is therefore important to estimate the probability that a given algorithm ... of different algorithms can be made. We also give an example of how expected algorithm performance can ...
An Objective Method of Evaluating and Devising Storm Tracking Algorithms
Lakshmanan, Valliappa
... tracking algorithms are a key ingredient of nowcasting systems; evaluation of storm tracking algorithms ... computable bulk statistics that can be used to directly evaluate the performance of tracking algorithms ...
Optimal scaling of the ADMM algorithm for distributed quadratic ...
2014-12-11
Dec 11, 2014. ... algorithm for a class of distributed quadratic programming problems. ... Here, R(i ...
Digne, François
Motivation; Set-theoretic solutions; Strategy 1 to determine set-theoretic solutions; and Problems. (Baie de Somme) May 30th to June 2nd 2012, Eric Jespers. Lecture Outline: Motivation ...
Joel W. Walker
2014-08-29
The MT2 or "s-transverse mass" statistic was developed to associate a parent mass scale to a missing transverse energy signature, given that escaping particles are generally expected in pairs, while collider experiments are sensitive to just a single transverse momentum vector sum. This document focuses on the generalized extension of that statistic to asymmetric one- and two-step decay chains, with arbitrary child particle masses and upstream missing transverse momentum. It provides a unified theoretical formulation, complete solution classification, taxonomy of critical points, and technical algorithmic prescription for treatment of the MT2 event scale. An implementation of the described algorithm is available for download, and is also a deployable component of the author's selection cut software package AEACuS (Algorithmic Event Arbiter and Cut Selector). Appendices address combinatoric event assembly, algorithm validation, and a complete pseudocode.
A New Algorithm for Multicommodity Flow
Dhananjay P. Mehendale
2010-01-13
We propose a new algorithm to obtain the max flow for multicommodity flow. This algorithm utilizes the max-flow min-cut theorem and the well-known labeling algorithm due to Ford and Fulkerson [1]. We proceed as follows: we select one source/sink pair among the n distinguished source/sink pairs at a time and treat the given multicommodity network as a single-commodity network for the chosen source/sink pair. Then, applying the standard labeling algorithm separately for each source/sink pair, the feasible max flow and the corresponding minimum cut are obtained for each pair. A record is made of these cuts and of the paths flowing through the edges of these cuts. This record is then utilized to develop our algorithm to obtain the max flow for multicommodity flow. In this paper we also pinpoint the difficulty behind the absence of a max-flow min-cut type theorem for multicommodity flow and propose a remedy.
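The per-pair step in this abstract is the classical Ford-Fulkerson labeling procedure. As a point of reference, here is a minimal single-commodity sketch in its Edmonds-Karp (BFS labeling) variant; the function name and the dict-of-dicts capacity representation are illustrative assumptions, not code from the paper:

```python
from collections import deque

def max_flow(capacity, source, sink):
    """Ford-Fulkerson with BFS labeling (Edmonds-Karp).
    capacity: dict of dicts, capacity[u][v] = capacity of edge u->v."""
    # Build residual graph, adding reverse edges with zero capacity.
    nodes = set(capacity)
    for u in capacity:
        nodes.update(capacity[u])
    residual = {u: {} for u in nodes}
    for u in capacity:
        for v, c in capacity[u].items():
            residual[u][v] = residual[u].get(v, 0) + c
            residual[v].setdefault(u, 0)
    flow = 0
    while True:
        # Labeling phase: BFS from source for an augmenting path.
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, c in residual[u].items():
                if c > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path: current flow is maximal
        # Trace the path back and find its bottleneck capacity.
        path, v = [], sink
        while parent[v] is not None:
            path.append((parent[v], v))
            v = parent[v]
        aug = min(residual[u][v] for u, v in path)
        for u, v in path:
            residual[u][v] -= aug
            residual[v][u] += aug
        flow += aug
```

In the multicommodity procedure sketched above, such a routine would be invoked once per source/sink pair while recording the resulting minimum cuts.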
Kim, Jeongnim [ORNL] [ORNL; Reboredo, Fernando A [ORNL] [ORNL
2014-01-01
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo J. Chem. Phys. {\\bf 136}, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. {\\bf 89}, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one expanded by the lower energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace expanded by the small basis systematically converges towards the subspace expanded by the lowest energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can be also used to accelerate the calculation of the ground or excited states with Quantum Monte Carlo.
International Association for Cryptologic Research (IACR)
Statistical weaknesses in 20 RC4-like algorithms and (probably) the simplest algorithm free from statistical weaknesses. ... statistical weaknesses in 20 RC4-like algorithms including the original RC4, RC4A, PC-RC4 and others. This is achieved using a simple statistical test. We found only one algorithm which was able to pass the test ...
Möller, Torsten
ABSTRACT. Splatting is a popular direct volume rendering algorithm. However, the algorithm does ... ray-casting algorithms; existing splatting algorithms do not have an equivalent mechanism for avoiding these artifacts. In this paper we propose such a mechanism, which delivers high-quality splatted images and has the potential ...
Axiomatic Tools versus Constructive approach to Unconventional Algorithms
Gordana Dodig-Crnkovic; Mark Burgin
2012-07-03
In this paper, we analyze axiomatic issues of unconventional computations from a methodological and philosophical point of view. We explain how the new models of algorithms changed the algorithmic universe, making it open and allowing increased flexibility and creativity. However, the greater power of new types of algorithms also brought greater complexity to the algorithmic universe, demanding new tools for its study. That is why we analyze new powerful tools brought forth by the axiomatic theory of algorithms, automata and computation.
Heat Bath Algorithmic Cooling with Spins: Review and Prospects
Daniel K. Park; Nayeli A. Rodriguez-Briones; Guanru Feng; Robabeh R. Darabad; Jonathan Baugh; Raymond Laflamme
2015-01-05
Application of multiple rounds of Quantum Error Correction (QEC) is an essential milestone towards the construction of scalable quantum information processing devices. However, experimental realizations of it are still in their infancy. The requirements for multiple round QEC are high control fidelity and the ability to extract entropy from ancilla qubits. Nuclear Magnetic Resonance (NMR) based quantum devices have demonstrated high control fidelity with up to 12 qubits. On the other hand, the major challenge in the NMR QEC experiment is to efficiently supply ancilla qubits in highly pure states at the beginning of each round of QEC. Purification of qubits in NMR, or in other ensemble based quantum systems can be accomplished through Heat Bath Algorithmic Cooling (HBAC). It is an efficient method for extracting entropy from qubits that interact with a heat bath, allowing cooling below the bath temperature. For practical HBAC, coupled electron-nuclear spin systems are more promising than conventional NMR quantum processors, since electron spin polarization is about $10^3$ times greater than that of a proton under the same experimental conditions. We provide an overview on both theoretical and experimental aspects of HBAC focusing on spin and magnetic resonance based systems, and discuss the prospects of exploiting electron-nuclear coupled systems for the realization of HBAC and multiple round QEC.
Zhang, Gang; Harichandran, Ronald S.; Ramuhalli, Pradeep
2011-09-13
Delamination is a commonly observed distress in concrete bridge decks. Among all the delamination detection methods, acoustic methods have the advantages of being fast and inexpensive. In traditional acoustic inspection methods, the inspector drags a chain along or hammers on the bridge deck and detects delamination from the 'hollowness' of the sounds. The signals are often contaminated by ambient traffic noise and the detection of delamination is highly subjective. This paper describes the performance of an impact-based acoustic NDE method where the traffic noise was filtered by employing a noise cancelling algorithm and where subjectivity was eliminated by introducing feature extraction and pattern recognition algorithms. Different algorithms were compared and the best one was selected in each category. The comparison showed that the modified independent component analysis (ICA) algorithm was most effective in cancelling the traffic noise, and features consisting of mel-frequency cepstral coefficients (MFCCs) had the best performance in terms of repeatability and separability. The condition of the bridge deck was then detected by a radial basis function (RBF) neural network. The performance of the system was evaluated using both experimental and field data. The results show that the selected algorithms increase the noise robustness of acoustic methods and perform satisfactorily if the training data is representative.
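The final classification stage mentioned in this abstract is a radial basis function (RBF) network. A minimal sketch of the forward pass of a Gaussian RBF network; the function name, signature, and parameters are illustrative assumptions, and the paper's actual architecture and training procedure are not reproduced here:

```python
import math

def rbf_predict(x, centers, widths, weights, bias=0.0):
    """Forward pass of a Gaussian RBF network:
    y = bias + sum_j w_j * exp(-||x - c_j||^2 / (2 * s_j^2))."""
    y = bias
    for c, s, w in zip(centers, widths, weights):
        # squared Euclidean distance to the j-th center
        d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
        y += w * math.exp(-d2 / (2.0 * s * s))
    return y
```

In a detection pipeline like the one described, the input vector would be the extracted MFCC features and the output a delamination score or class indicator.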
Particle Merging Algorithm for PIC Codes
Vranic, Marija; Martins, Joana L; Fonseca, Ricardo A; Silva, Luis O
2014-01-01
Particle-in-cell merging algorithms aim to dynamically resample the six-dimensional phase space occupied by particles without substantially distorting the physical description of the system. Although various approaches have been proposed in previous works, none of them fully conserves charge, momentum, energy and their associated distributions. We describe here an alternative algorithm based on the coalescence of N massive or massless particles, considered to be close enough in phase space, into two new macro-particles. The local conservation of charge, momentum and energy is ensured by solving a system of scalar equations. Various simulation comparisons have been carried out with and without the merging algorithm, from classical plasma physics problems to extreme scenarios where quantum electrodynamics is taken into account, showing, in addition to the conservation of local quantities, good reproducibility of the particle distributions. In case where the number of particles o...
The cc-pV5Z-F12 basis set: reaching the basis set limit in explicitly correlated calculations
Peterson, Kirk A; Martin, Jan M L
2014-01-01
We have developed and benchmarked a new extended basis set for explicitly correlated calculations, namely cc-pV5Z-F12. It is offered in two variants, cc-pV5Z-F12 and cc-pV5Z-F12(rev2), the latter of which has additional basis functions on hydrogen not present in the cc-pVnZ-F12 (n=D,T,Q) sequence. A large uncontracted 'reference' basis set is used for benchmarking. cc-pVnZ-F12 (n=D, T, Q, 5) is shown to be a convergent hierarchy. Especially the cc-pV5Z-F12(rev2) basis set can yield the valence CCSD component of total atomization energies (TAEs), without any extrapolation, to an accuracy normally associated with aug-cc-pV{5,6}Z extrapolations. SCF components are functionally at the basis set limit, while the MP2 limit can be approached to as little as 0.01 kcal/mol without extrapolation. The determination of (T) appears to be the most difficult of the three components and cannot presently be accomplished without extrapolation or scaling. (T) extrapolation from cc-pV{T,Q}Z-F12 basis sets, combined with CCSD-F1...
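The two-point extrapolations mentioned in this abstract are conventionally done by assuming an inverse-power convergence of a correlation-energy component with the cardinal number n. A small sketch under that standard assumption; the helper name and the default exponent alpha=3 are illustrative, and the paper's exact extrapolation parameters may differ:

```python
def cbs_extrapolate(e_small, e_large, n_small, n_large, alpha=3.0):
    """Two-point complete-basis-set extrapolation assuming
    E(n) = E_CBS + A / n**alpha for cardinal numbers n."""
    f = n_large ** alpha / (n_large ** alpha - n_small ** alpha)
    # E_CBS = (E_large * L^a - E_small * S^a) / (L^a - S^a)
    return f * e_large + (1.0 - f) * e_small
```

For a (T) component computed with the T and Q basis sets, one would call cbs_extrapolate(e_T, e_Q, 3, 4).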
The genetic basis of multiple sclerosis: a model for MS susceptibility
Goodin, Douglas S
2010-01-01
The genetic basis of multiple sclerosis: a model for MS ... et al.: McAlpine's Multiple Sclerosis, 4th edition, Churchill ... familial aggregation in multiple sclerosis. Nature 1995 ...
Technical Basis and Considerations for DOE M 435.1-1 (Appendix A)
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1999-07-09
This appendix establishes the technical basis of the order revision process and of each of the requirements included in the revised radioactive waste management order.
THEORETICAL STUDIES OF NUCLEATION KINETICS AND NANODROPLET MICROSTRUCTURE
Wilemski, Gerald
2009-01-31
The goals of this project were to (1) explore ways of bridging the gap between fundamental molecular nucleation theories and phenomenological approaches based on thermodynamic reasoning, (2) test and improve binary nucleation theory, and (3) provide the theoretical underpinning for a powerful new experimental technique, small angle neutron scattering (SANS) from nanodroplet aerosols, that can probe the compositional structure of nanodroplets. This report summarizes the accomplishments of this project in realizing these goals. Publications supported by this project fall into three general categories: (1) theoretical work on nucleation theory (2) experiments and modeling of nucleation and condensation in supersonic nozzles, and (3) experimental and theoretical work on nanodroplet structure and neutron scattering. These publications are listed and briefly summarized in this report.
Theoretical and experimental investigation of heat pipe solar collector
Azad, E.
2008-09-15
Heat pipe solar collector was designed and constructed at IROST and its performance was measured on an outdoor test facility. The thermal behavior of a gravity assisted heat pipe solar collector was investigated theoretically and experimentally. A theoretical model based on effectiveness-NTU method was developed for evaluating the thermal efficiency of the collector, the inlet, outlet water temperatures and heat pipe temperature. Optimum value of evaporator length to condenser length ratio is also determined. The modelling predictions were validated using experimental data and it shows that there is a good concurrence between measured and predicted results. (author)
California at Davis, University of
... used to probe the structure of a configuration distribution, such as magnetization and structure ... Contents: Susceptibility and structure factors; Specific heat; ... Density, Rate, and Algorithmic Complexity; Redundancy ...
Chen, Sheng
Outline: Formulation; Nonlinear System Identification; Tunable RBF Model Construction; Particle Swarm Optimisation: PSO Algorithm, PSO-Aided Tunable RBF Modelling; Examples: Engine Data Set, Nonlinear Liquid Level System ...
Implementation of the Trigonometric LMS Algorithm using Original Cordic Rotation
Akhter, Nasrin; Fersouse, Lilatul; Khandaker, Faria
2010-01-01
The LMS algorithm is one of the most successful adaptive filtering algorithms. It uses the instantaneous value of the square of the error signal as an estimate of the mean-square error (MSE). The LMS algorithm changes (adapts) the filter tap weights so that the error signal is minimized in the mean-square sense. In Trigonometric LMS (TLMS) and Hyperbolic LMS (HLMS), two new versions of LMS algorithms, the same formulation is used as in the LMS algorithm, with the exception that the filter tap weights are now expressed using trigonometric and hyperbolic formulations, for TLMS and HLMS respectively. Hence the CORDIC algorithm comes into play, as it can efficiently compute trigonometric, hyperbolic, linear and logarithmic functions. While hardware-efficient algorithms often exist, the dominance of software systems has kept those algorithms out of the spotlight. Among these hardware-efficient algorithms, CORDIC is an iterative solution for trigonometric and other transcendental functions. Former researches wor...
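The tap-weight adaptation described here is the standard LMS recursion w(n+1) = w(n) + 2*mu*e(n)*u(n). A minimal sketch of the plain LMS algorithm (not the TLMS/HLMS variants of the paper; the function name and signature are illustrative assumptions):

```python
def lms(x, d, n_taps, mu):
    """Least-mean-squares adaptive FIR filter.
    x: input samples, d: desired samples, mu: step size.
    Returns the final tap weights and the error sequence."""
    w = [0.0] * n_taps
    errors = []
    for n in range(n_taps - 1, len(x)):
        u = x[n - n_taps + 1:n + 1][::-1]  # most recent sample first
        y = sum(wi * ui for wi, ui in zip(w, u))
        e = d[n] - y
        # steepest-descent step on the instantaneous squared error
        w = [wi + 2.0 * mu * e * ui for wi, ui in zip(w, u)]
        errors.append(e)
    return w, errors
```

TLMS and HLMS follow the same loop but re-express the tap weights through trigonometric or hyperbolic functions, which is where the CORDIC iterations enter.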
CRITICALITY SAFETY CONTROLS AND THE SAFETY BASIS AT PFP
Kessler, S
2009-04-21
With the implementation of DOE Order 420.1B, Facility Safety, and DOE-STD-3007-2007, 'Guidelines for Preparing Criticality Safety Evaluations at Department of Energy Non-Reactor Nuclear Facilities', a new requirement was imposed that all criticality safety controls be evaluated for inclusion in the facility Documented Safety Analysis (DSA) and that the evaluation process be documented in the site Criticality Safety Program Description Document (CSPDD). At the Hanford site in Washington State the CSPDD, HNF-31695, 'General Description of the FH Criticality Safety Program', requires each facility develop a linking document called a Criticality Control Review (CCR) to document performance of these evaluations. Chapter 5, Appendix 5B of HNF-7098, Criticality Safety Program, provided an example of a format for a CCR that could be used in lieu of each facility developing its own CCR. Since the Plutonium Finishing Plant (PFP) is presently undergoing Deactivation and Decommissioning (D&D), new procedures are being developed for cleanout of equipment and systems that have not been operated in years. Existing Criticality Safety Evaluations (CSE) are revised, or new ones written, to develop the controls required to support D&D activities. Other Hanford facilities, including PFP, had difficulty using the basic CCR out of HNF-7098 when first implemented. Interpretation of the new guidelines indicated that many of the controls needed to be elevated to TSR level controls. Criterion 2 of the standard, requiring that the consequence of a criticality be examined for establishing the classification of a control, was not addressed. Upon in-depth review by PFP Criticality Safety staff, it was not clear that the programmatic interpretation of criterion 8C could be applied at PFP. Therefore, the PFP Criticality Safety staff decided to write their own CCR. The PFP CCR provides additional guidance for the evaluation team to use by clarifying the evaluation criteria in DOE-STD-3007-2007. 
In reviewing documents used in classifying controls for Nuclear Safety, it was noted that DOE-HDBK-1188, 'Glossary of Environment, Health, and Safety Terms', defines an Administrative Control (AC) in terms that are different than typically used in Criticality Safety. As part of this CCR, a new term, Criticality Administrative Control (CAC) was defined to clarify the difference between an AC used for criticality safety and an AC used for nuclear safety. In Nuclear Safety terms, an AC is a provision relating to organization and management, procedures, recordkeeping, assessment, and reporting necessary to ensure safe operation of a facility. A CAC was defined as an administrative control derived in a criticality safety analysis that is implemented to ensure double contingency. According to criterion 2 of Section IV, 'Linkage to the Documented Safety Analysis', of DOESTD-3007-2007, the consequence of a criticality should be examined for the purposes of classifying the significance of a control or component. HNF-PRO-700, 'Safety Basis Development', provides control selection criteria based on consequence and risk that may be used in the development of a Criticality Safety Evaluation (CSE) to establish the classification of a component as a design feature, as safety class or safety significant, i.e., an Engineered Safety Feature (ESF), or as equipment important to safety; or merely provides defense-in-depth. Similar logic is applied to the CACs. Criterion 8C of DOE-STD-3007-2007, as written, added to the confusion of using the basic CCR from HNF-7098. The PFP CCR attempts to clarify this criterion by revising it to say 'Programmatic commitments or general references to control philosophy (e.g., mass control or spacing control or concentration control as an overall control strategy for the process without specific quantification of individual limits) is included in the PFP DSA'. Table 1 shows the PFP methodology for evaluating CACs. 
This evaluation process has been in use since February of 2008 and has proven to be simple and effective. Each control identified i
ITP Metal Casting: Theoretical/Best Practice Energy Use in Metalcastin...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
ITP Metal Casting: Theoretical/Best Practice Energy Use in Metalcasting Operations (doebestpractice052804.pdf)
ADVANCED MODERN PHYSICS -Theoretical Foundations World Scientific Publishing Co. Pte. Ltd.
California at Santa Cruz, University of
ADVANCED MODERN PHYSICS - Theoretical Foundations. © World Scientific Publishing Co. Pte. Ltd. http://www.worldscibooks.com/physics/7555
Algorithms for Pure Nash Equilibria in Weighted
Conati, Cristina
Algorithms for Pure Nash Equilibria in Weighted Congestion Games (Panagopoulou and Spirakis). Review ... strategies. This paper only deals with pure strategies. Game Theory: a Nash equilibrium is where players ... stay fixed. A Nash equilibrium is guaranteed to exist when players can use mixed ...
STORAGE CAPACITY ALLOCATION ALGORITHMS FOR HIERARCHICAL
Stavrakakis, Ioannis
STORAGE CAPACITY ALLOCATION ALGORITHMS FOR HIERARCHICAL CONTENT DISTRIBUTION. Nikolaos Laoutaris ... of Athens, 15784 Athens, Greece. {laoutaris,vassilis,istavrak}@di.uoa.gr. Abstract: The addition of storage ... storage budget to the nodes of a hierarchical content distribution system is formulated; optimal ...
August 1988 Computers, algorithms and mathematics
Lovász, László
0. Introduction: The development ... that it has not left untouched closely related branches of science like mathematics and its education ... mathematics of higher value than classical, structure-oriented, theorem-proof mathematics, or does it just ...
Radio Network Planning with Combinatorial Optimisation Algorithms
Boyer, Edmond
P. Calégari, F. Guidec, P. Kuonen ... @c2r.tdf.fr. Group: Software tools. Abstract: Future UMTS radio planning engineers will face difficult ... software for the optimisation of the radio network is under development. Two mathematical models ...
Disentangling Clustering Effects in Jet Algorithms
Randall Kelley; Jonathan R. Walsh; Saba Zuberi
2012-04-04
Clustering algorithms build jets though the iterative application of single particle and pairwise metrics. This leads to phase space constraints that are extremely complicated beyond the lowest orders in perturbation theory, and in practice they must be implemented numerically. This complication presents a significant barrier to gaining an analytic understanding of the perturbative structure of jet cross sections. We present a novel framework to express the jet algorithm's phase space constraints as a function of clustered groups of particles, which are the possible outcomes of the algorithm. This approach highlights the analytic properties of jet observables, rather than the explicit constraints on individual final state momenta, which can be unwieldy at higher orders. We derive the form of the n-particle phase space constraints for a jet algorithm with any measurement. We provide an expression for the measurement that makes clustering effects manifest and relates them to constraints from clustering at lower orders. The utility of this framework is demonstrated by using it to understand clustering effects for a large class of jet shape observables in the soft/collinear limit. We apply this framework to isolate divergences and analyze the logarithmic structure of the Abelian terms in the soft function, providing the all-orders form of these terms and showing that corrections from clustering start at next-to-leading logarithmic order in the exponent of the cross section.
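The "iterative application of single particle and pairwise metrics" described in this abstract can be illustrated with a generic agglomerative loop. This is a hedged toy sketch of pairwise clustering, not SISCone or any specific jet algorithm, and all names are assumptions:

```python
def cluster(objs, dist, combine, dcut):
    """Generic iterative pairwise clustering: repeatedly merge the
    closest pair until every pairwise distance exceeds dcut."""
    objs = list(objs)
    while len(objs) > 1:
        # find the closest pair under the supplied pairwise metric
        i, j = min(((i, j) for i in range(len(objs))
                    for j in range(i + 1, len(objs))),
                   key=lambda p: dist(objs[p[0]], objs[p[1]]))
        if dist(objs[i], objs[j]) > dcut:
            break  # all remaining pairs are resolved: stop clustering
        merged = combine(objs[i], objs[j])
        objs = [o for k, o in enumerate(objs) if k not in (i, j)] + [merged]
    return objs
```

The phase-space constraints analyzed in the paper arise precisely because which pairs get merged, and in what order, depends on all the final-state momenta at once.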
Fast Algorithm for Partial Covers in Words
Lonardi, Stefano
Tomasz Kociumaka, Solon P. Pissis, Jakub Radoszewski, Wojciech Rytter, Tomasz Walen. Bad Herrenalb, June 17, 2013.
Embodied Evolution: Distributing an Evolutionary Algorithm
Meeden, Lisa A.
The vision of embodied evolution described above is largely inspired by experiments in Artificial Life ... which makes it an interesting platform for future work in collective robotics and Artificial Life. ... Key words: Evolutionary Robotics, Artificial Life, Evolutionary Algorithms, Distributed Learning