Normal Basis Multiplication Algorithms for GF(2^n) (Full Version)
International Association for Cryptologic Research (IACR)
Haining Fan, Duo Liu and Yiqi Dai. fan_haining@yahoo.com
Abstract - In this paper, we propose a new normal basis multiplication algorithm for GF(2^n). This algorithm can be used to design not only fast software algorithms but also low ...
Theoretical Basis of Likelihood Methods in Molecular Phylogenetic Inference
Das, Rhiju
Phylogenetic inference for molecular data by the maximum-likelihood approach is examined from a theoretical point of view; it is seen to be a classical statistical problem involving selection between composite hypotheses. Rhiju Das, Centre ...
On the relation between the MXL family of algorithms and Gröbner basis algorithms
International Association for Cryptologic Research (IACR)
Martin R... It addresses the Polynomial System Solving (PoSSo) problem. The most efficient known algorithms reduce it to a Gröbner basis computation, on which a new family of algorithms is based (MXL, MXL2 and MXL3). By studying and describing ...
Decision Trees: More Theoretical Justification for Practical Algorithms
Fiat, Amos
Amos Fiat and Dmitry Pechyony ({...,pechyony}@tau.ac.il). Abstract. We study impurity-based decision tree algorithms such as CART, C4.5, etc., so as to better understand their theoretical underpinnings. We consider such algorithms on special forms of functions ...
Learning Active Basis Models by EM-Type Algorithms
Wu, Ying Nian
Zhangzhang Si, Haifeng Gong, Song-Chun Zhu, and Ying Nian Wu. ... incorporates ... and scales as latent variables into the image generation process, and learns the template by an EM-type scheme for learning image templates of object categories where the learning is not fully supervised.
Crawford, T. Daniel
The balance between theoretical method and basis set quality: A systematic study of equilibrium ... the best balance between theoretical method and basis set quality. This "balance" was evident ...
Two Software Normal Basis Multiplication Algorithms for GF(2^n). Haining Fan and Yiqi Dai
International Association for Cryptologic Research (IACR)
Abstract - In this paper, two different normal basis multiplication algorithms for software implementation are proposed over GF(2^n). The first algorithm is suitable for high-complexity normal bases and the second algorithm ...
Centrifuge Permeameter for Unsaturated Soils. I: Theoretical Basis and Experimental Developments
Zornberg, Jorge G.
Jorge G. Zornberg, M.ASCE; and John S. McCartney, A.M.ASCE. Abstract: A new centrifuge permeameter ... the centrifuge permeameter for concurrent determination of the soil-water retention curve (SWRC) and hydraulic ...
Storjohann, Arne
... for Integer Lattice Basis Reduction. Arne Storjohann, Eidgenössische Technische Hochschule, CH-8092 Zürich. ... a given integer lattice basis b1, b2, ..., bn ∈ Z^n into a reduced basis. The cost of L³ reduction ... The L³ reduction algorithm presented in [12] guarantees to return a basis with initial vector ...
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.; Gosink, Luke J.; Anderson, Richard M.; Hays, Spencer E.; Tardiff, Mark F.
2013-07-01T23:59:59.000Z
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm's risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, we compare in this paper the risk from two families of detection algorithms and discuss the policy implications of our results.
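The threshold-selection idea in this expected-loss framework can be sketched in a few lines. This is a toy illustration only: the prior probability, error costs, and score distributions below are invented for the example and are not the authors' values.

```python
# Pick the alarm threshold that minimizes risk = expected loss, weighing
# both error types at once. All numbers here are illustrative assumptions.
import random

random.seed(0)

P_THREAT = 0.001   # assumed prior probability a vehicle carries illicit material
COST_FN = 1000.0   # assumed consequence of a missed threat (false negative)
COST_FP = 1.0      # assumed consequence of needless secondary screening (false positive)

# Simulated detector scores: threat vehicles tend to score higher than benign ones.
benign = [random.gauss(0.0, 1.0) for _ in range(10_000)]
threat = [random.gauss(3.0, 1.0) for _ in range(10_000)]

def risk(threshold):
    """Expected loss of the rule 'send to secondary screening if score > threshold'."""
    fp_rate = sum(s > threshold for s in benign) / len(benign)
    fn_rate = sum(s <= threshold for s in threat) / len(threat)
    return (1 - P_THREAT) * fp_rate * COST_FP + P_THREAT * fn_rate * COST_FN

# Grid search over thresholds; a real system could optimize over algorithm
# families and parameters in the same way.
best = min((risk(t / 10), t / 10) for t in range(-20, 60))
print(f"minimum risk {best[0]:.4f} at threshold {best[1]:.1f}")
```

Because both error types enter one objective, changing the assumed prior or costs moves the optimal threshold automatically, which is the point of the framework.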
A theoretical analysis of a pattern recognition algorithm for bank failure prediction
Prieto Orlando, Rodrigo Javier
1994-01-01T23:59:59.000Z
A THEORETICAL ANALYSIS OF A PATTERN RECOGNITION ALGORITHM FOR BANK FAILURE PREDICTION. A Thesis by RODRIGO JAVIER PRIETO ORLANDO. Submitted to Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE. ... A Theoretical Analysis of a Pattern Recognition Algorithm for Bank Failure Prediction. (December 1994) Rodrigo Javier Prieto Orlando, B.S., Texas A&M University. Chair of Advisory Committee: Dr. Tep Sastri. This thesis describes a theoretical analysis and a series ...
The Back and Forth Nudging algorithm for data assimilation problems: theoretical results on
Boyer, Edmond
We consider the back and forth nudging algorithm that has been introduced for data assimilation purposes. ... of the system can then be seen as a control vector [LDT86]. Finally, the basic idea of stochastic methods ...
The Leap-Frog Algorithm and Optimal Control: Theoretical Aspects
Noakes, Lyle
... problems in R² with bounded controls. The local time-optimal control solution of systems linear ... Riemannian manifolds. A direct application of this algorithm to find optimal control for systems ... the bang-bang control solution of a system with bounded controls, which is well-understood in the plane. Key words: Time-optimal ...
Decision-theoretic consideration of robust hashing: link to practical algorithms
Genève, Université de
Oleksiy Koval ... of digital and analog content, as well as goods and products, justifying an urgent need for reliable ... document privacy, as well as universality to provide asymptotic independence to a complete or partial lack of prior ...
Quinn, M.J.
1983-01-01T23:59:59.000Z
The problem of developing efficient algorithms and data structures to solve graph theoretic problems on tightly-coupled MIMD computers is addressed. Several approaches to parallelizing a serial algorithm are examined. A technique is developed which allows the prediction of the expected execution time of some kinds of parallel algorithms. This technique can be used to determine which parallel algorithm is best for a particular application. Two parallel approximate algorithms for the Euclidean traveling salesman problem are designed and analyzed. The algorithms are parallelizations of the farthest-insertion heuristic and Karp's partitioning algorithm. Software lockout, the delay of processes due to contention for shared data structures, can be a significant hindrance to obtaining satisfactory speedup. Using the tactics of indirection and replication, new data structures are devised which can reduce the severity of software lockout. Finally, an upper bound to the speedup of parallel branch-and-bound algorithms which use the best-bound search strategy is determined.
Vincenzo Tamma
2015-05-18T23:59:59.000Z
We describe a novel analogue algorithm that allows the simultaneous factorization of an exponential number of large integers with a polynomial number of experimental runs. It is the interference-induced periodicity of "factoring" interferograms measured at the output of an analogue computer that allows the selection of the factors of each integer [1,2,3,4]. At the present stage the algorithm manifests an exponential scaling which may be overcome by an extension of this method to correlated qubits emerging from n-order quantum correlations measurements. We describe the conditions for a generic physical system to compute such an analogue algorithm. A particular example given by an "optical computer" based on optical interference will be addressed in the second paper of this series [5].
Liang, Min
2012-01-01T23:59:59.000Z
Public-key cryptosystems for quantum messages are considered from two aspects: public-key encryption and public-key authentication. Firstly, we propose a general construction of quantum public-key encryption scheme, and then construct an information-theoretically secure instance. Then, we propose a quantum public-key authentication scheme, which can protect the integrity of quantum messages. This scheme can both encrypt and authenticate quantum messages. It is information-theoretically secure with regard to encryption, and the success probability of tampering decreases exponentially with the security parameter with regard to authentication. Compared with classical public-key cryptosystems, one private-key in our schemes corresponds to an exponential number of public-keys, and every quantum public-key used by the sender is an unknown quantum state to the sender.
Min Liang; Li Yang
2012-05-10T23:59:59.000Z
Mintert, James R.; Davis, Ernest E.; Dhuyvetter, Kevin C.; Bevers, Stan
1999-06-23T23:59:59.000Z
Livestock Basis. James Mintert, Ernest E. Davis, Kevin Dhuyvetter and Stan Bevers. Basis is the difference between the local cash market price and a futures contract price (Basis = Cash Price − Futures Price). Knowledge of historical basis patterns can ...
Mintert, James R.; Davis, Ernest E.; Dhuyvetter, Kevin C.; Bevers, Stan
1999-06-23T23:59:59.000Z
explains how livestock basis is computed, outlines an approach to developing a history of local basis levels, and discusses how historical basis data can be used to forecast basis....
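The basis arithmetic these two fact sheets describe can be sketched directly; the prices below are hypothetical examples, not data from the publication.

```python
# Basis = Cash Price - Futures Price; a history of local basis levels is
# averaged to forecast the local cash price from a futures quote.
# All prices ($/cwt) are made-up example data.

cash_prices =    [68.50, 70.25, 72.10]   # hypothetical local cash prices
futures_prices = [70.00, 71.00, 74.00]   # hypothetical nearby futures prices

basis_history = [c - f for c, f in zip(cash_prices, futures_prices)]
print("historical basis:", basis_history)  # roughly [-1.5, -0.75, -1.9]

# Forecast: futures quote plus expected (historical average) basis.
expected_basis = sum(basis_history) / len(basis_history)
futures_quote = 73.00
forecast_cash = futures_quote + expected_basis
print(f"expected basis {expected_basis:.2f}, forecast cash {forecast_cash:.2f}")
```

A negative basis means the local cash market trades under the futures price, which is why the forecast cash price here lands below the futures quote.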
Nikolova, Evdokia Velinova
2009-01-01T23:59:59.000Z
Classical algorithms from theoretical computer science arise time and again in practice. However, practical situations typically do not fit precisely into the traditional theoretical models. Additional necessary components ...
Paris-Sud XI, Université de
Discrete Mathematics and Theoretical Computer Science, DMTCS vol. 14:1, 2012, 147-158. Zbigniew Lonc and Pawel Naroski. A linear time ... in H such that each edge of H appears in this sequence exactly once and v_{i-1}, v_i ∈ e_i, v_{i-1} ≠ v_i ...
Alasdair Macleod
2007-08-23T23:59:59.000Z
MOND is a phenomenological theory with no apparent physical justification which seems to undermine some of the basic principles that underpin established theoretical physics. It is nevertheless remarkably successful over its sphere of application and this suggests MOND may have some physical basis. It is shown here that two simple axioms pertaining to fundamental principles will reproduce the characteristic behaviour of MOND, though the axioms are in conflict with general relativistic cosmology.
2012-03-14T23:59:59.000Z
Index Terms—Basis pursuit, distributed optimization, sensor networks, augmented ... and image denoising and restoration [1], [2], compression, fitting and ...
Algorithmic and Theoretical Considerations for Computing ...
Steven Glenn Jackson and Alfred Gérard Noël (Speaker)
2009-03-10T23:59:59.000Z
Mar 12, 2009 ... S(g)^K_r denotes the subalgebra of S(g)^K defined by K-invariant polynomials of degree at most r. Steven Glenn Jackson and Alfred Gérard Noël ...
Ghelli, Giorgio
Databases: Functionality, Design, Querying (Basi di dati: Funzionalità, Progettazione, Interrogazione). Giorgio Ghelli. Topics: functionality and use of DBMSs; design of a database; querying a database. References: A. Albano, G. Ghelli, R. Orsini, Basi di Dati Relazionali e ...
Sharkey, Keeper L. (Department of Chemistry, University of Arizona, Tucson, Arizona 85721, United States); Adamowicz, Ludwik (Department of Chemistry and Department of Physics, University of Arizona, Tucson, Arizona 85721, United States)
2014-05-07T23:59:59.000Z
An algorithm for quantum-mechanical nonrelativistic variational calculations of L = 0 and M = 0 states of atoms with an arbitrary number of s electrons and with three p electrons has been implemented and tested in the calculations of the ground ⁴S state of the nitrogen atom. The spatial part of the wave function is expanded in terms of all-electron explicitly correlated Gaussian functions with the appropriate pre-exponential Cartesian angular factors for states with the L = 0 and M = 0 symmetry. The algorithm includes formulas for calculating the Hamiltonian and overlap matrix elements, as well as formulas for calculating the analytic energy gradient determined with respect to the Gaussian exponential parameters. The gradient is used in the variational optimization of these parameters. The Hamiltonian used in the approach is obtained by rigorously separating the center-of-mass motion from the laboratory-frame all-particle Hamiltonian, and thus it explicitly depends on the finite mass of the nucleus. With that, the mass effect on the total ground-state energy is determined.
R.J. Garrett
2002-01-14T23:59:59.000Z
As part of the internal Integrated Safety Management Assessment verification process, it was determined that there was a lack of documentation that summarizes the safety basis of the current Yucca Mountain Project (YMP) site characterization activities. It was noted that a safety basis would make it possible to establish a technically justifiable graded approach to the implementation of the requirements identified in the Standards/Requirements Identification Document. The Standards/Requirements Identification Documents commit a facility to compliance with specific requirements and, together with the hazard baseline documentation, provide a technical basis for ensuring that the public and workers are protected. This Safety Basis Report has been developed to establish and document the safety basis of the current site characterization activities, establish and document the hazard baseline, and provide the technical basis for identifying structures, systems, and components (SSCs) that perform functions necessary to protect the public, the worker, and the environment from hazards unique to the YMP site characterization activities. This technical basis for identifying SSCs serves as a grading process for the implementation of programs such as Conduct of Operations (DOE Order 5480.19) and the Suspect/Counterfeit Items Program. In addition, this report provides a consolidated summary of the hazards analyses processes developed to support the design, construction, and operation of the YMP site characterization facilities and, therefore, provides a tool for evaluating the safety impacts of changes to the design and operation of the YMP site characterization activities.
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2007-07-11T23:59:59.000Z
The Guide assists DOE/NNSA field elements and operating contractors in identifying and analyzing hazards at facilities and sites to provide the technical planning basis for emergency management programs. Cancels DOE G 151.1-1, Volume 2.
The Brain Basis of Emotion
Barrett, Lisa Feldman
The brain basis of emotion: A meta... Building 149, Charlestown, MA 02129. lindqukr@nmr.mgh.harvard.edu. Abstract: Researchers have wondered how the brain creates emotions since the early days of psychological science.
Michele Mosca
2008-08-04T23:59:59.000Z
This article surveys the state of the art in quantum computer algorithms, including both black-box and non-black-box results. It is infeasible to detail all the known quantum algorithms, so a representative sample is given. This includes a summary of the early quantum algorithms, a description of the Abelian Hidden Subgroup algorithms (including Shor's factoring and discrete logarithm algorithms), quantum searching and amplitude amplification, quantum algorithms for simulating quantum mechanical systems, several non-trivial generalizations of the Abelian Hidden Subgroup Problem (and related techniques), the quantum walk paradigm for quantum algorithms, the paradigm of adiabatic algorithms, a family of ``topological'' algorithms, and algorithms for quantum tasks which cannot be done by a classical computer, followed by a discussion.
Radioactive Waste Management Basis
Perkins, B K
2009-06-03T23:59:59.000Z
The purpose of this Radioactive Waste Management Basis is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
Algorithms for active learning
Hsu, Daniel Joseph
2010-01-01T23:59:59.000Z
Contents excerpt: 6.2 Algorithms; 6.2.1 CAL algorithm; ...; IWAL-CAL algorithm; ...
Graph-Theoretic Connectivity Control of Mobile Robot Networks
Pappas, George J.
... [23]. This research has given rise to connectivity or topology control algorithms that regulate the transmission power ... This invited paper develops an analysis for groups of vehicles connected by a communication network; control laws are formulated ...
Approximating Power Indices --Theoretical and Empirical Analysis
Rosenschein, Jeff
Yoram Bachrach (School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel) and Amin Saberi (Department of Management Science ...). The paper provides lower bounds for both deterministic and randomized algorithms for calculating power indices.
Quantum Public-Key Encryption with Information Theoretic Security
Jiangyou Pan; Li Yang
2012-02-20T23:59:59.000Z
We propose a definition for the information-theoretic security of a quantum public-key encryption scheme, and present bit-oriented and two-bit-oriented encryption schemes satisfying our security definition via the introduction of a new public-key algorithm structure. We extend the scheme to a multi-bit-oriented one, and conjecture that it is also information theoretically secure, depending directly on the structure of our new algorithm.
Euclid's Algorithm, Gauss' Elimination and Buchberger's Algorithm
International Association for Cryptologic Research (IACR)
Shaohua Zhang, School of Mathematics, Shandong University, Jinan, Shandong, 250100, PRC. Abstract: It is known that Euclid's algorithm, Gauss' elimination and Buchberger's algorithm play important roles in algorithmic number theory ...
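As a reminder of the first algorithm in this trio, a minimal sketch of Euclid's algorithm (Gauss' elimination and Buchberger's algorithm extend the same reduce-and-remainder pattern to linear systems and polynomial ideals):

```python
def gcd(a, b):
    """Greatest common divisor via repeated remainder (Euclid's algorithm)."""
    while b:
        a, b = b, a % b  # replace (a, b) by (b, a mod b) until the remainder is 0
    return a

print(gcd(252, 105))  # → 21
```

Each step strictly shrinks the second argument, so termination is guaranteed, and the invariant gcd(a, b) = gcd(b, a mod b) carries the answer through every iteration.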
Information-theoretic Approaches to Branching in Search Andrew Gilpin
Sandholm, Tuomas W.
Andrew Gilpin, Computer Science Department ... constraints over sets of variables. Search is a fundamental technique for problem solving in AI ... We introduce the information-theoretic paradigm for branching question selection in search algorithms.
Wang, Kunpeng; Chai, Yi (College of Automation, Chongqing University, Chongqing 400044, China); Su, Chunxiao (Research Center of Laser Fusion, CAEP, P.O. Box 919-983, Mianyang 621900, China)
2013-08-15T23:59:59.000Z
In this paper, we consider the problem of extracting the desired signals from noisy measurements. This is a classical problem of signal recovery which is of paramount importance in inertial confinement fusion. To accomplish this task, we develop a tractable algorithm based on continuous basis pursuit and reweighted ℓ1-minimization. By modeling the observed signals as a superposition of scaled, time-shifted copies of a theoretical waveform, structured noise, and unstructured noise on a finite time interval, a sparse optimization problem is obtained. We propose to solve this problem through an iterative procedure that alternates between convex optimization to estimate the amplitude, and local optimization to estimate the dictionary. The performance of the method was evaluated both numerically and experimentally. Numerically, we recovered theoretical signals embedded in increasing amounts of unstructured noise and compared the results with those obtained through popular denoising methods. We also applied the proposed method to a set of actual experimental data acquired from the Shenguang-II laser whose energy was below the detector noise-equivalent energy. Both simulation and experiments show that the proposed method improves the signal recovery performance and extends the dynamic detection range of detectors.
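The paper's continuous basis pursuit solver is specialized to waveform dictionaries, but the reweighted ℓ1 idea it builds on can be illustrated on a generic sparse-recovery toy. Everything below (dimensions, data, the plain coordinate-descent solver) is our own illustrative choice, not the authors' algorithm.

```python
# Reweighted l1-minimization sketch: solve a weighted lasso, then raise the
# penalty on coefficients that came out small, and repeat.
import random

random.seed(1)

def soft(v, t):
    """Soft-thresholding operator, the prox of t*|.|."""
    return v - t if v > t else v + t if v < -t else 0.0

def weighted_lasso(A, y, w, lam, sweeps=200):
    """Coordinate descent for 0.5*||Ax - y||^2 + lam * sum_j w[j]*|x[j]|."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for j in range(n):
            col = [A[i][j] for i in range(m)]
            # Residual with coordinate j's contribution removed.
            r = [y[i] - sum(A[i][k] * x[k] for k in range(n) if k != j)
                 for i in range(m)]
            rho = sum(col[i] * r[i] for i in range(m))
            x[j] = soft(rho, lam * w[j]) / sum(c * c for c in col)
    return x

def reweighted_l1(A, y, lam=0.01, eps=1e-2, rounds=3):
    """Reweighted l1: small coefficients get larger penalties each round."""
    w = [1.0] * len(A[0])
    for _ in range(rounds):
        x = weighted_lasso(A, y, w, lam)
        w = [1.0 / (abs(v) + eps) for v in x]
    return x

# Synthetic demo: recover a 2-sparse vector from 12 random measurements.
m, n = 12, 20
x_true = [0.0] * n
x_true[3], x_true[11] = 2.0, -1.5
A = [[random.gauss(0, 1) for _ in range(n)] for _ in range(m)]
y = [sum(A[i][j] * x_true[j] for j in range(n)) for i in range(m)]
x_hat = reweighted_l1(A, y)
print(sorted(j for j in range(n) if abs(x_hat[j]) > 0.5))
```

The reweighting step is what sharpens sparsity: after the first round, near-zero coefficients receive a penalty of roughly 1/eps, which drives them exactly to zero on the next solve.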
High performance parallel algorithms for incompressible flows
Sambavaram, Sreekanth Reddy
2002-01-01T23:59:59.000Z
... innovative algorithms using solenoidal basis methods to solve the generalized Stokes problem for 3D MAC (Marker and Cell) and 2D unstructured P1-isoP1 finite element grids. It details a localized algebraic approach to constructing solenoidal bases. An efficient ...
Basis Token Consistency: A Practical Mechanism for Strong Web Cache Consistency
call "Basis Token Consistency" or BTC; when implemented at the server, this mechanism allows any ... between the BTC algorithm and the use of the Time-To-Live (TTL) heuristic. This research was supported ...
Topics in Approximation Algorithms
Khare, Monik
2012-01-01T23:59:59.000Z
Contents excerpt: 2.4 Hybrid Algorithm; Empirical study of algorithms for packing and covering; 2.3.1 CPLEX algorithms ...
Papalaskari, Mary-Angela
Analysis of Algorithms - Lecture 2. Topics: time efficiency; space efficiency; optimality. Approaches: theoretical analysis of time efficiency; empirical analysis of time efficiency.
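The contrast these lecture slides draw between theoretical and empirical analysis of time efficiency can be demonstrated directly; the algorithm choices and input sizes below are ours, not the lecture's.

```python
# Empirical timing of two search algorithms whose theoretical growth rates
# differ: linear scan is O(n), binary search on sorted input is O(log n).
import timeit

def linear_search(xs, target):
    for i, x in enumerate(xs):
        if x == target:
            return i
    return -1

def binary_search(xs, target):
    lo, hi = 0, len(xs) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        if xs[mid] == target:
            return mid
        if xs[mid] < target:
            lo = mid + 1
        else:
            hi = mid - 1
    return -1

# Worst case for the linear scan: the target sits at the end of the list.
for n in (1_000, 10_000, 100_000):
    xs = list(range(n))
    t_lin = timeit.timeit(lambda: linear_search(xs, n - 1), number=20)
    t_bin = timeit.timeit(lambda: binary_search(xs, n - 1), number=20)
    print(f"n={n:>7}: linear {t_lin:.5f}s, binary {t_bin:.5f}s")
```

The measured linear-scan time should grow roughly tenfold per row while the binary-search time barely moves, which is exactly what the theoretical O(n) versus O(log n) analysis predicts.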
Control algorithms for dynamic attenuators
Hsieh, Scott S., E-mail: sshsieh@stanford.edu (Department of Radiology and Department of Electrical Engineering, Stanford University, Stanford, California 94305, United States); Pelc, Norbert J. (Department of Radiology and Department of Bioengineering, Stanford University, Stanford, California 94305, United States)
2014-06-15T23:59:59.000Z
Purpose: The authors describe algorithms to control dynamic attenuators in CT and compare their performance using simulated scans. Dynamic attenuators are prepatient beam shaping filters that modulate the distribution of x-ray fluence incident on the patient on a view-by-view basis. These attenuators can reduce dose while improving key image quality metrics such as peak or mean variance. In each view, the attenuator presents several degrees of freedom which may be individually adjusted. The total number of degrees of freedom across all views is very large, making many optimization techniques impractical. The authors develop a theory for optimally controlling these attenuators. Special attention is paid to a theoretically perfect attenuator which controls the fluence for each ray individually, but the authors also investigate and compare three other, practical attenuator designs which have been previously proposed: the piecewise-linear attenuator, the translating attenuator, and the double wedge attenuator. Methods: The authors pose and solve the optimization problems of minimizing the mean and peak variance subject to a fixed dose limit. For a perfect attenuator and mean variance minimization, this problem can be solved in simple, closed form. For other attenuator designs, the problem can be decomposed into separate problems for each view to greatly reduce the computational complexity. Peak variance minimization can be approximately solved using iterated, weighted mean variance (WMV) minimization. Also, the authors develop heuristics for the perfect and piecewise-linear attenuators which do not require a priori knowledge of the patient anatomy. The authors compare these control algorithms on different types of dynamic attenuators using simulated raw data from forward projected DICOM files of a thorax and an abdomen.
Results: The translating and double wedge attenuators reduce dose by an average of 30% relative to current techniques (bowtie filter with tube current modulation) without increasing peak variance. The 15-element piecewise-linear dynamic attenuator reduces dose by an average of 42%, and the perfect attenuator reduces dose by an average of 50%. Improvements in peak variance are several times larger than improvements in mean variance. Heuristic control eliminates the need for a prescan. For the piecewise-linear attenuator, the cost of heuristic control is an increase in dose of 9%. The proposed iterated WMV minimization produces results that are within a few percent of the true solution. Conclusions: Dynamic attenuators show potential for significant dose reduction. A wide class of dynamic attenuators can be accurately controlled using the described methods.
Facility worker technical basis document
SHULTZ, M.V.
2003-08-28T23:59:59.000Z
This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA). It describes the criteria and methodology for allocating controls to hazardous conditions with significant facility worker consequences and presents the results of the allocation.
A Complexity Analysis of a Jacobi Method for Lattice Basis Reduction
Qiao, Sanzheng
Zhaofei Tian, Department of ... We analyze the Jacobi method introduced by S. Qiao [23], and show that it has the same complexity as the LLL algorithm. Our experimental results show that the Jacobi method outperforms the LLL algorithm in not only ...
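Qiao's Jacobi method itself is too involved for a short sketch, but what "lattice basis reduction" computes can be shown in the two-dimensional case with the classical Gauss (Lagrange) reduction, which both the Jacobi method and LLL generalize to higher dimensions:

```python
# Gauss/Lagrange reduction: repeatedly subtract an integer multiple of the
# shorter vector from the longer one until neither can be shortened further.
def gauss_reduce(u, v):
    """Return a reduced basis of the lattice spanned by 2-D integer vectors u, v."""
    def norm2(w):
        return w[0] * w[0] + w[1] * w[1]
    if norm2(u) > norm2(v):
        u, v = v, u
    while True:
        # Integer multiple of u that makes v as short as possible.
        m = round((u[0] * v[0] + u[1] * v[1]) / norm2(u))
        v = (v[0] - m * u[0], v[1] - m * u[1])
        if norm2(v) >= norm2(u):
            return u, v
        u, v = v, u

print(gauss_reduce((1, 0), (3, 1)))  # → ((1, 0), (0, 1))
```

Every iteration strictly shrinks the longer vector's norm, so the loop terminates, and the returned pair spans the same lattice (only unimodular column operations are applied) with the first vector being a shortest nonzero lattice vector.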
Algorithms and Experiments: The New (and Old) Methodology
Moret, Bernard
Bernard M.E. Moret, Department of Computer ... twenty years have seen enormous progress in the design of algorithms, but little of it has been put into practice. Because many recently developed algorithms are hard to characterize theoretically and have large ...
Priority Algorithms for Graph Optimization Problems Allan Borodin
Larsen, Kim Skak
Allan Borodin, University of Toronto. ... priority or "greedy-like" algorithms as initiated in [10] and as extended to graph theoretic problems ... there are several natural input formulations for a given problem, and we show that priority algorithm bounds ...
Facility worker technical basis document
EVANS, C.B.
2003-03-21T23:59:59.000Z
This report documents the technical basis for facility worker safety to support the Tank Farms Documented Safety Analysis. It describes the criteria and methodology for allocating controls to hazardous conditions with significant facility worker consequences and presents the results of the allocation.
Accelerating Majorization Algorithms
Jan de Leeuw
2011-01-01T23:59:59.000Z
... incomplete data via the EM algorithm. Journal of the Royal ... Abstract: ... construction of majorization algorithms and their rate of ...
Accelerating Majorization Algorithms
Leeuw, Jan de
2008-01-01T23:59:59.000Z
KIRKPATRICK, BONNIE
2011-01-01T23:59:59.000Z
Contents excerpt: 3.2.1 The Peeling Algorithm and Elston-Stewart Algorithm; 4 Algorithms for Inference; 4.1 Gibbs ...
A Theoretical and Algorithmic Characterization of Bulge Knees
2015-05-29T23:59:59.000Z
... the Pareto front) and bulge knee, to the best of our knowledge, is the only ... magnitudes (stress versus displacement trade-off) that is inherent in engineering ...
Organic solvent technical basis document
SANDGREN, K.R.
2003-03-22T23:59:59.000Z
This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA), and describes the risk binning process and the technical basis for assigning risk bins for the organic solvent fire representative accident and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses, as described in this report.
FACILITY WORKER TECHNICAL BASIS DOCUMENT
SHULTZ, M.V.
2005-03-31T23:59:59.000Z
This technical basis document was developed to support RPP-13033, ''Tank Farms Documented Safety Analysis'' (DSA). It describes the criteria and methodology for allocating controls to hazardous conditions with significant facility worker (FW) consequence and presents the results of the allocation. The criteria and methodology for identifying controls that address FW safety are in accordance with DOE-STD-3009-94, ''Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses''.
Hanford Generic Interim Safety Basis
Lavender, J.C.
1994-09-09T23:59:59.000Z
The purpose of this document is to identify WHC programs and requirements that are an integral part of the authorization basis for nuclear facilities that are generic to all WHC-managed facilities. The purpose of these programs is to implement the DOE Orders, as WHC becomes contractually obligated to implement them. The Hanford Generic ISB focuses on the institutional controls and safety requirements identified in DOE Order 5480.23, Nuclear Safety Analysis Reports.
Gravitational lens modeling with basis sets
Birrer, Simon; Refregier, Alexandre
2015-01-01T23:59:59.000Z
We present a strong lensing modeling technique based on versatile basis sets for the lens and source planes. Our method uses high performance Monte Carlo algorithms, allows for an adaptive build up of complexity and bridges the gap between parametric and pixel based reconstruction methods. We apply our method to a HST image of the strong lens system RXJ1131-1231 and show that our method finds a reliable solution and is able to detect substructure in the lens and source planes simultaneously. Using mock data we show that our method is sensitive to sub-clumps with masses four orders of magnitude smaller than the main lens, which corresponds to about $10^8 M_{\\odot}$, without prior knowledge on the position and mass of the sub-clump. The modelling approach is flexible and maximises automation to facilitate the analysis of the large number of strong lensing systems expected in upcoming wide field surveys. The resulting search for dark sub-clumps in these systems, without mass-to-light priors, offers promise for p...
Spectral Representations of Uncertainty: Algorithms and Applications
George Em Karniadakis
2005-04-24T23:59:59.000Z
The objectives of this project were: (1) Develop a general algorithmic framework for stochastic ordinary and partial differential equations. (2) Set polynomial chaos method and its generalization on firm theoretical ground. (3) Quantify uncertainty in large-scale simulations involving CFD, MHD and microflows. The overall goal of this project was to provide DOE with an algorithmic capability that is more accurate and three to five orders of magnitude more efficient than the Monte Carlo simulation.
FLAMMABLE GAS TECHNICAL BASIS DOCUMENT
KRIPPS, L.J.
2003-10-09T23:59:59.000Z
This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA) and describes the risk binning process for the flammable gas representative accidents and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous condition based on an evaluation of the event frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSC and/or TSR-level controls.
Compulsory Elective Theoretical Physics
Dutz, Hartmut
Compulsory Elective Theoretical Physics (physics606 or, if done previously, 1 module out of physics751, physics754, physics755, physics760, physics7501): 7 cp. Specialization (at least 24 cp out of physics61a, -61b, -61c and/or physics62a, -62b, -62c): 24 cp. Elective Advanced Lectures (at least 18 cp out …
Satisfiability of logic programming based on radial basis function neural networks
Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)
2014-07-10T23:59:59.000Z
In this paper, we propose a new technique to test the satisfiability of propositional logic programming and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we build radial basis function neural networks to represent propositional logic in which each clause has exactly three variables. We use the prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm determines the hidden parameters (the centers and the widths). The mean sum-of-squares error is used to measure the performance of the two algorithms. We apply the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to many problems, such as electronic circuits and NP-complete problems.
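The pipeline this record describes (fixed radial-basis centers and widths, plus separately trained output weights) can be sketched in miniature. This is not the authors' code: the XOR data, the Gaussian width, and the use of plain linear-system interpolation in place of the paper's prey-predator metaheuristic are all illustrative choices.

```python
import math

def gauss_rbf(x, c, sigma):
    # Gaussian radial basis function centered at c
    d2 = sum((xi - ci) ** 2 for xi, ci in zip(x, c))
    return math.exp(-d2 / (2 * sigma ** 2))

def solve(A, b):
    # Gaussian elimination with partial pivoting (stand-in for the
    # metaheuristic weight training described in the abstract)
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for k in range(col, n + 1):
                M[r][k] -= f * M[col][k]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

X = [(0, 0), (0, 1), (1, 0), (1, 1)]   # toy XOR inputs
y = [0.0, 1.0, 1.0, 0.0]
centers, sigma = X, 0.8                 # centers fixed at the data points
Phi = [[gauss_rbf(x, c, sigma) for c in centers] for x in X]
w = solve(Phi, y)                       # output weights

def predict(x):
    return sum(wi * gauss_rbf(x, c, sigma) for wi, c in zip(w, centers))
```

With the centers placed at the data points, the Gaussian design matrix is invertible, so the network interpolates the training targets exactly; in the paper's setting the centers and widths come from K-means instead.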
Mathematical methods of theoretical physics
Karl Svozil
2015-07-01T23:59:59.000Z
Course material for mathematical methods of theoretical physics intended for an undergraduate audience.
Mathematical methods of theoretical physics
Karl Svozil
2015-02-26T23:59:59.000Z
Course material for mathematical methods of theoretical physics intended for an undergraduate audience.
Mathematical methods of theoretical physics
Svozil, Karl
2012-01-01T23:59:59.000Z
Course material for mathematical methods of theoretical physics intended for an undergraduate audience.
FLAMMABLE GAS TECHNICAL BASIS DOCUMENT
KRIPPS, L.J.
2005-03-03T23:59:59.000Z
This document describes the qualitative evaluation of frequency and consequences for DST and SST representative flammable gas accidents and associated hazardous conditions without controls. The evaluation indicated that safety-significant structures, systems and components (SSCs) and/or technical safety requirements (TSRs) were required to prevent or mitigate flammable gas accidents. Discussion on the resulting control decisions is included. This technical basis document was developed to support RPP-13033, Tank Farms Documented Safety Analysis (DSA), and describes the risk binning process for the flammable gas representative accidents and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous condition based on an evaluation of the event frequency and consequence.
FLAMMABLE GAS TECHNICAL BASIS DOCUMENT
KRIPPS, L.J.
2005-02-18T23:59:59.000Z
This document describes the qualitative evaluation of frequency and consequences for double shell tank (DST) and single shell tank (SST) representative flammable gas accidents and associated hazardous conditions without controls. The evaluation indicated that safety-significant SSCs and/or TSRs were required to prevent or mitigate flammable gas accidents. Discussion on the resulting control decisions is included. This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA) and describes the risk binning process for the flammable gas representative accidents and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous condition based on an evaluation of the event frequency and consequence.
Protein Folding Challenge and Theoretical Computer Science Somenath Biswas
Biswas, Somenath
Protein Folding Challenge and Theoretical Computer Science. Somenath Biswas, Department of Computer Science. … the chain of amino acids that defines a protein. The protein folding problem is: given a sequence of amino acids … to use an efficient algorithm to carry out protein folding. The atoms in a protein molecule attract each …
Bounds on Contention Management Algorithms
2010-01-01T23:59:59.000Z
Johannes Schneider, Roger Wattenhofer ({jschneid, wattenhofer}@tik.ee.ethz.ch), Theoretical Computer Science / Procedia Computer Science (2010). An abort wastes all the computation of a transaction and might happen right before its completion. A waiting …
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Feller, D; Schuchardt, Karen L.; Didier, Brett T.; Elsethagen, Todd; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared; Li, Jun
The Basis Set Exchange (BSE) provides a web-based user interface for downloading and uploading Gaussian-type (GTO) basis sets, including effective core potentials (ECPs), from the EMSL Basis Set Library. It provides an improved user interface and capabilities over its predecessor, the EMSL Basis Set Order Form, for exploring the contents of the EMSL Basis Set Library. The popular Basis Set Order Form and underlying Basis Set Library were originally developed by Dr. David Feller and have been available from the EMSL webpages since 1994. BSE not only allows downloading of the more than 200 Basis sets in various formats; it allows users to annotate existing sets and to upload new sets. (Specialized Interface)
A polynomial-time Nash equilibrium algorithm for repeated games
Littman, Michael L.
A polynomial-time Nash equilibrium algorithm for repeated games. Michael L. Littman, Dept. … of theoretical and practical interest. The computational complexity of finding a Nash equilibrium for a one … a Nash equilibrium for an average-payoff repeated bimatrix game, and presents a polynomial-time algorithm …
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert
2007-04-01T23:59:59.000Z
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 26 cost modules—24 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, and high-level waste.
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider
2008-03-01T23:59:59.000Z
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules—23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider
2009-12-01T23:59:59.000Z
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules—23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.
The Static Universe Hypothesis: Theoretical Basis and Observational Tests of the Hypothesis
Thomas B. Andrews
2001-09-07T23:59:59.000Z
From the axiom of the unrestricted repeatability of all experiments, Bondi and Gold argued that the universe is in a stable, self-perpetuating equilibrium state. This concept generalizes the usual cosmological principle to the perfect cosmological principle in which the universe looks the same from any location at any time. Consequently, I hypothesize that the universe is static and in an equilibrium state (non-evolving). New physics is proposed based on the concept that the universe is a pure wave system. Based on the new physics and assuming a static universe, processes are derived for the Hubble redshift and the cosmic background radiation field. Then, following the scientific method, I test deductions of the static universe hypothesis using precise observational data primarily from the Hubble Space Telescope. Applying four different global tests of the space-time metric, I find that the observational data consistently fits the static universe model. The observational data also show that the average absolute magnitudes and physical radii of first-rank elliptical galaxies have not changed over the last 5 to 15 billion years. Because the static universe hypothesis is a logical deduction from the perfect cosmological principle and the hypothesis is confirmed by the observational data, I conclude that the universe is static and in an equilibrium state.
Boyer, Edmond
… for example, the long-term use of groundwater heat pumps for air conditioning of homes or buildings can induce … hydrogeological background. The presence of organic pollutants in the aquifer can amplify these phenomena … and/or the well productivity, (ii) an inappropriate temperature for the use of groundwater heat pumps for air …
Approximation Algorithms for Covering Problems
Koufogiannakis, Christos
2009-01-01T23:59:59.000Z
Table of contents excerpt: 1.3.1 Sequential Algorithms; Distributed 2-approximation algorithm for CMIP2 (Alg. …); 2 Sequential Algorithm; 2.1 The Greedy Algorithm for Monotone …
…-time algorithms for computing reduced-dimension models for uncertain systems. Here we present algorithms that compute lower-dimensional realizations of an uncertain system, and compare their theoretical and computational … of the computational difficulties in handling more realistic systems. The uncertain system representation …
Algorithmic Gauss-Manin Connection Algorithms to Compute Hodge-theoretic Invariants
Schulze, Mathias
Appendix excerpt: A.1 Singular library linalg.lib; A.2 Singular library gaussman. … are considered to be equal and form an (equivalence) class. This leads to a classification problem … being an object in this class. The concept of invariants serves to approach classification …
Theoretical Physics in Cellular Biology
Theoretical Physics in Cellular Biology: Some Illustrative Case Studies. Living matter obeys the laws of physics, and the principles and methods of theoretical physics ought to find useful application … I will describe a few specific instances where approaches inspired by theoretical physics allow …
A LOGICAL INVERTED TAXONOMY OF SORTING ALGORITHMS S.M. Merritt K.K. Lau
Lau, Kung-Kiu
A Logical Inverted Taxonomy of Sorting Algorithms. S.M. Merritt, K.K. Lau, School of Computer Science. … taxonomy of sorting algorithms: a high-level, top-down, conceptually simple and symmetric categorization … taxonomy of sorting algorithms. This provides a logical basis for the inverted taxonomy and expands …
Cooling algorithms based on the 3-bit majority
Phillip Kaye
2007-05-15T23:59:59.000Z
Algorithmic cooling is a potentially important technique for making scalable NMR quantum computation feasible in practice. Given the constraints imposed by this approach to quantum computing, the most likely cooling algorithms to be practicable are those based on simple reversible polarization compression (RPC) operations acting locally on small numbers of bits. Several different algorithms using 2- and 3-bit RPC operations have appeared in the literature, and these are the algorithms I consider in this note. Specifically, I show that the RPC operation used in all these algorithms is essentially a majority vote of 3 bits, and prove the optimality of the best such algorithm. I go on to derive some theoretical bounds on the performance of these algorithms under some specific assumptions about errors.
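The 3-bit majority step this record analyzes can be checked directly: if each bit is 0 with probability (1 + ε)/2, the majority of three independent bits is biased by ε' = (3ε − ε³)/2, which exceeds ε for 0 < ε < 1. A small sketch (an illustration of the classical bias calculation, not the paper's quantum analysis):

```python
from itertools import product

def majority_bias(eps):
    # Exact bias of the majority of three independent bits,
    # each equal to 0 with probability (1 + eps) / 2.
    p = (1 + eps) / 2
    prob0 = 0.0
    for bits in product([0, 1], repeat=3):
        pr = 1.0
        for b in bits:
            pr *= p if b == 0 else (1 - p)
        if sum(bits) <= 1:          # at least two zeros: majority is 0
            prob0 += pr
    return 2 * prob0 - 1            # bias = P(0) - P(1)

eps = 0.1
boosted = majority_bias(eps)        # enumerated bias after one RPC step
analytic = (3 * eps - eps ** 3) / 2 # closed-form boost
```

The enumeration and the closed form agree, and the boosted bias (0.1495 for ε = 0.1) is strictly larger than the input bias, which is the polarization-compression effect the note builds on.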
Greenhill, Catherine
An algorithm for recognising the exterior square of a matrix. Keywords: matrix, exterior square. … vector space, with respect to a canonical basis, is called the exterior square of X. Note that all vector … the exterior square of a matrix. The approach involves manipulation of the equations which relate the entries …
Authorization basis for the 209-E Building
TIFFANY, M.S.
1999-02-23T23:59:59.000Z
This Authorization Basis document is one of three documents that constitute the Authorization Basis for the 209-E Building. Per the U.S. Department of Energy, Richland Operations Office (RL) letter 98-WSD-074, this document, the 209-E Building Preliminary Hazards Analysis (WHC-SD-WM-TI-789), and the 209-E Building Safety Evaluation Report (97-WSD-074) constitute the Authorization Basis for the 209-E Building. This Authorization Basis and the associated controls and safety programs will remain in place until safety documentation addressing deactivation of the 209-E Building is developed by the contractor and approved by RL.
Theoretical Ecology: Continued growth and success
Hastings, Alan
2010-01-01T23:59:59.000Z
Editorial: Theoretical Ecology: Continued growth and success … of areas in theoretical ecology. Among the highlights are … year represent theoretical ecology from around the world: 20 …
An Invitation to Algorithmic Information Theory
G. J. Chaitin
1996-09-17T23:59:59.000Z
I'll outline the latest version of my limits of math course. The purpose of this course is to illustrate the proofs of the key information-theoretic incompleteness theorems of algorithmic information theory by means of algorithms written in a specially designed version of LISP. The course is now written in HTML with Java applets, and is available at http://www.research.ibm.com/people/c/chaitin/lm . The LISP now used is much friendlier than before, and because its interpreter is a Java applet it will run in the Netscape browser as you browse my limits of math Web site.
Splitting Algorithms for Convex Optimization and Applications to Sparse Matrix Factorization
Rong, Rong
2013-01-01T23:59:59.000Z
Table of contents excerpt: Splitting Algorithms; Proximal Point Algorithm.
Efficient Algebraic Representations for Throughput-Oriented Algorithms
McKinlay, Christopher E.
2013-01-01T23:59:59.000Z
Table of contents excerpt: Throughput-Oriented Algorithm Design; Multilinear …
Algorithms and Problem Solving Introduction
Razak, Saquib
Unit 16: Algorithms and Problem Solving. Topics: Introduction; What is an Algorithm?; Algorithm Properties; Example; Exercises. … The algorithm must be general, that is, it should solve the problem for all possible input sets to the problem.
Landscape Engineering: removing local traps in the chopped random basis optimization
Niklas Rach; Matthias M. Müller; Tommaso Calarco; Simone Montangero
2015-06-15T23:59:59.000Z
In quantum optimal control theory the success of an optimization algorithm is highly influenced by how the figure of merit to be optimized behaves as a function of the control field, i.e. by the control landscape. Constraints on the control field introduce local minima in the landscape --traps-- which might prevent an efficient solution of the optimal control problem. The Chopped Random Basis (CRAB) optimal control algorithm is constructed to improve the optimization efficiency by introducing an expansion of the control field onto a truncated basis, that is, it works with a limited control field bandwidth. We study the influence of traps on the success probability of CRAB and extend the original algorithm to engineer the landscape in order to eliminate the traps; we demonstrate that this development exploits the advantages of both (unconstrained) gradient algorithms and of truncated basis methods. Finally, we characterize the behavior of the extended CRAB under additional constraints and show that for reasonable constraints the convergence properties are still maintained.
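The core CRAB move, optimizing a control field expanded on a small randomized (truncated) basis with a derivative-free search, can be sketched on a toy problem. Everything below is invented for illustration: the scalar figure of merit, the jittered sine basis, and the shrinking coordinate search stand in for the quantum fidelity and the optimizers discussed in the paper.

```python
import math, random

random.seed(1)
T, nt, nb = 1.0, 200, 4                       # horizon, time grid, basis size
ts = [T * (i + 0.5) / nt for i in range(nt)]
# truncated randomized basis: frequencies jittered around the harmonics
freqs = [(k + 1 + random.uniform(-0.3, 0.3)) * math.pi / T for k in range(nb)]

def field(c, t):
    # control field as a truncated expansion u(t) = sum_k c_k sin(w_k t)
    return sum(ck * math.sin(w * t) for ck, w in zip(c, freqs))

def cost(c, target=0.5):
    # toy figure of merit: drive the time-average of u(t) to `target`
    avg = sum(field(c, t) for t in ts) / nt
    return (avg - target) ** 2

def crab_optimize(c, step=0.5, iters=60):
    # derivative-free coordinate search over the basis coefficients,
    # with a geometrically shrinking step size
    best = cost(c)
    for _ in range(iters):
        for k in range(nb):
            for delta in (step, -step):
                trial = c[:]
                trial[k] += delta
                jtrial = cost(trial)
                if jtrial < best:
                    c, best = trial, jtrial
        step *= 0.8
    return c, best

c, j = crab_optimize([0.0] * nb)
```

The search only ever moves inside the span of the truncated basis, which is the bandwidth limitation the paper studies; landscape engineering in the extended CRAB amounts to changing this basis when the search stalls.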
Algorithms and Software for PCR Primer Design
Huang, Yu-Ting
2015-01-01T23:59:59.000Z
Table of contents excerpt: 5.2.4 Algorithm; 5.3.3 Algorithm; … clique problems and MCDPD; Algorithm 1 …
James R. Chelikowsky
2009-03-31T23:59:59.000Z
The work reported here took place at the University of Minnesota from September 15, 2003 to November 14, 2005. This funding resulted in 10 invited articles or book chapters, 37 articles in refereed journals and 13 invited talks. The funding helped train 5 PhD students. The research supported by this grant focused on developing theoretical methods for predicting and understanding the properties of matter at the nanoscale. Within this regime, new phenomena occur that are characteristic of neither the atomic limit, nor the crystalline limit. Moreover, this regime is crucial for understanding the emergence of macroscopic properties such as ferromagnetism. For example, elemental Fe clusters possess magnetic moments that reside between the atomic and crystalline limits, but the transition from the atomic to the crystalline limit is not a simple interpolation between the two size regimes. To capitalize properly on predicting such phenomena in this transition regime, a deeper understanding of the electronic, magnetic and structural properties of matter is required, e.g., electron correlation effects are enhanced within this size regime and the surface of a confined system must be explicitly included. A key element of our research involved the construction of new algorithms to address problems peculiar to the nanoscale. Typically, one would like to consider systems with thousands of atoms or more, e.g., a silicon nanocrystal that is 7 nm in diameter would contain over 10,000 atoms. Previous ab initio methods could address systems with hundreds of atoms whereas empirical methods can routinely handle hundreds of thousands of atoms (or more). However, these empirical methods often rely on ad hoc assumptions and lack incorporation of structural and electronic degrees of freedom. The key theoretical ingredients in our work involved the use of ab initio pseudopotentials and density functional approaches. 
The key numerical ingredients involved the implementation of algorithms for solving the Kohn-Sham equation without the use of an explicit basis, i.e., a real space grid. We invented algorithms for a solution of the Kohn-Sham equation based on Chebyshev 'subspace filtering'. Our filtering algorithms dramatically enhanced our ability to explore systems with thousands of atoms, i.e., we examined silicon quantum dots with approximately 11,000 atoms (or 40,000 electrons). We applied this algorithm to a number of nanoscale systems to examine the role of quantum confinement on electronic and magnetic properties: (1) Doping of nanocrystals and nanowires, including both magnetic and non-magnetic dopants and the role of self-purification; (2) Optical excitations and electronic properties of nanocrystals; (3) Intrinsic defects in nanostructures; and (4) The emergence of ferromagnetism from atoms to crystals.
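The Chebyshev "subspace filtering" trick can be illustrated on a tiny symmetric matrix (a sketch with invented numbers, not the authors' solver): a Chebyshev polynomial of the operator stays bounded on a chosen damping interval but grows rapidly outside it, so filtering a trial vector amplifies the eigencomponents below the interval.

```python
import math

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def lin(M, v, a, b):
    # apply the affinely scaled operator (2M - (a+b)I) / (b-a) to v,
    # which maps the damping interval [a, b] onto [-1, 1]
    w = matvec(M, v)
    return [(2 * w[i] - (a + b) * v[i]) / (b - a) for i in range(len(v))]

def chebyshev_filter(M, v, a, b, degree):
    # three-term recurrence T_{k+1}(x) = 2x T_k(x) - T_{k-1}(x)
    t0, t1 = v, lin(M, v, a, b)
    for _ in range(degree - 1):
        t2 = [2 * x - y for x, y in zip(lin(M, t1, a, b), t0)]
        t0, t1 = t1, t2
    return t1

# build H = R diag(1, 2, 3) R^T: known eigenvalues, rotated eigenvectors
theta = 0.3
co, si = math.cos(theta), math.sin(theta)
R = [[co, -si, 0.0], [si, co, 0.0], [0.0, 0.0, 1.0]]
D = [1.0, 2.0, 3.0]
H = [[sum(R[i][k] * D[k] * R[j][k] for k in range(3)) for j in range(3)]
     for i in range(3)]

# damp the spectrum in [1.5, 3.5]; the lowest eigenvalue (1) lies outside
v = [1.0, 1.0, 1.0]
f = chebyshev_filter(H, v, 1.5, 3.5, 10)
norm = math.sqrt(sum(x * x for x in f))
f = [x / norm for x in f]
rayleigh = sum(f[i] * matvec(H, f)[i] for i in range(3))
```

After one degree-10 filter, the Rayleigh quotient of the trial vector is already close to the lowest eigenvalue; the production codes described in the report apply such filters to whole blocks of vectors inside a self-consistent loop.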
THEORETICAL PHYSICS Faculty of Physics
Pachucki, Krzysztof
INSTITUTE OF THEORETICAL PHYSICS, Faculty of Physics, Warsaw University, 1998-1999 (Warsaw 2000). Divisions include the Division of Field Theory and Statistical Physics and the Division of General Relativity and Gravitation. Address: Hoza 69, PL-00 681 Warsaw, Poland. Phone: (+48 22) 628 33 96. Fax …
Reconstruction algorithms for MRI
Bilgiç, Berkin
2013-01-01T23:59:59.000Z
This dissertation presents image reconstruction algorithms for Magnetic Resonance Imaging (MRI) that aim to increase imaging efficiency. Algorithms that reduce imaging time without sacrificing image quality and ...
Rubinfeld, Ronitt
Sublinear time algorithms represent a new paradigm in computing, where an algorithm must give some sort of an answer after inspecting only a very small portion of the input. We discuss the types of answers that one can ...
Indigenous Algorithms, Organizations, and Rationality
Leaf, Murray
2008-01-01T23:59:59.000Z
Indigenous Optimizing Algorithm (Mathematical Anthropology) … the use of maximizing algorithms in behavior is a crucial … the knowledge, rules, and algorithms that they apply. If we …
Variational Algorithms for Marginal MAP
Liu, Q; Ihler, A
2013-01-01T23:59:59.000Z
References excerpt: A. L. Yuille, CCCP algorithms to minimize the Bethe … (2004); A tutorial on MM algorithms, The American Statistician; … time approximation algorithms for the Ising model, SIAM …
Energy Science and Technology Software Center (OSTI)
002651IBMPC00 Algorithm for Accounting for the Interactions of Multiple Renewable Energy Technologies in Estimation of Annual Performance
Giorda, Paolo [Institute for Scientific Interchange, Villa Gualino Viale Settimio Severo 65, 10133 Turin (Italy); Iorio, Alfredo [Center for Theoretical Physics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139-4307 (United States); INFN, Rome (Italy); Sen, Samik [School of Mathematics, Trinity College Dublin, Dublin 2 (Ireland); Sen, Siddhartha [School of Mathematics, Trinity College Dublin, Dublin 2 (Ireland); IACS, Jadavpur, Calcutta 700032 (India)
2004-09-01T23:59:59.000Z
We propose a semiclassical version of Shor's quantum algorithm to factorize integer numbers, based on spin-(1/2) SU(2) generalized coherent states. Surprisingly, we find evidence that the algorithm's success probability is not too severely modified by our semiclassical approximation. This suggests that it is worth pursuing practical implementations of the algorithm on semiclassical devices.
Physical Algorithms Roger Wattenhofer
Physical Algorithms. Roger Wattenhofer, Computer Engineering and Networks Laboratory (TIK), ETH Zurich. … to an ICALP 2010 invited talk, intending to encourage research in physical algorithms. The area of physical algorithms deals with networked systems of active agents. These agents have access to limited information …
CRAD, Safety Basis - Los Alamos National Laboratory Waste Characteriza...
Office of Environmental Management (EM)
CRAD, Safety Basis - Los Alamos National Laboratory Waste Characterization, Reduction, and Repackaging Facility.
222-S Laboratory interim safety basis
WEAVER, L.L.
2001-09-10T23:59:59.000Z
The purpose of this document is to establish the Interim Safety Basis (ISB) for the 222-S Laboratory. An ISB is a documented safety basis that provides the justification for the continued operation of the facility until an upgraded documented safety analysis (DSA) is prepared in compliance with 10CFR 830, Subpart B. The 222-S Laboratory ISB is based on revised facility and process descriptions and revised accident analyses that reflect current conditions.
Rossi, Tuomas P; Sakko, Arto; Puska, Martti J; Nieminen, Risto M
2015-01-01T23:59:59.000Z
We present an approach for generating local numerical basis sets of improving accuracy for first-principles nanoplasmonics simulations within time-dependent density functional theory. The method is demonstrated for copper, silver, and gold nanoparticles that are of experimental interest but computationally demanding due to the semi-core d-electrons that affect their plasmonic response. The basis sets are constructed by augmenting numerical atomic orbital basis sets by truncated Gaussian-type orbitals generated by the completeness-optimization scheme, which is applied to the photoabsorption spectra of homoatomic metal atom dimers. We obtain basis sets of improving accuracy up to the complete basis set limit and demonstrate that the performance of the basis sets transfers to simulations of larger nanoparticles and nanoalloys as well as to calculations with various exchange-correlation functionals. This work promotes the use of the local basis set approach of controllable accuracy in first-principles nanoplasmon...
Algorithms for Quantum Computers
Jamie Smith; Michele Mosca
2010-01-07T23:59:59.000Z
This paper surveys the field of quantum computer algorithms. It gives a taste of both the breadth and the depth of the known algorithms for quantum computers, focusing on some of the more recent results. It begins with a brief review of quantum Fourier transform based algorithms, followed by quantum searching and some of its early generalizations. It continues with a more in-depth description of two more recent developments: algorithms developed in the quantum walk paradigm, followed by tensor network evaluation algorithms (which include approximating the Tutte polynomial).
Relation between XL algorithm and Gröbner Bases Algorithms
International Association for Cryptologic Research (IACR)
Relation between XL algorithm and Gröbner Bases Algorithms. Makoto Sugita, Mitsuru Kawazoe … the XL algorithm and Gröbner bases algorithms. The XL algorithm was proposed to be a more efficient algorithm to solve a system of equations with a special assumption without trying to calculate a whole Gr…
McLachlan, Geoff
738 IEEE TRANSACTIONS ON NEURAL NETWORKS, VOL. 15, NO. 3, MAY 2004. Using the EM Algorithm to Train Neural Networks: Misconceptions and a New Algorithm for Multiclass Classification. Shu-Kay Ng and Geoffrey McLachlan. … in recent years as the basis for various algorithms in application areas of neural networks such as pat…
Section Summary: Properties of Algorithms
Section Summary: Properties of Algorithms; Algorithms for Searching and Sorting; Greedy Algorithms; Halting Problem. Problems and Algorithms: In many … This procedure is called an algorithm. Definition: An algorithm …
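The searching algorithms named in the summary can be illustrated with the classic binary search (a minimal textbook sketch, not taken from the slides), which also exhibits the standard properties of an algorithm: finiteness, definiteness, and correctness on its stated input.

```python
def binary_search(a, target):
    """Return an index of target in sorted list a, or -1 if absent."""
    lo, hi = 0, len(a) - 1
    while lo <= hi:                 # loop terminates: hi - lo strictly shrinks
        mid = (lo + hi) // 2
        if a[mid] == target:
            return mid
        if a[mid] < target:
            lo = mid + 1            # target can only lie in the upper half
        else:
            hi = mid - 1            # target can only lie in the lower half
    return -1

print(binary_search([1, 3, 5, 7, 9], 7))   # found at index 3
print(binary_search([1, 3, 5, 7, 9], 4))   # absent: -1
```

Each iteration halves the search interval, so the running time is O(log n) comparisons.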
Master track Theoretical Biology & Bioinformatics
Utrecht, Universiteit
Master track Theoretical Biology & Bioinformatics. Modeling and bioinformatics is an important … The Master track Theoretical Biology & Bioinformatics provides courses introducing you to the basic concepts of modeling … Our two MSc courses, "Computational Biology" and "Bioinformatics and Evolutionary Genomics", … their master.
C.E. Kessel; D. Meade; S.C. Jardin
2002-01-18T23:59:59.000Z
The FIRE [Fusion Ignition Research Experiment] design for a burning plasma experiment is described in terms of its physics basis and engineering features. Systems analysis indicates that the device has a wide operating space to accomplish its mission, both for the ELMing H-mode reference and the high bootstrap current/high beta advanced tokamak regimes. Simulations with 1.5D transport codes reported here both confirm and constrain the systems projections. Experimental and theoretical results are used to establish the basis for successful burning plasma experiments in FIRE.
Technical basis document for natural event hazards
CARSON, D.M.
2003-08-28T23:59:59.000Z
This technical basis document was developed to support the documented safety analysis (DSA) and describes the risk binning process and the technical basis for assigning risk bins for natural event hazard (NEH)-initiated accidents. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls. This report documents the technical basis for assigning risk bins for the natural event hazards representative accident and associated represented hazardous conditions.
Basis for UCNI | Department of Energy
Structural Basis for Activation of Cholera Toxin
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Marketing Texas Wool on a Quality Basis.
Wooten, Alvin B.; Gabbard, L. P.; Davis, Stanley P.
1955-01-01T23:59:59.000Z
Marketing Texas Wool on a Quality Basis. Stanley P. Davis, L. P. Gabbard and Alvin B. Wooten. The local marketing of wool is one of the most important problems facing Texas wool producers. Various phases of the marketing process already…
Technical basis document for external events
OBERG, B.D.
2003-03-22T23:59:59.000Z
This document supports the Tank Farms Documented Safety Analysis and presents the technical basis for the frequencies of externally initiated accidents. The consequences of externally initiated events are discussed in other documents that correspond to the accident that was caused by the external event. The external events include aircraft crash, vehicle accident, range fire, and rail accident.
Waste transfer leaks technical basis document
ZIMMERMAN, B.D.
2003-03-22T23:59:59.000Z
This document provides technical support for the onsite radiological and toxicological, and offsite toxicological, portions of the waste transfer leak accident presented in the Documented Safety Analysis. It provides the technical basis for frequency and consequence bin selection, and selection of safety SSCs and TSRs.
Neural Basis & Technical What are ERPs?
Coulson, Seana
Neural Basis & Technical Details. What are ERPs? … Neurons communicate … EEG invented 1928, Hans Berger; early recording set-up; human subject; EEG monitors alertness. EEG and ERPs: ERPs are formed by averaging EEG time-locked to the onset of stimuli that require cognitive…
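The averaging step the notes describe (an ERP is the average of EEG epochs time-locked to stimulus onset) can be sketched numerically; the waveform shape, trial count, and noise amplitude below are illustrative assumptions, not values from the lecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated EEG: 200 stimulus-locked trials, 300 samples each.
t = np.linspace(0, 0.6, 300)                                   # seconds after onset
erp_true = 5e-6 * np.exp(-((t - 0.3) ** 2) / (2 * 0.05 ** 2))  # a P300-like bump (V)
trials = erp_true + 20e-6 * rng.standard_normal((200, 300))    # ongoing EEG as noise

# ERP = average over trials; uncorrelated background EEG shrinks ~ 1/sqrt(n_trials),
# while the stimulus-locked component survives the average.
erp_est = trials.mean(axis=0)

residual = np.abs(erp_est - erp_true).max()
print(f"max residual after averaging: {residual:.2e} V")
```

Note the single-trial noise (20 µV) dwarfs the 5 µV component; only averaging makes the ERP visible.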
CRAD, Facility Safety- Nuclear Facility Safety Basis
Broader source: Energy.gov [DOE]
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) that can be used for assessment of a contractor's Nuclear Facility Safety Basis.
PRELIMINARY SELECTION OF MGR DESIGN BASIS EVENTS
J.A. Kappes
1999-09-16T23:59:59.000Z
The purpose of this analysis is to identify the preliminary design basis events (DBEs) for consideration in the design of the Monitored Geologic Repository (MGR). For external events and natural phenomena (e.g., earthquake), the objective is to identify those initiating events that the MGR will be designed to withstand. Design criteria will ensure that radiological release scenarios resulting from these initiating events are beyond design basis (i.e., have a scenario frequency less than once per million years). For internal (i.e., human-induced and random equipment failures) events, the objective is to identify credible event sequences that result in bounding radiological releases. These sequences will be used to establish the design basis criteria for MGR structures, systems, and components (SSCs) in order to prevent or mitigate radiological releases. The safety strategy presented in this analysis for preventing or mitigating DBEs is based on the preclosure safety strategy outlined in "Strategy to Mitigate Preclosure Offsite Exposure" (CRWMS M&O 1998f). DBE analysis is necessary to provide feedback and requirements to the design process, and also to demonstrate compliance with proposed 10 CFR 63 (Dyer 1999b) requirements. DBE analysis is also required to identify and classify the SSCs that are important to safety (ITS).
Benkart, Georgia
2008-01-01T23:59:59.000Z
This article contains an investigation of the equitable basis for the Lie algebra sl_2. Denoting this basis by {x,y,z}, we have [x,y] = 2x + 2y, [y,z] = 2y + 2z, [z,x] = 2z + 2x. One focus of our study is the group of automorphisms G generated by exp(ad x*), exp(ad y*), exp(ad z*), where {x*,y*,z*} is the basis for sl_2 dual to {x,y,z} with respect to the trace form (u,v) = tr(uv). We show that G is isomorphic to the modular group PSL_2(Z). Another focus of our investigation is the lattice L = Zx + Zy + Zz. We prove that the orbit G(x) equals {u in L | (u,u) = 2}. We determine the precise relationship between (i) the group G, (ii) the group of automorphisms for sl_2 that preserve L, (iii) the group of automorphisms and antiautomorphisms for sl_2 that preserve L, and (iv) the group of isometries for (,) that preserve L. We obtain analogous results for the lattice L* = Zx* + Zy* + Zz*. Relative to the equitable basis, the matrix of the trace form is a Cartan matrix of hyperbolic type; consequently, we identify the equitable ...
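The quoted bracket relations can be checked in a concrete 2×2 matrix realization. The particular triple x, y, z below is an illustrative choice satisfying the relations, not the article's construction:

```python
import numpy as np

# Standard sl_2 generators (trace-zero 2x2 matrices): [h,e]=2e, [h,f]=-2f, [e,f]=h.
e = np.array([[0., 1.], [0., 0.]])
f = np.array([[0., 0.], [1., 0.]])
h = np.array([[1., 0.], [0., -1.]])

def bracket(a, b):
    return a @ b - b @ a

# One concrete equitable-basis realization (illustrative assumption):
x = h
y = 2 * e - h
z = -2 * f - h

# The defining relations of the equitable basis:
assert np.allclose(bracket(x, y), 2 * x + 2 * y)
assert np.allclose(bracket(y, z), 2 * y + 2 * z)
assert np.allclose(bracket(z, x), 2 * z + 2 * x)

# Trace form (u,v) = tr(uv): each basis vector satisfies (u,u) = 2,
# consistent with the orbit description {u in L | (u,u) = 2}.
for u in (x, y, z):
    assert np.isclose(np.trace(u @ u), 2.0)
print("equitable-basis relations verified")
```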
Review and Approval of Nuclear Facility Safety Basis and Safety Design Basis Documents
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2014-12-19T23:59:59.000Z
This Standard describes a framework and the criteria to be used for approval of (1) safety basis documents, as required by 10 Code of Federal Regulation (C.F.R.) 830, Nuclear Safety Management, and (2) safety design basis documents, as required by Department of Energy (DOE) Standard (STD)-1189-2008, Integration of Safety into the Design Process.
System Design and the Safety Basis
Ellingson, Darrel
2008-05-06T23:59:59.000Z
The objective of this paper is to present the Bechtel Jacobs Company, LLC (BJC) Lessons Learned for system design as it relates to safety basis documentation. BJC has had to reconcile incomplete or outdated system description information with current facility safety basis for a number of situations in recent months. This paper has relevance in multiple topical areas including documented safety analysis, decontamination & decommissioning (D&D), safety basis (SB) implementation, safety and design integration, potential inadequacy of the safety analysis (PISA), technical safety requirements (TSR), and unreviewed safety questions. BJC learned that nuclear safety compliance relies on adequate and well documented system design information. A number of PISAs and TSR violations occurred due to inadequate or erroneous system design information. As a corrective action, BJC assessed the occurrences caused by system design-safety basis interface problems. Safety systems reviewed included the Molten Salt Reactor Experiment (MSRE) Fluorination System, the K-1065 fire alarm system, and the K-25 Radiation Criticality Accident Alarm System. The conclusion was that an inadequate knowledge of system design could result in continuous non-compliance issues relating to nuclear safety. This was especially true with older facilities that lacked current as-built drawings coupled with the loss of 'historical knowledge' as personnel retired or moved on in their careers. Walkdown of systems and the updating of drawings are imperative for nuclear safety compliance. System design integration with safety basis has relevance in the Department of Energy (DOE) complex. This paper presents the BJC Lessons Learned in this area. It will be of benefit to DOE contractors that manage and operate an aging population of nuclear facilities.
Critical Review of Theoretical Models for Anomalous Effects (Cold Fusion) in Deuterated Metals
Chechin, V A; Rabinowitz, M; Kim, Y E
1994-01-01T23:59:59.000Z
We briefly summarize the reported anomalous effects in deuterated metals at ambient temperature, commonly known as "Cold Fusion" (CF), with an emphasis on important experiments as well as the theoretical basis for the opposition to interpreting them as cold fusion. Then we critically examine more than 25 theoretical models for CF, including unusual nuclear and exotic chemical hypotheses. We conclude that they do not explain the data.
Critical Review of Theoretical Models for Anomalous Effects (Cold Fusion) in Deuterated Metals
V. A. Chechin; V. A. Tsarev; M. Rabinowitz; Y. E. Kim
2003-04-06T23:59:59.000Z
We briefly summarize the reported anomalous effects in deuterated metals at ambient temperature, commonly known as "Cold Fusion" (CF), with an emphasis on important experiments as well as the theoretical basis for the opposition to interpreting them as cold fusion. Then we critically examine more than 25 theoretical models for CF, including unusual nuclear and exotic chemical hypotheses. We conclude that they do not explain the data.
Greedy forward selection algorithms to Sparse Gaussian Process Regression
Yao, Xin
… the proposed method is always better than loss-keert in both generalization performance and running time. … We re-examine a previous basis vector selection criterion proposed by Smola and Bartlett [20], referred to as … the loss-smola criterion. We compare the full greedy algorithms induced by the loss …
A Probability Analysis for Candidate-Based Frequent Itemset Algorithms
Van Gucht, Dirk
Nele Dexters, University of Antwerp, Middelheimlaan 1, 2020 Antwerp, Belgium (nele.dexters@ua.ac.be); Paul W. Purdom, Indiana University. … of candidates, which is an important step in frequent itemset mining algorithms, from a theoretical point of view … and failure (a candidate that is infrequent). For a selection of candidate-based frequent itemset mining algo…
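Candidate generation, the step the paper analyzes, can be sketched with a minimal Apriori-style miner. This is a generic illustration of candidate-based frequent itemset mining, not the authors' probability analysis; the toy transactions are invented:

```python
from itertools import combinations

def apriori(transactions, min_support):
    """Classic candidate-based frequent itemset mining (Apriori sketch)."""
    transactions = [frozenset(t) for t in transactions]
    items = {i for t in transactions for i in t}

    def support(itemset):
        return sum(itemset <= t for t in transactions)

    frequent, level = {}, [frozenset([i]) for i in sorted(items)]
    while level:
        # Count candidates: "success" = frequent, "failure" = infrequent.
        level = [c for c in level if support(c) >= min_support]
        frequent.update((c, support(c)) for c in level)
        # Generate (k+1)-candidates by joining frequent k-itemsets...
        next_level = {a | b for a in level for b in level if len(a | b) == len(a) + 1}
        # ...and prune any candidate with an infrequent subset (Apriori property).
        level = [c for c in next_level
                 if all(frozenset(s) in frequent for s in combinations(c, len(c) - 1))]
    return frequent

freq = apriori([{"a", "b", "c"}, {"a", "b"}, {"a", "c"}, {"b", "c"}], min_support=2)
print(sorted("".join(sorted(s)) for s in freq))
```

The pruning step is exactly where candidate "failures" cost work: every generated candidate must be counted against the database before it can be discarded.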
Nonlocal Monte Carlo algorithms for statistical physics applications
Janke, Wolfhard
Nonlocal Monte Carlo algorithms for statistical physics applications. Wolfhard Janke, Institut fü… … magnets to polymers or proteins, to mention only a few classical problems. Quantum statistical problems … different theoretical approaches such as field theory or series expansions, and, of course, with experiments …
Williams, P.T.
1993-09-01T23:59:59.000Z
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H¹ Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
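The dual role of pressure described above is conventionally made explicit by taking the divergence of the incompressible momentum equation and enforcing the continuity constraint; the result is the standard pressure Poisson equation (a textbook form, not the CCM-specific weak statement):

```latex
% Incompressible Navier-Stokes in primitive variables:
%   \mathbf{u}_t + (\mathbf{u}\cdot\nabla)\mathbf{u}
%     = -\nabla p + \nu\nabla^2\mathbf{u}, \qquad \nabla\cdot\mathbf{u} = 0.
% Taking the divergence and imposing \nabla\cdot\mathbf{u} = 0 gives
\nabla^2 p = -\nabla\cdot\bigl[(\mathbf{u}\cdot\nabla)\mathbf{u}\bigr],
```

which shows the pressure acting as the instantaneous enforcer of mass conservation while still entering the momentum balance as a force.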
Call for Papers: 5th Workshop on Algorithm Engineering – WAE 2001
Brodal, Gerth Stølting
Call for Papers: 5th Workshop on Algorithm Engineering – WAE 2001. BRICS, University of Aarhus, Denmark, August 28–30, 2001. Scope: The Workshop on Algorithm Engineering covers research in all aspects of … future research. WAE 2001 is sponsored by BRICS and EATCS (the European Association for Theoretical …
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl; Kloosterman, Jan Leen, E-mail: J.L.Kloosterman@tudelft.nl
2014-03-01T23:59:59.000Z
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same it retains a similar accuracy as the original method. More importantly the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. 
These show consistent good performance both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the employment of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well known limitation of the traditional approach. The prospect of larger scale applicability and the simplicity of implementation makes such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
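The Non-Intrusive Spectral Projection step described above can be sketched in one dimension (a minimal illustration with a Gaussian input, not the FANISP algorithm): the PC coefficients of a response f(ξ), ξ ~ N(0,1), are estimated by Gauss–Hermite quadrature against probabilists' Hermite polynomials He_k.

```python
import numpy as np
from math import factorial
from numpy.polynomial.hermite_e import hermegauss, hermeval

def nisp_coefficients(f, order, n_quad=20):
    """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite polynomials He_k."""
    nodes, weights = hermegauss(n_quad)      # Gauss quadrature, weight exp(-x^2/2)
    weights = weights / np.sqrt(2 * np.pi)   # renormalize to the standard normal pdf
    coeffs = []
    for k in range(order + 1):
        ek = np.zeros(k + 1)
        ek[k] = 1.0                          # coefficient vector selecting He_k
        projection = np.sum(weights * f(nodes) * hermeval(nodes, ek))
        coeffs.append(projection / factorial(k))   # orthogonality: E[He_k^2] = k!
    return np.array(coeffs)

# Example: f(xi) = xi^2 = He_2(xi) + 1, so c0 = 1, c2 = 1, and all other c_k = 0.
c = nisp_coefficients(lambda x: x ** 2, order=4)
print(np.round(c, 6))
```

Basis adaptivity, as described in the abstract, amounts to deciding which indices k (in many dimensions, which multi-indices) are worth keeping in such an expansion before paying for the quadrature that estimates them.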
Theoretical Biology Forum 105 · 2/2012. Pisa · Roma: Fabrizio Serra Editore, MMXII.
Theoretical Perspectives on Protein Folding
Thirumalai, Devarajan
Theoretical Perspectives on Protein Folding. D. Thirumalai, Edward P. O'Brien, Greg Morrison … Understanding how monomeric proteins fold under in vitro conditions is crucial to describing their functions … remains to be done to solve the protein folding problem in the broadest sense. Annu. Rev. Biophys. 2010.
VORTEX BREAKDOWN INCIPIENCE: THEORETICAL CONSIDERATIONS
Erlebacher, Gordon
Vortex Breakdown Incipience: Theoretical Considerations. S. A. Berger, Department of Mechanical … in Science and Engineering, NASA Langley Research Center, Hampton, VA 23681-0001. Abstract: The sensitivity … dimensional boundary layer (Hall [2,3], Mager [4]); (ii) vortex breakdown is a consequence of hydrodynamic instability …
Theoretical Chemistry Theory, Computation, and
Gherman, Benjamin F.
Theoretical Chemistry Accounts: Theory, Computation, and Modeling. ISSN 1432-881X, Volume 128. … In order to explore the origin of this preference, density functional theory (DFT) calculations have been … -terminus of nascent eubacterial proteins during protein synthesis [14]. As PDF is essential for bacterial survival …
Climate Dynamics Observational, Theoretical and
Dong, Xiquan
Climate Dynamics: Observational, Theoretical and Computational Research on the Climate System. … .6, and -22.5 W m⁻², respectively, indicating a net cooling effect of clouds on the TOA radiation budget … W m⁻², respectively, resulting in a larger net cooling effect of 2.9 W m⁻² in the model simulations …
Theoretical study of cyclone design
Wang, Lingjuan
2005-08-29T23:59:59.000Z
… efficiency. The cut-point correction models (K) for 1D3D and 2D2D cyclones were developed through regression fit from traced and theoretical cut-points. The regression results indicate that cut-points are more sensitive to mass median diameter (MMD) than…
Theoretical Aspects of Particle Production
B. R. Webber
1999-12-17T23:59:59.000Z
These lectures describe some of the latest data on particle production in high-energy collisions and compare them with theoretical calculations and models based on QCD. The main topics covered are: fragmentation functions and factorization, small-x fragmentation, hadronization models, differences between quark and gluon fragmentation, current and target fragmentation in deep inelastic scattering, and heavy quark fragmentation.
ORISE: The Medical Basis for Radiation-Accident Preparedness...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
The Medical Basis for Radiation-Accident Preparedness: Medical Management Proceedings of the Fifth International REACTS Symposium on the Medical Basis for Radiation-Accident...
Structural basis for the antibody neutralization of Herpes simplex...
Office of Scientific and Technical Information (OSTI)
Structural basis for the antibody neutralization of Herpes simplex virus.
Assessing Beyond Design Basis Seismic Events and Implications...
Office of Environmental Management (EM)
Safety Board. Topics covered: Department of Energy approach to Natural Phenomena Hazards analysis and design (seismic); design basis and beyond design basis seismic events; seismic…
Technical Cost Modeling - Life Cycle Analysis Basis for Program...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Technical Cost Modeling – Life Cycle Analysis Basis for Program Focus. Polymer Composites Research in the LM Materials Program. Overview…
Technical basis document for natural event hazards
CARSON, D.M.
2003-03-20T23:59:59.000Z
This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA), and describes the risk binning process and the technical basis for assigning risk bins for natural event hazards (NEH)-initiated representative accident and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, ''Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses'', as described in this report.
TECHNICAL BASIS DOCUMENT FOR NATURAL EVENT HAZARDS
KRIPPS, L.J.
2006-07-31T23:59:59.000Z
This technical basis document was developed to support the documented safety analysis (DSA) and describes the risk binning process and the technical basis for assigning risk bins for natural event hazard (NEH)-initiated accidents. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls.
Radioactive Waste Management Basis, April 2006
Perkins, B K
2011-08-31T23:59:59.000Z
This Radioactive Waste Management Basis (RWMB) documents radioactive waste management practices adopted at Lawrence Livermore National Laboratory (LLNL) pursuant to Department of Energy (DOE) Order 435.1, Radioactive Waste Management. The purpose of this Radioactive Waste Management Basis is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
Chopped random-basis quantum optimization
Tommaso Caneva; Tommaso Calarco; Simone Montangero
2011-08-22T23:59:59.000Z
In this work we describe in detail the "Chopped RAndom Basis" (CRAB) optimal control technique recently introduced to optimize t-DMRG simulations [arXiv:1003.3750]. Here we study the efficiency of this control technique in optimizing different quantum processes and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using less resources. We propose the CRAB optimization as a general and versatile optimal control technique.
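The CRAB idea described above — expand the control pulse in a randomized truncated (e.g. Fourier) basis and optimize the few expansion coefficients with a derivative-free search — can be sketched on a toy two-level state transfer. The Hamiltonian, basis size, and Nelder–Mead optimizer below are illustrative assumptions, not the t-DMRG setting of the paper:

```python
import numpy as np
from scipy.linalg import expm
from scipy.optimize import minimize

rng = np.random.default_rng(1)
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

T, n_steps, n_modes = 3.0, 60, 4
t = np.linspace(0, T, n_steps)
dt = T / n_steps
# CRAB-style randomized frequencies around the principal harmonics.
freqs = (np.arange(1, n_modes + 1) + 0.2 * rng.standard_normal(n_modes)) * np.pi / T

def infidelity(coeffs):
    """1 - transfer probability |0> -> |1> under H(t) = f(t) sx + sz."""
    a, b = coeffs[:n_modes], coeffs[n_modes:]
    f = sum(a[n] * np.sin(freqs[n] * t) + b[n] * np.cos(freqs[n] * t)
            for n in range(n_modes))            # truncated randomized Fourier pulse
    psi = np.array([1, 0], dtype=complex)       # start in |0>
    for k in range(n_steps):                    # piecewise-constant evolution
        psi = expm(-1j * dt * (f[k] * sx + sz)) @ psi
    return 1 - abs(psi[1]) ** 2

x0 = np.zeros(2 * n_modes)                      # start from the null pulse
res = minimize(infidelity, x0, method="Nelder-Mead",
               options={"maxiter": 1500, "xatol": 1e-6, "fatol": 1e-9})
print(f"infidelity: {infidelity(x0):.3f} -> {res.fun:.4f}")
```

The key CRAB point is visible here: the search runs over only 2 × n_modes numbers rather than the full time-discretized pulse, so any derivative-free optimizer suffices.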
design basis threat | National Nuclear Security Administration
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Combinatorial Phylogenetics of Reconstruction Algorithms
Kleinman, Aaron Douglas
2012-01-01T23:59:59.000Z
… and A. Spillner. Consistency of the Neighbor-Net algorithm. Algorithms for Molecular Biology, 2:8, 2007. [10] … D. Gusfield. Efficient algorithms for inferring evolutionary …
Algorithms for Greechie Diagrams
Brendan D. McKay; Norman D. Megill; Mladen Pavicic
2001-01-21T23:59:59.000Z
We give a new algorithm for generating Greechie diagrams with an arbitrarily chosen number of atoms or blocks (with 2, 3, 4, ... atoms) and provide a computer program for generating the diagrams. The results show that the previous algorithm does not produce every diagram and that it is at least 100,000 times slower. We also provide an algorithm and programs for checking whether Greechie diagrams pass equations defining varieties of orthomodular lattices, and give examples from Hilbert lattices. At the end we discuss some additional characteristics of Greechie diagrams.
Algorithms incorporating concurrency and caching
Fineman, Jeremy T
2009-01-01T23:59:59.000Z
This thesis describes provably good algorithms for modern large-scale computer systems, including today's multicores. Designing efficient algorithms for these systems involves overcoming many challenges, including concurrency ...
Theoretical issues in Spheromak research
Cohen, R. H.; Hooper, E. B.; LoDestro, L. L.; Mattor, N.; Pearlstein, L. D.; Ryutov, D. D.
1997-04-01T23:59:59.000Z
This report summarizes the state of theoretical knowledge of several physics issues important to the spheromak. It was prepared as part of the preparation for the Sustained Spheromak Physics Experiment (SSPX), which addresses these goals: energy confinement and the physics which determines it; the physics of transition from a short-pulsed experiment, in which the equilibrium and stability are determined by a conducting wall ("flux conserver"), to one in which the equilibrium is supported by external coils. Physics is examined in this report in four important areas. The status of present theoretical understanding is reviewed, physics which needs to be addressed more fully is identified, and tools which are available or require more development are described. Specifically, the topics include: MHD equilibrium and design, review of MHD stability, spheromak dynamo, and edge plasma in spheromaks.
The theoretical significance of G
T. Damour
1999-01-22T23:59:59.000Z
The quantization of gravity, and its unification with the other interactions, is one of the greatest challenges of theoretical physics. Current ideas suggest that the value of G might be related to the other fundamental constants of physics, and that gravity might be richer than the standard Newton-Einstein description. This gives added significance to measurements of G and to Cavendish-type experiments.
Theoretical Uncertainties in Inflationary Predictions
William H. Kinney; Antonio Riotto
2006-03-09T23:59:59.000Z
With present and future observations becoming of higher and higher quality, it is timely and necessary to investigate the most significant theoretical uncertainties in the predictions of inflation. We show that our ignorance of the entire history of the Universe, including the physics of reheating after inflation, translates to considerable errors in observationally relevant parameters. Using the inflationary flow formalism, we estimate that for a spectral index $n$ and tensor/scalar ratio $r$ in the region favored by current observational constraints, the theoretical errors are of order $\Delta n / |n - 1| \sim 0.1 - 1$ and $\Delta r / r \sim 0.1 - 1$. These errors represent the dominant theoretical uncertainties in the predictions of inflation, and are generically of the order of or larger than the projected uncertainties in future precision measurements of the Cosmic Microwave Background. We also show that the lowest-order classification of models into small field, large field, and hybrid breaks down when higher order corrections to the dynamics are included. Models can flow from one region to another.
NDRPProtocolTechBasisCompiled020705.doc
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Basis for OUO | Department of Energy
Technical basis for internal dosimetry at Hanford
Sula, M.J.; Carbaugh, E.H.; Bihl, D.E.
1989-04-01T23:59:59.000Z
The Hanford Internal Dosimetry Program, administered by Pacific Northwest Laboratory for the US Department of Energy, provides routine bioassay monitoring for employees who are potentially exposed to radionuclides in the workplace. This report presents the technical basis for routine bioassay monitoring and the assessment of internal dose at Hanford. The radionuclides of concern include tritium, corrosion products ({sup 58}Co, {sup 60}Co, {sup 54}Mn, and {sup 59}Fe), strontium, cesium, iodine, europium, uranium, plutonium, and americium. Sections on each of these radionuclides discuss the sources and characteristics; dosimetry; bioassay measurements and monitoring; dose measurement, assessment, and mitigation; and bioassay follow-up treatment. 64 refs., 42 figs., 118 tabs.
Technical basis for internal dosimetry at Hanford
Sula, M.J.; Carbaugh, E.H.; Bihl, D.E.
1991-07-01T23:59:59.000Z
The Hanford Internal Dosimetry Program, administered by Pacific Northwest Laboratory for the US Department of Energy, provides routine bioassay monitoring for employees who are potentially exposed to radionuclides in the workplace. This report presents the technical basis for routine bioassay monitoring and the assessment of internal dose at Hanford. The radionuclides of concern include tritium, corrosion products ({sup 58}Co, {sup 60}Co, {sup 54}Mn, and {sup 59}Fe), strontium, cesium, iodine, europium, uranium, plutonium, and americium. Sections on each of these radionuclides discuss the sources and characteristics; dosimetry; bioassay measurements and monitoring; dose measurement, assessment, and mitigation; and bioassay follow-up treatment. 78 refs., 35 figs., 115 tabs.
Randomized Algorithms with Splitting: Why the Classic Randomized Algorithms Do Not Work and How to Make Them Work
Del Moral , Pierre
We show that the original classic randomized algorithms for approximate counting in NP do not work as intended, and how running multiple Markov chains simultaneously can remedy this. We present several algorithms of the combined version, which are simple.
Technical Basis for PNNL Beryllium Inventory
Johnson, Michelle Lynn
2014-07-09T23:59:59.000Z
The Department of Energy (DOE) issued Title 10 of the Code of Federal Regulations Part 850, “Chronic Beryllium Disease Prevention Program” (the Beryllium Rule) in 1999 and required full compliance by no later than January 7, 2002. The Beryllium Rule requires the development of a baseline beryllium inventory of the locations of beryllium operations and other locations of potential beryllium contamination at DOE facilities. The baseline beryllium inventory is also required to identify workers exposed or potentially exposed to beryllium at those locations. Prior to DOE issuing 10 CFR 850, Pacific Northwest National Laboratory (PNNL) had documented the beryllium characterization and worker exposure potential for multiple facilities in compliance with DOE’s 1997 Notice 440.1, “Interim Chronic Beryllium Disease.” After DOE’s issuance of 10 CFR 850, PNNL developed an implementation plan to be compliant by 2002. In 2014, an internal self-assessment (ITS #E-00748) of PNNL’s Chronic Beryllium Disease Prevention Program (CBDPP) identified several deficiencies. One deficiency is that the technical basis for establishing the baseline beryllium inventory when the Beryllium Rule was implemented was either not documented or not retrievable. In addition, the beryllium inventory itself had not been adequately documented and maintained since PNNL established its own CBDPP, separate from Hanford Site’s program. This document reconstructs PNNL’s baseline beryllium inventory as it would have existed when it achieved compliance with the Beryllium Rule in 2001 and provides the technical basis for the baseline beryllium inventory.
Theoretical Studies in Chemical Kinetics
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
AFDC Printable Version Share this resource Send a link to EERE: Alternative Fuels Data Center Home Page to someone by E-mail Share EERE: Alternative Fuels Data Center Home Page on Facebook Tweet about EERE: Alternative Fuels Data Center Home Page on Twitter Bookmark EERE:1 First Use of Energy for All Purposes (Fuel and Nonfuel), 2002; Level:5(Million Cubic Feet) Oregon (Including Vehicle Fuel) (MillionStructural Basis of WntSupportB 18B()The FiveRevisedThe vision of a smart1 Studies in
Theoretical studies of combustion dynamics
Bowman, J.M. [Emory Univ., Atlanta, GA (United States)
1993-12-01T23:59:59.000Z
The basic objectives of this research program are to develop and apply theoretical techniques to fundamental dynamical processes of importance in gas-phase combustion. There are two major areas currently supported by this grant. One is reactive scattering of diatom-diatom systems, and the other is the dynamics of complex formation and decay based on L{sup 2} methods. In all of these studies, the authors focus on systems that are of interest experimentally, and for which potential energy surfaces based, at least in part, on ab initio calculations are available.
Ricci, Laura
[Lecture-slide fragments; recoverable outline: 1 Introduction; 2 Epidemic virus diffusion: models; 3 Epidemic algorithms; 4 Gossip algorithms.]
Theoretical Perspectives on Protein Folding
D. Thirumalai; Edward P. O'Brien; Greg Morrison; Changbong Hyeon
2010-07-18T23:59:59.000Z
Understanding how monomeric proteins fold under in vitro conditions is crucial to describing their functions in the cellular context. Significant advances both in theory and experiments have resulted in a conceptual framework for describing the folding mechanisms of globular proteins. The experimental data and theoretical methods have revealed the multifaceted character of proteins. Proteins exhibit universal features that can be determined using only the number of amino acid residues (N) and polymer concepts. The sizes of proteins in the denatured and folded states, cooperativity of the folding transition, dispersions in the melting temperatures at the residue level, and time scales of folding are to a large extent determined by N. The consequences of finite N, especially on how individual residues order upon folding, depend on the topology of the folded states. Such intricate details can be predicted using the Molecular Transfer Model that combines simulations with measured transfer free energies of protein building blocks from water to the desired concentration of the denaturant. By watching one molecule fold at a time, using single molecule methods, the validity of the theoretically anticipated heterogeneity in the folding routes, and the N-dependent time scales for the three stages in the approach to the native state have been established. Despite the successes of theory, of which only a few examples are documented here, we conclude that much remains to be done to solve the "protein folding problem" in the broadest sense.
Theoretical and experimental investigation of polarization spectroscopy
Hanna, Sherif Fayez
2001-01-01T23:59:59.000Z
The physics of polarization spectroscopy has been studied theoretically and experimentally. Theoretically, the dependence of the saturated polarization spectroscopy signal has been studied using the direct numerical integration (DNI) code of the time...
Recent Theoretical Results for Advanced Thermoelectric Materials...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Materials Recent Theoretical Results for Advanced Thermoelectric Materials Transport theory and first principles calculations applied to oxides, chalcogenides and skutterudite...
Tiled QR factorization algorithms
Bouwmeester, Henricus; Langou, Julien; Robert, Yves
2011-01-01T23:59:59.000Z
This work revisits existing algorithms for the QR factorization of rectangular matrices composed of p-by-q tiles, where p >= q. Within this framework, we study the critical paths and performance of algorithms such as Sameh and Kuck, Modi and Clarke, Greedy, and those found within PLASMA. Although neither Modi and Clarke nor Greedy is optimal, both are shown to be asymptotically optimal for all matrices of size p = q^2 f(q), where f is any function such that $\lim_{q \to +\infty} f(q) = 0$. This novel and important complexity result applies to all matrices where p and q are proportional, p = \lambda q, with \lambda >= 1, thereby encompassing many important situations in practice (least squares). We provide an extensive set of experiments that show the superiority of the new algorithms for tall matrices.
Inner model theoretic geology Gunter Fuchs
Schindler, Ralf
Inner model theoretic geology. Gunter Fuchs, Ralf Schindler. November 18, 2014. Abstract: One of the basic concepts of set theoretic geology is the mantle of a model of set theory V: it is the intersection of all grounds of V, the study of which was dubbed set theoretic geology in that paper. One of the main results of [FHR] was that any
A. Khan; B. Yoshimura; J. K. Freericks
2015-08-11T23:59:59.000Z
One of the challenges with quantum simulation in ion traps is that the effective spin-spin exchange couplings are not uniform across the lattice. This can be particularly important in Penning trap realizations where the presence of an ellipsoidal boundary at the edge of the trap leads to dislocations in the crystal. By adding an additional anharmonic potential to better control inter-ion spacing, and a triangular shaped rotating wall potential to reduce the appearance of dislocations, one can achieve better uniformity of the ionic positions. In this work, we calculate the axial phonon frequencies and the spin-spin interactions driven by a spin-dependent optical dipole force, and discuss what effects the more uniform ion spacing has on the spin simulation properties of Penning trap quantum simulators. Indeed, we find the spin-spin interactions behave more like a power law for a wide range of parameters.
Authorization basis status report (miscellaneous TWRS facilities, tanks and components)
Stickney, R.G.
1998-04-29T23:59:59.000Z
This report presents the results of a systematic evaluation conducted to identify miscellaneous TWRS facilities, tanks, and components with potentially needed authorization basis upgrades. It provides the authorization basis upgrade plan for the miscellaneous TWRS facilities, tanks, and components identified.
Office of Nuclear Safety Basis and Facility Design
Broader source: Energy.gov [DOE]
The Office of Nuclear Safety Basis & Facility Design establishes safety basis and facility design requirements and expectations related to analysis and design of nuclear facilities to ensure protection of workers and the public from the hazards associated with nuclear operations.
Nonlinear adaptive control using radial basis function approximants
Petersen, Jerry Lee
1993-01-01T23:59:59.000Z
The purpose of this research is to present an adaptive control strategy using the radial basis function approximation method. Surface approximation methods using radial basis function approximants will first be discussed. The Hamiltonian dynamical...
CRAD, Integrated Safety Basis and Engineering Design Review ...
Broader source: Energy.gov (indexed) [DOE]
Integrated Safety Basis and Engineering Design Review - August 20, 2014 (EA CRAD 31-4, Rev. 0).
Quantum algorithms for algebraic problems
Andrew M. Childs; Wim van Dam
2008-12-02T23:59:59.000Z
Quantum computers can execute algorithms that dramatically outperform classical computation. As the best-known example, Shor discovered an efficient quantum algorithm for factoring integers, whereas factoring appears to be difficult for classical computers. Understanding what other computational problems can be solved significantly faster using quantum algorithms is one of the major challenges in the theory of quantum computation, and such algorithms motivate the formidable task of building a large-scale quantum computer. This article reviews the current state of quantum algorithms, focusing on algorithms with superpolynomial speedup over classical computation, and in particular, on problems with an algebraic flavor.
Incentives and Internet Algorithms
Feigenbaum, Joan
[Lecture-slide fragments; recoverable: Incentives and Internet Algorithms, Joan Feigenbaum, Yale University, http://www.cs.yale.edu/~jf, and Scott... Topics: game theory and Internet design; the long history of work on networking; how to cope with selfishness; Internet architecture and robust scalability: how to build large and robust systems.]
Genetic Algorithms and Artificial Life
Forrest, Stephanie
Genetic Algorithms and Artificial Life. Melanie Mitchell, Santa Fe Institute, 1660 Old Pecos Tr... Genetic algorithms (GAs) are currently the most prominent and widely used models of evolution in artificial-life systems. Likewise, evolution of artificial systems is an important component of artificial life. GAs have been
Algorithms for Next-Generation High-Throughput Sequencing Technologies
Kao, Wei-Chun
2011-01-01T23:59:59.000Z
[Table-of-contents fragments; recoverable entry: 2.6.1, A hybrid base-calling algorithm.]
A Flexible Reservation Algorithm for Advance Network Provisioning
Balman, Mehmet; Chaniotakis, Evangelos; Shoshani, Arie; Sim, Alex
2010-04-12T23:59:59.000Z
Many scientific applications need support from a communication infrastructure that provides predictable performance, which requires effective algorithms for bandwidth reservations. Network reservation systems such as ESnet's OSCARS establish guaranteed bandwidth of secure virtual circuits for a certain bandwidth and length of time. However, users currently cannot inquire about bandwidth availability, nor receive alternative suggestions when reservation requests fail. In general, the number of reservation options is exponential in the number of nodes n and the current reservation commitments. We present a novel approach for path finding in time-dependent networks taking advantage of user-provided parameters of total volume and time constraints, which produces options for earliest completion and shortest duration. The theoretical complexity is only O(n^2 r^2) in the worst case, where r is the number of reservations in the desired time interval. We have implemented our algorithm and developed efficient methodologies for incorporation into network reservation frameworks. Performance measurements confirm the theoretical predictions.
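The earliest-completion idea in this abstract can be illustrated on a single link. The sketch below is not the authors' algorithm or OSCARS code; it is a minimal, hypothetical model assuming piecewise-constant available bandwidth derived from committed `(start, end, bandwidth)` reservations, with all names invented for illustration.

```python
def earliest_completion(capacity, reservations, volume, t0=0.0, horizon=100.0):
    """Earliest time by which `volume` can be transferred on one link,
    given committed (start, end, bandwidth) reservations. Returns None
    if the transfer cannot finish within the horizon."""
    if volume <= 0:
        return t0
    # Breakpoints where the available bandwidth can change.
    points = sorted({t0, horizon} | {t for r in reservations for t in (r[0], r[1])})
    points = [t for t in points if t0 <= t <= horizon]
    moved = 0.0
    for a, b in zip(points, points[1:]):
        used = sum(bw for s, e, bw in reservations if s < b and e > a)
        avail = max(capacity - used, 0.0)
        if avail * (b - a) >= volume - moved:   # finishes inside this interval
            return a + (volume - moved) / avail
        moved += avail * (b - a)
    return None
```

For example, with capacity 10 and a reservation of 8 units on [0, 5], a 10-unit transfer trickles through at 2 units/s and finishes at t = 5.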
Graph Algorithms in the Internet Age
Stanton, Isabelle Lesley
2012-01-01T23:59:59.000Z
[Table-of-contents fragments; recoverable entries: 4.3 Analysis of Algorithms on Random Graphs; 5 An Introduction to Matching Algorithms; 5.1; 5.2 Classic Matching Algorithms.]
High-performance combinatorial algorithms
Pinar, Ali
2003-01-01T23:59:59.000Z
[Fragments; recoverable: mathematics and high-performance computing; combinatorial algorithms on high-performance computing platforms.]
Research in Theoretical Particle Physics
Feldman, Hume A; Marfatia, Danny
2014-09-24T23:59:59.000Z
This document is the final report on activity supported under DOE Grant Number DE-FG02-13ER42024. The report covers the period July 15, 2013 – March 31, 2014. Faculty supported by the grant during the period were Danny Marfatia (1.0 FTE) and Hume Feldman (1% FTE). The grant partly supported University of Hawaii students, David Yaylali and Keita Fukushima, who are supervised by Jason Kumar. Both students are expected to graduate with Ph.D. degrees in 2014. Yaylali will be joining the University of Arizona theory group in Fall 2014 with a 3-year postdoctoral appointment under Keith Dienes. The group’s research covered topics subsumed under the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. Many theoretical results related to the Standard Model and models of new physics were published during the reporting period. The report contains brief project descriptions in Section 1. Sections 2 and 3 list published and submitted work, respectively. Sections 4 and 5 summarize group activity including conferences, workshops and professional presentations.
Theoretical models for Bump Cepheids
G. Bono; V. Castellani; M. Marconi
2002-01-08T23:59:59.000Z
We present the results of a theoretical investigation aimed at testing whether full amplitude, nonlinear, convective models account for the I-band light curves of Bump Cepheids in the Large Magellanic Cloud (LMC). We selected two objects from the OGLE sample that show a well-defined bump along the decreasing (short-period) and the rising (long-period) branch respectively. We find that current models do reproduce the luminosity variation over the entire pulsation cycle if the adopted stellar mass is roughly 15 % smaller than predicted by evolutionary models that neglect both mass loss and convective core overshooting. Moreover, we find that the fit to the light curve of the long-period Cepheid located close to the cool edge of the instability strip requires an increase in the mixing length from 1.5 to 1.8 Hp. This suggests an increase in the efficiency of the convective transport when moving toward cooler effective temperatures. Current pulsation calculations supply a LMC distance modulus ranging from 18.48 to 18.58 mag.
Multipartite entanglement in quantum algorithms
D. Bruß; C. Macchiavello
2010-07-23T23:59:59.000Z
We investigate the entanglement features of the quantum states employed in quantum algorithms. In particular, we analyse the multipartite entanglement properties in the Deutsch-Jozsa, Grover and Simon algorithms. Our results show that for these algorithms most instances involve multipartite entanglement.
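Grover's algorithm, one of the cases the abstract analyses, is small enough to simulate directly. The sketch below is my own plain state-vector illustration (not the authors' analysis): on two qubits, one Grover iteration finds the marked item with certainty, and after the oracle step the intermediate state is entangled for a single marked item.

```python
# Two-qubit Grover search simulated on a 4-entry state vector.
def grover_2qubit(marked):
    n = 4
    state = [1 / n**0.5] * n               # uniform superposition H|00>
    state[marked] *= -1                    # oracle: phase flip on the marked item
    mean = sum(state) / n                  # diffusion: inversion about the mean
    state = [2 * mean - a for a in state]
    return state

probs = [abs(a) ** 2 for a in grover_2qubit(2)]  # all weight lands on item 2
```

On two qubits a single iteration suffices; for larger registers roughly (pi/4)*sqrt(N) iterations are needed.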
Axioms, algorithms and Hilbert's Entscheidungsproblem
Axioms, algorithms and Hilbert's Entscheidungsproblem. Jan Stovicek, Department of Mathematical Sciences, September 9th, 2008 (www.ntnu.no). Outline: the Decision Problem; formal languages and theories; incompleteness; undecidability.
Algorithm FIRE -- Feynman Integral REduction
A. V. Smirnov
2008-08-02T23:59:59.000Z
The recently developed algorithm FIRE performs the reduction of Feynman integrals to master integrals. It is based on a number of strategies, such as applying the Laporta algorithm, the s-bases algorithm, region-bases and integrating explicitly over loop momenta when possible. Currently it is being used in complicated three-loop calculations.
Samadi, R; Ludwig, H -G; Caffau, E; Campante, T L; Davies, G R; Kallinger, T; Lund, M N; Mosser, B; Baglin, A; Mathur, S; Garcia, R
2013-01-01T23:59:59.000Z
A large set of stars observed by CoRoT and Kepler shows clear evidence for the presence of a stellar background, which is interpreted to arise from surface convection, i.e., granulation. These observations show that the characteristic time-scale (tau_eff) and the root-mean-square (rms) brightness fluctuations (sigma) associated with the granulation scale as a function of the peak frequency (nu_max) of the solar-like oscillations. We aim at providing a theoretical background to the observed scaling relations based on a model developed in the companion paper. We computed for each 3D model the theoretical power density spectrum (PDS) associated with the granulation as seen in disk-integrated intensity on the basis of the theoretical model. For each PDS we derived tau_eff and sigma and compared these theoretical values with the theoretical scaling relations derived from the theoretical model and the Kepler measurements. We derive theoretical scaling relations for tau_eff and sigma, which show the same dependence ...
Tomasz Plawski, J. Hovater
2010-09-01T23:59:59.000Z
A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
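The I&Q scheme named above regulates the in-phase and quadrature components of the cavity field. The following is a generic textbook-style sketch of I/Q demodulation, not the Jefferson Lab implementation: assuming the field is sampled four times per RF cycle, opposite samples give the I and Q components, from which amplitude and phase follow.

```python
import math

def iq_demod(samples):
    """samples: x[k] = A*cos(pi*k/2 + phi), four samples per cycle.
    Returns (amplitude, phase) recovered from one cycle."""
    x0, x1, x2, x3 = samples[:4]
    i = (x0 - x2) / 2.0      # differencing opposite samples rejects any DC offset
    q = (x3 - x1) / 2.0
    return math.hypot(i, q), math.atan2(q, i)

A, phi = 3.0, 0.4
samples = [A * math.cos(math.pi * k / 2 + phi) for k in range(4)]
amp, phase = iq_demod(samples)   # recovers (3.0, 0.4)
```

An I&Q loop would then apply feedback to the (i, q) pair directly, whereas an amplitude & phase loop acts on the derived polar quantities.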
Stability of Coupling Algorithms
Akkasale, Abhineeth
2012-07-16T23:59:59.000Z
[Thesis front matter, May 2011: Chair of Committee, K. B. Nakshatrala; Committee Members, Steve Suh, J. N. Reddy; Head of Department, Dennis O'Neal; Major Subject: Mechanical Engineering. Abstract: Stability of Coupling Algorithms. (May 2011) Abhineeth Akkasale, B.E., Bangalore... step.]
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01T23:59:59.000Z
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Parameters and error of a theoretical model
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01T23:59:59.000Z
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
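The approach can be illustrated on a deliberately simplified case that is not from the paper: a one-parameter linear model y ~ theta*x with Gaussian scatter. Maximizing the likelihood then yields both the model parameter and an estimate of the model's own error sigma, mirroring the abstract's idea; all names and numbers below are invented.

```python
import math

def ml_fit(xs, ys):
    """Maximum-likelihood fit of y = theta*x + N(0, sigma^2).
    Returns (theta_hat, sigma_hat)."""
    theta = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
    resid = [y - theta * x for x, y in zip(xs, ys)]
    # ML estimate of the model error (note: divides by n, so slightly biased).
    sigma = math.sqrt(sum(r * r for r in resid) / len(xs))
    return theta, sigma

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.1, 3.9, 6.1, 7.9]          # synthetic data scattered about y = 2x
theta, sigma = ml_fit(xs, ys)
```

Here theta comes out close to 2 and sigma quantifies how far the "theory" y = theta*x sits from the "data", which is the sense in which the paper defines a model's error.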
Catalyst by Design - Theoretical, Nanostructural, and Experimental...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Emission Treatment Catalyst Catalyst by Design - Theoretical, Nanostructural, and Experimental Studies of Emission Treatment Catalyst Poster presented at the 16th Directions in...
Theoretical and Computational Neuroscience Gustavo Deco
Lambert, Patrik
[Fragments; recoverable: Computational and Theoretical Neuroscience, UPF; EU project on plasticity of cross-modal integration (Volkswagen Foundation); visitor funding from DAAD, the Boehringer Foundation, the Volkswagen Foundation, and the Generalitat.]
Distributed algorithms for mobile ad hoc networks
Malpani, Navneet
2001-01-01T23:59:59.000Z
We first present two new leader election algorithms for mobile ad hoc networks. The algorithms ensure that eventually each connected component of the topology graph has exactly one leader. The algorithms are based on a routing algorithm called TORA...
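The algorithms in this thesis are distributed, message-passing, and cope with topology changes; as a purely static analogue of their end condition (exactly one leader per connected component), here is a centralized sketch of my own, electing the maximum-id node in each component.

```python
from collections import deque

def elect_leaders(nodes, edges):
    """Return {node: leader} with one leader per connected component
    of the undirected topology graph (leader = max node id)."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    leader, seen = {}, set()
    for start in nodes:
        if start in seen:
            continue
        comp, queue = [], deque([start])   # BFS over one component
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        head = max(comp)
        for n in comp:
            leader[n] = head
    return leader

L = elect_leaders([1, 2, 3, 4, 5], [(1, 2), (2, 3), (4, 5)])
```

A real mobile ad hoc algorithm must reach this state by local messages only and re-converge after every link change, which is exactly what the TORA-based height mechanism provides.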
Random Search Algorithms Zelda B. Zabinsky
Del Moral , Pierre
Random Search Algorithms. Zelda B. Zabinsky. April 5, 2009. Abstract: Random search algorithms ... with convergence results in probability. Random search algorithms include simulated annealing, tabu search, genetic algorithms, evolutionary programming, particle swarm optimization, ant colony optimization, cross
New algorithms for adaptive optics point-spread function reconstruction
Eric Gendron; Yann Clénet; Thierry Fusco; Gérard Rousset
2006-06-28T23:59:59.000Z
Context. The knowledge of the point-spread function compensated by adaptive optics is of prime importance in several image restoration techniques such as deconvolution and astrometric/photometric algorithms. Wavefront-related data from the adaptive optics real-time computer can be used to accurately estimate the point-spread function in adaptive optics observations. The only point-spread function reconstruction algorithm implemented on an astronomical adaptive optics system makes use of particular functions, named $U_{ij}$. These $U_{ij}$ functions are derived from the mirror modes, and their number is proportional to the square of the number of these mirror modes. Aims. We present here two new algorithms for point-spread function reconstruction that aim at suppressing the use of these $U_{ij}$ functions to avoid the storage of a large amount of data and to shorten the computation time of this PSF reconstruction. Methods. Both algorithms take advantage of the eigen decomposition of the residual parallel phase covariance matrix. In the first algorithm, the use of a basis in which the latter matrix is diagonal reduces the number of $U_{ij}$ functions to the number of mirror modes. In the second algorithm, this eigen decomposition is used to compute phase screens that follow the same statistics as the residual parallel phase covariance matrix, and thus suppress the need for these $U_{ij}$ functions. Results. Our algorithms dramatically reduce the number of $U_{ij}$ functions to be computed for the point-spread function reconstruction. Adaptive optics simulations show the good accuracy of both algorithms to reconstruct the point-spread function.
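The core of the second algorithm is a standard linear-algebra device: from an eigen decomposition C = V diag(w) V^T of a covariance matrix, samples x = V sqrt(w) g (with g standard normal) have covariance C. The sketch below demonstrates only that device, with an invented toy matrix in place of a real residual-phase covariance.

```python
import numpy as np

# Toy stand-in for a residual parallel phase covariance matrix (symmetric,
# positive definite); the real matrix would come from AO telemetry.
C = np.array([[2.0, 0.5, 0.1],
              [0.5, 1.5, 0.3],
              [0.1, 0.3, 1.0]])

w, V = np.linalg.eigh(C)                  # eigh: decomposition for symmetric C
assert np.all(w > 0)                      # covariance must be positive definite

rng = np.random.default_rng(0)
g = rng.standard_normal((3, 50000))
screens = V @ (np.sqrt(w)[:, None] * g)   # columns: synthetic "phase screens"

sample_cov = screens @ screens.T / g.shape[1]   # should approximate C
```

No per-pair $U_{ij}$-style quantities are needed here: one decomposition of C suffices to generate as many statistically consistent screens as desired, which is the storage and time saving the abstract describes.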
Batcho, P.F. [Princeton Univ., NJ (United States)] [Princeton Univ., NJ (United States); Karniadakis, G.E. [Brown Univ., Providence, RI (United States)] [Brown Univ., Providence, RI (United States)
1994-11-01T23:59:59.000Z
The present study focuses on the solution of the incompressible Navier-Stokes equations in general, non-separable domains, and employs a Galerkin projection of divergence-free vector functions as a trial basis. This basis is obtained from the solution of a generalized constrained Stokes eigen-problem in the domain of interest. Faster convergence can be achieved by constructing a singular Stokes eigen-problem in which the Stokes operator is modified to include a variable coefficient which vanishes at the domain boundaries. The convergence properties of such functions are advantageous in a least squares sense and are shown to produce significantly better approximations to the solution of the Navier-Stokes equations in post-critical states where unsteadiness characterizes the flowfield. Solutions for the eigen-systems are efficiently accomplished using a combined Lanczos-Uzawa algorithm and spectral element discretizations. Results are presented for different simulations using these global spectral trial basis on non-separable and multiply-connected domains. It is confirmed that faster convergence is obtained using the singular eigen-expansions in approximating stationary Navier-Stokes solutions in general domains. It is also shown that 100-mode expansions of time-dependent solutions based on the singular Stokes eigenfunctions are sufficient to accurately predict the dynamics of flows in such domains, including Hopf bifurcations, intermittency, and details of flow structures.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2010-01-01T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since its inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving significant changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. 
Maintenance and distribution of controlled hard copies of the manual by PNNL was discontinued beginning with Revision 0.2. Revision Log: Rev. 0 (2/25/2005) Major revision and expansion. Rev. 0.1 (3/12/2007) Updated Chapters 5, 6 and 9 to reflect change in default ring calibration factor used in HEDP dose calculation software. Factor changed from 1.5 to 2.0 beginning January 1, 2007. Pages on which changes were made are as follows: 5.23, 5.69, 5.78, 5.80, 5.82, 6.3, 6.5, 6.29, and 9.2. Rev 0.2 (8/28/2009) Updated Chapters 3, 5, 6, 8 and 9. Chapters 6 and 8 were significantly expanded. References in the Preface and Chapters 1, 2, 4, and 7 were updated to reflect updates to DOE documents. Approved by HPDAC on 6/2/2009. Rev 1.0 (1/1/2010) Major revision. Updated all chapters to reflect the Hanford site wide implementation on January 1, 2010 of new DOE requirements for occupational radiation protection. The new requirements are given in the June 8, 2007 amendment to 10 CFR 835 Occupational Radiation Protection (Federal Register, June 8, 2007. Title 10 Part 835. U.S., Code of Federal Regulations, Vol. 72, No. 110, 31904-31941). Revision 1.0 to the manual replaces ICRP 26 dosimetry concepts and terminology with ICRP 60 dosimetry concepts and terminology and replaces external dose conversion factors from ICRP 51 with those from ICRP 74 for use in measurement of operational quantities with dosimeters. Descriptions of dose algorithms and dosimeter response characteristics, and field performance were updated to reflect changes in the neutron quality factors used in the measurement of operational quantities.
5. Greedy and other efficient optimization algorithms
Keil, David M.
5. Greedy and other efficient optimization algorithms. CSCI 347 Analysis of Algorithms, David M. Keil, Framingham State University. 1. When the next step is easy
Nuclear Safety Basis Program Review Overview and Management Oversight...
Broader source: Energy.gov (indexed) [DOE]
This SRP, Nuclear Safety Basis Program Review, consists of five volumes. It provides information to help strengthen the technical rigor of line management oversight and federal...
POSTDOCTORAL POSITIONS in THEORETICAL HIGH ENERGY PHYSICS
for Advanced Studies (SISSA), the Department of Theoretical Physics of Trieste University, the Trieste section of the INFN, and the Trieste Observatory. The Section is also a member of the European network "Quest". Centre for Theoretical Physics, Strada Costiera n. 11 - 34151 Trieste, Italy. E-mail: rosanna@ictp.it
Kramer, Peter
Theoretical Framework for Microscopic Osmotic Phenomena. Paul J. Atzberger, Department of Mathematics, University of California, Santa Barbara (2007). The basic ingredients of osmotic pressure are a solvent fluid with a soluble molecular
Pedram, Massoud
The difference between energy consumption levels at peak usage time and off-peak hours has created incentives for customers to shift their energy consumption from peak-energy-use hours to off-peak times, thus lowering ... Department of Electrical Engineering, University of Southern California, Los Angeles, CA, USA. {tcui, yanzhiwa, siyuyue, shahin, pedram}
Game-theoretic learning algorithm for a spatial coverage problem Ketan Savla and Emilio Frazzoli
Savla, Ketan
A problem of particular interest is concerned with the generation of efficient cooperative strategies for several mobile agents ... the time to complete the task, or the fuel/energy expenditure. ... the time "spent alone" at the next target location, and show that the Nash equilibria of the game correspond ...
A Graph-theoretic Algorithm for Comparative Modeling of Protein Structure
Samudrala, Ram
... are the same (Chothia & Lesk, 1986). This is the case now for about 30% of the general sequences entering ... The method for doing this is usually termed comparative or homology modeling. In contrast to progress in generating ... effects make the energy surface extremely discontinuous, so that search methods that make semi...
Theoretical Comparisons of Search Dynamics of Genetic Algorithms and Evolution Strategies
Coello, Carlos A. Coello
Tatsuya Okabe, Honda R&D Co., Ltd., Wako Research Center, 1-4-1 Chuo, Wako-shi, Saitama 351-0193, Japan, tatsuya okabe@n.w.rd.honda.co.jp; Yaochu Jin, Honda Research Institute Europe, Carl-Legien Strasse 30, 63073 Offenbach am Main, Germany, yaochu.jin@honda-ri.de; Bernhard Sendhoff, Honda Research Institute Europe, Carl-Legien Strasse 30, 63073 Offenbach am Main, Germany
Zhou, Ping
2008-01-01T23:59:59.000Z
Nonextensive lattice gauge theories: algorithms and methods
Rafael B. Frigori
2014-04-26T23:59:59.000Z
High-energy phenomena presenting strong dynamical correlations, long-range interactions and microscopic memory effects are well described by nonextensive versions of the canonical Boltzmann-Gibbs statistical mechanics. After a brief theoretical review, we introduce a class of generalized heat-bath algorithms that enable Monte Carlo lattice simulations of gauge fields on the nonextensive statistical ensemble of Tsallis. The algorithmic performance is evaluated as a function of the Tsallis parameter q in equilibrium and nonequilibrium setups. Then, we revisit short-time dynamic techniques, which, in contrast to usual simulations in equilibrium, present negligible finite-size effects and no critical slowing down. As an application, we investigate the short-time critical behaviour of the nonextensive hot Yang-Mills theory at q-values obtained from heavy-ion collision experiments. Our results imply that, when the equivalence of statistical ensembles is obeyed, the long-standing universality arguments relating gauge theories and spin systems hold also for the nonextensive framework.
Multisensor data fusion algorithm development
Yocky, D.A.; Chadwick, M.D.; Goudy, S.P.; Johnson, D.K.
1995-12-01T23:59:59.000Z
This report presents a two-year LDRD research effort into multisensor data fusion. We approached the problem by addressing the available types of data, preprocessing that data, and developing fusion algorithms using that data. The report reflects these three distinct areas. First, the possible data sets for fusion are identified. Second, automated registration techniques for imagery data are analyzed. Third, two fusion techniques are presented. The first fusion algorithm is based on the two-dimensional discrete wavelet transform. Using test images, the wavelet algorithm is compared against intensity modulation and intensity-hue-saturation image fusion algorithms that are available in commercial software. The wavelet approach outperforms the other two fusion techniques by preserving spectral/spatial information more precisely. The wavelet fusion algorithm was also applied to Landsat Thematic Mapper and SPOT panchromatic imagery data. The second algorithm is based on a linear-regression technique. We analyzed the technique using the same Landsat and SPOT data.
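The wavelet fusion compared above can be illustrated with a minimal single-level 2-D Haar transform. This is a hedged sketch, not the report's actual algorithm: the function names are mine, both images are assumed to be the same (even-sided) size and already registered, and a real pan-sharpening pipeline would use more decomposition levels.

```python
import numpy as np

def haar2d(img):
    """Single-level 2-D Haar transform: returns (LL, LH, HL, HH) subbands."""
    a = (img[0::2] + img[1::2]) / 2.0      # row averages
    d = (img[0::2] - img[1::2]) / 2.0      # row details
    LL = (a[:, 0::2] + a[:, 1::2]) / 2.0
    LH = (a[:, 0::2] - a[:, 1::2]) / 2.0
    HL = (d[:, 0::2] + d[:, 1::2]) / 2.0
    HH = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return LL, LH, HL, HH

def ihaar2d(LL, LH, HL, HH):
    """Exact inverse of haar2d."""
    h, w = LL.shape
    a = np.empty((h, 2 * w)); d = np.empty((h, 2 * w))
    a[:, 0::2] = LL + LH; a[:, 1::2] = LL - LH
    d[:, 0::2] = HL + HH; d[:, 1::2] = HL - HH
    out = np.empty((2 * h, 2 * w))
    out[0::2] = a + d; out[1::2] = a - d
    return out

def fuse(spectral, sharp):
    """Keep the approximation subband of the spectral image and
    inject the detail subbands of the sharper image."""
    LL, _, _, _ = haar2d(spectral)
    _, LH, HL, HH = haar2d(sharp)
    return ihaar2d(LL, LH, HL, HH)
```

The fused image inherits coarse intensities from the first input and high-frequency detail from the second, which is the spectral/spatial trade-off the report measures.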
A New Numerical Algorithm for Thermoacoustic and Photoacoustic Tomography with Variable Sound Speed
Qian, Jianliang; Uhlmann, Gunther; Zhao, Hongkai
2011-01-01T23:59:59.000Z
We present a new algorithm for reconstructing an unknown source in Thermoacoustic and Photoacoustic Tomography based on the recent advances in understanding the theoretical nature of the problem. We work with variable sound speeds that might also be discontinuous across some surface. The latter problem arises in brain imaging. The new algorithm is based on an explicit formula in the form of a Neumann series. We present numerical examples with non-trapping, trapping and piecewise smooth speeds, as well as examples with data on a part of the boundary. These numerical examples demonstrate the robust performance of the new algorithm.
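The paper's Neumann-series formula is specific to the wave problem, but the underlying mechanism is the classical operator identity (I - K)^{-1} b = b + K b + K^2 b + ..., valid when the spectral radius of K is below one. A generic sketch (my own toy operator, not the authors' reconstruction operator):

```python
import numpy as np

def neumann_solve(K, b, terms=60):
    """Approximate (I - K)^{-1} b by the truncated Neumann series
    b + K b + K^2 b + ...  Valid when the spectral radius of K is < 1."""
    x = np.zeros_like(b)
    term = b.copy()
    for _ in range(terms):
        x = x + term
        term = K @ term       # next power of K applied to b
    return x

# toy contraction: scale a random matrix so its spectral norm is 0.5
rng = np.random.default_rng(1)
A = rng.random((4, 4))
K = 0.5 * A / np.linalg.norm(A, 2)
b = rng.random(4)
x = neumann_solve(K, b)
```

With a norm of 0.5, sixty terms leave a remainder of order 0.5^60, so the truncated series matches a direct solve to machine precision.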
Emergence of a measurement basis in atom-photon scattering
Yinnon Glickman; Shlomi Kotler; Nitzan Akerman; Roee Ozeri
2012-06-18T23:59:59.000Z
The process of quantum measurement has been a long-standing source of debate. A measurement is postulated to collapse a wavefunction onto one of the states of a predetermined set - the measurement basis. This basis origin is not specified within quantum mechanics. According to the theory of decoherence, a measurement basis is singled out by the nature of coupling of a quantum system to its environment. Here we show how a measurement basis emerges in the evolution of the electronic spin of a single trapped atomic ion due to spontaneous photon scattering. Using quantum process tomography we visualize the projection of all spin directions onto this basis as a photon is scattered. These basis spin states are found to be aligned with the scattered photon propagation direction. In accordance with decoherence theory, they are subjected to a minimal increase in entropy due to the photon scattering, while orthogonal states become fully mixed and their entropy is maximally increased. Moreover, we show that detection of the scattered photon polarization measures the spin state of the ion, in the emerging basis, with high fidelity. Lastly, we show that while photon scattering entangles all superpositions of pointer states with the scattered photon polarization, the measurement-basis states themselves remain classically correlated with it. Our findings show that photon scattering by atomic spin superpositions fulfils all the requirements of a quantum measurement process.
Preconditioned solenoidal basis method for incompressible fluid flows
Wang, Xue
2006-04-12T23:59:59.000Z
defined parametrically using the element basis functions, x_e(xi, eta) = sum_{i=1..6} phi_i x_i^e, y_e(xi, eta) = sum_{i=1..6} phi_i y_i^e, (3.3) where (x_i^e, y_i^e) are the nodal coordinates defining Omega_e. For the quadratic velocities, the basis functions...
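The snippet quotes an isoparametric map built from six quadratic basis functions. A sketch of the standard six-node (P2) triangle shape functions and the map; the function names and node ordering (three vertices, then three edge midpoints) are a common convention, not necessarily the thesis's:

```python
import numpy as np

def p2_shape(xi, eta):
    """Six-node quadratic (P2) triangle shape functions phi_1..phi_6
    on the reference triangle 0 <= xi, eta, xi + eta <= 1."""
    l1 = 1.0 - xi - eta            # barycentric coordinates
    l2, l3 = xi, eta
    return np.array([
        l1 * (2 * l1 - 1),         # vertex (0, 0)
        l2 * (2 * l2 - 1),         # vertex (1, 0)
        l3 * (2 * l3 - 1),         # vertex (0, 1)
        4 * l1 * l2,               # midpoint of edge 1-2
        4 * l2 * l3,               # midpoint of edge 2-3
        4 * l3 * l1,               # midpoint of edge 3-1
    ])

def map_point(nodes, xi, eta):
    """Isoparametric map of eq. (3.3): nodes is a (6, 2) array of the
    nodal coordinates (x_i, y_i); returns the physical point."""
    return p2_shape(xi, eta) @ nodes
```

Because the P2 basis reproduces linear functions exactly, mapping with nodes placed at their reference positions returns (xi, eta) itself, a quick sanity check on any implementation.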
A Jacobi Method for Lattice Basis Reduction Sanzheng Qiao
Qiao, Sanzheng
A Jacobi Method for Lattice Basis Reduction. Sanzheng Qiao, Department of Computing and Software, McMaster University. ... decoding has been successfully used in wireless communications. In this paper, we propose a Jacobi method for lattice basis reduction. The Jacobi method is attractive because it is inherently parallel. Thus high
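The Jacobi approach works by pairwise (two-vector) reductions; the classical two-dimensional building block is Lagrange-Gauss reduction. A sketch of that 2-D step only (my own code, not the paper's full parallel algorithm):

```python
import numpy as np

def lagrange_reduce(u, v):
    """Lagrange-Gauss reduction of a 2-D integer lattice basis (u, v):
    repeatedly subtract the rounded projection of the longer vector onto
    the shorter one, swapping so that ||u|| <= ||v||."""
    u, v = np.array(u, dtype=np.int64), np.array(v, dtype=np.int64)
    if u @ u > v @ v:
        u, v = v, u
    while True:
        m = round((u @ v) / (u @ u))   # nearest-integer projection coefficient
        v = v - m * u                  # size-reduce v against u
        if v @ v >= u @ u:
            return u, v                # reduced: u is a shortest vector
        u, v = v, u
```

The update is unimodular, so the lattice (and the determinant, up to sign) is preserved while the basis vectors shrink.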
Comparison between Traditional Neural Networks and Radial Basis Function Networks
Wilamowski, Bogdan Maciej
Comparison between Traditional Neural Networks and Radial Basis Function Networks. Tiantian Xie, Hao ... Two types of neural networks, traditional neural networks and radial basis function (RBF) networks, are analyzed and compared based on four different examples.
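A minimal radial basis function network of the kind compared in the paper: Gaussian hidden units with a linear least-squares output layer. The center placement, width, and toy target below are my choices, not the paper's experimental setup.

```python
import numpy as np

def rbf_features(x, centers, width):
    """Gaussian radial basis features exp(-(x - c)^2 / (2 width^2))."""
    return np.exp(-(x[:, None] - centers[None, :]) ** 2 / (2 * width ** 2))

# fit sin(x) on [0, 2*pi] with 10 Gaussian units
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x)
centers = np.linspace(0, 2 * np.pi, 10)
Phi = rbf_features(x, centers, width=0.7)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)   # linear output weights
y_hat = Phi @ w
```

Because the hidden layer is fixed, training reduces to one linear solve, which is the main practical contrast with backpropagation-trained traditional networks.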
Non adiabatic quantum search algorithms
A. Perez; A. Romanelli
2007-06-08T23:59:59.000Z
We present two new continuous-time quantum search algorithms similar to the adiabatic search algorithm, but without an adiabatic evolution. We find that both algorithms work for a wide range of values of the parameters of the Hamiltonian, and that one of them has the additional feature that, for times larger than a characteristic one, it converges to a state which can be close to the searched state.
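For context, the continuous-time search baseline the abstract builds on can be illustrated by the Farhi-Gutmann analog search Hamiltonian H = |s><s| + |w><w|, which rotates the uniform state |s> into the marked state |w> with certainty at time pi*sqrt(N)/2. This sketch simulates that textbook baseline, not the authors' new algorithms:

```python
import numpy as np

def search_probability(N, t):
    """Analog quantum search: H = |s><s| + |w><w|, start in the uniform
    state |s>, return the success probability |<w|psi(t)>|^2."""
    s = np.full(N, 1 / np.sqrt(N))       # uniform superposition
    w = np.zeros(N); w[0] = 1.0          # marked state
    H = np.outer(s, s) + np.outer(w, w)
    vals, vecs = np.linalg.eigh(H)       # H is real symmetric
    psi_t = vecs @ (np.exp(-1j * vals * t) * (vecs.T @ s))
    return abs(w @ psi_t) ** 2

t_star = np.pi * np.sqrt(16) / 2         # characteristic time for N = 16
```

Since |s> +/- |w> are exact eigenvectors with eigenvalues 1 +/- 1/sqrt(N), the dynamics is a two-level rotation and the success probability reaches 1 at t_star.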
Selected Items in Jet Algorithms
Giuseppe Bozzi
2008-08-06T23:59:59.000Z
I provide a very brief overview of recent developments in jet algorithms, mostly focusing on the issue of infrared-safety.
Algorithms for dynamical overlap fermions
Stefan Schaefer
2006-09-28T23:59:59.000Z
An overview of the current status of algorithmic approaches to dynamical overlap fermions is given. In particular the issue of changing the topological sector is discussed.
Hedge Algorithm and Subgradient Methods
2010-10-05T23:59:59.000Z
Standard complexity results on subgradient algorithms allow us to derive optimal parameters ... Journal of the American Statistical Association, 58:13-30, 1963. In fact ...
Lecture 24: Parallel Algorithms I Topics: sort and matrix algorithms
Balasubramonian, Rajeev
Lecture 24: Parallel Algorithms I. Topics: sort and matrix algorithms. Processor model: a single clock (asynchronous designs will require minor modifications); at each clock, processors receive input and produce output. Control at each processor: each processor stores the minimum number it has seen.
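A classic sorting algorithm for the one-processor-per-element model described above is odd-even transposition sort: n synchronous phases, each performing independent compare-exchanges on alternating neighbor pairs. A sequential simulation (my own sketch; the lecture may use a different example):

```python
def odd_even_transposition_sort(a):
    """Simulate odd-even transposition sort: n phases; in phase p every
    pair (i, i+1) with i of the phase's parity compare-exchanges.
    All swaps within one phase touch disjoint pairs, so a parallel
    machine can do each phase in O(1) time."""
    a = list(a)
    n = len(a)
    for phase in range(n):
        for i in range(phase % 2, n - 1, 2):
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a
```

With n processors this gives an O(n)-time sort, the standard first example for the synchronous-clock model.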
Recursive Dynamics Algorithms for Serial, Parallel, and Closed-chain Multibody Systems
Saha, Subir Kumar
Subir Kumar Saha, Department of Mechanical Engineering, IIT Delhi, Hauz Khas, New Delhi 110 016 ... Wehage and Haug (1982), Kamman and Huston (1984), Angeles and Lee (1988), Saha and Angeles (1991), ... and Saha (1997), which are the basis for the development of the recursive dynamics algorithms proposed
Grid Monitoring: Bounds on Performances of Sensor Placement Algorithms Muhammad Uddin
Kavcic, Aleksandar
... matrix, we use a linear minimum mean squared error estimator as the state estimator to formulate the PMU placement problem ... test systems, showing that the proposed bounds provide a valid basis for determining the quality of ... iterative algorithms [3]. The PMUs, on the other hand, can directly measure the states at the PMU
Theoretical efficiency of solar thermoelectric energy generators
Chen, Gang
This paper investigates the theoretical efficiency of solar thermoelectric generators (STEGs). A model is established including thermal concentration in addition to optical concentration. Based on the model, the maximum ...
The pointer basis and the feedback stabilization of quantum systems
L. Li; A. Chia; H. M. Wiseman
2014-11-19T23:59:59.000Z
The dynamics for an open quantum system can be `unravelled' in infinitely many ways, depending on how the environment is monitored, yielding different sorts of conditioned states, evolving stochastically. In the case of ideal monitoring these states are pure, and the set of states for a given monitoring forms a basis (which is overcomplete in general) for the system. It has been argued elsewhere [D. Atkins et al., Europhys. Lett. 69, 163 (2005)] that the `pointer basis' as introduced by Zurek and Paz [Phys. Rev. Lett. 70, 1187 (1993)], should be identified with the unravelling-induced basis which decoheres most slowly. Here we show the applicability of this concept of pointer basis to the problem of state stabilization for quantum systems. In particular we prove that for linear Gaussian quantum systems, if the feedback control is assumed to be strong compared to the decoherence of the pointer basis, then the system can be stabilized in one of the pointer basis states with a fidelity close to one (the infidelity varies inversely with the control strength). Moreover, if the aim of the feedback is to maximize the fidelity of the unconditioned system state with a pure state that is one of its conditioned states, then the optimal unravelling for stabilizing the system in this way is that which induces the pointer basis for the conditioned states. We illustrate these results with a model system: quantum Brownian motion. We show that even if the feedback control strength is comparable to the decoherence, the optimal unravelling still induces a basis very close to the pointer basis. However if the feedback control is weak compared to the decoherence, this is not the case.
Algorithms for Supporting Compiled Communication
Yuan, Xin
Xin Yuan, Rami Melhem, Rajiv Gupta, Dept. ... We present an experimental compiler, ESUIF, that supports compiled communication for High ... algorithms used in ESUIF. We further demonstrate the effectiveness of compiled communication on all-optical ...
Dhar, Deepak
polymers. Sumedha and Deepak Dhar, Department of Theoretical Physics, Tata Institute of Fundamental ... algorithm for linear and branched polymers. There is a qualitative difference in the efficiency in these two ... for linear polymers, but as exp(c n^alpha) for branched (undirected and directed) polymers, where 0 ...
Call for Papers 9th Annual European Symposium on Algorithms --ESA 2001
Brodal, Gerth Stølting
Call for Papers: 9th Annual European Symposium on Algorithms (ESA 2001), BRICS, University of Aarhus, Denmark, August 28-31, 2001. Scope: The Symposium covers research in the use, design, and analysis of efficient algorithms. ESA 2001 is sponsored by BRICS and EATCS (the European Association for Theoretical Computer Science).
A Panoply of Quantum Algorithms
Bartholomew Furrow
2006-06-15T23:59:59.000Z
We create a variety of new quantum algorithms that use Grover's algorithm and similar techniques to give polynomial speedups over their classical counterparts. We begin by introducing a set of tools that carefully minimize the impact of errors on running time; those tools provide us with speedups to already-published quantum algorithms, such as improving Durr, Heiligman, Hoyer and Mhalla's algorithm for single-source shortest paths [quant-ph/0401091] by a factor of lg N. The algorithms we construct from scratch have a range of speedups, from O(E)->O(sqrt(VE lg V)) speedups in graph theory to an O(N^3)->O(N^2) speedup in dynamic programming.
Local algorithms for graph partitioning and finding dense subgraphs
Andersen, Reid
2007-01-01T23:59:59.000Z
...ed local partitioning algorithm ...; A Local Algorithm for Finding Dense Subgraphs ...; Comparison of local partitioning algorithms ...
Structural and Functional Basis for Broad-spectrum Neutralization...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Structural and Functional Basis for Broad-spectrum Neutralization of Avian and Human Influenza A Viruses Seasonal influenza A is a scourge of the young and old, killing more than...
Choosing Power Cables on the Basis of Energy Economics
Dimachkieh, S.; Brown, D. R.
1980-01-01T23:59:59.000Z
Feeder circuit and branch circuit cable sizes are determined primarily on the basis of ampacity and voltage drop considerations. Rising costs of energy suggest a reassessment of currently used voltage drop criteria. Some degree of cable oversizing...
Dynamic Algorithm for Space Weather Forecasting System
Fischer, Luke D.
2011-08-08T23:59:59.000Z
We propose to develop a dynamic algorithm that intelligently analyzes existing solar weather data and constructs an increasingly accurate equation/algorithm for predicting solar weather in real time. This dynamic algorithm analyzes a...
Efficient Algorithms for High Dimensional Data Mining
Rakthanmanon, Thanawin
2012-01-01T23:59:59.000Z
...Resolution QRS Detection Algorithm for Sparsely Sampled ECG ...; Shamlo, 2011, A disk-aware algorithm for time series motif ...; J. M. Kleinberg, 1997, Two algorithms for nearest-neighbor ...
End of semester project Global Optimization algorithms
Dreyfuss, Pierre
End of semester project: Global Optimization algorithms. Ecole Polytechnique de l'Université de Nice. ... II. Simulated annealing algorithm (SA) ... 2. Principle, algorithm and choice of parameters
Minimally entangled typical thermal state algorithms
Stoudenmire, E. M.; White, Steven R.
2010-01-01T23:59:59.000Z
... and the algorithm continued by defining R_3 ... In the order indicated, this algorithm for multiplying MPOs scales ... Minimally entangled typical thermal state algorithms, E. M. Stoudenmire and ...
Sensor Networks: Distributed Algorithms Reloaded or Revolutions?
Sensor Networks: Distributed Algorithms Reloaded or Revolutions? Roger Wattenhofer, Computer ... This paper aims to motivate the distributed algorithms community to study sensor networks. We discuss why ... To the database community, a sensor network essentially is a database. The distributed algorithms community should join ...
Reflections for quantum query algorithms
Ben W. Reichardt
2010-05-10T23:59:59.000Z
We show that any boolean function can be evaluated optimally by a quantum query algorithm that alternates a certain fixed, input-independent reflection with a second reflection that coherently queries the input string. Originally introduced for solving the unstructured search problem, this two-reflections structure is therefore a universal feature of quantum algorithms. Our proof goes via the general adversary bound, a semi-definite program (SDP) that lower-bounds the quantum query complexity of a function. By a quantum algorithm for evaluating span programs, this lower bound is known to be tight up to a sub-logarithmic factor. The extra factor comes from converting a continuous-time query algorithm into a discrete-query algorithm. We give a direct and simplified quantum algorithm based on the dual SDP, with a bounded-error query complexity that matches the general adversary bound. Therefore, the general adversary lower bound is tight; it is in fact an SDP for quantum query complexity. This implies that the quantum query complexity of the composition f(g,...,g) of two boolean functions f and g matches the product of the query complexities of f and g, without a logarithmic factor for error reduction. It further shows that span programs are equivalent to quantum query algorithms.
Computing single step operators of logic programming in radial basis function neural networks
Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)
2014-07-10T23:59:59.000Z
Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single-step operator of any logic program is defined as a function (T_P : I → I). Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single-step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single-step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
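The single-step (immediate consequence) operator T_P described above is easy to state directly for propositional programs; the sketch below computes T_P and iterates it to its least fixed point. This illustrates the operator only; the paper's RBF-network encoding and PSO training are not reproduced.

```python
def t_p(program, interpretation):
    """Single-step operator T_P: an atom is true in T_P(I) iff some
    clause head <- body has every body atom true in I.
    program: list of (head, [body atoms]); interpretation: set of true atoms."""
    return {head for head, body in program
            if all(atom in interpretation for atom in body)}

def least_fixed_point(program):
    """Iterate T_P from the empty interpretation until it stabilises;
    for definite programs this reaches the least Herbrand model."""
    interp = set()
    while True:
        nxt = t_p(program, interp)
        if nxt == interp:
            return interp
        interp = nxt

# a <- ;  b <- a ;  c <- a, b ;  d <- e   (e is never derivable)
prog = [("a", []), ("b", ["a"]), ("c", ["a", "b"]), ("d", ["e"])]
```

Generating input/output pairs of t_p over many interpretations is exactly the kind of training set the abstract says is fed to the network.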
Tetrahedral hp finite elements: Algorithms and flow simulations
Sherwin, S.J.; Karniadakis, G.E. [Brown Univ., Providence RI (United States)] [Brown Univ., Providence RI (United States)
1996-03-01T23:59:59.000Z
We present a new discretisation for the incompressible Navier-Stokes equations that extends spectral methods to three-dimensional complex domains consisting of tetrahedral subdomains. The algorithm is based on standard concepts of hp finite elements as well as tensorial spectral elements. This new formulation employs a hierarchical/modal basis constructed from a new apex co-ordinate system which retains a generalised tensor product. These properties enable the development of computationally efficient algorithms for use on standard finite volume unstructured meshes. A detailed analysis is presented that documents the stability and exponential convergence of the method, and several flow cases are simulated and compared with analytical and experimental results. 34 refs., 28 figs., 1 tab.
Fast Computation Algorithm for Discrete Resonances among Gravity Waves
Elena Kartashova
2006-05-25T23:59:59.000Z
Traditionally, resonant interactions among short waves, with large real wave-numbers, were described statistically, and only a small domain in spectral space with integer wave-numbers, the discrete resonances, had to be studied separately in resonators. Numerical simulations of the last few years showed unambiguously the existence of some discrete effects in the short-wave part of the wave spectrum. The newly presented model of laminated turbulence explains the appearance of these effects theoretically, thus posing a novel problem: the construction of fast algorithms for computing solutions of resonance conditions with integer wave-numbers of order $10^3$ and more. An example of such an algorithm for 4-wave interactions of gravity waves is given. Its generalization to different types of waves is briefly discussed.
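The gravity-wave dispersion omega ~ |k|^(1/2) is irrational on the integer grid, which is exactly why Kartashova's fast number-theoretic algorithms are needed; naive floating-point search is unreliable there. As a toy illustration of the discrete 4-wave resonance conditions themselves, here is an exact brute-force search for the polynomial dispersion omega(k) = |k|^2 (an NLS-type toy model, not the paper's case):

```python
from itertools import product

def four_wave_resonances(kmax):
    """Brute-force nontrivial 4-wave resonances k1 + k2 = k3 + k4 and
    w(k1) + w(k2) = w(k3) + w(k4) for the toy dispersion w(k) = |k|^2
    on the integer grid [0, kmax]^2.  Exact integer arithmetic applies
    here, unlike the irrational gravity-wave case w ~ |k|^(1/2)."""
    w = lambda k: k[0] ** 2 + k[1] ** 2
    grid = list(product(range(kmax + 1), repeat=2))
    found = []
    for k1, k2, k3, k4 in product(grid, repeat=4):
        if (k1[0] + k2[0], k1[1] + k2[1]) != (k3[0] + k4[0], k3[1] + k4[1]):
            continue                       # momentum condition
        if w(k1) + w(k2) != w(k3) + w(k4):
            continue                       # frequency condition
        if {k1, k2} != {k3, k4}:           # discard trivial permutations
            found.append((k1, k2, k3, k4))
    return found
```

For this dispersion the nontrivial solutions are rectangles in k-space, e.g. (0,1) + (1,0) resonates with (0,0) + (1,1); the quartic-root gravity-wave case requires the special integer structure the paper develops.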
A polynomial projection algorithm for linear programming
2013-05-03T23:59:59.000Z
We propose a polynomial algorithm for linear programming. The algorithm represents a linear optimization or decision problem in the form of a system of linear ...
A Direct Manipulation Language for Explaining Algorithms
Scott, Jeremy
Instructors typically explain algorithms in computer science by tracing their behavior, often on blackboards, sometimes with algorithm visualizations. Using blackboards can be tedious because they do not facilitate ...
Theoretical stellar models for old galactic clusters
V. Castellani; S. Degl'Innocenti; M. Marconi
1998-12-05T23:59:59.000Z
We present new evolutionary stellar models suitable for old Population I clusters, discussing both the consequences of the most recent improvements in the input physics and the effect of element diffusion within the stellar structures. Theoretical cluster isochrones are presented, covering the range of ages from 1 to 9 Gyr for four selected choices of the metallicity: Z = 0.007, 0.010, 0.015 and 0.020. Theoretical uncertainties on the efficiency of superadiabatic convection are discussed in some detail. Isochrone fitting to the CM diagrams of the two well observed galactic clusters NGC2420 and M67 indicates that a mixing length parameter alpha = 1.9 appears adequate for reproducing the observed color of cool giant stars. The problems in matching theoretical predictions to the observed slope of MS stars are discussed.
Algorithms for VLSI Circuit Optimization and GPU-Based Parallelization
Liu, Yifang
2010-07-14T23:59:59.000Z
...-convex, theoretical optimality conditions do not hold. Instead, the tendency to be trapped into a local optimum qualifies them more as a greedy approach. This research ... can be directly applied on DAG topology as opposed to tree topology in [12]. Experiments are performed on ISCAS85, ITC99, and IWLS 2005 benchmark circuits to compare our algorithm with a state-of-the-art previous work [1]. The results indicate... (The journal model is IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.)
Theoretical studies of chemical reaction dynamics
Schatz, G.C. [Argonne National Laboratory, IL (United States)
1993-12-01T23:59:59.000Z
This collaborative program with the Theoretical Chemistry Group at Argonne involves theoretical studies of gas phase chemical reactions and related energy transfer and photodissociation processes. Many of the reactions studied are of direct relevance to combustion; others are selected because they provide important examples of special dynamical processes, or are of relevance to experimental measurements. Both classical trajectory and quantum reactive scattering methods are used for these studies, and the types of information determined range from thermal rate constants to state-to-state differential cross sections.
Theoretical Study on Catalysis by Protein Enzymes and Ribozyme
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Theoretical Study on Catalysis by Protein Enzymes and Ribozyme. 2000 NERSC Annual Report. The...
ITP Steel: Theoretical Minimum Energies to Produce Steel for...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
ITP Steel: Theoretical Minimum Energies to Produce Steel for Selected Conditions, March 2000...
Randomized algorithms for reliable broadcast
Vaikuntanathan, Vinod
2009-01-01T23:59:59.000Z
In this thesis, we design randomized algorithms for classical problems in fault tolerant distributed computing in the full-information model. The full-information model is a strong adversarial model which imposes no ...
Bayesian inference algorithm on Raw
Luong, Alda
2004-01-01T23:59:59.000Z
This work explores the performance of Raw, a parallel hardware platform developed at MIT, running a Bayesian inference algorithm. Motivation for examining this parallel system is a growing interest in creating a self-learning ...
DIMACS Series in Discrete Mathematics and Theoretical Computer Science
Martin, Ralph R.
, "Blocked Clauses" and "Generalized Autarkness," are outlined. 6. "The improved 3-SAT algorithm": Here
Imaging algorithms in radio interferometry
R. J. Sault; T. A. Oosterloo
2007-01-08T23:59:59.000Z
The paper reviews progress in imaging in radio interferometry for the period 1993-1996. Unlike an optical telescope, the basic measurements of a radio interferometer (correlations between antennas) are indirectly related to a sky brightness image. In a real sense, algorithms and computers are the lenses of a radio interferometer. In the last 20 years, whereas interferometer hardware advances have resulted in improvements of a factor of a few, algorithm and computer advances have resulted in orders of magnitude improvement in image quality. Developing these algorithms has been a fruitful and comparatively inexpensive method of improving the performance of existing telescopes, and has made some newer telescopes possible. In this paper, we review recent developments in the algorithms used in the imaging part of the reduction process. What constitutes an `imaging algorithm'? Whereas once there was a steady `forward' progression in the reduction process of editing, calibrating, transforming and, finally, deconvolving, this is no longer true. The introduction of techniques such as self-calibration, and algorithms that go directly from visibilities to final images, have made the dividing lines less clear. Although we briefly consider self-calibration, for the purposes of this paper calibration issues are generally excluded. Most attention will be directed to the steps which form final images from the calibrated visibilities.
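The deconvolution step the review discusses is classically done with Hogbom's CLEAN: repeatedly locate the brightest residual pixel, subtract a gain-scaled copy of the point spread function there, and record the component. A minimal sketch, assuming a centred, image-sized, unit-peak PSF (real implementations add windowing, restoring beams, and major/minor cycles):

```python
import numpy as np

def hogbom_clean(dirty, psf, gain=0.1, n_iter=200, threshold=1e-3):
    """Minimal Hogbom CLEAN.  dirty: observed image (sky convolved with
    psf); psf: centred, same shape as the image, peak value 1.
    Returns (model of point components, final residual)."""
    residual = dirty.copy()
    model = np.zeros_like(dirty)
    cy, cx = psf.shape[0] // 2, psf.shape[1] // 2
    for _ in range(n_iter):
        y, x = np.unravel_index(np.argmax(np.abs(residual)), residual.shape)
        peak = residual[y, x]
        if abs(peak) < threshold:
            break
        model[y, x] += gain * peak
        # subtract the PSF shifted to the peak, clipped at image edges
        for dy in range(psf.shape[0]):
            for dx in range(psf.shape[1]):
                iy, ix = y + dy - cy, x + dx - cx
                if 0 <= iy < residual.shape[0] and 0 <= ix < residual.shape[1]:
                    residual[iy, ix] -= gain * peak * psf[dy, dx]
    return model, residual
```

The small loop gain trades speed for stability, the same robustness consideration the review highlights for practical imaging pipelines.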
Hydraulic Geometry: Empirical Investigations and Theoretical Approaches
Eaton, Brett
B.C. Eaton, Department of Geography, The University of British Columbia, 1984 West Mall, Vancouver, BC, V6T 1Z2. Abstract: Hydraulic ... One approach to hydraulic geometry considers temporal changes at a single location due to variations
Chicago Journal of Theoretical Computer Science
Kozen, Dexter
Chicago Journal of Theoretical Computer Science, The MIT Press, Volume 1995, Article 3, 20 September 1995. ISSN 1073-0486. MIT Press Journals, 55 Hayward St., Cambridge, MA 02142 USA; (617)253-2889; journals-orders@mit.edu, journals-info@mit.edu. Published one article at a time in LaTeX source form.
Chicago Journal of Theoretical Computer Science
Erickson, Jeff
Chicago Journal of Theoretical Computer Science, The MIT Press, Volume 1999, Article 8: Lower Bounds for Linear Satisfiability Problems. ISSN 1073-0486.
Chicago Journal of Theoretical Computer Science
Pudlák, Pavel
Chicago Journal of Theoretical Computer Science, The MIT Press, Volume 1999, Article 11: Satisfiability Coding Lemma. ISSN 1073-0486.
Chicago Journal of Theoretical Computer Science
Agrawal, Manindra
Chicago Journal of Theoretical Computer Science, The MIT Press, Volume 1997, Article 5, 31 December 1997. ISSN 1073-0486.
Chicago Journal of Theoretical Computer Science
Ta-Shma, Amnon
Chicago Journal of Theoretical Computer Science, MIT Press, Volume 1995, Article 1, 30 June 1995. ISSN 1073-0486.
Chicago Journal of Theoretical Computer Science
Fenner, Stephen
Chicago Journal of Theoretical Computer Science, The MIT Press, Volume 1999, Article 2: Complements of Multivalued Functions. ISSN 1073-0486.
Chicago Journal of Theoretical Computer Science
Mahajan, Meena
Chicago Journal of Theoretical Computer Science, The MIT Press, Volume 1997, Article 5, 31 December 1997.
Chicago Journal of Theoretical Computer Science
Fenner, Stephen
Chicago Journal of Theoretical Computer Science, The MIT Press, Volume 1999, Article 2: Complements of Multivalued Functions.
GROUP-THEORETIC ORBIT DECIDABILITY ENRIC VENTURA
Ventura, Enric
Abstract. A recent collection of papers relates orbit decidability to the conjugacy problem, following the connection made by Bogopolski–Martino–Ventura in [2]. All the consequences published up to date are surveyed. Supported through grant number MTM2011-25955.
Theoretical Studies in Elementary Particle Physics
Collins, John C.; Roiban, Radu S
2013-04-01T23:59:59.000Z
This final report summarizes work at Penn State University from June 1, 1990 to April 30, 2012. The work was in theoretical elementary particle physics. Many new results in perturbative QCD, in string theory, and in related areas were obtained, with a substantial impact on the experimental program.
History and Contributions of Theoretical Computer Science
Selman, Alan
John E. Savage, Department of Computer Science, Brown University, Providence, RI 02912, savage@cs.brown.edu; Alan L. Selman, Department of Computer Science (cse.buffalo.edu); Carl Smith, Department of Computer Science, University of Maryland, College Park, MD 20741, smith
Emergence of the pointer basis through the dynamics of correlations
M. F. Cornelio; O. Jiménez Farías; F. F. Fanchini; I. Frerot; G. H. Aguilar; M. O. Hor-Meyll; M. C. de Oliveira; S. P. Walborn; A. O. Caldeira; P. H. Souto Ribeiro
2012-10-04T23:59:59.000Z
We use the classical correlation between a quantum system being measured and its measurement apparatus to analyze the amount of information being retrieved in a quantum measurement process. Accounting for decoherence of the apparatus, we show that these correlations may have a sudden transition from a decay regime to a constant level. This transition characterizes a non-asymptotic emergence of the pointer basis, while the system and apparatus can still be quantum correlated. We provide a formalization of the concept of the emergence of a pointer basis in an apparatus subject to decoherence. This contrast between the emergence of the pointer basis and the quantum-to-classical transition is demonstrated in an experiment with polarization-entangled photon pairs.
Fine Entanglement and State Manipulation of Two Spin Coupled Qubits: A Lie Theoretic Overview
Roderick Vance
2015-02-18T23:59:59.000Z
By building on the work in Kuzmak & Tkachuk, "Preparation of quantum states of two spin-1/2 particles in the form of the Schmidt decomposition", Physics Letters A, 378, pp. 1469–1474, which outlined the control of the degree of entanglement within this system, it is proven that any SU(4) state manipulation operator can be realised for this system using a sequence of pulsed magnetic fields in either two linearly independent directions, if the gyromagnetic ratios are unequal, or three directions, for equal gyromagnetic ratios. To achieve this goal, an elementary Lie-theoretic proof of the fact that the group of transformations generated by finite products of exponentials of a set of Lie algebra vectors is equal to the Lie group generated by the smallest Lie algebra containing those vectors is rewritten into an explicit algorithm. A numerical example as well as a proof of the algorithm's effectiveness is given.
The Functional Requirements and Design Basis for Information Barriers
Fuller, James L.
2012-05-01T23:59:59.000Z
This report summarizes the results of the Information Barrier Working Group workshop held at Sandia National Laboratory in Albuquerque, NM, February 2-4, 1999. This workshop was convened to establish the functional requirements associated with warhead radiation signature information barriers, to identify the major design elements of any such system or approach, and to identify a design basis for each of these major elements. Such information forms the general design basis to be used in designing, fabricating, and evaluating the complete integrated systems developed for specific purposes.
Resilient Control Systems Practical Metrics Basis for Defining Mission Impact
Craig G. Rieger
2014-08-01T23:59:59.000Z
"Resilience" describes how systems operate at an acceptable level of normalcy despite disturbances or threats. In this paper we first consider the cognitive, cyber-physical interdependencies inherent in critical infrastructure systems and how resilience differs from reliability to mitigate these risks. A terminology and metrics basis is provided to integrate the cognitive, cyber-physical aspects that should be considered when defining solutions for resilience. A practical approach is taken to roll this metrics basis up to system integrity and business case metrics that establish "proper operation" and "impact." A notional chemical processing plant is the use case for demonstrating how the system integrity metrics can be applied to establish performance, and
Structural Basis for the Promiscuous Biosynthetic Prenylation of Aromatic
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Journées MAS 2010, Bordeaux. Session: Stochastic Algorithms
Boyer, Edmond
In particular, several facets and applications of the MCO, MCOP, MCOG, ... algorithms will be highlighted
9. Genetic Algorithms 9.1 Introduction
Cambridge, University of
9. Genetic Algorithms. 9.1 Introduction. The concept of evolution, prevalent in most biological systems, has been carried over to computational optimisation methods using "genetic algorithms" [50]. 9.2 Neural Networks and Genetic Algorithms ... with the function f being non-linear; genetic algorithms (GAs) are one possible method of solving such a problem.
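A minimal sketch of the idea behind the notes above (the fitness function, population size, and operator choices here are illustrative assumptions, not from the cited lecture notes): a genetic algorithm keeps a population of candidate solutions and improves it by selection, crossover, and mutation; with elitism the best candidate is never lost.

```python
import random

def genetic_maximise(fitness, lo, hi, pop_size=20, generations=40, seed=1):
    """Tiny real-valued GA with elitism; all parameters are illustrative."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        elite = pop[: pop_size // 4]              # selection: keep best quarter
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            child = (a + b) / 2.0                 # crossover: midpoint of parents
            child += rng.gauss(0.0, 0.1)          # mutation: small Gaussian noise
            children.append(min(hi, max(lo, child)))
        pop = elite + children                    # elitism: best always survives
    return max(pop, key=fitness)

# Maximise a simple non-linear f(x) = -(x - 3)^2 over [0, 10].
best = genetic_maximise(lambda x: -(x - 3.0) ** 2, 0.0, 10.0)
```

With a fixed seed the run is deterministic; elitism guarantees the best fitness is non-decreasing across generations.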
Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium
M Weimar
1998-12-10T23:59:59.000Z
This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for and estimate of the level of savings that can be obtained from a fixed-priced contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-priced contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D provides the summary of the approach for the DOE-Richland Operations Office (RL) estimate of the M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal that indicates that the current approach is still better than the alternative.
Formal Management Review of the Safety Basis Calculations Noncompliance
Altenbach, T J
2008-06-24T23:59:59.000Z
In Reference 1, LLNL identified a failure to adequately implement an institutional commitment concerning administrative requirements governing the documentation of Safety Basis calculations supporting the Documented Safety Analysis (DSA) process for LLNL Hazard Category 2 and Category 3 nuclear facilities. The AB Section has discovered that the administrative requirements of AB procedure AB-006, 'Safety Basis Calculation Procedure for Category 2 and 3 Nuclear Facilities', have not been uniformly or consistently applied in the preparation of Safety Basis calculations for LLNL Hazard Category 2 and 3 Nuclear Facilities. The SEP Associate Director has directed the AB Section to initiate a formal management review of the issue that includes, but is not necessarily limited to, the following topics: (1) the basis establishing AB-006 as a required internal procedure for Safety Basis calculations; (2) how requirements for Safety Basis calculations flow down in the institutional DSA process; (3) the extent to which affected Laboratory organizations have explicitly complied with the requirements of Procedure AB-006; (4) what alternative approaches LLNL organizations have used for Safety Basis calculations and how these alternate approaches compare with Procedure AB-006 requirements; and (5) how to reconcile Safety Basis calculations that were performed before Procedure AB-006 came into existence (i.e., August 2001). The management review also includes an extent-of-condition evaluation to determine how widespread the discovered issue is throughout Laboratory organizations responsible for operating nuclear facilities, and to determine if implementation of AB procedures other than AB-006 has been similarly affected. In Reference 2, Corrective Action 1 was established whereby the SEP Directorate will develop a plan for performing a formal management review of the discovered condition, including an extent-of-condition evaluation.
In Reference 3, a plan was provided to prepare a formal management review, satisfying Corrective Action 1. An AB-006 Working Group was formed, led by the AB Section, with representatives from the Nuclear Materials Technology Program (NMTP), the Radioactive and Hazardous Waste Management (RHWM) Division, and the Packaging and Transportation Safety (PATS) Program. The key action of this management review was for Working Group members to conduct an assessment of all safety basis calculations referenced in their respective DSAs. Those assessments were tasked to provide the following information: (1) list which safety basis calculations correctly follow AB-006 and therefore require no additional documentation; and (2) identify and list which safety basis calculations do not strictly follow AB-006; these include NMTP Engineering Notes, Engineering Safety Notes, calculations by organizations external to the nuclear facilities (such as Plant Engineering), subcontractor calculations, and other internally generated calculations. Each of these will be reviewed and listed on a memorandum with the facility manager's (or designee's) signature accepting that calculation for use in the DSA. If any of these calculations are lacking the signature of a technical reviewer, they must also be reviewed for technical content and that review documented per AB-006.
Papalaskari, Mary-Angela
CSC 8301 Design and Analysis of Algorithms, Lecture 1: Algorithms: Overview. Next time: Principles of the analysis of algorithms (2.1, 2.2).
INDEX TO ALGORITHMS AND THEOREMS Algorithm 5.1.1C, 591{592.
Pratt, Vaughan
APPENDIX C. INDEX TO ALGORITHMS AND THEOREMS. Algorithm 5.1.1C, 591–592. Theorem 5.1.2A, 26. Theorem …, 54. Theorem 5.1.4C, 55. Algorithm 5.1.4D, 50. Theorem 5.1.4D, 57. Algorithm 5.1.4G, 69. Algorithm 5.1.4H, 612. Theorem 5.1.4H, 60. Algorithm 5.1.4I, 49–50. Algorithm 5.1.4P, 70. Algorithm 5.1.4Q, 614. Algorithm 5.1.4S
INDEX TO ALGORITHMS AND THEOREMS Algorithm 1.1E, 2, 4.
Pratt, Vaughan
APPENDIX C. INDEX TO ALGORITHMS AND THEOREMS. Algorithm 1.1E, 2, 4. Algorithm 1.1F, 466. Algorithm 1.2.1E, 13–14. Algorithm 1.2.1I, 11–12. Algorithm 1.2.2E, 470. Algorithm 1.2.2L, 26. Law 1.2.4A, 40. Law …, 81–82. Theorem 1.2.10A, 101. Algorithm 1.2.10M, 96. Theorem 1.2.11.3A, 119. Algorithm 1.3.2E, 160
D. Gulo; O. Alexejev
1999-03-12T23:59:59.000Z
Theoretical calculations of the electrical conductivity and electroosmotic transfer as functions of the disperse-phase volume fraction and non-dissolving boundary-layer thickness were carried out on the basis of the cell theory of electroosmosis for the limiting case of a large degree of electric double-layer overlapping in the interparticle space. The obtained results are in qualitative agreement with the experimental data and describe the main features of the latter.
A preliminary evaluation of a speed threshold incident detection algorithm
Kolb, Stephanie Lang
1996-01-01T23:59:59.000Z
and California algorithm #8, using fuzzy logic, to evaluate the new algorithm's effectiveness in detecting incidents on freeways. To test these algorithms, real data from TransGuide were run through them. Algorithm outputs were compared with CCTV (closed...
Group Non-negative Basis Pursuit for Automatic Music Transcription
Plumbley, Mark
Ken O'Hanlon, Mark D. Plumbley ({keno, Mark.Plumbley}@eecs.qmul.ac.uk), Centre for Digital Music, Queen Mary University of London. … for AMT (O'Hanlon et al.). Group sparsity considers that certain groups of atoms tend to be active
Market Split and Basis Reduction: Towards a Solution of the Cornuéjols-Dawande Instances
Utrecht, Universiteit
in the book by Williams [13]; there, the application was related to the oil market in the UK. … branch-and-bound. They offered these Cornuéjols-Dawande market split instances as a challenge to the integer programming community
Implementing Radial Basis Functions Using Bump-Resistor Networks
Harris, John G.
performance using this formulation [SI]. Anderson, Platt and Kirk previously demonstrated the use of follower]. An alternate strategy used by Anderson, Platt and Kirk [1]. J. Anderson, J. C. Platt, and D. Kirk. An analog VLSI chip for radial basis functions. In J. Hanson, J
NEAT-IGERT Proposal C. THEMATIC BASIS FOR GROUP EFFORT
Islam, M. Saif
NEAT-IGERT Proposal. C. THEMATIC BASIS FOR GROUP EFFORT. The last decade has seen immense progress … the research and teaching interests of fourteen investigators in seven different departments, ranging from … to the actual structure and management of the group. The Ph.D.s from this program will be well poised to embark
CRAD, Safety Basis- Idaho MF-628 Drum Treatment Facility
Broader source: Energy.gov [DOE]
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for a May 2007 readiness assessment of the Safety Basis at the Advanced Mixed Waste Treatment Project.
Financing industrial boiler projects on a non-recourse basis
Anderson, C.
1995-09-01T23:59:59.000Z
Techniques for the financing of industrial boiler projects on a non-recourse basis are outlined. The following topics are discussed: types of projects; why non-recourse (off-balance-sheet) financing; the downside; construction lenders' requirements; and term lender/subdebt requirements.
Canister Storage Building (CSB) Design Basis Accident Analysis Documentation
CROWE, R.D.
1999-09-09T23:59:59.000Z
This document provides the detailed accident analysis to support HNF-3553, ''Spent Nuclear Fuel Project Final Safety Analysis Report,'' Annex A, ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
Cold Vacuum Drying (CVD) Facility Design Basis Accident Analysis Documentation
PIEPHO, M.G.
1999-10-20T23:59:59.000Z
This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report, ''Cold Vacuum Drying Facility Final Safety Analysis Report (FSAR).'' All assumptions, parameters and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR.
Solar Power Tower Design Basis Document, Revision 0
ZAVOICO,ALEXIS B.
2001-07-01T23:59:59.000Z
This report contains the design basis for a generic molten-salt solar power tower. A solar power tower uses a field of tracking mirrors (heliostats) that redirect sunlight onto a centrally located receiver mounted on top of a tower, which absorbs the concentrated sunlight. Molten nitrate salt, pumped from a tank at ground level, absorbs the sunlight, heating it up to 565 °C. The heated salt flows back to ground level into another tank where it is stored, then is pumped through a steam generator to produce steam and make electricity. This report establishes a set of criteria upon which the next generation of solar power towers will be designed. The report contains detailed criteria for each of the major systems: Collector System, Receiver System, Thermal Storage System, Steam Generator System, Master Control System, and Electric Heat Tracing System. The Electric Power Generation System and Balance of Plant discussions are limited to interface requirements. This design basis builds on the extensive experience gained from the Solar Two project and includes potential design innovations that will improve reliability and lower technical risk. This design basis document is a living document and contains several areas that require trade studies and design analysis to fully complete the design basis. Project- and site-specific conditions and requirements will also resolve open To Be Determined issues.
Leaky LMS Algorithm: Convergence of tap-weight error modes dependent on
Santhanam, Balu
Leaky LMS Algorithm: convergence of tap-weight error modes dependent on … Stability and convergence time issues of concern for ill-conditioned inputs. Leaky LMS cost. Block LMS Algorithm: uses type-I polyphase components of the input u[n].
Braunstein, Samuel L.
2007-01-01T23:59:59.000Z
purposes to show an exponential speedup compared to classical approaches [11]. Hence quantum algorithms … IOP Publishing, Journal of Physics A: Mathematical and Theoretical, 40 (2007). Sungkyunkwan University, Republic of Korea; Indian Statistical Institute, Kolkata 700 108, India; Applied
Theoretical and numerical studies of chaotic mixing
Kim, Ho Jun
2008-10-10T23:59:59.000Z
defined using the same quadrature/collocation points [12]. SEM combines the geometrical flexibility of the finite element method (FEM) with the spectral convergence and low phase/dissipation error of the spectral method. For SEM it is assumed that the solution... For reasons of computational efficiency this basis is typically chosen to be orthogonal in a weighted inner product. Convergence to the exact solution is achieved by increasing the order of the elements or the number of elements. If the boundary condition and solution...
Direct photons ~basis for characterizing heavy ion collisions~
Takao Sakaguchi
2008-07-30T23:59:59.000Z
After years of experimental and theoretical efforts, direct photons have become a strong and reliable tool for establishing the basic characteristics of the hot and dense matter produced in heavy ion collisions. The recent direct photon measurements are reviewed and a future prospect is given.
Implications of Theoretical Ideas Regarding Cold Fusion
Afsar Abbas
1995-03-29T23:59:59.000Z
A number of theoretical ideas have been floated to explain the so-called cold fusion phenomenon. I look at a large subset of these and study further physical implications of the concepts involved. I suggest that these can be tested by other independent physical means. Because of their significance, experimentalists are urged to look for these signatures. The results in turn will be important for a better understanding, and hence control, of the cold fusion phenomenon.
Field-theoretical treatment of neutrino oscillations
Grimus, Walter; Stockinger, P
2000-01-01T23:59:59.000Z
We discuss the field-theoretical approach to neutrino oscillations. This approach includes the neutrino source and detector processes and allows one to obtain the neutrino transition or survival probabilities as cross sections derived from the Feynman diagram of the combined source–detection process. In this context, the neutrinos which are supposed to oscillate appear as propagators of the neutrino mass eigenfields, connecting the source and detection processes.
Field-theoretical treatment of neutrino oscillations
W. Grimus; S. Mohanty; P. Stockinger
1999-04-15T23:59:59.000Z
We discuss the field-theoretical approach to neutrino oscillations. This approach includes the neutrino source and detector processes and allows one to obtain the neutrino transition or survival probabilities as cross sections derived from the Feynman diagram of the combined source–detection process. In this context, the neutrinos which are supposed to oscillate appear as propagators of the neutrino mass eigenfields, connecting the source and detection processes.
Optimisation of Quantum Evolution Algorithms
Apoorva Patel
2015-03-04T23:59:59.000Z
Given a quantum Hamiltonian and its evolution time, the corresponding unitary evolution operator can be constructed in many different ways, corresponding to different trajectories between the desired end-points. A choice among these trajectories can then be made to obtain the best computational complexity and control over errors. As an explicit example, Grover's quantum search algorithm is described as a Hamiltonian evolution problem. It is shown that the computational complexity has a power-law dependence on error when a straightforward Lie-Trotter discretisation formula is used, and it becomes logarithmic in error when reflection operators are used. The exponential change in error control is striking, and can be used to improve many importance sampling methods. The key concept is to make the evolution steps as large as possible while obeying the constraints of the problem. In particular, we can understand why overrelaxation algorithms are superior to small step size algorithms.
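For context on the Grover search example discussed in this abstract, a small classical simulation of the standard discrete Grover iteration (sign-flip oracle followed by inversion about the mean) illustrates the amplitude amplification being optimised; the problem size and marked index below are arbitrary illustrative choices, not taken from the paper.

```python
import math

def grover_success_probability(n_items, marked, iterations):
    # Start in the uniform superposition over n_items basis states.
    amp = [1.0 / math.sqrt(n_items)] * n_items
    for _ in range(iterations):
        amp[marked] = -amp[marked]            # oracle: flip the marked sign
        mean = sum(amp) / n_items             # diffusion: inversion about
        amp = [2.0 * mean - a for a in amp]   #   the mean amplitude
    return amp[marked] ** 2

# About (pi/4) * sqrt(N) iterations maximise the success probability;
# for N = 16 that is 3 iterations.
p = grover_success_probability(16, 5, 3)
```

After three iterations on 16 items, the marked state carries almost all of the probability, versus 1/16 for classical random guessing.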
Five Quantum Algorithms Using Quipper
Safat Siddiqui; Mohammed Jahirul Islam; Omar Shehab
2014-06-18T23:59:59.000Z
Quipper is a recently released quantum programming language. In this report, we explore Quipper's programming framework by implementing Deutsch's, Deutsch-Jozsa's, Simon's, Grover's, and Shor's factoring algorithms. It will help new quantum programmers in an instructive manner. We chose Quipper especially for its usability and scalability, though it is an ongoing development project. We have also provided introductory concepts of Quipper and prerequisite backgrounds of the algorithms for readers' convenience. We have also written codes for the oracles (black boxes or functions) of the individual algorithms and tested some of them using the Quipper simulator to prove correctness and introduce readers to the functionality. As Quipper 0.5 does not include more than 4 × 4 matrix constructors for unitary operators, we have also implemented 8 × 8 and 16 × 16 matrix constructors.
Mandayam, Narayan
Information-Theoretically Secret Key Generation for Fading Wireless Channels. Chunxuan Ye, Suhas Mathur, Alex Reznik, Yogendra Shah … as the basis for building practical secret key generation protocols between two entities. We begin … boundaries and a heuristic log-likelihood ratio estimate to achieve an improved secret key generation rate
Goddard III, William A.
Mechanism of Atmospheric Photooxidation of Aromatics: A Theoretical Study. Jean M. Andino, James N. …, California 91125. Received: October 3, 1995; In Final Form: December 13, 1995. The mechanisms of atmospheric … using the 6-31G(d,p) basis set to study the intermediate structures. Full mechanisms for the OH
Quantum Chaos and Quantum Algorithms
Daniel Braun
2001-10-05T23:59:59.000Z
It was recently shown (quant-ph/9909074) that parasitic random interactions between the qubits in a quantum computer can induce quantum chaos and put into question the operability of a quantum computer. In this work I investigate whether the very interactions between the qubits that are introduced with the intention of operating the quantum computer may already lead to quantum chaos. The analysis focuses on two well-known quantum algorithms, namely Grover's search algorithm and the quantum Fourier transform. I show that in both cases the same very unusual combination of signatures from chaotic and from integrable dynamics arises.
The SU(3) Algebra in a Cyclic Basis
P. F. Harrison; R. Krishnan; W. G. Scott
2014-07-31T23:59:59.000Z
With the couplings between the eight gluons constrained by the structure constants of the su(3) algebra in QCD, one would expect that there should exist a special basis (or set of bases) for the algebra wherein, unlike in a Cartan-Weyl basis, all gluons interact identically (cyclically) with each other, explicitly on an equal footing. We report here particular such bases, which we have found in a computer search, and we indicate associated 3 × 3 representations. We conjecture that essentially all cyclic bases for su(3) may be obtained from these by making appropriate circulant transformations, and that cyclic bases may also exist for other su(n), n > 3.
Basis for NGNP Reactor Design Down-Selection
L.E. Demick
2010-08-01T23:59:59.000Z
The purpose of this paper is to identify the extent of technology development, design and licensing maturity anticipated to be required to credibly identify differences that could make a technical choice practical between the prismatic and pebble bed reactor designs. This paper does not address a business decision based on the economics, business model and resulting business case since these will vary based on the reactor application. The selection of the type of reactor, the module ratings, the number of modules, the configuration of the balance of plant and other design selections will be made on the basis of optimizing the Business Case for the application. These are not decisions that can be made on a generic basis.
Basis for NGNP Reactor Design Down-Selection
L.E. Demick
2011-11-01T23:59:59.000Z
The purpose of this paper is to identify the extent of technology development, design and licensing maturity anticipated to be required to credibly identify differences that could make a technical choice practical between the prismatic and pebble bed reactor designs. This paper does not address a business decision based on the economics, business model and resulting business case since these will vary based on the reactor application. The selection of the type of reactor, the module ratings, the number of modules, the configuration of the balance of plant and other design selections will be made on the basis of optimizing the Business Case for the application. These are not decisions that can be made on a generic basis.
MIXING OF INCOMPATIBLE MATERIALS IN WASTE TANKS TECHNICAL BASIS DOCUMENT
SANDGREN, K.R.
2003-10-15T23:59:59.000Z
This document presents onsite radiological, onsite toxicological, and offsite toxicological consequences, risk binning, and control decision results for the mixing of incompatible materials in waste tanks representative accident. This technical basis document was developed to support the tank farms documented safety analysis (DSA) and describes the risk binning process, the technical basis for assigning risk bins, and the controls selected for the mixing of incompatible materials representative accident and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and/or technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls.
Technical basis document for the evaporator dump accident
GOETZ, T.G.
2003-03-22T23:59:59.000Z
This technical basis document was developed to support the documented safety analysis (DSA) and describes the risk binning process and the technical basis for assigning risk bins for the evaporator dump representative accident and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and/or technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, ''Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses'', as described in this report.
Mixing of incompatible materials in waste tanks technical basis document
SANDGREN, K.R.
2003-03-21T23:59:59.000Z
This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA) and describes the risk binning process, the technical basis for assigning risk bins, and the controls selected for the mixing of incompatible materials representative accident and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSCs) and/or technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR level controls. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, ''Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses'', as described in this report.
Design-Load Basis for LANL Structures, Systems, and Components
I. Cuesta
2004-09-01T23:59:59.000Z
This document supports the recommendations in the Los Alamos National Laboratory (LANL) Engineering Standard Manual (ESM), Chapter 5, Structural, providing the basis for the loads, analysis procedures, and codes to be used in the ESM. It also provides the justification for eliminating certain loads from design consideration, and evidence that the design basis loads are appropriate and consistent with the graded approach required by the Department of Energy (DOE) nuclear safety management regulations in Title 10, Code of Federal Regulations, Part 830. This document focuses on (1) the primary and secondary natural phenomena hazards listed in DOE-G-420.1-2, Appendix C, (2) additional loads not related to natural phenomena hazards, and (3) the design loads on structures during construction.
R. Guerraoui 1 Distributed algorithms
Guerraoui, Rachid
Best-effort broadcast (beb). Events: Request: <bebBroadcast, m>; Indication: <bebDeliver, src, m>. Properties: BEB1, BEB2, BEB3. BEB1 (Validity): if pi and pj are correct, then every message broadcast by pi is eventually delivered by pj. BEB2 (No duplication): no message
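The beb events and properties listed above can be illustrated with a minimal in-process simulation. The `Process` and `Link` classes below are illustrative assumptions; the abstraction is conventionally built on top of perfect point-to-point links:

```python
# Sketch of best-effort broadcast (beb) over perfect point-to-point links.
# Class and process names are illustrative, not from the course material.

class Process:
    def __init__(self, name):
        self.name = name
        self.delivered = []          # (src, m) pairs, in delivery order

    def deliver(self, src, m):       # the bebDeliver indication
        self.delivered.append((src.name, m))

class Link:
    """Perfect point-to-point link: delivers each sent message exactly once."""
    def send(self, src, dst, m):
        dst.deliver(src, m)

class BestEffortBroadcast:
    """bebBroadcast(m): send m over the link to every process, incl. the sender.
    BEB1 (validity) and BEB2 (no duplication) follow from the link guarantees."""
    def __init__(self, processes):
        self.processes = processes
        self.link = Link()

    def broadcast(self, src, m):
        for p in self.processes:
            self.link.send(src, p, m)

ps = [Process("p1"), Process("p2"), Process("p3")]
beb = BestEffortBroadcast(ps)
beb.broadcast(ps[0], "hello")
```

Since the link never duplicates and never drops messages here, each process delivers the broadcast message exactly once, which is exactly what BEB1 and BEB2 require of correct processes.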
Adaptive protection algorithm and system
Hedrick, Paul (Pittsburgh, PA); Toms, Helen L. (Irwin, PA); Miller, Roger M. (Mars, PA)
2009-04-28T23:59:59.000Z
An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
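A rough sketch of the trace-rank-set-points idea described above. The tree layout, load figures, and the 1.25 coordination margin are all illustrative assumptions, not values from the patent:

```python
# Hypothetical sketch: rank breakers by position along the power flow,
# then derive trip set points from downstream load plus a margin.

def assign_ranks(tree, root):
    """Rank each breaker by its depth along the power-flow path from the source."""
    ranks, stack = {root: 0}, [root]
    while stack:
        node = stack.pop()
        for child in tree.get(node, []):
            ranks[child] = ranks[node] + 1
            stack.append(child)
    return ranks

def trip_set_points(tree, loads, margin=1.25):
    """Trip threshold = downstream load (incl. own) times a coordination margin."""
    def downstream(node):
        return loads[node] + sum(downstream(c) for c in tree.get(node, []))
    return {b: margin * downstream(b) for b in loads}

# Illustrative radial feeder: main breaker feeds two feeders, one with a branch.
tree = {"main": ["feeder1", "feeder2"], "feeder1": ["branch1"]}
loads = {"main": 0.0, "feeder1": 40.0, "feeder2": 60.0, "branch1": 20.0}
ranks = assign_ranks(tree, "main")
trips = trip_set_points(tree, loads)
```

The margin keeps upstream thresholds above the aggregate downstream load so that the breaker nearest a fault trips first, which is the coordination goal the abstract alludes to.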
Algorithmic Aspects of Risk Management
Gehani, Ashish
Algorithmic Aspects of Risk Management. Ashish Gehani1, Lee Zaniewski2, and K. Subramani2. 1 SRI International; 2 West Virginia University. Abstract: Risk analysis has been used to manage the security of systems ... configuration. This allows risk management to occur in real time and reduces the window of exposure to attack.
Machine Learning: Foundations and Algorithms
Ben-David, Shai
with accident prevention systems that are built using machine learning algorithms. Machine learning tools are concerned with endowing programs with the ability to "learn" ... if the learning process succeeded or failed? The second goal of this book is to present several key machine
Algorithmic + Geometric characterization of CAR
Gill, Richard D.
Algorithmic + Geometric characterization of CAR (Coarsening at Random). Richard Gill, Utrecht ... (but independent) CCAR. 3-door problem: X = door with car behind; Y = two doors still closed = {your first choice, other door left closed}
GEET DUGGAL Algorithms for Determining
Relationship to Gene Regulation. Final Public Oral Examination, Doctor of Philosophy. Recent genome sequencing ... Analyses from them have shown that the 3D structure of DNA may be closely linked to genome function ... structure of DNA and genome function on the scale of the whole genome. Specifically, we designed algorithms
Hierarchical Correctness Proofs Distributed Algorithms
Tuttle, Mark R.
distributed networks. With this model we are able to construct modular, hierarchical correctness proofs ... these messages and process variables can be extremely difficult, and the resulting proofs of correctness ... of the full algorithm's correctness. Some time ago, we began to consider this approach of proof by refinement
Algorithmic Thermodynamics John C. Baez
Tomkins, Andrew
Algorithmic Thermodynamics. John C. Baez, Department of Mathematics, University of California ... in statistical mechanics. This viewpoint allows us to apply many techniques developed for use in thermodynamics ... and chemical potential. We derive an analogue of the fundamental thermodynamic relation dE = T dS - P dV + µ dN.
The Neural Basis of Financial Risk-Taking* Supplementary Material
Knutson, Brian
The Neural Basis of Financial Risk-Taking: Supplementary Material. Camelia M. Kuhnen1 and Brian Knutson ... in each block, a rational risk-neutral agent should pick stock i if he/she expects to receive a dividend D ... where I_{t-1} is the information set up to trial t-1, i.e., I_{t-1} = {D_s^i : s <= t-1, i in {Stock T, Stock R, Bond C}}. Let x^i = Pr{Stock
Evolution of Safety Basis Documentation for the Fernald Site
Brown, T.; Kohler, S.; Fisk, P.; Krach, F.; Klein, B.
2004-03-01T23:59:59.000Z
The objective of the Department of Energy's (DOE) Fernald Closure Project (FCP), in suburban Cincinnati, Ohio, is to safely complete the environmental restoration of the Fernald site by 2006. Over 200 of the 220 total structures at this DOE plant site, which processed uranium ore concentrates into high-purity uranium metal products, have been safely demolished, including eight of the nine major production plants. Documented Safety Analyses (DSAs) for these facilities have gone through a process of simplification, from individual operating Safety Analysis Reports (SARs) to a single site-wide Authorization Basis containing nuclear facility Bases for Interim Operations (BIOs) to individual project Auditable Safety Records (ASRs). The final stage in DSA simplification consists of project-specific Integrated Health and Safety Plans (I-HASPs) and Nuclear Health and Safety Plans (N-HASPs) that address all aspects of safety, from the worker in the field to the safety basis requirements preserving the facility/activity hazard categorization. This paper addresses the evolution of Safety Basis Documentation (SBD), as DSAs, from production through site closure.
The double-beta decay: Theoretical challenges
Horoi, Mihai [Department of Physics, Central Michigan University, Mount Pleasant, Michigan, 48859 (United States)
2012-11-20T23:59:59.000Z
Neutrinoless double beta decay is a unique process that could reveal physics beyond the Standard Model of particle physics: namely, if observed, it would prove that neutrinos are Majorana particles. In addition, it could provide information regarding the neutrino masses and their hierarchy, provided that reliable nuclear matrix elements can be obtained. The two-neutrino double beta decay is an associated process that is allowed by the Standard Model, and it has been observed for about ten nuclei. The present contribution gives a brief review of the theoretical challenges associated with these two processes, emphasizing the reliable calculation of the associated nuclear matrix elements.
Theoretical aspects of relativistic spectral features
V. Karas
2006-09-23T23:59:59.000Z
The inner parts of black-hole accretion discs shine in X-rays, which can be monitored, and the observed spectra can be used to trace strong gravitational fields at the place of emission and along the paths of light rays. This paper summarizes several aspects of how the spectral features are influenced by relativistic effects. We focus our attention on variable and broad emission lines, the origin of which can be attributed to the presence of orbiting patterns, i.e., spots and spiral waves in the disc. We point out that the observed spectrum can determine parameters of the central black hole provided the intrinsic local emissivity is constrained by theoretical models.
Learning Motor Skills: From Algorithms to Robot
Learning Motor Skills: From Algorithms to Robot Experiments (Erlernen Motorischer Fähigkeiten: Von Algorithmen zu Roboter-Experimenten). Dissertation submitted for the academic degree of Doktor-Ingenieur (Dr.-Ing.).
The bidimensionality theory and its algorithmic applications
Hajiaghayi, MohammadTaghi
2005-01-01T23:59:59.000Z
Our newly developing theory of bidimensional graph problems provides general techniques for designing efficient fixed-parameter algorithms and approximation algorithms for NP-hard graph problems in broad classes of graphs. ...
Constant time algorithms in sparse graph model
Nguyen, Huy Ngoc, Ph. D. Massachusetts Institute of Technology
2010-01-01T23:59:59.000Z
We focus on constant-time algorithms for graph problems in the bounded-degree model. We introduce several techniques to design constant-time approximation algorithms for problems such as Vertex Cover, Maximum Matching, Maximum ...
A distributed K-mutual exclusion algorithm
Bulgannawar, Shailaja Gurupad
1994-01-01T23:59:59.000Z
This thesis presents a new token-based K-mutual exclusion algorithm for distributed systems. The proposed algorithm uses K tokens to achieve K-mutual exclusion. The system of N nodes is organized as a logical forest, with ...
On Learning Algorithms for Nash Equilibria
Daskalakis, Constantinos
Can learning algorithms find a Nash equilibrium? This is a natural question for several reasons. Learning algorithms resemble the behavior of players in many naturally arising games, and thus results on the convergence or ...
Algorithms for Constrained Route Planning in Road Networks
Rice, Michael Norris
2013-01-01T23:59:59.000Z
Table-of-contents excerpts: 2.2 Graph Search Algorithms; an Efficient Algorithm; 4.6.4 Restriction; An O(r)-Approximation Algorithm for GTSPP.
Algorithms for testing fault-tolerance of sequenced jobs
Chrobak, Marek; Hurand, Mathilde; Sgall, Ji?í
2009-01-01T23:59:59.000Z
5th European Symposium on Algorithms (ESA) (pp. 296-307). Real-time systems · Algorithms. 1 Introduction: Ghosh et al. ... fault-tolerance testing algorithm, under the restriction
Algorithms for tandem mass spectrometry-based proteomics
Frank, Ari Michael
2008-01-01T23:59:59.000Z
Table-of-contents excerpts: 4. MS-Clustering Algorithm; De Novo Sequencing Algorithm; The RankBoost Algorithm (Freund et al., 2003).
Approximation Algorithms for the Fault-Tolerant Facility Placement Problem
Yan, Li
2013-01-01T23:59:59.000Z
Table-of-contents excerpts: 5.2 Algorithm ECHS with Ratio; 5.3 Algorithm EBGS with Ratio; Formulation; 2.1.3 Approximation Algorithms; 2.1.4 Bifactor.
Theoretical Framework for Microscopic Osmotic Phenomena
P. J. Atzberger; P. R. Kramer
2009-10-29T23:59:59.000Z
The basic ingredients of osmotic pressure are a solvent fluid with a soluble molecular species which is restricted to a chamber by a boundary which is permeable to the solvent fluid but impermeable to the solute molecules. For macroscopic systems at equilibrium, the osmotic pressure is given by the classical van't Hoff Law, which states that the pressure is proportional to the product of the temperature and the difference of the solute concentrations inside and outside the chamber. For microscopic systems the diameter of the chamber may be comparable to the length-scale associated with the solute-wall interactions or solute molecular interactions. In each of these cases, the assumptions underlying the classical van't Hoff Law may no longer hold. In this paper we develop a general theoretical framework which captures corrections to the classical theory for the osmotic pressure under more general relationships between the size of the chamber and the interaction length scales. We also show that notions of osmotic pressure based on the hydrostatic pressure of the fluid and the mechanical pressure on the bounding walls of the chamber must be distinguished for microscopic systems. To demonstrate how the theoretical framework can be applied, numerical results are presented for the osmotic pressure associated with a polymer of N monomers confined in a spherical chamber as the bond strength is varied.
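The classical van't Hoff baseline that the paper generalizes can be written in a few lines; the 1 mM concentration difference and the 298 K temperature below are illustrative values, not from the paper:

```python
# Classical van't Hoff law as a macroscopic baseline:
#   Pi = kB * T * (c_in - c_out), with c a solute number density.

KB = 1.380649e-23    # Boltzmann constant, J/K
N_A = 6.02214076e23  # Avogadro constant, 1/mol

def vant_hoff_pressure(c_in, c_out, temperature):
    """Osmotic pressure (Pa) for number densities in 1/m^3 and temperature in K."""
    return KB * temperature * (c_in - c_out)

# Illustrative case: a 1 mM solute difference at room temperature.
dc = 1e-3 * N_A * 1e3   # mol/L -> particles/m^3
pi = vant_hoff_pressure(dc, 0.0, 298.0)
```

This evaluates to roughly 2.5 kPa, the scale against which the paper's microscopic corrections (chamber size comparable to interaction length scales) are measured.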
Experimental and theoretical investigation of three-dimensional...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
theoretical investigation of three-dimensional nitrogen-doped aluminum clusters AI8N- and AI8N. Experimental and theoretical investigation of three-dimensional nitrogen-doped...
Experimental and theoretical study of horizontal-axis wind turbines
Anderson, Michael Broughton
1981-10-20T23:59:59.000Z
An experimental and theoretical study of horizontal-axis wind turbines is undertaken. The theoretical analyses cover the four major areas of aerodynamics, turbulence, aeroelasticity and blade optimisation. Existing aerodynamic theories based...
The SIGACT Theoretical Computer Science Genealogy: Preliminary Report
Parberry, Ian
The SIGACT Theoretical Computer Science Genealogy: Preliminary Report. Ian Parberry, Department of Computer Science. The SIGACT Theoretical Computer Science Genealogy lists information on earned doctoral degrees (thesis adviser, university
Theoretical Predictions of Freestanding Honeycomb Sheets of Cadmium Chalcogenides
Zhou, Jia [ORNL] [ORNL; Huang, Jingsong [ORNL] [ORNL; Sumpter, Bobby G [ORNL] [ORNL; Kent, Paul R [ORNL] [ORNL; Xie, Yu [ORNL] [ORNL; Terrones Maldonado, Humberto [ORNL] [ORNL; Smith, Sean C [ORNL] [ORNL
2014-01-01T23:59:59.000Z
Two-dimensional (2D) nanocrystals of CdX (X = S, Se, Te) typically grown by colloidal synthesis are coated with organic ligands. Recent experimental work on ZnSe showed that the organic ligands can be removed at elevated temperature, giving a freestanding 2D sheet of ZnSe. In this theoretical work, freestanding single- to few-layer sheets of CdX, each possessing a pseudo honeycomb lattice, are considered by cutting along all possible lattice planes of the bulk zinc blende (ZB) and wurtzite (WZ) phases. Using density functional theory, we have systematically studied their geometric structures, energetics, and electronic properties. A strong surface distortion is found to occur for all of the layered sheets, and yet all of the pseudo honeycomb lattices are preserved, giving unique types of surface corrugations and different electronic properties. The energetics, in combination with phonon mode calculations and molecular dynamics simulations, indicate that the syntheses of these freestanding 2D sheets could be selective, with the single- to few-layer WZ110, WZ100, and ZB110 sheets being favored. Through the GW approximation, it is found that all single-layer sheets have large band gaps falling into the ultraviolet range, while thicker sheets in general have reduced band gaps in the visible and ultraviolet range. On the basis of the present work and the experimental studies on freestanding double-layer sheets of ZnSe, we envision that the freestanding 2D layered sheets of CdX predicted herein are potential synthesis targets, which may offer tunable band gaps depending on their structural features including surface corrugations, stacking motifs, and number of layers.
Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"
Petzold, Linda R.
2012-10-25T23:59:59.000Z
Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA; Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) development of high-performance SSA algorithms.
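A minimal sketch of the SSA direct method for a single irreversible reaction A -> B; the rate constant, molecule count and end time are illustrative, and production tau-leaping codes are far more elaborate:

```python
# Gillespie SSA (direct method) for one reaction channel, A -> B with rate k.
# With one channel, the next-reaction time is exponential with rate k * n_A.
import random

def ssa_decay(n_a, k, t_end, seed=0):
    """Simulate A -> B events until t_end; return the remaining count of A."""
    rng = random.Random(seed)
    t = 0.0
    while n_a > 0:
        propensity = k * n_a
        tau = rng.expovariate(propensity)  # waiting time to the next event
        if t + tau > t_end:
            break                           # no further event before t_end
        t += tau
        n_a -= 1                            # one A molecule converts to B
    return n_a

final = ssa_decay(n_a=1000, k=1.0, t_end=1.0, seed=42)
```

For these parameters the mean of the exact process is 1000 * exp(-1), about 368 molecules, so a single run should land in that neighborhood; the per-event loop is precisely why the SSA becomes expensive for large populations.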
Optimization Online - Efficient parallel coordinate descent algorithm ...
Ion Necoara
2012-11-02T23:59:59.000Z
Nov 2, 2012 ... Efficient parallel coordinate descent algorithm for convex optimization problems with separable constraints: application to distributed MPC.
Optimization Online - Efficient Algorithmic Techniques for Several ...
Mugurel Ionut Andreica
2008-10-23T23:59:59.000Z
Oct 23, 2008 ... Efficient Algorithmic Techniques for Several Multidimensional Geometric Data Management and Analysis Problems. Mugurel Ionut ...
An algorithm for minimization of quantum cost
Anindita Banerjee; Anirban Pathak
2010-04-09T23:59:59.000Z
A new algorithm for minimization of the quantum cost of quantum circuits has been designed. The quantum costs of different quantum circuits of particular interest (e.g., circuits for EPR, quantum teleportation, the Shor code and different quantum arithmetic operations) are computed by using the proposed algorithm. The quantum costs obtained using the proposed algorithm are compared with existing results, and it is found that the algorithm produces the minimum quantum cost in all cases.
An implicit numerical algorithm for general relativistic hydrodynamics
A. Hujeirat
2008-01-09T23:59:59.000Z
An implicit numerical algorithm for general relativistic hydrodynamics. This article has been replaced by arXiv:0801.1017.
Generalized URV Subspace Tracking LMS Algorithm 1
Boley, Daniel
S. Hosur, A. H. Tewfik and D. Boley, Dept. ... The convergence rate of the Least Mean Squares (LMS) algorithm is poor whenever the adaptive filter input auto-correlation matrix is ill-conditioned. In this paper we propose a new LMS algorithm to alleviate this problem.
Total Algorithms \\Lambda Gerard Tel y
Utrecht, Universiteit
Total Algorithms. Gerard Tel, Department of Computer Science, University of Utrecht, P ... February 1993. Abstract: We define the notion of total algorithms for networks of processes. A total algorithm enforces that a "decision" is taken by a subset of the processes, and that participation of all
Distributed QR Factorization Based on Randomized Algorithms
Zemen, Thomas
Hana Straková1, Wilfried N. Gansterer1 (Hana.Strakova@univie.ac.at, Wilfried.Gansterer@univie.ac.at); 2 Forschungszentrum Telekommunikation Wien, Austria (Thomas.Zemen@ftw.at). Abstract: Most parallel algorithms for matrix computations assume
Finding Algorithms in Scientific Articles Sumit Bhatia
Giles, C. Lee
Finding Algorithms in Scientific Articles. Sumit Bhatia, Prasenjit Mitra and C. Lee Giles. ABSTRACT: Algorithms are an integral part of computer science literature. However, none of the current search engines offer a specialized algorithm search facility. We describe a vertical search engine
Algorithms in pure mathematics G. Stroth
Stroth, Gernot
Algorithms in pure mathematics. G. Stroth. 1 Introduction. In this article, we will discuss algorithmic group theory from the point of view of pure mathematics, and where one might be surprised that there is no algorithmic solution. The two most developed areas
Expander Graph Arguments for Message Passing Algorithms
Burshtein, David
Expander Graph Arguments for Message Passing Algorithms. David Burshtein and Gadi Miller, Dept. ... Expander graph arguments may be used to prove that message passing algorithms can correct a linear number of erroneous ... once a message passing algorithm has corrected a sufficiently large fraction of the errors, it will eventually
A DISTRIBUTED POWER CONTROL ALGORITHM FOR
Mitra, Debasis
A DISTRIBUTED POWER CONTROL ALGORITHM FOR BURSTY TRANSMISSIONS ON CELLULAR, SPREAD SPECTRUM ... USA. ABSTRACT: We propose a distributed algorithm for power control in cellular, wideband networks ... although its parameters are different from data.
Strong coupling lattice QCD at finite temperature
Aarts, Gert
Strong coupling lattice QCD at finite temperature. Ph. de Forcrand, Trento, March 2009. (Slide headings: Intro, Algorithm, Results, Conclusions, Motivation.)
Equivalence of Learning Algorithms Julien Audiffren1
Equivalence of Learning Algorithms. Julien Audiffren1 and Hachem Kadri2. 1 CMLA, ENS Cachan. The goal of this paper is to introduce a concept of equivalence between machine learning algorithms. We define two notions of algorithmic equivalence, namely, weak and strong equivalence. These notions are of paramount importance
Voronoi Particle Merging Algorithm for PIC Codes
Luu, Phuc T; Pukhov, A
2015-01-01T23:59:59.000Z
We present a new particle-merging algorithm for the particle-in-cell method. Based on the concept of the Voronoi diagram, the algorithm partitions the phase space into smaller subsets, which consist only of particles that are in close proximity to each other in the phase space. We show the performance of our algorithm in the case of a magnetic shower.
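A simplified 1D-phase-space sketch of the merging idea: particles are grouped by nearest seed point (a Voronoi cell assignment), and each group is replaced by one macro-particle that conserves total weight and weighted-mean coordinates. The hard-coded seed points and the 1D phase space are simplifying assumptions, not the paper's actual partitioning scheme:

```python
# Illustrative Voronoi-cell particle merging in a (x, p) phase space.

def merge_particles(particles, seeds):
    """particles: list of (x, p, w) tuples; seeds: list of (x, p) points."""
    cells = {i: [] for i in range(len(seeds))}
    for x, p, w in particles:
        # Nearest seed in Euclidean phase-space distance -> Voronoi cell index.
        i = min(range(len(seeds)),
                key=lambda j: (x - seeds[j][0]) ** 2 + (p - seeds[j][1]) ** 2)
        cells[i].append((x, p, w))
    merged = []
    for group in cells.values():
        if not group:
            continue
        w_tot = sum(w for _, _, w in group)
        x_bar = sum(x * w for x, _, w in group) / w_tot
        p_bar = sum(p * w for _, p, w in group) / w_tot  # conserves total momentum
        merged.append((x_bar, p_bar, w_tot))
    return merged

particles = [(0.0, 1.0, 1.0), (0.1, 1.1, 1.0), (5.0, -1.0, 2.0)]
seeds = [(0.0, 1.0), (5.0, -1.0)]
merged = merge_particles(particles, seeds)
```

Because each macro-particle carries the weighted mean momentum and the summed weight of its cell, total weight (charge) and total momentum are preserved exactly; distant particles land in different cells and are never merged together.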
The Observer Algorithm for Visibility Approximation
Doherty, Patrick
, with different view ranges and grid cell sizes. By changing the size of the grid cells, the algorithm ... one or more sentries while moving to a goal position. Algorithms for finding covert paths in the presence of stationary and moving sentries have been devised by [5] [6]. An approximate visibility algorithm was devised
Study of Proposed Internet Congestion Control Algorithms*
Study of Proposed Internet Congestion Control Algorithms. Kevin L. Mills, NIST (joint work with D ... ). Innovations in Measurement Science. More information @ http ... Outline: Technical
Partitioned algorithms for maximum likelihood and
Smyth, Gordon K.
Gordon K. Smyth. There are a variety of methods in the literature which seek to make iterative estimation algorithms more manageable by breaking the iterations into a greater number of simpler or faster steps. Those algorithms which deal
Algorithms and Theory of Computation Handbook, Second
Algorithms and Theory of Computation Handbook, Second Edition. CRC Press, Boca Raton, Ann Arbor, London. Chapter: Parameterized Algorithms, Rodney G. Downey and Catherine McCartin, School of Mathematical and Computing Sciences. Contents excerpts: 1.2 The Main Idea; 1.3 Practical FPT Algorithms
Minimum-Flip Supertrees: Complexity and Algorithms
Sanderson, Michael J.
Minimum-Flip Supertrees: Complexity and Algorithms. Duhong Chen, Oliver Eulenstein, David Ferna ... that it is fixed-parameter tractable, and give approximation algorithms for special cases. Index Terms ... assembled from all species in the study. Because the conventional algorithms to solve these problems
Tuning bandit algorithms in stochastic environments
Szepesvari, Csaba
Tuning bandit algorithms in stochastic environments. Jean-Yves Audibert1, Rémi Munos2 and Csaba Szepesvári (@cs.ualberta.ca). Abstract: Algorithms based on upper-confidence bounds for balancing exploration and exploitation ... a variant of the basic algorithm for the stochastic, multi-armed bandit problem that takes into account
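For reference, the basic upper-confidence-bound index policy (UCB1) that such variants refine: pull the arm maximizing empirical mean plus sqrt(2 ln t / n). The Bernoulli arms and horizon below are illustrative, and the variance-aware refinement the paper studies is deliberately omitted from this sketch:

```python
# UCB1 on Bernoulli arms: optimism in the face of uncertainty.
import math
import random

def ucb1(arm_means, horizon, seed=0):
    """Run UCB1 for `horizon` rounds; return how often each arm was pulled."""
    rng = random.Random(seed)
    k = len(arm_means)
    counts, sums = [0] * k, [0.0] * k
    for t in range(1, horizon + 1):
        if t <= k:
            arm = t - 1                     # play each arm once to initialize
        else:
            arm = max(range(k), key=lambda i:
                      sums[i] / counts[i]
                      + math.sqrt(2.0 * math.log(t) / counts[i]))
        reward = 1.0 if rng.random() < arm_means[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
    return counts

counts = ucb1([0.2, 0.8], horizon=2000, seed=1)
```

With a gap of 0.6 between the arms, the suboptimal arm is pulled only O(log T) times, so almost all of the 2000 pulls should go to the better arm.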
A Genetic Algorithm Approach for Technology Characterization
Galvan, Edgar
2012-10-19T23:59:59.000Z
A Genetic Algorithm Approach for Technology Characterization. A Thesis by Edgar Galvan, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 2012. Major Subject: Mechanical Engineering.
Algorithmic proof of Barnette's Conjecture
I. Cahit
2009-04-22T23:59:59.000Z
In this paper we give an algorithmic proof of the long-standing Barnette conjecture (1969) that every 3-connected bipartite cubic planar graph is Hamiltonian. Our method is quite different from the known approaches; it relies on the operation of opening disjoint chambers by using spiral-chain-like movement of the outer-cycle elastic-sticky edges of the cubic planar graph. In fact we show that, for the Hamiltonicity of a Barnette graph, a single chamber or a double chamber with a bridge face is enough to transform the problem into finding a specific Hamiltonian path in the reduced cubic bipartite graph. In the last part of the paper we demonstrate that, if the given cubic planar graph is non-Hamiltonian, then the algorithm which constructs the spiral-chain (or double-spiral-chain) chamber shows that there exists an (n-1)-vertex cycle missing exactly one vertex.
RELEASE OF DRIED RADIOACTIVE WASTE MATERIALS TECHNICAL BASIS DOCUMENT
KOZLOWSKI, S.D.
2007-05-30T23:59:59.000Z
This technical basis document was developed to support RPP-23429, Preliminary Documented Safety Analysis for the Demonstration Bulk Vitrification System (PDSA), and RPP-23479, Preliminary Documented Safety Analysis for the Contact-Handled Transuranic Mixed (CH-TRUM) Waste Facility. The main document describes the risk binning process and the technical basis for assigning risk bins to the representative accidents involving the release of dried radioactive waste materials from the Demonstration Bulk Vitrification System (DBVS) and to the associated represented hazardous conditions. Appendices D through F provide the technical basis for assigning risk bins to the representative dried waste release accident and associated represented hazardous conditions for the Contact-Handled Transuranic Mixed (CH-TRUM) Waste Packaging Unit (WPU). The risk binning process uses an evaluation of the frequency and consequence of a given representative accident or represented hazardous condition to determine the need for safety structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls. A representative accident or a represented hazardous condition is assigned to a risk bin based on the potential radiological and toxicological consequences to the public and the collocated worker. Note that the risk binning process is not applied to facility workers because credible hazardous conditions with the potential for significant facility worker consequences are considered for safety-significant SSCs and/or TSR-level controls regardless of their estimated frequency. The controls for protection of the facility workers are described in RPP-23429 and RPP-23479. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses, as described below.
Santhanam, Balu
LMS Algorithm: Motivation. Only a single realization of observations; delay in tap-weight adjustment; simplicity: real-time applications possible. LMS Algorithm. Use instantaneous estimates for statistics: filter output; estimation error; tap-weight update.
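The three quantities named above (filter output, estimation error, tap-weight update) make up the whole LMS loop. A minimal system-identification sketch, where the two-tap target filter and the step size mu are illustrative assumptions:

```python
# LMS adaptive filter identifying an unknown FIR system from noise-free data.
import random

def lms_identify(target, mu, n_iter, seed=0):
    """Adapt tap weights w toward `target` using the LMS recursion."""
    rng = random.Random(seed)
    w = [0.0] * len(target)               # adaptive tap weights
    x_hist = [0.0] * len(target)          # input regressor (most recent first)
    for _ in range(n_iter):
        x_hist = [rng.gauss(0.0, 1.0)] + x_hist[:-1]
        d = sum(t * x for t, x in zip(target, x_hist))      # desired response
        y = sum(wi * x for wi, x in zip(w, x_hist))         # filter output
        e = d - y                                           # estimation error
        w = [wi + mu * e * x for wi, x in zip(w, x_hist)]   # tap-weight update
    return w

w = lms_identify(target=[0.5, -0.3], mu=0.05, n_iter=5000, seed=7)
```

With white Gaussian input the input auto-correlation matrix is well conditioned, so the weights converge close to the target taps; the ill-conditioned case is exactly where plain LMS slows down, motivating the variants surveyed elsewhere in this listing.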
Logo-like Learning of Basic Concepts of Algorithms -Having Fun with Algorithms
Logo-like Learning of Basic Concepts of Algorithms: Having Fun with Algorithms. Gerald Futschek. ... are not primarily interested in programming; the way of learning is highly influenced by the Logo style of learning to design efficient algorithms. Keywords: Logo-like learning, algorithms, group learning.
Improved algorithms for reaction path following: Higher-order implicit algorithms
Schlegel, H. Bernhard
Carlos Gonzalez (received 13 May 1991; accepted 17 June 1991). Eight new algorithms for reaction path following are presented ... or if accurate properties such as curvature and frequencies are needed. Numerous algorithms exist for following
Technical Planning Basis - DOE Directives, Delegations, and Requirements
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Updated Costs (June 2011 Basis) for Selected Bituminous Baseline Cases
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Interim safety basis for fuel supply shutdown facility
Brehm, J.R.; Deobald, T.L.; Benecke, M.W.; Remaize, J.A.
1995-05-23T23:59:59.000Z
This ISB, in conjunction with the new TSRs, will provide the required basis for interim operation, or restrictions on interim operations, and administrative controls for the Facility until a SAR is prepared in accordance with the new requirements. It is concluded that the risks associated with the current operational mode of the Facility (uranium closure, cleanup, and transition activities required for permanent closure) are within the Risk Acceptance Guidelines. The Facility is classified as a Moderate Hazard Facility because of the potential for an unmitigated fire associated with the uranium storage buildings.
Information Theoretic Approach to Social Networks
Kafri, Oded
2014-01-01T23:59:59.000Z
We propose an information theoretic model for sociological networks. The model is a microcanonical ensemble of states and particles. The states are the possible pairs of nodes (i.e., people, sites and the like) which exchange information. The particles are the energetic information bits. By analogy with a boson gas, we define for this network model: entropy, volume, pressure and temperature. We show that these definitions are consistent with Carnot efficiency (the second law) and the ideal gas law. Therefore, if we have two large networks, hot and cold, with temperatures TH and TC, and we remove Q energetic bits from the hot network to the cold network, we can gain W profit bits. The profit satisfies the Carnot formula, W equal to or smaller than Q(1 - TC/TH). In addition, it is shown that when two of these networks are merged the entropy increases. This explains the tendency of economic and social networks to merge.
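The profit bound invoked here is the standard Carnot efficiency limit, W <= Q(1 - TC/TH) for TH > TC. A direct check with illustrative values (temperatures in arbitrary units):

```python
# Carnot bound on "profit bits" extracted while moving Q energetic bits
# from a hot network at t_hot to a cold network at t_cold.

def carnot_profit_bound(q_bits, t_hot, t_cold):
    """Maximum profit W for heat Q moved from t_hot to t_cold (t_hot > t_cold)."""
    if t_hot <= t_cold:
        raise ValueError("need t_hot > t_cold")
    return q_bits * (1.0 - t_cold / t_hot)

w_max = carnot_profit_bound(q_bits=100.0, t_hot=400.0, t_cold=300.0)
```

With Q = 100 bits moved between temperatures 400 and 300, the bound is 25 bits; note the efficiency factor vanishes as the two temperatures approach each other, as the second law requires.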
Game Theoretic Methods for the Smart Grid
Saad, Walid; Poor, H. Vincent; Başar, Tamer
2012-01-01T23:59:59.000Z
The future smart grid is envisioned as a large-scale cyber-physical system encompassing advanced power, communications, control, and computing technologies. In order to accommodate these technologies, it will have to build on solid mathematical tools that can ensure an efficient and robust operation of such heterogeneous and large-scale cyber-physical systems. In this context, this paper is an overview on the potential of applying game theory for addressing relevant and timely open problems in three emerging areas that pertain to the smart grid: micro-grid systems, demand-side management, and communications. In each area, the state-of-the-art contributions are gathered and a systematic treatment, using game theory, of some of the most relevant problems for future power systems is provided. Future opportunities for adopting game theoretic methodologies in the transition from legacy systems toward smart and intelligent grids are also discussed. In a nutshell, this article provides a comprehensive account of the...
Testing Algorithms for Finite Temperature Lattice QCD
M. Cheng; M. A. Clark; C. Jung; R. D. Mawhinney
2006-08-23T23:59:59.000Z
We discuss recent algorithmic improvements in simulating finite temperature QCD on a lattice. In particular, the Rational Hybrid Monte Carlo (RHMC) algorithm is employed to generate lattice configurations for 2+1 flavor QCD. Unlike the Hybrid R algorithm, RHMC is reversible, admitting a Metropolis accept/reject step that eliminates the $\mathcal{O}(\delta t^2)$ errors inherent in the R algorithm. We also employ several algorithmic speed-ups, including multiple time scales, the use of a more efficient numerical integrator, and Hasenbusch preconditioning of the fermion force.
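The accept/reject step mentioned here is the standard Metropolis test; a generic sketch of such a step (illustrative only, not the RHMC implementation; names are my own):

```python
import math
import random

def metropolis_accept(delta_h, rng):
    """Accept a proposed configuration with probability min(1, exp(-dH)).
    Combined with a reversible integrator, this step removes the
    finite-step-size bias of unconditional updates."""
    if delta_h <= 0.0:
        return True  # energy did not increase: always accept
    return rng.random() < math.exp(-delta_h)

# Toy usage: empirical acceptance rate for a fixed energy change dH = 0.5
rng = random.Random(0)
rate = sum(metropolis_accept(0.5, rng) for _ in range(10000)) / 10000
print(rate)  # should be near exp(-0.5) ~ 0.61
```

The key point in the abstract is that the R algorithm's irreversibility forbids exactly this test, leaving residual step-size errors that RHMC eliminates.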
Office of Scientific and Technical Information (OSTI)
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2005-02-25T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee which is chartered and chaired by DOE-RL and serves as a means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. Rev. 0 marks the first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database.
AN APPROACH TO SAFETY DESIGN BASIS DOCUMENTATION CHANGE CONTROL
RYAN GW
2008-05-15T23:59:59.000Z
This paper describes a safety design basis documentation change control process. The process identifies elements that can be used to manage the project/facility configuration during design evolution through the Initiation, Definition, and Execution project phases. The project phases addressed by the process are defined in US Department of Energy (DOE) Order (O) 413.3A, Program and Project Management for the Acquisition of Capital Assets, in support of DOE project Critical Decisions (CD). This approach has been developed for application to two Hanford Site projects in their early CD phases and is considered to be a key element of safety and design integration. As described in the work that has been performed, the purpose of change control is to maintain consistency among design requirements, the physical configuration, related facility documentation, and the nuclear safety basis during the evolution of the design. The process developed (1) ensures an appropriate level of rigor is applied at each project phase and (2) is considered to implement the requirements and guidance provided in DOE-STD-1189-2008, Integration of Safety into the Design Process. Presentation of this work is expected to benefit others in the DOE Complex that may be implementing DOE-STD-1189-2008 or managing nuclear safety documentation in support of projects in-process.
A New Basis of Geoscience: Whole-Earth Decompression Dynamics
J. Marvin Herndon
2013-07-04T23:59:59.000Z
Neither plate tectonics nor Earth expansion theory is sufficient to provide a basis for understanding geoscience. Each theory is incomplete and possesses problematic elements, but both have served as stepping stones to a more fundamental and inclusive geoscience theory that I call Whole-Earth Decompression Dynamics (WEDD). WEDD begins with and is the consequence of our planet's early formation as a Jupiter-like gas giant and permits deduction of:(1) Earth's internal composition, structure, and highly-reduced oxidation state; (2) Core formation without whole-planet melting; (3) Powerful new internal energy sources - proto-planetary energy of compression and georeactor nuclear fission energy; (4) Georeactor geomagnetic field generation; (5) Mechanism for heat emplacement at the base of the crust resulting in the crustal geothermal gradient; (6) Decompression driven geodynamics that accounts for the myriad of observations attributed to plate tectonics without requiring physically-impossible mantle convection, and; (7) A mechanism for fold-mountain formation that does not necessarily require plate collision. The latter obviates the necessity to assume supercontinent cycles. Here, I review the principles of Whole-Earth Decompression Dynamics and describe a new underlying basis for geoscience and geology.
An efficient basis set representation for calculating electrons in molecules
Jeremiah R. Jones; Francois-Henry Rouet; Keith V. Lawler; Eugene Vecharynski; Khaled Z. Ibrahim; Samuel Williams; Brant Abeln; Chao Yang; Daniel J. Haxton; C. William McCurdy; Xiaoye S. Li; Thomas N. Rescigno
2015-07-13T23:59:59.000Z
The method of McCurdy, Baertschy, and Rescigno, J. Phys. B, 37, R137 (2004) is generalized to obtain a straightforward, surprisingly accurate, and scalable numerical representation for calculating the electronic wave functions of molecules. It uses a basis set of product sinc functions arrayed on a Cartesian grid, and yields 1 kcal/mol precision for valence transition energies with a grid resolution of approximately 0.1 bohr. The Coulomb matrix elements are replaced with matrix elements obtained from the kinetic energy operator. A resolution-of-the-identity approximation renders the primitive one- and two-electron matrix elements diagonal; in other words, the Coulomb operator is local with respect to the grid indices. The calculation of contracted two-electron matrix elements among orbitals requires only O(N log(N)) multiplication operations, not O(N^4), where N is the number of basis functions; N = n^3 on cubic grids. The representation not only is numerically expedient, but also produces energies and properties superior to those calculated variationally. Absolute energies, absorption cross sections, transition energies, and ionization potentials are reported for one- (He^+, H_2^+ ), two- (H_2, He), ten- (CH_4) and 56-electron (C_8H_8) systems.
Distributed Approaches for Determination of Reconfiguration Algorithm Termination
Lai, Hong-jian
Distributed Approaches for Determination of Reconfiguration Algorithm Termination (Pinak Tulpule et al.). An architecture was used as a globally shared memory structure for detection of algorithm termination. Keywords: autonomous agent-based reconfiguration, distributed algorithms, shipboard
on the complexity of some hierarchical structured matrix algorithms
2012-05-17T23:59:59.000Z
matrix algorithms, in terms of hierarchically semiseparable (HSS) matrices. ... We perform detailed complexity analysis for some typical HSS algorithms, with.
Karlický, František; Otyepka, Michal; 10.1063/1.4736998
2012-01-01T23:59:59.000Z
DFT calculations of the electronic structure of graphane and stoichiometrically halogenated graphene derivatives (fluorographene and other analogous graphene halides) show (i) localized orbital basis sets can be successfully and effectively used for such 2D materials; (ii) several functionals predict that the band gap of graphane is greater than that of fluorographene, whereas HSE06 gives the opposite trend; (iii) the HSE06 functional predicts quite good values of band gaps with respect to benchmark theoretical and experimental data; (iv) the zero band gap of graphene is opened by hydrogenation and halogenation and strongly depends on the chemical composition of mixed graphene halides; (v) the stability of graphene halides decreases sharply with increasing size of the halogen atom - fluorographene is stable, whereas graphene iodide spontaneously decomposes. In terms of band gap and stability, the C2FBr and C2HBr derivatives seem to be promising materials, e.g., for (opto)electronics applications, because their band gaps a...
The Gaussian Radial Basis Function Method for Plasma Kinetic Theory
Hirvijoki, Eero; Belli, Emily; Embréus, Ola
2015-01-01T23:59:59.000Z
A fundamental macroscopic description of a magnetized plasma is the Vlasov equation supplemented by the nonlinear inverse-square force Fokker-Planck collision operator [Rosenbluth et al., Phys. Rev., 107, 1957]. The Vlasov part describes advection in a six-dimensional phase space whereas the collision operator involves friction and diffusion coefficients that are weighted velocity-space integrals of the particle distribution function. The Fokker-Planck collision operator is an integro-differential, bilinear operator, and numerical discretization of the operator is far from trivial. In this letter, we describe a new approach to discretize the entire kinetic system based on an expansion in Gaussian Radial Basis functions (RBFs). This approach is particularly well-suited to treat the collision operator because the friction and diffusion coefficients can be analytically calculated. Although the RBF method is known to be a powerful scheme for the interpolation of scattered multidimensional data, Gaussian RBFs also...
Electronic structure basis for the titanic magnetoresistance in WTe₂
Pletikosic, I. [Princeton Univ., NJ (United States); Brookhaven National Lab. (BNL), Upton, NY (United States); Ali, Mazhar N. [Princeton Univ., NJ (United States); Fedorov, A. V. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Cava, R. J. [Princeton Univ., NJ (United States); Valla, T. [Brookhaven National Lab. (BNL), Upton, NY (United States)
2014-11-01T23:59:59.000Z
The electronic structure basis of the extremely large magnetoresistance in layered non-magnetic tungsten ditelluride has been investigated by angle-resolved photoelectron spectroscopy. Hole and electron pockets of approximately the same size were found at the Fermi level, suggesting that carrier compensation should be considered the primary source of the effect. The material exhibits a highly anisotropic, quasi one-dimensional Fermi surface from which the pronounced anisotropy of the magnetoresistance follows. A change in the Fermi surface with temperature was found and a high-density-of-states band that may take over conduction at higher temperatures and cause the observed turn-on behavior of the magnetoresistance in WTe₂ was identified.
A New Basis of Geoscience: Whole-Earth Decompression Dynamics
Herndon, J Marvin
2013-01-01T23:59:59.000Z
Neither plate tectonics nor Earth expansion theory is sufficient to provide a basis for understanding geoscience. Each theory is incomplete and possesses problematic elements, but both have served as stepping stones to a more fundamental and inclusive geoscience theory that I call Whole-Earth Decompression Dynamics (WEDD). WEDD begins with and is the consequence of our planet's early formation as a Jupiter-like gas giant and permits deduction of:(1) Earth's internal composition, structure, and highly-reduced oxidation state; (2) Core formation without whole-planet melting; (3) Powerful new internal energy sources - proto-planetary energy of compression and georeactor nuclear fission energy; (4) Georeactor geomagnetic field generation; (5) Mechanism for heat emplacement at the base of the crust resulting in the crustal geothermal gradient; (6) Decompression driven geodynamics that accounts for the myriad of observations attributed to plate tectonics without requiring physically-impossible mantle convection, an...
Spices form the basis of food pairing in Indian cuisine
Jain, Anupam; Bagler, Ganesh
2015-01-01T23:59:59.000Z
Culinary practices are influenced by climate, culture, history and geography. Molecular composition of recipes in a cuisine reveals patterns in food preferences. Indian cuisine encompasses a number of diverse sub-cuisines separated by geographies, climates and cultures. Its culinary system has a long history of health-centric dietary practices focused on disease prevention and promotion of health. We study food pairing in recipes of Indian cuisine to show that, in contrast to the positive food pairing reported in some Western cuisines, Indian cuisine has a strong signature of negative food pairing; the greater the extent of flavor sharing between any two ingredients, the lesser their co-occurrence. This feature is independent of recipe size and is not explained by ingredient category-based recipe constitution alone. Ingredient frequency emerged as the dominant factor specifying the characteristic flavor sharing pattern of the cuisine. Spices, individually and as a category, form the basis of ingredient composition in Indian...
Electronic structure basis for the titanic magnetoresistance in WTe₂
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pletikosic, I.; Ali, Mazhar N.; Fedorov, A. V.; Cava, R. J.; Valla, T.
2014-11-19T23:59:59.000Z
The electronic structure basis of the extremely large magnetoresistance in layered non-magnetic tungsten ditelluride has been investigated by angle-resolved photoelectron spectroscopy. Hole and electron pockets of approximately the same size were found at the Fermi level, suggesting that carrier compensation should be considered the primary source of the effect. The material exhibits a highly anisotropic, quasi one-dimensional Fermi surface from which the pronounced anisotropy of the magnetoresistance follows. A change in the Fermi surface with temperature was found and a high-density-of-states band that may take over conduction at higher temperatures and cause the observed turn-on behavior of the magnetoresistance in WTe₂ was identified.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2009-08-28T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as a means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document.
Hero, Alfred O.
[Slide outline: Background; Basic methods; Model-based algorithms; Model-free algorithms; RTI perspectives; Conclusions.]
Field theoretic description of charge regulation interaction
Natasa Adzic; Rudolf Podgornik
2014-05-15T23:59:59.000Z
In order to find the exact form of the electrostatic interaction between two proteins with dissociable charge groups in aqueous solution, we have studied a model system composed of two macroscopic surfaces with charge dissociation sites immersed in a counterion-only ionic solution. Field-theoretic representation of the grand canonical partition function is derived and evaluated within the mean-field approximation, giving the Poisson-Boltzmann theory with the Ninham-Parsegian boundary condition. Gaussian fluctuations around the mean-field are then analyzed in the lowest order correction that we calculate analytically and exactly, using the path integral representation for the partition function of a harmonic oscillator with time-dependent frequency. The first order (one loop) free energy correction gives the interaction free energy that reduces to the zero-frequency van der Waals form in the appropriate limit but in general gives rise to a mono-polar fluctuation term due to charge fluctuation at the dissociation sites. Our formulation opens up the possibility to investigate the Kirkwood-Shumaker interaction in more general contexts where their original derivation fails.
Field theoretic simulations of polymer nanocomposites
Koski, Jason; Chao, Huikuan; Riggleman, Robert A., E-mail: rrig@seas.upenn.edu [Department of Chemical and Biomolecular Engineering, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)
2013-12-28T23:59:59.000Z
Polymer field theory has emerged as a powerful tool for describing the equilibrium phase behavior of complex polymer formulations, particularly when one is interested in the thermodynamics of dense polymer melts and solutions where the polymer chains can be accurately described using Gaussian models. However, there are many systems of interest where polymer field theory cannot be applied in such a straightforward manner, such as polymer nanocomposites. Current approaches for incorporating nanoparticles have been restricted to the mean-field level and often require approximations where it is unclear how to improve their accuracy. In this paper, we present a unified framework that enables the description of polymer nanocomposites using a field theoretic approach. This method enables straightforward simulations of the fully fluctuating field theory for polymer formulations containing spherical or anisotropic nanoparticles. We demonstrate our approach captures the correlations between particle positions, present results for spherical and cylindrical nanoparticles, and we explore the effect of the numerical parameters on the performance of our approach.
Theoretical Tools for Large Scale Structure
J. R. Bond; L. Kofman; D. Pogosyan; J. Wadsley
1998-10-06T23:59:59.000Z
We review the main theoretical aspects of the structure formation paradigm which impinge upon wide angle surveys: the early universe generation of gravitational metric fluctuations from quantum noise in scalar inflaton fields; the well understood and computed linear regime of CMB anisotropy and large scale structure (LSS) generation; the weakly nonlinear regime, where higher order perturbation theory works well, and where the cosmic web picture operates, describing an interconnected LSS of clusters bridged by filaments, with membranes as the intrafilament webbing. Current CMB+LSS data favour the simplest inflation-based $\Lambda$CDM models, with a primordial spectral index within about 5% of scale invariant and $\Omega_\Lambda \approx 2/3$, similar to that inferred from SNIa observations, and with open CDM models strongly disfavoured. The attack on the nonlinear regime with a variety of N-body and gas codes is described, as are the excursion set and peak-patch semianalytic approaches to object collapse. The ingredients are mixed together in an illustrative gasdynamical simulation of dense supercluster formation.
Halverson, Thomas, E-mail: tom.halverson@ttu.edu; Poirier, Bill [Department of Chemistry and Biochemistry and Department of Physics, Texas Tech University, P.O. Box 41061, Lubbock, Texas 79409-1061 (United States)
2014-05-28T23:59:59.000Z
“Exact” quantum dynamics calculations of vibrational spectra are performed for two molecular systems of widely varying dimensionality (P₂O and CH₂NH), using a momentum-symmetrized Gaussian basis. This basis has been previously shown to defeat exponential scaling of computational cost with system dimensionality. The calculations were performed using the new “SWITCHBLADE” black-box code, which utilizes both dimensionally independent algorithms and massive parallelization to compute very large numbers of eigenstates for any fourth-order force field potential, in a single calculation. For both molecules considered here, many thousands of vibrationally excited states were computed, to at least an “intermediate” level of accuracy (tens of wavenumbers). Future modifications to increase the accuracy to “spectroscopic” levels, along with other potential future improvements of the new code, are also discussed.
Theoretical ecology: a successful first year and a bright future for a new journal
Hastings, Alan
2009-01-01T23:59:59.000Z
EDITORIAL: Theoretical ecology: a successful first year; volume 2 of Theoretical Ecology; looking back, journals focusing on theoretical ecology can play an expanding role
Laminated Wave Turbulence: Generic Algorithms II
Elena Kartashova; Alexey Kartashov
2006-11-17T23:59:59.000Z
The model of laminated wave turbulence puts forth a novel computational problem - construction of fast algorithms for finding exact solutions of Diophantine equations in integers of order $10^{12}$ and more. The equations to be solved in integers are resonant conditions for nonlinearly interacting waves and their form is defined by the wave dispersion. It is established that for the most common dispersion as an arbitrary function of a wave-vector length two different generic algorithms are necessary: (1) one-class-case algorithm for waves interacting through scales, and (2) two-class-case algorithm for waves interacting through phases. In our previous paper we described the one-class-case generic algorithm and in our present paper we present the two-class-case generic algorithm.
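For small scales the resonance conditions described here can be searched by brute force; a 1D toy sketch (the dispersion function and bounds are chosen only for illustration; the paper's algorithms target integers of order $10^{12}$, far beyond any brute-force search):

```python
import math

def resonant_triads(kmax, omega=lambda k: math.sqrt(k)):
    """Brute-force search for resonant triads k1 + k2 = k3 with
    omega(k1) + omega(k2) = omega(k3), over 1D integer wave numbers.
    A toy stand-in for the fast generic algorithms in the paper."""
    triads = []
    for k1 in range(1, kmax + 1):
        for k2 in range(k1, kmax + 1):  # k1 <= k2 avoids duplicates
            k3 = k1 + k2
            if k3 <= kmax and math.isclose(omega(k1) + omega(k2), omega(k3)):
                triads.append((k1, k2, k3))
    return triads

# With the concave dispersion sqrt(k) there are no exact triads;
# with a linear dispersion omega(k) = k, every sum is resonant.
print(len(resonant_triads(50)))
print(len(resonant_triads(10, omega=lambda k: k)))
```

The cost here is $O(k_{\max}^2)$ per dispersion, which is exactly why the paper's structure-exploiting generic algorithms are needed at the scales it considers.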
Algorithm for a microfluidic assembly line
Tobias M. Schneider; Shreyas Mandre; Michael P. Brenner
2011-01-19T23:59:59.000Z
Microfluidic technology has revolutionized the control of flows at small scales giving rise to new possibilities for assembling complex structures on the microscale. We analyze different possible algorithms for assembling arbitrary structures, and demonstrate that a sequential assembly algorithm can manufacture arbitrary 3D structures from identical constituents. We illustrate the algorithm by showing that a modified Hele-Shaw cell with 7 controlled flowrates can be designed to construct the entire English alphabet from particles that irreversibly stick to each other.
A Note on the Finite Element Method with Singular Basis Functions
Kaneko, Hideaki
finite element analysis that incorporates singular element functions. A need for introducing some singular elements as part of basis functions in certain finite element analysis arises out
GOETZ, T.G.
2003-07-25T23:59:59.000Z
This technical basis document describes the risk binning process and the technical basis for assigning risk bins for the aboveground structure failure representative accident and associated represented hazardous conditions. This document was developed to support the documented safety analysis.
A Game-Theoretical Dynamic Model for Electricity Markets
Aswin Kannan
2010-10-06T23:59:59.000Z
Oct 6, 2010 ... Abstract: We present a game-theoretical dynamic model for competitive electricity markets. We demonstrate that the model can be used to ...
Theoretical Electron Density Distributions for Fe- and Cu-Sulfide...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Theoretical Electron Density Distributions for Fe- and Cu-Sulfide Earth Materials: A Connection between Bond Length, Bond...
A Theoretical Study of Methanol Oxidation Catalyzed by Isolated...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
A Theoretical Study of Methanol Oxidation Catalyzed by Isolated Vanadia Clusters Supported on the (101) Surface of Anatase.
Improvements of Nuclear Data and Its Uncertainties by Theoretical...
Office of Scientific and Technical Information (OSTI)
Improvements of Nuclear Data and Its Uncertainties by Theoretical Modeling.
Theoretical overview on top pair production and single top production
Stefan Weinzierl
2012-01-19T23:59:59.000Z
In this talk I will give an overview on theoretical aspects of top quark physics. The focus lies on top pair production and single top production.
Theoretical Physics | U.S. DOE Office of Science (SC)
Office of Science (SC) Website
Quintom Cosmology: Theoretical implications and observations
Yi-Fu Cai; Emmanuel N. Saridakis; Mohammad R. Setare; Jun-Qing Xia
2010-04-22T23:59:59.000Z
We review the paradigm of quintom cosmology. This scenario is motivated by the observational indications that a dark energy equation of state crossing the cosmological constant boundary is mildly favored, although the data are still far from being conclusive. As a theoretical setup we introduce a no-go theorem existing in quintom cosmology, and based on it we discuss the conditions for the equation of state of dark energy realizing the quintom scenario. The simplest quintom model can be achieved by introducing two scalar fields, with one being quintessence and the other phantom. Based on the double-field quintom model we perform a detailed analysis of dark energy perturbations and we discuss their effects on current observations. This type of scenario usually suffers from a manifest problem due to the existence of a ghost degree of freedom, and thus we review various alternative realizations of the quintom paradigm. The developments in particle physics and string theory provide potential clues indicating that a quintom scenario may be obtained from scalar systems with higher derivative terms, as well as from non-scalar systems. Additionally, we construct a quintom realization in the framework of braneworld cosmology, where the cosmic acceleration and the phantom divide crossing result from the combined effects of the field evolution on the brane and the competition between four and five dimensional gravity. Finally, we study the outsets and fates of a universe in quintom cosmology. In a scenario with null energy condition violation one may obtain a bouncing solution at early times and therefore avoid the Big Bang singularity. Furthermore, if this occurs periodically, we obtain a realization of an oscillating universe. Lastly, we comment on several open issues in quintom cosmology and their connection to future investigations.
Jeongho Bang; Seung-Woo Lee; Chang-Woo Lee; Hyunseok Jeong
2014-09-17T23:59:59.000Z
We propose a quantum algorithm to obtain the lowest eigenstate of any Hamiltonian simulated by a quantum computer. The proposed algorithm begins with an arbitrary initial state of the simulated system. A finite series of transforms is iteratively applied to the initial state, assisted by an ancillary qubit. The fraction of the lowest eigenstate in the initial state is then amplified up to $\simeq 1$. We prove in the theoretical analysis that our algorithm can faithfully work for any arbitrary Hamiltonian. Numerical analyses are also carried out. We first provide a numerical proof-of-principle demonstration with a simple Hamiltonian in order to compare our scheme with the so-called "Demon-like algorithmic cooling (DLAC)", recently proposed in [Nature Photonics 8, 113 (2014)]. The result shows a good agreement with our theoretical analysis, exhibiting behavior comparable to the best "cooling" with the DLAC method. We then consider a random Hamiltonian model for further analysis of our algorithm. By numerical simulations, we show that the total number $n_c$ of iterations is proportional to ${\cal O}(D^{-1}\epsilon^{-0.19})$, where $D$ is the difference between the two lowest eigenvalues, and $\epsilon$ is an error defined as the probability that the finally obtained system state is in an unexpected (i.e. not the lowest) eigenstate.
Algorithmic Cooling in Liquid State NMR
Yosi Atia; Yuval Elias; Tal Mor; Yossi Weinstein
2014-11-17T23:59:59.000Z
Algorithmic cooling is a method that employs thermalization to increase the qubits' purification level, namely it reduces the qubit-system's entropy. We utilized gradient ascent pulse engineering (GRAPE), an optimal control algorithm, to implement algorithmic cooling in liquid state nuclear magnetic resonance. Various cooling algorithms were applied onto the three qubits of 13C2-trichloroethylene, cooling the system beyond Shannon's entropy bound in several different ways. For example, in one experiment a carbon qubit was cooled by a factor of 4.61. This work is a step towards potentially integrating tools of NMR quantum computing into in vivo magnetic resonance spectroscopy.
LO, NLO, LO* and jet algorithms
J. Huston
2010-01-14T23:59:59.000Z
The impact of NLO corrections, and in particular, the role of jet algorithms, is examined for a variety of processes at the Tevatron and LHC.
Optimization Online - Efficient Heuristic Algorithms for Maximum ...
T. G. J. Myklebust
2012-11-19T23:59:59.000Z
Nov 19, 2012 ... Efficient Heuristic Algorithms for Maximum Utility Product Pricing Problems. T. G. J. Myklebust (tmyklebu ***at*** csclub.uwaterloo.ca)
Efficient Algorithmic Techniques for Several Multidimensional ...
Mugurel
2008-10-23T23:59:59.000Z
Politehnica University of Bucharest, Romania, mugurel.andreica@cs.pub.ro. Abstract: In this paper I present several novel, efficient, algorithmic techniques for.
Parallel Interval Continuous Global Optimization Algorithms
abdeljalil benyoub
2002-07-19T23:59:59.000Z
Jul 19, 2002 ... Abstract: We theoretically study, on a distributed memory architecture, the parallelization of Hansen's algorithm for the continuous global ...
High-Performance Engineering Optimization: Applications, Algorithms...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
High-Performance Engineering Optimization: Applications, Algorithms, and Adoption. Event Sponsor: Mathematics and Computer Science Division. Start Date: Aug 19, 2015, 10:30am ...
Design and Analysis of Algorithms Course Page
Design and Analysis of Algorithms. TTR 3:05- 4:25, IC 109. OFFICE HOURS: Wed 11-12 or by appointment (Rm: Skiles, 116).
Algorithmic Cooling in Liquid State NMR
Yosi Atia; Yuval Elias; Tal Mor; Yossi Weinstein
2015-08-05T23:59:59.000Z
Algorithmic cooling is a method that employs thermalization to increase qubit purification level, namely it reduces the qubit-system's entropy. We utilized gradient ascent pulse engineering (GRAPE), an optimal control algorithm, to implement algorithmic cooling in liquid state nuclear magnetic resonance. Various cooling algorithms were applied onto the three qubits of $^{13}$C$_2$-trichloroethylene, cooling the system beyond Shannon's entropy bound in several different ways. In particular, in one experiment a carbon qubit was cooled by a factor of 4.61. This work is a step towards potentially integrating tools of NMR quantum computing into in vivo magnetic resonance spectroscopy.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2007-03-12T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. Rev. 0 marks the first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database. Revision numbers that are whole numbers reflect major revisions typically involving changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. Revision Log: Rev. 0 (2/25/2005) Major revision and expansion. Rev. 
0.1 (3/12/2007) Minor revision. Updated Chapters 5, 6 and 9 to reflect change in default ring calibration factor used in HEDP dose calculation software. Factor changed from 1.5 to 2.0 beginning January 1, 2007. Pages on which changes were made are as follows: 5.23, 5.69, 5.78, 5.80, 5.82, 6.3, 6.5, 6.29, 9.2.
Experimental Progress Report--Modernizing the Fission Basis
Macri, R A
2012-02-17T23:59:59.000Z
In 2010 a proposal (Modernizing the Fission Basis) was prepared to 'resolve long-standing differences between LANL and LLNL associated with the correct fission basis for analysis of nuclear test data'. A collaboration between LANL, LLNL and TUNL has been formed to implement this program by performing high-precision measurements of neutron-induced fission product yields as a function of incident neutron energy. This new program benefits from successful previous efforts utilizing mono-energetic neutrons undertaken by this collaboration. The first preliminary experiment in this new program was performed between July 24-31, 2011 at TUNL and had two main objectives: (1) demonstrating the capability to measure characteristic {gamma}-rays from specific fission products; (2) studying background effects from room-scattered neutrons. In addition, a new dual fission ionization chamber has been designed and manufactured. The production design of the chamber is shown in the picture below. The first feasibility experiment to test this chamber is scheduled at the TUNL Tandem Laboratory from September 19-25, 2011. The dual fission chamber design will allow simultaneous exposure of absolute fission fragment emission rate detectors and the thick fission activation foils, positioned between the two chambers. This document formalizes the earlier experimental report demonstrating the experimental capability to make accurate (<2%) gamma-ray spectroscopic measurements of the excitation function of high-yield fission products of the 239Pu(n,f) reaction (induced by quasimonoenergetic neutrons). A second experiment (9/2011) introduced a compact double-sided fission chamber into the experimental arrangement, and so the relative number of incident neutrons striking the sample foil at each bombarding energy is limited only by statistics. (The number of incident neutrons often limits the experimental accuracy.)
Fission chamber operation was so exceptional that two more chambers have been fabricated; thus fission foils of different isotopes may be left in place while samples are changed. This both greatly expands the scope of the measurements and allows the results to be vetted. Experiment 2 is not reported here. A continuing experiment has been proposed for February 2012.
Structural basis for the antibody neutralization of Herpes simplex virus
Lee, Cheng-Chung; Lin, Li-Ling [Academia Sinica, Taipei 115, Taiwan (China); Academia Sinica, Taipei 115, Taiwan (China); Chan, Woan-Eng [Development Center for Biotechnology, New Taipei City 221, Taiwan (China); Ko, Tzu-Ping [Academia Sinica, Taipei 115, Taiwan (China); Academia Sinica, Taipei 115, Taiwan (China); Lai, Jiann-Shiun [Development Center for Biotechnology, New Taipei City 221, Taiwan (China); Ministry of Economic Affairs, Taipei 100, Taiwan (China); Wang, Andrew H.-J., E-mail: ahjwang@gate.sinica.edu.tw [Academia Sinica, Taipei 115, Taiwan (China); Academia Sinica, Taipei 115, Taiwan (China); Taipei Medical University, Taipei 110, Taiwan (China)
2013-10-01T23:59:59.000Z
The gD–E317-Fab complex crystal revealed the conformational epitope of human mAb E317 on HSV gD, providing a molecular basis for understanding the viral neutralization mechanism. Glycoprotein D (gD) of Herpes simplex virus (HSV) binds to a host cell surface receptor, which is required to trigger membrane fusion for virion entry into the host cell. gD has become a validated anti-HSV target for therapeutic antibody development. The highly inhibitory human monoclonal antibody E317 (mAb E317) was previously raised against HSV gD for viral neutralization. To understand the structural basis of antibody neutralization, crystals of the gD ectodomain bound to the E317 Fab domain were obtained. The structure of the complex reveals that E317 interacts with gD mainly through the heavy chain, which covers a large area for epitope recognition on gD, with a flexible N-terminal and C-terminal conformation. The epitope core structure maps to the external surface of gD, corresponding to the binding sites of two receptors, herpesvirus entry mediator (HVEM) and nectin-1, which mediate HSV infection. E317 directly recognizes the gD–nectin-1 interface and occludes the HVEM contact site of gD to block its binding to either receptor. The binding of E317 to gD also prohibits the formation of the N-terminal hairpin of gD for HVEM recognition. The major E317-binding site on gD overlaps with either the nectin-1-binding residues or the neutralizing antigenic sites identified thus far (Tyr38, Asp215, Arg222 and Phe223). The epitopes of gD for E317 binding are highly conserved between two types of human herpesvirus (HSV-1 and HSV-2). This study enables the virus-neutralizing epitopes to be correlated with the receptor-binding regions. The results further strengthen the previously demonstrated therapeutic and diagnostic potential of the E317 antibody.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2011-04-04T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at the U.S. Department of Energy (DOE) Hanford site. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with requirements of 10 CFR 835, the DOE Laboratory Accreditation Program, the DOE Richland Operations Office, DOE Office of River Protection, DOE Pacific Northwest Office of Science, and Hanford’s DOE contractors. The dosimetry system is operated by the Pacific Northwest National Laboratory (PNNL) Hanford External Dosimetry Program which provides dosimetry services to PNNL and all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since its inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving significant changes to all chapters in the document. 
Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. Maintenance and distribution of controlled hard copies of the manual by PNNL was discontinued beginning with Revision 0.2.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2010-04-01T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at the U.S. Department of Energy (DOE) Hanford site. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with requirements of 10 CFR 835, the DOE Laboratory Accreditation Program, the DOE Richland Operations Office, DOE Office of River Protection, DOE Pacific Northwest Office of Science, and Hanford’s DOE contractors. The dosimetry system is operated by the Pacific Northwest National Laboratory (PNNL) Hanford External Dosimetry Program which provides dosimetry services to PNNL and all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since its inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving significant changes to all chapters in the document. 
Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. Maintenance and distribution of controlled hard copies of the manual by PNNL was discontinued beginning with Revision 0.2.
A preliminary evaluation of a speed threshold incident detection algorithm
Kolb, Stephanie Lang
1996-01-01T23:59:59.000Z
[Front matter: table of contents and list of figures, naming the candidate incident detection algorithms evaluated (the Event Scan algorithm, a neural network, California Algorithm #8 with fuzzy logic, the California Algorithm #10 decision tree, the McMaster algorithm, and a dynamic model algorithm based on traffic flow relationships) together with speed/flow curve figures and a multi-layer feed-forward neural network diagram.]
Theoretical Studies of Low Frequency Instabilities in the Ionosphere. Final Report
Dimant, Y. S.
2003-08-20T23:59:59.000Z
The objective of the current project is to provide a theoretical basis for better understanding of numerous radar and rocket observations of density irregularities and related effects in the lower equatorial and high-latitude ionospheres. The research focused on: (1) continuing efforts to develop a theory of nonlinear saturation of the Farley-Buneman instability; (2) revision of the kinetic theory of the electron-thermal instability at low altitudes; (3) studying the effects of strong anomalous electron heating in the high-latitude electrojet; (4) analytical and numerical studies of the combined Farley-Buneman/ion-thermal instabilities in the E-region ionosphere; (5) studying the effect of dust charging in Polar Mesospheric Clouds.
Structural basis of substrate discrimination and integrin binding by autotaxin
Hausmann, Jens; Kamtekar, Satwik; Christodoulou, Evangelos; Day, Jacqueline E.; Wu, Tao; Fulkerson, Zachary; Albers, Harald M.H.G.; van Meeteren, Laurens A.; Houben, Anna J.S.; van Zeijl, Leonie; Jansen, Silvia; Andries, Maria; Hall, Troii; Pegg, Lyle E.; Benson, Timothy E.; Kasiem, Mobien; Harlos, Karl; Vander Kooi, Craig W.; Smyth, Susan S.; Ovaa, Huib; Bollen, Mathieu; Morris, Andrew J.; Moolenaar, Wouter H.; Perrakis, Anastassis (Pfizer); (Leuven); (Oxford); (NCI-Netherlands); (Kentucky)
2013-09-25T23:59:59.000Z
Autotaxin (ATX, also known as ectonucleotide pyrophosphatase/phosphodiesterase-2, ENPP2) is a secreted lysophospholipase D that generates the lipid mediator lysophosphatidic acid (LPA), a mitogen and chemoattractant for many cell types. ATX-LPA signaling is involved in various pathologies including tumor progression and inflammation. However, the molecular basis of substrate recognition and catalysis by ATX and the mechanism by which it interacts with target cells are unclear. Here, we present the crystal structure of ATX, alone and in complex with a small-molecule inhibitor. We have identified a hydrophobic lipid-binding pocket and mapped key residues for catalysis and selection between nucleotide and phospholipid substrates. We have shown that ATX interacts with cell-surface integrins through its N-terminal somatomedin B-like domains, using an atypical mechanism. Our results define determinants of substrate discrimination by the ENPP family, suggest how ATX promotes localized LPA signaling and suggest new approaches for targeting ATX with small-molecule therapeutic agents.
Reactivity accidents: A reassessment of the design-basis events
Diamond, D.J.; Hsu, Chia-Jung; Fitzpatrick, R.; Mirkovic, D.
1989-01-01T23:59:59.000Z
This paper summarizes a study of light water reactor event sequences which have been investigated for their potential to result in reactivity accidents with severe consequences. The study is an outgrowth of the concern which arose after the accident at Chernobyl and was recommended by the report of the US Nuclear Regulatory Commission (NRC) on the implications of that accident (NUREG-1251). The work was done for the NRC to reconfirm or bring into question previous judgments on reactivity events which must be analyzed for licensing. Event sequences were defined and then a probabilistic assessment was completed to estimate the frequency of the reactivity event and/or a deterministic calculation was completed to estimate the consequences to the fuel. Using the results of this analysis, analysis done by others, and a set of screening criteria developed within this study, judgments were made for each sequence as to its importance, and recommendations were made as to whether the NRC ought to be considering the important sequences as part of the design basis or for further, more detailed, investigation. 31 refs., 9 figs., 1 tab.
Climate Change: The Physical Basis and Latest Results
None
2011-10-06T23:59:59.000Z
The 2007 Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) concludes: "Warming in the climate system is unequivocal." Without the contribution of Physics to climate science over many decades, such a statement would not have been possible. Experimental physics enables us to read climate archives such as polar ice cores and so provides the context for the current changes. For example, today the concentration of CO2 in the atmosphere, the second most important greenhouse gas, is 28% higher than any time during the last 800,000 years. Classical fluid mechanics and numerical mathematics are the basis of climate models from which estimates of future climate change are obtained. But major instabilities and surprises in the Earth System are still unknown. These are also to be considered when the climatic consequences of proposals for geo-engineering are estimated. Only Physics will permit us to further improve our understanding in order to provide the foundation for policy decisions facing the global climate change challenge.
St Andrews, University of
Reformation ... University of Edinburgh; University of St Andrews, 27th November 2013. Outline: 1. The Need for Language Repair; 2. The Reformation Algorithm; 3. Discussion.
MULTI-CRITERIA SEARCH ALGORITHM: AN EFFICIENT APPROXIMATE K-NN ALGORITHM FOR IMAGE RETRIEVAL
An efficient approximate k-NN search algorithm for large-scale image databases, based on top-k multi-criteria search techniques, evaluated on retrieval quality, storage requirements and update cost. The search algorithm delivers approximate results ...
Theoretical Description of the Fission Process
Witold Nazarewicz
2009-10-25T23:59:59.000Z
Advanced theoretical methods and high-performance computers may finally unlock the secrets of nuclear fission, a fundamental nuclear decay that is of great relevance to society. In this work, we studied the phenomenon of spontaneous fission using the symmetry-unrestricted nuclear density functional theory (DFT). Our results show that many observed properties of fissioning nuclei can be explained in terms of pathways in multidimensional collective space corresponding to different geometries of fission products. From the calculated collective potential and collective mass, we estimated spontaneous fission half-lives, and good agreement with experimental data was found. We also predicted a new phenomenon of trimodal spontaneous fission for some transfermium isotopes. Our calculations demonstrate that fission barriers of excited superheavy nuclei vary rapidly with particle number, pointing to the importance of shell effects even at large excitation energies. The results are consistent with recent experiments where superheavy elements were created by bombarding an actinide target with calcium-48; yet even at high excitation energies, sizable fission barriers remained. Not only does this reveal clues about the conditions for creating new elements, it also provides a wider context for understanding other types of fission. Understanding of the fission process is crucial for many areas of science and technology. Fission governs the existence of many transuranium elements, including the predicted long-lived superheavy species. In nuclear astrophysics, fission influences the formation of heavy elements in the final stages of the r-process in a very high neutron density environment. Fission applications are numerous. Improved understanding of the fission process will enable scientists to enhance the safety and reliability of the nation’s nuclear stockpile and nuclear reactors.
The deployment of a fleet of safe and efficient advanced reactors, which will also minimize radiotoxic waste and be proliferation-resistant, is a goal for the advanced nuclear fuel cycles program. While in the past the design, construction, and operation of reactors were supported through empirical trials, this new phase in nuclear energy production is expected to heavily rely on advanced modeling and simulation capabilities.
Theoretical Studies of Hydrogen Storage Alloys.
Jonsson, Hannes
2012-03-22T23:59:59.000Z
Theoretical calculations were carried out to search for lightweight alloys that can be used to reversibly store hydrogen in mobile applications, such as automobiles. Our primary focus was on magnesium based alloys. While MgH{sub 2} is in many respects a promising hydrogen storage material, there are two serious problems which need to be solved in order to make it useful: (i) the binding energy of the hydrogen atoms in the hydride is too large, causing the release temperature to be too high, and (ii) the diffusion of hydrogen through the hydride is so slow that loading of hydrogen into the metal takes much too long. In the first year of the project, we found that the addition of ca. 15% of aluminum decreases the binding energy to the hydrogen to the target value of 0.25 eV which corresponds to release of 1 bar hydrogen gas at 100 degrees C. Also, the addition of ca. 15% of transition metal atoms, such as Ti or V, reduces the formation energy of interstitial H-atoms making the diffusion of H-atoms through the hydride more than ten orders of magnitude faster at room temperature. In the second year of the project, several calculations of alloys of magnesium with various other transition metals were carried out and systematic trends in stability, hydrogen binding energy and diffusivity established. Some calculations of ternary alloys and their hydrides were also carried out, for example of Mg{sub 6}AlTiH{sub 16}. It was found that the binding energy reduction due to the addition of aluminum and increased diffusivity due to the addition of a transition metal are both effective at the same time. This material would in principle work well for hydrogen storage but it is, unfortunately, unstable with respect to phase separation. A search was made for a ternary alloy of this type where both the alloy and the corresponding hydride are stable. Promising results were obtained by including Zn in the alloy.
Theoretical & Experimental Studies of Elementary Particles
McFarland, Kevin
2012-10-04T23:59:59.000Z
Abstract High energy physics has been one of the signature research programs at the University of Rochester for over 60 years. The group has made leading contributions to experimental discoveries at accelerators and in cosmic rays and has played major roles in developing the theoretical framework that gives us our ``standard model'' of fundamental interactions today. This award from the Department of Energy funded a major portion of that research for more than 20 years. During this time, highlights of the supported work included the discovery of the top quark at the Fermilab Tevatron, the completion of a broad program of physics measurements that verified the electroweak unified theory, the measurement of three generations of neutrino flavor oscillations, and the first observation of a ``Higgs like'' boson at the Large Hadron Collider. The work has resulted in more than 2000 publications over the period of the grant. The principal investigators supported on this grant have been recognized as leaders in the field of elementary particle physics by their peers through numerous awards and leadership positions. Most notable among them is the APS W.K.H. Panofsky Prize awarded to Arie Bodek in 2004, the J.J. Sakurai Prizes awarded to Susumu Okubo and C. Richard Hagen in 2005 and 2010, respectively, the Wigner medal awarded to Susumu Okubo in 2006, and five principal investigators (Das, Demina, McFarland, Orr, Tipton) who received Department of Energy Outstanding Junior Investigator awards during the period of this grant. The University of Rochester Department of Physics and Astronomy, which houses the research group, provides primary salary support for the faculty and has waived most tuition costs for graduate students during the period of this grant. The group also benefits significantly from technical support and infrastructure available at the University which supports the work. 
The research work of the group has provided educational opportunities for graduate students, undergraduate students and high school students and teachers. Seventy-two graduate students received a Ph.D. in physics for research supported by this grant.
The growth of business firms: Theoretical framework and empirical evidence
Buldyrev, Sergey
Dongfeng Fu, Fabio ... The model predicts the distribution Pg(g) of business-firm growth rates: Pg(g) is exponential in the central part at all levels of aggregation studied. The theoretical framework models business firms as classes ...
GRAPH THEORETIC APPROACHES TO INJECTIVITY IN CHEMICAL REACTION SYSTEMS
Craciun, Gheorghe
Murad Banaji and Gheorghe Craciun. Develops algebraic and graph-theoretic conditions for injectivity of chemical reaction systems, which rule out the possibility of multiple equilibria in the systems in question. Key words: chemical reactions; injectivity; SR ...
Theoretical Determination of the Dissociation Energy of Molecular Hydrogen
Pachucki, Krzysztof
Konrad Piszczatowski (Chemistry, University of Warsaw, Pasteura 1, 02-093 Warsaw, Poland) et al.; Physics, University of Warsaw, Hoza 69, 00-681 Warsaw, Poland; Center for Theoretical ... Abstract: The dissociation energy of molecular hydrogen ...
Theoretical Integration, Cooperation, and Theories as Tracking Devices
James Griesemer (UC Davis). The theoretical problem of integrating evolution, heredity, development, and cognition has a long pedigree ... incredulity at their peculiar visions of biological integration. Think of Herbert Spencer ...
Chemical Organization Theory as a Theoretical Base for Chemical Computing
Dittrich, Peter
Naoki Matsumaru, Florian ..., 07743 Jena, Germany (http://www.minet.uni-jena.de/csb/). Submitted 14 November 2005. In chemical computing, when programming chemical systems, a theoretical method to cope with such emergent behavior is desired.
INVERTING RADON TRANSFORMS: THE GROUP-THEORETIC APPROACH, François Rouvière
Vallette, Bruno
François Rouvière. Abstract: ... of various inversion formulas from the literature on Radon transforms, obtained by group-theoretic tools such as invariant differential operators and harmonic analysis. We introduce a general concept of shifted Radon ...
Towards Cinematic Hypertext: A Theoretical and Empirical Investigation
Tech Report kmi-04-6, March 2004 (PhD submission February 2004, PhD awarded March 2004), Knowledge Media Institute. ... elements of these with new theoretical insights, to investigate a fourth paradigm referred to as Cinematic Hypertext.
Center for Theoretical Biological Physics University of California, San Diego
Collar, Juan I.
"Chemotaxis To Go" ... physicists with the Center for Theoretical Biological Physics at the University of California, San Diego. CTBP is a consortium of researchers from UCSD, The Salk Institute ...
On the Existence of certain Quantum Algorithms
Bjoern Grohmann
2009-04-11T23:59:59.000Z
We investigate whether quantum algorithms exist that compute the maximum of a set of conjugated elements of a given number field in quantum polynomial time. We relate the existence of such algorithms for a certain family of number fields to an open conjecture from elementary number theory.
On the Potential of Automatic Algorithm Configuration
Hutter, Frank
The problem of setting an algorithm's free parameters (e.g., neighborhood structure in local search, or variable/value ordering heuristics in tree search) for maximal performance on a class of problem instances. Automatic configuration can lead to enormous speed-ups of tree search algorithms for SAT, e.g. for solving SAT-encoded software ...
Quadruped Gait Learning Using Cyclic Genetic Algorithms
Hickey, Timothy J.
... and in particular, genetic algorithms, have previously been used to develop gaits for legged (primarily hexapod) robots. In a previous work Parker made use of cyclic genetic algorithms to develop walking gaits for a hexapod robot [5]. Each of the six legs of this hexapod robot could only move vertically and horizontally, and the number ...
Enhancing Smart Home Algorithms Using Temporal Relations
Cook, Diane J.
Vikramaditya R. Jakkula and Diane J. Cook, School of Electrical Engineering and Computer Science. Abstract: Smart homes offer a potential benefit ... Incorporating temporal relations improves the performance of these algorithms and thus enhances the ability of smart homes to monitor ...
Virtual Scanning Algorithm for Road Network Surveillance
Jeong, Jaehoon "Paul"
Jaehoon Jeong, Student Member, IEEE, Yu Gu ... We propose a VIrtual Scanning Algorithm (VISA), tailored and optimized for road network surveillance. Our design exploits the facts that 1) targets travel along roadways and 2) the road network maps are normally known. We guarantee the detection of moving targets ...
Communication and Computation in Distributed CSP Algorithms
Krishnamachari, Bhaskar
César Fernández, Ramón Béjar ... in the context of networked distributed systems. In order to study the performance of Distributed CSP (DisCSP) algorithms, we consider two complete DisCSP algorithms: asynchronous backtracking (ABT) and asynchronous weak-commitment search.
A heuristic algorithm for graph isomorphism
Torres Navarro, Luz
1999-01-01T23:59:59.000Z
A polynomial time algorithm O(n?), ISO-MT, that seems to solve the graph isomorphism decision problem correctly for all classes of graphs. Our algorithm is extremely useful from the practical point of view, since counter-examples (pairs of graphs for which our...
Power Control Algorithms in Wireless Communications
Power Control Algorithms in Wireless Communications. Judd Rohwer, Chaouki T. Abdallah, Aly El-Osery. Abstract: This paper presents a comprehensive review of the published algorithms on power control in Code Division Multiple Access (CDMA) and Time Division Multiple Access (TDMA) systems. Introduction: Power control in cellular systems is applied...
A Faster Primal Network Simplex Algorithm
Aggarwal, Charu C.
We present a faster implementation of the polynomial time primal simplex algorithm due to Orlin [23]. His algorithm requires O(nm min{log(nC), m log n}) pivots and O(n^2 m min{log(nC), m log n}) time. The bottleneck operations ...
ASYNPLEX, an asynchronous parallel revised simplex algorithm
Hall, Julian
ASYNPLEX, an asynchronous parallel revised simplex algorithm. J. A. J. Hall, K. I. M. McKinnon, 27th February 1998. Abstract: This paper describes ASYNPLEX, an asynchronous variant of the revised simplex method which is suitable...
Buffer assignment algorithms for data driven architectures
Chatterjee, Mitrajit
1994-01-01T23:59:59.000Z
The algorithms have been shown to be O(V x E) and O(V x log V) respectively; an improvement over the existing strategies. A novel buffer distribution algorithm to maximize the pipelining and throughput has also been proposed. The number of buffers obtained...
Stochastic Search for Signal Processing Algorithm Optimization
Stochastic Search for Signal Processing Algorithm Optimization. Bryan Singer, Manuela Veloso. We address the complex task of signal processing optimization. We first introduce and discuss the complexities of this domain. In general, a single signal processing algorithm can be represented by a very...
When are two algorithms the same?
Andreas Blass; Nachum Dershowitz; Yuri Gurevich
2008-11-05T23:59:59.000Z
People usually regard algorithms as more abstract than the programs that implement them. The natural way to formalize this idea is that algorithms are equivalence classes of programs with respect to a suitable equivalence relation. We argue that no such equivalence relation exists.
Note on Integer Factoring Algorithms II
N. A. Carella
2007-02-08T23:59:59.000Z
This note introduces a new class of integer factoring algorithms. Two versions of this method will be described, deterministic and probabilistic. These algorithms are practical, and can factor large classes of balanced integers N = pq, p < q < 2p in superpolynomial time. Further, an extension of the Fermat factoring method is proposed.
Improvements of the local bosonic algorithm
B. Jegerlehner
1996-12-15T23:59:59.000Z
We report on several improvements of the local bosonic algorithm proposed by M. Luescher. We find that preconditioning and over-relaxation works very well. A detailed comparison between the bosonic and the Kramers-algorithms shows comparable performance for the physical situation examined.
Energy Aware Algorithmic Engineering Swapnoneel Roy
Rudra, Atri
Energy Aware Algorithmic Engineering. Swapnoneel Roy, School of Computing, University of North Florida; ... akshat.verma@in.ibm.com. Abstract: In this work, we argue that energy management should be a guiding principle in algorithm design. Existing models are simple and do not aid in the design of energy-efficient algorithms. In this work, we conducted a large number...
Algorithmic Problems in Power Management Sandy Irani
Pruhs, Kirk
Algorithmic Problems in Power Management. Sandy Irani, School of Information and Computer Science. This survey focuses on algorithmic problems related to power management. We will try to highlight some open problems that we feel are interesting. The survey concentrates on the authors' own lines of research: managing power using...
Algorithmic cooling and scalable NMR quantum computers
Mor, Tal
Algorithmic cooling and scalable NMR quantum computers. P. Oscar Boykin, Tal Mor, Vwani ... Algorithmic cooling (via a polarization heat bath) is a powerful method for obtaining a large number of highly polarized (quantum) bits; algorithmic cooling cleans dirty bits beyond Shannon's bound on data compression.
Safety evaluation of MHTGR licensing basis accident scenarios
Kroeger, P.G.
1989-04-01T23:59:59.000Z
The safety potential of the Modular High-Temperature Gas Reactor (MHTGR) was evaluated, based on the Preliminary Safety Information Document (PSID) as submitted by the US Department of Energy to the US Nuclear Regulatory Commission. The relevant reactor safety codes were extended for this purpose and applied to this new reactor concept, searching primarily for potential accident scenarios that might lead to fuel failures due to excessive core temperatures and/or to vessel damage due to excessive vessel temperatures. The design basis accident scenario leading to the highest vessel temperatures is the depressurized core heatup scenario without any forced cooling and with decay heat rejection to the passive Reactor Cavity Cooling System (RCCS). This scenario was evaluated, including numerous parametric variations of input parameters, such as material properties and decay heat. It was found that significant safety margins exist, but that high confidence levels in the core effective thermal conductivity, the reactor vessel and RCCS thermal emissivities, and the decay heat function are required to maintain this safety margin. Severe accident extensions of this depressurized core heatup scenario included the cases of complete RCCS failure, cases of massive air ingress, core heatup without scram, and cases of degraded RCCS performance due to absorbing gases in the reactor cavity. Except for no-scram scenarios extending beyond 100 hr, the fuel never reached the limiting temperature of 1600°C, below which measurable fuel failures are not expected. In some of the scenarios, excessive vessel and concrete temperatures could lead to investment losses, but are not expected to lead to any source term beyond that from the circulating inventory. 19 refs., 56 figs., 11 tabs.
EM Algorithms from a Non-Stochastic Perspective Charles Byrne
Byrne, Charles
EM Algorithms from a Non-Stochastic Perspective. Charles Byrne (Charles Byrne@uml.edu), Department ... The EM algorithm is not a single algorithm, but a template for the construction of iterative algorithms. Although it is a method for estimating parameters in statistics, the essence of the EM algorithm is not stochastic.
A Visualization System for Correctness Proofs of Graph Algorithms
Metaxas, Takis
A Visualization System for Correctness Proofs of Graph Algorithms. P.A. Gloor, D.B. Johnson, F. Makedon, P. Metaxas. Feb. 28, 1993. Running head: Proof Visualization of Graph Algorithms. The system supports correctness proofs of graph algorithms and has been demonstrated for a greedy algorithm, Prim's algorithm...
QR-like Algorithms: An Overview of Convergence Theory and Practice
QR-like Algorithms: An Overview of Convergence Theory and Practice. David S. Watkins. Abstract: The family of GR algorithms is discussed. This includes the standard and multishift QR and LR algorithms, the Hamiltonian QR algorithm, divide-and-conquer algorithms such as the matrix sign function method, and many...
Sketching, streaming, and sub-linear space algorithms
Reif, Rafael
Sketching, streaming, and sub-linear space algorithms. Piotr Indyk, MIT (currently at Rice U). Topics: general algorithms are approximate; we assume a worst-case input stream (adversaries do exist); modular composition; randomized algorithms are OK (often necessary); randomness in the algorithm.
Discrimination of Unitary Transformations and Quantum Algorithms
David Collins
2008-11-09T23:59:59.000Z
Quantum algorithms are typically understood in terms of the evolution of a multi-qubit quantum system under a prescribed sequence of unitary transformations. The input to the algorithm prescribes some of the unitary transformations in the sequence with others remaining fixed. For oracle query algorithms, the input determines the oracle unitary transformation. Such algorithms can be regarded as devices for discriminating amongst a set of unitary transformations. The question arises: "Given a set of known oracle unitary transformations, to what extent is it possible to discriminate amongst them?" We investigate this for the Deutsch-Jozsa problem. The task of discriminating amongst the admissible oracle unitary transformations results in an exhaustive collection of algorithms which can solve the problem with certainty.
Teodor Buchner; Jan Żebrowski; Grzegorz Gielerak
2010-07-13T23:59:59.000Z
Using a three-compartment model of blood pressure dynamics, we analyze theoretically the short term cardiovascular variability: how the respiratory-related blood pressure fluctuations are buffered by appropriate heart rate changes: i.e. the respiratory sinus arrhythmia. The buffering is shown to be crucially dependent on the time delay between the stimulus (such as e.g. the inspiration onset) and the application of the control (the moment in time when the efferent response is delivered to the heart). This theoretical analysis shows that the buffering mechanism is effective only in the upright position of the body. It explains a paradoxical effect of enhancement of the blood pressure fluctuations by an ineffective control. Such a phenomenon was observed experimentally. Using the basis of the model, we discuss the blood pressure variability and heart rate variability under such clinical conditions as the states of expressed adrenergic drive and the tilt-test during the parasympathetic blockade or fixed rate atrial pacing. From the results of the variability analysis we draw a conclusion that the control of blood pressure in the HF band does not directly obtain the arterial baroreceptor input. We also discuss methodological issues of baroreflex sensitivity and sympathovagal balance assessment.
Theoretical Investigation of Charge Transfer between N^{6+} and atomic Hydrogen
Wu, Y. [University of Georgia, Athens, GA; Stancil, P C [University of Georgia, Athens, GA; Liebermann, H. P. [Bergische Universitaet Wuppertal, Germany; Funke, P. [Bergische Universitaet Wuppertal, Germany; Rai, S. N. [Bergische Universitaet Wuppertal, Germany; Buenker, R. J. [Bergische Universitaet Wuppertal, Germany; Schultz, David Robert [ORNL; Hui, Yawei [ORNL; Draganic, Ilija N [ORNL; Havener, Charles C [ORNL
2011-01-01T23:59:59.000Z
Charge transfer due to collisions of ground-state N^{6+}(1s^2 S) with atomic hydrogen has been investigated theoretically using the quantum-mechanical molecular-orbital close-coupling (QMOCC) method, in which the adiabatic potentials and nonadiabatic couplings were obtained using the multireference single- and double-excitation configuration-interaction (MRDCI) approach. Total, n-, l-, and S-resolved cross sections have been obtained for energies between 10 meV/u and 10 keV/u. The QMOCC results were compared to available experimental and theoretical data as well as to merged-beams measurements and atomic-orbital close-coupling and classical trajectory Monte Carlo calculations. The accuracy of the QMOCC charge-transfer cross sections was found to be sensitive to the accuracy of the adiabatic potentials and couplings. Consequently, we developed a method to optimize the atomic basis sets used in the MRDCI calculations for highly charged ions. Since cross sections, especially those that are state selective, are necessary input for x-ray emission simulation of heliospheric and Martian exospheric spectra arising from solar wind ion-neutral gas collisions, a recommended set of state-selective cross sections, based on our evaluation of the calculations and measurements, is provided.
Advanced Fuel Cycle Economic Tools, Algorithms, and Methodologies
David E. Shropshire
2009-05-01T23:59:59.000Z
The Advanced Fuel Cycle Initiative (AFCI) Systems Analysis supports engineering economic analyses and trade-studies, and requires a requisite reference cost basis to support adequate analysis rigor. In this regard, the AFCI program has created a reference set of economic documentation. The documentation consists of the “Advanced Fuel Cycle (AFC) Cost Basis” report (Shropshire, et al. 2007), “AFCI Economic Analysis” report, and the “AFCI Economic Tools, Algorithms, and Methodologies Report.” Together, these documents provide the reference cost basis, cost modeling basis, and methodologies needed to support AFCI economic analysis. The application of the reference cost data in the cost and econometric systems analysis models will be supported by this report. These methodologies include: the energy/environment/economic evaluation of nuclear technology penetration in the energy market—domestic and internationally—and impacts on AFCI facility deployment, uranium resource modeling to inform the front-end fuel cycle costs, facility first-of-a-kind to nth-of-a-kind learning with application to deployment of AFCI facilities, cost tradeoffs to meet nuclear non-proliferation requirements, and international nuclear facility supply/demand analysis. The economic analysis will be performed using two cost models. VISION.ECON will be used to evaluate and compare costs under dynamic conditions, consistent with the cases and analysis performed by the AFCI Systems Analysis team. Generation IV Excel Calculations of Nuclear Systems (G4-ECONS) will provide static (snapshot-in-time) cost analysis and will provide a check on the dynamic results. In future analysis, additional AFCI measures may be developed to show the value of AFCI in closing the fuel cycle. 
Comparisons can show AFCI in terms of reduced global proliferation (e.g., reduction in enrichment), greater sustainability through preservation of a natural resource (e.g., reduction in uranium ore depletion), value from weaning the U.S. from energy imports (e.g., measures of energy self-sufficiency), and minimization of future high level waste (HLW) repositories world-wide.
Advanced algorithms for information science
Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.
1998-12-31T23:59:59.000Z
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.
Statistical algorithms in the study of mammalian DNA methylation
Singer, Meromit
2012-01-01T23:59:59.000Z
Statistical algorithms in the study of mammalian DNA methylation; covers the CCGI algorithm (including the handling of non-overlapping CCGIs) and its running-time analysis.
Two Strategies to Speed up Connected Component Labeling Algorithms
Wu, Kesheng; Otoo, Ekow; Suzuki, Kenji
2008-01-01T23:59:59.000Z
Cites set-union algorithms, including: "Efficiency of a good but not linear set union algorithm," J. ACM, vol. 22, no. 2; "Worst-case analysis of set union algorithms," J. ACM, vol. 31, no. 2; and "An improved equivalence algorithm," Commun. ACM, vol. 7.
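The set-union (union-find) algorithms cited in the record above are the core data structure behind fast connected component labeling. A minimal, generic sketch, not the paper's implementation; the tiny binary image and 4-connectivity labeling below are invented for illustration:

```python
import itertools

class DisjointSet:
    """Union-find with path halving and union by rank."""
    def __init__(self, n):
        self.parent = list(range(n))
        self.rank = [0] * n

    def find(self, x):
        # Path halving: each visited node is re-pointed at its grandparent.
        while self.parent[x] != x:
            self.parent[x] = self.parent[self.parent[x]]
            x = self.parent[x]
        return x

    def union(self, a, b):
        ra, rb = self.find(a), self.find(b)
        if ra == rb:
            return
        if self.rank[ra] < self.rank[rb]:
            ra, rb = rb, ra
        self.parent[rb] = ra
        if self.rank[ra] == self.rank[rb]:
            self.rank[ra] += 1

# Label 4-connected foreground pixels of a tiny binary image.
img = [[1, 1, 0],
       [0, 1, 0],
       [0, 0, 1]]
h, w = len(img), len(img[0])
ds = DisjointSet(h * w)
for y, x in itertools.product(range(h), range(w)):
    if img[y][x]:
        if x > 0 and img[y][x - 1]:          # merge with left neighbor
            ds.union(y * w + x, y * w + x - 1)
        if y > 0 and img[y - 1][x]:          # merge with upper neighbor
            ds.union(y * w + x, (y - 1) * w + x)
labels = {ds.find(y * w + x)
          for y, x in itertools.product(range(h), range(w)) if img[y][x]}
print(len(labels))  # 2 connected components
```

With path halving and union by rank, a sequence of m operations runs in near-linear time, which is why these J. ACM results matter for labeling large images.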
An Alternative to Gillespie's Algorithm for Simulating Chemical Reactions
Troina, Angelo
An Alternative to Gillespie's Algorithm for Simulating Chemical Reactions. Roberto Barbuti, Andrea ... We introduce a probabilistic algorithm for the simulation of chemical reactions, which can be used to describe the evolution of chemical reactive systems as described by Gillespie. Moreover, we use our algorithm...
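For reference, the baseline that the record above proposes an alternative to is Gillespie's direct method: sample an exponential waiting time from the total propensity, then pick which reaction fires proportionally to its propensity. A generic sketch; the toy reaction A → B and its rate are invented for illustration:

```python
import random

def gillespie(propensity_fns, update_fns, state, t_end, seed=1):
    """Gillespie's direct method: repeatedly sample when the next
    reaction fires and which one it is, then apply its state update."""
    rng = random.Random(seed)
    t = 0.0
    while t < t_end:
        props = [f(state) for f in propensity_fns]
        total = sum(props)
        if total == 0:
            break                        # no reaction can fire anymore
        t += rng.expovariate(total)      # waiting time ~ Exp(total rate)
        r = rng.uniform(0, total)        # choose reaction proportionally
        acc = 0.0
        for prop, update in zip(props, update_fns):
            acc += prop
            if r <= acc:
                state = update(state)
                break
    return state

# Toy system: A -> B with rate 0.5 per A molecule.
final = gillespie(
    propensity_fns=[lambda s: 0.5 * s["A"]],
    update_fns=[lambda s: {"A": s["A"] - 1, "B": s["B"] + 1}],
    state={"A": 100, "B": 0},
    t_end=100.0,
)
print(final["A"] + final["B"])  # molecule count is conserved: 100
```

Each iteration costs O(number of reactions); the alternatives discussed in the literature mostly aim to reduce this per-step cost or to batch steps.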
Comparison of generality based algorithm variants for automatic taxonomy generation
Madnick, Stuart E.
We compare a family of algorithms for the automatic generation of taxonomies, obtained by adapting the Heymann algorithm in various ways. The core algorithm determines the generality of terms and iteratively inserts them in a growing ...
Martin, A; Venkatesan, Dr V Prasanna
2011-01-01T23:59:59.000Z
Today, in every organization, financial analysis provides the basis for understanding and evaluating the results of business operations and for showing how well a business is doing. This means that organizations can control the operational activities primarily related to corporate finance. One way of doing this is through bankruptcy prediction analysis. This paper develops an ontological model from the financial information of an organization by analyzing the semantics of the financial statements of a business. One of the best bankruptcy prediction models is the Altman Z-score model, which uses financial ratios to predict bankruptcy. From the financial ontological model, the relations between financial data are discovered using a data mining algorithm. By combining the financial domain ontological model with association rule mining and the Z-score model, a new business intelligence model is developed to predict bankruptcy.
A Cone Jet-Finding Algorithm for Heavy-Ion Collisions at LHC Energies
S-L Blyth; M J Horner; T Awes; T Cormier; H Gray; J L Klay; S R Klein; M van Leeuwen; A Morsch; G Odyniec; A Pavlinov
2006-09-15T23:59:59.000Z
Standard jet finding techniques used in elementary particle collisions have not been successful in the high track density of heavy-ion collisions. This paper describes a modified cone-type jet finding algorithm developed for the complex environment of heavy-ion collisions. The primary modification to the algorithm is the evaluation and subtraction of the large background energy, arising from uncorrelated soft hadrons, in each collision. A detailed analysis of the background energy and its event-by-event fluctuations has been performed on simulated data, and a method developed to estimate the background energy inside the jet cone from the measured energy outside the cone on an event-by-event basis. The algorithm has been tested using Monte Carlo simulations of Pb+Pb collisions at $\sqrt{s}=5.5$ TeV for the ALICE detector at the LHC. The algorithm can reconstruct jets with a transverse energy of 50 GeV and above with an energy resolution of $\sim$30%.
System engineering approach to GPM retrieval algorithms
Rose, C. R. (Chris R.); Chandrasekar, V.
2004-01-01T23:59:59.000Z
System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres.
With N0 and D0 calculated at each bin, the rain rate can then be calculated based on a suitable rain-rate model. This paper develops a system engineering interface to the retrieval algorithms while remaining cognizant of system engineering issues, so that it can be used to bridge the divide between algorithm physics and overall mission requirements. Additionally, in line with the systems approach, a methodology is developed such that the measurement requirements pass through the retrieval model and other subsystems and manifest themselves as measurement and other system constraints. A systems model has been developed for the retrieval algorithm that can be evaluated through system-analysis tools such as MATLAB/Simulink.
Recent Developments in Dual Lattice Algorithms
J. Wade Cherrington
2008-10-02T23:59:59.000Z
We review recent progress in numerical simulations with dually transformed SU(2) LGT, starting with a discussion of explicit dual amplitudes and algorithms for SU(2) pure Yang Mills in D=3 and D=4. In the D=3 case, we discuss results that validate the dual algorithm against conventional simulations. We also review how a local, exact dynamical fermion algorithm can naturally be incorporated into the dual framework. We conclude with an outlook for this technique and a look at some of the current challenges we've encountered with this method, specifically critical slowing down and the sign problem.
An Overview of LISA Data Analysis Algorithms
Edward K. Porter
2009-10-02T23:59:59.000Z
The development of search algorithms for gravitational wave sources in the LISA data stream is currently a very active area of research. It has become clear that not only does difficulty lie in searching for the individual sources, but in the case of galactic binaries, evaluating the fidelity of resolved sources also turns out to be a major challenge in itself. In this article we review the current status of developed algorithms for galactic binary, non-spinning supermassive black hole binary and extreme mass ratio inspiral sources. While covering the vast majority of algorithms, we will highlight those that represent the state of the art in terms of speed and accuracy.
Improved Sampling Algorithms in Lattice QCD
Gambhir, Arjun Singh
2015-01-01T23:59:59.000Z
Reverse Monte Carlo (RMC) is an algorithm that incorporates stochastic modification of the action as part of the process that updates the fields in a Monte Carlo simulation. Such update moves have the potential of lowering or eliminating potential barriers that may cause inefficiencies in exploring the field configuration space. The highly successful Cluster algorithms for spin systems can be derived from the RMC framework. In this work we apply RMC ideas to pure gauge theory, aiming to alleviate the critical slowing down observed in the topological charge evolution as well as other long distance observables. We present various formulations of the basic idea and report on our numerical experiments with these algorithms.
Improved Sampling Algorithms in Lattice QCD
Arjun Singh Gambhir; Kostas Orginos
2015-06-19T23:59:59.000Z
Reverse Monte Carlo (RMC) is an algorithm that incorporates stochastic modification of the action as part of the process that updates the fields in a Monte Carlo simulation. Such update moves have the potential of lowering or eliminating potential barriers that may cause inefficiencies in exploring the field configuration space. The highly successful Cluster algorithms for spin systems can be derived from the RMC framework. In this work we apply RMC ideas to pure gauge theory, aiming to alleviate the critical slowing down observed in the topological charge evolution as well as other long distance observables. We present various formulations of the basic idea and report on our numerical experiments with these algorithms.
A new loop-reducing routing algorithm
Park, Sung-Woo
1989-01-01T23:59:59.000Z
LIST OF FIGURES (excerpt): Bellman-Ford Algorithm; Update Tables of Distributed Bellman-Ford Algorithm; Two Types of a Loop (Two-Node Loop, Multi-Node Loop). ... computes distances for all pairs of nodes in the subnet, and distributes updated routing information to all the nodes. The centralized algorithm, however, is vulnerable to a single node failure: if the NRC fails, all nodes in the network must stop their rout...
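The distributed Bellman-Ford updates that the thesis figures refer to can be sketched as repeated edge relaxation until distances stop changing. A hedged illustration only; the four-node topology and link costs are invented, and a real distance-vector protocol would run the relaxation independently at each node:

```python
# Bellman-Ford relaxation over symmetric link costs, the core of
# distance-vector routing. Converges in at most |V|-1 rounds.
INF = float("inf")

def bellman_ford(nodes, edges, source):
    """edges: dict mapping (u, v) -> link cost, treated as symmetric."""
    dist = {v: INF for v in nodes}
    dist[source] = 0
    for _ in range(len(nodes) - 1):
        for (u, v), cost in edges.items():
            # Relax the link in both directions.
            if dist[u] + cost < dist[v]:
                dist[v] = dist[u] + cost
            if dist[v] + cost < dist[u]:
                dist[u] = dist[v] + cost
    return dist

nodes = ["A", "B", "C", "D"]
edges = {("A", "B"): 1, ("B", "C"): 2, ("C", "D"): 1, ("A", "D"): 10}
print(bellman_ford(nodes, edges, "A"))  # {'A': 0, 'B': 1, 'C': 3, 'D': 4}
```

The two-node loop in the figure list is the classic failure mode of this scheme: after a link failure, two neighbors can keep advertising stale routes through each other and count up toward infinity, which is what loop-reducing routing algorithms try to prevent.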
QCDLAB: Designing Lattice QCD Algorithms with MATLAB
Artan Borici
2006-10-09T23:59:59.000Z
This paper introduces QCDLAB, a design and research tool for lattice QCD algorithms. The tool, a collection of MATLAB functions, is based on a "small-code" and a "minutes-run-time" algorithmic design philosophy. The present version uses the Schwinger model on the lattice, a great simplification which shares many features and algorithms with lattice QCD. A typical computing project using QCDLAB is characterised by short codes, short run times, and the ability to make substantial changes in a few seconds. QCDLAB 1.0 can be downloaded from the QCDLAB project homepage: http://phys.fshn.edu.al/qcdlab.html
A proximal point algorithm for sequential feature extraction ...
2011-08-03T23:59:59.000Z
We propose a proximal point algorithm to solve the LAROS problem, that is, the ... We also develop a new stopping criterion for the proximal point algorithm, which...
A new Search via Probability Algorithm for solving Engineering ...
Admin
2012-08-08T23:59:59.000Z
Without loss of generality, we design an algorithm to solve problem (I), the .... Statistics of 30 runs of the ESVP algorithm for the Three-Bar Truss Design.
Efficient Heuristic Algorithms for Maximum Utility Product Pricing ...
2012-11-19T23:59:59.000Z
Nov 19, 2012 ... We provide very efficient implementations of the algorithms of ... cases of related optimal pricing problems admit efficient algorithms, see for ...
An Efficient Algorithm for Computing Robust Minimum Capacity s-t Cuts
Doug Altner
2008-03-20T23:59:59.000Z
Mar 20, 2008 ... In this paper, we present an efficient algorithm for computing minimum capacity s-t cuts under a polyhedral model of robustness. Our algorithm ...
New Design Methods and Algorithms for Multi-component Distillation...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
New Design Methods and Algorithms for Multi-component Distillation Processes (multicomponent.pdf)
Madduri, Kamesh; Ediger, David; Jiang, Karl; Bader, David A.; Chavarría-Miranda, Daniel
2009-05-29T23:59:59.000Z
We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
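The betweenness centrality kernel described above is, in its standard sequential form, Brandes' algorithm: one BFS per source to count shortest paths, then a reverse-order dependency accumulation. A minimal single-threaded sketch for unweighted, undirected graphs (the lock-free parallel version in the record is far more involved); the three-node path graph at the end is an invented example:

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm for unweighted, undirected graphs.
    adj: dict mapping node -> list of neighbors."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s: shortest-path counts (sigma) and predecessor lists.
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        preds = {v: [] for v in adj}
        order = []
        q = deque([s])
        while q:
            v = q.popleft()
            order.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        # Accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        for w in reversed(order):
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    # Undirected: each node pair is counted from both endpoints.
    return {v: c / 2 for v, c in bc.items()}

# Path graph a-b-c: every a-c shortest path passes through b.
print(betweenness({"a": ["b"], "b": ["a", "c"], "c": ["b"]}))
```

This runs in O(VE) for unweighted graphs; the parallel variants in the record parallelize over BFS sources and over the frontier within each BFS.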
Kim, Jeongnim [ORNL] [ORNL; Reboredo, Fernando A [ORNL] [ORNL
2014-01-01T23:59:59.000Z
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one spanned by the lower energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
The cc-pV5Z-F12 basis set: reaching the basis set limit in explicitly correlated calculations
Peterson, Kirk A; Martin, Jan M L
2014-01-01T23:59:59.000Z
We have developed and benchmarked a new extended basis set for explicitly correlated calculations, namely cc-pV5Z-F12. It is offered in two variants, cc-pV5Z-F12 and cc-pV5Z-F12(rev2), the latter of which has additional basis functions on hydrogen not present in the cc-pVnZ-F12 (n=D,T,Q) sequence. A large uncontracted 'reference' basis set is used for benchmarking. cc-pVnZ-F12 (n=D, T, Q, 5) is shown to be a convergent hierarchy. Especially the cc-pV5Z-F12(rev2) basis set can yield the valence CCSD component of total atomization energies (TAEs), without any extrapolation, to an accuracy normally associated with aug-cc-pV{5,6}Z extrapolations. SCF components are functionally at the basis set limit, while the MP2 limit can be approached to as little as 0.01 kcal/mol without extrapolation. The determination of (T) appears to be the most difficult of the three components and cannot presently be accomplished without extrapolation or scaling. (T) extrapolation from cc-pV{T,Q}Z-F12 basis sets, combined with CCSD-F1...
Neutron-Antineutron Oscillations: Theoretical Status and Experimental Prospects
Phillips, D. G.; Snow, W. M.; Babu, K.; Banerjee, S.; Baxter, D. V.; Berezhiani, Z.; Bergevin, M.; Bhattacharya, S.; Brooijmans, G.; Castellanos, L.; et al.,
2014-10-04T23:59:59.000Z
This paper summarizes the relevant theoretical developments, outlines some ideas to improve experimental searches for free neutron-antineutron oscillations, and suggests avenues for future improvement in the experimental sensitivity.
Theoretical Studies of the Dynamics of Gases at Organic Surfaces...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Theoretical Studies of the Dynamics of Gases at Organic Surfaces. Apr 04, 2014, 10:00 AM - 11:00 AM. Diego Troya, Virginia Tech, Blacksburg, Virginia. Chemical Sciences Division Seminar...
Theoretical Approaches to the Evolution of Development and Genetic Architecture
Rice, Sean
Theoretical Approaches to the Evolution of Development and Genetic Architecture. Sean H. Rice. Keywords: evolution of development; genetic architecture; canalization; modularity. Introduction: Heritable variation is the raw ... all the complexities of development and genetic architecture. This was one of the motivations ...
Shrink fit effects on rotordynamic stability: experimental and theoretical study
Jafri, Syed Muhammad Mohsin
2007-09-17T23:59:59.000Z
This dissertation presents an experimental and theoretical study of subsynchronous rotordynamic instability in rotors caused by interference and shrink fit interfaces. The experimental studies show the presence of strong unstable subsynchronous...
Theoretical Minimum Energy Use of a Building HVAC System
Tanskyi, O.
2011-01-01T23:59:59.000Z
This paper investigates the theoretical minimum energy use required by the HVAC system in a particular code-compliant office building. This limit might be viewed as a "Carnot efficiency" for the HVAC system. It assumes that all ventilation and air...
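The "Carnot efficiency" analogy invoked here can be made concrete with the reversible coefficient-of-performance bound for cooling, COP_max = T_cold / (T_hot − T_cold); a minimal sketch, with illustrative temperatures and load that are assumptions, not values from the paper:

```python
def carnot_cop_cooling(t_cold_k: float, t_hot_k: float) -> float:
    """Maximum (reversible) coefficient of performance for pumping heat
    from a cold reservoir at t_cold_k to a hot reservoir at t_hot_k (kelvin)."""
    return t_cold_k / (t_hot_k - t_cold_k)

def min_cooling_power_kw(load_kw: float, t_cold_k: float, t_hot_k: float) -> float:
    """Theoretical minimum electrical power needed to meet a cooling load,
    i.e. the load divided by the Carnot COP."""
    return load_kw / carnot_cop_cooling(t_cold_k, t_hot_k)

# Example: a 30 kW cooling load, 300 K indoors, 310 K outdoors.
w_min = min_cooling_power_kw(30.0, 300.0, 310.0)  # 1.0 kW lower bound
```

Any real HVAC system needs more power than this bound, just as any real heat engine falls short of Carnot efficiency.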
Theoretical study of syngas hydrogenation to methanol on the...
Theoretical study of syngas hydrogenation to methanol on the polar Zn-terminated ZnO(0001) surface...
Solid electrolytes for battery applications: a theoretical perspective
Holzwarth, Natalie
Solid electrolytes for battery applications: a theoretical perspective. Natalie Holzwarth, USA. · Introduction and motivation for solid electrolytes · What can computation do for this project? · Specific examples: LiPON, thiophosphates, other solid electrolytes · Suggestions for collaboration
Software Enabled Virtually Variable Displacement Pumps -Theoretical and Experimental Studies
Li, Perry Y.
Software Enabled Virtually Variable Displacement Pumps - Theoretical and Experimental Studies. This approach combines a fixed displacement pump with valve control to create the functional equivalent of a variable displacement pump, without many of the shortcomings of commercially available variable displacement pumps
Theoretical investigation of energy-trapping mechanism by atomic systems
Srivastava, Rajendra P.
1978-06-01T23:59:59.000Z
The theoretical results are presented here in detail for the atomic device proposed earlier by the author. This device absorbs energy from a continuous radiation source and stores some of it with atoms in metastable states ...
Meeting Shannon: Information-Theoretic Thinking in Engineering and Science
Goyal, Vivek K
Meeting Shannon: Information-Theoretic Thinking in Engineering and Science. Lav R. Varshney, Laboratory for Information and Decision Systems and Research Laboratory of Electronics, Massachusetts Institute of Technology. ... universe for deducing fundamental limits, influences the cognitive processes of information theorists
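The "fundamental limits" this entry refers to are exemplified by Shannon's capacity of the additive white Gaussian noise channel, C = B·log2(1 + SNR); a minimal sketch with illustrative parameters (the bandwidth and SNR below are assumptions for the example, not taken from the source):

```python
import math

def awgn_capacity_bps(bandwidth_hz: float, snr_linear: float) -> float:
    """Shannon capacity of an AWGN channel in bits per second:
    C = B * log2(1 + SNR), with SNR given as a linear (not dB) ratio."""
    return bandwidth_hz * math.log2(1.0 + snr_linear)

# Example: 1 kHz of bandwidth at a linear SNR of 3 (about 4.8 dB).
c = awgn_capacity_bps(1000.0, 3.0)  # 2000 bits/s
```

No coding scheme can reliably exceed this rate, which is exactly the kind of deduced limit the entry describes information theorists reasoning from.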
Learning by Game-Building in Theoretical Computer Science Education
Hutchins-Korte, Laura
2008-01-01T23:59:59.000Z
It has been suggested that theoretical computer science (TCS) suffers more than average from a lack of intrinsic motivation. The reasons provided in the literature include the difficulty of the subject, lack of relevance ...
An Axiomatisation of Computationally Adequate Domain Theoretic Models of FPC
Fiore, Marcelo P; Plotkin, Gordon
1994-01-01T23:59:59.000Z
Categorical models of the metalanguage FPC (a type theory with sums, products, exponentials and recursive types) are defined. Then, domain-theoretic models of FPC are axiomatised and a wide subclass of them —the ...
Physica Scripta An International Journal for Experimental and Theoretical Physics
Stancil, Phillip C.
... universe [3]. D and T are also the fuel in a fusion device. In the core of a fusion plasma, the hydrogen ... detachment phenomenon is closely related to volume recombination in the cold divertor [4-11]. The theoretical ...
Photoelectron Spectroscopy and Theoretical Studies of UF5- and...
Photoelectron Spectroscopy and Theoretical Studies of UF5- and UF6-. Abstract: The UF5- and UF6- anions are produced using electrospray...
Advanced CHP Control Algorithms: Scope Specification
Katipamula, Srinivas; Brambley, Michael R.
2006-04-28T23:59:59.000Z
The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.