Theoretical Basis of Likelihood Methods in Molecular Phylogenetic Inference
Das, Rhiju
Phylogenetic inference for molecular data by the maximum-likelihood approach has been attacked from a theoretical point of view and is seen to be a classical statistical problem involving selection between composite hypotheses.
Learning Active Basis Models by EM-Type Algorithms
Wu, Ying Nian
Zhangzhang Si, Haifeng Gong, Song-Chun Zhu, and Ying Nian Wu. This paper presents an EM-type scheme for learning image templates of object categories where the learning is not fully supervised: locations and scales are incorporated as latent variables into the image generation process, and the template is learned by an EM-type algorithm.
Theoretical Basis for the Design of a DWPF Evacuated Canister
Routt, K.R.
2001-09-17T23:59:59.000Z
This report provides the theoretical bases for use of an evacuated canister for draining a glass melter. Design recommendations are also presented to ensure satisfactory performance in future tests of the concept.
Centrifuge Permeameter for Unsaturated Soils. I: Theoretical Basis and Experimental Developments
Zornberg, Jorge G.
Jorge G. Zornberg, M.ASCE, and John S. McCartney, A.M.ASCE. Abstract: A new centrifuge permeameter is described, along with use of the centrifuge permeameter for concurrent determination of the soil-water retention curve (SWRC) and hydraulic conductivity.
Storjohann, Arne
for Integer Lattice Basis Reduction. Arne Storjohann, Eidgenössische Technische Hochschule, CH-8092 Zürich. The L^3 algorithm transforms a given integer lattice basis b_1, b_2, ..., b_n in Z^n into a reduced basis. The L^3 reduction algorithm presented in [12] guarantees to return a basis with initial vector ...
Optimal Nonmyopic Value of Information in Graphical Models -- Efficient Algorithms and Theoretical Limits. We present the first efficient optimal algorithms for selecting observations for a class of graphical models. In most graphical model tasks, if one designs an efficient algorithm for chain graphs, such as HMMs ...
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.; Gosink, Luke J.; Anderson, Richard M.; Hays, Spencer E.; Tardiff, Mark F.
2013-07-01T23:59:59.000Z
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm’s risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material, and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values which best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, in this paper, we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
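The risk-minimization idea in this abstract can be sketched in a few lines: estimate each error rate from sample scan scores, weight the rates by the threat probability and the consequence of each error type, and pick the operating threshold that minimizes the expected loss. Everything below (the score distributions, costs, and the threshold-based detector family) is hypothetical, not the paper's actual model.

```python
import numpy as np

def expected_risk(threshold, scores_benign, scores_threat,
                  p_threat, cost_fp, cost_fn):
    """Estimate the risk (expected loss) of a threshold detector.

    Scores above `threshold` trigger secondary screening.
    """
    p_fp = np.mean(scores_benign >= threshold)   # false-positive rate
    p_fn = np.mean(scores_threat < threshold)    # false-negative rate
    return (1 - p_threat) * p_fp * cost_fp + p_threat * p_fn * cost_fn

rng = np.random.default_rng(0)
benign = rng.normal(0.0, 1.0, 10_000)   # simulated benign scan scores
threat = rng.normal(2.0, 1.0, 10_000)   # simulated threat scan scores

# Sweep thresholds and keep the risk-minimizing one; both error types
# enter the objective simultaneously instead of fixing one rate.
thresholds = np.linspace(-2, 5, 200)
risks = [expected_risk(t, benign, threat, 1e-4, 1.0, 1e4)
         for t in thresholds]
best = thresholds[int(np.argmin(risks))]
```

Changing `p_threat`, `cost_fp`, or `cost_fn` shifts the optimal threshold, which is how the framework adapts to different assumptions about the environment.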
A theoretical analysis of a pattern recognition algorithm for bank failure prediction
Prieto Orlando, Rodrigo Javier
1994-01-01T23:59:59.000Z
A THEORETICAL ANALYSIS OF A PATTERN RECOGNITION ALGORITHM FOR BANK FAILURE PREDICTION. A Thesis by RODRIGO JAVIER PRIETO ORLANDO, submitted to Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE... Analysis of a Pattern Recognition Algorithm for Bank Failure Prediction. (December 1994) Rodrigo Javier Prieto Orlando, B.S., Texas A&M University. Chair of Advisory Committee: Dr. Tep Sastri. This thesis describes a theoretical analysis and a series...
The Back and Forth Nudging algorithm for data assimilation problems: theoretical results on
Boyer, Edmond
We consider the back and forth nudging algorithm, which has been introduced for data assimilation purposes; the initial state of the system can then be seen as a control vector [LDT86]. Finally, the basic idea of stochastic methods ...
Decision-theoretic consideration of robust hashing: link to practical algorithms
Genève, Université de
Oleksiy Koval. The proliferation of digital and analog content, as well as goods and products, justifies an urgent need for reliable document security, as well as universality, providing asymptotic independence under a complete or partial lack of priors.
Rough-Fuzzy C-Medoids Algorithm and Selection of Bio-Basis for Amino Acid
Pal, Sankar Kumar
Pradipta Maji and Sankar K. Pal, Fellow, IEEE. Abstract: In most pattern recognition algorithms, amino acids cannot be used directly as inputs; a limitation of applying pattern recognition algorithms to analyze these biological subsequences is that they cannot recognize ... of protein data sets. Index Terms: pattern recognition, data mining, c-medoids algorithm, fuzzy sets, rough sets.
Quinn, M.J.
1983-01-01T23:59:59.000Z
The problem of developing efficient algorithms and data structures to solve graph theoretic problems on tightly-coupled MIMD computers is addressed. Several approaches to parallelizing a serial algorithm are examined. A technique is developed which allows the prediction of the expected execution time of some kinds of parallel algorithms. This technique can be used to determine which parallel algorithm is best for a particular application. Two parallel approximate algorithms for the Euclidean traveling salesman problem are designed and analyzed. The algorithms are parallelizations of the farthest-insertion heuristic and Karp's partitioning algorithm. Software lockout, the delay of processes due to contention for shared data structures, can be a significant hindrance to obtaining satisfactory speedup. Using the tactics of indirection and replication, new data structures are devised which can reduce the severity of software lockout. Finally, an upper bound to the speedup of parallel branch-and-bound algorithms which use the best-bound search strategy is determined.
Liang, Min
2012-01-01T23:59:59.000Z
Public-key cryptosystems for quantum messages are considered from two aspects: public-key encryption and public-key authentication. First, we propose a general construction of a quantum public-key encryption scheme, and then construct an information-theoretically secure instance. Next, we propose a quantum public-key authentication scheme that can protect the integrity of quantum messages. This scheme can both encrypt and authenticate quantum messages. It is information-theoretically secure with regard to encryption, and the success probability of tampering decreases exponentially with the security parameter with regard to authentication. Compared with classical public-key cryptosystems, one private key in our schemes corresponds to an exponential number of public keys, and every quantum public key used by the sender is an unknown quantum state to the sender.
Min Liang; Li Yang
2012-05-10T23:59:59.000Z
Mintert, James R.; Davis, Ernest E.; Dhuyvetter, Kevin C.; Bevers, Stan
1999-06-23T23:59:59.000Z
the cash price. Conversely, a positive basis indicates the futures price is less than the cash price. Basis is usually computed using the nearby (closest to expiration) futures contract. For example, in October the nearby corn futures contract... for market in September. The October Live Cattle contract is currently trading at $71 per cwt. But what does that mean to you when feeding and selling finished steers in Hereford, Texas? To more accurately estimate what your actual selling price might be...
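The definition above (basis = cash price minus the nearby futures price) can be shown as a worked example; the quotes below are illustrative numbers, not actual market data.

```python
def basis(cash_price, futures_price):
    """Basis = cash price minus the nearby futures price ($/cwt)."""
    return cash_price - futures_price

# Hypothetical quotes: October Live Cattle futures at $71/cwt and a
# local cash market at $69/cwt.
b = basis(69.0, 71.0)       # negative: cash is under the futures price
expected_price = 71.0 + b   # expected selling price = futures + basis
```

A positive basis would mean the cash price sits above the futures price, exactly as the paragraph states.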
Paris-Sud XI, Université de
Discrete Mathematics and Theoretical Computer Science, DMTCS vol. 14:1, 2012, 147-158. Zbigniew Lonc and Pawel Naroski. A linear time algorithm finds a sequence in H such that each edge of H appears in this sequence exactly once, with v_{i-1}, v_i in e_i and v_{i-1} != v_i.
2012-03-14T23:59:59.000Z
Index Terms—Basis pursuit, distributed optimization, sensor networks, augmented ... and image denoising and restoration [1], [2], compression, fitting and ...
Ghelli, Giorgio
Databases: Functionality, Design, Querying. Giorgio Ghelli. Topics: functionality and use of DBMSs; design of a database; querying a database. References: A. Albano, G. Ghelli, R. Orsini, Basi di Dati Relazionali e ...
R.J. Garrett
2002-01-14T23:59:59.000Z
As part of the internal Integrated Safety Management Assessment verification process, it was determined that there was a lack of documentation that summarizes the safety basis of the current Yucca Mountain Project (YMP) site characterization activities. It was noted that a safety basis would make it possible to establish a technically justifiable graded approach to the implementation of the requirements identified in the Standards/Requirements Identification Document. The Standards/Requirements Identification Documents commit a facility to compliance with specific requirements and, together with the hazard baseline documentation, provide a technical basis for ensuring that the public and workers are protected. This Safety Basis Report has been developed to establish and document the safety basis of the current site characterization activities, establish and document the hazard baseline, and provide the technical basis for identifying structures, systems, and components (SSCs) that perform functions necessary to protect the public, the worker, and the environment from hazards unique to the YMP site characterization activities. This technical basis for identifying SSCs serves as a grading process for the implementation of programs such as Conduct of Operations (DOE Order 5480.19) and the Suspect/Counterfeit Items Program. In addition, this report provides a consolidated summary of the hazards analyses processes developed to support the design, construction, and operation of the YMP site characterization facilities and, therefore, provides a tool for evaluating the safety impacts of changes to the design and operation of the YMP site characterization activities.
Algorithmic and Theoretical Considerations for Computing ...
Steven Glenn Jackson and Alfred Gérard Noël (Speaker)
2009-03-10T23:59:59.000Z
Mar 12, 2009 ... S(g)^K_r denotes the subalgebra of S(g)^K defined by K-invariant polynomials of degree at most r. Steven Glenn Jackson and Alfred Gérard Noël ...
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2007-07-11T23:59:59.000Z
The Guide assists DOE/NNSA field elements and operating contractors in identifying and analyzing hazards at facilities and sites to provide the technical planning basis for emergency management programs. Cancels DOE G 151.1-1, Volume 2.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Reduced Basis Method for Nanodevices Simulation
Pau, George Shu Heng
2008-05-23T23:59:59.000Z
Ballistic transport simulation in nanodevices, which involves self-consistently solving a coupled Schrodinger-Poisson system of equations, is usually computationally intensive. Here, we propose coupling the reduced basis method with the subband decomposition method to improve the overall efficiency of the simulation. By exploiting a posteriori error estimation procedure and greedy sampling algorithm, we are able to design an algorithm where the computational cost is reduced significantly. In addition, the computational cost only grows marginally with the number of grid points in the confined direction.
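The greedy sampling step described in this abstract can be sketched generically: repeatedly pick the parameter with the largest a posteriori error estimate, solve the full-order problem there, and orthonormalize the snapshot into the basis. The `solve` and `error_estimate` callables below are toy stand-ins for the paper's Schrodinger-Poisson machinery, not its actual implementation.

```python
import numpy as np

def greedy_reduced_basis(params, solve, error_estimate, tol, max_size):
    """Greedy construction of a reduced basis (schematic)."""
    basis = []
    for _ in range(max_size):
        errs = [error_estimate(basis, mu) for mu in params]
        worst = int(np.argmax(errs))
        if errs[worst] < tol:          # error bound met everywhere: stop
            break
        u = solve(params[worst])       # full-order snapshot at worst mu
        for v in basis:                # Gram-Schmidt against the basis
            u = u - (v @ u) * v
        basis.append(u / np.linalg.norm(u))
    return np.array(basis)

# Toy demo: a one-parameter family of vectors in R^5.
def solve(mu):
    return np.array([1.0, mu, mu**2, np.sin(mu), np.cos(mu)])

def error_estimate(basis, mu):
    u = solve(mu)
    for v in basis:
        u = u - (v @ u) * v
    return np.linalg.norm(u)   # true projection error as a stand-in bound

B = greedy_reduced_basis(np.linspace(0.0, 1.0, 20), solve,
                         error_estimate, tol=1e-8, max_size=5)
```

In a real reduced basis method the error estimate is a cheap a posteriori bound rather than the exact projection error, which is what makes the greedy sweep affordable.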
The Brain Basis of Emotion
Barrett, Lisa Feldman
The brain basis of emotion: A meta-analytic review. Building 149, Charlestown, MA 02129; lindqukr@nmr.mgh.harvard.edu. Abstract: Researchers have wondered how the brain creates emotions since the early days of psychological science.
Radioactive Waste Management Basis
Perkins, B K
2009-06-03T23:59:59.000Z
The purpose of this Radioactive Waste Management Basis is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
Graph-Theoretic Connectivity Control of Mobile Robot Networks
Pappas, George J.
INVITED PAPER. This paper develops an analysis for groups of vehicles connected by a communication network; control laws are formulated. This research has given rise to connectivity or topology control algorithms that regulate the transmission power [23].
Approximating Power Indices --Theoretical and Empirical Analysis
Rosenschein, Jeff
Yoram Bachrach, School of Computer Science and Engineering, The Hebrew University, Jerusalem, Israel; Amin Saberi, Department of Management Science ... We provide lower bounds for both deterministic and randomized algorithms for calculating power indices.
Quantum Public-Key Encryption with Information Theoretic Security
Jiangyou Pan; Li Yang
2012-02-20T23:59:59.000Z
We propose a definition for the information theoretic security of a quantum public-key encryption scheme, and present bit-oriented and two-bit-oriented encryption schemes satisfying our security definition via the introduction of a new public-key algorithm structure. We extend the scheme to a multi-bit-oriented one, and conjecture that it is also information-theoretically secure, depending directly on the structure of our new algorithm.
A Q-LEARNING ALGORITHM WITH CONTINUOUS STATE SPACE ...
2006-09-22T23:59:59.000Z
Sep 22, 2006 ... stochastic approximation. Then, in section 3, we solve the mountain car task with the newly presented algorithm. 2. Theoretical Framework. 2.1.
Information-theoretic Approaches to Branching in Search Andrew Gilpin
Sandholm, Tuomas W.
Andrew Gilpin, Computer Science Department. Search is a fundamental technique for problem solving in AI. We introduce the information-theoretic paradigm for branching question selection in search algorithms ... constraints over sets of variables.
Milk Futures, Options and Basis
Haigh, Michael; Stockton, Matthew; Anderson, David P.; Schwart Jr., Robert B.
2001-10-12T23:59:59.000Z
The milk futures and options market enables producers and processors to manage price risk. This publication explains hedging, margin accounts, basis and how to track it, and other fundamentals of the futures and options market....
Facility worker technical basis document
SHULTZ, M.V.
2003-08-28T23:59:59.000Z
This technical basis document was developed to support the Tank Farm Documented Safety Analysis (DSA). It describes the criteria and methodology for allocating controls to hazardous conditions with significant facility worker consequences and presents the results of the allocation.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
High performance parallel algorithms for incompressible flows
Sambavaram, Sreekanth Reddy
2002-01-01T23:59:59.000Z
This work develops innovative algorithms using solenoidal basis methods to solve the generalized Stokes problem for 3D MAC (Marker and Cell) and 2D unstructured P1-isoP1 finite element grids. It details a localized algebraic approach to constructing solenoidal bases. An efficient...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Top: Theoretical prediction of capacitance of nanoporous electrodes in dipolar solvent (red) versus ionic liquid (black; Jiang, 2013a); Middle: Activated graphene electrode in...
Facility worker technical basis document
EVANS, C.B.
2003-03-21T23:59:59.000Z
This report documents the technical basis for facility worker safety to support the Tank Farms Documented Safety Analysis. It describes the criteria and methodology for allocating controls to hazardous conditions with significant facility worker consequences and presents the results of the allocation.
INL FCF Basis Review Follow-up
Broader source: Energy.gov (indexed) [DOE]
The four Significant Issues addressed: 1) the analysis of cadmium releases in seismic events, 2) the analysis of radiological releases following an evaluation basis earthquake...
Organic solvent technical basis document
SANDGREN, K.R.
2003-03-22T23:59:59.000Z
This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA), and describes the risk binning process and the technical basis for assigning risk bins for the organic solvent fire representative and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses, as described in this report.
Knowing and Managing Grain Basis
Amosson, Stephen H.; Mintert, James R.; Tierney Jr., William I.; Waller, Mark L.
1999-06-23T23:59:59.000Z
Knowing and Managing Grain Basis Stephen Amosson, Jim Mintert, William Tierney and Mark Waller* Differences in grain prices throughout the world are the result of surplus or deficit production in various regions. In general, grain prices are lower... in the inland producing regions and higher in grain-deficit, densely populated and port regions. Distances between producing and consuming regions explain the price differential. Transfer costs, which include loading or handling and transportation charges...
TCAP Aluminium Dissolution Flowsheet Basis
PIERCE, ROBERTA.
2004-03-01T23:59:59.000Z
The Actinide Technology Section has proposed the use of a nitric acid (HNO3) and potassium fluoride (KF) flowsheet for stripping palladium (Pd) from palladium-coated kieselguhr (Pd/K) and removing aluminum (Al) metal foam from the TCAP coils. The basis for the HNO3-KF flowsheet is drawn from many sources. A brief review of the sources will be presented. The basic flowsheet involves three process steps, each with its own chemistry.
Hanford Generic Interim Safety Basis
Lavender, J.C.
1994-09-09T23:59:59.000Z
The purpose of this document is to identify WHC programs and requirements that are an integral part of the authorization basis for nuclear facilities that are generic to all WHC-managed facilities. The purpose of these programs is to implement the DOE Orders, as WHC becomes contractually obligated to implement them. The Hanford Generic ISB focuses on the institutional controls and safety requirements identified in DOE Order 5480.23, Nuclear Safety Analysis Reports.
FACILITY WORKER TECHNICAL BASIS DOCUMENT
SHULTZ, M.V.
2005-03-31T23:59:59.000Z
This technical basis document was developed to support RPP-13033, ''Tank Farms Documented Safety Analysis'' (DSA). It describes the criteria and methodology for allocating controls to hazardous conditions with significant facility worker (FW) consequences and presents the results of the allocation. The criteria and methodology for identifying controls that address FW safety are in accordance with DOE-STD-3009-94, ''Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses''.
Gravitational lens modeling with basis sets
Birrer, Simon; Refregier, Alexandre
2015-01-01T23:59:59.000Z
We present a strong lensing modeling technique based on versatile basis sets for the lens and source planes. Our method uses high performance Monte Carlo algorithms, allows for an adaptive build up of complexity and bridges the gap between parametric and pixel based reconstruction methods. We apply our method to a HST image of the strong lens system RXJ1131-1231 and show that our method finds a reliable solution and is able to detect substructure in the lens and source planes simultaneously. Using mock data we show that our method is sensitive to sub-clumps with masses four orders of magnitude smaller than the main lens, which corresponds to about $10^8 M_{\\odot}$, without prior knowledge on the position and mass of the sub-clump. The modelling approach is flexible and maximises automation to facilitate the analysis of the large number of strong lensing systems expected in upcoming wide field surveys. The resulting search for dark sub-clumps in these systems, without mass-to-light priors, offers promise for p...
FLAMMABLE GAS TECHNICAL BASIS DOCUMENT
KRIPPS, L.J.
2003-10-09T23:59:59.000Z
This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA) and describes the risk binning process for the flammable gas representative accidents and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous condition based on an evaluation of the event frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSC and/or TSR-level controls.
Algorithms for builder guidelines
Balcomb, J.D.; Lekov, A.B.
1989-06-01T23:59:59.000Z
The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets are also presented. 5 refs., 3 tabs.
The Leap-Frog Algorithm and Optimal Control: Theoretical Aspects
Noakes, Lyle
J. Lyle Noakes, Department of Mathematics, University of Western Australia, Nedlands, W.A. 6907, Australia; Y. Kaya, Department of Mathematics, University of South Australia, The Levels, S.A. 5095, Australia (y.kaya@unisa.edu.au).
A Theoretical and Algorithmic Characterization of Bulge Knees
2015-05-29T23:59:59.000Z
the Pareto front) and bulge knee, to the best of our knowledge, is the only .... magnitudes (stress versus displacement trade-off that is inherent in engineering.
Satisfiability of logic programming based on radial basis function neural networks
Hamadneh, Nawaf; Sathasivam, Saratha; Tilahun, Surafel Luleseged; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)
2014-07-10T23:59:59.000Z
In this paper, we propose a new technique to test the satisfiability of propositional logic programs and the quantified Boolean formula problem in radial basis function neural networks. For this purpose, we built radial basis function neural networks to represent propositional logic in which each clause has exactly three variables. We used the prey-predator algorithm to calculate the output weights of the neural networks, while the K-means clustering algorithm is used to determine the hidden parameters (the centers and the widths). The mean of the sum squared error function is used to measure the activity of the two algorithms. We applied the developed technique with recurrent radial basis function neural networks to represent quantified Boolean formulas. The new technique can be applied to solve many applications such as electronic circuits and NP-complete problems.
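The pipeline this abstract describes (an RBF hidden layer with K-means-placed centers and separately trained output weights) can be sketched for a single three-variable clause. Ordinary least squares stands in for the paper's prey-predator metaheuristic here, and the width, center count, and clause are all illustrative choices.

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    """Plain k-means to place the RBF centers."""
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)].copy()
    for _ in range(iters):
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return centers

def rbf_design(X, centers, width):
    """Gaussian hidden-layer activations for each input row."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(axis=2)
    return np.exp(-d2 / (2.0 * width ** 2))

# Truth table of the clause (a OR b OR c): false only when a=b=c=0.
X = np.array([[a, b, c] for a in (0, 1) for b in (0, 1) for c in (0, 1)],
             dtype=float)
y = (X.sum(axis=1) > 0).astype(float)

centers = kmeans(X, k=8)           # one center per input pattern here
Phi = rbf_design(X, centers, width=0.8)
# Least squares for the output weights (stand-in for prey-predator).
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
pred = (Phi @ w > 0.5).astype(float)
```

With one center per pattern the Gaussian design matrix is nonsingular, so the network reproduces the clause's truth table exactly; the interesting regime in the paper is fewer centers with a metaheuristic search over the weights.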
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Feller, D; Schuchardt, Karen L.; Didier, Brett T.; Elsethagen, Todd; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared; Li, Jun
The Basis Set Exchange (BSE) provides a web-based user interface for downloading and uploading Gaussian-type (GTO) basis sets, including effective core potentials (ECPs), from the EMSL Basis Set Library. It provides an improved user interface and capabilities over its predecessor, the EMSL Basis Set Order Form, for exploring the contents of the EMSL Basis Set Library. The popular Basis Set Order Form and underlying Basis Set Library were originally developed by Dr. David Feller and have been available from the EMSL webpages since 1994. BSE not only allows downloading of the more than 200 Basis sets in various formats; it allows users to annotate existing sets and to upload new sets. (Specialized Interface)
FLAMMABLE GAS TECHNICAL BASIS DOCUMENT
KRIPPS, L.J.
2005-03-03T23:59:59.000Z
This document describes the qualitative evaluation of frequency and consequences for DST and SST representative flammable gas accidents and associated hazardous conditions without controls. The evaluation indicated that safety-significant structures, systems and components (SSCs) and/or technical safety requirements (TSRs) were required to prevent or mitigate flammable gas accidents. Discussion on the resulting control decisions is included. This technical basis document was developed to support WP-13033, Tank Farms Documented Safety Analysis (DSA), and describes the risk binning process for the flammable gas representative accidents and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous condition based on an evaluation of the event frequency and consequence.
FLAMMABLE GAS TECHNICAL BASIS DOCUMENT
KRIPPS, L.J.
2005-02-18T23:59:59.000Z
This document describes the qualitative evaluation of frequency and consequences for double shell tank (DST) and single shell tank (SST) representative flammable gas accidents and associated hazardous conditions without controls. The evaluation indicated that safety-significant SSCs and/or TSRs were required to prevent or mitigate flammable gas accidents. Discussion of the resulting control decisions is included. This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA) and describes the risk binning process for the flammable gas representative accidents and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous condition based on an evaluation of the event frequency and consequence.
Mathematical methods of theoretical physics
Karl Svozil
2015-02-26T23:59:59.000Z
Course material for mathematical methods of theoretical physics intended for an undergraduate audience.
Mathematical methods of theoretical physics
Svozil, Karl
2012-01-01T23:59:59.000Z
Course material for mathematical methods of theoretical physics intended for an undergraduate audience.
Spectral Representations of Uncertainty: Algorithms and Applications
George Em Karniadakis
2005-04-24T23:59:59.000Z
The objectives of this project were: (1) Develop a general algorithmic framework for stochastic ordinary and partial differential equations. (2) Set the polynomial chaos method and its generalization on firm theoretical ground. (3) Quantify uncertainty in large-scale simulations involving CFD, MHD and microflows. The overall goal of this project was to provide DOE with an algorithmic capability that is more accurate and three to five orders of magnitude more efficient than Monte Carlo simulation.
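The efficiency claim over Monte Carlo can be illustrated with a minimal polynomial chaos example, unrelated to the project's actual solvers: expand u(xi) = exp(xi), xi ~ N(0,1), in probabilists' Hermite polynomials. A few dozen quadrature points recover the mean to near machine precision, where Monte Carlo would need millions of samples for comparable accuracy.

```python
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

def pce_coefficients(f, order, nquad=40):
    """Project f(xi), xi ~ N(0,1), onto probabilists' Hermite
    polynomials He_k via Gauss-Hermite quadrature."""
    x, w = hermegauss(nquad)
    w = w / w.sum()                    # normalize to the N(0,1) measure
    coeffs = []
    for k in range(order + 1):
        e_k = np.zeros(order + 1)
        e_k[k] = 1.0
        Hk = hermeval(x, e_k)          # He_k evaluated at the nodes
        norm_k = np.sum(w * Hk * Hk)   # = k! up to quadrature error
        coeffs.append(np.sum(w * f(x) * Hk) / norm_k)
    return np.array(coeffs)

c = pce_coefficients(np.exp, order=8)
mean_pce = c[0]            # the zeroth coefficient is the mean
mean_exact = np.exp(0.5)   # E[exp(xi)] for xi ~ N(0,1)
```

Higher coefficients give the variance and other moments directly, which is the sense in which a spectral representation of uncertainty replaces brute-force sampling.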
Protein Folding Challenge and Theoretical Computer Science Somenath Biswas
Biswas, Somenath
Department of Computer Science. The protein folding problem is: given the chain of amino acids that defines a protein, determine how it folds. The atoms in a protein molecule attract each other, and the challenge is whether one can use an efficient algorithm to carry out protein folding.
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider
2009-12-01T23:59:59.000Z
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules—23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider
2008-03-01T23:59:59.000Z
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules—23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert
2007-04-01T23:59:59.000Z
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 26 cost modules—24 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, and high-level waste.
Communication-Optimal Parallel Algorithm for Strassen's Matrix Multiplication
Moreno Maza, Marc
Mathematics and CS Division, UC Berkeley, Berkeley, CA 94720 (demmel@cs.berkeley.edu); Olga Holtz, Mathematics. The problem has been addressed using many theoretical approaches, algorithmic tools, and software engineering.
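The entry above concerns Strassen's matrix multiplication. As background, one level of the classical Strassen scheme (seven block products in place of the naive eight; this is a minimal sketch, not the communication-optimal parallel algorithm of the paper) can be written as:

```python
import numpy as np

def strassen_once(A, B):
    """One level of Strassen's scheme for even-sized square matrices:
    seven block products M1..M7 instead of the naive eight."""
    h = A.shape[0] // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = (A11 + A22) @ (B11 + B22)
    M2 = (A21 + A22) @ B11
    M3 = A11 @ (B12 - B22)
    M4 = A22 @ (B21 - B11)
    M5 = (A11 + A12) @ B22
    M6 = (A21 - A11) @ (B11 + B12)
    M7 = (A12 - A22) @ (B21 + B22)
    # reassemble C = A @ B from the seven products
    return np.block([[M1 + M4 - M5 + M7, M3 + M5],
                     [M2 + M4, M1 - M2 + M3 + M6]])
```

Recursing on the seven products gives the O(n^log2(7)) arithmetic bound; the cited work is about minimizing the communication cost when those products are distributed across processors.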
Boyer, Edmond
For example, the long-term use of groundwater heat pumps for air conditioning of homes or buildings can induce changes that depend on the hydrogeological background. The presence of organic pollutants in the aquifer can amplify these phenomena, affecting (i) the well productivity and/or (ii) yielding an inappropriate temperature for the use of groundwater heat pumps for air conditioning.
The Static Universe Hypothesis: Theoretical Basis and Observational Tests of the Hypothesis
Thomas B. Andrews
2001-09-07T23:59:59.000Z
From the axiom of the unrestricted repeatability of all experiments, Bondi and Gold argued that the universe is in a stable, self-perpetuating equilibrium state. This concept generalizes the usual cosmological principle to the perfect cosmological principle in which the universe looks the same from any location at any time. Consequently, I hypothesize that the universe is static and in an equilibrium state (non-evolving). New physics is proposed based on the concept that the universe is a pure wave system. Based on the new physics and assuming a static universe, processes are derived for the Hubble redshift and the cosmic background radiation field. Then, following the scientific method, I test deductions of the static universe hypothesis using precise observational data primarily from the Hubble Space Telescope. Applying four different global tests of the space-time metric, I find that the observational data consistently fits the static universe model. The observational data also show that the average absolute magnitudes and physical radii of first-rank elliptical galaxies have not changed over the last 5 to 15 billion years. Because the static universe hypothesis is a logical deduction from the perfect cosmological principle and the hypothesis is confirmed by the observational data, I conclude that the universe is static and in an equilibrium state.
Applied Radiation and Isotopes 61 (2004) 1431-1435. Theoretical basis for long-term measurements of
Yu, K.N.
The equilibrium factor relates the concentration of the emitting radon progeny (218Po + 214Po) to the concentration of radon gas (222Rn). However, methods for long-term monitoring of the concentrations of radon progeny, or of the equilibrium factor (which surrogates the ratio of the concentrations of radon progeny to the concentration of radon gas), are less well established.
Theoretical Physics in Cellular Biology
Theoretical Physics in Cellular Biology: Some Illustrative Case Studies. Living matter obeys the laws of physics, and the principles and methods of theoretical physics ought to find useful application in cellular biology. Based on this observation, I will describe a few specific instances where approaches inspired by theoretical physics allow progress.
Dynamical properties of non-ideal plasma on the basis of effective potentials
Ramazanov, T. S.; Kodanova, S. K.; Moldabekov, Zh. A.; Issanova, M. K. [IETP, Al-Farabi Kazakh National University, 71 Al-Farabi str., Almaty 050040 (Kazakhstan)]
2013-11-15T23:59:59.000Z
In this work, the stopping power has been calculated on the basis of the Coulomb logarithm using effective potentials. Calculations of the Coulomb logarithm and stopping power for different interaction potentials and degrees of ionization are compared, and a comparison with data from other theoretical and experimental works was carried out.
324 Building safety basis criteria document
STEFFEN, J.M.
1999-06-02T23:59:59.000Z
The Safety Basis Criteria document describes the proposed format, content, and schedule for the preparation of an updated Safety Analysis Report (SAR) and Operational Safety Requirements document (OSR) for the 324 Building. These updated safety authorization basis documents are intended to cover stabilization and deactivation activities that will prepare the facility for turnover to the Environmental Restoration Contractor for final decommissioning. The purpose of this document is to establish the specific set of criteria needed for technical upgrades to the 324 Facility Safety Authorization Basis, as required by Project Hanford Procedure HNF-PRO-705, Safety Basis Planning, Documentation, Review, and Approval.
Theoretical Particle Astrophysics
Kamionkowski, Marc
2013-08-07T23:59:59.000Z
Abstract: The research carried out under this grant encompassed work on the early Universe, dark matter, and dark energy. We developed CMB probes for primordial baryon inhomogeneities, primordial non-Gaussianity, cosmic birefringence, gravitational lensing by density perturbations and gravitational waves, and departures from statistical isotropy. We studied the detectability of wiggles in the inflation potential in string-inspired inflation models. We studied novel dark-matter candidates and their phenomenology. This work helped advance the DOE's Cosmic Frontier (and also Energy and Intensity Frontiers) by finding synergies between a variety of different experimental efforts, by developing new searches, science targets, and analyses for existing and forthcoming experiments, and by generating ideas for new next-generation experiments.
Algorithmic Gauss-Manin Connection Algorithms to Compute Hodge-theoretic Invariants
Schulze, Mathias
Appendix: Singular libraries linalg.lib and gaussman.lib. Objects are considered to be equal and form an (equivalence) class. This leads to a classification problem, a normal form being an object in this class. The concept of invariants serves to approach classification.
A LOGICAL INVERTED TAXONOMY OF SORTING ALGORITHMS S.M. Merritt K.K. Lau
Lau, Kung-Kiu
School of Computer Science. We present an inverted taxonomy of sorting algorithms: a high-level, top-down, conceptually simple and symmetric categorization of sorting algorithms. This provides a logical basis for the inverted taxonomy and expands it.
CRAD, Engineering Design and Safety Basis - December 22, 2009...
Broader source: Energy.gov (indexed) [DOE]
CRAD, Engineering Design and Safety Basis - December 22, 2009.
Theoretical Ecology: Continued growth and success
Hastings, Alan
2010-01-01T23:59:59.000Z
EDITORIAL. Theoretical Ecology: Continued growth and success of areas in theoretical ecology. Among the highlights are contributions that represent theoretical ecology from around the world.
Greenhill, Catherine
An algorithm for recognising the exterior square of a matrix. Keywords: matrix, exterior square. The matrix of the induced action on the exterior square of the vector space, with respect to a canonical basis, is called the exterior square of X. The approach involves manipulation of the equations which relate the entries of a matrix to those of its exterior square.
Quantum Robot: Structure, Algorithms and Applications
Dao-Yi Dong; Chun-Lin Chen; Chen-Bin Zhang; Zong-Hai Chen
2005-06-18T23:59:59.000Z
A brand-new kind of robot, the quantum robot, is proposed by fusing quantum theory with robot technology. A quantum robot is essentially a complex quantum system, generally composed of three fundamental parts: MQCU (multi quantum computing units), quantum controller/actuator, and information acquisition units. Corresponding to this system structure, several learning control algorithms, including a quantum searching algorithm and quantum reinforcement learning, are presented for the quantum robot. The theoretical results show that a quantum robot can reduce the O(N^2) complexity of a traditional robot to O(N^(3/2)) using the quantum searching algorithm, and the simulation results demonstrate that the quantum robot is also superior to a traditional robot in efficient learning via a novel quantum reinforcement learning algorithm. Considering these advantages, some potentially important applications of quantum robots are also analyzed and prospected.
Optimized quantum random-walk search algorithms
V. Potocek; A. Gabris; T. Kiss; I. Jex
2008-12-12T23:59:59.000Z
Shenvi, Kempe and Whaley's quantum random-walk search (SKW) algorithm [Phys. Rev. A 67, 052307 (2003)] is known to require $O(\sqrt N)$ oracle queries to find the marked element, where $N$ is the size of the search space. The overall time complexity of the SKW algorithm differs from the best achievable on a quantum computer only by a constant factor. We present improvements to the SKW algorithm which yield a significant increase in success probability, and an improvement in query complexity such that the theoretical limit of a search algorithm succeeding with probability close to one is reached. We point out which improvement can be applied if there is more than one marked element to find.
CRAD, Safety Basis - Los Alamos National Laboratory Waste Characteriza...
Office of Environmental Management (EM)
CRAD, Safety Basis - Los Alamos National Laboratory Waste Characterization, Reduction, and Repackaging Facility.
222-S Laboratory interim safety basis
WEAVER, L.L.
2001-09-10T23:59:59.000Z
The purpose of this document is to establish the Interim Safety Basis (ISB) for the 222-S Laboratory. An ISB is a documented safety basis that provides the justification for the continued operation of the facility until an upgraded documented safety analysis (DSA) is prepared in compliance with 10CFR 830, Subpart B. The 222-S Laboratory ISB is based on revised facility and process descriptions and revised accident analyses that reflect current conditions.
Converting online algorithms to local computation algorithms
Mansour, Yishay; Vardi, Shai; Xie, Ning
2012-01-01T23:59:59.000Z
We propose a general method for converting online algorithms to local computation algorithms by selecting a random permutation of the input, and simulating running the online algorithm. We bound the number of steps of the algorithm using a query tree, which models the dependencies between queries. We improve previous analyses of query trees on graphs of bounded degree, and extend the analysis to the cases where the degrees are distributed binomially, and to a special case of bipartite graphs. Using this method, we give a local computation algorithm for maximal matching in graphs of bounded degree, which runs in time and space O(log^3 n). We also show how to convert a large family of load balancing algorithms (related to balls and bins problems) to local computation algorithms. This gives several local load balancing algorithms which achieve the same approximation ratios as the online algorithms, but run in O(log n) time and space. Finally, we modify existing local computation algorithms for hypergraph 2-color...
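The random-permutation idea described above can be sketched for greedy maximal matching. The toy oracle below is an illustration of the simulation principle only (it answers "is this edge matched?" by recursing on lower-ranked conflicting edges); the names and structure are my own, and it makes no attempt at the bounded time/space guarantees of the paper:

```python
import random

def local_matching_oracle(edges, seed=0):
    """Oracle for the greedy maximal matching induced by a random order.
    Random priorities stand in for a random permutation of the input; an
    edge is matched iff no conflicting edge of lower rank is matched."""
    rng = random.Random(seed)
    rank = {e: rng.random() for e in edges}
    by_vertex = {}                      # vertex -> incident edges
    for e in edges:
        for v in e:
            by_vertex.setdefault(v, []).append(e)
    memo = {}
    def in_matching(e):
        if e not in memo:
            # ranks strictly decrease along the recursion, so it terminates
            memo[e] = all(not in_matching(f)
                          for v in e for f in by_vertex[v]
                          if f != e and rank[f] < rank[e])
        return memo[e]
    return in_matching
```

Querying the oracle on every edge, `{e for e in edges if oracle(e)}`, recovers the full greedy matching; the point of the local-computation framework is that a single query only needs to explore a small neighborhood of the queried edge.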
An Invitation to Algorithmic Information Theory
G. J. Chaitin
1996-09-17T23:59:59.000Z
I'll outline the latest version of my limits of math course. The purpose of this course is to illustrate the proofs of the key information-theoretic incompleteness theorems of algorithmic information theory by means of algorithms written in a specially designed version of LISP. The course is now written in HTML with Java applets, and is available at http://www.research.ibm.com/people/c/chaitin/lm . The LISP now used is much friendlier than before, and because its interpreter is a Java applet it will run in the Netscape browser as you browse my limits of math Web site.
C.E. Kessel; D. Meade; S.C. Jardin
2002-01-18T23:59:59.000Z
The FIRE [Fusion Ignition Research Experiment] design for a burning plasma experiment is described in terms of its physics basis and engineering features. Systems analysis indicates that the device has a wide operating space to accomplish its mission, both for the ELMing H-mode reference and the high bootstrap current/high beta advanced tokamak regimes. Simulations with 1.5D transport codes reported here both confirm and constrain the systems projections. Experimental and theoretical results are used to establish the basis for successful burning plasma experiments in FIRE.
James R. Chelikowsky
2009-03-31T23:59:59.000Z
The work reported here took place at the University of Minnesota from September 15, 2003 to November 14, 2005. This funding resulted in 10 invited articles or book chapters, 37 articles in refereed journals and 13 invited talks. The funding helped train 5 PhD students. The research supported by this grant focused on developing theoretical methods for predicting and understanding the properties of matter at the nanoscale. Within this regime, new phenomena occur that are characteristic of neither the atomic limit, nor the crystalline limit. Moreover, this regime is crucial for understanding the emergence of macroscopic properties such as ferromagnetism. For example, elemental Fe clusters possess magnetic moments that reside between the atomic and crystalline limits, but the transition from the atomic to the crystalline limit is not a simple interpolation between the two size regimes. To capitalize properly on predicting such phenomena in this transition regime, a deeper understanding of the electronic, magnetic and structural properties of matter is required, e.g., electron correlation effects are enhanced within this size regime and the surface of a confined system must be explicitly included. A key element of our research involved the construction of new algorithms to address problems peculiar to the nanoscale. Typically, one would like to consider systems with thousands of atoms or more, e.g., a silicon nanocrystal that is 7 nm in diameter would contain over 10,000 atoms. Previous ab initio methods could address systems with hundreds of atoms whereas empirical methods can routinely handle hundreds of thousands of atoms (or more). However, these empirical methods often rely on ad hoc assumptions and lack incorporation of structural and electronic degrees of freedom. The key theoretical ingredients in our work involved the use of ab initio pseudopotentials and density functional approaches. 
The key numerical ingredients involved the implementation of algorithms for solving the Kohn-Sham equation without the use of an explicit basis, i.e., a real space grid. We invented algorithms for a solution of the Kohn-Sham equation based on Chebyshev 'subspace filtering'. Our filtering algorithms dramatically enhanced our ability to explore systems with thousands of atoms, i.e., we examined silicon quantum dots with approximately 11,000 atoms (or 40,000 electrons). We applied this algorithm to a number of nanoscale systems to examine the role of quantum confinement on electronic and magnetic properties: (1) Doping of nanocrystals and nanowires, including both magnetic and non-magnetic dopants and the role of self-purification; (2) Optical excitations and electronic properties of nanocrystals; (3) Intrinsic defects in nanostructures; and (4) The emergence of ferromagnetism from atoms to crystals.
Technical basis document for natural event hazards
CARSON, D.M.
2003-08-28T23:59:59.000Z
This technical basis document was developed to support the documented safety analysis (DSA) and describes the risk binning process and the technical basis for assigning risk bins for natural event hazard (NEH)-initiated accidents. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous condition based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls. This report documents the technical basis for assigning risk bins for the natural event hazards representative accident and associated represented hazardous conditions.
Fsusy and Field Theoretical Construction
M. B. Sedra; J. Zerouaoui
2009-12-18T23:59:59.000Z
Following our previous work on fractional spin symmetries (FSS) \\cite{6, 7}, we consider here the construction of field theoretical models that are invariant under the $D=2(1/3,1/3)$ supersymmetric algebra.
Master track Theoretical Biology & Bioinformatics
Utrecht, Universiteit
Modeling and bioinformatics are an important part of the master track Theoretical Biology & Bioinformatics, which provides courses introducing you to the basic concepts of modeling. Our two MSc courses, "Computational Biology" and "Bioinformatics and Evolutionary Genomics", are taken by students during their master.
CRAD, Facility Safety- Nuclear Facility Safety Basis
Broader source: Energy.gov [DOE]
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) that can be used for assessment of a contractor's Nuclear Facility Safety Basis.
PRELIMINARY SELECTION OF MGR DESIGN BASIS EVENTS
J.A. Kappes
1999-09-16T23:59:59.000Z
The purpose of this analysis is to identify the preliminary design basis events (DBEs) for consideration in the design of the Monitored Geologic Repository (MGR). For external events and natural phenomena (e.g., earthquake), the objective is to identify those initiating events that the MGR will be designed to withstand. Design criteria will ensure that radiological release scenarios resulting from these initiating events are beyond design basis (i.e., have a scenario frequency less than once per million years). For internal (i.e., human-induced and random equipment failure) events, the objective is to identify credible event sequences that result in bounding radiological releases. These sequences will be used to establish design basis criteria for MGR structures, systems, and components (SSCs) in order to prevent or mitigate radiological releases. The safety strategy presented in this analysis for preventing or mitigating DBEs is based on the preclosure safety strategy outlined in ''Strategy to Mitigate Preclosure Offsite Exposure'' (CRWMS M&O 1998f). DBE analysis is necessary to provide feedback and requirements to the design process, and also to demonstrate compliance with proposed 10 CFR 63 (Dyer 1999b) requirements. DBE analysis is also required to identify and classify the SSCs that are important to safety (ITS).
Benkart, Georgia
2008-01-01T23:59:59.000Z
This article contains an investigation of the equitable basis for the Lie algebra sl_2. Denoting this basis by {x,y,z}, we have [x,y] = 2x + 2y, [y,z] = 2y + 2z, [z,x] = 2z + 2x. One focus of our study is the group of automorphisms G generated by exp(ad x*), exp(ad y*), exp(ad z*), where {x*,y*,z*} is the basis for sl_2 dual to {x,y,z} with respect to the trace form (u,v) = tr(uv). We show that G is isomorphic to the modular group PSL_2(Z). Another focus of our investigation is the lattice L = Zx + Zy + Zz. We prove that the orbit G(x) equals {u in L | (u,u) = 2}. We determine the precise relationship between (i) the group G, (ii) the group of automorphisms for sl_2 that preserve L, (iii) the group of automorphisms and antiautomorphisms for sl_2 that preserve L, and (iv) the group of isometries for (,) that preserve L. We obtain analogous results for the lattice L* = Zx* + Zy* + Zz*. Relative to the equitable basis, the matrix of the trace form is a Cartan matrix of hyperbolic type; consequently, we identify the equitable ...
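The bracket relations quoted in the abstract can be checked directly in a 2x2 matrix realization. The particular matrices below are one convenient realization chosen here for illustration (an assumption, not taken from the paper):

```python
import numpy as np

# One 2x2 realization of the equitable basis {x, y, z} of sl_2
# (traceless matrices satisfying the cyclic bracket relations).
x = np.array([[1, 0], [0, -1]])
y = np.array([[-1, 2], [0, 1]])
z = np.array([[-1, 0], [-2, 1]])

def bracket(u, v):
    return u @ v - v @ u

# the defining relations of the equitable basis
assert np.array_equal(bracket(x, y), 2 * x + 2 * y)
assert np.array_equal(bracket(y, z), 2 * y + 2 * z)
assert np.array_equal(bracket(z, x), 2 * z + 2 * x)

# trace form: (u, u) = tr(u^2) = 2 for each basis vector, consistent
# with the orbit description G(x) = {u in L | (u, u) = 2}
for u in (x, y, z):
    assert np.trace(u @ u) == 2
```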
Technical basis document for external events
OBERG, B.D.
2003-03-22T23:59:59.000Z
This document supports the Tank Farms Documented Safety Analysis and presents the technical basis for the frequencies of externally initiated accidents. The consequences of externally initiated events are discussed in other documents that correspond to the accident that was caused by the external event. The external events include aircraft crash, vehicle accident, range fire, and rail accident.
Waste transfer leaks technical basis document
ZIMMERMAN, B.D.
2003-03-22T23:59:59.000Z
This document provides technical support for the onsite radiological and toxicological, and offsite toxicological, portions of the waste transfer leak accident presented in the Documented Safety Analysis. It provides the technical basis for frequency and consequence bin selection, and selection of safety SSCs and TSRs.
Review and Approval of Nuclear Facility Safety Basis and Safety Design Basis Documents
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
2014-12-19T23:59:59.000Z
This Standard describes a framework and the criteria to be used for approval of (1) safety basis documents, as required by 10 Code of Federal Regulation (C.F.R.) 830, Nuclear Safety Management, and (2) safety design basis documents, as required by Department of Energy (DOE) Standard (STD)-1189-2008, Integration of Safety into the Design Process.
Reconstruction algorithms for MRI
Bilgiç, Berkin
2013-01-01T23:59:59.000Z
This dissertation presents image reconstruction algorithms for Magnetic Resonance Imaging (MRI) that aim to increase imaging efficiency. Algorithms that reduce imaging time without sacrificing image quality and ...
Energy Science and Technology Software Center (OSTI)
002651IBMPC00 Algorithm for Accounting for the Interactions of Multiple Renewable Energy Technologies in Estimation of Annual Performance
System Design and the Safety Basis
Ellingson, Darrel
2008-05-06T23:59:59.000Z
The objective of this paper is to present the Bechtel Jacobs Company, LLC (BJC) Lessons Learned for system design as it relates to safety basis documentation. BJC has had to reconcile incomplete or outdated system description information with the current facility safety basis in a number of situations in recent months. This paper has relevance in multiple topical areas including documented safety analysis, decontamination & decommissioning (D&D), safety basis (SB) implementation, safety and design integration, potential inadequacy of the safety analysis (PISA), technical safety requirements (TSR), and unreviewed safety questions. BJC learned that nuclear safety compliance relies on adequate and well documented system design information. A number of PISAs and TSR violations occurred due to inadequate or erroneous system design information. As a corrective action, BJC assessed the occurrences caused by system design-safety basis interface problems. Safety systems reviewed included the Molten Salt Reactor Experiment (MSRE) Fluorination System, the K-1065 fire alarm system, and the K-25 Radiation Criticality Accident Alarm System. The conclusion was that an inadequate knowledge of system design could result in continuous non-compliance issues relating to nuclear safety. This was especially true with older facilities that lacked current as-built drawings coupled with the loss of 'historical knowledge' as personnel retired or moved on in their careers. Walkdowns of systems and the updating of drawings are imperative for nuclear safety compliance. System design integration with the safety basis has relevance across the Department of Energy (DOE) complex. This paper presents the BJC Lessons Learned in this area. It will be of benefit to DOE contractors that manage and operate an aging population of nuclear facilities.
Giorda, Paolo [Institute for Scientific Interchange, Villa Gualino Viale Settimio Severo 65, 10133 Turin (Italy); Iorio, Alfredo [Center for Theoretical Physics, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, Massachusetts 02139-4307 (United States); INFN, Rome (Italy); Sen, Samik [School of Mathematics, Trinity College Dublin, Dublin 2 (Ireland); Sen, Siddhartha [School of Mathematics, Trinity College Dublin, Dublin 2 (Ireland); IACS, Jadavpur, Calcutta 700032 (India)
2004-09-01T23:59:59.000Z
We propose a semiclassical version of Shor's quantum algorithm to factorize integer numbers, based on spin-(1/2) SU(2) generalized coherent states. Surprisingly, we find evidence that the algorithm's success probability is not too severely modified by our semiclassical approximation. This suggests that it is worth pursuing practical implementations of the algorithm on semiclassical devices.
Critical Review of Theoretical Models for Anomalous Effects (Cold Fusion) in Deuterated Metals
Chechin, V A; Rabinowitz, M; Kim, Y E
1994-01-01T23:59:59.000Z
We briefly summarize the reported anomalous effects in deuterated metals at ambient temperature, commonly known as "Cold Fusion" (CF), with an emphasis on important experiments as well as the theoretical basis for the opposition to interpreting them as cold fusion. Then we critically examine more than 25 theoretical models for CF, including unusual nuclear and exotic chemical hypotheses. We conclude that they do not explain the data.
Critical Review of Theoretical Models for Anomalous Effects (Cold Fusion) in Deuterated Metals
V. A. Chechin; V. A. Tsarev; M. Rabinowitz; Y. E. Kim
2003-04-06T23:59:59.000Z
We briefly summarize the reported anomalous effects in deuterated metals at ambient temperature, commonly known as "Cold Fusion" (CF), with an emphasis on important experiments as well as the theoretical basis for the opposition to interpreting them as cold fusion. Then we critically examine more than 25 theoretical models for CF, including unusual nuclear and exotic chemical hypotheses. We conclude that they do not explain the data.
Theoretical Computer Science 187 (1997) 249-262
Garbey, Marc
MAPLE for the analysis of bifurcation phenomena in gas combustion. A. El Hamidi, M. Garbey. Many experimental and theoretical works in condensed-phase and gas combustion concern a premixed burner flame. This paper illustrates the use of the symbolic manipulation language MAPLE for the analysis of bifurcation phenomena in gas combustion.
Integrated Safety Management System as the Basis for Work Planning...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Integrated Safety Management System as the Basis for Work Planning and Control for Research and Development.
ORISE: The Medical Basis for Radiation-Accident Preparedness...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
The Medical Basis for Radiation-Accident Preparedness: Medical Management. Proceedings of the Fifth International REACTS Symposium on the Medical Basis for Radiation-Accident Preparedness.
Technical Cost Modeling - Life Cycle Analysis Basis for Program...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Technical Cost Modeling - Life Cycle Analysis Basis for Program Focus. Polymer Composites Research in the LM Materials Program Overview.
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
Perkó, Zoltán, E-mail: Z.Perko@tudelft.nl; Gilli, Luca, E-mail: Gilli@nrg.eu; Lathouwers, Danny, E-mail: D.Lathouwers@tudelft.nl; Kloosterman, Jan Leen, E-mail: J.L.Kloosterman@tudelft.nl
2014-03-01T23:59:59.000Z
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time since the high order grids necessary for accurately estimating the near zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems.
The test cases show consistently good performance, both in terms of the accuracy of the resulting PC representation of quantities and the computational costs associated with constructing the sparse PCE. Basis adaptivity also seems to make the use of PC techniques possible for problems with a higher number of input parameters (15–20), alleviating a well-known limitation of the traditional approach. The prospect of larger-scale applicability and the simplicity of implementation make such adaptive PC algorithms particularly appealing for the sensitivity and uncertainty analysis of complex systems and legacy codes.
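The projection step behind NISP can be illustrated in one dimension. The sketch below is not the FANISP code: the Hermite basis, quadrature degree, and test response are illustrative choices for a single standard-normal input, estimating PC coefficients by Gauss-Hermite quadrature.

```python
import numpy as np
from numpy.polynomial import hermite_e as He
from math import factorial, sqrt, pi

def pce_coeffs(f, order, quad_deg=20):
    """Non-intrusive spectral projection in 1-D: estimate the coefficients of
    f(X), X ~ N(0,1), in the probabilists' Hermite basis He_k via Gauss-Hermite
    quadrature.  c_k = E[f(X) He_k(X)] / E[He_k(X)^2], with E[He_k^2] = k!."""
    x, w = He.hermegauss(quad_deg)            # nodes/weights for weight exp(-x^2/2)
    fx = f(x)
    coeffs = []
    for k in range(order + 1):
        Hk = He.hermeval(x, [0.0]*k + [1.0])  # He_k evaluated at the nodes
        coeffs.append(np.sum(w * fx * Hk) / (sqrt(2.0*pi) * factorial(k)))
    return np.array(coeffs)

# x^2 = He_0(x) + He_2(x), so the exact sparse PCE is c = [1, 0, 1, 0, 0]
c = pce_coeffs(lambda x: x**2, order=4)
```

The near-zero coefficients (here c_1, c_3, c_4) are exactly the entries a basis-adaptive scheme would drop from the expansion.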
Systematic expansion for infrared oscillator basis extrapolations
R. J. Furnstahl; S. N. More; T. Papenbrock
2014-03-20T23:59:59.000Z
Recent work has demonstrated that the infrared effects of harmonic oscillator basis truncations are well approximated by imposing a partial-wave Dirichlet boundary condition at a properly identified radius L. This led to formulas for extrapolating the corresponding energy E_L and other observables to infinite L and thus infinite basis size. Here we reconsider the energy for a two-body system with a Dirichlet boundary condition at L to identify and test a consistent and systematic expansion for E_L that depends only on observables. We also generalize the energy extrapolation formula to nonzero angular momentum, and apply it to the deuteron. Formulas given previously for extrapolating the radius are derived in detail.
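Schematically, the leading infrared correction discussed above takes the form below (a sketch based on conventions common in this literature, not an equation reproduced from the paper; a_infinity is a fitted amplitude and mu the reduced mass):

```latex
% Leading-order IR correction for a two-body bound state in an oscillator
% basis, with L the effective hard-wall radius and k_\infty the binding momentum:
E_L \;\simeq\; E_\infty + a_\infty \, e^{-2 k_\infty L},
\qquad
k_\infty = \frac{\sqrt{-2\mu E_\infty}}{\hbar}.
```

The systematic expansion referred to in the abstract organizes the corrections to this leading term in powers of 1/(k_infinity L), with coefficients that depend only on observables.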
TECHNICAL BASIS DOCUMENT FOR NATURAL EVENT HAZARDS
KRIPPS, L.J.
2006-07-31T23:59:59.000Z
This technical basis document was developed to support the documented safety analysis (DSA) and describes the risk binning process and the technical basis for assigning risk bins for natural event hazard (NEH)-initiated accidents. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls.
Technical basis document for natural event hazards
CARSON, D.M.
2003-03-20T23:59:59.000Z
This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA), and describes the risk binning process and the technical basis for assigning risk bins for natural event hazards (NEH)-initiated representative accident and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, ''Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses'', as described in this report.
Chopped random-basis quantum optimization
Tommaso Caneva; Tommaso Calarco; Simone Montangero
2011-08-22T23:59:59.000Z
In this work we describe in detail the "Chopped RAndom Basis" (CRAB) optimal control technique recently introduced to optimize t-DMRG simulations [arXiv:1003.3750]. Here we study the efficiency of this control technique in optimizing different quantum processes, and we show that in the considered cases we obtain results equivalent to those obtained via different optimal control methods while using fewer resources. We propose CRAB optimization as a general and versatile optimal control technique.
VORTEX BREAKDOWN INCIPIENCE: THEORETICAL CONSIDERATIONS
Erlebacher, Gordon
VORTEX BREAKDOWN INCIPIENCE: THEORETICAL CONSIDERATIONS. S. A. Berger, Department of Mechanical Engineering ... in Science and Engineering, NASA Langley Research Center, Hampton, VA 23681-0001. ABSTRACT: Proposed explanations of vortex breakdown include (i) an analogy with a three-dimensional boundary layer (Hall [2,3], Mager [4]) and (ii) vortex breakdown as a consequence of hydrodynamic instability. The sensitivity ...
Theoretical Perspectives on Protein Folding
Thirumalai, Devarajan
Theoretical Perspectives on Protein Folding. D. Thirumalai, Edward P. O'Brien, Greg Morrison, ... Understanding how monomeric proteins fold under in vitro conditions is crucial to describing their functions; much remains to be done to solve the protein folding problem in the broadest sense. Annu. Rev. Biophys. 2010, p. 159.
Theoretical Chemistry Theory, Computation, and
Gherman, Benjamin F.
Theoretical Chemistry Accounts: Theory, Computation, and Modeling. ISSN 1432-881X, Volume 128. In order to explore the origin of this preference, density functional theory (DFT) calculations have been ... the N-terminus of nascent eubacterial proteins during protein synthesis [14]. As PDF is essential for bacterial survival ...
Climate Dynamics Observational, Theoretical and
Dong, Xiquan
Climate Dynamics: Observational, Theoretical and Computational Research on the Climate System. ... and -22.5 Wm-2, respectively, indicating a net cooling effect of clouds on the TOA radiation budget; ... respectively, resulting in a larger net cooling effect of 2.9 Wm-2 in the model simulations.
Theoretical study of cyclone design
Wang, Lingjuan
2005-08-29T23:59:59.000Z
differential equation. Barth's "static particle" theory, in which the collection probability of a particle (with diameter of d50) is 50% when the forces acting on it are balanced, combined with the force balance equation, was applied in the theoretical analyses for the models...
Adaptive Scheduling Algorithms for Planet Searches
Eric B. Ford
2007-12-17T23:59:59.000Z
High-precision radial velocity planet searches have surveyed over ~2000 nearby stars and detected over ~200 planets. While these same stars likely harbor many additional planets, they will become increasingly challenging to detect, as they tend to have relatively small masses and/or relatively long orbital periods. Therefore, observers are increasing the precision of their observations, continuing to monitor stars over decade timescales, and also preparing to survey thousands more stars. Given the considerable amounts of telescope time required for such observing programs, it is important to use the available resources as efficiently as possible. Previous studies have found that a wide range of predetermined scheduling algorithms result in planet searches with similar sensitivities. We have developed adaptive scheduling algorithms which have a solid basis in Bayesian inference and information theory and also are computationally feasible for modern planet searches. We have performed Monte Carlo simulations of plausible planet searches to test the power of adaptive scheduling algorithms. Our simulations demonstrate that planet searches performed with adaptive scheduling algorithms can simultaneously detect more planets, detect less massive planets, and measure orbital parameters more accurately than comparable surveys using a non-adaptive scheduling algorithm. We expect that these techniques will be particularly valuable for the N2K radial velocity planet search for short-period planets as well as future astrometric planet searches with the Space Interferometry Mission which aim to detect terrestrial mass planets.
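The flavor of Bayesian adaptive scheduling can be sketched in a toy one-parameter setting. This is an illustration of the general idea, not the authors' algorithm: the period grid, noise level, circular-orbit model, and greedy predictive-variance criterion are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)
periods = np.linspace(2.0, 20.0, 200)      # hypothesis grid of orbital periods (days)
logpost = np.zeros_like(periods)           # flat prior over the grid
sigma, K, true_P = 2.0, 10.0, 7.3          # noise (m/s), semi-amplitude, true period

def model(t, P):
    """Toy circular-orbit radial-velocity signal."""
    return K*np.sin(2.0*np.pi*t/P)

def next_obs_time(cands):
    """Greedy adaptive rule: observe where the posterior-weighted models
    disagree most (posterior-predictive variance, a simple proxy for
    expected information gain)."""
    w = np.exp(logpost - logpost.max()); w = w/w.sum()
    preds = model(cands[:, None], periods[None, :])
    mean = preds @ w
    var = ((preds - mean[:, None])**2) @ w
    return cands[np.argmax(var)]

cands = np.linspace(0.0, 30.0, 300)
t_obs = []
for _ in range(15):
    tn = next_obs_time(cands)
    v = model(tn, true_P) + sigma*rng.standard_normal()   # simulated measurement
    logpost += -0.5*((v - model(tn, periods))/sigma)**2   # Bayesian update
    t_obs.append(tn)

best_P = periods[np.argmax(logpost)]
```

Each new observation is placed where competing period hypotheses predict the most different velocities, which is what makes the schedule adaptive rather than predetermined.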
Experiments with a Block Sorting Text Compression Algorithm
Fenwick, Peter
The original paper did little more than present the algorithm, with strong advice for efficient ... on aspects of its operation. Consideration of the possible efficiency of text compression leads to the revival of ideas by Shannon as the basis of a text compressor, and then to the classification of the Block ... compressors.
Greedy forward selection algorithms to Sparse Gaussian Process Regression
Yao, Xin
We re-examine a previous basis vector selection criterion proposed by Smola and Bartlett [20], referred to as the loss-smola criterion ... We compare the full greedy algorithms induced by the loss ... The proposed method is always better than loss-keert in both generalization performance and running time.
Williams, P.T.
1993-09-01T23:59:59.000Z
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H{sup 1} Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
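The θ-implicit time integration mentioned in the theoretical basis can be sketched on a scalar model problem. This is the generic θ-scheme only, standing in for (not reproducing) the dissertation's full finite-element discretization.

```python
def theta_step(u, dt, lam, theta):
    """One step of the theta-implicit scheme for the linear model du/dt = lam*u:
    u_{n+1} = u_n + dt*[(1-theta)*lam*u_n + theta*lam*u_{n+1}], solved for u_{n+1}.
    theta=0: explicit Euler; theta=1/2: Crank-Nicolson; theta=1: backward Euler."""
    return u*(1.0 + (1.0 - theta)*dt*lam)/(1.0 - theta*dt*lam)

# decay problem u' = -u, u(0) = 1, integrated to t = 1 with Crank-Nicolson
u, dt, lam = 1.0, 0.1, -1.0
for _ in range(10):
    u = theta_step(u, dt, lam, theta=0.5)
# u is now close to exp(-1) ~ 0.3679; the theta=1/2 choice is second-order accurate
```

In the CCM setting the same one-parameter family controls the implicitness of the semi-discrete momentum system rather than a scalar ODE.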
Technical basis for internal dosimetry at Hanford
Sula, M.J.; Carbaugh, E.H.; Bihl, D.E.
1991-07-01T23:59:59.000Z
The Hanford Internal Dosimetry Program, administered by Pacific Northwest Laboratory for the US Department of Energy, provides routine bioassay monitoring for employees who are potentially exposed to radionuclides in the workplace. This report presents the technical basis for routine bioassay monitoring and the assessment of internal dose at Hanford. The radionuclides of concern include tritium, corrosion products ({sup 58}Co, {sup 60}Co, {sup 54}Mn, and {sup 59}Fe), strontium, cesium, iodine, europium, uranium, plutonium, and americium. Sections on each of these radionuclides discuss the sources and characteristics; dosimetry; bioassay measurements and monitoring; dose measurement, assessment, and mitigation; and bioassay follow-up treatment. 78 refs., 35 figs., 115 tabs.
Structural Basis for Activation of Cholera Toxin
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
NDRPProtocolTechBasisCompiled020705.doc
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Theoretical issues in Spheromak research
Cohen, R. H.; Hooper, E. B.; LoDestro, L. L.; Mattor, N.; Pearlstein, L. D.; Ryutov, D. D.
1997-04-01T23:59:59.000Z
This report summarizes the state of theoretical knowledge of several physics issues important to the spheromak. It was prepared as part of the preparation for the Sustained Spheromak Physics Experiment (SSPX), which addresses these goals: energy confinement and the physics which determines it; the physics of transition from a short-pulsed experiment, in which the equilibrium and stability are determined by a conducting wall (``flux conserver``) to one in which the equilibrium is supported by external coils. Physics is examined in this report in four important areas. The status of present theoretical understanding is reviewed, physics which needs to be addressed more fully is identified, and tools which are available or require more development are described. Specifically, the topics include: MHD equilibrium and design, review of MHD stability, spheromak dynamo, and edge plasma in spheromaks.
Call for Papers: 5th Workshop on Algorithm Engineering (WAE 2001)
Brodal, Gerth Stølting
Call for Papers: 5th Workshop on Algorithm Engineering (WAE 2001), BRICS, University of Aarhus, Denmark, August 28-30, 2001. Scope: The Workshop on Algorithm Engineering covers research in all aspects of algorithm engineering and directions for future research. WAE 2001 is sponsored by BRICS and EATCS (the European Association for Theoretical Computer Science).
The theoretical significance of G
T. Damour
1999-01-22T23:59:59.000Z
The quantization of gravity, and its unification with the other interactions, is one of the greatest challenges of theoretical physics. Current ideas suggest that the value of G might be related to the other fundamental constants of physics, and that gravity might be richer than the standard Newton-Einstein description. This gives added significance to measurements of G and to Cavendish-type experiments.
On the Implementation of Interior Point Decomposition Algorithms for ...
2005-10-31T23:59:59.000Z
Oct 31, 2005 ... Industrial Engineering and Management Science Technical Report 2005-04 ... We also describe our interior decomposition algorithms using the Jordan algebra operations. ... The theoretical analysis assumes taking fixed steps along the ... portfolio vector, and by r̃ ∈ R^n the random vector of asset returns ...
Nonlocal Monte Carlo algorithms for statistical physics applications
Janke, Wolfhard
Nonlocal Monte Carlo algorithms for statistical physics applications. Wolfhard Janke, Institut fu... Applications range from magnets to polymers or proteins, to mention only a few classical problems, and to quantum statistical problems; results can be compared with different theoretical approaches such as field theory or series expansions, and, of course, with experiments.
Theoretical Uncertainties in Inflationary Predictions
William H. Kinney; Antonio Riotto
2006-03-09T23:59:59.000Z
With present and future observations becoming of higher and higher quality, it is timely and necessary to investigate the most significant theoretical uncertainties in the predictions of inflation. We show that our ignorance of the entire history of the Universe, including the physics of reheating after inflation, translates to considerable errors in observationally relevant parameters. Using the inflationary flow formalism, we estimate that for a spectral index $n$ and tensor/scalar ratio $r$ in the region favored by current observational constraints, the theoretical errors are of order $\Delta n / |n - 1| \sim 0.1 - 1$ and $\Delta r / r \sim 0.1 - 1$. These errors represent the dominant theoretical uncertainties in the predictions of inflation, and are generically of the order of or larger than the projected uncertainties in future precision measurements of the Cosmic Microwave Background. We also show that the lowest-order classification of models into small field, large field, and hybrid breaks down when higher order corrections to the dynamics are included. Models can flow from one region to another.
Technical Basis for PNNL Beryllium Inventory
Johnson, Michelle Lynn
2014-07-09T23:59:59.000Z
The Department of Energy (DOE) issued Title 10 of the Code of Federal Regulations Part 850, “Chronic Beryllium Disease Prevention Program” (the Beryllium Rule) in 1999 and required full compliance by no later than January 7, 2002. The Beryllium Rule requires the development of a baseline beryllium inventory of the locations of beryllium operations and other locations of potential beryllium contamination at DOE facilities. The baseline beryllium inventory is also required to identify workers exposed or potentially exposed to beryllium at those locations. Prior to DOE issuing 10 CFR 850, Pacific Northwest National Laboratory (PNNL) had documented the beryllium characterization and worker exposure potential for multiple facilities in compliance with DOE’s 1997 Notice 440.1, “Interim Chronic Beryllium Disease.” After DOE’s issuance of 10 CFR 850, PNNL developed an implementation plan to be compliant by 2002. In 2014, an internal self-assessment (ITS #E-00748) of PNNL’s Chronic Beryllium Disease Prevention Program (CBDPP) identified several deficiencies. One deficiency is that the technical basis for establishing the baseline beryllium inventory when the Beryllium Rule was implemented was either not documented or not retrievable. In addition, the beryllium inventory itself had not been adequately documented and maintained since PNNL established its own CBDPP, separate from Hanford Site’s program. This document reconstructs PNNL’s baseline beryllium inventory as it would have existed when it achieved compliance with the Beryllium Rule in 2001 and provides the technical basis for the baseline beryllium inventory.
Radioactive Waste Management Basis Sept 2001
Goodwin, S S
2011-08-31T23:59:59.000Z
This Radioactive Waste Management Basis (RWMB) documents radioactive waste management practices adopted at Lawrence Livermore National Laboratory (LLNL) pursuant to Department of Energy (DOE) Order 435.1, Radioactive Waste Management. The purpose of this RWMB is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
Theoretical studies of combustion dynamics
Bowman, J.M. [Emory Univ., Atlanta, GA (United States)
1993-12-01T23:59:59.000Z
The basic objectives of this research program are to develop and apply theoretical techniques to fundamental dynamical processes of importance in gas-phase combustion. There are two major areas currently supported by this grant. One is reactive scattering of diatom-diatom systems, and the other is the dynamics of complex formation and decay based on L{sup 2} methods. In all of these studies, the authors focus on systems that are of interest experimentally, and for which potential energy surfaces based, at least in part, on ab initio calculations are available.
Efficient Approximation of Diagonal Unitaries over the Clifford+T Basis
Jonathan Welch; Alex Bocharov; Krysta M. Svore
2014-12-22T23:59:59.000Z
We present an algorithm for the approximate decomposition of diagonal operators, focusing specifically on decompositions over the Clifford+$T$ basis, that minimize the number of phase-rotation gates in the synthesized approximation circuit. The equivalent $T$-count of the synthesized circuit is bounded by $k \, C_0 \log_2(1/\varepsilon) + E(n,k)$, where $k$ is the number of distinct phases in the diagonal $n$-qubit unitary, $\varepsilon$ is the desired precision, $C_0$ is a quality factor of the implementation method, and $E(n,k)$ is the total entanglement cost (in $T$ gates). We determine an optimal decision boundary in $(k,n,\varepsilon)$-space where our decomposition algorithm achieves lower entanglement cost than previous state-of-the-art techniques. Our method outperforms state-of-the-art techniques for a practical range of $\varepsilon$ values and diagonal operators and can reduce the number of $T$ gates exponentially in $n$ when $k \ll 2^n$.
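The quoted bound is simple to evaluate for concrete parameters. A minimal sketch follows; the numerical values of the quality factor C0 and the entanglement-cost term E(n,k) below are illustrative placeholders, not values taken from the paper.

```python
import math

def t_count_bound(k, eps, C0, E_nk):
    """Evaluate the abstract's T-count bound, k*C0*log2(1/eps) + E(n,k).
    C0 (method quality factor) and E_nk (entanglement cost in T gates)
    are caller-supplied; here they are hypothetical placeholders."""
    return k*C0*math.log2(1.0/eps) + E_nk

# e.g. a diagonal unitary with 8 distinct phases synthesized to precision 1e-10
bound = t_count_bound(k=8, eps=1e-10, C0=4.0, E_nk=100.0)
```

Because the first term scales with k rather than with 2^n, the bound makes visible why the method wins exponentially when k << 2^n.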
Approximation algorithms for QMA-complete problems
Sevag Gharibian; Julia Kempe
2011-01-20T23:59:59.000Z
Approximation algorithms for classical constraint satisfaction problems are one of the main research areas in theoretical computer science. Here we define a natural approximation version of the QMA-complete local Hamiltonian problem and initiate its study. We present two main results. The first shows that a non-trivial approximation ratio can be obtained in the class NP using product states. The second result (which builds on the first one), gives a polynomial time (classical) algorithm providing a similar approximation ratio for dense instances of the problem. The latter result is based on an adaptation of the "exhaustive sampling method" by Arora et al. [J. Comp. Sys. Sci. 58, p.193 (1999)] to the quantum setting, and might be of independent interest.
Theoretical Perspectives on Protein Folding
D. Thirumalai; Edward P. O'Brien; Greg Morrison; Changbong Hyeon
2010-07-18T23:59:59.000Z
Understanding how monomeric proteins fold under in vitro conditions is crucial to describing their functions in the cellular context. Significant advances both in theory and experiments have resulted in a conceptual framework for describing the folding mechanisms of globular proteins. The experimental data and theoretical methods have revealed the multifaceted character of proteins. Proteins exhibit universal features that can be determined using only the number of amino acid residues (N) and polymer concepts. The sizes of proteins in the denatured and folded states, cooperativity of the folding transition, dispersions in the melting temperatures at the residue level, and time scales of folding are to a large extent determined by N. The consequences of finite N especially on how individual residues order upon folding depends on the topology of the folded states. Such intricate details can be predicted using the Molecular Transfer Model that combines simulations with measured transfer free energies of protein building blocks from water to the desired concentration of the denaturant. By watching one molecule fold at a time, using single molecule methods, the validity of the theoretically anticipated heterogeneity in the folding routes, and the N-dependent time scales for the three stages in the approach to the native state have been established. Despite the successes of theory, of which only a few examples are documented here, we conclude that much remains to be done to solve the "protein folding problem" in the broadest sense.
Office of Nuclear Safety Basis and Facility Design
Broader source: Energy.gov [DOE]
The Office of Nuclear Safety Basis & Facility Design establishes safety basis and facility design requirements and expectations related to analysis and design of nuclear facilities to ensure protection of workers and the public from the hazards associated with nuclear operations.
Nuclear Facility Safety Basis Fundamentals Self-Study Guide ...
Broader source: Energy.gov (indexed) [DOE]
Oak Ridge Operations Office. Nuclear Facility Safety Basis Fundamentals Self-Study Guide. Fulfills ORO Safety Basis Competency 1, 2 (Part 1), or 7 (Part 1). November 2002. Nuclear...
CRAD, Integrated Safety Basis and Engineering Design Review ...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
CRAD, Integrated Safety Basis and Engineering Design Review - August 20, 2014 (EA CRAD 31-4, Rev. 0)
Authorization basis status report (miscellaneous TWRS facilities, tanks and components)
Stickney, R.G.
1998-04-29T23:59:59.000Z
This report presents the results of a systematic evaluation conducted to identify miscellaneous TWRS facilities, tanks, and components with potentially needed authorization basis upgrades. It provides the authorization basis upgrade plan for those miscellaneous TWRS facilities, tanks, and components identified.
Recent Theoretical Results for Advanced Thermoelectric Materials...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Recent Theoretical Results for Advanced Thermoelectric Materials: transport theory and first-principles calculations applied to oxides, chalcogenides and skutterudite...
Algorithms for Greechie Diagrams
Brendan D. McKay; Norman D. Megill; Mladen Pavicic
2001-01-21T23:59:59.000Z
We give a new algorithm for generating Greechie diagrams with an arbitrarily chosen number of atoms or blocks (with 2, 3, 4, ... atoms) and provide a computer program for generating the diagrams. The results show that the previous algorithm does not produce every diagram and that it is at least 100,000 times slower. We also provide an algorithm and programs for checking whether Greechie diagrams pass equations defining varieties of orthomodular lattices, and give examples from Hilbert lattices. At the end we discuss some additional characteristics of Greechie diagrams.
Optimized Algorithms Boost Combustion Research
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Optimized Algorithms Boost Combustion Research: Methane Flame Simulations Run 6x Faster on NERSC's Hopper Supercomputer. November 25,...
Algorithms incorporating concurrency and caching
Fineman, Jeremy T
2009-01-01T23:59:59.000Z
This thesis describes provably good algorithms for modern large-scale computer systems, including today's multicores. Designing efficient algorithms for these systems involves overcoming many challenges, including concurrency ...
Theoretical analysis of ARC constriction
Stoenescu, M.L.; Brooks, A.W.; Smith, T.M.
1980-12-01T23:59:59.000Z
The physics of the thermionic converter is governed by strong electrode-plasma interactions (emission, surface scattering, charge exchange) and weak interactions (diffusion, radiation) at the maximum interelectrode plasma radius. The physical processes are thus mostly convective in thin sheaths in front of the electrodes and mostly diffusive and radiative in the plasma bulk. The physical boundaries are open boundaries to particle transfer (electrons emitted or absorbed by the electrodes, all particles diffusing through some maximum plasma radius) and to convective, conductive and radiative heat transfer. In a first approximation the thermionic converter may be described by a one-dimensional classical transport theory. The two-dimensional effects may be significant as a result of the sheath sensitivity to radial plasma variations and of the strong sheath-plasma coupling. The current-voltage characteristic of the converter is thus the result of an integrated current density over the collector area, for which the boundary conditions at each r determine the regime (ignited/unignited) of the local current density. A current redistribution strongly weighted at small radii (arc constriction) limits the converter performance and opens questions on constriction reduction possibilities. The questions addressed are the following: (1) what are the main contributors to the loss of current at high voltage in the thermionic converter; and (2) is arc constriction observable theoretically and what are the conditions of its occurrence. The resulting theoretical problem is formulated and results are given. The converter electrical current is estimated directly from the electron and ion particle fluxes based on the spatial distribution of the electron/ion density n, temperatures T/sub e/, T/sub i/, electrical voltage V and on the knowledge of the transport coefficients. (WHK)
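The statement that the converter current is an integrated current density over the collector area can be illustrated numerically. The profiles below are assumed shapes for illustration, not the report's model; the point is only that a profile constricted to small radii integrates to a much smaller total current.

```python
import numpy as np

def converter_current(J_of_r, R, n=2001):
    """Total electrode current I = integral_0^R J(r) * 2*pi*r dr for an
    axisymmetric current-density profile J(r), via the composite trapezoid
    rule.  Small radii carry little area, so a constricted arc yields a
    small total current even if J is large on the axis."""
    r = np.linspace(0.0, R, n)
    f = J_of_r(r)*2.0*np.pi*r
    return float(np.sum(0.5*(f[1:] + f[:-1])*np.diff(r)))

R = 1.0
I_uniform = converter_current(lambda r: np.ones_like(r), R)          # ignited everywhere
I_constricted = converter_current(lambda r: np.exp(-(r/0.2)**2), R)  # constricted near axis
```

With equal on-axis current density, the constricted profile delivers only a few percent of the uniform-profile current, which is the performance limitation the abstract describes.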
Samadi, R; Ludwig, H -G; Caffau, E; Campante, T L; Davies, G R; Kallinger, T; Lund, M N; Mosser, B; Baglin, A; Mathur, S; Garcia, R
2013-01-01T23:59:59.000Z
A large set of stars observed by CoRoT and Kepler shows clear evidence for the presence of a stellar background, which is interpreted to arise from surface convection, i.e., granulation. These observations show that the characteristic time-scale (tau_eff) and the root-mean-square (rms) brightness fluctuations (sigma) associated with the granulation scale as a function of the peak frequency (nu_max) of the solar-like oscillations. We aim at providing a theoretical background to the observed scaling relations based on a model developed in the companion paper. We computed for each 3D model the theoretical power density spectrum (PDS) associated with the granulation as seen in disk-integrated intensity on the basis of the theoretical model. For each PDS we derived tau_eff and sigma and compared these theoretical values with the theoretical scaling relations derived from the theoretical model and the Kepler measurements. We derive theoretical scaling relations for tau_eff and sigma, which show the same dependence ...
Research in Theoretical Particle Physics
Feldman, Hume A; Marfatia, Danny
2014-09-24T23:59:59.000Z
This document is the final report on activity supported under DOE Grant Number DE-FG02-13ER42024. The report covers the period July 15, 2013 – March 31, 2014. Faculty supported by the grant during the period were Danny Marfatia (1.0 FTE) and Hume Feldman (1% FTE). The grant partly supported University of Hawaii students, David Yaylali and Keita Fukushima, who are supervised by Jason Kumar. Both students are expected to graduate with Ph.D. degrees in 2014. Yaylali will be joining the University of Arizona theory group in Fall 2014 with a 3-year postdoctoral appointment under Keith Dienes. The group’s research covered topics subsumed under the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. Many theoretical results related to the Standard Model and models of new physics were published during the reporting period. The report contains brief project descriptions in Section 1. Sections 2 and 3 lists published and submitted work, respectively. Sections 4 and 5 summarize group activity including conferences, workshops and professional presentations.
Parameters and error of a theoretical model
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01T23:59:59.000Z
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
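The procedure described in this abstract (fit model parameters to data by maximum likelihood, then read a model error off the residuals) can be sketched for a toy case. The one-parameter linear model y = a*x and all numerical values below are illustrative assumptions, not taken from the report.

```python
import math
import random

random.seed(0)

# Hypothetical one-parameter "theory" y = a*x, fitted to simulated data.
xs = [float(i) for i in range(1, 21)]
a_true, sigma_true = 2.0, 0.5
ys = [a_true * x + random.gauss(0.0, sigma_true) for x in xs]

# For Gaussian errors, the maximum-likelihood estimate of a is the
# least-squares solution.
a_hat = sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)

# The maximum-likelihood estimate of the model error is the rms residual.
sigma_hat = math.sqrt(
    sum((y - a_hat * x) ** 2 for x, y in zip(xs, ys)) / len(xs)
)
```

With simulated data generated by a random number generator, as in the paper, sigma_hat should recover the noise level used in the simulation.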
2005 American Conference on Theoretical Chemistry
Carter, Emily A
2006-11-19T23:59:59.000Z
The materials uploaded are meant to serve as the final report on the funds provided by DOE-BES to help sponsor the 2005 American Conference on Theoretical Chemistry.
Catalyst by Design - Theoretical, Nanostructural, and Experimental...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Catalyst by Design - Theoretical, Nanostructural, and Experimental Studies of Emission Treatment Catalyst. Poster presented at the 16th Directions in...
Batcho, P.F. [Princeton Univ., NJ (United States)]; Karniadakis, G.E. [Brown Univ., Providence, RI (United States)]
1994-11-01T23:59:59.000Z
The present study focuses on the solution of the incompressible Navier-Stokes equations in general, non-separable domains, and employs a Galerkin projection of divergence-free vector functions as a trial basis. This basis is obtained from the solution of a generalized constrained Stokes eigen-problem in the domain of interest. Faster convergence can be achieved by constructing a singular Stokes eigen-problem in which the Stokes operator is modified to include a variable coefficient which vanishes at the domain boundaries. The convergence properties of such functions are advantageous in a least-squares sense and are shown to produce significantly better approximations to the solution of the Navier-Stokes equations in post-critical states where unsteadiness characterizes the flowfield. Solutions for the eigen-systems are efficiently accomplished using a combined Lanczos-Uzawa algorithm and spectral element discretizations. Results are presented for different simulations using these global spectral trial bases on non-separable and multiply-connected domains. It is confirmed that faster convergence is obtained using the singular eigen-expansions in approximating stationary Navier-Stokes solutions in general domains. It is also shown that 100-mode expansions of time-dependent solutions based on the singular Stokes eigenfunctions are sufficient to accurately predict the dynamics of flows in such domains, including Hopf bifurcations, intermittency, and details of flow structures.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2010-01-01T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since its inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving significant changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. 
Maintenance and distribution of controlled hard copies of the manual by PNNL was discontinued beginning with Revision 0.2. Revision Log: Rev. 0 (2/25/2005) Major revision and expansion. Rev. 0.1 (3/12/2007) Updated Chapters 5, 6 and 9 to reflect change in default ring calibration factor used in HEDP dose calculation software. Factor changed from 1.5 to 2.0 beginning January 1, 2007. Pages on which changes were made are as follows: 5.23, 5.69, 5.78, 5.80, 5.82, 6.3, 6.5, 6.29, and 9.2. Rev 0.2 (8/28/2009) Updated Chapters 3, 5, 6, 8 and 9. Chapters 6 and 8 were significantly expanded. References in the Preface and Chapters 1, 2, 4, and 7 were updated to reflect updates to DOE documents. Approved by HPDAC on 6/2/2009. Rev 1.0 (1/1/2010) Major revision. Updated all chapters to reflect the Hanford site wide implementation on January 1, 2010 of new DOE requirements for occupational radiation protection. The new requirements are given in the June 8, 2007 amendment to 10 CFR 835 Occupational Radiation Protection (Federal Register, June 8, 2007. Title 10 Part 835. U.S., Code of Federal Regulations, Vol. 72, No. 110, 31904-31941). Revision 1.0 to the manual replaces ICRP 26 dosimetry concepts and terminology with ICRP 60 dosimetry concepts and terminology and replaces external dose conversion factors from ICRP 51 with those from ICRP 74 for use in measurement of operational quantities with dosimeters. Descriptions of dose algorithms and dosimeter response characteristics, and field performance were updated to reflect changes in the neutron quality factors used in the measurement of operational quantities.
Nuclear Safety Basis Program Review Overview and Management Oversight...
Broader source: Energy.gov (indexed) [DOE]
This SRP, Nuclear Safety Basis Program Review, consists of five volumes. It provides information to help strengthen the technical rigor of line management oversight and federal...
Engineering Design and Safety Basis Inspection Criteria, Inspection...
Broader source: Energy.gov (indexed) [DOE]
to this is our commitment to enhance our program. Therefore, we have developed the Engineering Design and Safety Basis Inspection Criteria, Inspection Activities, and Lines of...
Assessing Beyond Design Basis Seismic Events and Implications...
Office of Environmental Management (EM)
Assessing Beyond Design Basis Seismic Events and Implications on Seismic Risk, September 19, 2012. Presenter: Jeffrey Kimball, Technical Specialist (Seismologist)...
auxiliary basis expansions: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
work by estimating a novel algorithm for extracting atrial activity from single lead electrocardiogram (ECG) signal sustained subtraction (ABS) method using synthetic AF...
Genetic Algorithms Artificial Life
Forrest, Stephanie
systems tremendously. Likewise, evolution of artificial systems is an important component of artificial life. Genetic algorithms (GAs) are currently the most prominent and widely used models of evolution in artificial-life systems. GAs have been... Genetic Algorithms and Artificial Life, Melanie Mitchell, Santa Fe Institute, 1660 Old Pecos Tr
Graph algorithms experimentation facility
Sonom, Donald George
1994-01-01T23:59:59.000Z
[Fig. 2: External Algorithm Handler] The facility is menu driven and implemented as a client to XAGE. Our implementation follows very closely the functionality...
Space Complexity Algorithms & Complexity
Way, Andy
Space Complexity. Algorithms & Complexity. Nicolas Stroppa, Patrik Lambert - plambert@computing.dcu.ie, CA313@Dublin City University, 2008-2009. December 4, 2008. Space Complexity: Hierarchy of problems. NP-intermediate Languages: If P ≠ NP, then are there languages which are neither in P
Robust seed selection algorithm for k-means type algorithms
Pavan, K Karteeka; Rao, A V Dattatreya; Sridhar, G R; 10.5121/ijcsit.2011.3513
2012-01-01T23:59:59.000Z
Selection of initial seeds greatly affects the quality of the clusters in k-means type algorithms. Most seed selection methods produce different results in different independent runs. We propose a single, optimal, outlier-insensitive seed selection algorithm for k-means type algorithms as an extension to k-means++. The experimental results on synthetic, real, and microarray data sets demonstrate the effectiveness of the new algorithm in producing the clustering results.
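Since this entry builds on k-means++, a minimal sketch of the D²-weighted seeding that k-means++ uses may help for orientation; the 1-D data set and function name below are illustrative assumptions, not taken from the paper.

```python
import random

def kmeans_pp_seeds(points, k, rng):
    """k-means++ seeding: each new seed is drawn with probability
    proportional to its squared distance to the nearest chosen seed."""
    seeds = [rng.choice(points)]
    while len(seeds) < k:
        # Squared distance of every point to its nearest current seed.
        d2 = [min((p - s) ** 2 for s in seeds) for p in points]
        r = rng.uniform(0.0, sum(d2))
        acc = 0.0
        for p, w in zip(points, d2):
            acc += w
            if acc >= r:          # sample proportionally to d2
                seeds.append(p)
                break
    return seeds

rng = random.Random(1)
data = [0.0, 0.1, 0.2, 10.0, 10.1, 10.2]
seeds = kmeans_pp_seeds(data, 2, rng)
```

On well-separated data like this, the D² weighting makes the second seed land in the far cluster with overwhelming probability, which is exactly why initializations of this family are less run-to-run variable than uniform random seeding.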
POSTDOCTORAL POSITIONS in THEORETICAL HIGH ENERGY PHYSICS
for Advanced Studies (SISSA), the Department of Theoretical Physics of Trieste University, the Trieste section of the INFN, and the Trieste Observatory. The Section is also a member of the European network "Quest". Centre for Theoretical Physics, Strada Costiera n. 11, 34151 Trieste, Italy. E-mail: rosanna@ictp.it
Paris-Sud XI, UniversitÃ© de
Discrete Mathematics and Theoretical Computer Science DMTCS vol. 9:2, 2007, 145–152. Gray code order yields a Gray code on the Lyndon family. In this paper we give a positive answer. More precisely, ... and Lyndon words in Gray code order. Keywords: Lyndon words, Gray codes, generating algorithms. 1 Introduction
The optimization problem Genetic Algorithm
Giménez, Domingo
The optimization problem. Genetic Algorithm. Particle Swarm Optimization. Experimental results for time-power optimization. META, October 27-31, 2014. Conclusions. Time and energy optimization: Traditionally
High-performance combinatorial algorithms
Pinar, Ali
2003-01-01T23:59:59.000Z
mathematics, and high performance computing. The numerical algorithms on high performance computing platforms, which
Extended models of gravity in SNIa cosmological data using genetic algorithms
López-Corona, O
2015-01-01T23:59:59.000Z
In this talk I briefly explained the advantages of using genetic algorithms on any measured data, but especially astronomical data. These algorithms are not only a better computational paradigm; they also allow for a more profound data treatment, enhancing theoretical developments. As an example, I use the SNIa cosmological data to fit the extended metric theories of gravity of Carranza et al. (2013, 2014), showing that the best parameter combination deviates from the theoretically predicted one by a minimal amount. This means that this kind of gravitational extension is statistically robust, and shows that no dark matter and/or energy is required to explain the observations.
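As a rough illustration of the kind of genetic-algorithm parameter fitting the talk describes, here is a minimal sketch that fits a single model parameter to synthetic data. The toy model y = a*x, the population sizes, and the crossover/mutation operators are assumptions for illustration, not the authors' setup.

```python
import random

random.seed(2)

# Synthetic "observations" generated with a = 3; the GA should recover it.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0 * x for x in xs]

def fitness(a):
    # Negative squared error: higher is better.
    return -sum((y - a * x) ** 2 for x, y in zip(xs, ys))

pop = [random.uniform(-10.0, 10.0) for _ in range(30)]
for _ in range(60):
    pop.sort(key=fitness, reverse=True)
    parents = pop[:10]               # elitist selection
    children = []
    for _ in range(20):
        p, q = random.sample(parents, 2)
        # Averaging crossover plus Gaussian mutation.
        children.append(0.5 * (p + q) + random.gauss(0.0, 0.1))
    pop = parents + children

best = max(pop, key=fitness)
```

Because the top individuals are carried over unchanged each generation, the best fitness never degrades, and the population contracts around the best-fit parameter.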
Tomasz Plawski, J. Hovater
2010-09-01T23:59:59.000Z
A digital low level radio frequency (RF) system typically incorporates either a heterodyne or direct sampling technique, followed by fast ADCs, then an FPGA, and finally a transmitting DAC. This universal platform opens up the possibilities for a variety of control algorithm implementations. The foremost concern for an RF control system is cavity field stability, and to meet the required quality of regulation, the chosen control system needs to have sufficient feedback gain. In this paper we will investigate the effectiveness of the regulation for three basic control system algorithms: I&Q (In-phase and Quadrature), Amplitude & Phase and digital SEL (Self Exciting Loop) along with the example of the Jefferson Lab 12 GeV cavity field control system.
Stability of Coupling Algorithms
Akkasale, Abhineeth
2012-07-16T23:59:59.000Z
of Committee, K. B. Nakshatrala; Committee Members, Steve Suh, J. N. Reddy; Head of Department, Dennis O'Neal. May 2011. Major Subject: Mechanical Engineering. ABSTRACT: Stability of Coupling Algorithms. (May 2011) Abhineeth Akkasale, B.E., Bangalore... step. To Amma and Anna. ACKNOWLEDGMENTS: First and foremost, I thank Dr. Kalyana B. Nakshatrala for being an incredible advisor and for his time and patience in constantly guiding me through my research. I am indebted to him for his guidance...
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01T23:59:59.000Z
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
The pointer basis and the feedback stabilization of quantum systems
L. Li; A. Chia; H. M. Wiseman
2014-11-19T23:59:59.000Z
The dynamics for an open quantum system can be `unravelled' in infinitely many ways, depending on how the environment is monitored, yielding different sorts of conditioned states, evolving stochastically. In the case of ideal monitoring these states are pure, and the set of states for a given monitoring forms a basis (which is overcomplete in general) for the system. It has been argued elsewhere [D. Atkins et al., Europhys. Lett. 69, 163 (2005)] that the `pointer basis' as introduced by Zurek and Paz [Phys. Rev. Lett 70, 1187(1993)], should be identified with the unravelling-induced basis which decoheres most slowly. Here we show the applicability of this concept of pointer basis to the problem of state stabilization for quantum systems. In particular we prove that for linear Gaussian quantum systems, if the feedback control is assumed to be strong compared to the decoherence of the pointer basis, then the system can be stabilized in one of the pointer basis states with a fidelity close to one (the infidelity varies inversely with the control strength). Moreover, if the aim of the feedback is to maximize the fidelity of the unconditioned system state with a pure state that is one of its conditioned states, then the optimal unravelling for stabilizing the system in this way is that which induces the pointer basis for the conditioned states. We illustrate these results with a model system: quantum Brownian motion. We show that even if the feedback control strength is comparable to the decoherence, the optimal unravelling still induces a basis very close to the pointer basis. However if the feedback control is weak compared to the decoherence, this is not the case.
Distributed algorithms for mobile ad hoc networks
Malpani, Navneet
2001-01-01T23:59:59.000Z
We first present two new leader election algorithms for mobile ad hoc networks. The algorithms ensure that eventually each connected component of the topology graph has exactly one leader. The algorithms are based on a routing algorithm called TORA...
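The invariant the abstract states (eventually, each connected component of the topology graph has exactly one leader) can be checked with a small centralized sketch. This is an illustrative max-id election over a static graph, not the TORA-based distributed algorithms of the thesis.

```python
from collections import deque

def elect_leaders(nodes, edges):
    """Assign every node the max id of its connected component, so each
    component ends up with exactly one leader (centralized sketch only)."""
    adj = {n: set() for n in nodes}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    leader, seen = {}, set()
    for start in nodes:
        if start in seen:
            continue
        # BFS to collect the connected component containing `start`.
        comp, queue = [], deque([start])
        seen.add(start)
        while queue:
            u = queue.popleft()
            comp.append(u)
            for w in adj[u]:
                if w not in seen:
                    seen.add(w)
                    queue.append(w)
        lead = max(comp)
        for u in comp:
            leader[u] = lead
    return leader

# Two components: {1,2,3} and {4,5}.
leaders = elect_leaders([1, 2, 3, 4, 5], [(1, 2), (2, 3), (4, 5)])
```

A distributed protocol must maintain this same end state under topology changes, which is the hard part the thesis addresses.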
Theoretical Comparisons of Search Dynamics of Genetic Algorithms and Evolution Strategies
Coello, Carlos A. Coello
Okabe, Honda R&D Co., Ltd., Wako Research Center, 1-4-1 Chuo, Wako-shi, Saitama 351-0193, Japan, tatsuya okabe@n.w.rd.honda.co.jp; Yaochu Jin, Honda Research Institute Europe, Carl-Legien Strasse 30, 63073 Offenbach am Main, Germany, yaochu.jin@honda-ri.de; Bernhard Sendhoff, Honda Research Institute Europe, Carl
Khisti, Ashish, 1979-
2009-01-01T23:59:59.000Z
As modern infrastructure systems become increasingly more complex, we are faced with many new challenges in the area of information security. In this thesis we examine some approaches to security based on ideas from ...
Pedram, Massoud
-cooperative utility companies who have incentives to maximize their own profits. The energy price competition forms. More interestingly, the use of dynamic energy pricing schemes incentivizes homeowners to consume to the change of energy usage as a factor of energy price. Although it is no longer possible to prove
Game-theoretic learning algorithm for a spatial coverage problem Ketan Savla and Emilio Frazzoli
Savla, Ketan
ksavla@mit.edu, frazzoli@mit.edu. Abstract -- In this paper we consider a class of dynamic vehicle routing ... active research area today addresses coordination of several mobile agents: groups of autonomous robots ... to complete the task, or the fuel/energy expenditure. A related problem has been investigated as the Weapon
A Graph-theoretic Algorithm for Comparative Modeling of Protein Structure
Samudrala, Ram
are the same (Chothia & Lesk, 1986). This is the case now for about 30% of the general sequences entering for doing this is usually termed comparative or homology modeling. In contrast to progress in generating effects makes the energy surface extremely discontinuous, so that search methods that make semi
SPARSE REPRESENTATIONS WITH DATA FIDELITY TERM VIA AN ITERATIVELY REWEIGHTED LEAST SQUARES ALGORITHM
WOHLBERG, BRENDT [Los Alamos National Laboratory; RODRIGUEZ, PAUL [Los Alamos National Laboratory
2007-01-08T23:59:59.000Z
Basis Pursuit and Basis Pursuit Denoising, well established techniques for computing sparse representations, minimize an ℓ2 data fidelity term subject to an ℓ1 sparsity constraint or regularization term on the solution by mapping the problem to a linear or quadratic program. Basis Pursuit Denoising with an ℓ1 data fidelity term has recently been proposed, also implemented via a mapping to a linear program. They introduce an alternative approach via an Iteratively Reweighted Least Squares algorithm, providing greater flexibility in the choice of data fidelity term norm, and computational advantages in certain circumstances.
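For orientation, here is a minimal sketch of the reweighting idea behind IRLS, applied to an ℓ1-regularized least-squares problem. Note this sketch reweights the ℓ1 regularizer rather than the data fidelity term the abstract targets, and the weighting scheme, λ, and test data are generic textbook choices, not the authors' exact formulation.

```python
import numpy as np

def irls_l1(A, b, lam=0.1, eps=1e-6, iters=50):
    """IRLS sketch for min ||Ax - b||_2^2 + lam*||x||_1: at each iteration
    the l1 term is replaced by a weighted l2 term with weights
    1/(|x_i| + eps), giving a closed-form weighted least-squares update."""
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(iters):
        w = 1.0 / (np.abs(x) + eps)
        x = np.linalg.solve(A.T @ A + lam * np.diag(w), A.T @ b)
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 8))
x_true = np.zeros(8)
x_true[[1, 5]] = [2.0, -1.5]
b = A @ x_true                      # noiseless sparse recovery problem
x_hat = irls_l1(A, b, lam=0.01)
```

Each iteration penalizes small coefficients more heavily, so entries off the true support are driven toward zero while supported entries converge to their true values.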
Online Auctions: Theoretical and Empirical Investigations
Zhang, Yu
2010-10-12T23:59:59.000Z
This dissertation, which consists of three essays, studies online auctions both theoretically and empirically. The first essay studies a special online auction format used by eBay, “Buy-It- Now” (BIN) auctions, in which ...
Theoretical efficiency of solar thermoelectric energy generators
Chen, Gang
This paper investigates the theoretical efficiency of solar thermoelectric generators (STEGs). A model is established including thermal concentration in addition to optical concentration. Based on the model, the maximum ...
Speculations About the Selective Basis for Modern Human Craniofacial Form
Lieberman, Daniel E.
Speculations About the Selective Basis for Modern Human Craniofacial Form DANIEL E. LIEBERMAN. To name just a few of our unusual craniofacial apo- morphies, we are the only extant pri- mate
Quasi Sturmian Basis in Two-Electron Continuum Problems
A. S. Zaytsev; L. U. Ancarani; S. A. Zaytsev
2015-03-12T23:59:59.000Z
A new type of basis functions is proposed to describe a two-electron continuum which arises as a final state in electron-impact ionization and double photoionization of atomic systems. We name these functions, which are calculated in terms of the recently introduced Quasi Sturmian functions, Convoluted Quasi Sturmian functions (CQS). By construction, the CQS functions look asymptotically like a six-dimensional spherical wave. The driven equation describing an $(e, 3e)$ process on helium in the framework of the Temkin-Poet model has been solved numerically using expansions in the basis of CQS functions. The convergence behavior of the solution has been examined as the size of the basis is increased. The calculations show that the convergence rate is significantly improved by introducing a phase factor corresponding to the electron-electron interaction into the basis functions. Such a modification of the boundary conditions leads to an appreciable change in the magnitude of the solution.
auf basis einer: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
associated. The aim of the present study was to investigate the influence of a (more) Zachmann, Christin 2014-01-01. Weniger ist mehr: Virtuelle Thin Clients auf Linux-Basis [Less is more: virtual thin clients on a Linux basis]...
Is the Preferred Basis selected by the environment?
Tian Wang; David Hobill
2014-12-09T23:59:59.000Z
We show that in a quantum measurement, the preferred basis is determined by the interaction between the apparatus and the quantum system, instead of by the environment. This interaction entangles three degrees of freedom: one system degree of freedom that we are interested in and that is preserved by the interaction, one system degree of freedom that carries the change due to the interaction, and the apparatus degree of freedom, which is always ignored. Considering all three degrees of freedom, the composite state has only one decomposition, and this guarantees that the apparatus would end up in the expected preferred basis of our daily experiences. We also point out some problems with the environment-induced super-selection (Einselection) solution to the preferred basis problem, and clarify a common misunderstanding of environmental decoherence and the preferred basis problem.
Technical Basis Document for PFP Area Monitoring Dosimetry Program
COOPER, J.R.
2000-04-17T23:59:59.000Z
This document describes the phantom dosimetry used for the PFP Area Monitoring program and establishes the basis for the Plutonium Finishing Plant's (PFP) area monitoring dosimetry program in accordance with the following requirements: Title 10, Code of Federal Regulations (CFR), part 835, ''Occupational Radiation Protection'' Part 835.403; Hanford Site Radiological Control Manual (HSRCM-1), Part 514; HNF-PRO-382, Area Dosimetry Program; and PNL-MA-842, Hanford External Dosimetry Technical Basis Manual.
A New Numerical Algorithm for Thermoacoustic and Photoacoustic Tomography with Variable Sound Speed
Qian, Jianliang; Uhlmann, Gunther; Zhao, Hongkai
2011-01-01T23:59:59.000Z
We present a new algorithm for reconstructing an unknown source in Thermoacoustic and Photoacoustic Tomography based on the recent advances in understanding the theoretical nature of the problem. We work with variable sound speeds that might be also discontinuous across some surface. The latter problem arises in brain imaging. The new algorithm is based on an explicit formula in the form of a Neumann series. We present numerical examples with non-trapping, trapping and piecewise smooth speeds, as well as examples with data on a part of the boundary. These numerical examples demonstrate the robust performance of the new algorithm.
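The abstract says the algorithm rests on an explicit formula in the form of a Neumann series. The contraction-iteration idea behind such a series can be sketched abstractly, with a scalar toy operator standing in for the paper's wave-dynamics operator (purely an illustrative assumption).

```python
def neumann_solve(apply_K, b, terms=50):
    """Neumann-series sketch for x = b + K x, i.e. x = sum_n K^n b,
    which converges when the operator K is a contraction."""
    x = list(b)
    term = list(b)
    for _ in range(terms):
        term = apply_K(term)                 # next K^n b
        x = [xi + ti for xi, ti in zip(x, term)]
    return x

# Toy contraction K: scale by 0.5, so x = b / (1 - 0.5) = 2b.
b = [1.0, -2.0]
x = neumann_solve(lambda v: [0.5 * vi for vi in v], b)
```

For a contraction the partial sums converge geometrically, which is why truncating the series after a modest number of terms already gives a robust reconstruction.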
Nonextensive lattice gauge theories: algorithms and methods
Rafael B. Frigori
2014-04-26T23:59:59.000Z
High-energy phenomena presenting strong dynamical correlations, long-range interactions and microscopic memory effects are well described by nonextensive versions of the canonical Boltzmann-Gibbs statistical mechanics. After a brief theoretical review, we introduce a class of generalized heat-bath algorithms that enable Monte Carlo lattice simulations of gauge fields on the nonextensive statistical ensemble of Tsallis. The algorithmic performance is evaluated as a function of the Tsallis parameter q in equilibrium and nonequilibrium setups. Then, we revisit short-time dynamic techniques, which in contrast to usual simulations in equilibrium present negligible finite-size effects and no critical slowing down. As an application, we investigate the short-time critical behaviour of the nonextensive hot Yang-Mills theory at q-values obtained from heavy-ion collision experiments. Our results imply that, when the equivalence of statistical ensembles is obeyed, the long-standing universality arguments relating gauge theories and spin systems hold also for the nonextensive framework.
Computing single step operators of logic programming in radial basis function neural networks
Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong [School of Mathematical Sciences, Universiti Sains Malaysia, 11800 USM, Penang (Malaysia)
2014-07-10T23:59:59.000Z
Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function (T_p: I → I). Logic programming is well-suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
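The single-step (immediate consequence) operator T_p itself is easy to state concretely. A minimal sketch for propositional definite programs follows; the tiny example program is an assumption for illustration, and the sketch ignores negation, which normal programs allow.

```python
def tp(program, interp):
    """One application of the single-step operator T_P: an atom is derived
    if some clause head <- body has its entire body true in `interp`."""
    return {head for head, body in program if all(a in interp for a in body)}

def least_fixed_point(program):
    """Iterate T_P from the empty interpretation up to its least fixed
    point (the least model, for a definite program)."""
    interp = set()
    while True:
        nxt = tp(program, interp)
        if nxt == interp:
            return interp
        interp = nxt

# Hypothetical definite program:  p.   q :- p.   r :- p, q.
program = [("p", []), ("q", ["p"]), ("r", ["p", "q"])]
model = least_fixed_point(program)
```

The fixed point reached here is what the paper's recurrent network is trained to converge to.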
Recursive Dynamics Algorithms for Serial, Parallel, and Closed-chain Multibody Systems
Saha, Subir Kumar
Kumar Saha, Department of Mechanical Engineering, IIT Delhi, Hauz Khas, New Delhi 110 016 ... Wehage and Haug (1982), Kamman and Huston (1984), Angeles and Lee (1988), Saha and Angeles (1991), ... and Saha (1997), which are the basis for the development of recursive dynamics algorithms proposed
Shortest Path Algorithms: A Comparison
Golden, Bruce L., 1950-
In this note we present some computational evidence to suggest that a version of Bellman's shortest path algorithm outperforms Treesort-Dijkstra's for a certain class of networks.
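For context, the edge-relaxation scheme at the core of Bellman's algorithm (Bellman-Ford) can be sketched as below; the toy graph is an illustrative assumption.

```python
def bellman_ford(n, edges, src):
    """Textbook Bellman-Ford: relax every edge n-1 times. Unlike
    Dijkstra's algorithm it tolerates negative edge weights, at O(n*m)
    worst-case cost."""
    INF = float("inf")
    dist = [INF] * n
    dist[src] = 0.0
    for _ in range(n - 1):
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    return dist

# Small directed graph: 0->1 (4), 0->2 (1), 2->1 (2), 1->3 (1).
edges = [(0, 1, 4.0), (0, 2, 1.0), (2, 1, 2.0), (1, 3, 1.0)]
dist = bellman_ford(4, edges, 0)
```

Which of the two algorithms wins in practice depends on graph density and edge-weight structure, which is precisely the empirical question the note studies.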
Hedge Algorithm and Subgradient Methods
2010-10-05T23:59:59.000Z
standard complexity results on subgradient algorithms allow us to derive optimal parameters ... the American Statistical Association, 58:13–30, 1963. In fact ...
Call for Papers 9th Annual European Symposium on Algorithms ESA 2001
Brodal, Gerth Stølting
Call for Papers. 9th Annual European Symposium on Algorithms ESA 2001, BRICS, University of Aarhus, Denmark, August 28–31, 2001. Scope: The Symposium covers research in the use, design, and analysis of ... programming. ESA 2001 is sponsored by BRICS and EATCS (the European Association for Theoretical Computer Science).
Dhar, Deepak
polymers. Sumedha and Deepak Dhar, Department of Theoretical Physics, Tata Institute of Fundamental ... algorithm for linear and branched polymers. There is a qualitative difference in the efficiency in these two ... for linear polymers, but as exp(cn^α) for branched (undirected and directed) polymers, where 0
An order-theoretic quantification of contextuality
Ian T. Durham
2014-09-23T23:59:59.000Z
In this essay, I develop order-theoretic notions of determinism and contextuality on domains and topoi. In the process, I develop a method for quantifying contextuality and show that the order-theoretic sense of contextuality is analogous to the sense embodied in the topos-theoretic statement of the Kochen-Specker theorem. Additionally, I argue that this leads to a relation between the entropy associated with measurements on quantum systems and the second law of thermodynamics. The idea that the second law has its origin in the ordering of quantum states and processes dates to at least 1958 and possibly earlier. The suggestion that the mechanism behind this relation is contextuality is made here for the first time.
Lecture 24: Parallel Algorithms I Topics: sort and matrix algorithms
Balasubramonian, Rajeev
Lecture 24: Parallel Algorithms I. Topics: sort and matrix algorithms. Processor Model: a single clock (asynchronous designs will require minor modifications); at each clock, processors receive input ... output. Control at Each Processor: each processor stores the minimum number it has seen
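The slide fragment about each processor storing the minimum it has seen suggests a synchronous reduction; a minimal sketch follows, simulated sequentially since the lecture's processor model is not fully recoverable here (the sample data are illustrative assumptions).

```python
def parallel_min(values):
    """Simulated synchronous min-reduction: at each 'clock', processor i
    combines its value with that of processor i+step, halving the number
    of active processors each round (O(log n) clocks on a PRAM)."""
    vals = list(values)
    step = 1
    while step < len(vals):
        # All of these pairwise combines happen in one parallel clock.
        for i in range(0, len(vals) - step, 2 * step):
            vals[i] = min(vals[i], vals[i + step])
        step *= 2
    return vals[0]

m = parallel_min([7, 3, 9, 1, 4, 8, 2, 6])
```

After log2(n) rounds, processor 0 holds the global minimum.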
analysis theoretical considerations: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Lorne A. Nelson; Saul Rappaport 2003-04-22 ... Theoretical Mobility Analysis of Ion Mobility Spectrometry (Physics Websites) Summary: Results ... Theoretical Mobility Analysis of Ion...
arrays theoretical analysis: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
are used today for a very wide ... Pearce, Tim C. ... Theoretical Mobility Analysis of Ion Mobility Spectrometry (Physics Websites) Summary: Results ... Theoretical Mobility Analysis of Ion...
ITP Steel: Theoretical Minimum Energies to Produce Steel for...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
ITP Steel: Theoretical Minimum Energies to Produce Steel for Selected Conditions, March 2000...
Theoretical studies of chemical reaction dynamics
Schatz, G.C. [Argonne National Laboratory, IL (United States)
1993-12-01T23:59:59.000Z
This collaborative program with the Theoretical Chemistry Group at Argonne involves theoretical studies of gas phase chemical reactions and related energy transfer and photodissociation processes. Many of the reactions studied are of direct relevance to combustion; others are selected because they provide important examples of special dynamical processes, or are of relevance to experimental measurements. Both classical trajectory and quantum reactive scattering methods are used for these studies, and the types of information determined range from thermal rate constants to state-to-state differential cross sections.
Resilient Control Systems Practical Metrics Basis for Defining Mission Impact
Craig G. Rieger
2014-08-01T23:59:59.000Z
"Resilience" describes how systems operate at an acceptable level of normalcy despite disturbances or threats. In this paper we first consider the cognitive, cyber-physical interdependencies inherent in critical infrastructure systems and how resilience differs from reliability in mitigating these risks. Terminology and a metrics basis are provided to integrate the cognitive, cyber-physical aspects that should be considered when defining solutions for resilience. A practical approach is taken to roll this metrics basis up to system integrity and business case metrics that establish "proper operation" and "impact." A notional chemical processing plant is the use case for demonstrating how the system integrity metrics can be applied to establish performance, and
The Functional Requirements and Design Basis for Information Barriers
Fuller, James L.
2012-05-01T23:59:59.000Z
This report summarizes the results of the Information Barrier Working Group workshop held at Sandia National Laboratory in Albuquerque, NM, February 2-4, 1999. This workshop was convened to establish the functional requirements associated with warhead radiation signature information barriers, to identify the major design elements of any such system or approach, and to identify a design basis for each of these major elements. Such information forms the general design basis to be used in designing, fabricating, and evaluating the complete integrated systems developed for specific purposes.
Simple basis for hydrogenic atoms in magnetic fields
Gallas, J.A.C.
1984-01-01T23:59:59.000Z
A field-dependent hydrogenic basis is used to obtain the evolution of the energy spectrum of atoms in strong (~10^8 G) and uniform magnetic fields. The basis allows results to be derived analytically. Numerical values for the first 13 excited states of hydrogen are found to be in very good agreement with much more elaborate calculations of Smith et al. and of Brandi. In addition, the possibility of having a remnant type of degeneracy in the presence of the magnetic field is investigated.
Formal Management Review of the Safety Basis Calculations Noncompliance
Altenbach, T J
2008-06-24T23:59:59.000Z
In Reference 1, LLNL identified a failure to adequately implement an institutional commitment concerning administrative requirements governing the documentation of Safety Basis calculations supporting the Documented Safety Analysis (DSA) process for LLNL Hazard Category 2 and Category 3 nuclear facilities. The AB Section has discovered that the administrative requirements of AB procedure AB-006, 'Safety Basis Calculation Procedure for Category 2 and 3 Nuclear Facilities', have not been uniformly or consistently applied in the preparation of Safety Basis calculations for LLNL Hazard Category 2 and 3 Nuclear Facilities. The SEP Associate Director has directed the AB Section to initiate a formal management review of the issue that includes, but is not necessarily limited to, the following topics: (1) the basis establishing AB-006 as a required internal procedure for Safety Basis calculations; (2) how requirements for Safety Basis calculations flow down in the institutional DSA process; (3) the extent to which affected Laboratory organizations have explicitly complied with the requirements of Procedure AB-006; (4) what alternative approaches LLNL organizations have used for Safety Basis calculations and how these alternative approaches compare with Procedure AB-006 requirements; and (5) how to reconcile Safety Basis calculations that were performed before Procedure AB-006 came into existence (i.e., August 2001). The management review also includes an extent-of-condition evaluation to determine how widespread the discovered issue is throughout Laboratory organizations responsible for operating nuclear facilities, and to determine if implementation of AB procedures other than AB-006 has been similarly affected. In Reference 2, Corrective Action 1 was established whereby the SEP Directorate will develop a plan for performing a formal management review of the discovered condition, including an extent-of-condition evaluation.
In Reference 3, a plan was provided to prepare a formal management review, satisfying Corrective Action 1. An AB-006 Working Group was formed, led by the AB Section, with representatives from the Nuclear Materials Technology Program (NMTP), the Radioactive and Hazardous Waste Management (RHWM) Division, and the Packaging and Transportation Safety (PATS) Program. The key action of this management review was for Working Group members to conduct an assessment of all safety basis calculations referenced in their respective DSAs. Those assessments were tasked to provide the following information: (1) list which safety basis calculations correctly follow AB-006 and therefore require no additional documentation; and (2) identify and list which safety basis calculations do not strictly follow AB-006; these include NMTP Engineering Notes, Engineering Safety Notes, calculations by organizations external to the nuclear facilities (such as Plant Engineering), subcontractor calculations, and other internally generated calculations. Each of these will be reviewed and listed on a memorandum with the facility manager's (or designee's) signature accepting that calculation for use in the DSA. If any of these calculations lack the signature of a technical reviewer, they must also be reviewed for technical content and that review documented per AB-006.
Theoretical Studies in Elementary Particle Physics
Collins, John C.; Roiban, Radu S
2013-04-01T23:59:59.000Z
This final report summarizes work at Penn State University from June 1, 1990 to April 30, 2012. The work was in theoretical elementary particle physics. Many new results in perturbative QCD, in string theory, and in related areas were obtained, with a substantial impact on the experimental program.
A Theoretical Framework for Chimera Domain Decomposition
Keeling, Stephen L.
A Theoretical Framework for Chimera Domain Decomposition. S. L. Keeling, Sverdrup Technology, Inc. UC Davis, May 2-4, 1997. The Chimera scheme is a domain decomposition method in which the geometry is divided into simply shaped regions; unlike other approaches [5], the Chimera method simplifies ...
History and Contributions of Theoretical Computer Science
Selman, Alan
History and Contributions of Theoretical Computer Science. John E. Savage, Department of Computer Science, Brown University, Providence, RI 02912 (savage@cs.brown.edu); Alan L. Selman, Department of Computer Science, University at Buffalo (cse.buffalo.edu); Carl Smith, Department of Computer Science, University of Maryland, College Park, MD 20741.
Hydraulic Geometry: Empirical Investigations and Theoretical Approaches
Eaton, Brett
Hydraulic Geometry: Empirical Investigations and Theoretical Approaches. B.C. Eaton, Department of Geography, The University of British Columbia, 1984 West Mall, Vancouver, BC, V6T 1Z2. Abstract: One approach to hydraulic geometry considers temporal changes at a single location due to variations ...
ARTICLE IN PRESS Theoretical Computer Science ( )
Fischer, Johannes
Theoretical Computer Science (article in press; contents available at ScienceDirect). A smaller suffix tree representation could fit in a faster memory, outweighing by far ...; ... could easily fit in the main memory of a desktop computer (as each DNA symbol needs just 2 bits).
GROUP-THEORETIC ORBIT DECIDABILITY ENRIC VENTURA
Ventura, Enric
Abstract: A recent collection of papers builds on the solution of the conjugacy problem by Bogopolski-Martino-Ventura in [2]. All the consequences to date ... Supported by the Spanish Government through grant number MTM2011-25955.
DIMACS Series in Discrete Mathematics and Theoretical Computer Science
Martin, Ralph R.
Concepts termed "Blocked Clauses" and "Generalized Autarkness" are outlined, together with "The improved 3-SAT algorithm."
Tetrahedral hp finite elements: Algorithms and flow simulations
Sherwin, S.J.; Karniadakis, G.E. [Brown Univ., Providence RI (United States)] [Brown Univ., Providence RI (United States)
1996-03-01T23:59:59.000Z
We present a new discretisation for the incompressible Navier-Stokes equations that extends spectral methods to three-dimensional complex domains consisting of tetrahedral subdomains. The algorithm is based on standard concepts of hp finite elements as well as tensorial spectral elements. This new formulation employs a hierarchical/modal basis constructed from a new apex co-ordinate system which retains a generalised tensor product. These properties enable the development of computationally efficient algorithms for use on standard unstructured finite volume meshes. A detailed analysis is presented that documents the stability and exponential convergence of the method, and several flow cases are simulated and compared with analytical and experimental results. 34 refs., 28 figs., 1 tab.
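The hierarchical/modal construction mentioned in this abstract can be illustrated in one dimension: two linear vertex modes plus polynomial "bubble" interior modes that vanish at both endpoints. The sketch below is a generic illustration using Legendre polynomials, an assumption on our part, not the authors' apex-coordinate tensor-product basis:

```python
import numpy as np
from numpy.polynomial import legendre

def modal_basis(p, x):
    """Evaluate a 1-D hierarchical modal basis of order p at points x.

    Vertex modes: (1-x)/2 and (1+x)/2, which interpolate the endpoints.
    Interior modes: (1-x)/2 * (1+x)/2 * P_{k-1}(x) for k = 1..p-1,
    which vanish at both endpoints (hence 'bubble' modes). Raising p
    simply appends modes, so the basis is hierarchical.
    """
    x = np.asarray(x, dtype=float)
    modes = [(1 - x) / 2, (1 + x) / 2]
    for k in range(1, p):
        # Coefficient vector selecting the Legendre polynomial P_{k-1}.
        Pk = legendre.legval(x, [0] * (k - 1) + [1])
        modes.append((1 - x) / 2 * (1 + x) / 2 * Pk)
    return np.array(modes)

x = np.linspace(-1, 1, 5)
B = modal_basis(4, x)
# Vertex modes take values 1 and 0 at the endpoints; interior modes vanish there.
```

Convergence in such schemes is obtained either by raising the order p ("p-refinement") or by adding elements ("h-refinement"), as the abstract's stability and exponential-convergence analysis presumes.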
Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium
M Weimar
1998-12-10T23:59:59.000Z
This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for, and an estimate of, the level of savings that can be obtained from a fixed-priced contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-priced contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D provides the summary of the approach for the DOE-Richland Operations Office (RL) estimate of the M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal that indicates that the current approach is still better than the alternative.
Molecular basis of infrared detection by snakes
Newman, Eric A.
Molecular basis of infrared detection by snakes. Elena O. Gracheva, Nicholas T. Ingolia, et al. Snakes detect infrared signals through a mechanism involving radiant heating of the pit organ, rather than ...; this sensory system for detecting infrared radiation enables them to generate a "thermal image" of predators or prey.
Revising Beliefs on the Basis of Evidence James P. Delgrande
Delgrande, James P.
Revising Beliefs on the Basis of Evidence. James P. Delgrande, School of Computing Science, Simon Fraser University, Burnaby, B.C., Canada V5A 1S6 (jim@cs.sfu.ca). Abstract: Most approaches to belief revision ... evidence is not categorical. In revision, one may circumvent this fact by assuming that, in some fashion or other, an agent ...
NEAT-IGERT Proposal C. THEMATIC BASIS FOR GROUP EFFORT
Islam, M. Saif
NEAT-IGERT Proposal. C. THEMATIC BASIS FOR GROUP EFFORT. The last decade has seen immense progress ... the research and teaching interests of fourteen investigators in seven different departments, ranging from ... to the actual structure and management of the group. The Ph.D.'s from this program will be well poised to embark ...
Solar Power Tower Design Basis Document, Revision 0
ZAVOICO,ALEXIS B.
2001-07-01T23:59:59.000Z
This report contains the design basis for a generic molten-salt solar power tower. A solar power tower uses a field of tracking mirrors (heliostats) that redirect sunlight onto a centrally located receiver mounted on top of a tower, which absorbs the concentrated sunlight. Molten nitrate salt, pumped from a tank at ground level, absorbs the sunlight, heating it up to 565 °C. The heated salt flows back to ground level into another tank where it is stored, then is pumped through a steam generator to produce steam and make electricity. This report establishes a set of criteria upon which the next generation of solar power towers will be designed. The report contains detailed criteria for each of the major systems: Collector System, Receiver System, Thermal Storage System, Steam Generator System, Master Control System, and Electric Heat Tracing System. The Electric Power Generation System and Balance of Plant discussions are limited to interface requirements. This design basis builds on the extensive experience gained from the Solar Two project and includes potential design innovations that will improve reliability and lower technical risk. This design basis document is a living document and contains several areas that require trade studies and design analysis to fully complete the design basis. Project- and site-specific conditions and requirements will also resolve open to-be-determined issues.
Market Split and Basis Reduction: Towards a Solution of the Cornuéjols-Dawande Instances
Utrecht, Universiteit
The application, discussed in the book by Williams [13], was related to the oil market in the UK. Cornuéjols and Dawande offered these market split instances as a challenge to the integer programming community.
Implementing Radial Basis Functions Using Bump-Resistor Networks
Harris, John G.
... performance using this formulation. An alternate strategy was used by Anderson, Platt and Kirk [1], who demonstrated an analog VLSI chip for radial basis functions (J. C. Anderson, J. C. Platt, and D. Kirk, "An analog VLSI chip for radial basis functions").
Cognitively Ergonomic Route Directions: A Potential Basis for the
Klippel, Alexander
Cognitively Ergonomic Route Directions: A Potential Basis for the OpenLS Navigation Service? Stefan Hansen, Alexander Klippel, Kai-Florian Richter. Overview: background; aspects of cognitively ergonomic route directions; ontologies and cognitive modelling (cognitive engineering).
Canister Storage Building (CSB) Design Basis Accident Analysis Documentation
CROWE, R.D.
1999-09-09T23:59:59.000Z
This document provides the detailed accident analysis to support HNF-3553, ''Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A,'' ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
Cold Vacuum Drying (CVD) Facility Design Basis Accident Analysis Documentation
PIEPHO, M.G.
1999-10-20T23:59:59.000Z
This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report, ''Cold Vacuum Drying Facility Final Safety Analysis Report (FSAR).'' All assumptions, parameters and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR.
PRICING COMMODITY DERIVATIVES WITH BASIS RISK AND PARTIAL OBSERVATIONS
Ludkovski, Mike
Pricing Commodity Derivatives with Basis Risk and Partial Observations. René Carmona and Michael Ludkovski. Abstract: We study the problem of pricing claims written on an over-the-counter energy contract. Because the underlying is illiquid, we work with an indifference pricing framework based on a liquid ...
CRAD, Safety Basis- Idaho Accelerated Retrieval Project Phase II
Broader source: Energy.gov [DOE]
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for a February 2006 Commencement of Operations assessment of the Safety Basis at the Idaho Accelerated Retrieval Project Phase II.
Canister Storage Building (CSB) Design Basis Accident Analysis Documentation
CROWE, R.D.; PIEPHO, M.G.
2000-03-23T23:59:59.000Z
This document provided the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report''. All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
CRAD, Safety Basis- Idaho MF-628 Drum Treatment Facility
Broader source: Energy.gov [DOE]
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for a May 2007 readiness assessment of the Safety Basis at the Advanced Mixed Waste Treatment Project.
Data mining with sparse grids using simplicial basis functions
Sminchisescu, Cristian
Recently we presented a new approach [18] to the classification problem arising in data mining. It is based on ... and scales linearly with the number of given data points. Finally we report on the quality of the classifier built by our new method.
Fine Entanglement and State Manipulation of Two Spin Coupled Qubits: A Lie Theoretic Overview
Roderick Vance
2015-02-18T23:59:59.000Z
By building on the work in Kuzmak & Tkachuk, "Preparation of quantum states of two spin-1/2 particles in the form of the Schmidt decomposition", Physics Letters A 378, pp. 1469-1474, which outlined the control of the degree of entanglement within this system, it is proven that any SU(4) state manipulation operator can be realised for this system using a sequence of pulsed magnetic fields in either two linearly independent directions, if the gyromagnetic ratios are unequal, or three directions, for equal gyromagnetic ratios. To achieve this goal, an elementary Lie-theoretic proof of the fact that the group of transformations generated by finite products of exponentials of a set of Lie algebra vectors is equal to the Lie group generated by the smallest Lie algebra containing those vectors is rewritten into an explicit algorithm. A numerical example as well as the proof of the algorithm's effectiveness is given.
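The closure fact invoked in this abstract (exponentials of a generator set reach the group of the smallest Lie algebra containing that set) suggests a simple numerical procedure: repeatedly adjoin commutators until the span stops growing. The sketch below is our generic illustration, using su(2) generators as an assumed test case rather than the paper's two-qubit system:

```python
import numpy as np

def lie_closure(generators, max_iter=10, tol=1e-10):
    """Return a basis (list of matrices) of the smallest Lie algebra
    containing the given generators, found by repeatedly adjoining
    commutators [A, B] = AB - BA until the span stops growing."""
    basis = list(generators)
    for _ in range(max_iter):
        grew = False
        for A in list(basis):
            for B in list(basis):
                C = A @ B - B @ A
                # Test linear independence by comparing the rank of the
                # flattened basis with and without the new commutator.
                stack = np.array([M.ravel() for M in basis] + [C.ravel()])
                if np.linalg.matrix_rank(stack, tol=tol) > len(basis):
                    basis.append(C)
                    grew = True
        if not grew:
            break
    return basis

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
closure = lie_closure([1j * sx, 1j * sy])
# The commutator of i*sigma_x and i*sigma_y is proportional to i*sigma_z,
# so two generators already close into the full 3-dimensional su(2).
```

For pulsed magnetic fields in two linearly independent directions, the analogous computation on the relevant two-spin Hamiltonians is what decides whether the full su(4) is reached.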
Theoretical summary of the 8th International Conference on Hadron Spectroscopy
Lipkin, H. J.
1999-11-15T23:59:59.000Z
The Constituent Quark Model has provided a remarkable description of the experimentally observed hadron spectrum but still has no firm theoretical basis. Attempts to provide a QCD justification discussed at Hadron99 include QCD sum rules, instantons, relativistic potential models, and the lattice. Phenomenological analyses to clarify outstanding problems, such as the nature of the scalar and pseudoscalar mesons and the low branching ratio for ψ′ → ρπ, were presented. New experimental puzzles include the observation of p̄p → φπ.
Real-time algorithm for robust coincidence search
Petrovic, T.; Vencelj, M.; Lipoglavsek, M.; Gajevic, J.; Pelicon, P. [Jozef Stefan Institute, Jamova cesta 39, Ljubljana, Slovenia and Cosylab d.d., Control System Laboratory, Teslova ulica 30, Ljubljana (Slovenia); Jozef Stefan Institute, Jamova cesta 39, Ljubljana (Slovenia)
2012-10-20T23:59:59.000Z
In in-beam γ-ray spectroscopy experiments, we often look for coincident detection events. Among every N events detected, coincidence search is naively of principal complexity O(N²). When we limit the approximate width of the coincidence search window, the complexity can be reduced to O(N), permitting the implementation of the algorithm into real-time measurements, carried out indefinitely. We have built an algorithm to find simultaneous events between two detection channels. The algorithm was tested in an experiment where coincidences between X and γ rays detected in two HPGe detectors were observed in the decay of ⁶¹Cu. Functioning of the algorithm was validated by comparing the calculated experimental branching ratio for EC decay with a theoretical calculation for 3 selected γ-ray energies for ⁶¹Cu decay. Our research opened a question on the validity of the adopted value of the total angular momentum of the 656 keV state (J^π = 1/2⁻) in ⁶¹Ni.
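The windowed O(N) search described in this abstract can be sketched as a two-pointer sweep over time-sorted event lists from the two channels; this is a generic illustration, not the authors' real-time implementation:

```python
def find_coincidences(t1, t2, window):
    """Return index pairs (i, j) with |t1[i] - t2[j]| <= window.

    Both t1 and t2 must be sorted timestamps. The left edge of the
    window only ever advances, so each event is compared only against
    the few events inside the window; for a fixed window width the
    sweep runs in O(N) rather than the naive O(N^2).
    """
    pairs = []
    j_start = 0
    for i, t in enumerate(t1):
        # Advance past channel-2 events that fell before the window.
        while j_start < len(t2) and t2[j_start] < t - window:
            j_start += 1
        j = j_start
        while j < len(t2) and t2[j] <= t + window:
            pairs.append((i, j))
            j += 1
    return pairs

# Example: two channels with one coincident pair inside a 0.5 us window.
print(find_coincidences([1.0, 5.0, 9.0], [1.2, 7.0], 0.5))  # → [(0, 0)]
```

Because the left pointer never moves backwards, the loop body executes a bounded number of times per event, which is what permits indefinite real-time operation.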
Algorithms for VLSI Circuit Optimization and GPU-Based Parallelization
Liu, Yifang
2010-07-14T23:59:59.000Z
(The journal model is IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems.) Because the problems are non-convex, theoretical optimality conditions do not hold; instead, the tendency to be trapped in a local optimum qualifies these methods more as a greedy approach. This research ... can be directly applied to DAG topologies, as opposed to the tree topologies in [12]. Experiments are performed on ISCAS85, ITC99, and IWLS 2005 benchmark circuits to compare our algorithm with a state-of-the-art previous work [1]. The results indicate ...
Theoretical and numerical studies of chaotic mixing
Kim, Ho Jun
2008-10-10T23:59:59.000Z
... defined using the same quadrature/collocation points [12]. SEM combines the geometrical flexibility of the finite element method (FEM) with the spectral convergence and low phase/dissipation error of the spectral method. For SEM it is assumed that the solution ... For computational efficiency this basis is typically chosen to be orthogonal in a weighted inner product. Convergence to the exact solution is achieved by increasing the order of the elements or the number of elements. If the boundary condition and solution ...
Mandayam, Narayan
Information-Theoretically Secret Key Generation for Fading Wireless Channels. Chunxuan Ye, Suhas Mathur, Alex Reznik, Yogendra Shah. ... as the basis for building practical secret key generation protocols between two entities. We begin ... boundaries and a heuristic log-likelihood-ratio estimate to achieve an improved secret key generation rate.
Braunstein, Samuel L.
2007-01-01T23:59:59.000Z
... purposes to show an exponential speedup compared to classical approaches [11]; hence quantum algorithms ... J. Phys. A: Math. Theor. 40 (2007), IOP Publishing, Journal of Physics A: Mathematical and Theoretical. Author affiliations include Sungkyunkwan University, Republic of Korea, and the Indian Statistical Institute, Kolkata 700 108, India.
Implications of Theoretical Ideas Regarding Cold Fusion
Afsar Abbas
1995-03-29T23:59:59.000Z
A lot of theoretical ideas have been floated to explain the so-called cold fusion phenomenon. I look at a large subset of these and study further physical implications of the concepts involved. I suggest that these can be tested by other independent physical means. Because of the significance of these signatures, experimentalists are urged to look for them. The results in turn will be important for a better understanding, and hence control, of the cold fusion phenomenon.
Field-theoretical treatment of neutrino oscillations
Grimus, Walter; Stockinger, P
2000-01-01T23:59:59.000Z
We discuss the field-theoretical approach to neutrino oscillations. This approach includes the neutrino source and detector processes and allows one to obtain the neutrino transition or survival probabilities as cross sections derived from the Feynman diagram of the combined source -- detection process. In this context, the neutrinos which are supposed to oscillate appear as propagators of the neutrino mass eigenfields, connecting the source and detection processes.
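In the standard two-flavour vacuum limit, the transition and survival probabilities that such a field-theoretical treatment must reproduce take the familiar textbook form (quoted for orientation; the paper itself works with the full source-propagation-detection amplitude):

```latex
P_{\nu_\alpha \to \nu_\beta} = \sin^2(2\theta)\,\sin^2\!\left(\frac{\Delta m^2 L}{4E}\right),
\qquad
P_{\nu_\alpha \to \nu_\alpha} = 1 - P_{\nu_\alpha \to \nu_\beta},
```

where θ is the mixing angle, Δm² the mass-squared splitting of the mass eigenfields, L the source-detector distance, and E the neutrino energy.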
Field-theoretical treatment of neutrino oscillations
W. Grimus; S. Mohanty; P. Stockinger
1999-04-15T23:59:59.000Z
We discuss the field-theoretical approach to neutrino oscillations. This approach includes the neutrino source and detector processes and allows one to obtain the neutrino transition or survival probabilities as cross sections derived from the Feynman diagram of the combined source -- detection process. In this context, the neutrinos which are supposed to oscillate appear as propagators of the neutrino mass eigenfields, connecting the source and detection processes.
Theoretical nuclear structure. Progress report for 1997
Nazarewicz, W.; Strayer, M.R.
1997-12-31T23:59:59.000Z
This research effort is directed toward theoretical support and guidance for the fields of radioactive ion beam physics, gamma-ray spectroscopy, and the interface between nuclear structure and nuclear astrophysics. The authors report substantial progress in all these areas. One measure of progress is publications and invited material. The research described here has led to more than 25 papers that are published, accepted, or submitted to refereed journals, and to 25 invited presentations at conferences and workshops.
Basis for NGNP Reactor Design Down-Selection
L.E. Demick
2010-08-01T23:59:59.000Z
The purpose of this paper is to identify the extent of technology development, design and licensing maturity anticipated to be required to credibly identify differences that could make a technical choice practical between the prismatic and pebble bed reactor designs. This paper does not address a business decision based on the economics, business model and resulting business case since these will vary based on the reactor application. The selection of the type of reactor, the module ratings, the number of modules, the configuration of the balance of plant and other design selections will be made on the basis of optimizing the Business Case for the application. These are not decisions that can be made on a generic basis.
Basis for NGNP Reactor Design Down-Selection
L.E. Demick
2011-11-01T23:59:59.000Z
The purpose of this paper is to identify the extent of technology development, design and licensing maturity anticipated to be required to credibly identify differences that could make a technical choice practical between the prismatic and pebble bed reactor designs. This paper does not address a business decision based on the economics, business model and resulting business case since these will vary based on the reactor application. The selection of the type of reactor, the module ratings, the number of modules, the configuration of the balance of plant and other design selections will be made on the basis of optimizing the Business Case for the application. These are not decisions that can be made on a generic basis.
The SU(3) Algebra in a Cyclic Basis
P. F. Harrison; R. Krishnan; W. G. Scott
2014-07-31T23:59:59.000Z
With the couplings between the eight gluons constrained by the structure constants of the su(3) algebra in QCD, one would expect that there should exist a special basis (or set of bases) for the algebra wherein, unlike in a Cartan-Weyl basis, all gluons interact identically (cyclically) with each other, explicitly on an equal footing. We report here particular such bases, which we have found in a computer search, and we indicate associated 3 × 3 representations. We conjecture that essentially all cyclic bases for su(3) may be obtained from these by making appropriate circulant transformations, and that cyclic bases may also exist for other su(n), n > 3.
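The structure constants this abstract refers to can be checked numerically from the standard Gell-Mann basis, using Tr(λ_a λ_b) = 2δ_ab so that f_abc = Tr([λ_a, λ_b] λ_c)/(4i). A minimal sketch (zero-based indices are our convention, and this builds the ordinary Cartan-Weyl-adjacent basis, not the cyclic bases the paper searches for):

```python
import numpy as np

# The eight Gell-Mann matrices, the standard su(3) basis.
l = np.zeros((8, 3, 3), dtype=complex)
l[0][0, 1] = l[0][1, 0] = 1
l[1][0, 1] = -1j; l[1][1, 0] = 1j
l[2][0, 0] = 1;   l[2][1, 1] = -1
l[3][0, 2] = l[3][2, 0] = 1
l[4][0, 2] = -1j; l[4][2, 0] = 1j
l[5][1, 2] = l[5][2, 1] = 1
l[6][1, 2] = -1j; l[6][2, 1] = 1j
l[7] = np.diag([1, 1, -2]) / np.sqrt(3)

def f(a, b, c):
    """Structure constant f_abc, from [l_a, l_b] = 2i f_abc l_c
    and the normalization Tr(l_a l_b) = 2 delta_ab."""
    comm = l[a] @ l[b] - l[b] @ l[a]
    return (np.trace(comm @ l[c]) / 4j).real

print(f(0, 1, 2))  # f_123 = 1
print(f(3, 4, 7))  # f_458 = sqrt(3)/2 ≈ 0.866
```

Any candidate cyclic basis can be validated the same way: transform the matrices, recompute f via the trace formula, and check that the coupling pattern is identical under cyclic relabelling.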
MIXING OF INCOMPATIBLE MATERIALS IN WASTE TANKS TECHNICAL BASIS DOCUMENT
SANDGREN, K.R.
2003-10-15T23:59:59.000Z
This document presents onsite radiological, onsite toxicological, and offsite toxicological consequences, risk binning, and control decision results for the mixing of incompatible materials in waste tanks representative accident. This technical basis document was developed to support the tank farms documented safety analysis (DSA) and describes the risk binning process, the technical basis for assigning risk bins, and the controls selected for the mixing of incompatible materials representative accident and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and/or technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls.
Technical basis document for the evaporator dump accident
GOETZ, T.G.
2003-03-22T23:59:59.000Z
This technical basis document was developed to support the documented safety analysis (DSA) and describes the risk binning process and the technical basis for assigning risk bins for the evaporator dump representative accident and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and/or technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, ''Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses'', as described in this report.
Mixing of incompatible materials in waste tanks technical basis document
SANDGREN, K.R.
2003-03-21T23:59:59.000Z
This technical basis document was developed to support the Tank Farms Documented Safety Analysis (DSA) and describes the risk binning process, the technical basis for assigning risk bins, and the controls selected for the mixing of incompatible materials representative accident and associated represented hazardous conditions. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSCs) and/or technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR level controls. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, ''Preparation Guide for US Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses'', as described in this report.
Geometric algorithms for reconfigurable structures
Benbernou, Nadia M
2011-01-01T23:59:59.000Z
In this thesis, we study three problems related to geometric algorithms of reconfigurable structures. In the first problem, strip folding, we present two universal hinge patterns for a strip of material that enable the ...
Randomized algorithms for reliable broadcast
Vaikuntanathan, Vinod
2009-01-01T23:59:59.000Z
In this thesis, we design randomized algorithms for classical problems in fault tolerant distributed computing in the full-information model. The full-information model is a strong adversarial model which imposes no ...
Bayesian inference algorithm on Raw
Luong, Alda
2004-01-01T23:59:59.000Z
This work explores the performance of Raw, a parallel hardware platform developed at MIT, running a Bayesian inference algorithm. Motivation for examining this parallel system is a growing interest in creating a self-learning ...
The Neural Basis of Financial Risk-Taking* Supplementary Material
Knutson, Brian
The Neural Basis of Financial Risk-Taking: Supplementary Material. Camelia M. Kuhnen and Brian Knutson. In each block, a rational risk-neutral agent should pick stock i if he/she expects to receive a dividend D conditional on the information set up to trial t-1, I_{t-1} = {D_i^t | t <= t-1, i in {Stock T, Stock R, Bond C}}. Let x_i = Pr{Stock ...
Use of Normalized Radial Basis Function in Hydrology
Cotar, Anton; Brilly, Mitja [Chair of Hydrology and Hydraulic Engineering, University of Ljubljana, Jamova 2, 1000 Ljubljana (Slovenia)
2008-11-13T23:59:59.000Z
In this article we present the use of normalized radial basis functions in hydrology for the prediction of missing runoff data for the river Reka. The method is based on a multidimensional normal distribution, where the standard deviation is first optimized and the whole prediction process is then learned on existing data [5]. We conclude that the method works very well for the middle range of the data, but not so well for extremes, because of its interpolating nature.
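As a hedged illustration of the normalized-RBF idea described above (not the authors' exact implementation; the synthetic series and the kernel width `sigma`, standing in for the optimized standard deviation, are assumptions), prediction at each query point is a normalized weighted average of the observed values:

```python
import numpy as np

def nrbf_predict(x_train, y_train, x_query, sigma=1.0):
    """Normalized Gaussian RBF prediction: a weighted average of the
    training targets, with weights that sum to one at every query point."""
    # Squared distances between each query point and each training point.
    d2 = (x_query[:, None] - x_train[None, :]) ** 2
    w = np.exp(-d2 / (2.0 * sigma ** 2))   # Gaussian kernel weights
    w /= w.sum(axis=1, keepdims=True)      # the normalization step
    return w @ y_train

# Toy "runoff" series with a gap: predict the two missing middle values.
x = np.array([0., 1., 2., 5., 6., 7.])     # days with observations
y = np.sin(x)                              # observed runoff (synthetic)
x_gap = np.array([3., 4.])                 # days with missing data
print(nrbf_predict(x, y, x_gap, sigma=1.5))
```

Because each prediction is a convex combination of observed values, the method can never leave the observed range, which matches the abstract's caveat that it interpolates well in the middle of the data but handles extremes poorly.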
Evolution of Safety Basis Documentation for the Fernald Site
Brown, T.; Kohler, S.; Fisk, P.; Krach, F.; Klein, B.
2004-03-01T23:59:59.000Z
The objective of the Department of Energy's (DOE) Fernald Closure Project (FCP), in suburban Cincinnati, Ohio, is to safely complete the environmental restoration of the Fernald site by 2006. Over 200 out of 220 total structures, at this DOE plant site which processed uranium ore concentrates into high-purity uranium metal products, have been safely demolished, including eight of the nine major production plants. Documented Safety Analyses (DSAs) for these facilities have gone through a process of simplification, from individual operating Safety Analysis Reports (SARs) to a single site-wide Authorization Basis containing nuclear facility Bases for Interim Operations (BIOs) to individual project Auditable Safety Records (ASRs). The final stage in DSA simplification consists of project-specific Integrated Health and Safety Plans (I-HASPs) and Nuclear Health and Safety Plans (N-HASPs) that address all aspects of safety, from the worker in the field to the safety basis requirements preserving the facility/activity hazard categorization. This paper addresses the evolution of Safety Basis Documentation (SBD), as DSAs, from production through site closure.
Journées MAS 2010, Bordeaux. Session: Stochastic Algorithms (Algorithmes Stochastiques)
Boyer, Edmond
In particular, several facets and applications of the MCO, MCOP, MCOG, ... algorithms will be ...
The Top Mass: Interpretation and Theoretical Uncertainties
André H. Hoang
2014-12-11T23:59:59.000Z
Currently the most precise LHC measurements of the top quark mass are determinations of the top quark mass parameter of Monte-Carlo (MC) event generators reaching uncertainties of well below $1$ GeV. However, there is an additional theoretical problem when using the MC top mass $m_t^{\rm MC}$ as an input for theoretical predictions, because a rigorous relation of $m_t^{\rm MC}$ to a renormalized field theory mass is, at the very strict level, absent. In this talk I show how - nevertheless - some concrete statements on $m_t^{\rm MC}$ can be deduced assuming that the MC generator behaves like a rigorous first principles QCD calculator for the observables that are used for the analyses. I give simple conceptual arguments showing that in this context $m_t^{\rm MC}$ can be interpreted like the mass of a heavy-light top meson, and that there is a conversion relation to field theory top quark masses that requires a non-perturbative input. The situation is in analogy to B physics where a similar relation exists between experimental B meson masses and field theory bottom masses. The relation gives a prescription how to use $m_t^{\rm MC}$ as an input for theoretical predictions in perturbative QCD. The outcome is that at this time an additional uncertainty of about $1$ GeV has to be accounted for. I discuss limitations of the arguments I give and possible ways to test them, or even to improve the current situation.
Theoretical aspects of relativistic spectral features
V. Karas
2006-09-23T23:59:59.000Z
The inner parts of black-hole accretion discs shine in X-rays, which can be monitored; the observed spectra can be used to trace strong gravitational fields at the place of emission and along the paths of light rays. This paper summarizes several aspects of how the spectral features are influenced by relativistic effects. We focus our attention on variable and broad emission lines, whose origin can be attributed to the presence of orbiting patterns: spots and spiral waves in the disc. We point out that the observed spectrum can determine parameters of the central black hole, provided the intrinsic local emissivity is constrained by theoretical models.
The double-beta decay: Theoretical challenges
Horoi, Mihai [Department of Physics, Central Michigan University, Mount Pleasant, Michigan, 48859 (United States)
2012-11-20T23:59:59.000Z
Neutrinoless double beta decay is a unique process that could reveal physics beyond the Standard Model of particle physics: if observed, it would prove that neutrinos are Majorana particles. In addition, it could provide information regarding the neutrino masses and their hierarchy, provided that reliable nuclear matrix elements can be obtained. The two-neutrino double beta decay is an associated process that is allowed by the Standard Model, and it has been observed for about ten nuclei. The present contribution gives a brief review of the theoretical challenges associated with these two processes, emphasizing the reliable calculation of the associated nuclear matrix elements.
Spinning Fluids: A Group Theoretical Approach
Dario Capasso; Debajyoti Sarkar
2014-04-07T23:59:59.000Z
We extend the Lagrangian formulation of relativistic non-abelian fluids in group-theoretic language. We propose a Mathisson-Papapetrou equation for spinning fluids in terms of the reduction limit of the de Sitter group. The equation we find correctly reduces to the one for non-spinning fluids. We study the application of our results to an FRW cosmological background for fluids with no vorticity, and to dusts in the vicinity of a Kerr black hole. We also explore two alternative approaches based on a group theoretical formulation of particle dynamics.
A Low-Power Imager and Compression Algorithms for a Brain-Machine Visual Prosthesis for the Blind
Sarpeshkar, Rahul
A low-power imager and compression algorithms for a brain-machine visual prosthesis for the blind, where energy efficiency and power are of paramount importance ... a few filter basis coefficients. Keywords: neural prosthesis, visual prosthesis, neural stimulation
Theoretical Predictions of Freestanding Honeycomb Sheets of Cadmium Chalcogenides
Zhou, Jia [ORNL]; Huang, Jingsong [ORNL]; Sumpter, Bobby G [ORNL]; Kent, Paul R [ORNL]; Xie, Yu [ORNL]; Terrones Maldonado, Humberto [ORNL]; Smith, Sean C [ORNL]
2014-01-01T23:59:59.000Z
Two-dimensional (2D) nanocrystals of CdX (X = S, Se, Te) typically grown by colloidal synthesis are coated with organic ligands. Recent experimental work on ZnSe showed that the organic ligands can be removed at elevated temperature, giving a freestanding 2D sheet of ZnSe. In this theoretical work, freestanding single- to few-layer sheets of CdX, each possessing a pseudo honeycomb lattice, are considered by cutting along all possible lattice planes of the bulk zinc blende (ZB) and wurtzite (WZ) phases. Using density functional theory, we have systematically studied their geometric structures, energetics, and electronic properties. A strong surface distortion is found to occur for all of the layered sheets, and yet all of the pseudo honeycomb lattices are preserved, giving unique types of surface corrugations and different electronic properties. The energetics, in combination with phonon mode calculations and molecular dynamics simulations, indicate that the syntheses of these freestanding 2D sheets could be selective, with the single- to few-layer WZ110, WZ100, and ZB110 sheets being favored. Through the GW approximation, it is found that all single-layer sheets have large band gaps falling into the ultraviolet range, while thicker sheets in general have reduced band gaps in the visible and ultraviolet range. On the basis of the present work and the experimental studies on freestanding double-layer sheets of ZnSe, we envision that the freestanding 2D layered sheets of CdX predicted herein are potential synthesis targets, which may offer tunable band gaps depending on their structural features including surface corrugations, stacking motifs, and number of layers.
Spent Nuclear Fuel (SNF) Project Design Basis Capacity Study
CLEVELAND, K.J.
2000-08-17T23:59:59.000Z
This study of the design basis capacity of process systems was prepared by Fluor Federal Services for the Spent Nuclear Fuel Project. The evaluation uses a summary level model of major process sub-systems to determine the impact of sub-system interactions on the overall time to complete fuel removal operations. The process system model configuration and time cycle estimates developed in the original version of this report have been updated as operating scenario assumptions evolve. The initial document released in Fiscal Year (FY) 1996 varied the number of parallel systems and transport systems over a wide range, estimating a conservative design basis for completing fuel processing in a two year time period. Configurations modeling planned operations were updated in FY 1998 and FY 1999. The FY 1998 Base Case continued to indicate that fuel removal activities at the basins could be completed in slightly over 2 years. Evaluations completed in FY 1999 were based on schedule modifications that delayed the start of KE Basin fuel removal, with respect to the start of KW Basin fuel removal activities, by 12 months. This delay resulted in extending the time to complete all fuel removal activities by 12 months. However, the results indicated that the number of Cold Vacuum Drying (CVD) stations could be reduced from four to three without impacting the projected time to complete fuel removal activities. This update of the design basis capacity evaluation, performed for FY 2000, evaluates a fuel removal scenario that delays the start of KE Basin activities such that staffing peaks are minimized. The number of CVD stations included in all cases for the FY 2000 evaluation is reduced from three to two, since the scenario schedule results in minimal time periods of simultaneous fuel removal from both basins. The FY 2000 evaluation also considers removal of Shippingport fuel from T Plant storage and transfer to the Canister Storage Building for storage.
RELEASE OF DRIED RADIOACTIVE WASTE MATERIALS TECHNICAL BASIS DOCUMENT
KOZLOWSKI, S.D.
2007-05-30T23:59:59.000Z
This technical basis document was developed to support RPP-23429, Preliminary Documented Safety Analysis for the Demonstration Bulk Vitrification System (PDSA) and RPP-23479, Preliminary Documented Safety Analysis for the Contact-Handled Transuranic Mixed (CH-TRUM) Waste Facility. The main document describes the risk binning process and the technical basis for assigning risk bins to the representative accidents involving the release of dried radioactive waste materials from the Demonstration Bulk Vitrification System (DBVS) and to the associated represented hazardous conditions. Appendices D through F provide the technical basis for assigning risk bins to the representative dried waste release accident and associated represented hazardous conditions for the Contact-Handled Transuranic Mixed (CH-TRUM) Waste Packaging Unit (WPU). The risk binning process uses an evaluation of the frequency and consequence of a given representative accident or represented hazardous condition to determine the need for safety structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls. A representative accident or a represented hazardous condition is assigned to a risk bin based on the potential radiological and toxicological consequences to the public and the collocated worker. Note that the risk binning process is not applied to facility workers because credible hazardous conditions with the potential for significant facility worker consequences are considered for safety-significant SSCs and/or TSR-level controls regardless of their estimated frequency. The controls for protection of the facility workers are described in RPP-23429 and RPP-23479. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses, as described below.
Scanning Tunneling Microscopy and Theoretical Study of Water...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Scanning Tunneling Microscopy and Theoretical Study of Water Adsorption on Fe3O4: Implications for Catalysis.
The SIGACT Theoretical Computer Science Genealogy: Preliminary Report
Parberry, Ian
The SIGACT Theoretical Computer Science Genealogy lists information on earned doctoral degrees (thesis adviser, university ...
Experimental and theoretical investigation of three-dimensional...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Experimental and theoretical investigation of three-dimensional nitrogen-doped aluminum clusters Al8N- and Al8N.
Interim safety basis for fuel supply shutdown facility
Brehm, J.R.; Deobald, T.L.; Benecke, M.W.; Remaize, J.A.
1995-05-23T23:59:59.000Z
This ISB, in conjunction with the new TSRs, will provide the required basis for interim operation, or restrictions on interim operations, and administrative controls for the Facility until a SAR is prepared in accordance with the new requirements. It is concluded that the risks associated with the current operational mode of the Facility (uranium closure, clean-up, and transition activities required for permanent closure) are within Risk Acceptance Guidelines. The Facility is classified as a Moderate Hazard Facility because of the potential for an unmitigated fire associated with the uranium storage buildings.
SRS FTF Section 3116 Basis for Determination | Department of Energy
Office of Environmental Management (EM)
Ecological Research Division Theoretical Ecology Program. [Contains abstracts
Not Available
1990-10-01T23:59:59.000Z
This report presents the goals of the Theoretical Ecology Program and abstracts of research in progress. Abstracts cover both theoretical research that began as part of the terrestrial ecology core program and new projects funded by the theoretical program begun in 1988. Projects have been clustered into four major categories: Ecosystem dynamics; landscape/scaling dynamics; population dynamics; and experiment/sample design.
Optimisation of Quantum Evolution Algorithms
Apoorva Patel
2015-03-04T23:59:59.000Z
Given a quantum Hamiltonian and its evolution time, the corresponding unitary evolution operator can be constructed in many different ways, corresponding to different trajectories between the desired end-points. A choice among these trajectories can then be made to obtain the best computational complexity and control over errors. As an explicit example, Grover's quantum search algorithm is described as a Hamiltonian evolution problem. It is shown that the computational complexity has a power-law dependence on error when a straightforward Lie-Trotter discretisation formula is used, and it becomes logarithmic in error when reflection operators are used. The exponential change in error control is striking, and can be used to improve many importance sampling methods. The key concept is to make the evolution steps as large as possible while obeying the constraints of the problem. In particular, we can understand why overrelaxation algorithms are superior to small step size algorithms.
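The power-law error of a straightforward Lie-Trotter discretisation can be checked numerically on any pair of non-commuting Hermitian generators; this toy sketch (Pauli matrices, not the paper's Grover Hamiltonian) shows the first-order Trotter error shrinking roughly like 1/n as the number of steps n grows:

```python
import numpy as np

def expm_herm(H, t=1.0):
    """exp(-i t H) for a Hermitian matrix H, via eigendecomposition."""
    w, v = np.linalg.eigh(H)
    return (v * np.exp(-1j * t * w)) @ v.conj().T

# Two non-commuting Hermitian "Hamiltonians" (Pauli X and Z).
X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

exact = expm_herm(X + Z)  # the target evolution exp(-i(X+Z))

for n in (10, 20, 40):
    # First-order Lie-Trotter split: (e^{-iX/n} e^{-iZ/n})^n
    step = expm_herm(X, 1.0 / n) @ expm_herm(Z, 1.0 / n)
    trotter = np.linalg.matrix_power(step, n)
    err = np.linalg.norm(trotter - exact, 2)
    print(n, err)  # error shrinks roughly like 1/n
```

Doubling the step count roughly halves the error, the power-law behavior the abstract contrasts with the logarithmic error control achievable with reflection operators.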
Five Quantum Algorithms Using Quipper
Safat Siddiqui; Mohammed Jahirul Islam; Omar Shehab
2014-06-18T23:59:59.000Z
Quipper is a recently released quantum programming language. In this report, we explore Quipper's programming framework by implementing Deutsch's, Deutsch-Jozsa's, Simon's, Grover's, and Shor's factoring algorithms. It will help new quantum programmers in an instructive manner. We chose Quipper especially for its usability and scalability, though it is an ongoing development project. We have also provided introductory concepts of Quipper and prerequisite backgrounds of the algorithms for readers' convenience. We have also written code for the oracles (black boxes or functions) of the individual algorithms and tested some of them using the Quipper simulator to verify correctness and introduce readers to the functionality. As Quipper 0.5 does not include matrix constructors for unitary operators larger than 4 x 4, we have also implemented 8 x 8 and 16 x 16 matrix constructors.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2005-02-25T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. Rev. 0 marks the first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database.
AN APPROACH TO SAFETY DESIGN BASIS DOCUMENTATION CHANGE CONTROL
RYAN GW
2008-05-15T23:59:59.000Z
This paper describes a safety design basis documentation change control process. The process identifies elements that can be used to manage the project/facility configuration during design evolution through the Initiation, Definition, and Execution project phases. The project phases addressed by the process are defined in US Department of Energy (DOE) Order (O) 413.3A, Program and Project Management for the Acquisition of Capital Assets, in support of DOE project Critical Decisions (CD). This approach has been developed for application to two Hanford Site projects in their early CD phases and is considered to be a key element of safety and design integration. As described in the work that has been performed, the purpose of change control is to maintain consistency among design requirements, the physical configuration, related facility documentation, and the nuclear safety basis during the evolution of the design. The process developed (1) ensures an appropriate level of rigor is applied at each project phase and (2) is considered to implement the requirements and guidance provided in DOE-STD-1189-2008, Integration of Safety into the Design Process. Presentation of this work is expected to benefit others in the DOE Complex that may be implementing DOE-STD-1189-2008 or managing nuclear safety documentation in support of projects in-process.
Information Theoretic Approach to Social Networks
Kafri, Oded
2014-01-01T23:59:59.000Z
We propose an information theoretic model for sociological networks. The model is a microcanonical ensemble of states and particles. The states are the possible pairs of nodes (i.e., people, sites, and the like) which exchange information. The particles are the energetic information bits. By analogy with a boson gas, we define for this network model: entropy, volume, pressure, and temperature. We show that these definitions are consistent with Carnot efficiency (the second law) and the ideal gas law. Therefore, if we have two large networks, hot and cold, with temperatures TH and TC, and we move Q energetic bits from the hot network to the cold network, we can gain W profit bits, where W is equal to or smaller than Q(1 - TC/TH), namely, the Carnot formula. In addition, it is shown that when two of these networks are merged the entropy increases. This explains the tendency of economic and social networks to merge.
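As a worked example of the Carnot bound on profit bits sketched above (using the standard Carnot efficiency 1 - TC/TH, where TH is the hotter temperature; the numbers are arbitrary, not from the paper):

```python
def carnot_profit_bits(q_bits, t_hot, t_cold):
    """Upper bound on 'profit' bits W when Q energetic bits are moved from a
    hot network (t_hot) to a cold one (t_cold): W <= Q * (1 - t_cold/t_hot)."""
    if t_hot <= t_cold:
        raise ValueError("t_hot must exceed t_cold")
    return q_bits * (1.0 - t_cold / t_hot)

# Moving 1000 energetic bits between networks at temperatures 4 and 1
# (arbitrary units) can yield at most 750 profit bits.
print(carnot_profit_bits(1000, 4.0, 1.0))  # -> 750.0
```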
Game Theoretic Methods for the Smart Grid
Saad, Walid; Poor, H. Vincent; Başar, Tamer
2012-01-01T23:59:59.000Z
The future smart grid is envisioned as a large-scale cyber-physical system encompassing advanced power, communications, control, and computing technologies. In order to accommodate these technologies, it will have to build on solid mathematical tools that can ensure an efficient and robust operation of such heterogeneous and large-scale cyber-physical systems. In this context, this paper is an overview of the potential of applying game theory for addressing relevant and timely open problems in three emerging areas that pertain to the smart grid: micro-grid systems, demand-side management, and communications. In each area, the state-of-the-art contributions are gathered and a systematic treatment, using game theory, of some of the most relevant problems for future power systems is provided. Future opportunities for adopting game theoretic methodologies in the transition from legacy systems toward smart and intelligent grids are also discussed. In a nutshell, this article provides a comprehensive account of the...
Algorithmic Aspects of Risk Management
Stehr, Mark-Oliver
Ashish Gehani (SRI International), Lee Zaniewski and K. Subramani (West Virginia University). Abstract: Risk analysis has been used to manage the security of systems ... configuration. This allows risk management to occur in real time and reduces the window of exposure to attack.
Algorithmic Thermodynamics John C. Baez
Tomkins, Andrew
John C. Baez, Department of Mathematics, University of California. ... in statistical mechanics. This viewpoint allows us to apply many techniques developed for use in thermodynamics ... and chemical potential. We derive an analogue of the fundamental thermodynamic relation dE = TdS - PdV + µdN ...
Adaptive protection algorithm and system
Hedrick, Paul (Pittsburgh, PA); Toms, Helen L. (Irwin, PA); Miller, Roger M. (Mars, PA)
2009-04-28T23:59:59.000Z
An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
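A minimal sketch of the idea described above, assuming a radial feeder modeled as a tree, a rank given by total carried (downstream) current, and an illustrative 25% margin on the trip set point; this is an assumed toy model, not the patented algorithm:

```python
# Radial feeder: each breaker protects a node; children hang below it.
children = {"main": ["feeder1", "feeder2"], "feeder1": ["lateral1"],
            "feeder2": [], "lateral1": []}
local_load = {"main": 0.0, "feeder1": 40.0,
              "feeder2": 60.0, "lateral1": 25.0}   # amps at each node

def downstream_load(node):
    """Trace power flow: a breaker carries its own load plus everything below it."""
    return local_load[node] + sum(downstream_load(c) for c in children[node])

def trip_setpoints(margin=1.25):
    """Rank breakers by carried current; trip point = carried current * margin."""
    loads = {n: downstream_load(n) for n in children}
    ranked = sorted(loads, key=loads.get, reverse=True)  # rank 0 = source breaker
    return {n: (rank, loads[n] * margin) for rank, n in enumerate(ranked)}

for name, (rank, trip) in trip_setpoints().items():
    print(f"{name}: rank {rank}, trip at {trip:.1f} A")
```

Re-running the flow trace after a topology change and recomputing the set points is what would make such a scheme "adaptive" in real time.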
Machine Learning: Foundations and Algorithms
Ben-David, Shai
... with accident prevention systems that are built using machine learning algorithms. Machine learning is also ... Machine learning tools are concerned with endowing programs with the ability to "learn" ... if the learning process succeeded or failed? The second goal of this book is to present several key machine ...
Algorithms for Determining ... Relationship to Gene Regulation
Duggal, Geet
Final Public Oral Examination, Doctor of Philosophy. Recent genome sequencing ... Analyses from them have shown that the 3D structure of DNA may be closely linked to genome function on the scale of the whole genome. Specifically, we designed algorithms ...
Electronic structure basis for the titanic magnetoresistance in WTe2
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Pletikosic, I.; Ali, Mazhar N.; Fedorov, A. V.; Cava, R. J.; Valla, T.
2014-11-01T23:59:59.000Z
The electronic structure basis of the extremely large magnetoresistance in layered non-magnetic tungsten ditelluride has been investigated by angle-resolved photoelectron spectroscopy. Hole and electron pockets of approximately the same size were found at the Fermi level, suggesting that carrier compensation should be considered the primary source of the effect. The material exhibits a highly anisotropic, quasi one-dimensional Fermi surface from which the pronounced anisotropy of the magnetoresistance follows. A change in the Fermi surface with temperature was found and a high-density-of-states band that may take over conduction at higher temperatures and cause the observed turn-on behavior of the magnetoresistance in WTe2 was identified.
The Gaussian Radial Basis Function Method for Plasma Kinetic Theory
Hirvijoki, Eero; Belli, Emily; Embréus, Ola
2015-01-01T23:59:59.000Z
A fundamental macroscopic description of a magnetized plasma is the Vlasov equation supplemented by the nonlinear inverse-square force Fokker-Planck collision operator [Rosenbluth et al., Phys. Rev., 107, 1957]. The Vlasov part describes advection in a six-dimensional phase space whereas the collision operator involves friction and diffusion coefficients that are weighted velocity-space integrals of the particle distribution function. The Fokker-Planck collision operator is an integro-differential, bilinear operator, and numerical discretization of the operator is far from trivial. In this letter, we describe a new approach to discretize the entire kinetic system based on an expansion in Gaussian Radial Basis functions (RBFs). This approach is particularly well-suited to treat the collision operator because the friction and diffusion coefficients can be analytically calculated. Although the RBF method is known to be a powerful scheme for the interpolation of scattered multidimensional data, Gaussian RBFs also...
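A minimal sketch of the building block the abstract describes, scattered-data interpolation with Gaussian RBFs (one-dimensional, with an assumed shape parameter `eps`; the letter's analytic friction and diffusion coefficient integrals are not reproduced here):

```python
import numpy as np

def rbf_interpolate(centers, values, query, eps=2.0):
    """Global Gaussian RBF interpolation: solve Phi c = f for the expansion
    coefficients, then evaluate the expansion at the query points."""
    def phi(r):
        return np.exp(-(eps * r) ** 2)    # Gaussian radial basis function
    A = phi(np.abs(centers[:, None] - centers[None, :]))  # interpolation matrix
    coeffs = np.linalg.solve(A, values)
    B = phi(np.abs(query[:, None] - centers[None, :]))
    return B @ coeffs

x = np.linspace(-2, 2, 15)
f = np.exp(-x ** 2)            # sample a smooth "distribution function"
xq = np.array([0.3, 1.1])
print(rbf_interpolate(x, f, xq))  # close to exp(-xq**2)
```

The Gaussian kernel is what makes the approach attractive for the collision operator: integrals of Gaussians against Gaussian weights have closed forms, so the coefficients can be computed analytically rather than by quadrature.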
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2009-08-28T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document.
Forward-Backward Greedy Algorithms for Atomic Norm ...
2014-04-21T23:59:59.000Z
allows for a wholesale redefinition of the current basis, seeking a new, smaller basis and a new set of coefficients such that the objective value is not degraded
Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"
Petzold, Linda R.
2012-10-25T23:59:59.000Z
Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) Theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) Dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) Development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; (4) Development of high-performance SSA algorithms.
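The SSA referenced above can be sketched for the simplest possible system, a single isomerization channel A -> B with an assumed rate constant k: each step draws an exponential waiting time from the total propensity and fires one reaction event (a toy illustration, not the project's accelerated algorithms):

```python
import random

def ssa_isomerization(n_a, k=0.5, t_end=10.0, seed=1):
    """Exact SSA trajectory for A -> B: the single propensity is a = k * n_A."""
    rng = random.Random(seed)
    t, history = 0.0, [(0.0, n_a)]
    while n_a > 0:
        a = k * n_a                 # total propensity of the system
        t += rng.expovariate(a)     # exponential waiting time to the next event
        if t > t_end:
            break
        n_a -= 1                    # fire the single reaction channel: A -> B
        history.append((t, n_a))
    return history

traj = ssa_isomerization(100)
print(traj[-1])  # final (time, remaining A molecules)
```

Because every individual event is simulated, the cost grows with the number of reaction firings, which is exactly the inefficiency that motivates tau-leaping.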
asexual genetic algorithm: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Andrews, Mark W. A simple algorithm for optimization and model fitting: AGA (asexual genetic algorithm). CERN Preprints. Summary: Context....
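As a rough illustration of the idea in this entry (not the author's published AGA), an "asexual" genetic algorithm replaces crossover with mutation alone. The elitist scheme, Gaussian mutation, and all parameter values below are our assumptions:

```python
import random

def asexual_ga(fitness, dim, pop_size=20, generations=200,
               sigma=0.1, seed=1):
    """Minimal mutation-only ('asexual') genetic algorithm sketch:
    each generation keeps the best individual and fills the rest of
    the population with Gaussian-mutated copies of it."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(dim)]
           for _ in range(pop_size)]
    for _ in range(generations):
        best = max(pop, key=fitness)
        pop = [best] + [
            [g + rng.gauss(0.0, sigma) for g in best]
            for _ in range(pop_size - 1)
        ]
    return max(pop, key=fitness)

# Maximize f(x) = -sum(x_i^2); the optimum is the origin.
sol = asexual_ga(lambda x: -sum(g * g for g in x), dim=3)
```

Because there is no recombination, each generation is a local random search around the current best individual; the elitism step guarantees the best fitness never decreases.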
Halverson, Thomas, E-mail: tom.halverson@ttu.edu; Poirier, Bill [Department of Chemistry and Biochemistry and Department of Physics, Texas Tech University, P.O. Box 41061, Lubbock, Texas 79409-1061 (United States)
2014-05-28T23:59:59.000Z
“Exact” quantum dynamics calculations of vibrational spectra are performed for two molecular systems of widely varying dimensionality (P{sub 2}O and CH{sub 2}NH), using a momentum-symmetrized Gaussian basis. This basis has been previously shown to defeat exponential scaling of computational cost with system dimensionality. The calculations were performed using the new “SWITCHBLADE” black-box code, which utilizes both dimensionally independent algorithms and massive parallelization to compute very large numbers of eigenstates for any fourth-order force field potential, in a single calculation. For both molecules considered here, many thousands of vibrationally excited states were computed, to at least an “intermediate” level of accuracy (tens of wavenumbers). Future modifications to increase the accuracy to “spectroscopic” levels, along with other potential future improvements of the new code, are also discussed.
The bidimensionality theory and its algorithmic applications
Hajiaghayi, MohammadTaghi
2005-01-01T23:59:59.000Z
Our newly developing theory of bidimensional graph problems provides general techniques for designing efficient fixed-parameter algorithms and approximation algorithms for NP-hard graph problems in broad classes of graphs. ...
Optimization Online - An Approximation Algorithm for Constructing ...
Artur Pessoa
2006-09-02T23:59:59.000Z
Sep 2, 2006 ... In this paper, we propose an approximation algorithm for the 2-bit Hamming prefix code problem. Our algorithm spends $O(n \\log^3 n)$ time to ...
Constant time algorithms in sparse graph model
Nguyen, Huy Ngoc, Ph. D. Massachusetts Institute of Technology
2010-01-01T23:59:59.000Z
We focus on constant-time algorithms for graph problems in bounded degree model. We introduce several techniques to design constant-time approximation algorithms for problems such as Vertex Cover, Maximum Matching, Maximum ...
Theoretical cosmic Type Ia supernova rates
R. Valiante; F. Matteucci; S. Recchi; F. Calura
2009-03-16T23:59:59.000Z
The aim of this work is the computation of the cosmic Type Ia supernova rates at very high redshifts (z>2). We adopt various progenitor models in order to predict the number of explosions in different scenarios for galaxy formation and to check whether it is possible to select the best delay time distribution model, on the basis of the available observations of Type Ia supernovae. We also computed the Type Ia supernova rate in typical elliptical galaxies of different initial luminous masses and the total amount of iron produced by Type Ia supernovae in each case. It emerges that: it is not easy to select the best delay time distribution scenario from the observational data and this is because the cosmic star formation rate dominates over the distribution function of the delay times; the monolithic collapse scenario predicts an increasing trend of the SN Ia rate at high redshifts whereas the predicted rate in the hierarchical scheme drops dramatically at high redshift; for the elliptical galaxies we note that the predicted maximum of the Type Ia supernova rate depends on the initial galactic mass. The maximum occurs earlier (at about 0.3 Gyr) in the most massive ellipticals, as a consequence of downsizing in star formation. We find that different delay time distributions predict different relations between the Type Ia supernova rate per unit mass at the present time and the color of the parent galaxies and that bluer ellipticals present higher supernova Type Ia rates at the present time.
Theoretical Model for Nanoporous Carbon Supercapacitors
Sumpter, Bobby G [ORNL; Meunier, Vincent [ORNL; Huang, Jingsong [ORNL
2008-01-01T23:59:59.000Z
The unprecedented anomalous increase in capacitance of nanoporous carbon supercapacitors at pore sizes smaller than 1 nm [Science 2006, 313, 1760.] challenges the long-held presumption that pores smaller than the size of solvated electrolyte ions do not contribute to energy storage. We propose a heuristic model to replace the commonly used model for an electric double-layer capacitor (EDLC) on the basis of an electric double-cylinder capacitor (EDCC) for mesopores (2-50 nm pore size), which becomes an electric wire-in-cylinder capacitor (EWCC) for micropores (< 2 nm pore size). Our analysis of the available experimental data in the micropore regime is confirmed by first-principles density functional theory calculations and reveals significant curvature effects for carbon capacitance. The EDCC (and/or EWCC) model allows the supercapacitor properties to be correlated with pore size, specific surface area, Debye length, electrolyte concentration and dielectric constant, and solute ion size. The new model not only explains the experimental data, but also offers a practical direction for the optimization of the properties of carbon supercapacitors through experiments.
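The capacitor expressions behind the EDCC/EWCC picture follow from textbook coaxial-capacitor geometry and can be written down in a few lines. The formulas and symbols below (pore radius b, double-layer thickness d, effective ion radius a0) follow the standard cylindrical-capacitor form; the sample numbers are purely illustrative, not fitted values from the paper:

```python
import math

EPS0 = 8.854e-12  # vacuum permittivity, F/m

def edcc_capacitance_per_area(eps_r, b, d):
    """Area-normalized capacitance (F/m^2) of an electric
    double-cylinder capacitor: C/A = eps_r*eps0 / (b*ln(b/(b-d))),
    with b the pore radius and d the double-layer thickness.
    As b -> infinity this recovers the planar EDLC limit eps_r*eps0/d."""
    return eps_r * EPS0 / (b * math.log(b / (b - d)))

def ewcc_capacitance_per_area(eps_r, b, a0):
    """Electric wire-in-cylinder capacitor (micropore regime):
    C/A = eps_r*eps0 / (b*ln(b/a0)), with a0 the effective radius
    of the ion 'wire' along the pore axis."""
    return eps_r * EPS0 / (b * math.log(b / a0))

# Illustrative numbers only: eps_r = 5, mesopore radius b = 5 nm,
# double-layer thickness d = 1 nm.
c_meso = edcc_capacitance_per_area(5.0, 5e-9, 1e-9)
```

The curvature effect mentioned in the abstract shows up directly: for finite b the EDCC value falls below the planar EDLC limit, and the two agree only as the pore radius grows large compared with d.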
Optimization Online - Efficient parallel coordinate descent algorithm ...
Ion Necoara
2012-11-02T23:59:59.000Z
Nov 2, 2012 ... Efficient parallel coordinate descent algorithm for convex optimization problems with separable constraints: application to distributed MPC.
Optimization Online - Efficient Algorithmic Techniques for Several ...
Mugurel Ionut Andreica
2008-10-23T23:59:59.000Z
Oct 23, 2008 ... Efficient Algorithmic Techniques for Several Multidimensional Geometric Data Management and Analysis Problems. Mugurel Ionut ...
An algorithm for minimization of quantum cost
Anindita Banerjee; Anirban Pathak
2010-04-09T23:59:59.000Z
A new algorithm for minimization of the quantum cost of quantum circuits has been designed. The quantum costs of several quantum circuits of particular interest (e.g., circuits for EPR pairs, quantum teleportation, the Shor code, and various quantum arithmetic operations) are computed using the proposed algorithm. The quantum costs obtained with the proposed algorithm are compared with existing results, and the algorithm is found to produce the minimum quantum cost in all cases.
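The quantum cost of a circuit is conventionally the sum of the costs of its elementary gates. A minimal sketch of that bookkeeping, using the common NCV-library convention (NOT, CNOT, and controlled-V/V† each cost 1; a Toffoli costs 5); the cost table is an assumption of ours, not the paper's:

```python
# Illustrative gate-cost table (NCV-library convention); these values
# are an assumption for demonstration, not taken from the paper.
GATE_COST = {"NOT": 1, "CNOT": 1, "CV": 1, "CVDG": 1, "TOFFOLI": 5}

def quantum_cost(circuit):
    """Quantum cost of a circuit = sum of the costs of its gates,
    where the circuit is given as a list of gate names."""
    return sum(GATE_COST[g] for g in circuit)

# A Toffoli decomposed into 2 CNOTs plus 3 controlled-V/V† gates has
# the same quantum cost as the monolithic Toffoli gate:
decomposed = ["CV", "CNOT", "CVDG", "CNOT", "CV"]
```

A cost-minimization algorithm like the one in this entry then searches over equivalent circuits (gate substitutions, template matching) for the realization with the smallest such sum.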
ALGORITHM & DOCUMENTATION: MINRES-QLP for Singular ...
SOU-CHENG T. CHOI, MICHAEL A. SAUNDERS
2013-01-12T23:59:59.000Z
Jan 12, 2013 ... ALGORITHM & DOCUMENTATION: MINRES-QLP for Singular Symmetric and Hermitian Linear Equations and Least-Squares Problems.
Order Module--NNSA Orders Self-Study Program Safety Basis Documentation
Broader source: Energy.gov (indexed) [DOE]
The familiar level of this module is divided into...
A Note on the Finite Element Method with Singular Basis Functions
Kaneko, Hideaki
... finite element analysis that incorporates singular element functions. A need for introducing some singular elements as part of the basis functions in certain finite element analyses arises out of ...
GOETZ, T.G.
2003-07-25T23:59:59.000Z
This technical basis document describes the risk binning process and the technical basis for assigning risk bins for the aboveground structure failure representative accident and associated represented hazardous conditions. This document was developed to support the documented safety analysis.
Field theoretic description of charge regulation interaction
Natasa Adzic; Rudolf Podgornik
2014-05-15T23:59:59.000Z
In order to find the exact form of the electrostatic interaction between two proteins with dissociable charge groups in aqueous solution, we have studied a model system composed of two macroscopic surfaces with charge dissociation sites immersed in a counterion-only ionic solution. Field-theoretic representation of the grand canonical partition function is derived and evaluated within the mean-field approximation, giving the Poisson-Boltzmann theory with the Ninham-Parsegian boundary condition. Gaussian fluctuations around the mean-field are then analyzed in the lowest order correction that we calculate analytically and exactly, using the path integral representation for the partition function of a harmonic oscillator with time-dependent frequency. The first order (one loop) free energy correction gives the interaction free energy that reduces to the zero-frequency van der Waals form in the appropriate limit but in general gives rise to a mono-polar fluctuation term due to charge fluctuation at the dissociation sites. Our formulation opens up the possibility to investigate the Kirkwood-Shumaker interaction in more general contexts where their original derivation fails.
Theoretical Tools for Large Scale Structure
J. R. Bond; L. Kofman; D. Pogosyan; J. Wadsley
1998-10-06T23:59:59.000Z
We review the main theoretical aspects of the structure formation paradigm which impinge upon wide angle surveys: the early universe generation of gravitational metric fluctuations from quantum noise in scalar inflaton fields; the well understood and computed linear regime of CMB anisotropy and large scale structure (LSS) generation; the weakly nonlinear regime, where higher order perturbation theory works well, and where the cosmic web picture operates, describing an interconnected LSS of clusters bridged by filaments, with membranes as the intrafilament webbing. Current CMB+LSS data favour the simplest inflation-based $\\Lambda$CDM models, with a primordial spectral index within about 5% of scale invariant and $\\Omega_\\Lambda \\approx 2/3$, similar to that inferred from SNIa observations, and with open CDM models strongly disfavoured. The attack on the nonlinear regime with a variety of N-body and gas codes is described, as are the excursion set and peak-patch semianalytic approaches to object collapse. The ingredients are mixed together in an illustrative gasdynamical simulation of dense supercluster formation.
Field theoretic simulations of polymer nanocomposites
Koski, Jason; Chao, Huikuan; Riggleman, Robert A., E-mail: rrig@seas.upenn.edu [Department of Chemical and Biomolecular Engineering, University of Pennsylvania, Philadelphia, Pennsylvania 19104 (United States)
2013-12-28T23:59:59.000Z
Polymer field theory has emerged as a powerful tool for describing the equilibrium phase behavior of complex polymer formulations, particularly when one is interested in the thermodynamics of dense polymer melts and solutions where the polymer chains can be accurately described using Gaussian models. However, there are many systems of interest where polymer field theory cannot be applied in such a straightforward manner, such as polymer nanocomposites. Current approaches for incorporating nanoparticles have been restricted to the mean-field level and often require approximations where it is unclear how to improve their accuracy. In this paper, we present a unified framework that enables the description of polymer nanocomposites using a field theoretic approach. This method enables straightforward simulations of the fully fluctuating field theory for polymer formulations containing spherical or anisotropic nanoparticles. We demonstrate our approach captures the correlations between particle positions, present results for spherical and cylindrical nanoparticles, and we explore the effect of the numerical parameters on the performance of our approach.
Theoretical descriptions of neutron emission in fission
Madland, D.G.
1990-01-01T23:59:59.000Z
Brief descriptions are given of the observables in neutron emission in fission together with early theoretical representations of two of these observables, namely, the prompt fission neutron spectrum N(E) and the average prompt neutron multiplicity {bar {nu}}{sub p}. This is followed by summaries, together with examples, of modern approaches to the calculation of these two quantities. Here, emphasis is placed upon the predictability and accuracy of the new approaches. In particular, the dependencies of N(E) and {bar {nu}}{sub p} upon the fissioning nucleus and its excitation energy are discussed. Then, recent work in multiple-chance fission and other recent work involving new measurements are presented and discussed. Following this, some properties of fission fragments are mentioned that must be better known and better understood in order to calculate N(E) and {bar {nu}}{sub p} with higher accuracy than is currently possible. In conclusion, some measurements are recommended for the purpose of benchmarking simultaneous calculations of neutron emission and gamma emission in fission. 32 refs., 26 figs.
Equivalence of Learning Algorithms Julien Audiffren1
Equivalence of Learning Algorithms. Julien Audiffren (CMLA, ENS Cachan) and Hachem Kadri. The aim is to introduce a concept of equivalence between machine learning algorithms. We define two notions of algorithmic equivalence, namely, weak and strong equivalence. These notions are of paramount importance ...
A DISTRIBUTED POWER CONTROL ALGORITHM FOR
Mitra, Debasis
A DISTRIBUTED POWER CONTROL ALGORITHM FOR BURSTY TRANSMISSIONS ON CELLULAR, SPREAD SPECTRUM NETWORKS. Abstract: We propose a distributed algorithm for power control in cellular, wideband networks, although its parameters differ for data.
A Genetic Algorithm Approach for Technology Characterization
Galvan, Edgar
2012-10-19T23:59:59.000Z
A Genetic Algorithm Approach for Technology Characterization. A thesis by Edgar Galvan, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of Master of Science, August 2012. Major subject: Mechanical Engineering.
Algorithmic Decision Theory and the Smart Grid
Algorithmic Decision Theory and the Smart Grid. Fred Roberts, Rutgers University. Slide excerpts: a conference on ADT, probably in Belgium in Fall 2013; many of the following ideas and planning date at least to World War II, but algorithms are needed to speed up and improve real-time decisions.
Strong coupling lattice QCD at finite temperature
Aarts, Gert
Slides by Ph. de Forcrand (Trento, March 2009) on strong coupling lattice QCD at finite temperature.
Final Report: Algorithms for Diffractive Microscopy
Elser, Veit
2010-10-08T23:59:59.000Z
The phenomenal coherence and brightness of x-ray free-electron laser light sources, such as the LCLS at SLAC, have the potential of revolutionizing the investigation of structure and dynamics in the nano-domain. However, this potential will go unrealized without a similar revolution in the way the data are analyzed. While it is true that the ambitious design parameters of the LCLS have been achieved, the prospects of realizing the most publicized goal of this instrument — the imaging of individual bio-particles — remains daunting. Even with 10{sup 12} photons per x-ray pulse, the feebleness of the scattering process represents a fundamental limit that no amount of engineering ingenuity can overcome. Large bio-molecules will scatter on the order of only 10{sup 3} photons per pulse into a detector with 10{sup 6} pixels; the diffraction “images” will be virtually indistinguishable from noise. Averaging such noisy signals over many pulses is not possible because the particle orientation cannot be controlled. Each noisy laser snapshot is thus confounded by the unknown viewpoint of the particle. Given the heavy DOE investment in LCLS and the profound technical challenges facing single-particle imaging, the final two years of this project have concentrated on this effort. We are happy to report that we succeeded in developing an extremely efficient algorithm that can reconstruct the shapes of particles at even the extremes of noise expected in future LCLS experiments with single bio-particles. Since this is the most important outcome of this project, the major part of this report documents this accomplishment. The theoretical techniques that were developed for the single-particle imaging project have proved useful in other imaging problems that are described at the end of the report.
Theoretical ecology: a successful first year and a bright future for a new journal
Hastings, Alan
2009-01-01T23:59:59.000Z
EDITORIAL: Theoretical ecology: a successful first year and ... volume 2 of Theoretical Ecology. Looking back, this has been ... focusing on theoretical ecology can play an expanding role ...
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2011-04-04T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at the U.S. Department of Energy (DOE) Hanford site. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with requirements of 10 CFR 835, the DOE Laboratory Accreditation Program, the DOE Richland Operations Office, DOE Office of River Protection, DOE Pacific Northwest Office of Science, and Hanford’s DOE contractors. The dosimetry system is operated by the Pacific Northwest National Laboratory (PNNL) Hanford External Dosimetry Program which provides dosimetry services to PNNL and all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since its inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving significant changes to all chapters in the document. 
Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. Maintenance and distribution of controlled hard copies of the manual by PNNL was discontinued beginning with Revision 0.2.
Experimental Progress Report--Modernizing the Fission Basis
Macri, R A
2012-02-17T23:59:59.000Z
In 2010 a proposal (Modernizing the Fission Basis) was prepared to 'resolve long standing differences between LANL and LLNL associated with the correct fission basis for analysis of nuclear test data'. A collaboration between LANL/LLNL/TUNL has been formed to implement this program by performing high precision measurements of neutron induced fission product yields as a function of incident neutron energy. This new program benefits from successful previous efforts utilizing mono-energetic neutrons undertaken by this collaboration. The first preliminary experiment in this new program was performed between July 24-31, 2011 at TUNL and had 2 main objectives: (1) demonstrating the capability to measure characteristic {gamma}-rays from specific fission products; (2) studying background effects from room scattered neutrons. In addition, a new dual fission ionization chamber has been designed and manufactured. The production design of the chamber is shown in the picture below. The first feasibility experiment to test this chamber is scheduled at the TUNL Tandem Laboratory from September 19-25, 2011. The dual fission chamber design will allow simultaneous exposure of absolute fission fragment emission rate detectors and the thick fission activation foils, positioned between the two chambers. This document formalizes the earlier experimental report demonstrating the experimental capability to make accurate (< 2 %) precision gamma-ray spectroscopic measurements of the excitation function of high fission product yields of the 239Pu(n,f) reaction (induced by quasi-monoenergetic neutrons). A second experiment (9/2011) introduced a compact double-sided fission chamber into the experimental arrangement, and so the relative number of incident neutrons striking the sample foil at each bombarding energy is limited only by statistics. (The number of incident neutrons often limits the experimental accuracy.)
Fission chamber operation was so exceptional that 2 more chambers have been fabricated; thus fission foils of different isotopes may be left in place with sample changes. The scope of the measurements is both greatly expanded and the results become vetted. Experiment 2 is not reported here. A continuing experiment has been proposed for February 2012.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2010-04-01T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at the U.S. Department of Energy (DOE) Hanford site. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with requirements of 10 CFR 835, the DOE Laboratory Accreditation Program, the DOE Richland Operations Office, DOE Office of River Protection, DOE Pacific Northwest Office of Science, and Hanford’s DOE contractors. The dosimetry system is operated by the Pacific Northwest National Laboratory (PNNL) Hanford External Dosimetry Program which provides dosimetry services to PNNL and all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since its inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL’s Electronic Records & Information Capture Architecture database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving significant changes to all chapters in the document. 
Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. Maintenance and distribution of controlled hard copies of the manual by PNNL was discontinued beginning with Revision 0.2.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2007-03-12T23:59:59.000Z
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL’s Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. Rev. 0 marks the first revision to be released through PNNL’s Electronic Records & Information Capture Architecture (ERICA) database. Revision numbers that are whole numbers reflect major revisions typically involving changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. Revision Log: Rev. 0 (2/25/2005) Major revision and expansion. Rev. 
0.1 (3/12/2007) Minor revision. Updated Chapters 5, 6 and 9 to reflect change in default ring calibration factor used in HEDP dose calculation software. Factor changed from 1.5 to 2.0 beginning January 1, 2007. Pages on which changes were made are as follows: 5.23, 5.69, 5.78, 5.80, 5.82, 6.3, 6.5, 6.29, 9.2.
Logo-like Learning of Basic Concepts of Algorithms -Having Fun with Algorithms
Logo-like Learning of Basic Concepts of Algorithms - Having Fun with Algorithms. Gerald Futschek. For learners who are not primarily interested in programming, the way of learning is highly influenced by the Logo style of learning to design efficient algorithms. Keywords: Logo-like learning, algorithms, group learning.
Efficient basis for the Dicke Model I: theory and convergence in energy
Miguel Angel Bastarrachea-Magnani; Jorge G. Hirsch
2013-12-06T23:59:59.000Z
An extended bosonic coherent basis has been shown by Chen to provide numerically exact solutions of the finite-size Dicke model. The advantages in employing this basis, as compared with the photon number (Fock) basis, are exhibited to be valid for a large region of the Hamiltonian parameter space by analyzing the converged values of the ground state energy.
Basi di Dati (Databases): DBMS Implementation, 9.1 DBMS ARCHITECTURE
Ghelli, Giorgio
9.1 DBMS architecture. Logical machine: SQL command manager, indexes, catalog, log. 9.2 Disk storage: a disk unit ... ms, 0.02 ms/KB; heads, disk pack, cylinder, track.
Reactivity accidents: A reassessment of the design-basis events
Diamond, D.J.; Hsu, Chia-Jung; Fitzpatrick, R.; Mirkovic, D.
1989-01-01T23:59:59.000Z
This paper summarizes a study of light water reactor event sequences which have been investigated for their potential to result in reactivity accidents with severe consequences. The study is an outgrowth of the concern which arose after the accident at Chernobyl and was recommended by the report of the US Nuclear Regulatory Commission (NRC) on the implications of that accident (NUREG-1251). The work was done for the NRC to reconfirm or bring into question previous judgments on reactivity events which must be analyzed for licensing. Event sequences were defined and then a probabilistic assessment was completed to estimate the frequency of the reactivity event and/or a deterministic calculation was completed to estimate the consequences to the fuel. Using the results of this analysis, analysis done by others, and a set of screening criteria developed within this study, judgments were made for each sequence as to its importance, and recommendations were made as to whether the NRC ought to be considering the important sequences as part of the design basis or for further, more detailed, investigation. 31 refs., 9 figs., 1 tab.
Climate Change: The Physical Basis and Latest Results
None
2011-10-06T23:59:59.000Z
The 2007 Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) concludes: "Warming in the climate system is unequivocal." Without the contribution of Physics to climate science over many decades, such a statement would not have been possible. Experimental physics enables us to read climate archives such as polar ice cores and so provides the context for the current changes. For example, today the concentration of CO2 in the atmosphere, the second most important greenhouse gas, is 28% higher than any time during the last 800,000 years. Classical fluid mechanics and numerical mathematics are the basis of climate models from which estimates of future climate change are obtained. But major instabilities and surprises in the Earth System are still unknown. These are also to be considered when the climatic consequences of proposals for geo-engineering are estimated. Only Physics will permit us to further improve our understanding in order to provide the foundation for policy decisions facing the global climate change challenge.
Structural basis of substrate discrimination and integrin binding by autotaxin
Hausmann, Jens; Kamtekar, Satwik; Christodoulou, Evangelos; Day, Jacqueline E.; Wu, Tao; Fulkerson, Zachary; Albers, Harald M.H.G.; van Meeteren, Laurens A.; Houben, Anna J.S.; van Zeijl, Leonie; Jansen, Silvia; Andries, Maria; Hall, Troii; Pegg, Lyle E.; Benson, Timothy E.; Kasiem, Mobien; Harlos, Karl; Vander Kooi, Craig W.; Smyth, Susan S.; Ovaa, Huib; Bollen, Mathieu; Morris, Andrew J.; Moolenaar, Wouter H.; Perrakis, Anastassis (Pfizer); (Leuven); (Oxford); (NCI-Netherlands); (Kentucky)
2013-09-25T23:59:59.000Z
Autotaxin (ATX, also known as ectonucleotide pyrophosphatase/phosphodiesterase-2, ENPP2) is a secreted lysophospholipase D that generates the lipid mediator lysophosphatidic acid (LPA), a mitogen and chemoattractant for many cell types. ATX-LPA signaling is involved in various pathologies including tumor progression and inflammation. However, the molecular basis of substrate recognition and catalysis by ATX and the mechanism by which it interacts with target cells are unclear. Here, we present the crystal structure of ATX, alone and in complex with a small-molecule inhibitor. We have identified a hydrophobic lipid-binding pocket and mapped key residues for catalysis and selection between nucleotide and phospholipid substrates. We have shown that ATX interacts with cell-surface integrins through its N-terminal somatomedin B-like domains, using an atypical mechanism. Our results define determinants of substrate discrimination by the ENPP family, suggest how ATX promotes localized LPA signaling and suggest new approaches for targeting ATX with small-molecule therapeutic agents.
Hanford Technical Basis for Multiple Dosimetry Effective Dose Methodology
Hill, Robin L.; Rathbone, Bruce A.
2010-08-01T23:59:59.000Z
The current method at Hanford for dealing with the results from multiple dosimeters worn during non-uniform irradiation is to use a compartmentalization method to calculate the effective dose (E). The method, as documented in the current version of Section 6.9.3 in the 'Hanford External Dosimetry Technical Basis Manual, PNL-MA-842,' is based on the compartmentalization method presented in the 1997 ANSI/HPS N13.41 standard, 'Criteria for Performing Multiple Dosimetry.' With the adoption of the ICRP 60 methodology in the 2007 revision to 10 CFR 835 came changes that have a direct effect on the compartmentalization method described in the 1997 ANSI/HPS N13.41 standard and, thus, on the method used at Hanford. The ANSI/HPS N13.41 standard committee is in the process of updating the standard, but the changes have not yet been approved. Moreover, the drafts of the revised standard tend to align more with ICRP 60 than with the changes specified in the 2007 revision to 10 CFR 835. Therefore, a revised method for calculating effective dose from non-uniform external irradiation using a compartmental method was developed using the tissue weighting factors and remainder organs specified in 10 CFR 835 (2007).
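Once doses have been assigned to compartments, the compartmental method reduces to a weighted sum over tissues. A minimal sketch, assuming the ICRP 60 tissue weighting factors that 10 CFR 835 (2007) adopts; the compartment doses in the example are hypothetical numbers, not Hanford data:

```python
# Illustrative compartmental effective-dose sum, E = sum_T w_T * H_T.
# The tissue weighting factors below are the ICRP 60 values adopted by
# 10 CFR 835 (2007); the example doses are hypothetical.

W_T = {
    "gonads": 0.20,
    "red bone marrow": 0.12, "colon": 0.12, "lung": 0.12, "stomach": 0.12,
    "bladder": 0.05, "breast": 0.05, "liver": 0.05,
    "esophagus": 0.05, "thyroid": 0.05,
    "skin": 0.01, "bone surface": 0.01,
    "remainder": 0.05,
}

def effective_dose(tissue_dose_mSv):
    """Weighted sum of tissue doses: E = sum over tissues of w_T * H_T."""
    return sum(W_T[t] * h for t, h in tissue_dose_mSv.items())

# Uniform 1 mSv to every tissue: since the weights sum to 1, E is ~1 mSv.
uniform = {t: 1.0 for t in W_T}
print(effective_dose(uniform))
```

For a non-uniform exposure, each dosimeter's result would be assigned to the tissues in its compartment before the sum is taken.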
A Joint Photoelectron Spectroscopy and Theoretical Study on the...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
of UCl5− and UCl5. We also performed systematic theoretical studies on all the uranium pentahalide complexes UX5− (X = F, Cl, Br, I). Chemical bonding analyses...
A Theoretical Study of Methanol Oxidation Catalyzed by Isolated...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
A Theoretical Study of Methanol Oxidation Catalyzed by Isolated Vanadia Clusters Supported on the (101) Surface of Anatase.
A Game-Theoretical Dynamic Model for Electricity Markets
Aswin Kannan
2010-10-06T23:59:59.000Z
Oct 6, 2010 ... Abstract: We present a game-theoretical dynamic model for competitive electricity markets. We demonstrate that the model can be used to ...
Theoretical Electron Density Distributions for Fe- and Cu-Sulfide...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Theoretical Electron Density Distributions for Fe- and Cu-Sulfide Earth Materials: A Connection between Bond Length, Bond ...
Theoretical overview on top pair production and single top production
Stefan Weinzierl
2012-01-19T23:59:59.000Z
In this talk I will give an overview of theoretical aspects of top quark physics. The focus lies on top pair production and single top production.
aggression theoretical considerations: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Theoretical and Experimental Considerations for Neutrinoless Double Beta Decay (CERN Preprints). Summary: In the first part of this work we show some...
Theoretical Study of the Structure, Stability and Oxygen Reduction...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Theoretical Study of the Structure, Stability and Oxygen Reduction Activity of Ultrathin Platinum Nanowires.
Are There Practical Approaches for Achieving the Theoretical...
Broader source: Energy.gov (indexed) [DOE]
combustion engine? Engine Overview - University of Wisconsin -- Engine Research Center Maximum Theoretical Efficiency Consider a unit mass of fuel and air entering the engine at...
Distributed Approaches for Determination of Reconfiguration Algorithm Termination
Lai, Hong-jian
Distributed Approaches for Determination of Reconfiguration Algorithm Termination. Pinak Tulpule ... architecture was used as a globally shared memory structure for detection of algorithm termination. This paper ... of algorithm termination. Keywords: autonomous agent-based reconfiguration, distributed algorithms, shipboard ...
Quintom Cosmology: Theoretical implications and observations
Yi-Fu Cai; Emmanuel N. Saridakis; Mohammad R. Setare; Jun-Qing Xia
2010-04-22T23:59:59.000Z
We review the paradigm of quintom cosmology. This scenario is motivated by observational indications that an equation of state of dark energy crossing the cosmological constant boundary is mildly favored, although the data are still far from conclusive. As a theoretical setup we introduce a no-go theorem in quintom cosmology, and based on it we discuss the conditions under which the equation of state of dark energy can realize the quintom scenario. The simplest quintom model can be achieved by introducing two scalar fields, one being quintessence and the other phantom. Based on the double-field quintom model we perform a detailed analysis of dark energy perturbations and discuss their effects on current observations. This type of scenario usually suffers from a manifest problem due to the existence of a ghost degree of freedom, and thus we review various alternative realizations of the quintom paradigm. Developments in particle physics and string theory provide potential clues indicating that a quintom scenario may be obtained from scalar systems with higher derivative terms, as well as from non-scalar systems. Additionally, we construct a quintom realization in the framework of braneworld cosmology, where the cosmic acceleration and the phantom divide crossing result from the combined effects of the field evolution on the brane and the competition between four- and five-dimensional gravity. Finally, we study the origins and fates of a universe in quintom cosmology. In a scenario with null energy condition violation one may obtain a bouncing solution at early times and therefore avoid the Big Bang singularity. Furthermore, if this occurs periodically, we obtain a realization of an oscillating universe. Lastly, we comment on several open issues in quintom cosmology and their connection to future investigations.
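The two-field construction can be made concrete with a toy calculation: for a canonical quintessence field plus a phantom field (whose kinetic term enters with a minus sign), the total equation of state crosses w = -1 exactly when the two kinetic energies are equal. A sketch with illustrative numbers, not a solved cosmology:

```python
# Toy double-field quintom: w = p/rho with a canonical (quintessence)
# kinetic energy K_q and a phantom kinetic energy K_p that contributes
# with a minus sign. All numbers are illustrative.

def w_total(K_q, K_p, V):
    """Equation of state for kinetic energies K_q, K_p and total
    potential energy V: rho = K_q - K_p + V, p = K_q - K_p - V."""
    rho = K_q - K_p + V
    p = K_q - K_p - V
    return p / rho

V = 1.0
print(w_total(0.2, 0.1, V))   # quintessence-dominated kinetics: w > -1
print(w_total(0.1, 0.1, V))   # equal kinetic energies: w = -1 (the crossing)
print(w_total(0.1, 0.2, V))   # phantom-dominated kinetics: w < -1
```

As the fields evolve and the kinetic energies exchange dominance, w passes smoothly through -1, which a single canonical or single phantom field cannot do.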
THEORETICAL SPECTRA OF TERRESTRIAL EXOPLANET SURFACES
Hu Renyu; Seager, Sara [Department of Earth, Atmospheric and Planetary Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139 (United States); Ehlmann, Bethany L., E-mail: hury@mit.edu [Division of Geological and Planetary Sciences, California Institute of Technology, Pasadena, CA 91125 (United States)
2012-06-10T23:59:59.000Z
We investigate spectra of airless rocky exoplanets with a theoretical framework that self-consistently treats reflection and thermal emission. We find that a silicate surface on an exoplanet is spectroscopically detectable via prominent Si-O features in the thermal emission bands of 7-13 {mu}m and 15-25 {mu}m. The variation of brightness temperature due to the silicate features can be up to 20 K for an airless Earth analog, and the silicate features are wide enough to be distinguished from atmospheric features with relatively high resolution spectra. The surface characterization thus provides a method to unambiguously identify a rocky exoplanet. Furthermore, identification of specific rocky surface types is possible with the planet's reflectance spectrum in near-infrared broad bands. A key parameter to observe is the difference between K-band and J-band geometric albedos (A{sub g}(K) - A{sub g}(J)): A{sub g}(K) - A{sub g}(J) > 0.2 indicates that more than half of the planet's surface has abundant mafic minerals, such as olivine and pyroxene, in other words primary crust from a magma ocean or high-temperature lavas; A{sub g}(K) - A{sub g}(J) < -0.09 indicates that more than half of the planet's surface is covered or partially covered by water ice or hydrated silicates, implying extant or past water on its surface. Also, surface water ice can be specifically distinguished by an H-band geometric albedo lower than the J-band geometric albedo. The surface features can be distinguished from possible atmospheric features with molecule identification of atmospheric species by transmission spectroscopy. We therefore propose that mid-infrared spectroscopy of exoplanets may detect rocky surfaces, and near-infrared spectrophotometry may identify ultramafic surfaces, hydrated surfaces, and water ice.
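The broadband criteria quoted above amount to a simple decision rule on geometric-albedo differences. A sketch with hypothetical albedo values (the thresholds are the ones stated in the abstract):

```python
# Decision rule built from the broadband criteria quoted above.
# The example albedo values are hypothetical.

def classify_surface(Ag_J, Ag_H, Ag_K):
    """Classify a rocky-surface type from J-, H-, K-band geometric albedos."""
    if Ag_K - Ag_J > 0.2:
        return "mafic (olivine/pyroxene-rich) surface"
    if Ag_K - Ag_J < -0.09:
        if Ag_H < Ag_J:          # H-band albedo below J-band singles out ice
            return "water ice"
        return "water ice or hydrated silicates"
    return "indeterminate from broadband colors alone"

print(classify_surface(Ag_J=0.10, Ag_H=0.25, Ag_K=0.35))
print(classify_surface(Ag_J=0.30, Ag_H=0.15, Ag_K=0.18))
```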
Theoretical Studies of Low Frequency Instabilities in the Ionosphere. Final Report
Dimant, Y. S.
2003-08-20T23:59:59.000Z
The objective of the current project is to provide a theoretical basis for better understanding of numerous radar and rocket observations of density irregularities and related effects in the lower equatorial and high-latitude ionospheres. The research focused on: (1) continuing efforts to develop a theory of nonlinear saturation of the Farley-Buneman instability; (2) revision of the kinetic theory of the electron-thermal instability at low altitudes; (3) studying the effects of strong anomalous electron heating in the high-latitude electrojet; (4) analytical and numerical studies of the combined Farley-Buneman/ion-thermal instabilities in the E-region ionosphere; (5) studying the effect of dust charging in Polar Mesospheric Clouds.
Microgenetic optimization algorithm for optimal wavefront shaping
Anderson, Benjamin R; Gunawidjaja, Ray; Eilers, Hergen
2015-01-01T23:59:59.000Z
One of the main limitations of utilizing optimal wavefront shaping in imaging and authentication applications is the slow speed of the optimization algorithms currently being used. To address this problem we develop a micro-genetic optimization algorithm ($\mu$GA) for optimal wavefront shaping. We test the abilities of the $\mu$GA and make comparisons to previous algorithms (iterative and simple-genetic) by using each algorithm to optimize transmission through an opaque medium. From our experiments we find that the $\mu$GA is faster than both the iterative and simple-genetic algorithms and that both genetic algorithms are more resistant to noise and sample decoherence than the iterative algorithm.
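The generic micro-genetic recipe (a tiny population, strict elitism, and random re-seeding whenever the population converges) can be sketched as follows. The bit-counting fitness function is a toy stand-in for the measured transmission, and the parameters are illustrative, not the authors' implementation:

```python
import random

def fitness(bits):
    """Toy objective (count of ones); a stand-in for measured transmission."""
    return sum(bits)

def mu_ga(n_bits=32, pop_size=5, generations=200, seed=1):
    """Micro-genetic algorithm sketch: tiny population, elitism, restarts."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        if fitness(pop[0]) > fitness(best):
            best = pop[0][:]
        # micro-GA restart: if the tiny population has converged,
        # keep the elite individual and re-seed the rest at random
        if all(ind == pop[0] for ind in pop):
            pop = [pop[0]] + [[rng.randint(0, 1) for _ in range(n_bits)]
                              for _ in range(pop_size - 1)]
            pop.sort(key=fitness, reverse=True)
        # elitism plus uniform crossover of the two fittest individuals
        children = [[rng.choice(pair) for pair in zip(pop[0], pop[1])]
                    for _ in range(pop_size - 1)]
        pop = [pop[0]] + children
    return best

print(fitness(mu_ga()))
```

The restart step replaces the mutation operator of a simple GA: diversity comes from periodic random re-seeding around the preserved elite, which is what keeps the population small and the per-generation cost low.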
TECHNICAL BASIS FOR VENTILATION REQUIREMENTS IN TANK FARMS OPERATING SPECIFICATIONS DOCUMENTS
BERGLIN, E J
2003-06-23T23:59:59.000Z
This report provides the technical basis for the high-efficiency particulate air (HEPA) filter limits for Hanford tank farm ventilation systems (sometimes known as heating, ventilation and air conditioning [HVAC]) defined in Process Engineering Operating Specification Documents (OSDs). This technical basis includes a review of the older technical bases and provides clarifications, as necessary, for limit revisions or their justification. This document provides an updated technical basis for tank farm ventilation systems related to Operating Specification Documents (OSDs) for double-shell tanks (DSTs), single-shell tanks (SSTs), double-contained receiver tanks (DCRTs), catch tanks, and various other miscellaneous facilities.
Theoretical & Experimental Studies of Elementary Particles
McFarland, Kevin
2012-10-04T23:59:59.000Z
Abstract High energy physics has been one of the signature research programs at the University of Rochester for over 60 years. The group has made leading contributions to experimental discoveries at accelerators and in cosmic rays and has played major roles in developing the theoretical framework that gives us our ``standard model'' of fundamental interactions today. This award from the Department of Energy funded a major portion of that research for more than 20 years. During this time, highlights of the supported work included the discovery of the top quark at the Fermilab Tevatron, the completion of a broad program of physics measurements that verified the electroweak unified theory, the measurement of three generations of neutrino flavor oscillations, and the first observation of a ``Higgs like'' boson at the Large Hadron Collider. The work has resulted in more than 2000 publications over the period of the grant. The principal investigators supported on this grant have been recognized as leaders in the field of elementary particle physics by their peers through numerous awards and leadership positions. Most notable among them is the APS W.K.H. Panofsky Prize awarded to Arie Bodek in 2004, the J.J. Sakurai Prizes awarded to Susumu Okubo and C. Richard Hagen in 2005 and 2010, respectively, the Wigner medal awarded to Susumu Okubo in 2006, and five principal investigators (Das, Demina, McFarland, Orr, Tipton) who received Department of Energy Outstanding Junior Investigator awards during the period of this grant. The University of Rochester Department of Physics and Astronomy, which houses the research group, provides primary salary support for the faculty and has waived most tuition costs for graduate students during the period of this grant. The group also benefits significantly from technical support and infrastructure available at the University which supports the work. 
The research work of the group has provided educational opportunities for graduate students, undergraduate students and high school students and teachers. Seventy-two graduate students received a Ph.D. in physics for research supported by this grant.
Theoretical Description of the Fission Process
Witold Nazarewicz
2009-10-25T23:59:59.000Z
Advanced theoretical methods and high-performance computers may finally unlock the secrets of nuclear fission, a fundamental nuclear decay that is of great relevance to society. In this work, we studied the phenomenon of spontaneous fission using the symmetry-unrestricted nuclear density functional theory (DFT). Our results show that many observed properties of fissioning nuclei can be explained in terms of pathways in multidimensional collective space corresponding to different geometries of fission products. From the calculated collective potential and collective mass, we estimated spontaneous fission half-lives, and good agreement with experimental data was found. We also predicted a new phenomenon of trimodal spontaneous fission for some transfermium isotopes. Our calculations demonstrate that fission barriers of excited superheavy nuclei vary rapidly with particle number, pointing to the importance of shell effects even at large excitation energies. The results are consistent with recent experiments where superheavy elements were created by bombarding an actinide target with 48-calcium; yet even at high excitation energies, sizable fission barriers remained. Not only does this reveal clues about the conditions for creating new elements, it also provides a wider context for understanding other types of fission. Understanding of the fission process is crucial for many areas of science and technology. Fission governs existence of many transuranium elements, including the predicted long-lived superheavy species. In nuclear astrophysics, fission influences the formation of heavy elements on the final stages of the r-process in a very high neutron density environment. Fission applications are numerous. Improved understanding of the fission process will enable scientists to enhance the safety and reliability of the nation’s nuclear stockpile and nuclear reactors. 
The deployment of a fleet of safe and efficient advanced reactors, which will also minimize radiotoxic waste and be proliferation-resistant, is a goal for the advanced nuclear fuel cycles program. While in the past the design, construction, and operation of reactors were supported through empirical trials, this new phase in nuclear energy production is expected to heavily rely on advanced modeling and simulation capabilities.
Theoretical Studies of Hydrogen Storage Alloys.
Jonsson, Hannes
2012-03-22T23:59:59.000Z
Theoretical calculations were carried out to search for lightweight alloys that can be used to reversibly store hydrogen in mobile applications, such as automobiles. Our primary focus was on magnesium-based alloys. While MgH{sub 2} is in many respects a promising hydrogen storage material, there are two serious problems which need to be solved in order to make it useful: (i) the binding energy of the hydrogen atoms in the hydride is too large, causing the release temperature to be too high, and (ii) the diffusion of hydrogen through the hydride is so slow that loading of hydrogen into the metal takes much too long. In the first year of the project, we found that the addition of ca. 15% of aluminum decreases the hydrogen binding energy to the target value of 0.25 eV, which corresponds to release of 1 bar hydrogen gas at 100 degrees C. Also, the addition of ca. 15% of transition metal atoms, such as Ti or V, reduces the formation energy of interstitial H-atoms, making the diffusion of H-atoms through the hydride more than ten orders of magnitude faster at room temperature. In the second year of the project, several calculations of alloys of magnesium with various other transition metals were carried out and systematic trends in stability, hydrogen binding energy and diffusivity established. Some calculations of ternary alloys and their hydrides were also carried out, for example of Mg{sub 6}AlTiH{sub 16}. It was found that the binding energy reduction due to the addition of aluminum and the increased diffusivity due to the addition of a transition metal are both effective at the same time. This material would in principle work well for hydrogen storage but it is, unfortunately, unstable with respect to phase separation. A search was made for a ternary alloy of this type where both the alloy and the corresponding hydride are stable. Promising results were obtained by including Zn in the alloy.
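The quoted correspondence between a 0.25 eV binding energy and 1 bar release at about 100 degrees C is consistent with a simple van 't Hoff estimate, assuming 0.25 eV per H atom (0.5 eV per released H2 molecule) and the standard entropy of H2 gas, roughly 130 J/(mol K); both assumptions are ours, not stated in the abstract:

```python
# Van 't Hoff estimate: the equilibrium pressure reaches 1 bar at
# T = dH/dS, where dH and dS are the desorption enthalpy and entropy.
# Assumptions (ours): 0.25 eV per H atom = 0.5 eV per H2 molecule, and
# dS dominated by the standard entropy of H2 gas (~130 J/(mol K)).

EV_TO_J_PER_MOL = 96485.0       # 1 eV per particle, in J/mol
dH = 0.5 * EV_TO_J_PER_MOL      # desorption enthalpy per mol H2
dS = 130.0                      # desorption entropy, J/(mol K)

T_release = dH / dS             # 1 bar release temperature, in K
print(T_release - 273.15)       # ~98 degrees C, matching the quoted target
```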
Theoretical Integration, Cooperation, and Theories as Tracking Devices
Theoretical Integration, Cooperation, and Theories as Tracking Devices. James Griesemer, Departments ... @ucdavis.edu. The theoretical problem of integrating evolution, heredity, development, and cognition has a long pedigree ... learning and incredulity at their peculiar visions of biological integration. Think of Herbert Spencer ...
Contributed article Neuro-fuzzy feature evaluation with theoretical analysis
De, Rajat Kumar
Contributed article: Neuro-fuzzy feature evaluation with theoretical analysis. R.K. De, J. Basak, S. ... Keywords: Fuzzy sets; Neural networks; Pattern recognition; Feature evaluation ... a fuzzy set theoretic feature evaluation index and a connectionist model for its evaluation along ... Science Ltd. All rights reserved.
Theoretical Determination of the Dissociation Energy of Molecular Hydrogen
Pachucki, Krzysztof
Theoretical Determination of the Dissociation Energy of Molecular Hydrogen. Konrad Piszczatowski ... of Chemistry, University of Warsaw, Pasteura 1, 02-093 Warsaw, Poland; Center for Theoretical ... Physics, University of Warsaw, Hoza 69, 00-681 Warsaw, Poland. Abstract: The dissociation energy ...
Atomic holography with electrons and x-rays: Theoretical and experimental studies
Len, P M [Univ. of California, Davis, CA (United States). Dept. of Physics
1997-06-01T23:59:59.000Z
Gabor first proposed holography in 1948 as a means to experimentally record the amplitude and phase of scattered wavefronts, relative to a direct unscattered wave, and to use such a "hologram" to directly image atomic structure. But imaging at atomic resolution has not yet been possible in the way he proposed. Much more recently, Szoeke in 1986 noted that photoexcited atoms can emit photoelectron or fluorescent x-ray wavefronts that are scattered by neighboring atoms, thus yielding direct and scattered wavefronts detected in the far field that can then be interpreted as holographic in nature. By now, several algorithms for directly reconstructing three-dimensional atomic images from electron holograms have been proposed (e.g. by Barton) and successfully tested against experiment and theory. Very recently, Tegze and Faigel, and Gog et al. have recorded experimental x-ray fluorescence holograms, and these are found to yield atomic images that are more free of the kinds of aberrations caused by the non-ideal emission or scattering of electrons. The basic principles of these holographic atomic imaging methods are reviewed, including illustrative applications of the reconstruction algorithms to both theoretical and experimental electron and x-ray holograms. The author also discusses the prospects and limitations of these newly emerging atomic structural probes.
Teodor Buchner; Jan Żebrowski; Grzegorz Gielerak
2010-07-13T23:59:59.000Z
Using a three-compartment model of blood pressure dynamics, we analyze theoretically the short-term cardiovascular variability: how the respiratory-related blood pressure fluctuations are buffered by appropriate heart rate changes, i.e. the respiratory sinus arrhythmia. The buffering is shown to be crucially dependent on the time delay between the stimulus (such as e.g. the inspiration onset) and the application of the control (the moment in time when the efferent response is delivered to the heart). This theoretical analysis shows that the buffering mechanism is effective only in the upright position of the body. It explains a paradoxical effect of enhancement of the blood pressure fluctuations by an ineffective control. Such a phenomenon was observed experimentally. Using the model, we discuss the blood pressure variability and heart rate variability under such clinical conditions as states of expressed adrenergic drive and the tilt-test during parasympathetic blockade or fixed-rate atrial pacing. From the results of the variability analysis we draw the conclusion that the control of blood pressure in the HF band does not directly receive the arterial baroreceptor input. We also discuss methodological issues of baroreflex sensitivity and sympathovagal balance assessment.
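The delay dependence of the buffering can be illustrated in a minimal linear picture (our simplification, not the three-compartment model itself): a sinusoidal perturbation countered by delayed feedback of gain g leaves a residual amplitude |1 - g e^(-i w tau)|, which vanishes for in-phase control but doubles the fluctuation when the delay is half a respiratory period, the same qualitative "enhancement by an ineffective control" described above:

```python
import cmath
import math

# Residual amplitude of a sinusoidal perturbation of angular frequency w
# after subtracting a feedback term of gain g applied with delay tau.
# This is a toy linear picture, not the three-compartment model.

def residual_amplitude(g, w, tau):
    """|1 - g*exp(-i*w*tau)|: what remains after delayed buffering."""
    return abs(1 - g * cmath.exp(-1j * w * tau))

w = 2 * math.pi * 0.25                    # ~0.25 Hz respiratory rhythm
print(residual_amplitude(1.0, w, 0.0))    # in-phase control: fully buffered
print(residual_amplitude(1.0, w, 2.0))    # half-period delay: amplitude doubled
```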
Fairness in optimal routing algorithms
Goos, Jeffrey Alan
1988-01-01T23:59:59.000Z
... (Member), Alberto Garcia-Diaz (Member), J. W. Howze (Head of Department). December 1988. ABSTRACT: Fairness in Optimal Routing Algorithms (December 1988). Jeffrey Alan Goos, B.S., University of Missouri. Co-Chairmen of Advisory Committee: Dr. Wei K. ... appreciation to my committee co-chairmen, Drs. Wei K. Tsai and Pierce E. Cantrell, for their support and advice. In addition, I would like to thank Drs. Jerry D. Gibson and Alberto Garcia-Diaz for their time and useful comments in reviewing this document ...
A theoretical analysis of the systematic errors in the Red Clump distance to the LMC
Maurizio Salaris; Susan Percival; Leo Girardi
2003-07-17T23:59:59.000Z
We present a detailed analysis of the uncertainty on the theoretical population corrections to the LMC Red Clump (RC) absolute magnitude, by employing a population synthesis algorithm to simulate theoretically the photometric and spectroscopic properties of RC stars, under various assumptions about the LMC Star Formation Rate (SFR) and Age Metallicity Relationship (AMR). A comparison of the outcome of our simulations with observations of evolved low-intermediate mass stars in the LMC allows one to select the combinations of SFR and AMR that bracket the real LMC star formation history, and to estimate the systematic error on the associated RC population corrections. The most accurate estimate of the LMC distance modulus from the RC method (adopting the OGLE-II reddening maps for the LMC) is obtained from the K-band magnitude, and provides (m-M)_{0, LMC}=18.47 +/-0.01(random) +0.05/-0.06(systematic). Distances obtained from the I-band, or from the multicolour RC technique which determines at the same time reddening and distance, both agree (albeit with a slightly larger error bar) with this value.
Jeongho Bang; Seung-Woo Lee; Chang-Woo Lee; Hyunseok Jeong
2014-09-17T23:59:59.000Z
We propose a quantum algorithm to obtain the lowest eigenstate of any Hamiltonian simulated by a quantum computer. The proposed algorithm begins with an arbitrary initial state of the simulated system. A finite series of transforms is iteratively applied to the initial state assisted with an ancillary qubit. The fraction of the lowest eigenstate in the initial state is then amplified up to $\simeq 1$. We prove in the theoretical analysis that our algorithm can faithfully work for any arbitrary Hamiltonian. Numerical analyses are also carried out. We first provide a numerical proof-of-principle demonstration with a simple Hamiltonian in order to compare our scheme with the so-called "Demon-like algorithmic cooling (DLAC)", recently proposed in [Nature Photonics 8, 113 (2014)]. The result shows good agreement with our theoretical analysis, exhibiting behavior comparable to the best "cooling" with the DLAC method. We then consider a random Hamiltonian model for further analysis of our algorithm. By numerical simulations, we show that the total number $n_c$ of iterations is proportional to ${\cal O}(D^{-1}\epsilon^{-0.19})$, where $D$ is the difference between the two lowest eigenvalues, and $\epsilon$ is an error defined as the probability that the finally obtained system state is in an unexpected (i.e. not the lowest) eigenstate.
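A classical analogue of "amplifying the lowest-eigenstate fraction" is power iteration on (cI - H), which exponentially suppresses every component of the state except the lowest eigenstate of H. This is only an illustration of the amplification idea, not the quantum circuit of the paper; the 2x2 Hamiltonian is arbitrary:

```python
# Classical sketch: power iteration on (c*I - H) amplifies the lowest
# eigenstate of H starting from an arbitrary initial vector, provided
# c exceeds the largest eigenvalue. H and c are illustrative choices.

def mat_vec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

def lowest_eigenvalue(H, c=10.0, iters=200):
    M = [[c - H[0][0], -H[0][1]], [-H[1][0], c - H[1][1]]]
    v = [1.0, 0.0]                       # arbitrary initial state
    for _ in range(iters):
        w = mat_vec(M, v)
        n = (w[0] ** 2 + w[1] ** 2) ** 0.5
        v = [w[0] / n, w[1] / n]         # renormalize each iteration
    Hv = mat_vec(H, v)
    return v[0] * Hv[0] + v[1] * Hv[1]   # Rayleigh quotient <v|H|v>

H = [[2.0, 1.0], [1.0, 3.0]]             # eigenvalues (5 +/- sqrt(5)) / 2
print(lowest_eigenvalue(H))              # ~1.38197, the lower eigenvalue
```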
... for CDMA Wireless Data. Mohammad Hayajneh, United Arab Emirates University, P.O. Box 17555, Al Ain, UAE; chaouki@ece.unm.edu; Walid Ibrahim, United Arab Emirates University, P.O. Box 17555, Al Ain, UAE, walidibr ...
Quantum random-walk search algorithm
Shenvi, Neil; Whaley, K. Birgitta [Department of Chemistry, University of California, Berkeley, California 94720 (United States); Kempe, Julia [Department of Chemistry, University of California, Berkeley, California 94720 (United States); Computer Science Division, EECS, University of California, Berkeley, California 94720 (United States); CNRS-LRI, UMR 8623, Universite de Paris-Sud, 91405 Orsay (France)
2003-05-01T23:59:59.000Z
Quantum random walks on graphs have been shown to display many interesting properties, including exponentially fast hitting times when compared with their classical counterparts. However, it is still unclear how to use these novel properties to gain an algorithmic speedup over classical algorithms. In this paper, we present a quantum search algorithm based on the quantum random-walk architecture that provides such a speedup. It will be shown that this algorithm performs an oracle search on a database of N items with $O(\sqrt{N})$ calls to the oracle, yielding a speedup similar to other quantum search algorithms. It appears that the quantum random-walk formulation has considerable flexibility, presenting interesting opportunities for development of other, possibly novel quantum algorithms.
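The $O(\sqrt{N})$ query count can be made concrete with the standard amplitude-amplification analysis; the walk-based algorithm of the abstract achieves a similar speedup, but the sketch below is the textbook two-dimensional Grover picture, not the quantum-walk construction itself:

```python
import math

# Standard amplitude-amplification analysis: after k iterations the
# success probability is sin^2((2k+1)*theta), theta = asin(1/sqrt(N)),
# so ~ (pi/4)*sqrt(N) oracle calls suffice.

def success_probability(N, k):
    """Probability of measuring the marked item after k iterations."""
    theta = math.asin(1.0 / math.sqrt(N))
    return math.sin((2 * k + 1) * theta) ** 2

def optimal_iterations(N):
    """k bringing (2k+1)*theta closest to pi/2, i.e. ~ (pi/4)*sqrt(N)."""
    return round(math.pi / (4 * math.asin(1.0 / math.sqrt(N))) - 0.5)

N = 10_000
k = optimal_iterations(N)      # 78 for N = 10,000, close to (pi/4)*sqrt(N)
print(k, success_probability(N, k))
```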
Enhanced algorithms for stochastic programming
Krishna, A.S.
1993-09-01T23:59:59.000Z
In this dissertation, we present some of the recent advances made in solving two-stage stochastic linear programming problems of large size and complexity. Decomposition and sampling are two fundamental components of techniques to solve stochastic optimization problems, and we describe improvements to the current techniques in both these areas. We studied different ways of using importance sampling techniques in the context of stochastic programming by varying the choice of approximation functions used in this method. We have concluded that approximating the recourse function by a computationally inexpensive piecewise-linear function is highly efficient. This reduces the problem from finding the mean of a computationally expensive function to finding that of a computationally inexpensive one. We then implemented various variance reduction techniques to estimate the mean of a piecewise-linear function. This method achieved similar variance reductions in orders of magnitude less time than when we applied variance-reduction techniques directly to the given problem. In solving a stochastic linear program, the expected value problem is usually solved before a stochastic solution is attempted, in part to speed up the algorithm by making use of the information obtained from its solution. We have devised a new decomposition scheme to improve the convergence of this algorithm.
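The surrogate idea can be sketched as a control-variate estimator: replace the expensive function by a cheap piecewise-linear approximation with a known mean, and sample only the low-variance difference. The functions and distribution below are toy choices of ours, not the dissertation's recourse problems:

```python
import math
import random

# Control-variate sketch: estimate E[f(X)] using a cheap piecewise-linear
# surrogate g with exactly known mean E[g], via E[f] = E[f - g] + E[g].
# Toy setup: f(x) = exp(x), X ~ Uniform(0, 1), g = the chord of f on [0, 1].

def f(x):
    return math.exp(x)           # stand-in for an expensive recourse function

def g(x):
    return 1.0 + (math.e - 1.0) * x   # linear surrogate through (0,1), (1,e)

E_g = 1.0 + (math.e - 1.0) / 2.0      # exact mean of g under Uniform(0, 1)

def estimate(n, use_control_variate, seed=0):
    rng = random.Random(seed)
    xs = [rng.random() for _ in range(n)]
    if use_control_variate:
        # sample only the small difference f - g, then add the exact E[g]
        return sum(f(x) - g(x) for x in xs) / n + E_g
    return sum(f(x) for x in xs) / n

exact = math.e - 1.0                  # exact E[exp(X)] for comparison
print(abs(estimate(2000, False) - exact))   # plain Monte Carlo error
print(abs(estimate(2000, True) - exact))    # control-variate error
```

Because f - g has a much smaller variance than f, the second estimator attains a comparable error with far fewer expensive evaluations, which is the effect described in the abstract.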
Algorithm for a microfluidic assembly line
Tobias M. Schneider; Shreyas Mandre; Michael P. Brenner
2011-01-19T23:59:59.000Z
Microfluidic technology has revolutionized the control of flows at small scales giving rise to new possibilities for assembling complex structures on the microscale. We analyze different possible algorithms for assembling arbitrary structures, and demonstrate that a sequential assembly algorithm can manufacture arbitrary 3D structures from identical constituents. We illustrate the algorithm by showing that a modified Hele-Shaw cell with 7 controlled flowrates can be designed to construct the entire English alphabet from particles that irreversibly stick to each other.
Optimization Online - Efficient Heuristic Algorithms for Maximum ...
T. G. J. Myklebust
2012-11-19T23:59:59.000Z
Nov 19, 2012 ... Efficient Heuristic Algorithms for Maximum Utility Product Pricing Problems. T. G. J. Myklebust(tmyklebu ***at*** csclub.uwaterloo.ca)
Efficient Algorithmic Techniques for Several Multidimensional ...
Mugurel
2008-10-23T23:59:59.000Z
Politehnica University of Bucharest, Romania, mugurel.andreica@cs.pub.ro. Abstract: In this paper I present several novel, efficient, algorithmic techniques for.
Exact Algorithms for Combinatorial Optimization Problems with ...
2012-03-30T23:59:59.000Z
using stochastic objective functions. Potential investment ... An algorithm to construct a minimum directed spanning tree in a directed network. In Developments ...
Parallel Interval Continuous Global Optimization Algorithms
abdeljalil benyoub
2002-07-19T23:59:59.000Z
Jul 19, 2002 ... Abstract: We theoretically study, on a distributed memory architecture, the parallelization of Hansen's algorithm for the continuous global ...
Algorithmic Cooling in Liquid State NMR
Yosi Atia; Yuval Elias; Tal Mor; Yossi Weinstein
2014-11-17T23:59:59.000Z
Algorithmic cooling is a method that employs thermalization to increase the qubits' purification level, namely it reduces the qubit-system's entropy. We utilized gradient ascent pulse engineering (GRAPE), an optimal control algorithm, to implement algorithmic cooling in liquid state nuclear magnetic resonance. Various cooling algorithms were applied onto the three qubits of 13C2-trichloroethylene, cooling the system beyond Shannon's entropy bound in several different ways. For example, in one experiment a carbon qubit was cooled by a factor of 4.61. This work is a step towards potentially integrating tools of NMR quantum computing into in vivo magnetic resonance spectroscopy.
algorithmics: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Algorithm Uncertainty Principles Mathematical Physics (arXiv) Summary: Previously, Bennett and Feynman asked if Heisenberg's uncertainty principle puts a limitation on a quantum...
A preliminary evaluation of a speed threshold incident detection algorithm
Kolb, Stephanie Lang
1996-01-01T23:59:59.000Z
Contents and figures: Event Scan Algorithm; Neural Network; California Algorithm #8 with Fuzzy Logic; Selected Algorithms; California Algorithm #10 Decision Tree; Speed/Flow Curve; McMaster Algorithm Template; Traffic Flow Relationships Applied in the Dynamic Model Algorithm; Multi-Layer Feed-Forward Neural Network; Membership...
Technical Basis and Considerations for DOE M 435.1-1 (Appendix A)
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1999-07-09T23:59:59.000Z
This appendix establishes the technical basis of the order revision process and of each of the requirements included in the revised radioactive waste management order.
An eigen-based high-order expansion basis for structured spectral ...
X. Zheng
2011-09-17T23:59:59.000Z
Aug 26, 2011 ... Sherwin–Karniadakis basis is smaller (see [1]). Considerations of sparsity have also prompted the use of orthogonalization in.
Sears, Brad; Mallory, Christy
2011-01-01T23:59:59.000Z
INSTITUTE Evidence of Employment Discrimination: Complaints ... Charles W. Gossett, Employment Discrimination in State and ... July 2011: Evidence of Employment Discrimination on the Basis
MANTOOTH, D.S.
2000-01-17T23:59:59.000Z
This report documents the technical basis by which the workplace air monitoring and sampling program is operated in the 324 and 327 Buildings.
St Andrews, University of
The Need for Language Repair; The Reformation Algorithm; Discussion. Reformation: ... University of Edinburgh / University of St Andrews, 27th November 2013. Outline: 1. The Need for Language Repair; 2. The Reformation Algorithm; 3. Discussion.
MULTI-CRITERIA SEARCH ALGORITHM: AN EFFICIENT APPROXIMATE K-NN ALGORITHM FOR IMAGE RETRIEVAL
Mehdi ... k-NN search in large scale image databases, based on top-k multi-criteria search techniques. The method ... retrieval, storage requirements and update cost. The search algorithm delivers approximate results
Stojmenovic, Ivan
Operating Systems CSI3131, Lab 4, Winter 2011: Page Replacement Algorithms. Objective: to use a simulation for evaluating various page replacement algorithms studied in class. Description: ... to compare the performance of each page replacement algorithm. The constructor of this class contains ...
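The lab's simulator class is not shown in the snippet, but the policies such a simulation compares can be sketched compactly. A minimal, self-contained sketch (function names are illustrative, not the lab's actual API) that counts page faults under FIFO and LRU replacement:

```python
from collections import OrderedDict

def fifo_faults(refs, frames):
    # Count page faults under FIFO replacement: evict the oldest resident page.
    mem, queue, faults = set(), [], 0
    for p in refs:
        if p not in mem:
            faults += 1
            if len(mem) == frames:
                mem.discard(queue.pop(0))  # evict oldest arrival
            mem.add(p)
            queue.append(p)
    return faults

def lru_faults(refs, frames):
    # Count page faults under LRU replacement: evict the least recently used page.
    mem, faults = OrderedDict(), 0
    for p in refs:
        if p in mem:
            mem.move_to_end(p)  # mark as most recently used
        else:
            faults += 1
            if len(mem) == frames:
                mem.popitem(last=False)  # evict least recently used
            mem[p] = True
    return faults
```

On the classic reference string 1,2,3,4,1,2,5,1,2,3,4,5 with 3 frames, FIFO incurs 9 faults and LRU 10; giving FIFO 4 frames yields 10 faults, illustrating Belady's anomaly.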
Advanced Fuel Cycle Economic Tools, Algorithms, and Methodologies
David E. Shropshire
2009-05-01T23:59:59.000Z
The Advanced Fuel Cycle Initiative (AFCI) Systems Analysis supports engineering economic analyses and trade-studies, and requires a requisite reference cost basis to support adequate analysis rigor. In this regard, the AFCI program has created a reference set of economic documentation. The documentation consists of the “Advanced Fuel Cycle (AFC) Cost Basis” report (Shropshire, et al. 2007), “AFCI Economic Analysis” report, and the “AFCI Economic Tools, Algorithms, and Methodologies Report.” Together, these documents provide the reference cost basis, cost modeling basis, and methodologies needed to support AFCI economic analysis. The application of the reference cost data in the cost and econometric systems analysis models will be supported by this report. These methodologies include: the energy/environment/economic evaluation of nuclear technology penetration in the energy market—domestic and internationally—and impacts on AFCI facility deployment, uranium resource modeling to inform the front-end fuel cycle costs, facility first-of-a-kind to nth-of-a-kind learning with application to deployment of AFCI facilities, cost tradeoffs to meet nuclear non-proliferation requirements, and international nuclear facility supply/demand analysis. The economic analysis will be performed using two cost models. VISION.ECON will be used to evaluate and compare costs under dynamic conditions, consistent with the cases and analysis performed by the AFCI Systems Analysis team. Generation IV Excel Calculations of Nuclear Systems (G4-ECONS) will provide static (snapshot-in-time) cost analysis and will provide a check on the dynamic results. In future analysis, additional AFCI measures may be developed to show the value of AFCI in closing the fuel cycle. 
Comparisons can show AFCI in terms of reduced global proliferation (e.g., reduction in enrichment), greater sustainability through preservation of a natural resource (e.g., reduction in uranium ore depletion), value from weaning the U.S. from energy imports (e.g., measures of energy self-sufficiency), and minimization of future high level waste (HLW) repositories world-wide.
Naming, Reference, and Sense: Theoretical and Practical Attitudes at Odds
Norman, Andrew
Naming, Reference, and Sense: Theoretical and Practical Attitudes at Odds ANDREW NORMAN Northwestern University Three questions lie at the center of the philosophical controversy over proper names: 1) Do proper names have a sense? 2) If so...
Theoretical Minimum Energy Use of a Building HVAC System
Tanskyi, O.
2011-01-01T23:59:59.000Z
This paper investigates the theoretical minimum energy use required by the HVAC system in a particular code-compliant office building. This limit might be viewed as the "Carnot Efficiency" for an HVAC system. It assumes that all ventilation and air...
Shrink fit effects on rotordynamic stability: experimental and theoretical study
Jafri, Syed Muhammad Mohsin
2007-09-17T23:59:59.000Z
This dissertation presents an experimental and theoretical study of subsynchronous rotordynamic instability in rotors caused by interference and shrink fit interfaces. The experimental studies show the presence of strong unstable subsynchronous...
Theoretical investigation of energy-trapping mechanism by atomic systems
Srivastava, Rajendra P.
1978-06-01T23:59:59.000Z
The theoretical results are presented here in detail for the atomic device proposed earlier by the author. This device absorbs energy from a continuous radiation source and stores some of it with atoms in metastable states ...
Photoelectron Spectroscopy and Theoretical Studies of UF5 - and...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Photoelectron Spectroscopy and Theoretical Studies of UF5− and UF6−. Abstract: The UF5− and UF6− anions are produced using electrospray...
Theoretical Analysis for Obtaining Physical Properties of Composite Electrodes
Weidner, John W.
Theoretical Analysis for Obtaining Physical Properties of Composite Electrodes. Parthasarathy M..., 2003. Composite electrodes, composed of a mixture of electronically and ionically conducting materials ... and electronic conductivities of Nafion/carbon composites. Shibuya et al. [1] used an interdigitated array ...
Learning by Game-Building in Theoretical Computer Science Education
Hutchins-Korte, Laura
2008-01-01T23:59:59.000Z
It has been suggested that theoretical computer science (TCS) suffers more than average from a lack of intrinsic motivation. The reasons provided in the literature include the difficulty of the subject, lack of relevance ...
Neutron-Antineutron Oscillations: Theoretical Status and Experimental Prospects
Phillips, D. G.; Snow, W. M.; Babu, K.; Banerjee, S.; Baxter, D. V.; Berezhiani, Z.; Bergevin, M.; Bhattacharya, S.; Brooijmans, G.; Castellanos, L.; et al.,
2014-10-04T23:59:59.000Z
This paper summarizes the relevant theoretical developments, outlines some ideas to improve experimental searches for free neutron-antineutron oscillations, and suggests avenues for future improvement in the experimental sensitivity.
A system theoretic approach to design safety into medical device
Song, Qingyang S.M. Massachusetts Institute of Technology
2012-01-01T23:59:59.000Z
The goal of this thesis is to investigate and demonstrate the application of a systems approach to medical device safety in China. Professor Leveson has developed an accident modeling framework called STAMP (Systems Theoretic ...
Theoretical study of syngas hydrogenation to methanol on the...
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Theoretical study of syngas hydrogenation to methanol on the polar Zn-terminated ZnO(0001) surface.
Theoretical Assessment of 178m2Hf De-Excitation
Hartouni, E P; Chen, M; Descalle, M A; Escher, J E; Loshak, A; Navratil, P; Ormand, W E; Pruet, J; Thompson, I J; Wang, T F
2008-10-06T23:59:59.000Z
This document contains a comprehensive literature review in support of the theoretical assessment of the {sup 178m2}Hf de-excitation, as well as a rigorous description of controlled energy release from an isomeric nuclear state.
CRITICALITY SAFETY CONTROLS AND THE SAFETY BASIS AT PFP
Kessler, S
2009-04-21T23:59:59.000Z
With the implementation of DOE Order 420.1B, Facility Safety, and DOE-STD-3007-2007, 'Guidelines for Preparing Criticality Safety Evaluations at Department of Energy Non-Reactor Nuclear Facilities', a new requirement was imposed that all criticality safety controls be evaluated for inclusion in the facility Documented Safety Analysis (DSA) and that the evaluation process be documented in the site Criticality Safety Program Description Document (CSPDD). At the Hanford site in Washington State the CSPDD, HNF-31695, 'General Description of the FH Criticality Safety Program', requires each facility develop a linking document called a Criticality Control Review (CCR) to document performance of these evaluations. Chapter 5, Appendix 5B of HNF-7098, Criticality Safety Program, provided an example of a format for a CCR that could be used in lieu of each facility developing its own CCR. Since the Plutonium Finishing Plant (PFP) is presently undergoing Deactivation and Decommissioning (D&D), new procedures are being developed for cleanout of equipment and systems that have not been operated in years. Existing Criticality Safety Evaluations (CSE) are revised, or new ones written, to develop the controls required to support D&D activities. Other Hanford facilities, including PFP, had difficulty using the basic CCR out of HNF-7098 when first implemented. Interpretation of the new guidelines indicated that many of the controls needed to be elevated to TSR level controls. Criterion 2 of the standard, requiring that the consequence of a criticality be examined for establishing the classification of a control, was not addressed. Upon in-depth review by PFP Criticality Safety staff, it was not clear that the programmatic interpretation of criterion 8C could be applied at PFP. Therefore, the PFP Criticality Safety staff decided to write their own CCR. The PFP CCR provides additional guidance for the evaluation team to use by clarifying the evaluation criteria in DOE-STD-3007-2007. 
In reviewing documents used in classifying controls for Nuclear Safety, it was noted that DOE-HDBK-1188, 'Glossary of Environment, Health, and Safety Terms', defines an Administrative Control (AC) in terms that are different from those typically used in Criticality Safety. As part of this CCR, a new term, Criticality Administrative Control (CAC), was defined to clarify the difference between an AC used for criticality safety and an AC used for nuclear safety. In Nuclear Safety terms, an AC is a provision relating to organization and management, procedures, recordkeeping, assessment, and reporting necessary to ensure safe operation of a facility. A CAC was defined as an administrative control derived in a criticality safety analysis that is implemented to ensure double contingency. According to criterion 2 of Section IV, 'Linkage to the Documented Safety Analysis', of DOE-STD-3007-2007, the consequence of a criticality should be examined for the purposes of classifying the significance of a control or component. HNF-PRO-700, 'Safety Basis Development', provides control selection criteria based on consequence and risk that may be used in the development of a Criticality Safety Evaluation (CSE) to establish the classification of a component as a design feature, as safety class or safety significant, i.e., an Engineered Safety Feature (ESF), or as equipment important to safety; or merely provides defense-in-depth. Similar logic is applied to the CACs. Criterion 8C of DOE-STD-3007-2007, as written, added to the confusion of using the basic CCR from HNF-7098. The PFP CCR attempts to clarify this criterion by revising it to say 'Programmatic commitments or general references to control philosophy (e.g., mass control or spacing control or concentration control as an overall control strategy for the process without specific quantification of individual limits) is included in the PFP DSA'. Table 1 shows the PFP methodology for evaluating CACs.
This evaluation process has been in use since February of 2008 and has proven to be simple and effective. Each control identified i
Flight test measurements and theoretical lift prediction for flow energizers
Pradhan, Amit Aravind
1986-01-01T23:59:59.000Z
Master of Science thesis, May 1986. Major Subject: Aerospace Engineering. Flight Test Measurements and Theoretical Lift Prediction for Flow Energizers. A Thesis by Amit Aravind Pradhan. Approved as to style and content by: Donald T. Ward (Chairman of Committee), Howard L. Chevalier (Member), Garng H. Huang (Member), ... Walter E. Haisler (Head of Department). ABSTRACT: Flight Test Measurements and Theoretical Lift Prediction for Flow Energizers. (May 1986) Amit Aravind Pradhan, B...
Face Recognition Algorithms Surpass Humans Matching Faces
Abdi, HervÃ©
The advantage of algorithms over humans, in light of the absolute performance levels of the algorithms, underscores the need for ... systems for security applications. How accurate must a face recognition algorithm be to contribute to these applications? Over the last decade, academic computer vision researchers and commercial product developers have ...
Hard Thresholding Pursuit Algorithms: Number of Iterations
Hitczenko, Pawel
algorithms do provide alternative methods. We consider here the hard thresholding pursuit (HTP) algorithm [6]. Let us now recall that HTP consists in constructing a sequence (x^n) of s-sparse vectors, starting from an initial s-sparse vector x^0: S^n := index set of the s largest absolute entries of x^{n-1} + A^*(y - A x^{n-1}) (HTP1); x^n := argmin{ ||y - Az||_2 : supp(z) ⊆ S^n } (HTP2); until a stopping criterion is met.
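The two-step iteration (HTP1)/(HTP2) translates almost directly into code. A minimal NumPy sketch, assuming unit step size and zero initialization (illustrative, not the paper's implementation):

```python
import numpy as np

def htp(A, y, s, max_iter=100):
    """Hard Thresholding Pursuit: recover an s-sparse x with y ≈ Ax."""
    m, n = A.shape
    x = np.zeros(n)
    support = np.array([], dtype=int)
    for _ in range(max_iter):
        # (HTP1): indices of the s largest absolute entries of the gradient step
        g = x + A.T @ (y - A @ x)
        new_support = np.sort(np.argsort(np.abs(g))[-s:])
        if np.array_equal(new_support, support):
            break  # support has stabilized, so the iterates are fixed
        support = new_support
        # (HTP2): least-squares fit restricted to the chosen support
        x = np.zeros(n)
        x[support] = np.linalg.lstsq(A[:, support], y, rcond=None)[0]
    return x
```

With a random Gaussian measurement matrix that satisfies the restricted isometry property (e.g. 60 measurements for a 5-sparse vector in dimension 100), this typically recovers the sparse vector exactly within a few iterations.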
On the Potential of Automatic Algorithm Configuration
Hutter, Frank
(e.g., neighborhood structure in local search or variable/value ordering heuristics in tree search), as well as ... enormous speed-ups of tree search algorithms for SAT, for solving SAT-encoded software ... The problem of setting an algorithm's free parameters for maximal performance on a class of problem instances ...
A Faster Primal Network Simplex Algorithm
Aggarwal, Charu C.
We present a faster implementation of the polynomial time primal simplex algorithm due to Orlin [23]. His algorithm requires O(nm min{log(nC), m log n}) pivots and O(n^2 m min{log(nC), m log n}) time. The bottleneck operations ...
ASYNPLEX, an asynchronous parallel revised simplex algorithm
Hall, Julian
ASYNPLEX, an asynchronous parallel revised simplex algorithm. J. A. J. Hall, K. I. M. McKinnon, 27th February 1998. Abstract: This paper describes ASYNPLEX, an asynchronous variant of the revised simplex method which is suitable
Risø-R Report: FATIGUE EVALUATION ALGORITHMS
Fatigue Evaluation Algorithms: Review. Materials Research Division. Published on the internet July 2010, Risø-R-1740(EN). ... WISPERX and NEW WISPER load sequences on a Glass/Epoxy multidirectional laminate typical of a wind turbine rotor blade construction. Two versions of the algorithm, the one using single-step and the other using ...
Power Control Algorithms in Wireless Communications
Judd Rohwer, Chaouki T. Abdallah, Aly El-Osery. Abstract: This paper presents a comprehensive review of the published algorithms on power control in ... and Time Division Multiple Access (TDMA). Introduction: Power control in cellular systems is applied
CS229 Lecture notes Generative Learning algorithms
Kosecka, Jana
In these notes, we'll talk about a different type of learning algorithm. Consider a classification problem ... The first generative learning algorithm that we'll look at is Gaussian discriminant analysis (GDA). In this model, we'll assume that p(x|y) is distributed according to a multivariate normal distribution.
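As a concrete instance of the generative approach the notes describe, here is a minimal sketch of GDA for binary labels with a shared covariance matrix (illustrative, not the lecture notes' own code): fit class prior, per-class means, and a pooled covariance, then classify by comparing log p(x|y=k) + log p(y=k).

```python
import numpy as np

def gda_fit(X, y):
    # Fit GDA parameters: prior phi = p(y=1), class means, pooled covariance.
    phi = y.mean()
    mu0 = X[y == 0].mean(axis=0)
    mu1 = X[y == 1].mean(axis=0)
    centered = np.where(y[:, None] == 1, X - mu1, X - mu0)
    sigma = centered.T @ centered / len(y)
    return phi, mu0, mu1, sigma

def gda_predict(X, phi, mu0, mu1, sigma):
    # Pick the class with the larger Gaussian log-likelihood plus log-prior.
    inv = np.linalg.inv(sigma)
    def score(mu, prior):
        d = X - mu
        return -0.5 * np.einsum('ij,jk,ik->i', d, inv, d) + np.log(prior)
    return (score(mu1, phi) > score(mu0, 1 - phi)).astype(int)
```

On two well-separated Gaussian clusters this recovers the labels almost perfectly; with a shared covariance the resulting decision boundary is linear in x.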
Improvements of the local bosonic algorithm
B. Jegerlehner
1996-12-15T23:59:59.000Z
We report on several improvements of the local bosonic algorithm proposed by M. Luescher. We find that preconditioning and over-relaxation work very well. A detailed comparison between the bosonic and the Kramers algorithms shows comparable performance for the physical situation examined.
Energy Aware Algorithmic Engineering Swapnoneel Roy
Rudra, Atri
Swapnoneel Roy, School of Computing, University of North Florida; akshat.verma@in.ibm.com. Abstract: In this work, we argue that energy management should be a guiding ... are simple and do not aid in the design of energy-efficient algorithms. In this work, we conducted a large number
Buffer assignment algorithms for data driven architectures
Chatterjee, Mitrajit
1994-01-01T23:59:59.000Z
algorithms have been shown to be O(V × E) and O(V × log V), respectively; an improvement over the existing strategies. A novel buffer distribution algorithm to maximize the pipelining and throughput has also been proposed. The number of buffers obtained...
Communication and Computation in Distributed CSP Algorithms
Krishnamachari, Bhaskar
Cèsar Fernàndez, Ramón Béjar ... in the context of networked distributed systems. In order to study the performance of Distributed CSP (DisCSP) algorithms ... we consider two complete DisCSP algorithms: asynchronous backtracking (ABT) and asynchronous weak commitment
Algorithms in grid classes Ruth Hoffmann
St Andrews, University of
column signs c1, ..., cs and row signs r1, ..., rt, and let Γ = {(k, ℓ) : M_{k,ℓ} = 0}. The map Φ : Grid... Algorithms in grid classes. Ruth Hoffmann, University of St Andrews, School of Computer Science. Permutation Patterns 2013, Université Paris Diderot, 2nd July 2013.
A heuristic algorithm for graph isomorphism
Torres Navarro, Luz
1999-01-01T23:59:59.000Z
polynomial time algorithm O(n?), ISO-MT, that seems to solve the graph isomorphism decision problem correctly for all classes of graphs. Our algorithm is extremely useful from the practical point of view since counter-examples (pairs of graphs for which our...
Enhancing Smart Home Algorithms Using Temporal Relations
Cook, Diane J.
Vikramaditya R. Jakkula and Diane J. Cook, School of Electrical Engineering and Computer Science. Abstract: Smart homes offer a potential benefit ... improves the performance of these algorithms and thus enhances the ability of smart homes to monitor
DETERMINATION OF BASIS VALUES FROM EXPERIMENTAL DATA FOR FABRICS AND COMPOSITES
Barbero, Ever J.
and systems constructed of reinforced composite materials, textile soft goods, and other novel materials ... equations while maintaining key characteristics of level III methodology. This is achieved by employing ... A methodology to calculate basis values other than A- and B-basis is presented in this work for the Normal, Log
Using Economics as Basis for Modelling and Evaluating Software Quality Stefan Wagner
Using Economics as Basis for Modelling and Evaluating Software Quality. Stefan Wagner, Institut f..., @in.tum.de. Abstract: The economics and cost of software quality have been discussed in software engineering for decades; economics should be the basis of any quality analysis. However, this implies several issues that have
On properties of the special coordinate basis of linear systems BEN M. CHEN
Benmei, Chen
or technique developed by Sannuti and Saberi in 1987 has a distinct feature of explicitly displaying ... of the seminal work of Sannuti and Saberi. It makes the theory of the special coordinate basis more complete. ... work of Sannuti and Saberi (1987). Such a special coordinate basis decomposition or technique has
Local Basis Expansions for MEG Source Localization. Partha P. Mitra1
..., but are not identical to, spherical harmonics. Components of the transformed sensor measurements which correspond ... density power. The latter is particularly useful for localization of spontaneous activity. Below we first ... The LBEX technique is to transform a global basis set into a local basis set for a given local region.
Neural basis of contagious itch and why some people are more prone to it
Sussex, University of
Henning Holle. Keywords: insula | touch. Itch is, to some degree, socially contagious. Subjective feelings of itchiness ... based on self-report. The study of the neural basis of contagious itch presents a unique opportunity
GOETZ, T.G.
2003-05-15T23:59:59.000Z
This technical basis document describes the risk binning process and the technical basis for assigning risk bins for the above-ground structure failure representative accident and associated represented hazardous conditions. This document was developed to support the documented safety analysis.
Martin, A; Venkatesan, Dr V Prasanna
2011-01-01T23:59:59.000Z
Today, in every organization, financial analysis provides the basis for understanding and evaluating the results of business operations and for showing how well a business is doing. This means that organizations can control the operational activities primarily related to corporate finance. One way of doing this is by analysis of bankruptcy prediction. This paper develops an ontological model from the financial information of an organization by analyzing the semantics of the financial statement of a business. One of the best bankruptcy prediction models is the Altman Z-score model. The Altman Z-score method uses financial ratios to predict bankruptcy. From the financial ontological model, the relation between financial data is discovered by using a data mining algorithm. By combining the financial domain ontological model with an association rule mining algorithm and the Z-score model, a new business intelligence model is developed to predict bankruptcy.
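For reference, the Z-score the abstract relies on is a linear combination of five financial ratios. A minimal sketch using the coefficients and cut-offs of Altman's classic 1968 model for public manufacturing firms (the paper's own feature set may differ):

```python
def altman_z(wc_ta, re_ta, ebit_ta, mve_tl, sales_ta):
    """Classic Altman (1968) Z-score from five financial ratios:
    working capital/TA, retained earnings/TA, EBIT/TA,
    market value of equity/total liabilities, sales/TA."""
    return (1.2 * wc_ta + 1.4 * re_ta + 3.3 * ebit_ta
            + 0.6 * mve_tl + 1.0 * sales_ta)

def zone(z):
    # Conventional cut-offs: distress below 1.81, safe above 2.99.
    if z < 1.81:
        return "distress"
    if z > 2.99:
        return "safe"
    return "grey"
```

A firm with ratios (0.2, 0.3, 0.1, 0.5, 1.0) scores 2.29 and lands in the grey zone between the two thresholds.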
A Cone Jet-Finding Algorithm for Heavy-Ion Collisions at LHC Energies
S-L Blyth; M J Horner; T Awes; T Cormier; H Gray; J L Klay; S R Klein; M van Leeuwen; A Morsch; G Odyniec; A Pavlinov
2006-09-15T23:59:59.000Z
Standard jet finding techniques used in elementary particle collisions have not been successful in the high track density of heavy-ion collisions. This paper describes a modified cone-type jet finding algorithm developed for the complex environment of heavy-ion collisions. The primary modification to the algorithm is the evaluation and subtraction of the large background energy, arising from uncorrelated soft hadrons, in each collision. A detailed analysis of the background energy and its event-by-event fluctuations has been performed on simulated data, and a method developed to estimate the background energy inside the jet cone from the measured energy outside the cone on an event-by-event basis. The algorithm has been tested using Monte-Carlo simulations of Pb+Pb collisions at $\sqrt{s}=5.5$ TeV for the ALICE detector at the LHC. The algorithm can reconstruct jets with a transverse energy of 50 GeV and above with an energy resolution of $\sim 30\%$.
Aumeier, S.E.; Forsmann, J.H. [Argonne National Lab., Idaho Falls, ID (United States)
1998-04-01T23:59:59.000Z
The ability to nondestructively determine the presence and quantity of fissile/fertile nuclei in various matrices is important in several areas of nuclear applications, including international and domestic safeguards, radioactive waste characterization, and nuclear facility operations. An analysis was performed to determine the feasibility of identifying the masses of individual fissionable isotopes from a cumulative delayed-neutron signal resulting from the neutron irradiation of several uranium and plutonium isotopes. The feasibility of two separate data-processing techniques was studied: Kalman filtering and genetic algorithms. The basis of each technique is reviewed, and the structure of the algorithms as applied to the delayed-neutron analysis problem is presented. The results of parametric studies performed using several variants of the algorithms are presented. The effect of including additional constraining information such as additional measurements and known relative isotopic concentration is discussed. The parametric studies were conducted using simulated delayed-neutron data representative of the cumulative delayed-neutron response following irradiation of a sample containing {sup 238}U, {sup 235}U, {sup 239}Pu, and {sup 240}Pu. The results show that by processing delayed-neutron data representative of two significantly different fissile/fertile fission ratios, both Kalman filters and genetic algorithms are capable of yielding reasonably accurate estimates of the mass of individual isotopes contained in a given assay sample.
Comparison of generality based algorithm variants for automatic taxonomy generation
Madnick, Stuart E.
We compare a family of algorithms for the automatic generation of taxonomies by adapting the Heymann algorithm in various ways. The core algorithm determines the generality of terms and iteratively inserts them in a growing ...
Madduri, Kamesh; Ediger, David; Jiang, Karl; Bader, David A.; Chavarría-Miranda, Daniel
2009-05-29T23:59:59.000Z
We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in the HPCS SSCA#2 Graph Analysis benchmark, which has been extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the ThreadStorm processor, and a single-socket Sun multicore server with the UltraSparc T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
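The kernel that the paper parallelizes is typically computed with Brandes' accumulation scheme. A minimal single-threaded sketch for unweighted graphs (the lock-free parallel version refines this same per-source BFS plus dependency back-propagation):

```python
from collections import deque

def betweenness_centrality(adj):
    # Brandes' algorithm for unweighted graphs.
    # adj: dict mapping vertex -> list of neighbours.
    # For an undirected adjacency, each vertex pair is counted in both
    # directions, so scores are doubled relative to the usual normalization.
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # BFS from s, counting shortest paths (sigma) and predecessors.
        stack = []
        pred = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Back-propagate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

On the three-vertex path 0-1-2, only the middle vertex lies on a shortest path between distinct endpoints, so it is the only one with nonzero centrality.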
Advanced algorithms for information science
Argo, P.; Brislawn, C.; Fitzgerald, T.J.; Kelley, B.; Kim, W.H.; Mazieres, B.; Roeder, H.; Strottman, D.
1998-12-31T23:59:59.000Z
This is the final report of a one-year, Laboratory Directed Research and Development (LDRD) project at Los Alamos National Laboratory (LANL). In a modern information-controlled society the importance of fast computational algorithms facilitating data compression and image analysis cannot be overemphasized. Feature extraction and pattern recognition are key to many LANL projects and the same types of dimensionality reduction and compression used in source coding are also applicable to image understanding. The authors have begun developing wavelet coding which decomposes data into different length-scale and frequency bands. New transform-based source-coding techniques offer potential for achieving better, combined source-channel coding performance by using joint-optimization techniques. They initiated work on a system that compresses the video stream in real time, and which also takes the additional step of analyzing the video stream concurrently. By using object-based compression schemes (where an object is an identifiable feature of the video signal, repeatable in time or space), they believe that the analysis is directly related to the efficiency of the compression.
A new loop-reducing routing algorithm
Park, Sung-Woo
1989-01-01T23:59:59.000Z
List of figures: Bellman-Ford Algorithm; Update Tables of Distributed Bellman-Ford Algorithm; Two Types of a Loop; Two-Node Loop; Multi-Node Loop; Three Links Failed ... distances for all pairs of nodes in the subnet, and distributes updated routing information to all the nodes. The centralized algorithm, however, is vulnerable to a single node failure: if the NRC fails, all nodes in the network must stop their rout...
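The update rule that distance-vector protocols distribute across nodes is the relaxation at the heart of Bellman-Ford. A minimal centralized sketch of that relaxation (illustrative; the thesis studies the distributed, loop-prone variant):

```python
def bellman_ford(edges, n, src):
    # edges: list of (u, v, w) directed edges; n nodes labelled 0..n-1.
    INF = float('inf')
    dist = [INF] * n
    dist[src] = 0
    # Relaxing every edge n-1 times guarantees shortest paths
    # when no negative cycle is reachable from src.
    for _ in range(n - 1):
        changed = False
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
                changed = True
        if not changed:
            break  # early exit once distances stabilize
    return dist
```

In the distributed setting each node performs the same relaxation using only its neighbours' advertised distances, which is exactly where the two-node and multi-node routing loops discussed in the thesis can arise.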
System engineering approach to GPM retrieval algorithms
Rose, C. R. (Chris R.); Chandrasekar, V.
2004-01-01T23:59:59.000Z
System engineering principles and methods are very useful in large-scale complex systems for developing the engineering requirements from end-user needs. Integrating research into system engineering is a challenging task. The proposed Global Precipitation Mission (GPM) satellite will use a dual-wavelength precipitation radar to measure and map global precipitation with unprecedented accuracy, resolution and areal coverage. The satellite vehicle, precipitation radars, retrieval algorithms, and ground validation (GV) functions are all critical subsystems of the overall GPM system and each contributes to the success of the mission. Errors in the radar measurements and models can adversely affect the retrieved output values. Ground validation (GV) systems are intended to provide timely feedback to the satellite and retrieval algorithms based on measured data. These GV sites will consist of radars and DSD measurement systems and also have intrinsic constraints. One of the retrieval algorithms being studied for use with GPM is the dual-wavelength DSD algorithm that does not use the surface reference technique (SRT). The underlying microphysics of precipitation structures and drop-size distributions (DSDs) dictate the types of models and retrieval algorithms that can be used to estimate precipitation. Many types of dual-wavelength algorithms have been studied. Meneghini (2002) analyzed the performance of single-pass dual-wavelength surface-reference-technique (SRT) based algorithms. Mardiana (2003) demonstrated that a dual-wavelength retrieval algorithm could be successfully used without the use of the SRT. It uses an iterative approach based on measured reflectivities at both wavelengths and complex microphysical models to estimate both N0 and D0 at each range bin. More recently, Liao (2004) proposed a solution to the D0 ambiguity problem in rain within the dual-wavelength algorithm and showed a possible melting layer model based on stratified spheres.
With No and Do calculated at each bin, the rain rate can then be computed from a suitable rain-rate model. This paper develops a system engineering interface to the retrieval algorithms, remaining cognizant of system engineering issues, so that it can bridge the divide between algorithm physics and overall mission requirements. Additionally, in line with the systems approach, a methodology is developed in which the measurement requirements pass through the retrieval model and other subsystems and manifest themselves as measurement and other system constraints. A systems model has been developed for the retrieval algorithm that can be evaluated with system-analysis tools such as MATLAB/Simulink.
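The abstract stops at "a suitable rain-rate model." As a hedged illustration of that final step only, the sketch below integrates an assumed exponential DSD against a textbook power-law fall speed; the constants (3.67 shape factor, Atlas-Ulbrich-style 3.78 and 0.67 fall-speed coefficients) are illustrative assumptions, not values from the GPM algorithms discussed here:

```python
import math

def rain_rate_mm_per_h(n0, d0, d_max=8.0, steps=800):
    """Rain rate from an assumed exponential DSD
    N(D) = n0 * exp(-3.67 * D / d0)  [mm^-1 m^-3], D in mm,
    with an assumed fall-speed law v(D) = 3.78 * D**0.67 [m/s]:
        R [mm/h] = 6e-4 * pi * integral_0^dmax D**3 * v(D) * N(D) dD
    evaluated by the trapezoidal rule."""
    h = d_max / steps
    f = lambda d: d**3 * 3.78 * d**0.67 * n0 * math.exp(-3.67 * d / d0)
    total = 0.5 * f(d_max) + sum(f(i * h) for i in range(1, steps))
    return 6e-4 * math.pi * total * h
```

Larger median drop diameters d0 give sharply larger rain rates, which is why resolving the Do ambiguity mentioned above matters for the retrieval.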
Efficient Algorithms for Computing Betti Numbers of Semi-algebraic ...
Saugata Basu
Definition of Roadmaps. Properties of pseudo-critical values. Roadmap Algorithm for a bounded algebraic set. Saugata Basu. Efficient Algorithms for Computing ...
An Efficient Algorithm for Computing Robust Minimum Capacity st Cuts
Doug Altner
2008-03-20T23:59:59.000Z
Mar 20, 2008 ... In this paper, we present an efficient algorithm for computing minimum capacity s-t cuts under a polyhedral model of robustness. Our algorithm ...
algorithm population sizing: Topics by E-print Network
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
CiteSeer Summary: Deciding on the appropriate population size and number of islands for distributed island-model genetic algorithms is often critical to the algorithm's success. This...
adaptive control algorithm: Topics by E-print Network
Mathematics Websites Summary: ON AN ADAPTIVE CONTROL ALGORITHM FOR ADAPTIVE OPTICS APPLICATIONS...
advanced methods algorithms: Topics by E-print Network
CS 3172, 02/03: Advanced Algorithms, Part I, Jürgen Dix; Chapter 1: Turing ... Zachmann, Gabriel, Advanced Algorithms Course, Lecture Notes, Part 9. Computer Technologies...
advanced fitting algorithms: Topics by E-print Network
CS 3172, 02/03: Advanced Algorithms, Part I, Jürgen Dix; Chapter 1: Turing ... Zachmann, Gabriel, Advanced Algorithms Course, Lecture Notes, Part 9. Computer Technologies...
New Algorithm Enables Fast Simulations of Ultrafast Processes
reduces the computational cost and increases the speed of the simulations. Comparing the new algorithm with the old, slower algorithm yields similar results, e.g., the predicted...
A new Search via Probability Algorithm for solving Engineering ...
Admin
2012-08-08T23:59:59.000Z
Without loss of generality, we design an algorithm to solve problem (I), the .... Statistics over 30 runs of the ESVP algorithm for the Three-Bar Truss Design.
annealing genetic algorithm: Topics by E-print Network
Mathematics Websites Summary: Annealing a Genetic Algorithm for Constrained Optimization...
alternative learning algorithms: Topics by E-print Network
(or projection) algorithm has been successful in the context of solving optimization problems over two variables. The iterative nature and simplicity of the algorithm...
Safety basis academy summary of project implementation from 2007-2009
Johnston, Julie A [Los Alamos National Laboratory
2009-01-01T23:59:59.000Z
During fiscal years 2007 through 2009, in accordance with Performance Based Incentives with the DOE/NNSA Los Alamos Site Office, Los Alamos National Security (LANS) implemented and operated a Safety Basis Academy (SBA) to facilitate uniformity in the technical qualifications of safety basis professionals across the nuclear weapons complex. The implementation phase of the Safety Basis Academy required developing, delivering, and finalizing a set of 23 courses. The courses developed are capable of supporting qualification efforts for both federal and contractor personnel throughout the DOE/NNSA Complex. The LANS Associate Director for Nuclear and High Hazard Operations (AD-NHHO) delegated project responsibility to the Safety Basis Division. The project was assigned to the Safety Basis Technical Services (SB-TS) Group at Los Alamos National Laboratory (LANL). The main tasks were project needs analysis, design, development, implementation of instructional delivery, and evaluation of SBA courses. DOE/NNSA responsibility for oversight of the SBA project was assigned to the Chief of Defense for Nuclear Safety and delegated to the Authorization Basis Senior Advisor, Continuous Learning Chair (CDNS-ABSA/CLC). Through a memorandum of agreement initiated by NNSA with the LANS AD-NHHO, the DOE National Training Center (NTC) will maintain the set of Safety Basis Academy courses and is able to facilitate course delivery throughout the DOE Complex.
Computationally efficient double hybrid density functional theory using dual basis methods
Byrd, Jason N
2015-01-01T23:59:59.000Z
We examine the application of the recently developed dual basis methods of Head-Gordon and co-workers to double hybrid density functional computations. Using the B2-PLYP, B2GP-PLYP, DSD-BLYP and DSD-PBEP86 density functionals, we assess the performance of dual basis methods for the calculation of conformational energy changes in C$_4$-C$_7$ alkanes and for the S22 set of noncovalent interaction energies. The dual basis methods, combined with resolution-of-the-identity second-order Møller-Plesset theory, are shown to give results in excellent agreement with conventional methods at a much reduced computational cost.
Training a Large Scale Classifier with the Quantum Adiabatic Algorithm
Hartmut Neven; Vasil S. Denchev; Geordie Rose; William G. Macready
2009-12-04T23:59:59.000Z
In a previous publication we proposed discrete global optimization as a method to train a strong binary classifier constructed as a thresholded sum over weak classifiers. Our motivation was to cast the training of a classifier into a format amenable to solution by the quantum adiabatic algorithm. Applying adiabatic quantum computing (AQC) promises to yield solutions that are superior to those which can be achieved with classical heuristic solvers. Interestingly we found that by using heuristic solvers to obtain approximate solutions we could already gain an advantage over the standard method AdaBoost. In this communication we generalize the baseline method to large scale classifier training. By large scale we mean that either the cardinality of the dictionary of candidate weak classifiers or the number of weak learners used in the strong classifier exceed the number of variables that can be handled effectively in a single global optimization. For such situations we propose an iterative and piecewise approach in which a subset of weak classifiers is selected in each iteration via global optimization. The strong classifier is then constructed by concatenating the subsets of weak classifiers. We show in numerical studies that the generalized method again successfully competes with AdaBoost. We also provide theoretical arguments as to why the proposed optimization method, which does not only minimize the empirical loss but also adds L0-norm regularization, is superior to versions of boosting that only minimize the empirical loss. By conducting a Quantum Monte Carlo simulation we gather evidence that the quantum adiabatic algorithm is able to handle a generic training problem efficiently.
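The iterative, piecewise construction described in this abstract can be caricatured in a few lines. In the sketch below, a greedy inner loop stands in for the global (quantum adiabatic or heuristic) optimizer, all names are invented for illustration, and the objective is empirical loss plus an L0-norm penalty, as the abstract describes:

```python
def strong_predict(selected, x):
    # thresholded sum over weak classifiers (the "strong" classifier)
    return 1 if sum(h(x) for h in selected) >= 0 else -1

def loss(selected, X, y, lam):
    # empirical 0-1 loss plus an L0-norm (subset-size) penalty
    err = sum(strong_predict(selected, x) != t for x, t in zip(X, y)) / len(X)
    return err + lam * len(selected)

def train_piecewise(weak_clfs, X, y, subset_size=2, rounds=2, lam=0.01):
    selected, pool = [], list(weak_clfs)
    for _ in range(rounds):
        # each round, an optimizer (here: greedy stand-in) selects a small
        # subset; subsets from successive rounds are concatenated
        for _ in range(subset_size):
            if not pool:
                break
            best = min(pool, key=lambda h: loss(selected + [h], X, y, lam))
            if loss(selected + [best], X, y, lam) >= loss(selected, X, y, lam):
                break
            selected.append(best)
            pool.remove(best)
    return selected
```

The L0 term makes adding a redundant weak classifier strictly worse, which is the regularization effect the abstract's theoretical argument relies on.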
Genetic algorithms applied to nonlinear and complex domains
Barash, D; Woodin, A E
1999-06-01T23:59:59.000Z
The dissertation, titled ''Genetic Algorithms Applied to Nonlinear and Complex Domains'', describes and then applies a class of powerful search algorithms, genetic algorithms (GAs), to certain domains. GAs are capable of solving complex and nonlinear problems where many parameters interact to produce a final result, such as the optimization of the laser pulse in the interaction of an atom with an intense laser field. GAs can very efficiently locate the global maximum by searching parameter space in problems which are unsuitable for a search using traditional methods. In particular, the dissertation contains new scientific findings in two areas. First, the dissertation examines the interaction of an ultra-intense short laser pulse with atoms. GAs are used to find the optimal frequency for stabilizing atoms in the ionization process. This leads to a new theoretical formulation to explain what is happening during the ionization process and how the electron responds to finite (real-life) laser pulse shapes. It is shown that the dynamics of the process can be very sensitive to the ramp of the pulse at high frequencies. The new theory also uses a novel concept (known as the (t,t') method) to numerically solve the time-dependent Schrodinger equation. Second, the dissertation examines the use of GAs in modeling decision-making problems, comparing GAs with traditional techniques on a class of problems known as Markov Decision Processes. The conclusion of the dissertation should give a clear idea of where GAs are applicable, especially in the physical sciences, in problems which are nonlinear and complex, i.e. difficult to analyze by other means.
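As a minimal illustration of the search strategy this abstract describes (a toy sketch, not the dissertation's code), here is a real-valued GA with tournament selection, blend crossover, and Gaussian mutation:

```python
import random

def genetic_maximize(fitness, bounds, pop_size=40, gens=60, mut=0.05, seed=0):
    # Toy GA: tournament selection, blend crossover, Gaussian mutation.
    rng = random.Random(seed)
    lo, hi = bounds
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(gens):
        def tournament():
            a, b = rng.choice(pop), rng.choice(pop)
            return a if fitness(a) >= fitness(b) else b
        pop = [
            min(hi, max(lo, 0.5 * (tournament() + tournament())  # blend crossover
                        + rng.gauss(0.0, mut * (hi - lo))))      # Gaussian mutation
            for _ in range(pop_size)
        ]
    return max(pop, key=fitness)

# locate the maximum of a simple objective on [0, 10]
best = genetic_maximize(lambda x: -(x - 7.3) ** 2, (0.0, 10.0))
```

Only fitness comparisons are needed, which is why GAs cope with objectives (like the laser-pulse optimization above) that are too irregular for gradient-based search.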
Summer Research Academy for Theoretical and Computational Chemistry 2008-2012
Morales, Jorge Alberto
1 Summer Research Academy for Theoretical and Computational Chemistry 2008-2012 The Summer Research Academy for Theoretical and Computational Chemistry (SRATCC) is an outreach program that encourages
Non-state actors in international politics: a theoretical framework
Paley, Abram Wil
2009-05-15T23:59:59.000Z
NON-STATE ACTORS IN INTERNATIONAL POLITICS: A THEORETICAL FRAMEWORK. A Thesis by ABRAM WIL PALEY, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF ARTS, December 2008. Major Subject: Political Science.
SPEEDING UP DYNAMIC SHORTEST PATH ALGORITHMS Finding ...
2003-09-19T23:59:59.000Z
Sep 19, 2003 ... ... and Reps algorithm for updating a shortest path tree, which is a revision of ... tree, although it can be easily specialized for updating a tree [5].
Advanced CHP Control Algorithms: Scope Specification
Katipamula, Srinivas; Brambley, Michael R.
2006-04-28T23:59:59.000Z
The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.
Tree Elaboration Strategies Branch and Bound Algorithms
... Elon College; Terri Anne Johnson, Elon College; Monique Guignard-Spielberg, The University of ... a sharp lower bound technique in these algorithms is one of the major difficulties. Recently, Hahn ...
A polynomial projection algorithm for linear programming
2013-05-03T23:59:59.000Z
algorithm is based on a procedure whose input is a homogeneous system of linear ..... In this case s = 0 and the procedure sets the output vector yout to 0.
IIR algorithms for adaptive line enhancement
David, R.A.; Stearns, S.D.; Elliott, G.R.; Etter, D.M.
1983-01-01T23:59:59.000Z
We introduce a simple IIR structure for the adaptive line enhancer. Two algorithms based on gradient-search techniques are presented for adapting the structure. Results from experiments which utilized real data as well as computer simulations are provided.
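The abstract gives no equations. As a rough, hedged illustration of the gradient-search (LMS-style) adaptation it refers to, here is the classical FIR form of an adaptive line enhancer; the paper's structure is IIR, which this sketch does not reproduce:

```python
import math

def lms_line_enhancer(x, n_taps=8, mu=0.01, delay=1):
    # Adaptive line enhancer, FIR/LMS form: predict x[n] from delayed
    # samples. A sinusoid (spectral line) is predictable while broadband
    # noise is not, so the filter output enhances the line component.
    w = [0.0] * n_taps
    y = []
    for n in range(len(x)):
        u = [x[n - delay - k] if n - delay - k >= 0 else 0.0
             for k in range(n_taps)]
        yn = sum(wi * ui for wi, ui in zip(w, u))
        e = x[n] - yn                                        # prediction error
        w = [wi + 2 * mu * e * ui for wi, ui in zip(w, u)]   # gradient step
        y.append(yn)
    return y

# demo: a pure sinusoid should become predictable as the taps adapt
x = [math.sin(0.3 * n) for n in range(1000)]
y = lms_line_enhancer(x)
```

The appeal of the IIR structure in the paper is that a narrow notch/peak needs far fewer IIR coefficients than FIR taps, at the cost of a harder (non-quadratic) adaptation problem.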
EFFICIENT ALGORITHMS FOR MINING ARBITRARY SHAPED CLUSTERS
Zaki, Mohammed Javeed
Table of contents excerpt: 3.2 Phase 1 K-means Algorithm; 3.2.1 K-means; 3.6.2 Comparison of K-means Initialization Methods; 3.6.3 Results on ...
Five-dimensional Janis-Newman algorithm
Harold Erbin; Lucien Heurtier
2014-11-07T23:59:59.000Z
The Janis-Newman algorithm has been shown to be successful in finding new stationary solutions of four-dimensional gravity. Attempts at a generalization to higher dimensions have already been made for the restricted cases with only one angular momentum. In this paper we propose an extension of this algorithm to five dimensions with two angular momenta - using the prescription of G. Giampieri - through two specific examples, the Myers-Perry and BMPV black holes. We also discuss possible enlargements of our prescriptions to other dimensions and a maximal number of angular momenta, and show how dimensions higher than six appear to be much more challenging to treat within this framework. Nonetheless this general algorithm provides a unification of the formulation in d = 3, 4, 5 of the Janis-Newman algorithm, from which we expose several examples including the BTZ black hole.
Journées MAS 2010, Bordeaux. Session: Stochastic Algorithms
Boyer, Edmond
Adaptive Monte Carlo methods, by Jérôme Lelong. Adaptive Monte Carlo methods are powerful variance reduction techniques ... a randomly truncated stochastic algorithm. Finally, we apply this technique to the valuation of financial derivatives.
Jun. 6, 2013 BBM 202 -ALGORITHMS
Erdem, Erkut
Graham scan. Choose the point p with smallest (or largest) y-coordinate; during the scan, discard any point that would create a clockwise turn.
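The lecture snippet above is describing the convex-hull scan: anchor at an extreme-y point and discard any point that would create a clockwise turn. A compact sketch of that discard rule, using Andrew's monotone-chain variant (names are mine, not from the lecture):

```python
def cross(o, a, b):
    # > 0 for a counter-clockwise turn o->a->b, < 0 for clockwise
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(points):
    # Monotone chain: sort points, then build lower and upper chains,
    # popping any point that would create a clockwise (or straight) turn.
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts
    def half(seq):
        hull = []
        for p in seq:
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) <= 0:
                hull.pop()  # p would make a clockwise turn: discard hull[-1]
            hull.append(p)
        return hull
    lower = half(pts)
    upper = half(reversed(pts))
    return lower[:-1] + upper[:-1]  # endpoints shared by both chains
```

Both the classic Graham scan and this variant run in O(n log n), dominated by the sort; the scan itself is linear because each point is pushed and popped at most once.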
Patterns hidden from simple algorithms Madhu Sudan
Sudan, Madhu
Patterns hidden from simple algorithms Madhu Sudan February 7, 2011 Is the number the most notorious example. Madhu Sudan (madhu@mit.edu) is a Principal Researcher at Microsoft Research
Large scale prediction models and algorithms
Monsch, Matthieu (Matthieu Frederic)
2013-01-01T23:59:59.000Z
Over 90% of the data available across the world has been produced over the last two years, and the trend is increasing. It has therefore become paramount to develop algorithms which are able to scale to very high dimensions. ...
An algorithmic approach to social networks
Liben-Nowell, David
2005-01-01T23:59:59.000Z
Social networks consist of a set of individuals and some form of social relationship that ties the individuals together. In this thesis, we use algorithmic techniques to study three aspects of social networks: (1) we analyze ...
Algorithms for revenue metering and their evaluation
Martinez-Lagunes, Rodrigo
2000-01-01T23:59:59.000Z
ALGORITHMS FOR REVENUE METERING AND THEIR EVALUATION. A Thesis by RODRIGO MARTINEZ-LAGUNES, submitted to the Office of Graduate Studies of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, May 2000. Major Subject: Electrical Engineering.
A fast indexing algorithm for sparse matrices
Nieder, Alvin Edward
1971-01-01T23:59:59.000Z
A FAST INDEXING ALGORITHM FOR SPARSE MATRICES. A Thesis by ALVIN EDWARD NIEDER, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, December 1971. Alvin Edward Nieder, B.S., Texas A&M University. Directed by: Dr. Udo Pooch. A sparse matrix is defined to be a matrix containing a high proportion of elements that are zeros. Sparse matrices...
Texas A&M scheduling algorithm
Payne, Eugene Edgar
1966-01-01T23:59:59.000Z
TEXAS A&M SCHEDULING ALGORITHM. A Thesis by EUGENE EDGAR PAYNE, submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, January 1966. Major Subject: Computer Science. (Chairman of Committee) (Head of Department) (Member) (Member). ACKNOWLEDGMENTS: The author would like to express his indebtedness to Mr. James M. Nash...
DIMACS Series in Discrete Mathematics and Theoretical Computer Science
Mirkin, Boris
for Building Genome Classification Trees with Linear Binary Hierarchies. Boris Mirkin and Eugene Koonin. ABSTRACT. With complete genome sequence data becoming available at an increasing rate, the problem of classifying genomes on the basis of different criteria is becoming pressing. Here we present an approach
A Surface-Aware Projection Basis for Quasigeostrophic Flow K. SHAFER SMITH
Young, William R.
A Surface-Aware Projection Basis for Quasigeostrophic Flow. K. Shafer Smith, Center for Atmosphere ... that is not well represented by standard baroclinic modes. Corresponding author address: K. Shafer Smith, Courant ...
Neural Basis of Theory of Mind: An eye gaze preference task
Elder, Nicola
2010-11-24T23:59:59.000Z
This study considers the speculation made by previous researchers that ‘Theory of mind’ (ToM) could have a neural basis. ToM refers to our capacity to make inferences regarding other individuals’ mental states and it is ...
Basis for Identification of Disposal Options for R and D for...
Broader source: Energy.gov (indexed) [DOE]
in granitic rocks. Basis for Identification of Disposal Options for R&D for Spent Nuclear Fuel and High-Level Waste, FCRD-USED-2011-000071.
Neurobiology of Disease Neural Basis of Dyslexia: A Comparison between Dyslexic
Neurobiology of Disease. Neural Basis of Dyslexia: A Comparison between Dyslexic and Nondyslexic ... Individuals with developmental dyslexia exhibit reduced parietotemporal activation in functional neuroimaging studies. Key words: dyslexia; age-matched; reading ability-matched; parietotemporal region; fMRI; phonological
Contribution of the basis-dependent adiabatic geometric phase to noncyclic evolution
M. T. Thomaz
2015-04-19T23:59:59.000Z
The geometric phase acquired by the vector states under an adiabatic evolution along a noncyclic path can be calculated correctly in any instantaneous basis of a Hamiltonian that varies in time due to a time-dependent classical field.
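For context, the standard (Mukunda-Simon) definition of the geometric phase for a noncyclic evolution, total phase minus dynamical phase, is a textbook formula rather than one taken from this abstract:

```latex
\gamma_{\mathrm{geo}}
  = \arg\langle \psi(0) \mid \psi(\tau) \rangle
    + i \int_0^{\tau} \langle \psi(t) \mid \dot{\psi}(t) \rangle \, dt ,
```

where the second term is real because $\langle \psi \mid \dot{\psi} \rangle$ is purely imaginary for normalized states. The abstract's point is that, in the adiabatic limit, this quantity can be evaluated in any instantaneous basis of the time-dependent Hamiltonian.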
Basis for Interim Operation for the K-Reactor in Cold Standby
Shedrow, B.
1998-10-19T23:59:59.000Z
The Basis for Interim Operation (BIO) document for K Reactor in Cold Standby and the L- and P-Reactor Disassembly Basins was prepared in accordance with the draft DOE standard for BIO preparation (dated October 26, 1993).
Deriving the continuity of maximum-entropy basis functions via variational analysis
Sukumar, N.; Wets, R. J. -B.
2007-01-01T23:59:59.000Z
References excerpt: ... and V. J. Della Pietra, A maximum entropy approach to natural language processing; ... J. and R. K. Bryan, Maximum entropy image reconstruction; ... Heidelberg; Continuity of maximum-entropy basis functions ...
Genetic algorithm based tomographic flow visualization
Lyons, Donald Paul
1997-01-01T23:59:59.000Z
reconstruction techniques have been used in various fields for many years. As early as 1917, J. Radon developed the mathematical techniques necessary to perform a reconstruction of a test object from its projections. Radon's work, however, was theoretical...
Chil-Min Kim; Yun Jin Choi; Young-Jai Park
2006-03-02T23:59:59.000Z
We introduce sophisticated new attacks with a Hong-Ou-Mandel interferometer against quantum key distribution (QKD) and propose a new QKD protocol grafted with random basis shuffling to block those attacks. When the polarization basis is randomly and independently shuffled by sender and receiver, the new protocol can overcome the attacks even for not-so-weak coherent pulses. We estimate the number of photons needed to guarantee the security of the protocol.
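For orientation, the random, independent basis choices at the heart of such protocols can be illustrated with ordinary BB84-style sifting. This sketch does not implement the paper's shuffling scheme or any of its security features; it only shows why independent basis choices keep roughly half the raw bits:

```python
import random

def sift_key(n, seed=1):
    # Sender picks random bits and bases; receiver picks bases
    # independently; only positions where the bases agree are kept.
    rng = random.Random(seed)
    bits = [rng.randint(0, 1) for _ in range(n)]
    alice = [rng.choice("RD") for _ in range(n)]  # rectilinear / diagonal
    bob = [rng.choice("RD") for _ in range(n)]
    return [b for b, a, c in zip(bits, alice, bob) if a == c]

key = sift_key(1000)
```

Since each basis matches with probability 1/2, about half of the transmitted pulses survive sifting; the shuffling step proposed in the abstract adds structure on top of this to defeat interferometric attacks.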
Alibes, Andreu
Quite often a single or a combination of protein mutations is linked to specific diseases. However, distinguishing from sequence information which mutations have real effects in the protein’s function is not trivial. Protein ...
Game Theoretic Research on the Design of International Environmental Agreements
Güting, Ralf Hartmut
Game Theoretic Research on the Design of International Environmental Agreements: Insights, Critical ... the ozone layer and, more recently, the concern about the impacts of global warming. All these environmental ... international environmental agreements (IEAs) using the method of game theory has sharply increased. However, there have also ...
Friction versus dilation revisited: Insights from theoretical and numerical models
Einat, Aharonov
Friction versus dilation revisited: Insights from theoretical and numerical models. N. Makedonska ... controlled by the frictional strength of the fault gouge, a granular layer that accumulates between the fault ... friction coefficient) of such granular layers is the system's resistance to dilation, a by-product ...
School and Community Psychology Division of Theoretical and
Berdichevsky, Victor
School and Community Psychology, Division of Theoretical and Behavioral Foundations, College of Education, Detroit, MI 48202. Phone: (313) 577-1614. Fax: (313) 577-5235. SCHOOL AND COMMUNITY PSYCHOLOGY PROGRAM: The Wayne State University Educational Psychology Program Area offers a graduate program in School
Robust Control-theoretic Thermal Balancing for Server Clusters
Lu, Chenyang
Robust Control-theoretic Thermal Balancing for Server Clusters Yong Fu, Chenyang Lu, Hongan Wang for clusters because of the increasing power consumption of modern processors, compact server architectures and growing server density in data centers. Thermal balancing mitigates hot spots in a cluster through dynamic
Wireless Social Community Networks: A Game-Theoretic Analysis
Marbach, Peter
Wireless Social Community Networks: A Game-Theoretic Analysis. Mohammad Hossein Manshaei, Julien ... marbach@cs.toronto.edu. Abstract: Wireless social community networks have been created as an alternative to cellular wireless networks to provide wireless data access in urban areas. By relying on access points
Wave and Material Properties of Marine Sediments: Theoretical Relationships for
Buckingham, Michael
Wave and Material Properties of Marine Sediments: Theoretical Relationships for Geoacoustic and Vibration Research The University, Southampton SO17 1BJ, UK Abstract. In recent years, a theory of wave the passage of compressional and shear waves. The theory yields a dispersion pair, representing phase speed
Information Theoretic Novelty Detection6 Maurizio Filippone,a
Filippone, Maurizio
Information Theoretic Novelty Detection. Maurizio Filippone and Guido Sanguinetti, Department ... closely related to classical statistical tests. We then propose an approximation scheme to extend our ... parametric approaches (for a good review of statistical approaches for novelty detection see e.g. [4, 18]).
Organizational Design Principles and Techniques for Decision-Theoretic Agents
Durfee, Edmund H.
Organizational Design Principles and Techniques for Decision-Theoretic Agents Jason Sleight precisely into exactly which parts of an agent's model should be organizationally influenced, and asserts be sanctioned to influence. We present a formal framework for specifying factored organizational influences
Theoretical Computer Science in Transition John E. Savage
Savage, John
Theoretical Computer Science in Transition. John E. Savage, Department of Computer Science, Brown ... computer science will continue to be extremely successful. In a few short decades computer science has led to revolutions in ... computer science has played a central role in these developments and is destined to play a central role
Theoretical Population Biology 69 (2006) 231233 ESS theory now
Lessard, Sabin
Theoretical Population Biology 69 (2006) 231-233. Editorial: ESS theory now. More than 30 years have passed since the concept of an evolutionarily stable strategy (ESS) was introduced in the context of animal ... of the ESS concept. Even today the main idea, and the more general one, remains to look for a population
An Information Theoretic Analysis on Indoor PLC Channel Characterizations
Gesbert, David
An Information Theoretic Analysis on Indoor PLC Channel Characterizations. Hao Lin, Aawatif Menouni ... The development of Power Line Communications (PLC) depends heavily on knowledge of the channel characterizations. For this reason, much attention has been paid to PLC channel analysis using
THEORETICAL FOUNDATIONS OF MOBILE FLEXIBLE NETWORKS Merouane Debbah
Boyer, Edmond
THEORETICAL FOUNDATIONS OF MOBILE FLEXIBLE NETWORKS. Mérouane Debbah, Alcatel-Lucent Chair ... debbah@supelec.fr. ABSTRACT: The general framework of Mobile Flexible Networks (MFN) is to design dense self-organizing, self-healing and self-energy-harvesting secure networks where terminals and base stations interact and self-
GG602 Theoretical Petrology Course Description and Organization
Hammer, Julia Eve
GG602 Theoretical Petrology Course Description and Organization Instructor: Julia Hammer phone: 6, R., 1978, Equilibrium Thermodynamics in Petrology: New York, Harper and Row, 284 p. Expected and igneous/metamorphic petrology and who are interested in the geologic application of chemical
A theoretical study of grating structured triboelectric nanogenerators
Wang, Zhong L.
for ultra-high output power but also the most complicated. In this manuscript, the first theoretical model ... Then for each of these two categories, a study of the basic output profiles and an in-depth discussion ... and low matched load resistance. In this paper, from the discussion of the influence of both structural
A theoretical model of the explosive fragmentation of vesicular magma
McGuinness, Mark
A theoretical model of the explosive fragmentation of vesicular magma. A. C. Fowler, MACSI ... fire fountaining to vigorous Vulcanian and Plinian eruptions. The range of different types of explosive ... explosion can occur, and is motivated by the corresponding phenomenon of magmatic explosion during Vulcanian
THEORETICAL EFFECT OF BENTONITE MIGRATION ON CONTAMINANT TRANSPORT
THEORETICAL EFFECT OF BENTONITE MIGRATION ON CONTAMINANT TRANSPORT THROUGH GEOSYNTHETIC CLAY LINERS TRANSPORT THROUGH GEOSYNTHETIC CLAY LINERS Jason H. FitzSimmons1 and Timothy D. Stark2 ABSTRACT: Since the introduction of geosynthetic clay liners (GCLs) to waste containment facilities, one of the major concerns
Information-Theoretic Analysis of an Energy Harvesting Communication System
Ulukus, Sennur
Information-Theoretic Analysis of an Energy Harvesting Communication System. Omur Ozel, Sennur Ulukus ... ulukus@umd.edu. Abstract: In energy harvesting communication systems, an exogenous recharge process supplies energy for the data transmission and arriving energy can be buffered in a battery before
Probabilistic Particle Flow Algorithm for High Occupancy Environment
Andrey Elagin; Pavel Murat; Alexandre Pranko; Alexei Safonov
2012-12-29T23:59:59.000Z
Algorithms based on the particle flow approach are becoming increasingly utilized in collider experiments due to their superior jet energy and missing energy resolution compared to the traditional calorimeter-based measurements. Such methods have been shown to work well in environments with low occupancy of particles per unit of calorimeter granularity. However, at higher instantaneous luminosity or in detectors with coarse calorimeter segmentation, the overlaps of calorimeter energy deposits from charged and neutral particles significantly complicate particle energy reconstruction, reducing the overall energy resolution of the method. We present a technique designed to resolve overlapping energy depositions of spatially close particles using a statistically consistent probabilistic procedure. The technique is nearly free of ad-hoc corrections, improves energy resolution, and provides new important handles that can improve the sensitivity of physics analyses: the uncertainty of the jet energy on an event-by-event basis and the estimate of the probability of a given particle hypothesis for a given detector response. When applied to the reconstruction of hadronic jets produced in the decays of tau leptons using the CDF-II detector at Fermilab, the method has demonstrated reliable and robust performance.
On equivalence relationships between classification and ranking algorithms
Ertekin, Seyda
We demonstrate that there are machine learning algorithms that can achieve success for two separate
A Real-Time Soft Shadow Volume Algorithm
Assarsson, Ulf
algorithm to generate the hard shadows (umbra). The second pass compensates to provide the softness (penumbra).
Efficient Interpolation in the Guruswami-Sudan Algorithm
Trifonov, Peter
2010-01-01T23:59:59.000Z
A novel algorithm is proposed for the interpolation step of the Guruswami-Sudan list decoding algorithm. The proposed method is based on the binary exponentiation algorithm, and can be considered an extension of the Lee-O'Sullivan algorithm. The algorithm is shown to achieve both asymptotic and practical performance gains compared to the iterative interpolation algorithm. Further complexity reduction is achieved by integrating the proposed method with re-encoding. The key contribution of the paper, which enables the complexity reduction, is a novel randomized ideal multiplication algorithm.
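The "binary exponentiation" pattern this abstract builds on is ordinary square-and-multiply over any associative product. In the Lee-O'Sullivan setting the product is a multiplication of polynomial modules/ideals, but the skeleton is the same; the sketch below shows it with modular integers as a stand-in:

```python
def binary_power(x, e, mul, one):
    # Square-and-multiply: O(log e) applications of `mul`, which may be
    # any associative product (integers here; ideal/module multiplication
    # in the Guruswami-Sudan interpolation setting).
    result = one
    while e:
        if e & 1:
            result = mul(result, x)   # fold in x^(2^bit) for set bits
        x = mul(x, x)                 # square
        e >>= 1
    return result

# concrete instance: modular exponentiation
mod_mul = lambda a, b: (a * b) % 1000003
```

The payoff in the paper is that replacing e - 1 sequential products with about log2(e) squarings makes the cost of the (randomized) ideal multiplications the dominant, and much smaller, term.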
Optimizing SRF Gun Cavity Profiles in a Genetic Algorithm Framework
Alicia Hofler, Pavel Evtushenko, Frank Marhauser
2009-09-01T23:59:59.000Z
Automation of DC photoinjector designs using genetic algorithm (GA) based optimization is an accepted practice in accelerator physics. Allowing the gun cavity field profile shape to be varied can extend the utility of this optimization methodology to superconducting and normal conducting radio frequency (SRF/RF) gun based injectors. Finding optimal field and cavity geometry configurations can provide guidance for cavity design choices and verify existing designs. We have considered two approaches for varying the electric field profile. The first is to determine the optimal field profile shape independent of the cavity geometry; the other is to vary the geometry of the gun cavity structure to produce an optimal field profile. The first method can provide a theoretical optimum and can illuminate where possible gains can be made in field shaping. The second method can produce more realistically achievable designs that can be compared to existing designs. In this paper, we discuss the design and implementation of these two methods for generating field profiles for SRF/RF guns in a GA based injector optimization scheme and provide preliminary results.
Heat Bath Algorithmic Cooling with Spins: Review and Prospects
Daniel K. Park; Nayeli A. Rodriguez-Briones; Guanru Feng; Robabeh R. Darabad; Jonathan Baugh; Raymond Laflamme
2015-01-05T23:59:59.000Z
Application of multiple rounds of Quantum Error Correction (QEC) is an essential milestone towards the construction of scalable quantum information processing devices. However, its experimental realizations are still in their infancy. The requirements for multiple-round QEC are high control fidelity and the ability to extract entropy from ancilla qubits. Nuclear Magnetic Resonance (NMR) based quantum devices have demonstrated high control fidelity with up to 12 qubits. On the other hand, the major challenge in NMR QEC experiments is to efficiently supply ancilla qubits in highly pure states at the beginning of each round of QEC. Purification of qubits in NMR, or in other ensemble-based quantum systems, can be accomplished through Heat Bath Algorithmic Cooling (HBAC). It is an efficient method for extracting entropy from qubits that interact with a heat bath, allowing cooling below the bath temperature. For practical HBAC, coupled electron-nuclear spin systems are more promising than conventional NMR quantum processors, since electron spin polarization is about $10^3$ times greater than that of a proton under the same experimental conditions. We provide an overview of both theoretical and experimental aspects of HBAC focusing on spin and magnetic resonance based systems, and discuss the prospects of exploiting electron-nuclear coupled systems for the realization of HBAC and multiple-round QEC.
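The core claim, cooling a target qubit below the bath polarization, can be illustrated with a toy diagonal model of the partner pairing algorithm. This is a hedged sketch under simplifying assumptions (populations stay diagonal, the compression step is an ideal sorting permutation, resets destroy correlations), not the authors' electron-nuclear proposal:

```python
import numpy as np

def hbac_polarization(eps, n=3, rounds=30):
    """Toy partner-pairing HBAC on n qubits with a diagonal state.

    Qubit 0 (most significant bit) is the target; the other n-1 qubits
    are reset against a heat bath of polarization eps each round.
    Returns the target polarization after the final compression.
    """
    dim = 2 ** n
    single = np.array([(1 + eps) / 2, (1 - eps) / 2])  # thermal populations
    rest = np.ones(1)
    for _ in range(n - 1):
        rest = np.kron(rest, single)
    p = np.kron(single, rest)                  # start fully thermal
    for _ in range(rounds):
        p = np.sort(p)[::-1]                   # compression: optimal permutation
        p0, p1 = p[:dim // 2].sum(), p[dim // 2:].sum()
        p = np.concatenate([p0 * rest, p1 * rest])  # re-thermalize reset qubits
    p = np.sort(p)[::-1]                       # final compression before read-out
    return p[:dim // 2].sum() - p[dim // 2:].sum()
```

Because the sort maximizes the top-half population, the target polarization is non-decreasing from round to round and exceeds the bath polarization after the first compression.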
Automatic energy calibration algorithm for an RBS setup
Silva, Tiago F.; Moro, Marcos V.; Added, Nemitala; Rizzutto, Marcia A.; Tabacniks, Manfredo H. [Instituto de Fisica da Universidade de Sao Paulo, C.P. 66318, 05315-970 Sao Paulo, SP (Brazil)]
2013-05-06T23:59:59.000Z
This work describes a computer algorithm for the automatic extraction of energy calibration parameters from a Rutherford Back-Scattering Spectroscopy (RBS) spectrum. Parameters such as the electronic gain, electronic offset, and detection resolution (FWHM) of an RBS setup are usually determined using a standard sample. In our case, the standard sample comprises a multi-elemental thin film made of a mixture of Ti-Al-Ta that is analyzed at the beginning of each run at a defined beam energy. A computer program has been developed to automatically extract the calibration parameters from the spectrum of the standard sample. The code evaluates the first derivative of the energy spectrum, locates the trailing edges of the Al, Ti and Ta peaks, and fits a first-order polynomial for the energy-channel relation. The detection resolution is determined by fitting the convolution of a pre-calculated theoretical spectrum. To test the code, two years of data have been analyzed and the results compared with the manual calculations done previously, obtaining good agreement.
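The derivative-plus-linear-fit procedure described in the abstract can be sketched as follows. This is an illustrative reconstruction, not the authors' code; the edge-separation window and function names are assumptions.

```python
import numpy as np

def find_trailing_edges(spectrum, n_edges, min_sep=10):
    """Trailing (high-energy) peak edges show up as strong negative
    excursions in the first derivative of the spectrum."""
    d = np.diff(spectrum.astype(float))
    edges = []
    for ch in np.argsort(d):                  # most negative derivative first
        if all(abs(ch - e) > min_sep for e in edges):
            edges.append(int(ch))
        if len(edges) == n_edges:
            break
    return sorted(edges)

def energy_calibration(edge_channels, edge_energies):
    """First-order polynomial E = gain * channel + offset."""
    gain, offset = np.polyfit(edge_channels, edge_energies, 1)
    return gain, offset
```

With the edges located, `energy_calibration` maps the known edge energies of Al, Ti, and Ta onto their channel positions.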
Axiomatic Tools versus Constructive approach to Unconventional Algorithms
Gordana Dodig-Crnkovic; Mark Burgin
2012-07-03T23:59:59.000Z
In this paper, we analyze axiomatic issues of unconventional computations from a methodological and philosophical point of view. We explain how the new models of algorithms changed the algorithmic universe, making it open and allowing increased flexibility and creativity. However, the greater power of the new types of algorithms has also brought greater complexity to the algorithmic universe, demanding new tools for its study. That is why we analyze the new, powerful tools brought forth by the axiomatic theory of algorithms, automata and computation.
Energy Aware Algorithm Design via Probabilistic Computing: From Algorithms and Models to Moore's Law
Palem, Krishna V.
Energy Aware Algorithm Design via Probabilistic Computing: From Algorithms and Models to Moore's Law ... opportunities for being energy-aware, the most fundamental limits are truly rooted in the physics of energy ... models of computing for energy-aware algorithm design and analysis, culminating in establishing ...
Paris-Sud XI, UniversitÃ© de
Bayesian Policy Gradient and Actor-Critic Algorithms. Yaakov Engel (yakiengel@gmail.com). Abstract: Policy gradient methods are reinforcement learning algorithms that adapt a parameterized policy by following a performance gradient estimate. Many ...
Ma, Jia, S.M. Massachusetts Institute of Technology
2009-01-01T23:59:59.000Z
This thesis examines how the basis risk affects property derivative hedging in the UK market, based on the tracking error (basis risk) report from the Investment Property Forum study in 2007 (the IPF Study). The thesis ...
Zhuo, Ye
2011-05-15T23:59:59.000Z
In this thesis, we theoretically study electromagnetic wave propagation in several passive and active optical components and devices, including 2-D photonic crystals, straight and curved waveguides, and organic light emitting diodes (OLEDs). Several optical designs are also presented, such as organic photovoltaic (OPV) cells and solar concentrators. The first part of the thesis focuses on theoretical investigation. First, the plane-wave-based transfer (scattering) matrix method (TMM) is briefly described, with a short review of photonic crystals and other numerical methods used to study them (Chapters 1 and 2). Next, the numerical method itself is investigated in detail and further developed to deal with more complex optical systems. In Chapter 3, TMM is extended to curvilinear coordinates to study curved nanoribbon waveguides. The problem of a curved structure is transformed into an equivalent one of a straight structure with spatially dependent tensors of dielectric constant and magnetic permeability. In Chapter 4, a new set of localized basis orbitals is introduced to locally represent the electromagnetic field in photonic crystals as an alternative to the plane-wave basis. The second part of the thesis focuses on the design of optical devices. First, two examples of TMM applications are given. The first is the design of metal grating structures as replacements for ITO to enhance optical absorption in OPV cells (Chapter 6). The second is the design of the same structure to enhance light extraction in OLEDs (Chapter 7). Next, two design examples using the ray tracing method are given: applying a microlens array to enhance the light extraction of OLEDs (Chapter 5) and an all-angle, wide-wavelength design of a solar concentrator (Chapter 8). In summary, this dissertation has extended TMM, making it capable of treating complex optical systems.
Several optical designs by TMM and ray tracing method are also given as a full complement of this work.
A theoretical and experimental investigation of gas operated bearing dampers for turbomachinery
Sundararajan, Padmanabhan
1992-01-01T23:59:59.000Z
[Garbled front-matter scan; recoverable contents: theoretical predictions for gas damper configurations GDIV #1 and #2; Chapter III, test apparatus, instrumentation, and test concepts; Chapter IV, experimental results; figures of theoretical predictions of damping vs. supply pressure, inlet pocket opening, and frequency for different supply pressures.]
Genetic refinement of cloud-masking algorithms for the multi-spectral thermal imager (MTI)
Hirsch, K. L. (Karen L.); Davis, A. B. (Anthony B.); Harvey, N. R. (Neal R.); Rohde, C. A. (Charles A.); Brumby, Steven P.
2001-01-01T23:59:59.000Z
The Multi-spectral Thermal Imager (MTI) is a high-performance remote-sensing satellite designed, owned and operated by the U.S. Department of Energy, with a dual mission in environmental studies and in nonproliferation. It has enhanced spatial and radiometric resolutions and state-of-the-art calibration capabilities. This instrumental development puts a new burden on retrieval algorithm developers to pass this accuracy on to the inferred geophysical parameters. In particular, the atmospheric correction scheme assumes that the intervening atmosphere can be modeled as a plane-parallel, horizontally homogeneous medium. A single dense-enough cloud in view of the ground target can easily offset reality from the calculations, hence the need for a reliable cloud-masking algorithm. Pixel-scale cloud detection relies on the simple facts that clouds are generally whiter, brighter, and colder than the ground below; spatially, dense clouds are generally large on some scale. This is a good basis for searching multispectral datacubes for cloud signatures. However, the resulting cloud mask can be very sensitive to the choice of thresholds in whiteness, brightness, temperature, and connectivity. We have used a genetic algorithm trained on (MODIS Airborne Simulator-based) simulated MTI data to design a cloud mask. Its performance is compared quantitatively to hand-drawn training data and to the EOS/Terra MODIS cloud mask.
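The whiter/brighter/colder test at the heart of pixel-scale cloud detection can be sketched as a simple thresholding rule. This is an illustrative sketch only; the threshold values, the whiteness definition (spectral flatness of the visible bands), and the band layout are assumptions, not the GA-tuned mask of the paper:

```python
import numpy as np

def cloud_mask(bands, whiteness_max=0.15, brightness_min=0.4, temp_max=270.0):
    """bands: {'vis': (H, W, k) visible reflectances,
               'tir': (H, W) thermal brightness temperatures in K}.
    A pixel is flagged cloudy if it is white, bright, and cold."""
    vis = bands['vis']
    brightness = vis.mean(axis=-1)
    # Whiteness: total deviation of the visible bands from their mean,
    # normalized by brightness; near zero for spectrally flat (white) pixels.
    whiteness = np.abs(vis - brightness[..., None]).sum(axis=-1) / np.maximum(brightness, 1e-6)
    return (whiteness < whiteness_max) & (brightness > brightness_min) & (bands['tir'] < temp_max)
```

The paper's point is precisely that such thresholds are fragile, which motivates tuning them (and the connectivity test) with a genetic algorithm against labeled training data.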
Parallelism of the SANDstorm hash algorithm.
Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree
2009-09-01T23:59:59.000Z
Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed and prevents them from taking advantage of the current trend toward multi-core platforms, which in turn limits their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, specifically designed to take advantage of multi-core processing and to be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. An implementation using OpenMP demonstrates a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
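The general structure that makes a hash parallelizable, independent chunk hashes combined by a final hash, can be sketched with a generic two-level tree hash. This is not the SANDstorm construction itself, just a minimal illustration of the map-then-combine pattern using SHA-256 as the inner primitive:

```python
import hashlib

def tree_hash(data, chunk_size=1024):
    """Two-level tree hash: hash fixed-size chunks independently
    (the embarrassingly parallel map step), then hash the
    concatenation of the chunk digests (the combine step)."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)] or [b'']
    digests = [hashlib.sha256(c).digest() for c in chunks]  # parallelizable step
    return hashlib.sha256(b''.join(digests)).hexdigest()
```

In a multi-core implementation the list comprehension over `chunks` would be distributed across threads or OpenMP-style workers; the sequential bottleneck shrinks to hashing the short digest list.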
Quantum search algorithms on the hypercube
Birgit Hein; Gregor Tanner
2009-06-17T23:59:59.000Z
We investigate a set of discrete-time quantum search algorithms on the n-dimensional hypercube following a proposal by Shenvi, Kempe and Whaley. We show that there exists a whole class of quantum search algorithms in the symmetry reduced space which perform a search of a marked vertex in time of order $\sqrt{N}$ where $N = 2^n$, the number of vertices. In analogy to Grover's algorithm, the spatial search is effectively facilitated through a rotation in a two-level sub-space of the full Hilbert space. In the hypercube, these two-level systems are introduced through avoided crossings. We give estimates on the quantum states forming the 2-level sub-spaces at the avoided crossings and derive improved estimates on the search times.
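The two-level rotation the abstract invokes is the same mechanism as in Grover's algorithm, where the state stays in the span of the marked item and the uniform rest. That reduction can be simulated exactly with two amplitudes; the sketch below shows the $O(\sqrt{N})$ scaling for plain Grover search (not the hypercube walk itself):

```python
import math

def grover_success_prob(n, steps):
    """Exact success probability of Grover search on N = 2**n items
    with one marked item, tracked in the two-dimensional invariant
    subspace (marked amplitude a, common unmarked amplitude b)."""
    N = 2 ** n
    a = 1 / math.sqrt(N)                    # start in the uniform state
    b = 1 / math.sqrt(N)
    for _ in range(steps):
        a = -a                              # oracle: flip the marked amplitude
        mean = (a + (N - 1) * b) / N
        a, b = 2 * mean - a, 2 * mean - b   # diffusion: inversion about the mean
    return a * a
```

Running roughly $\lfloor \frac{\pi}{4}\sqrt{N} \rfloor$ steps (25 for $N = 1024$) drives the success probability close to one, which is the time scaling the hypercube algorithms share.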
Conjugate gradient algorithms using multiple recursions
Barth, T.; Manteuffel, T.
1996-12-31T23:59:59.000Z
Much is already known about when a conjugate gradient method can be implemented with short recursions for the direction vectors. The work done in 1984 by Faber and Manteuffel gave necessary and sufficient conditions on the iteration matrix A, in order for a conjugate gradient method to be implemented with a single recursion of a certain form. However, this form does not take into account all possible recursions. This became evident when Jagels and Reichel used an algorithm of Gragg for unitary matrices to demonstrate that the class of matrices for which a practical conjugate gradient algorithm exists can be extended to include unitary and shifted unitary matrices. The implementation uses short double recursions for the direction vectors. This motivates the study of multiple recursion algorithms.
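For reference, the single short recursion discussed by Faber and Manteuffel is the one realized by classical CG for symmetric positive definite matrices. A minimal sketch (standard CG, not the unitary/shifted-unitary double-recursion variant of Jagels and Reichel):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Classical CG for symmetric positive definite A: each new search
    direction comes from a single short recursion on the residual."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x
    p = r.copy()
    rr = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rr / (p @ Ap)
        x = x + alpha * p
        r = r - alpha * Ap
        rr_new = r @ r
        if np.sqrt(rr_new) < tol:
            break
        p = r + (rr_new / rr) * p   # short recursion for the next direction
        rr = rr_new
    return x
```

The paper's question is for which matrix classes such short recursions (possibly multiple, coupled ones) exist at all, beyond the SPD case this sketch assumes.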
Pinning impulsive control algorithms for complex network
Sun, Wen [School of Information and Mathematics, Yangtze University, Jingzhou 434023 (China)]; Lü, Jinhu [Academy of Mathematics and Systems Science, Chinese Academy of Sciences, Beijing 100190 (China)]; Chen, Shihua [College of Mathematics and Statistics, Wuhan University, Wuhan 430072 (China)]; Yu, Xinghuo [School of Electrical and Computer Engineering, RMIT University, Melbourne VIC 3001 (Australia)]
2014-03-15T23:59:59.000Z
In this paper, we further investigate the synchronization of complex dynamical networks via pinning control, in which a selection of nodes is controlled at discrete times. Different from most existing work, the pinning control algorithms utilize only impulsive signals at discrete time instants, which may greatly improve communication channel efficiency and reduce control cost. Two classes of algorithms are designed, one for strongly connected complex networks and another for non-strongly connected ones. It is suggested that in a strongly connected network with suitable coupling strength, a single controller at any one of the network's nodes can always pin the network to its homogeneous solution. In the non-strongly connected case, the location and minimum number of nodes needed to pin the network are determined by the Frobenius normal form of the coupling matrix. In addition, the coupling matrix is not necessarily symmetric or irreducible. Illustrative examples are then given to validate the proposed pinning impulsive control algorithms.
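The idea of pinning a whole network through impulses applied to a single node can be illustrated with a toy linear simulation: diffusive coupling between impulses, plus a discrete-time correction pulling the pinned node toward a reference. This is a hedged sketch with hypothetical gains and time steps, not the paper's model or stability conditions:

```python
import numpy as np

def simulate_pinning(L, pinned, steps=6000, dt=0.01, c=1.0,
                     impulse_gain=0.5, impulse_every=50, ref=1.0):
    """Scalar agents with Laplacian coupling L; pinned nodes receive an
    impulsive correction toward ref only at discrete instants."""
    n = L.shape[0]
    x = np.random.default_rng(0).uniform(-1, 1, n)
    for k in range(steps):
        x = x - dt * c * (L @ x)                  # diffusive coupling step
        if k % impulse_every == 0:
            for i in pinned:
                x[i] += impulse_gain * (ref - x[i])  # impulsive control
    return x
```

For a connected graph, pinning any single node in this way drags the whole network to the reference state, in line with the single-controller claim for strongly connected networks.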
Density equalizing map projections: A new algorithm
Merrill, D.W.; Selvin, S.; Mohr, M.S.
1992-02-01T23:59:59.000Z
In the study of geographic disease clusters, an alternative to traditional methods based on rates is to analyze case locations on a transformed map in which population density is everywhere equal. Although the analyst's task is thereby simplified, the specification of the density equalizing map projection (DEMP) itself is not simple and continues to be the subject of considerable research. Here a new DEMP algorithm is described, which avoids some of the difficulties of earlier approaches. The new algorithm (a) avoids illegal overlapping of transformed polygons; (b) finds the unique solution that minimizes map distortion; (c) provides constant magnification over each map polygon; (d) defines a continuous transformation over the entire map domain; (e) defines an inverse transformation; (f) can accept optional constraints such as fixed boundaries; and (g) can use commercially supported minimization software. Work is continuing to improve computing efficiency and improve the algorithm.
Cumulative theoretical uncertainties in lithium depletion boundary age
Tognelli, Emanuele; Degl'Innocenti, Scilla
2015-01-01T23:59:59.000Z
We performed a detailed analysis of the main theoretical uncertainties affecting the age at the lithium depletion boundary (LDB). To do that we computed almost 12000 pre-main sequence models with mass in the range [0.06, 0.4] M_sun by varying input physics (nuclear reaction cross-sections, plasma electron screening, outer boundary conditions, equation of state, and radiative opacity), initial chemical elements abundances (total metallicity, helium and deuterium abundances, and heavy elements mixture), and convection efficiency (mixing length parameter, alpha_ML). As a first step, we studied the effect of varying these quantities individually within their extreme values. Then, we analysed the impact of simultaneously perturbing the main input/parameters without an a priori assumption of independence. Such an approach allowed us to build for the first time the cumulative error stripe, which defines the edges of the maximum uncertainty region in the theoretical LDB age. We found that the cumulative error stripe ...
Investigating Biological Matter with Theoretical Nuclear Physics Methods
Pietro Faccioli
2011-08-25T23:59:59.000Z
The internal dynamics of strongly interacting systems and that of biomolecules such as proteins display several important analogies, despite the huge difference in their characteristic energy and length scales. For example, in all such systems, collective excitations, cooperative transitions and phase transitions emerge as the result of the interplay of strong correlations with quantum or thermal fluctuations. In view of such an observation, some theoretical methods initially developed in the context of theoretical nuclear physics have been adapted to investigate the dynamics of biomolecules. In this talk, we review some of our recent studies performed along this direction. In particular, we discuss how the path integral formulation of the molecular dynamics allows to overcome some of the long-standing problems and limitations which emerge when simulating the protein folding dynamics at the atomistic level of detail.
Theoretical X-ray Line Profiles from Colliding Wind Binaries
Henley, D B; Pittard, J M
2003-01-01T23:59:59.000Z
We present theoretical X-ray line profiles from a range of model colliding wind systems. In particular, we investigate the effects of varying the stellar mass-loss rates, the wind speeds, and the viewing orientation. We find that a wide range of theoretical line profile shapes is possible, varying with orbital inclination and phase. At or near conjunction, the lines have approximately Gaussian profiles, with small widths (HWHM ~ 0.1 v_infty) and definite blue- or redshifts (depending on whether the star with the weaker wind is in front or behind). When the system is viewed at quadrature, the lines are generally much broader (HWHM ~ v_infty), flat-topped and unshifted. Local absorption can have a major effect on the observed profiles - in systems with mass-loss rates of a few times 10^{-6} Msol/yr the lower energy lines (E wind of the primary. The orbital variation ...
An Information Theoretic Location Verification System for Wireless Networks
Yan, Shihao; Nevat, Ido; Peters, Gareth W
2012-01-01T23:59:59.000Z
As location-based applications become ubiquitous in emerging wireless networks, Location Verification Systems (LVS) are of growing importance. In this paper we propose, for the first time, a rigorous information-theoretic framework for an LVS. The theoretical framework we develop illustrates how the threshold used in the detection of a spoofed location can be optimized in terms of the mutual information between the input and output data of the LVS. In order to verify the legitimacy of our analytical framework we have carried out detailed numerical simulations. Our simulations mimic the practical scenario where a system deployed using our framework must make a binary Yes/No "malicious decision" to each snapshot of the signal strength values obtained by base stations. The comparison between simulation and analysis shows excellent agreement. Our optimized LVS framework provides a defence against location spoofing attacks in emerging wireless networks such as those envisioned for Intelligent Transport Systems, wh...
Information theoretic security by the laws of classical physics
Mingesz, R; Gingl, Z; Granqvist, C G; Wen, H; Peper, F; Eubank, T; Schmera, G
2013-01-01T23:59:59.000Z
It has been shown recently that the use of two pairs of resistors with enhanced Johnson noise and a Kirchhoff loop, i.e., the Kirchhoff-Law-Johnson-Noise (KLJN) protocol, for secure key distribution leads to information theoretic security levels superior to those of quantum key distribution, including a natural immunity against a man-in-the-middle attack. This issue is becoming particularly timely because of the recent full cracks of practical quantum communicators, as shown in numerous peer-reviewed publications. This presentation first briefly surveys the KLJN system and then discusses related, essential questions such as: what are the perfect and imperfect security characteristics of key distribution, and how can these two types of security be unconditional (i.e., information theoretic)? Finally, the presentation contains a live demonstration.
Theoretical model for plasma expansion generated by hypervelocity impact
Ju, Yuanyuan; Zhang, Qingming, E-mail: qmzhang@bit.edu.cn; Zhang, Dongjiang; Long, Renrong; Chen, Li; Huang, Fenglei [State Key Laboratory of Explosion Science and Technology, Beijing Institute of Technology, Beijing 100081 (China); Gong, Zizheng [National Key Laboratory of Science and Technology on Reliability and Environment Engineering, Beijing Institute of Spacecraft Environment Engineering, Beijing 100094 (China)
2014-09-15T23:59:59.000Z
Hypervelocity impact experiments of a spherical LY12 aluminum projectile 6.4 mm in diameter on a LY12 aluminum target 23 mm thick have been conducted using a two-stage light gas gun, at projectile impact velocities of 5.2, 5.7, and 6.3 km/s. The experimental results show that a plasma phase transition appears under the current experimental conditions, and that the plasma expansion consists of accumulation, equilibrium, and attenuation stages. The plasma characteristic parameters decrease as the plasma expands outward and scale with the third power of the impact velocity, i.e., (T_e, n_e) ∝ v_p^3. Based on the experimental results, a theoretical model of the plasma expansion is developed; the theoretical results are consistent with the experimental data.
Game theoretic analysis of physical protection system design
Canion, B.; Schneider, E. [Nuclear and Radiation Engineering Program, University of Texas, 204 E. Dean Keeton Street, Stop C2200, Austin, TX 78712 (United States); Bickel, E.; Hadlock, C.; Morton, D. [Operations Research Program, University of Texas, 204 E. Dean Keeton Street, Stop C2200, Austin, TX 78712 (United States)
2013-07-01T23:59:59.000Z
The physical protection system (PPS) of a fictional small modular reactor (SMR) facility has been modeled as a platform for a game theoretic approach to security decision analysis. To demonstrate the game theoretic approach, a rational adversary with complete knowledge of the facility has been modeled attempting a sabotage attack. The adversary adjusts his decisions in response to investments made by the defender to enhance the security measures, which can lead to a conservative physical protection system design. Since defender upgrades are limited by a budget, cost-benefit analysis may be conducted on security upgrades. One approach to cost-benefit analysis is the efficient frontier, which depicts the reduction in expected consequence per incremental increase in the security budget.
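The efficient-frontier idea mentioned at the end reduces to keeping only the non-dominated (cost, expected consequence) points among candidate upgrade portfolios. A minimal sketch, with hypothetical data and function names:

```python
def efficient_frontier(options):
    """options: list of (cost, expected_consequence) pairs.

    Keep the non-dominated points: an option survives only if no
    equal-or-cheaper option achieves an equal-or-lower consequence."""
    frontier = []
    best = float('inf')
    for cost, cons in sorted(options):   # sweep by increasing cost
        if cons < best:                  # strict improvement over all cheaper options
            frontier.append((cost, cons))
            best = cons
    return frontier
```

Plotting the surviving points gives the curve of expected-consequence reduction per incremental security dollar that the abstract describes.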
Theoretical analysis and experiments on antireflection coatings for laser diodes
Chin, Kai Jian
1987-01-01T23:59:59.000Z
THEORETICAL ANALYSIS AND EXPERIMENTS ON ANTIREFLECTION COATINGS FOR LASER DIODES. A Thesis by KAI JIAN CHIN. Submitted to the Graduate College of Texas A&M University in partial fulfillment of the requirements for the degree of MASTER OF SCIENCE, December 1987. Major Subject: Electrical Engineering. Approved as to style and content by: Henry Taylor (Chairman of Committee)...
Investigations in Experimental and Theoretical High Energy Physics
Krennrich, Frank [Iowa State University
2013-07-29T23:59:59.000Z
We report on the work done under DOE grant DE-FG02-01ER41155. The experimental tasks have ongoing efforts at CERN (ATLAS), the Whipple observatory (VERITAS) and R&D work on dual readout calorimetry and neutrino-less double beta decay. The theoretical task emphasizes the weak interaction and in particular CP violation and neutrino physics. The detailed descriptions of the final report on each project are given under the appropriate task section of this report.
Theoretical analysis of perfect quantum state transfer with superconducting qubits
Frederick W. Strauch; Carl J. Williams
2008-12-12T23:59:59.000Z
Superconducting quantum circuits, fabricated with multiple layers, are proposed to implement perfect quantum state transfer between nodes of a hypercube network. For tunable devices such as the phase qubit, each node can transmit quantum information to any other node at a constant rate independent of the distance between qubits. The physical limits of quantum state transfer in this network are theoretically analyzed, including the effects of disorder, decoherence, and higher-order couplings.
Materials for electrochemical capacitors: Theoretical and experimental constraints
Sarangapani, S. [ICET, Inc., Norwood, MA (United States); Tilak, B.V.; Chen, C.P. [Occidental Chemical Corp., Grand Island, NY (United States)
1996-11-01T23:59:59.000Z
Electrochemical capacitors, also called supercapacitors, are unique devices exhibiting 20 to 200 times greater capacitance than conventional capacitors. The large capacitance exhibited by these systems has been demonstrated to arise from a combination of the double-layer capacitance and pseudocapacitance associated with surface redox-type reactions. The purpose of this review is to survey the published data of available electrode materials possessing high specific double-layer or pseudocapacitance and examine their reported performance data in relation to their theoretical expectations.
Madduri, Kamesh; Ediger, David; Jiang, Karl; Bader, David A.; Chavarria-Miranda, Daniel
2009-02-15T23:59:59.000Z
We present a new lock-free parallel algorithm for computing the betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to the analysis of massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
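The kernel being parallelized is Brandes' betweenness-centrality algorithm: a BFS that counts shortest paths from each source, followed by a backward dependency accumulation. A sequential sketch of that kernel (the lock-free parallel version distributes the outer loop over sources and shares the BFS frontier):

```python
from collections import deque

def betweenness(adj):
    """Brandes' algorithm on an unweighted graph {vertex: [neighbors]}.
    With a symmetric adjacency each unordered pair is counted twice;
    halve the scores for the usual undirected convention."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        stack = []
        preds = {v: [] for v in adj}
        sigma = {v: 0 for v in adj}; sigma[s] = 1     # shortest-path counts
        dist = {v: -1 for v in adj}; dist[s] = 0
        q = deque([s])
        while q:                                      # BFS counting shortest paths
            v = q.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    q.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    preds[w].append(v)
        delta = {v: 0.0 for v in adj}
        while stack:                                  # back-propagate dependencies
            w = stack.pop()
            for v in preds[w]:
                delta[v] += sigma[v] / sigma[w] * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

The approximate variant used for the IMDb network runs the same inner loop over a sampled subset of sources instead of all of them.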
Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids
Forest, Mark Gregory [University of North Carolina at Chapel Hill] [University of North Carolina at Chapel Hill
2014-05-06T23:59:59.000Z
The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments, a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the long-time behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions: to small volumes and to the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.
Graph algorithms in the titan toolkit.
McLendon, William Clarence, III; Wylie, Brian Neil
2009-10-01T23:59:59.000Z
Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need of making these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically we describe the design and implementation of an open source toolkit for doing graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.