Theoretical Basis for the Design of a DWPF Evacuated Canister
Routt, K.R.
2001-09-17
This report provides the theoretical bases for use of an evacuated canister for draining a glass melter. Design recommendations are also presented to ensure satisfactory performance in future tests of the concept.
A Decision Theoretic Approach to Evaluate Radiation Detection Algorithms
Nobles, Mallory A.; Sego, Landon H.; Cooley, Scott K.; Gosink, Luke J.; Anderson, Richard M.; Hays, Spencer E.; Tardiff, Mark F.
2013-07-01
There are a variety of sensor systems deployed at U.S. border crossings and ports of entry that scan for illicit nuclear material. In this work, we develop a framework for comparing the performance of detection algorithms that interpret the output of these scans and determine when secondary screening is needed. We optimize each algorithm to minimize its risk, or expected loss. We measure an algorithm's risk by considering its performance over a sample, the probability distribution of threat sources, and the consequence of detection errors. While it is common to optimize algorithms by fixing one error rate and minimizing another, our framework allows one to simultaneously consider multiple types of detection errors. Our framework is flexible and easily adapted to many different assumptions regarding the probability of a vehicle containing illicit material and the relative consequences of false positive and false negative errors. Our methods can therefore inform decision makers of the algorithm family and parameter values that best reduce the threat from illicit nuclear material, given their understanding of the environment at any point in time. To illustrate the applicability of our methods, we compare the risk from two families of detection algorithms and discuss the policy implications of our results.
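The risk calculus this abstract describes (expected loss combining a threat prior, error rates, and error consequences) can be illustrated with a small hypothetical example; all probabilities, costs, and ROC points below are invented assumptions, not values from the paper.

```python
# Hypothetical sketch of a decision-theoretic comparison of detection
# algorithms: an operating point (false-alarm rate, detection rate) is
# scored by its expected loss, combining the prior probability of a
# threat with the consequences of each error type.

def expected_risk(p_threat, c_false_pos, c_false_neg, p_fa, p_d):
    """Expected loss of one operating point.

    p_threat     -- prior probability a vehicle carries illicit material
    c_false_pos  -- cost of an unnecessary secondary screening
    c_false_neg  -- cost of missing a real threat
    p_fa         -- false-alarm probability of the algorithm
    p_d          -- detection probability of the algorithm
    """
    return (1 - p_threat) * p_fa * c_false_pos + p_threat * (1 - p_d) * c_false_neg

def best_operating_point(roc_points, p_threat, c_fp, c_fn):
    """Choose the (p_fa, p_d) pair minimizing expected risk."""
    return min(roc_points, key=lambda pt: expected_risk(p_threat, c_fp, c_fn, *pt))

# Compare two hypothetical algorithm families over their ROC curves.
roc_a = [(0.01, 0.70), (0.05, 0.85), (0.10, 0.92)]
roc_b = [(0.01, 0.60), (0.05, 0.90), (0.10, 0.95)]

point_a = best_operating_point(roc_a, p_threat=1e-4, c_fp=1.0, c_fn=1e5)
point_b = best_operating_point(roc_b, p_threat=1e-4, c_fp=1.0, c_fn=1e5)
```

Raising the threat prior or the false-negative cost pushes the preferred operating point toward higher detection rates, which is the kind of sensitivity the framework is meant to expose to decision makers.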
Li, Z; Leng, S; Yu, L; McCollough, C
2014-06-15
Purpose: Published methods for image-based material decomposition with multi-energy CT images have required the assumption of volume conservation or accurate knowledge of the x-ray spectra and detector response. The purpose of this work was to develop an image-based material-decomposition algorithm that can overcome these limitations. Methods: An image-based material decomposition algorithm was developed that requires only mass conservation (rather than volume conservation). With this method, using multi-energy CT measurements made with n=4 energy bins, the mass density of each basis material and of the mixture can be determined without knowledge of the tube spectra and detector response. A digital phantom containing 12 samples of mixtures of water, calcium, iron, and iodine was used in the simulation (Siemens DRASIM). The calibration was performed by using pure materials at each energy bin. The accuracy of the technique was evaluated in noise-free and noisy data under the assumption of an ideal photon-counting detector. Results: Basis material densities can be estimated accurately by either theoretic calculation or calibration with known pure materials. The calibration approach requires no prior information about the spectra and detector response. Regression analysis of theoretical values versus estimated values results in excellent agreement for both noise-free and noisy data. For the calibration approach, the R-square values are 0.9960 ± 0.0025 and 0.9476 ± 0.0363 for noise-free and noisy data, respectively. Conclusion: From multi-energy CT images with n=4 energy bins, the developed image-based material decomposition method accurately estimated 4 basis material densities (3 without a k-edge and 1 with a k-edge in the range of the simulated energy bins) even without any prior information about spectra and detector response. This method is applicable to mixtures of solutions and dissolvable materials, where volume conservation assumptions do not apply.
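As a rough illustration of calibration-based decomposition of the kind described above (not the authors' actual implementation), the per-bin measurements of a mixture can be modeled as a linear combination of pure-material calibration values and inverted for the basis densities; the 4x4 calibration matrix and mixture composition below are invented numbers.

```python
# Illustrative sketch: measured CT values in n = 4 energy bins are
# modeled as a linear combination of the basis materials' pure-material
# calibration values, with coefficients giving each material's density
# relative to its calibration sample.

import numpy as np

# Calibration: value of each pure basis material in each energy bin
# (rows = 4 energy bins; columns = water, calcium, iron, iodine).
A = np.array([
    [1.00, 2.10, 3.50, 4.20],
    [1.00, 1.80, 3.00, 3.90],
    [1.00, 1.50, 2.40, 2.80],
    [1.00, 1.30, 2.00, 2.10],
])

true_fractions = np.array([0.7, 0.1, 0.1, 0.1])  # mixture composition
measured = A @ true_fractions                    # simulated bin measurements

# Decomposition: solve the 4x4 system for the per-material densities.
estimated = np.linalg.solve(A, measured)
```

Because the calibration matrix is measured rather than derived, no knowledge of the spectra or detector response enters the inversion, which mirrors the abstract's central claim.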
Theoretical Plasma Physics (Technical Report) | SciTech Connect
U.S. Department of Energy (DOE) all webpages (Extended Search)
Technical Report: Theoretical Plasma Physics. Lattice Boltzmann algorithms are a mesoscopic method to solve ...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Theoretical Division. Theoretical research encompasses all disciplines of science: physics and chemistry of materials; nuclear and particle physics, astrophysics, and cosmology; fluid dynamics and solid mechanics; physics of condensed matter and complex systems; applied mathematics and plasma physics; theoretical biology and biophysics. Contacts: Division Leader Jack Shlachter; Deputy Division Leader Joel Kress; Point of Contact Tanya Lynn Jackson, (505) 667-4401.
Belief network algorithms: A study of performance
Jitnah, N.
1996-12-31
This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.
U.S. Department of Energy (DOE) all webpages (Extended Search)
HEP Theoretical Physics: Understanding discoveries at the Energy, Intensity, and Cosmic Frontiers. Expertise: Rajan Gupta, (505) 667-7664; Bruce Carlsten, (505) 667-5657. HEP Theory at Los Alamos: The Theoretical High Energy Physics group at Los Alamos National Laboratory is active in a number of diverse areas of research. Their primary areas of interest are in physics beyond the Standard Model, cosmology, dark matter, lattice quantum chromodynamics, neutrinos, the fundamentals of ...
Improved multiprocessor garbage collection algorithms
Newman, I.A.; Stallard, R.P.; Woodward, M.C.
1983-01-01
Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.
Sharkey, Keeper L.; Adamowicz, Ludwik; Department of Physics, University of Arizona, Tucson, Arizona 85721
2014-05-07
An algorithm for quantum-mechanical nonrelativistic variational calculations of L = 0 and M = 0 states of atoms with an arbitrary number of s electrons and with three p electrons has been implemented and tested in calculations of the ground {sup 4}S state of the nitrogen atom. The spatial part of the wave function is expanded in terms of all-electron explicitly correlated Gaussian functions with the appropriate pre-exponential Cartesian angular factors for states with the L = 0 and M = 0 symmetry. The algorithm includes formulas for calculating the Hamiltonian and overlap matrix elements, as well as formulas for calculating the analytic energy gradient determined with respect to the Gaussian exponential parameters. The gradient is used in the variational optimization of these parameters. The Hamiltonian used in the approach is obtained by rigorously separating the center-of-mass motion from the laboratory-frame all-particle Hamiltonian, and thus it explicitly depends on the finite mass of the nucleus. With that, the mass effect on the total ground-state energy is determined.
R.J. Garrett
2002-01-14
As part of the internal Integrated Safety Management Assessment verification process, it was determined that there was a lack of documentation that summarizes the safety basis of the current Yucca Mountain Project (YMP) site characterization activities. It was noted that a safety basis would make it possible to establish a technically justifiable graded approach to the implementation of the requirements identified in the Standards/Requirements Identification Document. The Standards/Requirements Identification Documents commit a facility to compliance with specific requirements and, together with the hazard baseline documentation, provide a technical basis for ensuring that the public and workers are protected. This Safety Basis Report has been developed to establish and document the safety basis of the current site characterization activities, establish and document the hazard baseline, and provide the technical basis for identifying structures, systems, and components (SSCs) that perform functions necessary to protect the public, the worker, and the environment from hazards unique to the YMP site characterization activities. This technical basis for identifying SSCs serves as a grading process for the implementation of programs such as Conduct of Operations (DOE Order 5480.19) and the Suspect/Counterfeit Items Program. In addition, this report provides a consolidated summary of the hazards analyses processes developed to support the design, construction, and operation of the YMP site characterization facilities and, therefore, provides a tool for evaluating the safety impacts of changes to the design and operation of the YMP site characterization activities.
Directives, Delegations, and Requirements [Office of Management (MA)]
2007-07-11
The Guide assists DOE/NNSA field elements and operating contractors in identifying and analyzing hazards at facilities and sites to provide the technical planning basis for emergency management programs. Supersedes DOE G 151.1-1, Volume 2.
Radioactive Waste Management Basis
Perkins, B K
2009-06-03
The purpose of this Radioactive Waste Management Basis is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
Theoretical Biology and Biophysics
U.S. Department of Energy (DOE) all webpages (Extended Search)
Theoretical Biology and Biophysics: modeling biological systems and the analysis and informatics of molecular and cellular biological data. Mathematical Biology, Immunology, Fundamental ...
Hallquist, J.O.
1983-03-01
This report provides a theoretical manual for DYNA3D, a vectorized explicit three-dimensional finite element code for analyzing the large deformation dynamic response of inelastic solids. A contact-impact algorithm that permits gaps and sliding along material interfaces is described. By a specialization of this algorithm, such interfaces can be rigidly tied to admit variable zoning without the need of transition regions. Spatial discretization is achieved by the use of 8-node solid elements, and the equations-of-motion are integrated by the central difference method. DYNA3D is operational on the CRAY-1 and CDC7600 computers.
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
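A minimal sketch of the kind of sampling-based stopping rule this abstract describes, under assumed illustrative numbers: the algorithm stops once an upper confidence bound on the estimated optimality gap falls within the prespecified tolerance.

```python
# Sketch of a confidence-interval stopping rule: bounds on the optimal
# objective value are estimated from independent samples, and the
# algorithm terminates when an upper confidence bound on the expected
# optimality gap drops below tolerance. The gap samples are invented.

import math
import random

def gap_confidence_bound(gap_samples, z=1.96):
    """Upper confidence bound on the expected optimality gap."""
    n = len(gap_samples)
    mean = sum(gap_samples) / n
    var = sum((g - mean) ** 2 for g in gap_samples) / (n - 1)
    return mean + z * math.sqrt(var / n)

def should_stop(gap_samples, tolerance):
    return gap_confidence_bound(gap_samples) <= tolerance

random.seed(0)
# Pretend each iteration of the decomposition yields a sampled gap
# estimate near 0.02 with some Monte Carlo noise.
samples = [abs(random.gauss(0.02, 0.005)) for _ in range(100)]
stop = should_stop(samples, tolerance=0.05)
```

The sample size trades off against the width of the confidence bound, which is exactly the tension the stopping rule theory in the abstract is meant to resolve.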
Algorithms for builder guidelines
Balcomb, J.D.; Lekov, A.B.
1989-06-01
The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets is also presented. 5 refs., 3 tabs.
Exploratory Development of Theoretical Methods | The Ames Laboratory
U.S. Department of Energy (DOE) all webpages (Extended Search)
Exploratory Development of Theoretical Methods Research Personnel Updates Publications Modeling The purpose of this FWP is to generate new theories, models, and algorithms that will be beneficial to the research programs at the Ames Laboratory and to the mission of DOE. This FWP will lead the development of theoretical tools to study a broad range of problems in physics, materials science, and chemical as well as biological systems. The generality of these tools allows the cross-fertilization of
U.S. Department of Energy (DOE) all webpages (Extended Search)
operator bispectral analysis. D. A. Baver and P. W. Terry (Department of Physics, University of Wisconsin, Madison, Wisconsin 53706, USA) and C. Holland (Center for Energy Research, University of California, San Diego, California 92093, USA). (Received 11 September 2008; accepted 12 February 2009; published online 30 March 2009.) A new procedure for calculating model coefficients from fluctuation data for fully developed turbulence is derived. This procedure differs from previous related
U.S. Department of Energy (DOE) all webpages (Extended Search)
that this equation is a difference-equation representation in the temporal domain of a first-order-in-time nonlinear partial differential equation. The coefficient L_k ...
CRAD for Safety Basis (SB). Criteria Review and Approach Documents (CRADs) that can be used to conduct a well-organized and thorough assessment of elements of safety and health programs.
Library of Continuation Algorithms
Energy Science and Technology Software Center (OSTI)
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
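LOCA itself is C++ and its actual API is not reproduced here; the following is an assumed, minimal Python sketch of the underlying idea of parameter continuation, where each solve warm-starts Newton's method from the previous solution along the parameter branch.

```python
# Natural parameter continuation: solve f(x, lam) = 0 by Newton's
# method at each parameter value lam, warm-starting from the solution
# at the previous lam. The example problem below is an assumption for
# illustration, not anything from LOCA.

def newton(f, df, x, lam, tol=1e-12, iters=50):
    """Scalar Newton iteration for f(x, lam) = 0."""
    for _ in range(iters):
        step = f(x, lam) / df(x, lam)
        x -= step
        if abs(step) < tol:
            break
    return x

def continuation(f, df, x0, lams):
    """Trace the solution branch x(lam) over the given parameter values."""
    branch, x = [], x0
    for lam in lams:
        x = newton(f, df, x, lam)  # warm start from previous solution
        branch.append((lam, x))
    return branch

# Example branch: x**2 - lam = 0, following the positive root sqrt(lam).
f = lambda x, lam: x * x - lam
df = lambda x, lam: 2 * x
branch = continuation(f, df, 1.0, [0.5, 1.0, 2.0, 4.0])
```

Warm starting keeps each Newton solve in the basin of attraction of the branch being traced, which is why continuation scales to the large nonlinear systems the library targets.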
The Basis Code Development System
Energy Science and Technology Software Center (OSTI)
1994-03-15
BASIS9.4 is a system for developing interactive computer programs in Fortran, with some support for C and C++ as well. Using BASIS9.4 you can create a program that has a sophisticated programming language as its user interface so that the user can set, calculate with, and plot, all the major variables in the program. The program author writes only the scientific part of the program; BASIS9.4 supplies an environment in which to exercise that scientific programming which includes an interactive language, an interpreter, graphics, terminal logs, error recovery, macros, saving and retrieving variables, formatted I/O, and online documentation.
Basis functions for electronic structure calculations on spheres
Gill, Peter M. W.; Loos, Pierre-François; Agboola, Davids
2014-12-28
We introduce a new basis function (the spherical Gaussian) for electronic structure calculations on spheres of any dimension D. We find general expressions for the one- and two-electron integrals and propose an efficient computational algorithm incorporating the Cauchy-Schwarz bound. Using numerical calculations for the D = 2 case, we show that spherical Gaussians are more efficient than spherical harmonics when the electrons are strongly localized.
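The Cauchy-Schwarz bound mentioned above is the standard integral-screening inequality |(ij|kl)| <= sqrt((ij|ij) (kl|kl)). Here is a schematic sketch of how such a bound prunes negligible integral batches before any expensive evaluation; the diagonal values are invented, and nothing here depends on the spherical-Gaussian specifics of the paper.

```python
# Schwarz screening: a two-electron integral over pair densities p and
# q is bounded by sqrt((p|p)) * sqrt((q|q)), so pairs whose bound falls
# below a threshold can be skipped without computing the integral.

import math

def schwarz_screen(diag, threshold=1e-10):
    """Return the index pairs (p, q) whose integral bound survives.

    diag[p] holds the diagonal integral (p|p) for pair index p.
    """
    q_bound = [math.sqrt(d) for d in diag]
    survivors = []
    for p in range(len(diag)):
        for q in range(p, len(diag)):
            if q_bound[p] * q_bound[q] >= threshold:
                survivors.append((p, q))
    return survivors

# Three significant pair densities and one negligible one.
diagonals = [1.0, 1e-2, 1e-6, 1e-24]
kept = schwarz_screen(diagonals, threshold=1e-8)
```

Because the bound needs only the diagonal integrals, the quadratic screening pass is cheap relative to the quartic cost of evaluating every integral.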
MONTHLY RADIATION SURVEY TECHNICAL BASIS
BROWN, R.L.
2003-06-13
This document details the technical basis, analysis, and justification for rescheduling radiation surveys in occupied radiation areas within Tank Farm Facilities from a weekly to a monthly frequency. The purpose of this document is to provide the technical basis, analysis, and justification for seeking a technical equivalency determination (TED) to TFRCM Article 552.1.b. The scope of this document is limited to radiation surveys in occupied areas; no equivalency is being sought for high radiation area boundary surveys, radiological buffer area surveys, active ventilation surveys, or work coverage surveys.
Easy and hard testbeds for real-time search algorithms
Koenig, S.; Simmons, R.G.
1996-12-31
Although researchers have studied which factors influence the behavior of traditional search algorithms, currently not much is known about how domain properties influence the performance of real-time search algorithms. In this paper we demonstrate, both theoretically and experimentally, that Eulerian state spaces (a superset of undirected state spaces) are very easy for some existing real-time search algorithms to solve: even real-time search algorithms that can be intractable, in general, are efficient for Eulerian state spaces. Because traditional real-time search testbeds (such as the eight puzzle and gridworlds) are Eulerian, they cannot be used to distinguish between efficient and inefficient real-time search algorithms. It follows that one has to use non-Eulerian domains to demonstrate the general superiority of a given algorithm. To this end, we present two classes of hard-to-search state spaces and demonstrate the performance of various real-time search algorithms on them.
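As a concrete reference point for the class of algorithms discussed above, here is a compact sketch of LRTA*, a standard real-time search algorithm: the agent repeatedly updates the heuristic of its current state from its successors and moves greedily. The small undirected graph (undirected state spaces being a special case of Eulerian ones) is an assumed toy example.

```python
# LRTA* with unit edge costs: at each step, raise h(s) toward
# min over successors s' of (1 + h(s')), then move to that successor.

def lrta_star(graph, start, goal, h, max_steps=1000):
    """Run LRTA*; returns the path of states actually visited.

    graph -- dict mapping state -> list of neighbor states
    h     -- dict of (mutable) heuristic values, 0 at the goal
    """
    state, path = start, [start]
    for _ in range(max_steps):
        if state == goal:
            return path
        # Value update: h(s) <- max(h(s), min_{s'} (1 + h(s')))
        best = min(graph[state], key=lambda s: 1 + h[s])
        h[state] = max(h[state], 1 + h[best])
        state = best
        path.append(state)
    return path

grid = {  # small undirected graph (a 4-cycle plus a spur to the goal)
    'A': ['B', 'D'], 'B': ['A', 'C'],
    'C': ['B', 'D', 'G'], 'D': ['A', 'C'], 'G': ['C'],
}
h0 = {s: 0 for s in grid}
route = lrta_star(grid, 'A', 'G', h0)
```

On Eulerian and undirected graphs such as this one, the learned heuristic values keep the agent from cycling indefinitely, which is one intuition behind the tractability results the abstract reports.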
Energy Science and Technology Software Center (OSTI)
002651IBMPC00: Algorithm for Accounting for the Interactions of Multiple Renewable Energy Technologies in Estimation of Annual Performance
Authorization basis requirements comparison report
Brantley, W.M.
1997-08-18
The TWRS Authorization Basis (AB) consists of a set of documents identified by TWRS management with the concurrence of DOE-RL. Upon implementation of the TWRS Basis for Interim Operation (BIO) and Technical Safety Requirements (TSRs), the AB list will be revised to include the BIO and TSRs. Some documents that currently form part of the AB will be removed from the list. This SD identifies each - requirement from those documents, and recommends a disposition for each to ensure that necessary requirements are retained when the AB is revised to incorporate the BIO and TSRs. This SD also identifies documents that will remain part of the AB after the BIO and TSRs are implemented. This document does not change the AB, but provides guidance for the preparation of change documentation.
Acoustic diagnosis of gas insulated substations; A theoretical and experimental basis
Lundgaard, L.E.; Runde, M.; Skyberg, B.
1990-10-01
Loose particles and discharges inside the ducts of a gas insulated substation (GIS) are considered hazardous to the insulation system. These irregularities or flaws can be detected by using acoustic sensors placed on the enclosure. An elementary review of sound propagation in the GIS, together with corresponding experimental results are presented. By using ultrasonic acoustic emission sensors an excellent sensitivity to discharges and moving particles is obtained. The method offers possibilities for a quantification of the flaws, and thereby for a risk analysis. However, the degree of certainty of such an analysis is still low, especially for particles.
Hanford Generic Interim Safety Basis
Lavender, J.C.
1994-09-09
The purpose of this document is to identify WHC programs and requirements that are an integral part of the authorization basis for nuclear facilities that are generic to all WHC-managed facilities. The purpose of these programs is to implement the DOE Orders, as WHC becomes contractually obligated to implement them. The Hanford Generic ISB focuses on the institutional controls and safety requirements identified in DOE Order 5480.23, Nuclear Safety Analysis Reports.
Theoretical Division Current Job Openings
U.S. Department of Energy (DOE) all webpages (Extended Search)
IRC50385 Staff Scientist: Material Informatics; IRC50253 Staff Scientist: Quantum Information and Quantum Physics; IRC49276 Theoretical and Computational Fluid Dynamics; IRC49630 ACME ...
OSR encapsulation basis -- 100-KW
Meichle, R.H.
1995-01-27
The purpose of this report is to provide the basis for a change in the Operations Safety Requirement (OSR) encapsulated fuel storage requirements in the 105 KW fuel storage basin which will permit the handling and storing of encapsulated fuel in canisters which no longer have a water-free space in the top of the canister. The scope of this report is limited to providing the change from the perspective of the safety envelope (bases) of the Safety Analysis Report (SAR) and Operations Safety Requirements (OSR). It does not change the encapsulation process itself.
Internal dosimetry technical basis manual
Not Available
1990-12-20
The internal dosimetry program at the Savannah River Site (SRS) consists of radiation protection programs and activities used to detect and evaluate intakes of radioactive material by radiation workers. Examples of such programs are: air monitoring; surface contamination monitoring; personal contamination surveys; radiobioassay; and dose assessment. The objectives of the internal dosimetry program are to demonstrate that the workplace is under control and that workers are not being exposed to radioactive material, and to detect and assess inadvertent intakes in the workplace. The Savannah River Site Internal Dosimetry Technical Basis Manual (TBM) is intended to provide a technical and philosophical discussion of the radiobioassay and dose assessment aspects of the internal dosimetry program. Detailed information on air, surface, and personal contamination surveillance programs is not given in this manual except for how these programs interface with routine and special bioassay programs.
Tank characterization technical sampling basis
Brown, T.M.
1998-04-28
Tank Characterization Technical Sampling Basis (this document) is the first step of an in-place working process to plan characterization activities in an optimal manner. This document will be used to develop the revision of the Waste Information Requirements Document (WIRD) (Winkelman et al. 1997) and, ultimately, to create sampling schedules. The revised WIRD will define all Characterization Project activities over the course of subsequent fiscal years 1999 through 2002. This document establishes priorities for sampling and characterization activities conducted under the Tank Waste Remediation System (TWRS) Tank Waste Characterization Project. The Tank Waste Characterization Project is designed to provide all TWRS programs with information describing the physical, chemical, and radiological properties of the contents of waste storage tanks at the Hanford Site. These tanks contain radioactive waste generated from the production of nuclear weapons materials at the Hanford Site. The waste composition varies from tank to tank because of the large number of chemical processes that were used when producing nuclear weapons materials over the years and because the wastes were mixed during efforts to better use tank storage space. The Tank Waste Characterization Project mission is to provide information and waste sample material necessary for TWRS to define and maintain safe interim storage and to process waste fractions into stable forms for ultimate disposal. This document integrates the information needed to address safety issues, regulatory requirements, and retrieval, treatment, and immobilization requirements. Characterization sampling to support tank farm operational needs is also discussed.
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider
2008-03-01
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules: 23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert
2007-04-01
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 26 cost modules: 24 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, and high-level waste.
Advanced Fuel Cycle Cost Basis
D. E. Shropshire; K. A. Williams; W. B. Boore; J. D. Smith; B. W. Dixon; M. Dunzik-Gougar; R. D. Adams; D. Gombert; E. Schneider
2009-12-01
This report, commissioned by the U.S. Department of Energy (DOE), provides a comprehensive set of cost data supporting a cost analysis for the relative economic comparison of options for use in the Advanced Fuel Cycle Initiative (AFCI) Program. The report describes the AFCI cost basis development process, reference information on AFCI cost modules, a procedure for estimating fuel cycle costs, economic evaluation guidelines, and a discussion on the integration of cost data into economic computer models. This report contains reference cost data for 25 cost modules: 23 fuel cycle cost modules and 2 reactor modules. The cost modules were developed in the areas of natural uranium mining and milling, conversion, enrichment, depleted uranium disposition, fuel fabrication, interim spent fuel storage, reprocessing, waste conditioning, spent nuclear fuel (SNF) packaging, long-term monitored retrievable storage, near surface disposal of low-level waste (LLW), geologic repository and other disposal concepts, and transportation processes for nuclear fuel, LLW, SNF, transuranic, and high-level waste.
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Feller, D; Schuchardt, Karen L.; Didier, Brett T.; Elsethagen, Todd; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared; Li, Jun
The Basis Set Exchange (BSE) provides a web-based user interface for downloading and uploading Gaussian-type (GTO) basis sets, including effective core potentials (ECPs), from the EMSL Basis Set Library. It provides an improved user interface and capabilities over its predecessor, the EMSL Basis Set Order Form, for exploring the contents of the EMSL Basis Set Library. The popular Basis Set Order Form and underlying Basis Set Library were originally developed by Dr. David Feller and have been available from the EMSL webpages since 1994. BSE not only allows downloading of the more than 200 Basis sets in various formats; it allows users to annotate existing sets and to upload new sets. (Specialized Interface)
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Feller, D; Schuchardt, Karen L.; Didier, Brett T.; Elsethagen, Todd; Sun, Lisong; Gurumoorthi, Vidhya; Chase, Jared; Li, Jun
The Basis Set Exchange (BSE) provides a web-based user interface for downloading and uploading Gaussian-type (GTO) basis sets, including effective core potentials (ECPs), from the EMSL Basis Set Library. It provides an improved user interface and capabilities over its predecessor, the EMSL Basis Set Order Form, for exploring the contents of the EMSL Basis Set Library. The popular Basis Set Order Form and underlying Basis Set Library were originally developed by Dr. David Feller and have been available from the EMSL webpages since 1994. BSE not only allows downloading of the more than 500 Basis sets in various formats; it allows users to annotate existing sets and to upload new sets. (Specialized Interface)
James R. Chelikowsky
2009-03-31
The work reported here took place at the University of Minnesota from September 15, 2003 to November 14, 2005. This funding resulted in 10 invited articles or book chapters, 37 articles in refereed journals and 13 invited talks. The funding helped train 5 PhD students. The research supported by this grant focused on developing theoretical methods for predicting and understanding the properties of matter at the nanoscale. Within this regime, new phenomena occur that are characteristic of neither the atomic limit, nor the crystalline limit. Moreover, this regime is crucial for understanding the emergence of macroscopic properties such as ferromagnetism. For example, elemental Fe clusters possess magnetic moments that reside between the atomic and crystalline limits, but the transition from the atomic to the crystalline limit is not a simple interpolation between the two size regimes. To capitalize properly on predicting such phenomena in this transition regime, a deeper understanding of the electronic, magnetic and structural properties of matter is required, e.g., electron correlation effects are enhanced within this size regime and the surface of a confined system must be explicitly included. A key element of our research involved the construction of new algorithms to address problems peculiar to the nanoscale. Typically, one would like to consider systems with thousands of atoms or more, e.g., a silicon nanocrystal that is 7 nm in diameter would contain over 10,000 atoms. Previous ab initio methods could address systems with hundreds of atoms whereas empirical methods can routinely handle hundreds of thousands of atoms (or more). However, these empirical methods often rely on ad hoc assumptions and lack incorporation of structural and electronic degrees of freedom. The key theoretical ingredients in our work involved the use of ab initio pseudopotentials and density functional approaches. The key numerical ingredients involved the implementation of algorithms for
Safety Basis Information System | Department of Energy
Click on the above link to log in to the Safety Basis web interface (RESTRICTED access), or to access the form to request access to the Safety Basis web interface.
Energy Science and Technology Software Center (OSTI)
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
Lightning Talks 2015: Theoretical Division
Shlachter, Jack S.
2015-11-25
This document is a compilation of slides from a number of student presentations given to LANL Theoretical Division members. The subjects cover the range of activities of the Division, including plasma physics, environmental issues, materials research, bacterial resistance to antibiotics, and computational methods.
ON THE VERIFICATION AND VALIDATION OF GEOSPATIAL IMAGE ANALYSIS ALGORITHMS
Roberts, Randy S.; Trucano, Timothy G.; Pope, Paul A.; Aragon, Cecilia R.; Jiang , Ming; Wei, Thomas; Chilton, Lawrence; Bakel, A. J.
2010-07-25
Verification and validation (V&V) of geospatial image analysis algorithms is a difficult and increasingly important task. While there are many types of image analysis algorithms, we focus on developing V&V methodologies for algorithms designed to provide textual descriptions of geospatial imagery. In this paper, we present a novel methodological basis for V&V that employs a domain-specific ontology, which provides a naming convention for a domain-bounded set of objects and a set of named relationships between these objects. We describe a validation process that proceeds by objectively comparing benchmark imagery, produced using the ontology, with algorithm results. As an example, we describe how the proposed V&V methodology would be applied to algorithms designed to provide textual descriptions of facilities.
Theoretical Nuclear Physics - Research - Cyclotron Institute
U.S. Department of Energy (DOE) all webpages (Extended Search)
Theoretical Nuclear Physics By addressing this elastic scattering indirect technique, we ... The theoretical physics program concentrates on the development of fundamental and ...
Theoretical energy release of thermites, intermetallics, and...
Office of Scientific and Technical Information (OSTI)
Theoretical energy release of thermites, intermetallics, and combustible metals Citation Details In-Document Search Title: Theoretical energy release of thermites, intermetallics, and ...
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. The algorithms giving the best solutions tend to be sequential by nature, while algorithms more suitable for parallel computation give lower-quality solutions. We present a new, simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
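As a point of reference for the guarantee discussed above, the textbook greedy scheme (sort edges by weight, keep any edge whose endpoints are both still free) already achieves a 1/2-approximation for weighted matching. A minimal sketch of that classical baseline, not the paper's new algorithm:

```python
def greedy_matching(edges):
    """edges: iterable of (weight, u, v). Returns (total_weight, list of pairs)."""
    matched, matching, total = set(), [], 0
    # Scan edges from heaviest to lightest; keep an edge only if
    # neither endpoint is already matched.
    for w, u, v in sorted(edges, key=lambda e: e[0], reverse=True):
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
            total += w
    return total, matching

# Path a-b-c-d with weights 2, 3, 2: greedy takes (b, c) with weight 3,
# while the optimum (a, b) + (c, d) has weight 4, so greedy >= optimum/2.
weight, pairs = greedy_matching([(2, "a", "b"), (3, "b", "c"), (2, "c", "d")])
print(weight, pairs)  # 3 [('b', 'c')]
```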
Structural basis for Tetrahymena telomerase processivity factor...
Office of Scientific and Technical Information (OSTI)
factor Teb1 binding to single-stranded telomeric-repeat DNA Citation Details In-Document Search Title: Structural basis for Tetrahymena telomerase processivity factor Teb1 ...
Property:ExplorationBasis | Open Energy Information
Open Energy Information (Open El) [EERE & EIA]
Text Description Exploration Basis Why was exploration work conducted in this area (e.g., USGS report of a geothermal resource, hot springs with geothermometry indicating...
On constructing optimistic simulation algorithms for the discrete event system specification
Nutaro, James J
2008-01-01
This article describes a Time Warp simulation algorithm for discrete event models that are described in terms of the Discrete Event System Specification (DEVS). The article shows how the total state transition and total output function of a DEVS atomic model can be transformed into an event processing procedure for a logical process. A specific Time Warp algorithm is constructed around this logical process, and it is shown that the algorithm correctly simulates a DEVS coupled model that consists entirely of interacting atomic models. The simulation algorithm is presented abstractly; it is intended to provide a basis for implementing efficient and scalable parallel algorithms that correctly simulate DEVS models.
Dynamical properties of non-ideal plasma on the basis of effective potentials
Ramazanov, T. S.; Kodanova, S. K.; Moldabekov, Zh. A.; Issanova, M. K.
2013-11-15
In this work, stopping power has been calculated on the basis of the Coulomb logarithm using the effective potentials. Calculations of the Coulomb logarithm and stopping power for different interaction potentials and degrees of ionization are compared. The comparison with the data of other theoretical and experimental works was carried out.
design basis threat | National Nuclear Security Administration
National Nuclear Security Administration (NNSA)
design basis threat Design Basis Threat NNSA has taken aggressive action to improve the security of its nuclear weapons material (often referred to as special nuclear material, or SNM) and nuclear weapons in its custody. One major challenge has been, and remains, ensuring that SNM is well protected, while at the same time
Energy Science and Technology Software Center (OSTI)
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and with thermal or visual tracking, as well as other tracking methods such as radio frequency tags.
Theoretical Estimate of Maximum Possible Nuclear Explosion
DOE R&D Accomplishments [OSTI]
Bethe, H. A.
1950-01-31
The maximum nuclear accident which could occur in a Na-cooled, Be-moderated, Pu- and power-producing reactor is estimated theoretically. (T.R.H.) Results of nuclear calculations for a variety of compositions of fast, heterogeneous, sodium-cooled, U-235-fueled, plutonium- and power-producing reactors are reported. Core compositions typical of plate-, pin-, or wire-type fuel elements and with uranium as metal, alloy, and oxide were considered. These compositions included atom ratios in the following ranges: U-238 to U-235 from 2 to 8; sodium to U-235 from 1.5 to 12; iron to U-235 from 5 to 18; and vanadium to U-235 from 11 to 33. Calculations were performed to determine the effect of lead and iron reflectors between the core and blanket. Both natural and depleted uranium were evaluated as the blanket fertile material. Reactors were compared on a basis of conversion ratio, specific power, and the product of both. The calculated results are in general agreement with the experimental results from fast reactor assemblies. An analysis of the effect of new cross-section values as they became available is included. (auth)
Theoretical studies of combustion dynamics
Bowman, J.M.
1993-12-01
The basic objectives of this research program are to develop and apply theoretical techniques to fundamental dynamical processes of importance in gas-phase combustion. There are two major areas currently supported by this grant. One is reactive scattering of diatom-diatom systems, and the other is the dynamics of complex formation and decay based on L{sup 2} methods. In all of these studies, the authors focus on systems that are of interest experimentally, and for which potential energy surfaces based, at least in part, on ab initio calculations are available.
Williams, P.T.
1993-09-01
As the field of computational fluid dynamics (CFD) continues to mature, algorithms are required to exploit the most recent advances in approximation theory, numerical mathematics, computing architectures, and hardware. Meeting this requirement is particularly challenging in incompressible fluid mechanics, where primitive-variable CFD formulations that are robust, while also accurate and efficient in three dimensions, remain an elusive goal. This dissertation asserts that one key to accomplishing this goal is recognition of the dual role assumed by the pressure, i.e., a mechanism for instantaneously enforcing conservation of mass and a force in the mechanical balance law for conservation of momentum. Proving this assertion has motivated the development of a new, primitive-variable, incompressible, CFD algorithm called the Continuity Constraint Method (CCM). The theoretical basis for the CCM consists of a finite-element spatial semi-discretization of a Galerkin weak statement, equal-order interpolation for all state-variables, a θ-implicit time-integration scheme, and a quasi-Newton iterative procedure extended by a Taylor Weak Statement (TWS) formulation for dispersion error control. Original contributions to algorithmic theory include: (a) formulation of the unsteady evolution of the divergence error, (b) investigation of the role of non-smoothness in the discretized continuity-constraint function, (c) development of a uniformly H{sup 1} Galerkin weak statement for the Reynolds-averaged Navier-Stokes pressure Poisson equation, (d) derivation of physically and numerically well-posed boundary conditions, and (e) investigation of sparse data structures and iterative methods for solving the matrix algebra statements generated by the algorithm.
Theoretical perspectives on strange physics
Ellis, J.
1983-04-01
Kaons are heavy enough to have an interesting range of decay modes available to them, and light enough to be produced in sufficient numbers to explore rare modes with satisfying statistics. Kaons and their decays have provided at least two major breakthroughs in our knowledge of fundamental physics. They have revealed to us CP violation, and their lack of flavor-changing neutral interactions warned us to expect charm. In addition, K{sup 0}-anti-K{sup 0} mixing has provided us with one of our most elegant and sensitive laboratories for testing quantum mechanics. There is every reason to expect that future generations of kaon experiments with intense sources would add further to our knowledge of fundamental physics. This talk attempts to set future kaon experiments in a general theoretical context, and indicate how they may bear upon fundamental theoretical issues. A survey of different experiments which could be done with an Intense Medium Energy Source of Strangeness is given, including rare K decays, probes of the nature of CP violation, mu decays, hyperon decays, and neutrino physics. (WHK)
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices (a chord). Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
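For context, chordality itself can be tested in near-linear time with maximum cardinality search (MCS) followed by the Tarjan-Yannakakis perfect-elimination check. A compact sketch of that standard test (not the paper's augmentation algorithm):

```python
def is_chordal(adj):
    """Chordality test: maximum cardinality search (MCS) plus the
    Tarjan-Yannakakis perfect-elimination check.
    adj maps each vertex to its set of neighbors."""
    order, weight, unnumbered = [], {v: 0 for v in adj}, set(adj)
    while unnumbered:
        # MCS: always pick the vertex with the most already-numbered neighbors.
        v = max(unnumbered, key=lambda u: weight[u])
        order.append(v)
        unnumbered.remove(v)
        for u in adj[v] & unnumbered:
            weight[u] += 1
    order.reverse()  # reversed MCS order is a perfect elimination ordering
                     # if and only if the graph is chordal
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        later = [u for u in adj[v] if pos[u] > pos[v]]
        if later:
            w = min(later, key=lambda u: pos[u])  # earliest later neighbor
            # All other later neighbors of v must be adjacent to w.
            if any(u != w and u not in adj[w] for u in later):
                return False
    return True

square = {"a": {"b", "d"}, "b": {"a", "c"}, "c": {"b", "d"}, "d": {"a", "c"}}
print(is_chordal(square))   # False: a 4-cycle with no chord
square["a"].add("c"); square["c"].add("a")
print(is_chordal(square))   # True: adding the chord a-c makes it chordal
```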
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
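The combinatorial pressure described above is one reason simple data-association baselines remain useful. A sketch of greedy nearest-neighbor association between predicted track positions and new detections, with a distance gate; illustrative only, not one of the report's evaluated algorithms:

```python
def associate(tracks, detections, gate):
    """Greedily pair each (id, (x, y)) track with its nearest unused
    detection, skipping pairs farther apart than `gate`."""
    # Enumerate all track-detection pairs from closest to farthest.
    cands = sorted(
        (((tx - dx) ** 2 + (ty - dy) ** 2) ** 0.5, tid, j)
        for tid, (tx, ty) in tracks
        for j, (dx, dy) in enumerate(detections))
    pairs, used_t, used_d = [], set(), set()
    for dist, tid, j in cands:
        if dist <= gate and tid not in used_t and j not in used_d:
            pairs.append((tid, j))
            used_t.add(tid)
            used_d.add(j)
    return pairs

tracks = [("t1", (0.0, 0.0)), ("t2", (5.0, 5.0))]
detections = [(5.2, 5.1), (0.3, -0.1)]
print(associate(tracks, detections, gate=1.0))  # [('t2', 0), ('t1', 1)]
```

Greedy association is quadratic in the number of pairs, which is exactly where the combinatorial growth bites as resolution increases; multi-hypothesis trackers trade this simplicity for robustness at much higher cost.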
Basis for UCNI | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
UCNI Basis for UCNI What documents contain the legal and policy foundations for the UCNI program? Section 148 of the Atomic Energy Act of 1954, as amended (42 U.S.C. 2011 et seq.), is the statutory basis for the UCNI program. 10 CFR Part 1017, Identification and Protection of Unclassified Controlled Nuclear Information specifies many detailed policies and requirements concerning the UCNI program. DOE O 471.1B, Identification and Protection of Unclassified Controlled Nuclear Information,
Critical review of theoretical models for anomalous effects in deuterated metals
Chechin, V.A.; Tsarev, V.A.; Rabinowitz, M.; Kim, Y.E.
1994-03-01
The authors briefly summarize the reported anomalous effects in deuterated metals at ambient temperature, commonly known as "cold fusion" (CF), with an emphasis on the latest experiments, as well as the theoretical basis for the opposition to interpreting them as cold fusion. Then they critically examine more than 25 theoretical models for CF, including unusual nuclear and exotic chemical hypotheses. They conclude that these models do not explain the data.
Rossi, Tuomas P.; Sakko, Arto; Puska, Martti J.; Lehtola, Susi; Nieminen, Risto M.
2015-03-07
We present an approach for generating local numerical basis sets of improving accuracy for first-principles nanoplasmonics simulations within time-dependent density functional theory. The method is demonstrated for copper, silver, and gold nanoparticles that are of experimental interest but computationally demanding due to the semi-core d-electrons that affect their plasmonic response. The basis sets are constructed by augmenting numerical atomic orbital basis sets by truncated Gaussian-type orbitals generated by the completeness-optimization scheme, which is applied to the photoabsorption spectra of homoatomic metal atom dimers. We obtain basis sets of improving accuracy up to the complete basis set limit and demonstrate that the performance of the basis sets transfers to simulations of larger nanoparticles and nanoalloys as well as to calculations with various exchange-correlation functionals. This work promotes the use of the local basis set approach of controllable accuracy in first-principles nanoplasmonics simulations and beyond.
Recent Theoretical Results for Advanced Thermoelectric Materials...
More Documents & Publications Recent Theoretical Results for Advanced Thermoelectric Materials Thermoelectric Materials by Design, Computational Theory and Structure ...
Gu, Renliang; Dogandžić, Aleksandar (E-mail: ald@iastate.edu)
2015-03-31
We develop a sparse image reconstruction method for polychromatic computed tomography (CT) measurements under the blind scenario where the material of the inspected object and the incident energy spectrum are unknown. To obtain a parsimonious measurement model parameterization, we first rewrite the measurement equation using our mass-attenuation parameterization, which has the Laplace integral form. The unknown mass-attenuation spectrum is expanded into basis functions using a B-spline basis of order one. We develop a block coordinate-descent algorithm for constrained minimization of a penalized negative log-likelihood function, where constraints and penalty terms ensure nonnegativity of the spline coefficients and sparsity of the density map image in the wavelet domain. This algorithm alternates between a Nesterov proximal-gradient step for estimating the density map image and an active-set step for estimating the incident spectrum parameters. Numerical simulations demonstrate the performance of the proposed scheme.
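As background for the proximal-gradient half of such an alternating scheme, the basic non-accelerated iteration (ISTA) for an l1-penalized least-squares problem applies a gradient step followed by soft-thresholding, which is the proximal operator of the l1 penalty. A toy sketch on a small dense system, not the paper's mass-attenuation code:

```python
def soft(z, t):
    """Soft-thresholding: the proximal operator of t * |.|."""
    return (abs(z) - t) * (1 if z > 0 else -1) if abs(z) > t else 0.0

def ista(A, b, lam, step, iters):
    """Minimize 0.5 * ||A x - b||^2 + lam * ||x||_1 by iterating
    a gradient step on the quadratic term, then soft-thresholding."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]  # A^T r
        x = [soft(x[j] - step * g[j], step * lam) for j in range(n)]
    return x

# With A = I the fixed point is simply soft(b, lam): small entries are
# zeroed out, large ones shrunk by lam. That is the sparsity mechanism.
A = [[1.0, 0.0, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]]
b = [0.0, 2.0, 0.0]
x = ista(A, b, lam=0.1, step=1.0, iters=200)
print([round(v, 3) for v in x])  # [0.0, 1.9, 0.0]
```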
Research in Theoretical Particle Physics
Feldman, Hume A; Marfatia, Danny
2014-09-24
This document is the final report on activity supported under DOE Grant Number DE-FG02-13ER42024. The report covers the period July 15, 2013 to March 31, 2014. Faculty supported by the grant during the period were Danny Marfatia (1.0 FTE) and Hume Feldman (1% FTE). The grant partly supported University of Hawaii students, David Yaylali and Keita Fukushima, who are supervised by Jason Kumar. Both students are expected to graduate with Ph.D. degrees in 2014. Yaylali will be joining the University of Arizona theory group in Fall 2014 with a 3-year postdoctoral appointment under Keith Dienes. The group's research covered topics subsumed under the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. Many theoretical results related to the Standard Model and models of new physics were published during the reporting period. The report contains brief project descriptions in Section 1. Sections 2 and 3 list published and submitted work, respectively. Sections 4 and 5 summarize group activity including conferences, workshops and professional presentations.
Smith, Kyle K. G.; Poulsen, Jens Aage; Nyman, Gunnar; Rossky, Peter J.
2015-06-28
We develop two classes of quasi-classical dynamics that are shown to conserve the initial quantum ensemble when used in combination with the Feynman-Kleinert approximation of the density operator. These dynamics are used to improve the Feynman-Kleinert implementation of the classical Wigner approximation for the evaluation of quantum time correlation functions known as Feynman-Kleinert linearized path-integral. As shown, both classes of dynamics are able to recover the exact classical and high temperature limits of the quantum time correlation function, while a subset is able to recover the exact harmonic limit. A comparison of the approximate quantum time correlation functions obtained from both classes of dynamics is made with the exact results for the challenging model problems of the quartic and double-well potentials. It is found that these dynamics provide a great improvement over the classical Wigner approximation, in which purely classical dynamics are used. In a special case, our first method becomes identical to centroid molecular dynamics.
CRAD, Facility Safety- Nuclear Facility Safety Basis
Office of Energy Efficiency and Renewable Energy (EERE)
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) that can be used for assessment of a contractor's Nuclear Facility Safety Basis.
TWRS authorization basis configuration control summary
Mendoza, D.P.
1997-12-26
This document was developed to define the Authorization Basis management functional requirements for configuration control, to evaluate the management control systems currently in place, and identify any additional controls that may be required until the TWRS [Tank Waste Remediation System] Configuration Management system is fully in place.
Review and Approval of Nuclear Facility Safety Basis and Safety Design Basis Documents
Directives, Delegations, and Requirements [Office of Management (MA)]
2014-12-19
This Standard describes a framework and the criteria to be used for approval of (1) safety basis documents, as required by 10 Code of Federal Regulation (C.F.R.) 830, Nuclear Safety Management, and (2) safety design basis documents, as required by Department of Energy (DOE) Standard (STD)-1189-2008, Integration of Safety into the Design Process.
Cyclotron Institute » Theoretical Nuclear Physics
U.S. Department of Energy (DOE) all webpages (Extended Search)
Theoretical Nuclear Physics Coulomb Barrier Progress toward understanding the structure and behavior of strongly interacting many-body systems requires detailed theoretical study. The theoretical physics program concentrates on the development of fundamental and phenomenological models of nuclear behavior. In some systems, the nucleons move quite freely and independently, while in others they behave in a very cooperative and coherent manner. To understand this dichotomy and search for new modes
Parameters and error of a theoretical model
Moeller, P.; Nix, J.R.; Swiatecki, W.
1986-09-01
We propose a definition for the error of a theoretical model of the type whose parameters are determined from adjustment to experimental data. By applying a standard statistical method, the maximum-likelihood method, we derive expressions for both the parameters of the theoretical model and its error. We investigate the derived equations by solving them for simulated experimental and theoretical quantities generated by use of random number generators. 2 refs., 4 tabs.
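The simulation exercise described can be pictured with a toy linear model: generate pseudo-experimental data from known parameters plus Gaussian noise, recover the parameters by maximum likelihood (least squares, for Gaussian errors), and quote the residual spread as the model error. All numbers below are illustrative assumptions, not values from the paper:

```python
import random

random.seed(1)
a_true, b_true, sigma = 2.0, -1.0, 0.3   # the "true theory" and noise level
xs = [0.1 * i for i in range(50)]
ys = [a_true * x + b_true + random.gauss(0.0, sigma) for x in xs]

# Closed-form maximum-likelihood (least-squares) estimates for y = a*x + b.
n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
a_hat = (n * sxy - sx * sy) / (n * sxx - sx * sx)
b_hat = (sy - a_hat * sx) / n

# Model error: rms residual with n - 2 degrees of freedom; it should
# recover the noise level sigma used to generate the data.
resid = [y - (a_hat * x + b_hat) for x, y in zip(xs, ys)]
model_error = (sum(r * r for r in resid) / (n - 2)) ** 0.5

print(round(a_hat, 2), round(b_hat, 2), round(model_error, 2))
```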
Theoretical High Energy Physics | Argonne National Laboratory
U.S. Department of Energy (DOE) all webpages (Extended Search)
Much of the work of high-energy physics concentrates on the interplay between theory and experiment. The theory group of Argonne's High Energy Physics Division performs high-precision calculations of Standard Model processes, interprets experimental data in terms of
Arctic Mixed-Phase Cloud Properties from AERI Lidar Observations: Algorithm and Results from SHEBA
Turner, David D.
2005-04-01
A new approach to retrieve microphysical properties from mixed-phase Arctic clouds is presented. This mixed-phase cloud property retrieval algorithm (MIXCRA) retrieves cloud optical depth, ice fraction, and the effective radius of the water and ice particles from ground-based, high-resolution infrared radiance and lidar cloud boundary observations. The theoretical basis for this technique is that the absorption coefficient of ice is greater than that of liquid water from 10 to 13 µm, whereas liquid water is more absorbing than ice from 16 to 25 µm. MIXCRA retrievals are only valid for optically thin (visible optical depth τ < 6) single-layer clouds when the precipitable water vapor is less than 1 cm. MIXCRA was applied to the Atmospheric Emitted Radiance Interferometer (AERI) data that were collected during the Surface Heat Budget of the Arctic Ocean (SHEBA) experiment from November 1997 to May 1998, where 63% of all of the cloudy scenes above the SHEBA site met this specification. The retrieval determined that approximately 48% of these clouds were mixed phase and that a significant number of clouds (during all 7 months) contained liquid water, even for cloud temperatures as low as 240 K. The retrieved distributions of effective radii for water and ice particles in single-phase clouds are shown to be different than the effective radii in mixed-phase clouds.
Catalyst by Design - Theoretical, Nanostructural, and Experimental...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Oxidation Catalyst for Diesel Engine Emission Treatment Catalyst by Design - Theoretical, Nanostructural, and Experimental Studies of Oxidation Catalyst for Diesel Engine Emission ...
COLLOQUIUM: Theoretical and Experimental Aspects of Controlled...
U.S. Department of Energy (DOE) all webpages (Extended Search)
5:30pm MBG Auditorium COLLOQUIUM: Theoretical and Experimental Aspects of Controlled Quantum Dynamics Professor Herschel Rabitz Princeton University Abstract: ...
2005 American Conference on Theoretical Chemistry
Carter, Emily A
2006-11-19
The materials uploaded are meant to serve as the final report on the funds provided by DOE-BES to help sponsor the 2005 American Conference on Theoretical Chemistry.
Theoretical calculating the thermodynamic properties of solid...
Office of Scientific and Technical Information (OSTI)
calculations, a theoretical screening methodology to identify the most promising COsub ... Such methodology not only can be used to search for good candidates from existing database ...
TECHNICAL BASIS DOCUMENT FOR NATURAL EVENT HAZARDS
KRIPPS, L.J.
2006-07-31
This technical basis document was developed to support the documented safety analysis (DSA) and describes the risk binning process and the technical basis for assigning risk bins for natural event hazard (NEH)-initiated accidents. The purpose of the risk binning process is to determine the need for safety-significant structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls for a given representative accident or represented hazardous conditions based on an evaluation of the frequency and consequence. Note that the risk binning process is not applied to facility workers, because all facility worker hazardous conditions are considered for safety-significant SSCs and/or TSR-level controls.
Radioactive Waste Management Basis, April 2006
Perkins, B K
2011-08-31
This Radioactive Waste Management Basis (RWMB) documents radioactive waste management practices adopted at Lawrence Livermore National Laboratory (LLNL) pursuant to Department of Energy (DOE) Order 435.1, Radioactive Waste Management. The purpose of this Radioactive Waste Management Basis is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
Grid and basis adaptive polynomial chaos techniques for sensitivity and uncertainty analysis
Perkó, Zoltán; Gilli, Luca; Lathouwers, Danny; Kloosterman, Jan Leen
2014-03-01
The demand for accurate and computationally affordable sensitivity and uncertainty techniques is constantly on the rise and has become especially pressing in the nuclear field with the shift to Best Estimate Plus Uncertainty methodologies in the licensing of nuclear installations. Besides traditional, already well developed methods – such as first order perturbation theory or Monte Carlo sampling – Polynomial Chaos Expansion (PCE) has been given a growing emphasis in recent years due to its simple application and good performance. This paper presents new developments of the research done at TU Delft on such Polynomial Chaos (PC) techniques. Our work is focused on the Non-Intrusive Spectral Projection (NISP) approach and adaptive methods for building the PCE of responses of interest. Recent efforts resulted in a new adaptive sparse grid algorithm designed for estimating the PC coefficients. The algorithm is based on Gerstner's procedure for calculating multi-dimensional integrals but proves to be computationally significantly cheaper, while at the same time it retains a similar accuracy as the original method. More importantly, the issue of basis adaptivity has been investigated and two techniques have been implemented for constructing the sparse PCE of quantities of interest. Not using the traditional full PC basis set leads to further reduction in computational time, since the high order grids necessary for accurately estimating the near-zero expansion coefficients of polynomial basis vectors not needed in the PCE can be excluded from the calculation. Moreover, the sparse PC representation of the response is easier to handle when used for sensitivity analysis or uncertainty propagation due to the smaller number of basis vectors. The developed grid and basis adaptive methods have been implemented in Matlab as the Fully Adaptive Non-Intrusive Spectral Projection (FANISP) algorithm and were tested on four analytical problems. These show consistent good performance both in
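The NISP projection step can be illustrated in one dimension: for a uniform input on [-1, 1], the PC coefficients are projections onto Legendre polynomials, and the projection integrals can be evaluated with Gauss-Legendre quadrature. A hand-rolled toy sketch, not the FANISP implementation:

```python
# 5-point Gauss-Legendre nodes and weights on [-1, 1] (exact to degree 9).
nodes = [-0.9061798459, -0.5384693101, 0.0, 0.5384693101, 0.9061798459]
weights = [0.2369268851, 0.4786286705, 0.5688888889, 0.4786286705, 0.2369268851]

# First three Legendre polynomials P0, P1, P2.
legendre = [lambda x: 1.0, lambda x: x, lambda x: 1.5 * x * x - 0.5]

def pce_coeffs(f):
    """NISP projection: c_k = E[f P_k] / E[P_k^2], where for a uniform
    input on [-1, 1] the density is 1/2 and E[P_k^2] = 1/(2k+1)."""
    return [(2 * k + 1) * 0.5 * sum(w * f(x) * legendre[k](x)
                                    for x, w in zip(nodes, weights))
            for k in range(3)]

# The response x^2 has the exact expansion (1/3) P0 + 0 P1 + (2/3) P2,
# since 1/3 + (2/3)(3x^2 - 1)/2 = x^2.
c = pce_coeffs(lambda x: x * x)
print([round(v, 6) for v in c])
```

Basis adaptivity, in this picture, amounts to discarding terms whose coefficients are near zero (here P1) so that later quadratures can be cheaper.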
Computational and theoretical aspects of biomolecular structure and dynamics
Garcia, A.E.; Berendzen, J.; Catasti, P.; Chen, X.
1996-09-01
This is the final report for a project that sought to evaluate and develop theoretical and computational bases for designing, performing, and analyzing experimental studies in structural biology. Simulations of large biomolecular systems in solution, hydrophobic interactions, and quantum chemical calculations for large systems have been performed. We have developed a code that implements the Fast Multipole Algorithm (FMA), which scales linearly in the number of particles simulated in a large system. New methods have been developed for the analysis of multidimensional NMR data in order to obtain high resolution atomic structures. These methods have been applied to the study of DNA sequences in the human centromere, sequences linked to genetic diseases, and the dynamics and structure of myoglobin.
Review and Approval of Nuclear Facility Safety Basis and Safety Design Basis Documents
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
SENSITIVE DOE-STD-1104-2014 December 2014 Superseding DOE-STD-1104-2009 DOE STANDARD REVIEW AND APPROVAL OF NUCLEAR FACILITY SAFETY BASIS AND SAFETY DESIGN BASIS DOCUMENTS U.S. Department of Energy AREA SAFT Washington, DC 20585 DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. DOE-STD-1104-2014 i FOREWORD 1. This Standard describes a framework and the criteria to be used for approval of (1) safety basis documents, as required by 10 Code of Federal Regulation
Cubit Adaptive Meshing Algorithm Library
Energy Science and Technology Software Center (OSTI)
2004-09-01
CAMAL (Cubit Adaptive Meshing Algorithm Library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad, and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition, and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing-front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.
Graph Characterization and Sampling Algorithms
Office of Scientific and Technical Information (OSTI)
Sandia National Laboratories ubiquitous Computer traffic Social networks Biological ... conference on Innovations in theoretical computer science, pp. 471-482, 2014, doi:10.1145...
Technical Cost Modeling - Life Cycle Analysis Basis for Program...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
More Documents & Publications Technical Cost Modeling - Life Cycle Analysis Basis for Program Focus Technical Cost Modeling - Life Cycle Analysis Basis for Program Focus Polymer ...
Technical Cost Modeling - Life Cycle Analysis Basis for Program...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
More Documents & Publications Technical Cost Modeling - Life Cycle Analysis Basis for Program Focus Technical Cost Modeling - Life Cycle Analysis Basis for Program Focus Life Cycle ...
Technical Cost Modeling - Life Cycle Analysis Basis for Program...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
More Documents & Publications Technical Cost Modeling - Life Cycle Analysis Basis for Program Focus Technical Cost Modeling - Life Cycle Analysis Basis for Program Focus ...
Structural Basis for the Interaction between Pyk2-FAT Domain...
Office of Scientific and Technical Information (OSTI)
Structural Basis for the Interaction between Pyk2-FAT Domain and Leupaxin LD Repeats Citation Details In-Document Search Title: Structural Basis for the Interaction between ...
Structural Basis of Prion Inhibition by Phenothiazine Compounds...
Office of Scientific and Technical Information (OSTI)
SciTech Connect Search Results Journal Article: Structural Basis of Prion Inhibition by Phenothiazine Compounds Citation Details In-Document Search Title: Structural Basis of Prion ...
ORISE: The Medical Basis for Radiation-Accident Preparedness...
U.S. Department of Energy (DOE) all webpages (Extended Search)
The Medical Basis for Radiation-Accident Preparedness: Medical Management Proceedings of the Fifth International REACTS Symposium on the Medical Basis for Radiation-Accident ...
Heavy quarkonium in a holographic basis (Journal Article) | DOE...
Office of Scientific and Technical Information (OSTI)
Heavy quarkonium in a holographic basis Title: Heavy quarkonium in a holographic basis Authors: Li, Yang ...
Los Alamos National Laboratory fission basis (Conference) | SciTech...
Office of Scientific and Technical Information (OSTI)
Los Alamos National Laboratory fission basis Citation Details In-Document Search Title: Los Alamos National Laboratory fission basis ...
Enterprise Assessments Targeted Review of the Safety Basis at...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Review of the Safety Basis at the Savannah River Site F-Area ... Office of Environment, Safety and Health Assessments ... SAR Safety Analysis Report SBRT Safety Basis Review ...
A molecular basis for advanced materials in water treatment....
Office of Scientific and Technical Information (OSTI)
A molecular basis for advanced materials in water treatment. Citation Details In-Document Search Title: A molecular basis for advanced materials in water treatment. Authors: Rempe, ...
Technical Basis for PNNL Beryllium Inventory
Johnson, Michelle Lynn
2014-07-09
The Department of Energy (DOE) issued Title 10 of the Code of Federal Regulations Part 850, "Chronic Beryllium Disease Prevention Program" (the Beryllium Rule), in 1999 and required full compliance by no later than January 7, 2002. The Beryllium Rule requires the development of a baseline beryllium inventory of the locations of beryllium operations and other locations of potential beryllium contamination at DOE facilities. The baseline beryllium inventory is also required to identify workers exposed or potentially exposed to beryllium at those locations. Prior to DOE issuing 10 CFR 850, Pacific Northwest National Laboratory (PNNL) had documented the beryllium characterization and worker exposure potential for multiple facilities in compliance with DOE's 1997 Notice 440.1, "Interim Chronic Beryllium Disease." After DOE's issuance of 10 CFR 850, PNNL developed an implementation plan to be compliant by 2002. In 2014, an internal self-assessment (ITS #E-00748) of PNNL's Chronic Beryllium Disease Prevention Program (CBDPP) identified several deficiencies. One deficiency was that the technical basis for establishing the baseline beryllium inventory when the Beryllium Rule was implemented was either not documented or not retrievable. In addition, the beryllium inventory itself had not been adequately documented and maintained since PNNL established its own CBDPP, separate from the Hanford Site's program. This document reconstructs PNNL's baseline beryllium inventory as it would have existed when PNNL achieved compliance with the Beryllium Rule in 2001 and provides the technical basis for the baseline beryllium inventory.
Optimized Algorithms Boost Combustion Research
U.S. Department of Energy (DOE) all webpages (Extended Search)
Optimized Algorithms Boost Combustion Research Optimized Algorithms Boost Combustion Research Methane Flame Simulations Run 6x Faster on NERSC's Hopper Supercomputer November 25, 2014 Contact: Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov Turbulent combustion simulations, which provide input to the design of more fuel-efficient combustion systems, have gotten their own efficiency boost, thanks to researchers from the Computational Research Division (CRD) at Lawrence Berkeley National
Radioactive Waste Management Basis, Sept 2001
Goodwin, S S
2011-08-31
This Radioactive Waste Management Basis (RWMB) documents radioactive waste management practices adopted at Lawrence Livermore National Laboratory (LLNL) pursuant to Department of Energy (DOE) Order 435.1, Radioactive Waste Management. The purpose of this RWMB is to describe the systematic approach for planning, executing, and evaluating the management of radioactive waste at LLNL. The implementation of this document will ensure that waste management activities at LLNL are conducted in compliance with the requirements of DOE Order 435.1, Radioactive Waste Management, and the Implementation Guide for DOE Manual 435.1-1, Radioactive Waste Management Manual. Technical justification is provided where methods for meeting the requirements of DOE Order 435.1 deviate from the DOE Manual 435.1-1 and Implementation Guide.
Berkeley Algorithms Help Researchers Understand Dark Energy
U.S. Department of Energy (DOE) all webpages (Extended Search)
Algorithms Help Researchers Understand Dark Energy Berkeley Algorithms Help Researchers Understand Dark Energy November 24, 2014 Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov ...
Theoretical Fusion Research | Princeton Plasma Physics Lab
U.S. Department of Energy (DOE) all webpages (Extended Search)
NSTX-U Education Organization Contact Us Overview Experimental Fusion Research Theoretical Fusion Research Basic Plasma Science Plasma Astrophysics Other Physics and Engineering Research PPPL Technical Reports NSTX-U Theoretical Fusion Research About Theory Department The fusion energy sciences mission of the Theory Department at the Princeton Plasma Physics Laboratory (PPPL) is to help provide the scientific foundations for establishing magnetic confinement as an attractive, technically
Theoretical Plasma Physicist | Princeton Plasma Physics Lab
U.S. Department of Energy (DOE) all webpages (Extended Search)
Theoretical Plasma Physicist Department: Theory Supervisor(s): Amitava Bhattacharjee Staff: RM 3 Requisition Number: 16000351 PPPL/Theory Department has an opening at the rank of Research Physicist in theoretical plasma physics. Research areas of interest include macroscopic equilibrium and stability, energetic particles, turbulence and transport, and waves in fusion plasmas. The Department is looking to recruit an exceptionally strong theorist with leadership potential. Minimum qualifications
Review and Approval of Nuclear Facility Safety Basis and Safety Design Basis Documents
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
SENSITIVE DOE-STD-1104-2009 May 2009 Superseding DOE-STD-1104-96 DOE STANDARD REVIEW AND APPROVAL OF NUCLEAR FACILITY SAFETY BASIS AND SAFETY DESIGN BASIS DOCUMENTS U.S. Department of Energy AREA SAFT Washington, DC 20585 DISTRIBUTION STATEMENT A. Approved for public release; distribution is unlimited. DOE-STD-1104-2009 ii Available on the Department of Energy Technical Standards web page at http://www.hss.energy.gov/nuclearsafety/ns/techstds/ DOE-STD-1104-2009 iii CONTENTS FOREWORD
Generation of multi-million element meshes for solid model-based geometries: The Dicer algorithm
Melander, D.J.; Benzley, S.E.; Tautges, T.J.
1997-06-01
The Dicer algorithm generates a fine mesh by refining each element in a coarse all-hexahedral mesh generated by any existing all-hexahedral mesh generation algorithm. The fine mesh is geometry-conforming. Using existing all-hexahedral meshing algorithms to define the initial coarse mesh simplifies the overall meshing process and allows dicing to take advantage of improvements in other meshing algorithms immediately. The Dicer algorithm will be used to generate large meshes in support of the ASCI program. The authors also plan to use dicing as the basis for parallel mesh generation. Dicing strikes a careful balance between the interactive mesh generation and multi-million element mesh generation processes for complex 3D geometries, providing an efficient means for producing meshes of varying refinement once the coarse mesh is obtained.
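To make the refinement step concrete, a single straight-sided hexahedral element can be "diced" into k³ sub-hexes by trilinear interpolation of its eight corner nodes. This is a hypothetical sketch (`dice_hex` is our name, and the geometry conformance to curved surfaces that Dicer provides is omitted):

```python
import numpy as np

def dice_hex(corners, k):
    """Refine one hexahedral element into k^3 sub-hexes by trilinear
    interpolation of its 8 corner nodes (a sketch of the dicing idea;
    conforming the fine mesh to curved geometry is not handled here)."""
    c = np.asarray(corners, dtype=float)  # shape (8, 3), VTK corner order
    t = np.linspace(0.0, 1.0, k + 1)
    nodes = []
    for w in t:          # zeta
        for v in t:      # eta
            for u in t:  # xi
                # trilinear shape functions for the 8 corners
                s = np.array([(1-u)*(1-v)*(1-w), u*(1-v)*(1-w),
                              u*v*(1-w), (1-u)*v*(1-w),
                              (1-u)*(1-v)*w, u*(1-v)*w,
                              u*v*w, (1-u)*v*w])
                nodes.append(s @ c)
    return np.array(nodes)  # the (k+1)^3 fine-mesh nodes

unit = [(0,0,0),(1,0,0),(1,1,0),(0,1,0),(0,0,1),(1,0,1),(1,1,1),(0,1,1)]
pts = dice_hex(unit, 2)   # 27 nodes for a 2x2x2 dicing of the unit cube
```

Applied to every element of a coarse all-hexahedral mesh (with shared faces deduplicated), this yields the k³-fold finer mesh the abstract describes.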
GPU Accelerated Event Detection Algorithm
Energy Science and Technology Software Center (OSTI)
2011-05-25
Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to the demands of streaming data analysis: (i) event detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that can not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms are needed that can operate in an online fashion on streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce the computational complexity of monitoring the change between successive windows and detecting anomalies in the manner described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
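The two-step idea in (a) and (b) can be sketched as follows, using a plain per-window SVD in place of the incremental and tensor-based variants described above; the function names are ours, and a robust z-score detector stands in for "known univariate techniques":

```python
import numpy as np

def window_change_series(X, win):
    """Convert a multi-dimensional sequence X (T x d) into a univariate
    series: for each pair of successive windows, measure the change in the
    dominant SVD direction (a sketch of the windowed-SVD idea; the
    incremental/tensor variants are not reproduced here)."""
    changes = []
    prev_v = None
    for start in range(0, X.shape[0] - win + 1, win):
        W = X[start:start + win]
        # dominant right singular vector of the centered window
        _, _, Vt = np.linalg.svd(W - W.mean(axis=0), full_matrices=False)
        v = Vt[0]
        if prev_v is not None:
            # 1 - |cos angle| between successive dominant directions
            changes.append(1.0 - abs(prev_v @ v))
        prev_v = v
    return np.array(changes)

def flag_anomalies(series, z=3.0):
    """Simple univariate detector: flag points beyond z robust deviations."""
    med = np.median(series)
    mad = np.median(np.abs(series - med)) + 1e-12
    return np.where(np.abs(series - med) / (1.4826 * mad) > z)[0]
```

A change in the underlying correlation structure of the stream rotates the dominant singular direction, so it shows up as a spike in the univariate series.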
Probability-theoretic characteristics of solar batteries
Lidorenko, N.S.; Asharin, L.N.; Borisova, N.A.; Evdokimov, V.M.; Ryabikov, S.V.
1980-01-01
Results are reported for an investigation into the characteristics of solar batteries on the basis of probability theory with the photocells treated as current generators; methods for reducing solar-battery circuit losses are considered.
Real-time algorithm for robust coincidence search
Petrovic, T.; Vencelj, M.; Lipoglavsek, M.; Gajevic, J.; Pelicon, P.
2012-10-20
In in-beam {gamma}-ray spectroscopy experiments, we often look for coincident detection events. For N detected events, a naive coincidence search has complexity O(N{sup 2}). When we limit the width of the coincidence search window, the complexity can be reduced to O(N), permitting the implementation of the algorithm in real-time measurements carried out indefinitely. We have built an algorithm to find simultaneous events between two detection channels. The algorithm was tested in an experiment where coincidences between X and {gamma} rays detected in two HPGe detectors were observed in the decay of {sup 61}Cu. Functioning of the algorithm was validated by comparing the experimentally determined branching ratios for EC decay with theoretical calculations for 3 selected {gamma}-ray energies in {sup 61}Cu decay. Our research opened a question on the validity of the adopted value of the total angular momentum of the 656 keV state (J{sup {pi}} = 1/2{sup -}) in {sup 61}Ni.
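The windowed O(N) search can be sketched with a sliding lower bound over two time-sorted channels (a sketch of the idea, not the authors' real-time implementation; names are ours):

```python
def find_coincidences(ch_a, ch_b, window):
    """Find all (i, j) index pairs with |ch_a[i] - ch_b[j]| <= window.
    Both channels must be time-sorted; the lower bound j0 only ever
    advances, so the scan is O(N) when the window is narrow relative
    to the event rate."""
    pairs = []
    j0 = 0
    for i, ta in enumerate(ch_a):
        # advance the left edge: events too early can never match again
        while j0 < len(ch_b) and ch_b[j0] < ta - window:
            j0 += 1
        j = j0
        while j < len(ch_b) and ch_b[j] <= ta + window:
            pairs.append((i, j))
            j += 1
    return pairs
```

Because each pointer moves monotonically forward, the total work is proportional to the number of events plus the number of matches, rather than to all N² pairs.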
Interim Basis for PCB Sampling and Analyses
BANNING, D.L.
2001-03-20
This document was developed as an interim basis for sampling and analysis of polychlorinated biphenyls (PCBs) and will be used until a formal data quality objective (DQO) document is prepared and approved. On August 31, 2000, the Framework Agreement for Management of Polychlorinated Biphenyls (PCBs) in Hanford Tank Waste was signed by the U.S. Department of Energy (DOE), the Environmental Protection Agency (EPA), and the Washington State Department of Ecology (Ecology) (Ecology et al. 2000). This agreement outlines the management of double shell tank (DST) waste as Toxic Substance Control Act (TSCA) PCB remediation waste based on a risk-based disposal approval option per Title 40 of the Code of Federal Regulations 761.61 (c). The agreement calls for ''Quantification of PCBs in DSTs, single shell tanks (SSTs), and incoming waste to ensure that the vitrification plant and other ancillary facilities PCB waste acceptance limits and the requirements of the anticipated risk-based disposal approval are met.'' Waste samples will be analyzed for PCBs to satisfy this requirement. This document describes the DQO process undertaken to assure appropriate data will be collected to support management of PCBs and is presented in a DQO format. The DQO process was implemented in accordance with the U.S. Environmental Protection Agency EPA QA/G4, Guidance for the Data Quality Objectives Process (EPA 1994) and the Data Quality Objectives for Sampling and Analyses, HNF-IP-0842, Rev. 1A, Vol. IV, Section 4.16 (Banning 1999).
Interim Basis for PCB Sampling and Analyses
BANNING, D.L.
2001-01-18
This document was developed as an interim basis for sampling and analysis of polychlorinated biphenyls (PCBs) and will be used until a formal data quality objective (DQO) document is prepared and approved. On August 31, 2000, the Framework Agreement for Management of Polychlorinated Biphenyls (PCBs) in Hanford Tank Waste was signed by the U.S. Department of Energy (DOE), the Environmental Protection Agency (EPA), and the Washington State Department of Ecology (Ecology) (Ecology et al. 2000). This agreement outlines the management of double shell tank (DST) waste as Toxic Substance Control Act (TSCA) PCB remediation waste based on a risk-based disposal approval option per Title 40 of the Code of Federal Regulations 761.61 (c). The agreement calls for ''Quantification of PCBs in DSTs, single shell tanks (SSTs), and incoming waste to ensure that the vitrification plant and other ancillary facilities PCB waste acceptance limits and the requirements of the anticipated risk-based disposal approval are met.'' Waste samples will be analyzed for PCBs to satisfy this requirement. This document describes the DQO process undertaken to assure appropriate data will be collected to support management of PCBs and is presented in a DQO format. The DQO process was implemented in accordance with the U.S. Environmental Protection Agency EPA QA/G4, Guidance for the Data Quality Objectives Process (EPA 1994) and the Data Quality Objectives for Sampling and Analyses, HNF-IP-0842, Rev. 1A, Vol. IV, Section 4.16 (Banning 1999).
Theoretical vibrations of carbon chains C3, C4, C5, C6, C7, C8, and C9
Kurtz, J.; Adamowicz, L. (Arizona Univ., Tucson)
1991-04-01
The MBPT(2) procedure with the 6-31G* basis set was used to study nearly linear carbon chains. The theoretical vibrational frequencies of the molecules C3 through C9 are presented and, for C3 through C6, compared to experimental stretching frequencies and those of their (C-13)/(C-12) isotopomers. Predictions for C7, C8, and C9 stretching frequencies are calculated by directly scaling the theoretical frequencies with factors derived from experimental-to-theoretical ratios known for the smaller molecules. 28 refs.
A radial basis function Galerkin method for inhomogeneous nonlocal diffusion
Lehoucq, Richard B.; Rowe, Stephen T.
2016-02-01
We introduce a discretization for a nonlocal diffusion problem using a localized basis of radial basis functions. The stiffness matrix entries are assembled by a special quadrature routine unique to the localized basis. Combining the quadrature method with the localized basis produces a well-conditioned, sparse, symmetric positive definite stiffness matrix. We demonstrate that both the continuum and discrete problems are well-posed and present numerical results for the convergence behavior of the radial basis function method. As a result, we explore approximating the solution to anisotropic differential equations by solving anisotropic nonlocal integral equations using the radial basis function method.
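The role of a localized radial basis can be illustrated with a compactly supported Wendland kernel, whose finite support makes the symmetric positive definite system matrix sparse. This is an interpolation sketch under those assumptions, not the paper's nonlocal Galerkin assembly or its special quadrature; the function names are ours:

```python
import numpy as np

def wendland_c2(r):
    """Compactly supported Wendland C2 kernel (positive definite in R^3)."""
    return np.where(r < 1.0, (1.0 - r)**4 * (4.0 * r + 1.0), 0.0)

def rbf_interpolate(x_nodes, f_vals, x_eval, support):
    """Interpolate with a localized radial basis in 1D: the compact support
    makes the (symmetric positive definite) system matrix sparse. A sketch
    of the localized-basis idea, not the paper's Galerkin method."""
    r = np.abs(x_nodes[:, None] - x_nodes[None, :]) / support
    A = wendland_c2(r)                    # sparse, SPD interpolation matrix
    coeffs = np.linalg.solve(A, f_vals)
    re = np.abs(x_eval[:, None] - x_nodes[None, :]) / support
    return wendland_c2(re) @ coeffs

x_nodes = np.linspace(0.0, 1.0, 21)
f_vals = np.sin(2 * np.pi * x_nodes)
y = rbf_interpolate(x_nodes, f_vals, x_nodes, support=0.3)
```

Shrinking the support increases the sparsity of the matrix at the cost of conditioning; the paper's combination of localized basis and tailored quadrature is what keeps the stiffness matrix both sparse and well-conditioned.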
Jet measurements at D0 using a KT algorithm
V.Daniel Elvira
2002-10-03
D0 has implemented and calibrated a k{perpendicular} jet algorithm for the first time in a p{bar p} collider. We present two results based on 1992-1996 data which were recently published: the subjet multiplicity in quark and gluon jets and the central inclusive jet cross section. The measured ratio between subjet multiplicities in gluon and quark jets is consistent with theoretical predictions and previous experimental values. NLO pQCD predictions of the k{perpendicular} inclusive jet cross section agree with the D0 measurement, although only marginally in the low p{sub T} range. We also present a preliminary measurement of thrust cross sections, which indicates the need to include higher than {alpha}{sub s}{sup 3} terms and resummation in the theoretical calculations.
A new paradigm for the molecular basis of rubber elasticity
Hanson, David E.; Barber, John L.
2015-02-19
The molecular basis for rubber elasticity is arguably the oldest and one of the most important questions in the field of polymer physics. The theoretical investigation of rubber elasticity began in earnest almost a century ago with the development of analytic thermodynamic models, based on simple, highly-symmetric configurations of so-called Gaussian chains, i.e. polymer chains that obey Markov statistics. Numerous theories have been proposed over the past 90 years based on the ansatz that the elastic force for individual network chains arises from the entropy change associated with the distribution of end-to-end distances of a free polymer chain. There are serious philosophical objections to this assumption and others, such as the assumption that all network nodes undergo affine motion and that all of the network chains have the same length. Recently, a new paradigm for elasticity in rubber networks has been proposed that is based on mechanisms that originate at the molecular level. Using conventional statistical mechanics analyses, quantum chemistry, and molecular dynamics simulations, the fundamental entropic and enthalpic chain extension forces for polyisoprene (natural rubber) have been determined, along with estimates for the basic force constants. Concurrently, the complex morphology of natural rubber networks (the joint probability density distributions that relate the chain end-to-end distance to its contour length) has also been captured in a numerical model. When molecular chain forces are merged with the network structure in this model, it is possible to study the mechanical response to tensile and compressive strains of a representative volume element of a polymer network. As strain is imposed on a network, pathways of connected taut chains, that completely span the network along strain axis, emerge. Although these chains represent only a few percent of the total, they account for nearly all of the elastic stress at high strain. Here we provide a brief
PARFUME Theory and Model basis Report
Darrell L. Knudson; Gregory K Miller; G.K. Miller; D.A. Petti; J.T. Maki; D.L. Knudson
2009-09-01
The success of gas reactors depends upon the safety and quality of the coated particle fuel. The fuel performance modeling code PARFUME simulates the mechanical, thermal and physico-chemical behavior of fuel particles during irradiation. This report documents the theory and material properties behind various capabilities of the code, which include: 1) various options for calculating CO production and fission product gas release, 2) an analytical solution for stresses in the coating layers that accounts for irradiation-induced creep and swelling of the pyrocarbon layers, 3) a thermal model that calculates a time-dependent temperature profile through a pebble bed sphere or a prismatic block core, as well as through the layers of each analyzed particle, 4) simulation of multi-dimensional particle behavior associated with cracking in the IPyC layer, partial debonding of the IPyC from the SiC, particle asphericity, and kernel migration (or amoeba effect), 5) two independent methods for determining particle failure probabilities, 6) a model for calculating release-to-birth (R/B) ratios of gaseous fission products that accounts for particle failures and uranium contamination in the fuel matrix, and 7) the evaluation of an accident condition, where a particle experiences a sudden change in temperature following a period of normal irradiation. The accident condition entails diffusion of fission products through the particle coating layers and through the fuel matrix to the coolant boundary. This document represents the initial version of the PARFUME Theory and Model Basis Report. More detailed descriptions will be provided in future revisions.
Theoretical studies of chemical reaction dynamics
Schatz, G.C.
1993-12-01
This collaborative program with the Theoretical Chemistry Group at Argonne involves theoretical studies of gas phase chemical reactions and related energy transfer and photodissociation processes. Many of the reactions studied are of direct relevance to combustion; others are selected because they provide important examples of special dynamical processes, or are of relevance to experimental measurements. Both classical trajectory and quantum reactive scattering methods are used for these studies, and the types of information determined range from thermal rate constants to state-to-state differential cross sections.
Theoretical aspects of light meson spectroscopy
Barnes, T.
1995-12-31
In this pedagogical review the authors discuss the theoretical understanding of light hadron spectroscopy in terms of QCD and the quark model. They begin with a summary of the known and surmised properties of QCD and confinement. Following this they review the nonrelativistic quark potential model for q{anti q} mesons and discuss the quarkonium spectrum and methods for identifying q{anti q} states. Finally, they review theoretical expectations for non-q{anti q} states (glueballs, hybrids and multiquark systems) and the status of experimental candidates for these states.
Parallel algorithms for unconstrained optimizations by multisplitting
He, Qing
1994-12-31
In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses existing sequential algorithms without any modification for parallelization. Some convergence and numerical results for this algorithm are presented. The experiments were performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.
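The multisplitting idea can be sketched for a convex quadratic: the variables are split into blocks, each block is minimized exactly with the others frozen (conceptually in parallel, Jacobi-style), and the updates are combined. The function and block names are our illustrative assumptions, not the paper's algorithm:

```python
import numpy as np

def multisplit_minimize(A, b, blocks, iters=200):
    """Jacobi-style multisplitting for min 0.5 x'Ax - b'x: each block of
    variables is minimized independently (in parallel, conceptually) with
    the remaining variables frozen, then the updates are combined.
    A sketch of the multisplitting idea for a convex quadratic."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(iters):
        x_new = x.copy()
        for blk in blocks:               # each block could run on its own node
            idx = np.array(blk)
            rest = np.setdiff1d(np.arange(n), idx)
            # exact minimization over this block, others held at x
            rhs = b[idx] - A[np.ix_(idx, rest)] @ x[rest]
            x_new[idx] = np.linalg.solve(A[np.ix_(idx, idx)], rhs)
        x = x_new
    return x
```

For a diagonally dominant quadratic this block iteration converges to the global minimizer; splitting "properly" means choosing blocks so that cross-block coupling is weak.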
Office of Nuclear Safety Basis and Facility Design
Office of Energy Efficiency and Renewable Energy (EERE)
The Office of Nuclear Safety Basis & Facility Design establishes safety basis and facility design requirements and expectations related to analysis and design of nuclear facilities to ensure protection of workers and the public from the hazards associated with nuclear operations.
Review and Approval of Nuclear Facility Safety Basis and Safety...
U.S. Department of Energy (DOE) all webpages (Extended Search)
DOE-STD-1104-2014, Review and Approval of Nuclear Facility Safety Basis and Safety Design Basis Documents. This Standard describes a framework and the criteria to be...
Authorization basis status report (miscellaneous TWRS facilities, tanks and components)
Stickney, R.G.
1998-04-29
This report presents the results of a systematic evaluation conducted to identify miscellaneous TWRS facilities, tanks and components with potential needed authorization basis upgrades. It provides the Authorization Basis upgrade plan for those miscellaneous TWRS facilities, tanks and components identified.
CRAD, Integrated Safety Basis and Engineering Design Review ...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Integrated Safety Basis and Engineering Design Review - August 20, 2014 (EA CRAD 31-4, Rev. 0) CRAD, Integrated Safety Basis and Engineering Design Review - August 20, 2014 (EA...
Convergent conductivity corrections to the Casimir force via exponential basis functions
Cui, Song; Soh, Yeng Chai
2010-12-15
A closed-form finite conductivity correction factor for the ideal Casimir force is proposed, based on exponential basis functions. Our method can facilitate experimental verifications of theories in the study of the Casimir force. A theoretical analysis is given to explain why our method is accurate at both large and small separation gaps. Numerical computations have been performed to confirm that our method is accurate in various experimental configurations. Our approach is widely applicable to various Casimir force interactions between metals and dielectrics. Our study can be extended to the study of the repulsive Casimir force as well.
Kinetically balanced Gaussian basis-set approach to relativistic Compton profiles of atoms
Jaiswal, Prerit; Shukla, Alok
2007-02-15
Atomic Compton profiles (CPs) are an important property that provides information about the momentum distribution of atomic electrons. For CPs of heavy atoms, relativistic effects are therefore expected to be important, warranting a relativistic treatment of the problem. In this paper, we present an efficient approach aimed at ab initio calculations of atomic CPs within a Dirac-Hartree-Fock (DHF) formalism, employing kinetically balanced Gaussian basis functions. The approach is used to compute the CPs of noble gases ranging from He to Rn, and the results have been compared to the experimental and other theoretical data, wherever possible. The influence of the quality of the basis set on the calculated CPs has also been systematically investigated.
Hamiltonian Light-front Field Theory Within an AdS/QCD Basis
Vary, J.P.; Honkanen, H.; Li, Jun; Maris, P.; Brodsky, S.J.; Harindranath, A.; de Teramond, G.F.; Sternberg, P.; Ng, E.G.; Yang, C.; /LBL, Berkeley
2009-12-16
Non-perturbative Hamiltonian light-front quantum field theory presents opportunities and challenges that bridge particle physics and nuclear physics. Fundamental theories, such as Quantum Chromodynamics (QCD) and Quantum Electrodynamics (QED) offer the promise of great predictive power spanning phenomena on all scales from the microscopic to cosmic scales, but new tools that do not rely exclusively on perturbation theory are required to make connection from one scale to the next. We outline recent theoretical and computational progress to build these bridges and provide illustrative results for nuclear structure and quantum field theory. As our framework we choose light-front gauge and a basis function representation with two-dimensional harmonic oscillator basis for transverse modes that corresponds with eigensolutions of the soft-wall AdS/QCD model obtained from light-front holography.
CRAD, Engineering Design and Safety Basis- December 22, 2009
Engineering Design and Safety Basis Inspection Criteria, Inspection Activities, and Lines of Inquiry (HSS CRAD 64-19, Rev. 0)
Nuclear Safety Basis Program Review Overview and Management Oversight
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
This SRP, Nuclear Safety Basis Program Review, consists of five volumes. It provides information to help strengthen the technical rigor of line management oversight and federal monitoring of DOE nuclear facilities. It provides a primer on the safety basis development
Adaptive protection algorithm and system
Hedrick, Paul (Pittsburgh, PA); Toms, Helen L. (Irwin, PA); Miller, Roger M. (Mars, PA)
2009-04-28
An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
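The ranking step described in this abstract can be sketched in a few lines. The radial feeder layout, load values, and the 25% margin rule below are hypothetical illustrations; the patented set-point logic itself is not detailed in the abstract.

```python
# Hypothetical sketch: trace power flow through a radial feeder, rank each
# breaker by the load it serves, and derive a trip set point from that rank.
# Feeder topology, loads, and the margin rule are illustrative assumptions.

def downstream_load(tree, node):
    """Sum the load (kW) served at and below this breaker node."""
    load = tree[node]["load"]
    for child in tree[node]["children"]:
        load += downstream_load(tree, child)
    return load

def assign_trip_points(tree, margin=1.25):
    """Rank breakers by served load; trip point = served load * margin."""
    return {n: downstream_load(tree, n) * margin for n in tree}

feeder = {
    "main": {"load": 0,  "children": ["lat1", "lat2"]},
    "lat1": {"load": 40, "children": []},
    "lat2": {"load": 60, "children": []},
}
trip = assign_trip_points(feeder)
# main serves 100 kW -> trips at 125 kW; laterals at 50 kW and 75 kW
```

Because set points scale with each breaker's position in the power-flow tree, downstream breakers trip before upstream ones, which is the coordination property an adaptive scheme must preserve as the system topology changes.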
Operando Raman and Theoretical Vibration Spectroscopy of Non...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Operando Raman and Theoretical Vibration Spectroscopy of Non-PGM Catalysts Operando Raman and Theoretical Vibration Spectroscopy of Non-PGM Catalysts Presentation about ...
ITP Steel: Theoretical Minimum Energies to Produce Steel for...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Theoretical Minimum Energies to Produce Steel for Selected Conditions, March 2000 ITP Steel: Theoretical Minimum Energies to Produce Steel for Selected Conditions, March 2000 ...
Toward Catalyst Design from Theoretical Calculations (464th Brookhaven...
Office of Scientific and Technical Information (OSTI)
Toward Catalyst Design from Theoretical Calculations (464th Brookhaven Lecture) Citation Details In-Document Search Title: Toward Catalyst Design from Theoretical Calculations...
Research in theoretical nuclear and neutrino physics. Final report...
Office of Scientific and Technical Information (OSTI)
Technical Report: Research in theoretical nuclear and neutrino physics. Final report Citation Details In-Document Search Title: Research in theoretical nuclear and neutrino physics...
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2010-01-01
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, and limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL's Hanford External Dosimetry Program (HEDP), which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC), which is chartered and chaired by DOE-RL and serves as a means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since its inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL's Electronic Records & Information Capture Architecture (ERICA) database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions, typically involving significant changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. Maintenance and distribution of controlled hard copies of the
Theoretical description of Coulomb balls: Fluid phase
Wrighton, J.; Dufty, J. W.; Kaehlert, H.; Bonitz, M.
2009-12-15
A theoretical description for the radial density profile of a finite number of identical charged particles confined in a harmonic trap is developed for application over a wide range of Coulomb coupling (or, equivalently, temperatures) and particle numbers. A simple mean-field approximation neglecting correlations yields a density profile which is monotonically decreasing with radius for all temperatures, in contrast to molecular dynamics simulations and experiments showing shell structure at lower temperatures. A more complete theoretical description including charge correlations is developed here by an extension of the hypernetted chain approximation, developed for bulk fluids, to the confined charges. The results reproduce all of the qualitative features observed in molecular dynamics simulations and experiments. These predictions are then tested quantitatively by comparison with benchmark Monte Carlo simulations. Quantitative accuracy of the theory is obtained by correcting the hypernetted chain approximation with a representation for the associated bridge functions.
A new paradigm for the molecular basis of rubber elasticity
Hanson, David E.; Barber, John L.
2015-02-19
The molecular basis for rubber elasticity is arguably the oldest and one of the most important questions in the field of polymer physics. The theoretical investigation of rubber elasticity began in earnest almost a century ago with the development of analytic thermodynamic models, based on simple, highly symmetric configurations of so-called Gaussian chains, i.e. polymer chains that obey Markov statistics. Numerous theories have been proposed over the past 90 years based on the ansatz that the elastic force for individual network chains arises from the entropy change associated with the distribution of end-to-end distances of a free polymer chain. There are serious philosophical objections to this assumption and others, such as the assumptions that all network nodes undergo affine motion and that all of the network chains have the same length. Recently, a new paradigm for elasticity in rubber networks has been proposed that is based on mechanisms that originate at the molecular level. Using conventional statistical mechanics analyses, quantum chemistry, and molecular dynamics simulations, the fundamental entropic and enthalpic chain extension forces for polyisoprene (natural rubber) have been determined, along with estimates for the basic force constants. Concurrently, the complex morphology of natural rubber networks (the joint probability density distributions that relate the chain end-to-end distance to its contour length) has also been captured in a numerical model. When molecular chain forces are merged with the network structure in this model, it is possible to study the mechanical response to tensile and compressive strains of a representative volume element of a polymer network. As strain is imposed on a network, pathways of connected taut chains that completely span the network along the strain axis emerge. Although these chains represent only a few percent of the total, they account for nearly all of the elastic stress at high strain. Here we provide
Time Variant Floating Mean Counting Algorithm
Energy Science and Technology Software Center (OSTI)
1999-06-03
This software was written to test a time-variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company, and a provisional patent has been filed on it. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware, and was used to verify that the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.
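The patented WSRC algorithm is not described in this record, but the general idea of a floating-mean count-rate meter can be sketched: average over a window that adapts so it always holds roughly a fixed number of counts, keeping the relative statistical error constant as the rate changes. The `target_counts` parameter and adaptive-window rule below are illustrative assumptions, not the patented method.

```python
# Generic floating-mean count-rate sketch (illustrative; not the WSRC algorithm).

def floating_mean_rate(counts, dt, target_counts=100):
    """Estimate count rate from per-interval counts with an adaptive window.

    counts: counts observed in each interval of length dt (seconds).
    target_counts: approximate number of counts held in the averaging window,
    so the window shortens automatically when the rate rises.
    """
    rates = []
    mean = counts[0] / dt if counts else 0.0
    for c in counts:
        inst = c / dt
        window = target_counts / max(mean, 1e-9)  # seconds of history to keep
        alpha = min(1.0, dt / window)             # EWMA weight this interval
        mean += alpha * (inst - mean)
        rates.append(mean)
    return rates

# A step from 10 cps to 1000 cps: the estimate tracks the jump quickly because
# the averaging window shrinks as the rate rises.
r = floating_mean_rate([10] * 50 + [1000] * 50, dt=1.0)
```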
ORISE: The Medical Basis for Radiation-Accident Preparedness: Medical
U.S. Department of Energy (DOE) all webpages (Extended Search)
Management (Published by REAC/TS) The Medical Basis for Radiation-Accident Preparedness: Medical Management Proceedings of the Fifth International REAC/TS Symposium on the Medical Basis for Radiation-Accident Preparedness and the Biodosimetry Workshop As part of its mission to provide continuing education for personnel responsible for treating radiation injuries, REAC/TS hosted the Fifth International REAC/TS Symposium on the Medical Basis for Radiation-Accident Preparedness and
Volume-preserving algorithm for secular relativistic dynamics of charged particles
Zhang, Ruili; Liu, Jian; Wang, Yulei; He, Yang; Qin, Hong; Sun, Yajuan
2015-04-15
Secular dynamics of relativistic charged particles has theoretical significance and a wide range of applications. However, conventional algorithms are not applicable to this problem due to the coherent accumulation of numerical errors. To overcome this difficulty, we develop a volume-preserving algorithm (VPA) with long-term accuracy and conservativeness via a systematic splitting method. Applied to the simulation of runaway electrons over a time span of more than 10 orders of magnitude, the VPA generates accurate results and enables the discovery of new physics for secular runaway dynamics.
Protocol for Enhanced Evaluations of Beyond Design Basis Events...
Protocol for Enhanced Evaluations of Beyond Design Basis Events Supporting Implementation of Operating Experience Report 2013-01 Protocol for Enhanced Evaluations of Beyond Design ...
Enterprise Assessments Review of the Delegation of Safety Basis...
Office of Environmental Management (EM)
Review of the Delegation of Safety Basis Approval Authority for Hazard Category 1, 2, and 3 Nuclear Facilities - April 2016 Enterprise Assessments Review of the Delegation of ...
CRAD, Review of Safety Basis Development- January 31, 2013
Review of Safety Basis Development for the Savannah River Site Salt Waste Processing Facility - Inspection Criteria, Approach, and Lines of Inquiry (HSS CRAD 45-57, Rev. 0)
Assessing Beyond Design Basis Seismic Events and Implications...
Office of Environmental Management (EM)
Defense Nuclear Facilities Safety Board Topics Covered: Department of Energy Approach to Natural Phenomena Hazards Analysis and Design (Seismic) Design Basis and Beyond Design...
Appraisal of the Uranium Processing Facility Safety Basis Preliminary...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Independent Oversight Appraisal of the Uranium Processing Facility Safety Basis ... Evaluation Study HEUMF Highly Enriched Uranium Materials Facility HSS Office of Health, ...
Structural and Functional Basis for Inhibition of Erythrocyte...
Office of Scientific and Technical Information (OSTI)
Target Plasmodium falciparum EBA-175 Citation Details In-Document Search Title: Structural and Functional Basis for Inhibition of Erythrocyte Invasion by Antibodies that Target ...
SRS FTF Section 3116 Basis for Determination | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
SRS FTF Section 3116 Basis for Determination: Basis for Section 3116 Determination for Closure of F-Tank Farm at the Savannah River Site. In accordance with NDAA Section 3116, certain waste from reprocessing of spent nuclear fuel is not high-level waste if the Secretary of Energy, in consultation with the NRC, determines that the criteria in NDAA Section 3116(a) are met. This FTF 3116 Basis Document shows that those criteria are satisfied, to support a
Preparation of Safety Basis Documents for Transuranic (TRU) Waste...
Office of Environmental Management (EM)
Basis Documents for Transuranic (TRU) Waste Facilities U.S. Department of Energy ... for transuranic (TRU) waste facilities in the U.S. Department of Energy (DOE) Complex. ...
Technical Planning Basis - DOE Directives, Delegations, and Requiremen...
U.S. Department of Energy (DOE) all webpages (Extended Search)
2, Technical Planning Basis, by David Freshwater. Functional areas: Defense Nuclear Facility Safety and Health Requirements, Safety and Security. The Guide assists DOE/NNSA field
Structural and Functional Basis for Broad-spectrum Neutralization...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Structural and Functional Basis for Broad-spectrum Neutralization of Avian and Human ... globally that have little or no immunity, represents a grave threat to human health. ...
Theoretical, Methodological, and Empirical Approaches to Cost Savings: A Compendium
M Weimar
1998-12-10
This publication summarizes and contains the original documentation for understanding why the U.S. Department of Energy's (DOE's) privatization approach provides cost savings and the different approaches that could be used in calculating cost savings for the Tank Waste Remediation System (TWRS) Phase I contract. The initial section summarizes the approaches in the different papers. The appendices are the individual source papers, which have been reviewed by individuals outside of the Pacific Northwest National Laboratory and the TWRS Program. Appendix A provides a theoretical basis for and estimate of the level of savings that can be obtained from a fixed-price contract with performance risk maintained by the contractor. Appendix B provides the methodology for determining cost savings when comparing a fixed-price contractor with a Management and Operations (M&O) contractor (cost-plus contractor). Appendix C summarizes the economic model used to calculate cost savings and provides hypothetical output from preliminary calculations. Appendix D provides the summary of the approach for the DOE-Richland Operations Office (RL) estimate of the M&O contractor to perform the same work as BNFL Inc. Appendix E contains information on cost growth and per-metric-ton-of-glass costs for high-level waste at two other DOE sites, West Valley and Savannah River. Appendix F addresses a risk allocation analysis of the BNFL proposal that indicates that the current approach is still better than the alternative.
The double-beta decay: Theoretical challenges
Horoi, Mihai
2012-11-20
Neutrinoless double beta decay is a unique process that could reveal physics beyond the Standard Model of particle physics: if observed, it would prove that neutrinos are Majorana particles. In addition, it could provide information regarding the neutrino masses and their hierarchy, provided that reliable nuclear matrix elements can be obtained. Two-neutrino double beta decay is an associated process that is allowed by the Standard Model, and it has been observed in about ten nuclei. The present contribution gives a brief review of the theoretical challenges associated with these two processes, emphasizing the reliable calculation of the associated nuclear matrix elements.
Final Technical Report "Multiscale Simulation Algorithms for Biochemical Systems"
Petzold, Linda R.
2012-10-25
Biochemical systems are inherently multiscale and stochastic. In microscopic systems formed by living cells, the small numbers of reactant molecules can result in dynamical behavior that is discrete and stochastic rather than continuous and deterministic. An analysis tool that respects these dynamical characteristics is the stochastic simulation algorithm (SSA, Gillespie, 1976), a numerical simulation procedure that is essentially exact for chemical systems that are spatially homogeneous or well stirred. Despite recent improvements, as a procedure that simulates every reaction event, the SSA is necessarily inefficient for most realistic problems. There are two main reasons for this, both arising from the multiscale nature of the underlying problem: (1) stiffness, i.e. the presence of multiple timescales, the fastest of which are stable; and (2) the need to include in the simulation both species that are present in relatively small quantities and should be modeled by a discrete stochastic process, and species that are present in larger quantities and are more efficiently modeled by a deterministic differential equation (or at some scale in between). This project has focused on the development of fast and adaptive algorithms, and the fundamental theory upon which they must be based, for the multiscale simulation of biochemical systems. Areas addressed by this project include: (1) theoretical and practical foundations for accelerated discrete stochastic simulation (tau-leaping); (2) dealing with stiffness (fast reactions) in an efficient and well-justified manner in discrete stochastic simulation; (3) development of adaptive multiscale algorithms for spatially homogeneous discrete stochastic simulation; and (4) development of high-performance SSA algorithms.
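The exact procedure the abstract builds on, Gillespie's direct-method SSA, can be shown in miniature. The sketch below simulates the single-channel isomerization A → B; the species count, rate constant, and seed are arbitrary choices for illustration.

```python
# Minimal Gillespie direct-method SSA for the reaction A -> B with rate c.
import random

def ssa_decay(n_a, c, t_end, rng=random.Random(1)):
    """Simulate A -> B exactly, one reaction event at a time.

    Returns (event times, population of A after each event).
    """
    t, times, pops = 0.0, [0.0], [n_a]
    while n_a > 0:
        a0 = c * n_a                  # total propensity of the single channel
        t += rng.expovariate(a0)      # exponentially distributed waiting time
        if t > t_end:
            break
        n_a -= 1                      # fire the reaction: one A becomes B
        times.append(t)
        pops.append(n_a)
    return times, pops

times, pops = ssa_decay(n_a=1000, c=0.5, t_end=4.0)
# The ensemble mean follows 1000 * exp(-0.5 t); each run fluctuates around it,
# which is exactly the discreteness and stochasticity the SSA preserves.
```

With several reaction channels the same loop draws the waiting time from the summed propensity and then selects which channel fires in proportion to its share, which is where the per-event cost, and hence the need for tau-leaping, comes from.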
THEORETICAL STUDIES OF HADRONS AND NUCLEI
STEPHEN R COTANCH
2007-03-20
This report details final research results obtained during the 9-year period from June 1, 1997 through July 15, 2006. The research project, entitled "Theoretical Studies of Hadrons and Nuclei", was supported by grant DE-FG02-97ER41048 between North Carolina State University [NCSU] and the U.S. Department of Energy [DOE]. In compliance with grant requirements, the Principal Investigator [PI], Professor Stephen R. Cotanch, conducted a theoretical research program investigating hadrons and nuclei and devoted to this program 50% of his time during the academic year and 100% of his time in the summer. Highlights of new, significant research results are briefly summarized in the following three sections corresponding to the respective sub-programs of this project (hadron structure, probing hadrons and hadron systems electromagnetically, and many-body studies). Recent progress is also discussed in a recent renewal/supplemental grant proposal submitted to DOE. Finally, full detailed descriptions of completed work can be found in the publications listed at the end of this report.
Mathematical challenges from theoretical/computational chemistry
1995-12-31
The committee believes that this report has relevance and potentially valuable suggestions for a wide range of readers. Target audiences include: graduate departments in the mathematical and chemical sciences; federal and private agencies that fund research in the mathematical and chemical sciences; selected industrial and government research and development laboratories; developers of software and hardware for computational chemistry; and selected individual researchers. Chapter 2 of this report covers some history of computational chemistry for the nonspecialist, while Chapter 3 illustrates the fruits of some past successful cross-fertilization between mathematical scientists and computational/theoretical chemists. In Chapter 4 the committee has assembled a representative, but not exhaustive, survey of research opportunities. Most of these are descriptions of important open problems in computational/theoretical chemistry that could gain much from the efforts of innovative mathematical scientists, written so as to be accessible introductions to the nonspecialist. Chapter 5 is an assessment, necessarily subjective, of cultural differences that must be overcome if collaborative work is to be encouraged between the mathematical and the chemical communities. Finally, the report ends with a brief list of conclusions and recommendations that, if followed, could promote accelerated progress at this interface. Recognizing that bothersome language issues can inhibit prospects for collaborative research at the interface between distinctive disciplines, the committee has attempted throughout to maintain an accessible style, in part by using illustrative boxes, and has included at the end of the report a glossary of technical terms that may be familiar to only a subset of the target audiences listed above.
Theoretical efficiency limits for thermoradiative energy conversion
Strandberg, Rune
2015-02-07
A new method to produce electricity from heat, called thermoradiative energy conversion, is analyzed. The method is based on sustaining a difference in the chemical potential for electron populations above and below an energy gap and letting this difference drive a current through an electric circuit. The difference in chemical potential originates from an imbalance in the excitation and de-excitation of electrons across the energy gap. The method has similarities to thermophotovoltaics and conventional photovoltaics. While photovoltaic cells absorb thermal radiation from a body with a higher temperature than the cell itself, thermoradiative cells are hot during operation and emit a net outflow of photons to colder surroundings. A thermoradiative cell with an energy gap of 0.25 eV at a temperature of 500 K in surroundings at 300 K is found to have a theoretical efficiency limit of 33.2%. For a high-temperature thermoradiative cell with an energy gap of 0.4 eV, a theoretical efficiency close to 50% is found when the cell produces 1000 W/m^2, has a temperature of 1000 K, and is placed in surroundings with a temperature of 300 K. Some aspects related to the practical implementation of the concept are discussed and some challenges are addressed. It is, for example, obvious that there is an upper boundary for the temperature under which solid-state devices can work properly over time. No conclusions are drawn with regard to such practical boundaries, because the work is aimed at establishing upper limits for ideal thermoradiative devices.
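The detailed-balance bookkeeping behind such limits can be sketched numerically: the photon flux above the gap follows the generalized Planck law with a chemical potential, and the net flux out of the hot cell minus the flux absorbed from the surroundings sets the current. The gap, temperatures, and chemical potential below are taken from the abstract's first example; the integration grid and upper energy cutoff are assumptions, and the sketch does not attempt to reproduce the quoted 33.2% efficiency.

```python
# Generalized-Planck photon flux above an energy gap, integrated numerically.
import math

K_B = 1.380649e-23    # Boltzmann constant, J/K
H   = 6.62607015e-34  # Planck constant, J s
C   = 2.99792458e8    # speed of light, m/s
Q   = 1.602176634e-19 # elementary charge, J/eV

def photon_flux(e_gap_ev, temp, mu_ev, n_steps=20000, e_max_ev=5.0):
    """Photons per m^2 per s emitted above e_gap by a body at temp with
    chemical potential mu (midpoint-rule integration; cutoff is an assumption)."""
    pref = 2 * math.pi / (H**3 * C**2)
    lo, hi = e_gap_ev * Q, e_max_ev * Q
    de = (hi - lo) / n_steps
    total = 0.0
    for i in range(n_steps):
        e = lo + (i + 0.5) * de
        total += e * e / (math.exp((e - mu_ev * Q) / (K_B * temp)) - 1) * de
    return pref * total

# Hot cell at 500 K with a (negative) chemical potential, surroundings at 300 K:
# the net outflow of above-gap photons is what drives the external current.
net = photon_flux(0.25, 500, -0.05) - photon_flux(0.25, 300, 0.0)
```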
Microsoft Word - Final_SRS_FTF_WD_Basis_March_2012
U.S. Department of Energy (DOE) all webpages (Extended Search)
Basis for Section 3116 Determination for Closure of F-Tank Farm at the Savannah River Site, DOE/SRS-WD-2012-001, Revision 0, March 2012 (Revision 0: Initial Issue).
Ecological Research Division Theoretical Ecology Program. [Contains abstracts
Not Available
1990-10-01
This report presents the goals of the Theoretical Ecology Program and abstracts of research in progress. Abstracts cover both theoretical research that began as part of the terrestrial ecology core program and new projects funded by the theoretical program begun in 1988. Projects have been clustered into four major categories: Ecosystem dynamics; landscape/scaling dynamics; population dynamics; and experiment/sample design.
Solar Position Algorithm (SPA) - Energy Innovation Portal
U.S. Department of Energy (DOE) all webpages (Extended Search)
Thermal Solar Thermal Energy Analysis Energy Analysis Find More Like This Return to Search Solar Position Algorithm (SPA) National Renewable Energy Laboratory Contact NREL About This Technology Technology Marketing Summary This algorithm calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees based on the date, time, and location on Earth. (Reference: Reda, I.; Andreas, A., Solar Position Algorithm for Solar Radiation
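For orientation, a solar zenith angle can be computed from textbook declination and hour-angle formulas, as sketched below. This low-accuracy approximation (good to roughly a degree) is not the NREL SPA itself, which uses much longer series expansions to reach its quoted ±0.0003 degree uncertainty.

```python
# Low-accuracy solar zenith sketch (Cooper declination + hour angle).
# Illustrative only; not the NREL Solar Position Algorithm.
import math

def solar_zenith_deg(day_of_year, solar_hour, lat_deg):
    """Approximate solar zenith angle (degrees) at local solar time."""
    # Cooper's declination approximation, in radians.
    decl = math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    hour_angle = math.radians(15.0 * (solar_hour - 12.0))  # 15 degrees/hour
    lat = math.radians(lat_deg)
    cos_z = (math.sin(lat) * math.sin(decl)
             + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    return math.degrees(math.acos(max(-1.0, min(1.0, cos_z))))

# Solar noon at the equator near an equinox (~day 81): sun nearly overhead.
z_noon = solar_zenith_deg(81, 12.0, 0.0)
```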
Student's algorithm solves real-world problem
U.S. Department of Energy (DOE) all webpages (Extended Search)
Student's algorithm solves real-world problem. Students learn how to use powerful computers to analyze, model, and solve real-world problems. April 3, 2012. Jordon Medlock of Albuquerque's Manzano High School won the 2012 Lab-sponsored Supercomputing Challenge by creating a computer algorithm that automates the process of
Final Report: Sublinear Algorithms for In-situ and In-transit Data Analysis at Exascale.
Bennett, Janine Camille; Pinar, Ali; Seshadhri, C.; Thompson, David; Salloum, Maher; Bhagatwala, Ankit; Chen, Jacqueline H.
2015-09-01
Post-Moore's law scaling is creating a disruptive shift in simulation workflows, as saving the entirety of raw data to persistent storage becomes expensive. We are moving away from a post-process centric data analysis paradigm towards a concurrent analysis framework, in which raw simulation data is processed as it is computed. Algorithms must adapt to machines with extreme concurrency, low communication bandwidth, and high memory latency, while operating within the time constraints prescribed by the simulation. Furthermore, input parameters are often data dependent and cannot always be prescribed. The study of sublinear algorithms is a recent development in theoretical computer science and discrete mathematics that has significant potential to provide solutions for these challenges. The approaches of sublinear algorithms address the fundamental mathematical problem of understanding global features of a data set using limited resources. These theoretical ideas align with practical challenges of in-situ and in-transit computation where vast amounts of data must be processed under severe communication and memory constraints. This report details key advancements made in applying sublinear algorithms in-situ to identify features of interest and to enable adaptive workflows over the course of a three year LDRD. Prior to this LDRD, there was no precedent in applying sublinear techniques to large-scale, physics based simulations. This project has definitively demonstrated their efficacy at mitigating high performance computing challenges and highlighted the rich potential for follow-on research opportunities in this space.
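One of the simplest sublinear building blocks of the kind the report applies is reservoir sampling: keep a fixed-size uniform sample of a data stream and answer global queries (means, quantiles) from it in O(k) memory. The stream, sample size, and mean-estimation query below are illustrative, not the LDRD's specific estimators.

```python
# Reservoir sampling: a uniform k-sample of an arbitrarily long stream,
# using O(k) memory regardless of stream length.
import random

def reservoir_sample(stream, k, rng=random.Random(0)):
    """Return a uniform random sample of k items from the stream."""
    sample = []
    for i, x in enumerate(stream):
        if i < k:
            sample.append(x)          # fill the reservoir first
        else:
            j = rng.randrange(i + 1)  # keep x with probability k/(i+1)
            if j < k:
                sample[j] = x
    return sample

# Estimate the mean of a million simulation values from a 1000-element sample,
# as a stand-in for answering a global query in-situ under a memory budget.
data = (float(i % 100) for i in range(1_000_000))  # true mean is 49.5
s = reservoir_sample(data, 1000)
est = sum(s) / len(s)
```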
Generation of attributes for learning algorithms
Hu, Yuh-Jyh; Kibler, D.
1996-12-31
Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step before standard machine learning algorithms are applied. We present results which demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron, and backpropagation.
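The idea of scoring a constructed attribute can be sketched with plain information gain, the kind of measure GALA's relative gain refines. The toy data, the conjunction operator, and the use of unnormalized gain are illustrative assumptions, not GALA itself.

```python
# Scoring a constructed boolean attribute by information gain.
import math

def entropy(labels):
    """Shannon entropy (bits) of a label list."""
    n = len(labels)
    return -sum((labels.count(c) / n) * math.log2(labels.count(c) / n)
                for c in set(labels))

def info_gain(feature, labels):
    """Entropy reduction from splitting the labels on a boolean feature."""
    n = len(labels)
    gain = entropy(labels)
    for v in (False, True):
        subset = [y for f, y in zip(feature, labels) if f == v]
        if subset:
            gain -= len(subset) / n * entropy(subset)
    return gain

x1 = [True, True, False, False]
x2 = [True, False, True, False]
y  = ["+",  "-",  "-",  "-"]
both = [a and b for a, b in zip(x1, x2)]  # constructed attribute: x1 AND x2
# The conjunction separates y perfectly, so it scores higher than x1 alone,
# which is the signal a constructive-induction preprocessor looks for.
```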
Java implementation of Class Association Rule algorithms
Energy Science and Technology Software Center (OSTI)
2007-08-30
Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. NETCAR is a novel algorithm developed by Makio Tamura. The algorithm is discussed in a paper (UCRL-JRNL-232466-DRAFT) to be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix and the phenotype profile is represented by a binary vector. The present application of this software is in genome analysis; however, it could be applied more generally.
Hybrid Discrete - Continuum Algorithms for Stochastic Reaction...
Office of Scientific and Technical Information (OSTI)
for Stochastic Reaction Networks. Citation Details In-Document Search Title: Hybrid Discrete - Continuum Algorithms for Stochastic Reaction Networks. Abstract not provided. ...
CRAD, Review of Safety Basis Development- October 11, 2012
Review of Safety Basis Development for the Y-12 National Security Complex Uranium Processing Facility Inspection Criteria, Approach, and Lines of Inquiry (HSS CRAD 45-55, Rev. 0)
General Engineer/Physical Scientist (Safety Basis Engineer/Scientist)
A successful candidate in this position will serve as an authority in the safety basis functional area. The incumbent is responsible for managing, coordinating, and authorizing work in the context...
Structural basis for ubiquitin-mediated antiviral signal activation...
Office of Scientific and Technical Information (OSTI)
Title: Structural basis for ubiquitin-mediated antiviral signal activation by RIG-I. Authors: Peisley, Alys; Wu, Bin; Xu, Hui; Chen, Zhijian J.; Hur, Sun; ...
Basis for Section 3116 Determination for Salt Waste Disposal...
Office of Environmental Management (EM)
WD-2005-001, January 2006: Basis for Section 3116 Determination for Salt Waste Disposal at ... 4.0 THE WASTE DOES NOT REQUIRE PERMANENT ISOLATION IN A ...
Structural basis for the antibody neutralization of Herpes simplex...
Office of Scientific and Technical Information (OSTI)
of Herpes simplex virus Citation Details In-Document Search Title: Structural basis for the antibody neutralization of Herpes simplex virus The gD-E317-Fab complex ...
Advanced Test Reactor Design Basis Reconstitution Project Issue Resolution Process
Steven D. Winter; Gregg L. Sharp; William E. Kohn; Richard T. McCracken
2007-05-01
The Advanced Test Reactor (ATR) Design Basis Reconstitution Program (DBRP) is a structured assessment and reconstitution of the design basis for the ATR. The DBRP is designed to establish and document the ties between the Documented Safety Analysis (DSA), the design basis, and actual system configurations. Where the DBRP assessment team cannot establish a link between these three major elements, a gap is identified. Resolutions to identified gaps represent configuration management and design basis recovery actions. The proposed paper discusses the process being applied to define, evaluate, report, and address gaps that are identified through the ATR DBRP. Design basis verification may be performed or required for a nuclear facility safety basis on various levels. The process is applicable to large-scale design basis reconstitution efforts, such as the ATR DBRP, or may be scaled for application on smaller projects. The concepts are applicable to long-term maintenance of a nuclear facility safety basis and recovery of degraded safety basis components. The ATR DBRP assessment team has observed numerous examples where a clear and accurate link between the DSA, design basis, and actual system configuration was not immediately identifiable in supporting documentation. As a result, a systematic approach to effectively document, prioritize, and evaluate each observation is required. The DBRP issue resolution process provides direction for consistent identification, documentation, categorization, and evaluation, and, where applicable, entry into the determination process for a potential inadequacy in the safety analysis (PISA). The issue resolution process is a key element for execution of the DBRP. Application of the process facilitates collection, assessment, and reporting of issues identified by the DBRP team. Application of the process results in an organized database of safety basis gaps and prioritized corrective action planning and resolution. The DBRP team follows the ATR
Technical Basis Document for PFP Area Monitoring Dosimetry Program
COOPER, J.R.
2000-04-17
This document describes the phantom dosimetry used for the PFP Area Monitoring program and establishes the basis for the Plutonium Finishing Plant's (PFP) area monitoring dosimetry program in accordance with the following requirements: Title 10, Code of Federal Regulations (CFR), part 835, ''Occupational Radiation Protection'' Part 835.403; Hanford Site Radiological Control Manual (HSRCM-1), Part 514; HNF-PRO-382, Area Dosimetry Program; and PNL-MA-842, Hanford External Dosimetry Technical Basis Manual.
Technical Cost Modeling - Life Cycle Analysis Basis for Program Focus |
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Department of Energy. Presentation from the DOE Hydrogen and Fuel Cells Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting (lm001_das_2012_o.pdf, 547.05 KB).
Technical Cost Modeling - Life Cycle Analysis Basis for Program Focus |
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Department of Energy. Presentation from the DOE Vehicle Technologies and Hydrogen Programs Annual Merit Review and Peer Evaluation Meeting, June 7-11, 2010, Washington, D.C. (lm001_das_2010_o.pdf, 421.39 KB).
Theoretical crystallography with the Advanced Visualization System
Younkin, C.R.; Thornton, E.N.; Nicholas, J.B.; Jones, D.R.; Hess, A.C.
1993-05-01
Space is an Application Visualization System (AVS) graphics module designed for crystallographic and molecular research. The program can handle molecules, two-dimensional periodic systems, and three-dimensional periodic systems, all referred to in the paper as models. Using several methods, the user can select atoms, groups of atoms, or entire molecules. Selections can be moved, copied, deleted, and merged. An important feature of Space is the crystallography component. The program allows the user to generate the unit cell from the asymmetric unit, manipulate the unit cell, and replicate it in three dimensions. Space includes the Buerger reduction algorithm which determines the asymmetric unit and the space group of highest symmetry of an input unit cell. Space also allows the user to display planes in the lattice based on Miller indices, and to cleave the crystal to expose the surface. The user can display important precalculated volumetric data in Space, such as electron densities and electrostatic surfaces. With a variety of methods, Space can compute the electrostatic potential of any chemical system based on input point charges.
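The unit-cell generation step Space performs can be sketched generically: apply each space-group symmetry operation (a rotation matrix plus a translation vector) to the fractional coordinates of the asymmetric unit and wrap the results back into the cell. The P2_1-style operations and the single carbon atom below are illustrative assumptions, not data from the paper.

```python
def apply_ops(asym_unit, ops, tol=1e-6):
    """Apply (rotation, translation) ops to fractional coords, wrap into [0,1)."""
    cell = []
    for atom, (x, y, z) in asym_unit:
        for rot, trans in ops:
            p = [sum(rot[i][j] * c for j, c in enumerate((x, y, z))) + trans[i]
                 for i in range(3)]
            p = tuple(c % 1.0 for c in p)          # wrap into the unit cell
            if not any(a == atom and all(abs(a_c - b_c) < tol
                       for a_c, b_c in zip(q, p)) for a, q in cell):
                cell.append((atom, p))             # skip duplicate positions
    return cell

# Illustrative P2_1-like operations: identity and a two-fold screw along b.
I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]
S = [[-1, 0, 0], [0, 1, 0], [0, 0, -1]]
ops = [(I, (0, 0, 0)), (S, (0, 0.5, 0))]

atoms = apply_ops([("C", (0.1, 0.2, 0.3))], ops)
```

A real crystallography module would read the operations from space-group tables rather than hard-coding them; the sketch only shows the replication step.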
Computing single step operators of logic programming in radial basis function neural networks
Hamadneh, Nawaf; Sathasivam, Saratha; Choon, Ong Hong
2014-07-10
Logic programming is the process that leads from an original formulation of a computing problem to executable programs. A normal logic program consists of a finite set of clauses. A valuation I of a logic program is a mapping from ground atoms to false or true. The single step operator of any logic program is defined as a function T_P: I → I. Logic programming is well suited to building artificial intelligence systems. In this study, we established a new technique to compute the single step operators of logic programming in radial basis function neural networks. To do that, we proposed a new technique to generate the training data sets of single step operators. The training data sets are used to build the neural networks. We used recurrent radial basis function neural networks to reach the steady state (the fixed point of the operators). To improve the performance of the neural networks, we used the particle swarm optimization algorithm to train the networks.
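The single-step operator itself is simple to state concretely. The sketch below computes T_P for a propositional normal logic program and iterates it to a fixed point, the steady state the recurrent network is trained to reach; the clause encoding and the example program are our own illustrative choices, not from the paper.

```python
def t_p(program, interpretation):
    """T_P(I): heads of clauses whose body is true under interpretation I."""
    return {head
            for head, pos, neg in program
            if set(pos) <= interpretation and not set(neg) & interpretation}

def fixed_point(program, start=frozenset(), limit=100):
    """Iterate T_P from `start` until a fixed point (steady state) is reached."""
    current = set(start)
    for _ in range(limit):
        nxt = t_p(program, current)
        if nxt == current:
            return current
        current = nxt
    raise RuntimeError("no fixed point within limit")

# Example program:  p.   q :- p.   r :- q, not s.
# Clause format: (head, positive body atoms, negated body atoms).
program = [("p", (), ()), ("q", ("p",), ()), ("r", ("q",), ("s",))]
```

Iterating from the empty interpretation, this program reaches the fixed point {p, q, r}, which is exactly the steady state the recurrent network approximates.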
Theoretical priors on modified growth parametrisations
Song, Yong-Seon; Hollenstein, Lukas; Caldera-Cabral, Gabriela; Koyama, Kazuya
2010-04-01
Next generation surveys will observe the large-scale structure of the Universe with unprecedented accuracy. This will enable us to test the relationships between matter over-densities, the curvature perturbation and the Newtonian potential. Any large-distance modification of gravity or exotic nature of dark energy modifies these relationships as compared to those predicted in the standard smooth dark energy model based on General Relativity. In the linear theory of structure growth, such modifications are often parameterised by two functions of space and time that enter the relation of the curvature perturbation to, first, the matter over-density and, second, the Newtonian potential. We investigate the predictions for these functions in Brans-Dicke theory, clustering dark energy models and interacting dark energy models. We find that each theory has a distinct path in the parameter space of modified growth. Understanding these theoretical priors on the parameterisations of modified growth is essential to reveal the nature of cosmic acceleration with the help of upcoming observations of structure formation.
Wang, C. L.
2016-05-17
On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
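To illustrate the linearization idea (this is not the paper's exact algorithm), note that with a Gaussian LRF the logarithm of the photon-number profile is quadratic in pixel position, so a sub-pixel peak estimate follows from just three pixels around the maximum:

```python
import math

def gaussian_peak(counts):
    """Sub-pixel peak position from the log-ratio of the maximum's neighbors.

    For a Gaussian profile, ln N is quadratic in x, so the vertex of the
    parabola through three log-counts gives the exact peak position.
    """
    m = max(range(1, len(counts) - 1), key=lambda i: counts[i])
    lm, l0, lp = (math.log(counts[i]) for i in (m - 1, m, m + 1))
    return m + 0.5 * (lm - lp) / (lm - 2 * l0 + lp)

# Simulated Gaussian profile peaked at x0 = 4.3 pixels (sigma = 1.2 pixels).
x0, sigma = 4.3, 1.2
counts = [1000 * math.exp(-((i - x0) ** 2) / (2 * sigma ** 2)) for i in range(10)]
```

A plain maximum-photon estimate would return pixel 4 here, while the linearized estimator recovers the 0.3-pixel offset, which is the kind of gain the abstract reports.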
Initial borehole acoustic televiewer data processing algorithms
Moore, T.K.
1988-06-01
With the development of a new digital televiewer, several algorithms have been developed in support of off-line data processing. This report describes the initial set of utilities developed to support data handling as well as data display. Functional descriptions, implementation details, and instructions for use of the seven algorithms are provided. 5 refs., 33 figs., 1 tab.
Theoretical Model for Nanoporous Carbon Supercapacitors
Sumpter, Bobby G; Meunier, Vincent; Huang, Jingsong
2008-01-01
The unprecedented anomalous increase in capacitance of nanoporous carbon supercapacitors at pore sizes smaller than 1 nm [Science 2006, 313, 1760.] challenges the long-held presumption that pores smaller than the size of solvated electrolyte ions do not contribute to energy storage. We propose a heuristic model to replace the commonly used model for an electric double-layer capacitor (EDLC) on the basis of an electric double-cylinder capacitor (EDCC) for mesopores (2-50 nm pore size), which becomes an electric wire-in-cylinder capacitor (EWCC) for micropores (< 2 nm pore size). Our analysis of the available experimental data in the micropore regime is confirmed by first-principles density functional theory calculations and reveals significant curvature effects for carbon capacitance. The EDCC (and/or EWCC) model allows the supercapacitor properties to be correlated with pore size, specific surface area, Debye length, electrolyte concentration and dielectric constant, and solute ion size. The new model not only explains the experimental data, but also offers a practical direction for the optimization of the properties of carbon supercapacitors through experiments.
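The geometric content of the model follows from the classical cylindrical capacitor. Normalizing $C = 2\pi\varepsilon_r\varepsilon_0 L / \ln(b/a)$ by the pore surface area $A = 2\pi b L$ gives, for the EDCC (pore radius $b$, double-layer thickness $d$, inner radius $a = b - d$) and the EWCC (inner ion "wire" of radius $a_0$):

$$\frac{C}{A} = \frac{\varepsilon_r \varepsilon_0}{b \ln\!\bigl(b/(b-d)\bigr)} \quad \text{(EDCC)}, \qquad \frac{C}{A} = \frac{\varepsilon_r \varepsilon_0}{b \ln\!\bigl(b/a_0\bigr)} \quad \text{(EWCC)}.$$

This makes explicit how capacitance correlates with pore size $b$, double-layer thickness $d$, and ion size $a_0$; the notation is ours, reconstructed from the model description above rather than quoted from the paper.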
Petascale algorithms for reactor hydrodynamics.
Fischer, P.; Lottes, J.; Pointer, W. D.; Siegel, A.
2008-01-01
We describe recent algorithmic developments that have enabled large eddy simulations of reactor flows on up to P = 65,000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. Petascale computing is expected to play a pivotal role in the design and analysis of next-generation nuclear reactors. Argonne's SHARP project is focused on advanced reactor simulation, with a current emphasis on modeling coupled neutronics and thermal-hydraulics (TH). The TH modeling comprises a hierarchy of computational fluid dynamics approaches ranging from detailed turbulence computations, using DNS (direct numerical simulation) and LES (large eddy simulation), to full core analysis based on RANS (Reynolds-averaged Navier-Stokes) and subchannel models. Our initial study is focused on LES of sodium-cooled fast reactor cores. The aim is to leverage petascale platforms at DOE's Leadership Computing Facilities (LCFs) to provide detailed information about heat transfer within the core and to provide baseline data for less expensive RANS and subchannel models.
THEORETICAL SPECTRA OF TERRESTRIAL EXOPLANET SURFACES
Hu Renyu; Seager, Sara; Ehlmann, Bethany L.
2012-06-10
We investigate spectra of airless rocky exoplanets with a theoretical framework that self-consistently treats reflection and thermal emission. We find that a silicate surface on an exoplanet is spectroscopically detectable via prominent Si-O features in the thermal emission bands of 7-13 {mu}m and 15-25 {mu}m. The variation of brightness temperature due to the silicate features can be up to 20 K for an airless Earth analog, and the silicate features are wide enough to be distinguished from atmospheric features with relatively high resolution spectra. The surface characterization thus provides a method to unambiguously identify a rocky exoplanet. Furthermore, identification of specific rocky surface types is possible with the planet's reflectance spectrum in near-infrared broad bands. A key parameter to observe is the difference between K-band and J-band geometric albedos (A{sub g}(K) - A{sub g}(J)): A{sub g}(K) - A{sub g}(J) > 0.2 indicates that more than half of the planet's surface has abundant mafic minerals, such as olivine and pyroxene, in other words primary crust from a magma ocean or high-temperature lavas; A{sub g}(K) - A{sub g}(J) < -0.09 indicates that more than half of the planet's surface is covered or partially covered by water ice or hydrated silicates, implying extant or past water on its surface. Also, surface water ice can be specifically distinguished by an H-band geometric albedo lower than the J-band geometric albedo. The surface features can be distinguished from possible atmospheric features with molecule identification of atmospheric species by transmission spectroscopy. We therefore propose that mid-infrared spectroscopy of exoplanets may detect rocky surfaces, and near-infrared spectrophotometry may identify ultramafic surfaces, hydrated surfaces, and water ice.
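The broadband criteria in the abstract can be transcribed directly into a small classifier. The function name and return labels below are our own; the thresholds (0.2, -0.09, and H-band albedo below J-band for ice) come straight from the text above.

```python
def classify_surface(ag_j, ag_h, ag_k):
    """Classify a rocky exoplanet surface from J-, H-, K-band geometric albedos."""
    if ag_k - ag_j > 0.2:
        # Ag(K) - Ag(J) > 0.2: majority ultramafic surface.
        return "mostly ultramafic (olivine/pyroxene-rich primary crust or lava)"
    if ag_k - ag_j < -0.09:
        # Ag(K) - Ag(J) < -0.09: ice or hydrated silicates dominate.
        if ag_h < ag_j:
            # H-band albedo below J-band specifically indicates water ice.
            return "water ice on much of the surface"
        return "water ice or hydrated silicates on much of the surface"
    return "indeterminate from broadband albedos alone"
```

For example, a planet with Ag(J) = 0.3 and Ag(K) = 0.6 would be flagged as ultramafic, while Ag(J) = 0.4, Ag(H) = 0.2, Ag(K) = 0.25 would be flagged as ice-covered.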
Analytic reconstruction algorithms for triple-source CT with horizontal data truncation
Chen, Ming; Yu, Hengyong
2015-10-15
Purpose: This paper explores a triple-source imaging method with horizontal data truncation to enlarge the field of view (FOV) for big objects. Methods: The study is conducted by using theoretical analysis, mathematical deduction, and numerical simulations. The proposed algorithms are implemented in C++ and MATLAB. While the basic platform is constructed in MATLAB, the computationally intensive segments are coded in C++ and linked via a MEX interface. Results: A triple-source circular scanning configuration with horizontal data truncation is developed, where three pairs of x-ray sources and detectors are unevenly distributed on the same circle to cover the whole imaging object. For this triple-source configuration, a fan-beam filtered backprojection-type algorithm is derived for truncated full-scan projections without data rebinning. The algorithm is also extended for horizontally truncated half-scan projections and cone-beam projections in a Feldkamp-type framework. With this method, the FOV is enlarged twofold to threefold to scan bigger objects with high speed and quality. The numerical simulation results confirm the correctness and effectiveness of the developed algorithms. Conclusions: The triple-source scanning configuration with horizontal data truncation can not only keep most of the advantages of a traditional multisource system but also cover a larger FOV for big imaging objects. In addition, because the filtering is shift-invariant, the proposed algorithms are very fast and easily parallelized on graphic processing units.
Development of design basis capacity for SNF project systems
Pajunen, A.L.
1996-02-27
An estimate of the design capacity for Spent Nuclear Fuel Project systems producing Multi-Canister Overpacks is developed, based on completing fuel processing in a two-year period. The design basis capacity relates the desired annual processing rate to operating inefficiencies that may actually be experienced, projecting a design capacity for each system. The basis for estimating operating efficiency factors is described. Estimates of the design basis capacity were limited to systems actually producing the Multi-Canister Overpack. These systems include Fuel Retrieval, K Basin SNF Vacuum Drying, Canister Storage Building support for Staging and Storage, and Hot Vacuum Conditioning. The capacities of other systems are assumed to be derived from these system capacities, such that systems producing a Multi-Canister Overpack are not constrained.
Theoretical Studies of Hydrogen Storage Alloys.
Jonsson, Hannes
2012-03-22
Theoretical calculations were carried out to search for lightweight alloys that can be used to reversibly store hydrogen in mobile applications, such as automobiles. Our primary focus was on magnesium based alloys. While MgH{sub 2} is in many respects a promising hydrogen storage material, there are two serious problems which need to be solved in order to make it useful: (i) the binding energy of the hydrogen atoms in the hydride is too large, causing the release temperature to be too high, and (ii) the diffusion of hydrogen through the hydride is so slow that loading of hydrogen into the metal takes much too long. In the first year of the project, we found that the addition of ca. 15% of aluminum decreases the hydrogen binding energy to the target value of 0.25 eV, which corresponds to release of 1 bar hydrogen gas at 100 degrees C. Also, the addition of ca. 15% of transition metal atoms, such as Ti or V, reduces the formation energy of interstitial H-atoms, making the diffusion of H-atoms through the hydride more than ten orders of magnitude faster at room temperature. In the second year of the project, several calculations of alloys of magnesium with various other transition metals were carried out and systematic trends in stability, hydrogen binding energy and diffusivity established. Some calculations of ternary alloys and their hydrides were also carried out, for example of Mg{sub 6}AlTiH{sub 16}. It was found that the binding energy reduction due to the addition of aluminum and increased diffusivity due to the addition of a transition metal are both effective at the same time. This material would in principle work well for hydrogen storage but it is, unfortunately, unstable with respect to phase separation. A search was made for a ternary alloy of this type where both the alloy and the corresponding hydride are stable. Promising results were obtained by including Zn in the alloy.
Theoretical & Experimental Studies of Elementary Particles
McFarland, Kevin
2012-10-04
High energy physics has been one of the signature research programs at the University of Rochester for over 60 years. The group has made leading contributions to experimental discoveries at accelerators and in cosmic rays and has played major roles in developing the theoretical framework that gives us our "standard model" of fundamental interactions today. This award from the Department of Energy funded a major portion of that research for more than 20 years. During this time, highlights of the supported work included the discovery of the top quark at the Fermilab Tevatron, the completion of a broad program of physics measurements that verified the electroweak unified theory, the measurement of three generations of neutrino flavor oscillations, and the first observation of a "Higgs-like" boson at the Large Hadron Collider. The work has resulted in more than 2000 publications over the period of the grant. The principal investigators supported on this grant have been recognized as leaders in the field of elementary particle physics by their peers through numerous awards and leadership positions. Most notable among them is the APS W.K.H. Panofsky Prize awarded to Arie Bodek in 2004, the J.J. Sakurai Prizes awarded to Susumu Okubo and C. Richard Hagen in 2005 and 2010, respectively, the Wigner medal awarded to Susumu Okubo in 2006, and five principal investigators (Das, Demina, McFarland, Orr, Tipton) who received Department of Energy Outstanding Junior Investigator awards during the period of this grant. The University of Rochester Department of Physics and Astronomy, which houses the research group, provides primary salary support for the faculty and has waived most tuition costs for graduate students during the period of this grant. The group also benefits significantly from technical support and infrastructure available at the University which supports the work. The research work of the group has provided educational opportunities for graduate students
Theoretical Description of the Fission Process
Witold Nazarewicz
2009-10-25
Advanced theoretical methods and high-performance computers may finally unlock the secrets of nuclear fission, a fundamental nuclear decay that is of great relevance to society. In this work, we studied the phenomenon of spontaneous fission using the symmetry-unrestricted nuclear density functional theory (DFT). Our results show that many observed properties of fissioning nuclei can be explained in terms of pathways in multidimensional collective space corresponding to different geometries of fission products. From the calculated collective potential and collective mass, we estimated spontaneous fission half-lives, and good agreement with experimental data was found. We also predicted a new phenomenon of trimodal spontaneous fission for some transfermium isotopes. Our calculations demonstrate that fission barriers of excited superheavy nuclei vary rapidly with particle number, pointing to the importance of shell effects even at large excitation energies. The results are consistent with recent experiments where superheavy elements were created by bombarding an actinide target with 48-calcium; yet even at high excitation energies, sizable fission barriers remained. Not only does this reveal clues about the conditions for creating new elements, it also provides a wider context for understanding other types of fission. Understanding of the fission process is crucial for many areas of science and technology. Fission governs the existence of many transuranium elements, including the predicted long-lived superheavy species. In nuclear astrophysics, fission influences the formation of heavy elements in the final stages of the r-process in a very high neutron density environment. Fission applications are numerous. Improved understanding of the fission process will enable scientists to enhance the safety and reliability of the nation’s nuclear stockpile and nuclear reactors. The deployment of a fleet of safe and efficient advanced reactors, which will also minimize radiotoxic
Efficient Theoretical Screening of Solid Sorbents for CO2 Capture Applications
Office of Scientific and Technical Information (OSTI)
By combining thermodynamic database mining with first-principles density functional theory and phonon lattice dynamics calculations, a theoretical screening methodology was developed to identify the most promising CO2 sorbent candidates
Improvements of Nuclear Data and Its Uncertainties by Theoretical Modeling
Office of Scientific and Technical Information (OSTI)
Talou, Patrick (Los Alamos National Laboratory); Nazarewicz, Witold (University of Tennessee, Knoxville); ...
EXPERIMENTAL AND THEORETICAL DETERMINATION OF HEAVY OIL VISCOSITY UNDER RESERVOIR CONDITIONS
Office of Scientific and Technical Information (OSTI)
Theoretical analysis of uranium-doped thorium dioxide: Introduction of a thoria force field with explicit polarization
Office of Scientific and Technical Information (OSTI)
Catalysis by Design - Theoretical and Experimental Studies of Model Catalysts for Lean NOx Traps
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Experimental and Theoretical Investigation of Lubricant and Additive...
Combining data from motored engine friction tests, a theoretical engine model, a line friction contact rig, and a fired engine can provide better insight into lube oil and additive ...
Fraction of Theoretical Specific Energy Achieved at Battery Pack Level |
U.S. Department of Energy (DOE) all webpages (Extended Search)
Fraction of Theoretical Specific Energy Achieved at Battery Pack Level Is Very Sensitive ... factors in determining the fraction of battery material specific energy captured at pack ...
Theoretical study of Ag- and Au-filled skutterudites | Department of Energy
Theoretical values of thermoelectric properties for Ag-filled skutterudites (stoica.pdf, 3.05 MB)
OSTIblog Articles in the theoretical physics Topic | OSTI
Office of Scientific and Technical Information (OSTI)
The Remarkable Legacy of Kenneth Geddes Wilson, by Kathy Chambers ... Laureate Kenneth Geddes Wilson (1936-2013) forever changed how we think about physics.
First-Principles Theoretical Studies of Hydrogen Interaction with Ultrathin Mg and Mg-based Alloy Films
Office of Scientific and Technical Information (OSTI)
CRAD, Safety Basis- Idaho MF-628 Drum Treatment Facility
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for a May 2007 readiness assessment of the Safety Basis at the Advanced Mixed Waste Treatment Project.
Canister Storage Building (CSB) Design Basis Accident Analysis Documentation
CROWE, R.D.
1999-09-09
This document provides the detailed accident analysis to support ''HNF-3553, Spent Nuclear Fuel Project Final Safety, Analysis Report, Annex A,'' ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
Solar Power Tower Design Basis Document, Revision 0
ZAVOICO,ALEXIS B.
2001-07-01
This report contains the design basis for a generic molten-salt solar power tower. A solar power tower uses a field of tracking mirrors (heliostats) that redirect sunlight on to a centrally located receiver mounted on top of a tower, which absorbs the concentrated sunlight. Molten nitrate salt, pumped from a tank at ground level, absorbs the sunlight, heating it to 565 C. The heated salt flows back to ground level into another tank where it is stored, then pumped through a steam generator to produce steam and make electricity. This report establishes a set of criteria upon which the next generation of solar power towers will be designed. The report contains detailed criteria for each of the major systems: Collector System, Receiver System, Thermal Storage System, Steam Generator System, Master Control System, and Electric Heat Tracing System. The Electric Power Generation System and Balance of Plant discussions are limited to interface requirements. This design basis builds on the extensive experience gained from the Solar Two project and includes potential design innovations that will improve reliability and lower technical risk. This design basis document is a living document and contains several areas that require trade-studies and design analysis to fully complete the design basis. Project- and site-specific conditions and requirements will also resolve open To Be Determined issues.
Cold Vacuum Drying (CVD) Facility Design Basis Accident Analysis Documentation
PIEPHO, M.G.
1999-10-20
This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report, ''Cold Vacuum Drying Facility Final Safety Analysis Report (FSAR).'' All assumptions, parameters and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR.
Canister Storage Building (CSB) Design Basis Accident Analysis Documentation
CROWE, R.D.; PIEPHO, M.G.
2000-03-23
This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report''. All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
Canister storage building design basis accident analysis documentation
KOPELIC, S.D.
1999-02-25
This document provides the detailed accident analysis to support HNF-3553, Spent Nuclear Fuel Project Final Safety Analysis Report, Annex A, ''Canister Storage Building Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the Canister Storage Building Final Safety Analysis Report.
CRAD, Safety Basis- Idaho Accelerated Retrieval Project Phase II
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for a February 2006 Commencement of Operations assessment of the Safety Basis at the Idaho Accelerated Retrieval Project Phase II.
Slattery, Stuart R.
2015-12-02
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
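A minimal version of the spline-interpolation transfer path can be sketched as follows. The Wendland C2 kernel is a standard compactly supported RBF; the tiny 1-D point clouds and the dense solver are illustrative stand-ins for the paper's 3-D, sparse, distributed setting.

```python
def wendland_c2(r, radius):
    """Wendland C2 kernel; compact support makes the interpolation matrix sparse."""
    q = r / radius
    return (1 - q) ** 4 * (4 * q + 1) if q < 1.0 else 0.0

def solve(A, b):
    """Dense Gaussian elimination with partial pivoting (fine for tiny demos)."""
    n = len(A)
    M = [row[:] + [b_i] for row, b_i in zip(A, b)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n + 1):
                M[r][c] -= f * M[col][c]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
    return x

def rbf_transfer(src_pts, src_vals, dst_pts, radius=1.0):
    """Fit spline weights on the source cloud, then evaluate on the target cloud."""
    A = [[wendland_c2(abs(p - q), radius) for q in src_pts] for p in src_pts]
    w = solve(A, src_vals)
    return [sum(w_j * wendland_c2(abs(p - q), radius)
                for w_j, q in zip(w, src_pts)) for p in dst_pts]

src = [0.0, 0.25, 0.5, 0.75, 1.0]
vals = [x * x for x in src]                # field to transfer: f(x) = x^2
out = rbf_transfer(src, vals, src)         # evaluating back at the source points
```

Because this is interpolation (not a least-squares reconstruction), evaluating at the source points reproduces the source values exactly, which is one face of the conservation/accuracy trade-off the study examines.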
Advanced CHP Control Algorithms: Scope Specification
Katipamula, Srinivas; Brambley, Michael R.
2006-04-28
The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.
Advanced Imaging Algorithms for Radiation Imaging Systems
Marleau, Peter
2015-10-01
The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithms require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
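As a concrete reference point (our own toy example, not the project's code), the MLEM update named above has the classic multiplicative form x ← x · Aᵀ(y / Ax) / Aᵀ1, sketched here for a 1-D blur:

```python
def mlem(A, y, iterations=200, start=None):
    """MLEM: multiplicative update preserving nonnegativity of the estimate."""
    n = len(A[0])
    x = [1.0] * n if start is None else list(start)   # nonnegative start
    norm = [sum(A[i][j] for i in range(len(A))) for j in range(n)]  # A^T 1
    for _ in range(iterations):
        Ax = [sum(A[i][j] * x[j] for j in range(n)) for i in range(len(A))]
        ratio = [y_i / ax_i if ax_i > 0 else 0.0 for y_i, ax_i in zip(y, Ax)]
        back = [sum(A[i][j] * ratio[i] for i in range(len(A))) for j in range(n)]
        x = [x_j * b_j / n_j for x_j, b_j, n_j in zip(x, back, norm)]
    return x

# Toy blur: each detector bin sees its pixel plus a quarter of each neighbor.
A = [[0.5 if i == j else 0.25 if abs(i - j) == 1 else 0.0 for j in range(5)]
     for i in range(5)]
truth = [0.0, 0.0, 4.0, 0.0, 0.0]          # a point source
y = [sum(A[i][j] * truth[j] for j in range(5)) for i in range(5)]
est = mlem(A, y)
```

With noiseless data the true image is a fixed point of the update, and from a flat start the iterations concentrate intensity back onto the point source; the many repeated forward projections (Ax) per iteration are exactly the cost the passage above says must be accelerated.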
Genetic algorithms at UC Davis/LLNL
Vemuri, V.R.
1993-12-31
A tutorial introduction to genetic algorithms is given. This brief tutorial should serve the purpose of introducing the subject to the novice. The tutorial is followed by a brief commentary on the term project reports that follow.
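A toy genetic algorithm of the kind such tutorials introduce, shown here on the standard "onemax" problem. The selection, crossover, and mutation choices are textbook defaults chosen for illustration, not anything specific to the cited tutorial.

```python
import random

def onemax(bits):
    """Fitness: number of 1-bits; the optimum is the all-ones string."""
    return sum(bits)

def genetic_algorithm(n_bits=30, pop_size=40, generations=100,
                      p_cross=0.9, p_mut=0.02, seed=0):
    """Textbook GA: binary tournament selection, one-point crossover,
    per-bit flip mutation."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(generations):
        def tournament():
            a, b = rng.sample(pop, 2)
            return a if onemax(a) >= onemax(b) else b
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = tournament(), tournament()
            if rng.random() < p_cross:
                cut = rng.randrange(1, n_bits)          # one-point crossover
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            else:
                c1, c2 = p1[:], p2[:]
            for c in (c1, c2):
                nxt.append([b ^ (rng.random() < p_mut) for b in c])
        pop = nxt[:pop_size]
    return max(pop, key=onemax)

best = genetic_algorithm()
```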
MPI and OpenMP Paradigms on Clusters of SMP Architectures: The Vacancy Tracking Algorithm for Multi-Dimensional Array Transposition
U.S. Department of Energy (DOE) all webpages (Extended Search)
He, Yun (Helen); SC2002 presentation.
Theoretical minimum energies to produce steel for selected conditions
Fruehan, R. J.; Fortini, O.; Paxton, H. W.; Brindle, R.
2000-03-01
An ITP study has determined the theoretical minimum energy requirements for producing steel from ore, scrap, and direct reduced iron. Dr. Richard Fruehan's report, Theoretical Minimum Energies to Produce Steel for Selected Conditions, provides insight into the potential energy savings (and associated reductions in carbon dioxide emissions) for ironmaking, steelmaking, and rolling processes.
Parallel Algorithms and Patterns (Technical Report) | SciTech...
Office of Scientific and Technical Information (OSTI)
Technical report by Robey, Robert W. (Los Alamos National Laboratory).
Efficient algorithm for generating spectra using line-by-line...
Office of Scientific and Technical Information (OSTI)
Subjects: 74 ATOMIC AND MOLECULAR PHYSICS; 70 PLASMA PHYSICS AND FUSION; ALGORITHMS.
Robust Algorithm for Computing Statistical Stark Broadening of...
Office of Scientific and Technical Information (OSTI)
Language: English. Subjects: 70 PLASMA PHYSICS AND FUSION; ACCURACY; ALGORITHMS.
Solar Position Algorithm for Solar Radiation Applications (Revised...
Office of Scientific and Technical Information (OSTI)
New Design Methods and Algorithms for Multi-component Distillation...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
New Algorithm Enables Faster Simulations of Ultrafast Processes
U.S. Department of Energy (DOE) all webpages (Extended Search)
... Academy of Sciences, have developed a new real-time time-dependent density functional ...
James, L.A.
1997-10-01
The Section 11 Working Group on Flaw Evaluation of the ASME B and PV Code Committee is considering a Code Case to allow the determination of the conditions under which environmentally-assisted cracking of low-alloy steels could occur in PWR primary environments. This paper provides the technical support basis for such an EAC Initiation and Cessation Criterion by reviewing the theoretical and experimental information in support of the proposed Code Case.
Drainage Algorithm for Geospatial Knowledge
Energy Science and Technology Software Center (OSTI)
2006-08-15
The Pacific Northwest National Laboratory (PNNL) has developed a prototype stream extraction algorithm that semi-automatically extracts and characterizes streams using a variety of multisensor imagery and digital terrain elevation data (DTED). The system is currently optimized for three types of single-band imagery: radar, visible, and thermal. Method of solution: DRAGON (1) classifies pixels into clumps of water objects based on the classification of water pixels by spectral signatures and neighborhood relationships, (2) uses morphology operations (erosion and dilation) to separate out large lakes (or embayments), isolated lakes, ponds, wide rivers, and narrow rivers, and (3) translates the river objects into vector objects. In detail, the process can be broken down into the following steps. A. Water pixels are initially identified using the extent range and slope values (if an optional DEM file is available). B. Erode to the distance that defines a large water body and then dilate back. The resulting mask can be used to identify large lake and embayment objects, which are then removed from the image. Since this operation can be time consuming, it is only performed if a simple test (i.e., a large box containing only water pixels can be found somewhere in the image) indicates a large water body is present. C. All water pixels are 'clumped' (in Imagine terminology, clumping is when pixels of a common classification that touch are connected) and clumps which do not contain pure water pixels (e.g., dark cloud shadows) are removed. D. The resulting true water pixels are clumped, and water objects which are too small (e.g., ponds) or isolated lakes (i.e., isolated objects with a small compactness ratio) are removed. Note that at this point lakes have been identified as a byproduct of the filtering process and can be output as vector layers if needed. E. At this point only river pixels
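The erode-then-dilate idea of step B (a morphological opening) can be illustrated on a tiny binary mask. This is a pure-array sketch with an 8-connected structuring element, invented for illustration; it is not the DRAGON/Imagine implementation.

```python
import numpy as np

def dilate(mask, it=1):
    """Binary dilation with a 3x3 (8-connected) structuring element."""
    m = mask.copy()
    for _ in range(it):
        p = np.pad(m, 1)                       # pad with False outside
        m = (p[:-2, 1:-1] | p[2:, 1:-1] | p[1:-1, :-2] | p[1:-1, 2:]
             | p[:-2, :-2] | p[:-2, 2:] | p[2:, :-2] | p[2:, 2:] | m)
    return m

def erode(mask, it=1):
    """Erosion by duality: erode(mask) = complement of dilate(complement)."""
    return ~dilate(~mask, it)

# a wide "lake" survives the opening; a 1-pixel-wide "river" does not
water = np.zeros((12, 12), dtype=bool)
water[2:8, 2:8] = True          # lake
water[10, :] = True             # narrow river
large = dilate(erode(water, 2), 2)   # opening keeps only wide water bodies
rivers = water & ~large              # what remains is the narrow feature
```

Subtracting the opened mask from the original is exactly how wide bodies (lakes, embayments) can be separated from narrow rivers before vectorization.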
Industrial ecology: A basis for sustainable relations and cooperation
Blades, K.
1996-07-19
The Commission for Environmental Cooperation (CEC) seeks to address, in a cooperative manner, the environmental issues affecting the North American region and understand the linkages between environment and economy. Broadly, the goal of the CEC can be thought of as an attempt to achieve a sustainable economy concomitantly with continued economic, cultural, and technological evolution. The emerging field of industrial ecology provides a useful means for balancing the environmental and economic objectives of NAFTA. As NAFTA stimulates economic cooperation and growth, we must collectively develop mechanisms that enhance the environmental quality of the region. LLNL's effort in industrial ecology provides the scientific basis and innovative use of technology to reconcile environmental and economic concerns. Nevertheless, these are not issues which can be resolved by a single institution. Efficient use of the linkages established by NAFTA is necessary to nurture our regional partnership, which forms the basis for a sustainable environment, economy, and relationship.
Design-Load Basis for LANL Structures, Systems, and Components
I. Cuesta
2004-09-01
This document supports the recommendations in the Los Alamos National Laboratory (LANL) Engineering Standard Manual (ESM), Chapter 5 (Structural), by providing the basis for the loads, analysis procedures, and codes to be used in the ESM. It also provides the justification for eliminating certain loads from consideration in design, and evidence that the design basis loads are appropriate and consistent with the graded approach required by the Department of Energy (DOE) nuclear safety management regulation, 10 CFR Part 830. This document focuses on (1) the primary and secondary natural phenomena hazards listed in DOE-G-420.1-2, Appendix C, (2) additional loads not related to natural phenomena hazards, and (3) the design loads on structures during construction.
Basis for NGNP Reactor Design Down-Selection
L.E. Demick
2011-11-01
The purpose of this paper is to identify the extent of technology development, design, and licensing maturity anticipated to be required to credibly identify differences that could support a practical technical choice between the prismatic and pebble bed reactor designs. This paper does not address a business decision based on the economics, business model, and resulting business case, since these will vary based on the reactor application. The selection of the type of reactor, the module ratings, the number of modules, the configuration of the balance of plant, and other design selections will be made on the basis of optimizing the business case for the application. These are not decisions that can be made on a generic basis.
Resilient Control Systems Practical Metrics Basis for Defining Mission Impact
Craig G. Rieger
2014-08-01
"Resilience" describes how systems operate at an acceptable level of normalcy despite disturbances or threats. In this paper we first consider the cognitive, cyber-physical interdependencies inherent in critical infrastructure systems and how resilience differs from reliability in mitigating these risks. A terminology and metrics basis is provided to integrate the cognitive, cyber-physical aspects that should be considered when defining solutions for resilience. A practical approach is taken to roll this metrics basis up to system integrity and business case metrics that establish "proper operation" and "impact." A notional chemical processing plant is the use case for demonstrating how the system integrity metrics can be applied to establish performance, and
Online Monitoring Technical Basis and Analysis Framework for Emergency Diesel Generators: Interim Report for FY 2013
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
The Light Water Reactor Sustainability Program is a research, development, and deployment program sponsored by the U.S. Department of Energy Office of Nuclear Energy. The program is operated in collaboration with the Electric Power Research Institute's
Online Monitoring Technical Basis and Analysis Framework for Large Power Transformers: Interim Report for FY 2012
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
The Light Water Reactor Sustainability Program is a research, development, and deployment program sponsored by the U.S. Department of Energy Office of Nuclear Energy. The program is operated in collaboration with the Electric Power Research Institute's (EPRI's)
Interim Safety Basis for Fuel Supply Shutdown Facility
BENECKE, M.W.
2000-09-07
This ISB, in conjunction with the IOSR, provides the required basis for interim operation, or restrictions on interim operations, and administrative controls for the facility until a SAR is prepared in accordance with the new requirements or the facility is shut down. It is concluded that the risks associated with the current and anticipated modes of the facility (uranium disposition, cleanup, and transition activities required for permanent closure) are within risk guidelines.
Auxiliary basis expansions for large-scale electronic structure calculations
Jung, Yousung; Sodt, Alexander; Gill, Peter W.M.; Head-Gordon, Martin
2005-04-04
One way to reduce the computational cost of electronic structure calculations is to employ auxiliary basis expansions to approximate four-center integrals in terms of two- and three-center integrals, usually using the variationally optimal Coulomb metric to determine the expansion coefficients. However, the long-range decay behavior of the auxiliary basis expansion coefficients has not been characterized. We find that this decay can be surprisingly slow. Numerical experiments on linear alkanes and a toy model both show that the decay can be as slow as 1/r in the distance between the auxiliary function and the fitted charge distribution. The Coulomb-metric fitting equations also involve divergent matrix elements for extended systems treated with periodic boundary conditions. An attenuated Coulomb metric that is short-range can eliminate these oddities without substantially degrading calculated relative energies. The sparsity of the fit coefficients is assessed on simple hydrocarbon molecules and shows quite early onset of linear growth in the number of significant coefficients with system size using the attenuated Coulomb metric. This means it is possible to design linear-scaling auxiliary basis methods without additional approximations to treat large systems.
Bootstrap performance profiles in stochastic algorithms assessment
Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro
2015-03-10
Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
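A minimal sketch of the percentile bootstrap that underlies such profiles, comparing two sets of run results. The data and the choice of the median as the statistic are invented for illustration; this is not the authors' multiple-comparison procedure.

```python
import numpy as np

def bootstrap_ci(samples, stat=np.median, n_boot=5000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for an arbitrary statistic:
    resample with replacement, compute the statistic per resample, and take
    the alpha/2 and 1 - alpha/2 quantiles of the bootstrap distribution."""
    rng = np.random.default_rng(seed)
    idx = rng.integers(0, len(samples), size=(n_boot, len(samples)))
    boot = np.apply_along_axis(stat, 1, np.asarray(samples)[idx])
    lo, hi = np.quantile(boot, [alpha / 2, 1 - alpha / 2])
    return lo, hi

# best-of-run objective values from two hypothetical stochastic algorithms
runs_a = np.array([1.02, 0.98, 1.10, 0.95, 1.05, 1.00, 0.97, 1.03])
runs_b = np.array([1.30, 1.25, 1.40, 1.20, 1.35, 1.28, 1.22, 1.31])
ci_a = bootstrap_ci(runs_a)
ci_b = bootstrap_ci(runs_b)
```

Because the statistic is a parameter of `bootstrap_ci`, the same machinery estimates a sampling distribution for the median, a quantile, or any other summary, even from small samples, which is the flexibility the abstract emphasizes.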
Factorization using the quadratic sieve algorithm
Davis, J.A.; Holdridge, D.B.
1983-01-01
Since the cryptosecurity of the RSA two key cryptoalgorithm is no greater than the difficulty of factoring the modulus (product of two secret primes), a code that implements the Quadratic Sieve factorization algorithm on the CRAY I computer has been developed at the Sandia National Laboratories to determine as sharply as possible the current state-of-the-art in factoring. Because all viable attacks on RSA thus far proposed are equivalent to factorization of the modulus, sharper bounds on the computational difficulty of factoring permit improved estimates for the size of RSA parameters needed for given levels of cryptosecurity. Analysis of the Quadratic Sieve indicates that it may be faster than any previously published general purpose algorithm for factoring large integers. The high speed of the CRAY I coupled with the capability of the CRAY to pipeline certain vectorized operations make this algorithm (and code) the front runner in current factoring techniques.
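The congruence-of-squares principle the quadratic sieve exploits can be shown in miniature with Fermat's method: find x with x^2 ≡ y^2 (mod n) and x ≢ ±y, and gcd(x - y, n) is a nontrivial factor. This toy (with a small invented modulus) is the special case where x^2 - n itself is a square; the sieve's contribution is finding such congruences efficiently for large n.

```python
from math import gcd, isqrt

def fermat_congruence_factor(n):
    """Search upward from ceil(sqrt(n)) for x with x^2 - n a perfect square.
    Then x^2 ≡ y^2 (mod n), and gcd(x - y, n), gcd(x + y, n) factor n."""
    x = isqrt(n)
    if x * x < n:
        x += 1
    while True:
        y2 = x * x - n
        y = isqrt(y2)
        if y * y == y2:
            return gcd(x - y, n), gcd(x + y, n)
        x += 1

p, q = fermat_congruence_factor(8051)   # 8051 = 83 * 97
```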
Nonlinear Global Optimization Using Curdling Algorithm
Energy Science and Technology Software Center (OSTI)
1996-03-01
An algorithm for performing curdling optimization, which is a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
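The derivative-free grid-refinement idea can be caricatured in one dimension: evaluate the function on a grid of cell midpoints, keep a bracket around the best cell, and refine. This is only an illustrative sketch of the general technique; the actual curdling software works in up to four dimensions and retains extremal regions rather than a single bracket.

```python
def grid_refine_minimize(f, lo, hi, levels=8, divisions=9):
    """Derivative-free grid refinement: sample f at cell midpoints, then
    shrink the search interval to a bracket around the best midpoint."""
    for _ in range(levels):
        step = (hi - lo) / divisions
        pts = [lo + step * (i + 0.5) for i in range(divisions)]
        best = min(pts, key=f)               # no gradients needed
        lo, hi = best - step, best + step    # bracket covers adjacent cells
    return best

xmin = grid_refine_minimize(lambda x: (x - 2.5) ** 2 + 1.0, 0.0, 10.0)
```

Each level's function evaluations are independent, which is why this family of methods is inherently parallel, as the abstract notes.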
Parallelism of the SANDstorm hash algorithm.
Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree
2009-09-01
Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed and prevents them from taking advantage of the current trend toward multi-core platforms, which in turn limits their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrate a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
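The structural property that makes a hash parallelizable, hashing independent blocks and then combining the results, can be sketched generically. This uses SHA-256 in a two-level Merkle-style tree purely as a stand-in to show the shape of the computation; it is not the SANDstorm construction.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

def tree_hash(message: bytes, block_size: int = 1 << 10) -> str:
    """Two-level tree hash: leaf blocks are hashed independently (and so can
    be distributed across cores), then a single final hash combines the leaf
    digests. A serial Merkle-Damgard chain cannot be split this way."""
    blocks = [message[i:i + block_size]
              for i in range(0, len(message), block_size)]
    with ThreadPoolExecutor() as pool:
        leaves = list(pool.map(lambda b: hashlib.sha256(b).digest(), blocks))
    return hashlib.sha256(b"".join(leaves)).hexdigest()

digest = tree_hash(b"x" * 10000)
```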
RELEASE OF DRIED RADIOACTIVE WASTE MATERIALS TECHNICAL BASIS DOCUMENT
KOZLOWSKI, S.D.
2007-05-30
This technical basis document was developed to support RPP-23429, Preliminary Documented Safety Analysis for the Demonstration Bulk Vitrification System (PDSA), and RPP-23479, Preliminary Documented Safety Analysis for the Contact-Handled Transuranic Mixed (CH-TRUM) Waste Facility. The main document describes the risk binning process and the technical basis for assigning risk bins to the representative accidents involving the release of dried radioactive waste materials from the Demonstration Bulk Vitrification System (DBVS) and to the associated represented hazardous conditions. Appendices D through F provide the technical basis for assigning risk bins to the representative dried waste release accident and associated represented hazardous conditions for the Contact-Handled Transuranic Mixed (CH-TRUM) Waste Packaging Unit (WPU). The risk binning process uses an evaluation of the frequency and consequence of a given representative accident or represented hazardous condition to determine the need for safety structures, systems, and components (SSC) and technical safety requirement (TSR)-level controls. A representative accident or a represented hazardous condition is assigned to a risk bin based on the potential radiological and toxicological consequences to the public and the collocated worker. Note that the risk binning process is not applied to facility workers because credible hazardous conditions with the potential for significant facility worker consequences are considered for safety-significant SSCs and/or TSR-level controls regardless of their estimated frequency. The controls for protection of the facility workers are described in RPP-23429 and RPP-23479. Determination of the need for safety-class SSCs was performed in accordance with DOE-STD-3009-94, Preparation Guide for U.S. Department of Energy Nonreactor Nuclear Facility Documented Safety Analyses, as described below.
Theoretical Studies of Low Frequency Instabilities in the Ionosphere. Final Report
Dimant, Y. S.
2003-08-20
The objective of the current project is to provide a theoretical basis for better understanding of numerous radar and rocket observations of density irregularities and related effects in the lower equatorial and high-latitude ionospheres. The research focused on: (1) continuing efforts to develop a theory of nonlinear saturation of the Farley-Buneman instability; (2) revision of the kinetic theory of the electron-thermal instability at low altitudes; (3) studying the effects of strong anomalous electron heating in the high-latitude electrojet; (4) analytical and numerical studies of the combined Farley-Buneman and ion-thermal instabilities in the E-region ionosphere; and (5) studying the effect of dust charging in Polar Mesospheric Clouds.
5th International REAC/TS Symposium: The Medical Basis for Radiation...
U.S. Department of Energy (DOE) all webpages (Extended Search)
5th International REAC/TS Symposium: The Medical Basis for Radiation Accident Preparedness, Sept. 27-29, 2011.
CRAD, Safety Basis Upgrade Review (DOE-STD-3009-2014) - May 15...
Office of Environmental Management (EM)
Provides objectives, criteria, and approaches for establishing and maintaining the safety basis at nuclear facilities.
AUDIT REPORT Follow-up on Nuclear Safety: Safety Basis and Quality...
Audit report "Follow-up on Nuclear Safety: Safety Basis and Quality Assurance at the Los Alamos National Laboratory."
EXPERIMENTAL AND THEORETICAL DETERMINATION OF HEAVY OIL VISCOSITY...
Office of Scientific and Technical Information (OSTI)
Final progress report for the period Oct. 1999-May 2003, contract number DE-FG26-99FT40615.
Final Report. Research in Theoretical High Energy Physics
Greensite, Jeffrey P.; Golterman, Maarten F.L.
2015-04-30
Grant-supported research in theoretical high-energy physics, conducted in the period 1992-2015, is briefly described, and a full listing of published articles resulting from those research activities is supplied.
Neutron-Antineutron Oscillations: Theoretical Status and Experimental Prospects
Phillips, D. G.; Snow, W. M.; Babu, K.; Banerjee, S.; Baxter, D. V.; Berezhiani, Z.; Bergevin, M.; Bhattacharya, S.; Brooijmans, G.; Castellanos, L.; et al.,
2014-10-04
This paper summarizes the relevant theoretical developments, outlines some ideas to improve experimental searches for free neutron-antineutron oscillations, and suggests avenues for future improvement in the experimental sensitivity.
Theoretical Studies of Elementary Hydrocarbon Species and Their Reactions
Allen, Wesley D.; Schaefer, III, Henry F.
2015-11-14
This is the final report on the theoretical studies of elementary hydrocarbon species and their reactions. Part A provides a bibliography of publications supported by DOE from 2010 to 2016, and Part B presents recent research highlights.
Theoretical/best practice energy use in metalcasting operations
Schifo, J. F.; Radia, J. T.
2004-05-01
This study determined the theoretical minimum energy requirements of melting processes for all ferrous and nonferrous engineering alloys. The report also details the best practice energy consumption for the industry.
Theoretical Study of the Mechanism Behind the para-Selective Nitration of Toluene in Zeolite H-Beta
Office of Scientific and Technical Information (OSTI)
Theoretical investigations of two Si-based spintronic materials
Office of Scientific and Technical Information (OSTI)
Two Si-based spintronic materials are investigated: a Mn-Si digital ferromagnetic heterostructure (delta-layer of Mn doped ...
The Bender-Dunne basis operators as Hilbert space operators
Bunao, Joseph; Galapon, Eric A. E-mail: eric.galapon@upd.edu.ph
2014-02-15
The Bender-Dunne basis operators, $T_{-m,n} = 2^{-n}\sum_{k=0}^{n}\binom{n}{k}\, q^{k} p^{-m} q^{n-k}$, where q and p are the position and momentum operators, respectively, are formal integral operators in position representation on the entire real line $\mathbb{R}$ for positive integers n and m. We show, by explicit construction of a dense domain, that the operators $T_{-m,n}$ are densely defined operators in the Hilbert space $L^{2}(\mathbb{R})$.
Guidance For Preparation of Basis For Interim Operation (BIO) Documents
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
DOE-STD-3011-2002, December 2002, superseding DOE-STD-3011-94, November 1994. DOE Standard: Guidance for Preparation of Basis for Interim Operation (BIO) Documents. U.S. Department of Energy, Washington, D.C. 20585. Distribution Statement A: approved for public release; distribution is unlimited.
Berkeley Algorithms Help Researchers Understand Dark Energy
U.S. Department of Energy (DOE) all webpages (Extended Search)
November 24, 2014. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov. Scientists believe that dark energy, the mysterious force that is accelerating cosmic expansion, makes up about 70 percent of the mass and energy of the universe. But because they don't know what it is, they cannot observe it directly. To unlock the mystery of dark energy and its influence on the universe, researchers
Speckle imaging algorithms for planetary imaging
Johansson, E.
1994-11-15
I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.
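The Fourier-magnitude half of the described method follows Labeyrie's classic observation that averaging per-frame power spectra is immune to the random image shifts that blur a simple average (the phase is then recovered separately, e.g. from the average bispectrum). A toy sketch of the magnitude step, with a point source and random shifts standing in for turbulence; this is not the bispectrum phase-retrieval code:

```python
import numpy as np

def average_power_spectrum(frames):
    """Labeyrie-style Fourier-magnitude estimate: average |FFT|^2 over many
    short-exposure frames, then take the square root."""
    ps = np.zeros(frames[0].shape)
    for f in frames:
        ps += np.abs(np.fft.fft2(f)) ** 2
    return np.sqrt(ps / len(frames))

# point-like object under random shifts (a crude stand-in for seeing)
rng = np.random.default_rng(2)
obj = np.zeros((32, 32))
obj[16, 16] = 1.0
frames = [np.roll(obj, (rng.integers(0, 32), rng.integers(0, 32)), axis=(0, 1))
          for _ in range(50)]
mag = average_power_spectrum(frames)
```

For this shifted point source the recovered magnitude is flat (shifts only change Fourier phase, not magnitude), whereas directly averaging the frames would smear the point across the field.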
Graph algorithms in the titan toolkit.
McLendon, William Clarence, III; Wylie, Brian Neil
2009-10-01
Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need of making these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically we describe the design and implementation of an open source toolkit for doing graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.
Theoretical Study on Catalysis by Protein Enzymes and Ribozyme
U.S. Department of Energy (DOE) all webpages (Extended Search)
From the 2000 NERSC Annual Report: the energetics were determined for three mechanisms proposed for TIM-catalyzed reactions. Results from reaction path calculations suggest that the two mechanisms that involve an enediol intermediate are likely to occur, while the direct intra-substrate proton transfer mechanism is energetically unfavorable due to the
Improvements of Nuclear Data and Its Uncertainties by Theoretical Modeling
Office of Scientific and Technical Information (OSTI)
Technical report. Authors: Talou, Patrick (Los Alamos National Laboratory); Nazarewicz, Witold (University of Tennessee, Knoxville, TN, USA); Prinja, Anil (University of New Mexico, USA); Danon, Yaron (Rensselaer Polytechnic Institute, USA).
The Geometry Of Disorder: Theoretical Investigations Of Quasicrystals And Frustrated Magnets: Quasi-Crystals And Quasi-Equivalence: Symmetries And Energies In Alloys And Biological Materials
Office of Scientific and Technical Information (OSTI)
Experimental and theoretical investigations of non-centrosymmetric 8-hydroxyquinolinium dibenzoyl-(L)-tartrate methanol monohydrate single crystal
Office of Scientific and Technical Information (OSTI)
Journal article. Graphical abstract: ORTEP diagram of HQDBT.
Improved algorithm for processing grating-based phase contrast interferometry image sets
Marathe, Shashidhara; Assoufid, Lahsen; Xiao, Xianghui; Ham, Kyungmin; Johnson, Warren W.; Butler, Leslie G.
2014-01-15
Grating-based X-ray and neutron interferometry tomography using phase-stepping methods generates large data sets. An improved algorithm is presented for solving for the parameters used to calculate transmission, differential phase contrast, and dark-field images. The method takes advantage of the vectorization inherent in high-level languages such as Mathematica and MATLAB and can solve a 16 × 1k × 1k data set in less than a second. In addition, the algorithm can function with partial data sets. This is demonstrated by processing a 16-step grating data set with partial use of the original data, chosen without any restriction. We have also calculated the reduced chi-square for the fit, noted the effect of grating support structural elements upon the differential phase contrast image, and explored expanded basis set representations to mitigate the impact.
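For a complete N-step scan, the per-pixel sinusoid parameters can be extracted in one vectorized pass with an FFT along the stepping axis, which is the kind of whole-array vectorization the abstract describes. A minimal sketch on synthetic 16-step data (not the authors' code, which also handles partial data sets and chi-square reporting):

```python
import numpy as np

def fit_phase_steps(stack):
    """Vectorized per-pixel fit of I_k = a0 + a1*cos(2*pi*k/N + phi) for a
    stack of shape (N, H, W): the DC Fourier term gives transmission (a0),
    the first harmonic gives modulation amplitude (a1) and phase (phi)."""
    F = np.fft.fft(stack, axis=0)
    n = stack.shape[0]
    a0 = F[0].real / n            # transmission term
    a1 = 2.0 * np.abs(F[1]) / n   # modulation amplitude (dark-field input)
    phi = np.angle(F[1])          # differential-phase term
    return a0, a1, phi

# synthetic 16-step stack with known parameters on a 4x4 detector
n = 16
k = np.arange(n)[:, None, None]
stack = 2.0 + 0.5 * np.cos(2 * np.pi * k / n + 0.3) * np.ones((1, 4, 4))
a0, a1, phi = fit_phase_steps(stack)
```

One FFT call fits every pixel simultaneously, so the cost is a single transform over the full array rather than a per-pixel least-squares loop.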
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2005-02-25
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL's Hanford External Dosimetry Program which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee which is chartered and chaired by DOE-RL and serves as means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. Rev. 0 marks the first revision to be released through PNNL's Electronic Records & Information Capture Architecture (ERICA) database.
Cold Vacuum Drying facility design basis accident analysis documentation
CROWE, R.D.
2000-08-08
This document provides the detailed accident analysis to support HNF-3553, Annex B, Spent Nuclear Fuel Project Final Safety Analysis Report (FSAR), ''Cold Vacuum Drying Facility Final Safety Analysis Report.'' All assumptions, parameters, and models used to provide the analysis of the design basis accidents are documented to support the conclusions in the FSAR. The calculations in this document address the design basis accidents (DBAs) selected for analysis in HNF-3553, ''Spent Nuclear Fuel Project Final Safety Analysis Report'', Annex B, ''Cold Vacuum Drying Facility Final Safety Analysis Report.'' The objective is to determine the quantity of radioactive particulate available for release at any point during processing at the Cold Vacuum Drying Facility (CVDF) and to use that quantity to determine the amount of radioactive material released during the DBAs. The radioactive material released is used to determine dose consequences to receptors at four locations, and the dose consequences are compared with the appropriate evaluation guidelines and release limits to ascertain the need for preventive and mitigative controls.
AN APPROACH TO SAFETY DESIGN BASIS DOCUMENTATION CHANGE CONTROL
RYAN GW
2008-05-15
This paper describes a safety design basis documentation change control process. The process identifies elements that can be used to manage the project/facility configuration during design evolution through the Initiation, Definition, and Execution project phases. The project phases addressed by the process are defined in US Department of Energy (DOE) Order (O) 413.3A, Program and Project Management for the Acquisition of Capital Assets, in support of DOE project Critical Decisions (CD). This approach has been developed for application to two Hanford Site projects in their early CD phases and is considered to be a key element of safety and design integration. As described in the work that has been performed, the purpose of change control is to maintain consistency among design requirements, the physical configuration, related facility documentation, and the nuclear safety basis during the evolution of the design. The process developed (1) ensures an appropriate level of rigor is applied at each project phase and (2) is considered to implement the requirements and guidance provided in DOE-STD-1189-2008, Integration of Safety into the Design Process. Presentation of this work is expected to benefit others in the DOE Complex that may be implementing DOE-STD-1189-2008 or managing nuclear safety documentation in support of projects in-process.
Optimizing SRF Gun Cavity Profiles in a Genetic Algorithm Framework
Alicia Hofler, Pavel Evtushenko, Frank Marhauser
2009-09-01
Automation of DC photoinjector designs using genetic algorithm (GA) based optimization is an accepted practice in accelerator physics. Allowing the gun cavity field profile shape to be varied can extend the utility of this optimization methodology to superconducting and normal conducting radio frequency (SRF/RF) gun-based injectors. Finding optimal field and cavity geometry configurations can provide guidance for cavity design choices and verify existing designs. We have considered two approaches for varying the electric field profile. The first is to determine the optimal field profile shape that should be used independent of the cavity geometry, and the other is to vary the geometry of the gun cavity structure to produce an optimal field profile. The first method can provide a theoretical optimum and can illuminate where possible gains can be made in field shaping. The second method can produce more realistically achievable designs that can be compared to existing designs. In this paper, we discuss the design and implementation of these two methods for generating field profiles for SRF/RF guns in a GA-based injector optimization scheme and provide preliminary results.
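The GA machinery behind such an optimization can be sketched in a few lines. This is a generic, minimal GA (truncation selection, one-point crossover, point mutation), not the authors' code; the quadratic objective stands in for a beam-dynamics figure of merit, and all names and numbers are assumptions.

```python
import random

# Minimal GA sketch. Genome: a few field-profile coefficients;
# objective: a stand-in for an injector figure of merit (e.g. emittance).
def evolve(objective, n_genes=4, pop_size=30, generations=60, seed=1):
    rng = random.Random(seed)
    pop = [[rng.uniform(-1, 1) for _ in range(n_genes)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=objective)
        elite = pop[: pop_size // 4]          # truncation selection
        children = []
        while len(elite) + len(children) < pop_size:
            a, b = rng.sample(elite, 2)
            cut = rng.randrange(1, n_genes)   # one-point crossover
            child = a[:cut] + b[cut:]
            i = rng.randrange(n_genes)        # point mutation
            child[i] += rng.gauss(0, 0.1)
            children.append(child)
        pop = elite + children
    return min(pop, key=objective)

# Illustrative objective: distance of the coefficients from a target.
target = [0.3, -0.2, 0.5, 0.1]
best = evolve(lambda g: sum((x - t) ** 2 for x, t in zip(g, target)))
```

In a real injector optimization the objective would be a beam-dynamics simulation evaluated for each candidate profile, which is why GA frameworks for this purpose are usually parallelized.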
On the convergence of inexact Uzawa algorithms
Welfert, B.D.
1994-12-31
The author considers the solution of symmetric indefinite systems which can be cast in matrix block form, where the diagonal blocks A and C are symmetric positive definite and semi-definite, respectively. Systems of this type arise frequently in quadratic minimization problems, as well as in mixed finite element discretizations of fluid flow equations. The author uses the Uzawa algorithm to precondition the matrix equations.
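The basic Uzawa iteration for such a saddle-point system can be sketched as follows. This uses an exact inner solve for clarity; the "inexact" variants studied in the paper replace that solve with a cheap approximation (a preconditioner or a few inner iterations). The matrices and step size below are illustrative assumptions.

```python
import numpy as np

# Sketch of the Uzawa iteration for [[A, B^T], [B, 0]] [x; y] = [f; g],
# with A symmetric positive definite. An inexact variant would replace
# np.linalg.solve with an approximate inner solver.
def uzawa(A, B, f, g, omega=1.0, iters=200):
    y = np.zeros(B.shape[0])
    for _ in range(iters):
        x = np.linalg.solve(A, f - B.T @ y)   # inner solve with A
        y = y + omega * (B @ x - g)           # multiplier update
    return x, y

# Small illustrative system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
B = np.array([[1.0, 2.0]])
f = np.array([1.0, 2.0])
g = np.array([0.5])
x, y = uzawa(A, B, f, g, omega=0.5)
```

The iteration converges when 0 < omega < 2 / lambda_max(B A^{-1} B^T), which is why the choice of step size (and of the inner solver's accuracy) is central to the convergence analysis.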
Gamma-ray spectral analysis algorithm library
Energy Science and Technology Software Center (OSTI)
2013-05-06
The routines of the Gauss Algorithms library are used to implement special purpose products that need to analyze gamma-ray spectra from Ge semiconductor detectors as a part of their function. These routines provide the ability to calibrate energy, calibrate peakwidth, search for peaks, search for regions, and fit the spectral data in a given region to locate gamma rays.
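One step in that pipeline, estimating a located peak's centroid and width within a region, can be sketched with background-subtracted moments. This is an illustrative stand-in, not a Gauss Algorithms routine; the linear background model and synthetic spectrum are assumptions.

```python
import numpy as np

# Hypothetical sketch: estimate a peak's centroid and FWHM in a spectral
# region via moments of the background-subtracted counts.
def peak_moments(channels, counts):
    # linear background interpolated from the region's end points
    bg = np.interp(channels, [channels[0], channels[-1]],
                   [counts[0], counts[-1]])
    net = np.clip(counts - bg, 0, None)
    centroid = np.sum(channels * net) / np.sum(net)
    var = np.sum((channels - centroid) ** 2 * net) / np.sum(net)
    fwhm = 2.3548 * np.sqrt(var)       # Gaussian: FWHM = 2.3548 * sigma
    return centroid, fwhm

# Synthetic Gaussian peak (centroid 100, sigma 3) on a flat background
ch = np.arange(80, 121, dtype=float)
cnt = 50.0 + 400.0 * np.exp(-0.5 * ((ch - 100.0) / 3.0) ** 2)
c, w = peak_moments(ch, cnt)
```

Production codes refine such moment estimates with a nonlinear Gaussian-plus-background fit over the region, which is what "fit the spectral data in a given region" refers to above.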
Gamma-ray Spectral Analysis Algorithm Library
Energy Science and Technology Software Center (OSTI)
1997-09-25
The routines of the Gauss Algorithm library are used to implement special purpose products that need to analyze gamma-ray spectra from Ge semiconductor detectors as a part of their function. These routines provide the ability to calibrate energy, calibrate peakwidth, search for peaks, search for regions, and fit the spectral data in a given region to locate gamma rays.
PDES. FIPS Standard Data Encryption Algorithm
Nessett, D.N.
1992-03-03
PDES performs the National Bureau of Standards FIPS Pub. 46 data encryption/decryption algorithm used for the cryptographic protection of computer data. The DES algorithm is designed to encipher and decipher blocks of data consisting of 64 bits under control of a 64-bit key. The key is generated in such a way that each of the 56 bits used directly by the algorithm is random, and the remaining 8 error-detecting bits are set to make the parity of each 8-bit byte of the key odd, i.e., there is an odd number of 1 bits in each 8-bit byte. Each member of a group of authorized users of encrypted computer data must have the key that was used to encipher the data in order to use it. Data can be recovered from cipher only by using exactly the same key used to encipher it, but with the schedule of addressing the key bits altered so that the deciphering process is the reverse of the enciphering process. A block of data to be enciphered is subjected to an initial permutation, then to a complex key-dependent computation, and finally to a permutation which is the inverse of the initial permutation. Two PDES routines are included; both perform the same calculation. One, identified as FDES.MAR, is designed to achieve speed in execution, while the other, identified as PDES.MAR, presents a clearer view of how the algorithm is executed.
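The key-parity rule described above (the eighth bit of each key byte makes the byte's 1-bit count odd) can be sketched directly. This is only a parity check/fix utility, not the DES cipher itself; the helper names are assumptions.

```python
# Sketch of the DES key-parity convention: the low bit of each 8-bit key
# byte is set so that the byte contains an odd number of 1 bits.
def fix_parity(key_bytes):
    out = bytearray()
    for b in key_bytes:
        ones = bin(b >> 1).count("1")            # the 7 key bits
        out.append((b & 0xFE) | (0 if ones % 2 else 1))
    return bytes(out)

def has_odd_parity(key_bytes):
    return all(bin(b).count("1") % 2 == 1 for b in key_bytes)

key = fix_parity(bytes(range(8)))
```

Implementations use this check to detect keys corrupted in transit or storage before attempting to decipher with them.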
Control algorithms for autonomous robot navigation
Jorgensen, C.C.
1985-09-20
This paper examines control algorithm requirements for autonomous robot navigation outside laboratory environments. Three aspects of navigation are considered: navigation control in explored terrain, environment interactions with robot sensors, and navigation control in unanticipated situations. Major navigation methods are presented and relevance of traditional human learning theory is discussed. A new navigation technique linking graph theory and incidental learning is introduced.
Semi-Implicit Reversible Algorithms for Rigid Body Rotational Dynamics
Nukala, Phani K; Shelton Jr, William Allison
2006-09-01
This paper presents two semi-implicit algorithms based on splitting methodology for rigid body rotational dynamics. The first algorithm is a variation of partitioned Runge-Kutta (PRK) methodology that can be formulated as a splitting method. The second algorithm is akin to a multiple time stepping scheme and is based on modified Crouch-Grossman (MCG) methodology, which can also be expressed as a splitting algorithm. These algorithms are second-order accurate and time-reversible; however, they are not Poisson integrators, i.e., they are non-symplectic. The algorithms conserve some of the first integrals of motion but not others; the fluctuations in the non-conserved invariants, however, remain bounded over exponentially long time intervals. These algorithms exhibit excellent long-term behavior because of their reversibility and their approximate preservation of the Poisson structure. The numerical results indicate that the proposed algorithms exhibit superior performance compared to some currently well known algorithms such as the Simo-Wong algorithm, the Newmark algorithm, the discrete Moser-Veselov algorithm, the Lewis-Simo algorithm, and the LIEMID[EA] algorithm.
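The flavor of such splitting methods can be illustrated on the free rigid body: the Hamiltonian H = Σ mᵢ²/(2Iᵢ) splits into three pieces, each an exactly solvable rotation of the body angular momentum, and a Strang composition of the three flows gives a second-order, time-reversible scheme. This is a textbook splitting sketch, not the paper's PRK or MCG algorithm; the inertia values and initial data are assumptions.

```python
import math

# Exact flow of H_axis = m_axis^2 / (2 I_axis): rotates the other two
# components of the body angular momentum m about that axis.
def rot(m, axis, I_axis, dt):
    w = m[axis] / I_axis * dt
    i, j = (axis + 1) % 3, (axis + 2) % 3
    c, s = math.cos(w), math.sin(w)
    mi, mj = m[i], m[j]
    m = list(m)
    m[i] = c * mi + s * mj
    m[j] = -s * mi + c * mj
    return tuple(m)

# Strang (symmetric) composition: second-order and time-reversible.
def strang_step(m, I, dt):
    m = rot(m, 0, I[0], dt / 2)
    m = rot(m, 1, I[1], dt / 2)
    m = rot(m, 2, I[2], dt)
    m = rot(m, 1, I[1], dt / 2)
    m = rot(m, 0, I[0], dt / 2)
    return m

I = (1.0, 2.0, 3.0)                      # principal moments of inertia
m = (1.0, 0.4, 0.2)                      # body angular momentum
norm0 = sum(x * x for x in m)
E0 = sum(x * x / (2 * Ii) for x, Ii in zip(m, I))
for _ in range(1000):
    m = strang_step(m, I, 0.05)
norm_final = sum(x * x for x in m)
E_final = sum(x * x / (2 * Ii) for x, Ii in zip(m, I))
```

Because each sub-flow is an exact rotation, |m|² is conserved to roundoff, while the energy is not exactly conserved but its error stays bounded, mirroring the invariant behavior described in the abstract.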
Electronic structure basis for the extraordinary magnetoresistance in WTe2
Pletikosić, I.; Ali, Mazhar N.; Fedorov, A. V.; Cava, R. J.; Valla, T.
2014-11-19
The electronic structure basis of the extremely large magnetoresistance in layered non-magnetic tungsten ditelluride has been investigated by angle-resolved photoelectron spectroscopy. Hole and electron pockets of approximately the same size were found at the Fermi level, suggesting that carrier compensation should be considered the primary source of the effect. The material exhibits a highly anisotropic, quasi one-dimensional Fermi surface from which the pronounced anisotropy of the magnetoresistance follows. A change in the Fermi surface with temperature was found, and a high-density-of-states band that may take over conduction at higher temperatures and cause the observed turn-on behavior of the magnetoresistance in WTe₂ was identified.
Electronic structure basis for the titanic magnetoresistance in WTe₂
Pletikosić, I.; Ali, Mazhar N.; Fedorov, A. V.; Cava, R. J.; Valla, T.
2014-11-19
The electronic structure basis of the extremely large magnetoresistance in layered non-magnetic tungsten ditelluride has been investigated by angle-resolved photoelectron spectroscopy. Hole and electron pockets of approximately the same size were found at the Fermi level, suggesting that carrier compensation should be considered the primary source of the effect. The material exhibits a highly anisotropic, quasi one-dimensional Fermi surface from which the pronounced anisotropy of the magnetoresistance follows. A change in the Fermi surface with temperature was found, and a high-density-of-states band that may take over conduction at higher temperatures and cause the observed turn-on behavior of the magnetoresistance in WTe₂ was identified.
Draft Geologic Disposal Requirements Basis for STAD Specification
Ilgen, Anastasia G.; Bryan, Charles R.; Hardin, Ernest
2015-03-25
This document provides the basis for requirements in the current version of Performance Specification for Standardized Transportation, Aging, and Disposal Canister Systems, (FCRD-NFST-2014-0000579) that are driven by storage and geologic disposal considerations. Performance requirements for the Standardized Transportation, Aging, and Disposal (STAD) canister are given in Section 3.1 of that report. Here, the requirements are reviewed and the rationale for each provided. Note that, while FCRD-NFST-2014-0000579 provides performance specifications for other components of the STAD storage system (e.g. storage overpack, transfer and transportation casks, and others), these have no impact on the canister performance during disposal, and are not discussed here.
Advanced Fuel Cycle Economic Tools, Algorithms, and Methodologies
David E. Shropshire
2009-05-01
The Advanced Fuel Cycle Initiative (AFCI) Systems Analysis supports engineering economic analyses and trade studies, and requires a reference cost basis to support adequate analytical rigor. In this regard, the AFCI program has created a reference set of economic documentation. The documentation consists of the "Advanced Fuel Cycle (AFC) Cost Basis" report (Shropshire, et al. 2007), the "AFCI Economic Analysis" report, and the "AFCI Economic Tools, Algorithms, and Methodologies" report. Together, these documents provide the reference cost basis, cost modeling basis, and methodologies needed to support AFCI economic analysis. The application of the reference cost data in the cost and econometric systems analysis models will be supported by this report. These methodologies include: the energy/environment/economic evaluation of nuclear technology penetration in the energy market, both domestically and internationally, and impacts on AFCI facility deployment; uranium resource modeling to inform front-end fuel cycle costs; facility first-of-a-kind to nth-of-a-kind learning with application to deployment of AFCI facilities; cost tradeoffs to meet nuclear non-proliferation requirements; and international nuclear facility supply/demand analysis. The economic analysis will be performed using two cost models. VISION.ECON will be used to evaluate and compare costs under dynamic conditions, consistent with the cases and analysis performed by the AFCI Systems Analysis team. Generation IV Excel Calculations of Nuclear Systems (G4-ECONS) will provide static (snapshot-in-time) cost analysis and a check on the dynamic results. In future analysis, additional AFCI measures may be developed to show the value of AFCI in closing the fuel cycle. Comparisons can show AFCI in terms of reduced global proliferation (e.g., reduction in enrichment), greater sustainability through preservation of a natural resource (e.g., reduction in uranium ore depletion), value from
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2009-08-28
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL's Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as a means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL's Electronic Records & Information Capture Architecture (ERICA) database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document.
A garbage collection algorithm for shared memory parallel processors
Crammond, J.
1988-12-01
This paper describes a technique for adapting the Morris sliding garbage collection algorithm to execute on parallel machines with shared memory. The algorithm is described within the framework of an implementation of the parallel logic language Parlog. However, the algorithm is a general one and can easily be adapted to parallel Prolog systems and to other languages. The performance of the algorithm executing a few simple Parlog benchmarks is analyzed. Finally, it is shown how the technique for parallelizing the sequential algorithm can be adapted for a semi-space copying algorithm.
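A sliding (compacting) collection of the kind being parallelized can be sketched on a toy heap. This sketch uses the simpler two-pass ("Lisp-2") style rather than Morris's pointer-threading variant, and the heap representation is an assumption for illustration only.

```python
# Hedged sketch of sliding compaction: pass 1 computes forwarding
# addresses for live objects, pass 2 rewrites references and slides
# objects down, preserving allocation order (as sliding collectors do).
def compact(heap, live):
    # heap: {address: {"size": int, "refs": [addresses]}}
    forward, free = {}, 0
    for addr in sorted(heap):               # pass 1: forwarding addresses
        if addr in live:
            forward[addr] = free
            free += heap[addr]["size"]
    new_heap = {}
    for addr in sorted(heap):               # pass 2: fix refs and slide
        if addr in live:
            obj = heap[addr]
            new_heap[forward[addr]] = {
                "size": obj["size"],
                "refs": [forward[r] for r in obj["refs"]],
            }
    return new_heap

heap = {0: {"size": 2, "refs": [5]},
        2: {"size": 3, "refs": []},         # dead object
        5: {"size": 1, "refs": [0]}}
compacted = compact(heap, live={0, 5})
```

The parallelization challenge addressed in the paper is precisely that both passes touch shared addresses, so processors must coordinate on the forwarding information.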
Research in theoretical nuclear and neutrino physics. Final report
Office of Scientific and Technical Information (OSTI)
The main focus of the research supported by nuclear theory grant DE-FG02-04ER41319 was on studying parton dynamics in high-energy heavy-ion collisions, a perturbative approach to charm production and its contribution to atmospheric neutrinos, and application of the AdS/CFT approach to
Theoretical solution of the minimum charge problem for gaseous detonations
Ostensen, R.W.
1990-12-01
A theoretical model was developed for the minimum charge to trigger a gaseous detonation in spherical geometry as a generalization of the Zeldovich model. Careful comparisons were made between the theoretical predictions and experimental data on the minimum charge to trigger detonations in propane-air mixtures. The predictions are an order of magnitude too high, and there is no apparent resolution to the discrepancy. A dynamic model, which takes into account the experimentally observed oscillations in the detonation zone, may be necessary for reliable predictions. 27 refs., 9 figs.
Theoretical and experimental investigation of heat pipe solar collector
Azad, E.
2008-09-15
A heat pipe solar collector was designed and constructed at IROST, and its performance was measured on an outdoor test facility. The thermal behavior of a gravity-assisted heat pipe solar collector was investigated theoretically and experimentally. A theoretical model based on the effectiveness-NTU method was developed for evaluating the thermal efficiency of the collector, the inlet and outlet water temperatures, and the heat pipe temperature. The optimum value of the evaporator-length to condenser-length ratio is also determined. The model predictions were validated using experimental data and show good agreement between measured and predicted results. (author)
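The core effectiveness-NTU relation for a heat pipe condenser can be sketched as follows: the heat pipe side is nearly isothermal, so the condenser behaves like an exchanger with capacity ratio ≈ 0 and ε = 1 − exp(−NTU). This is a generic textbook relation, not the paper's full collector model, and the numbers below are illustrative assumptions, not the paper's data.

```python
import math

# Hedged sketch: water outlet temperature and useful gain for a condenser
# coupled to a (nearly isothermal) heat pipe at temperature T_hp.
def condenser_outlet(T_in, T_hp, UA, m_dot, cp):
    NTU = UA / (m_dot * cp)            # number of transfer units
    eps = 1.0 - math.exp(-NTU)         # effectiveness, capacity ratio ~ 0
    T_out = T_in + eps * (T_hp - T_in)
    Q = m_dot * cp * (T_out - T_in)    # useful heat gain, W
    return T_out, Q

# Illustrative values: 25 C inlet water, 70 C heat pipe, UA = 40 W/K,
# 0.01 kg/s water flow
T_out, Q = condenser_outlet(T_in=25.0, T_hp=70.0, UA=40.0,
                            m_dot=0.01, cp=4186.0)
```

Summing such a relation over the collector's heat pipes, with T_hp obtained from an energy balance on the absorber, is the kind of calculation the effectiveness-NTU model above enables.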
Fast computation algorithms for speckle pattern simulation
Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru
2013-11-13
We present a series of efficient computation algorithms, generally usable for calculating light diffraction and particularly for speckle pattern simulation. We use mainly scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They evaluate the diffraction formula much faster than direct computation, and they circumvent the restrictions on the relative sizes of the input and output domains encountered in commonly used procedures. Moreover, the input and output planes can be tilted relative to each other, and the output domain can be shifted off-axis.
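The FFT-convolution idea can be illustrated with the standard transfer-function form of Fresnel propagation: one forward FFT, a multiplication by the Fresnel transfer function, and one inverse FFT replace the direct O(N²) double sum. This is the common textbook scheme, not the authors' extended algorithm (which additionally relaxes sampling restrictions and allows tilted, off-axis output planes); the grid and wavelength are illustrative assumptions.

```python
import numpy as np

# Sketch of convolution-based Fresnel propagation (transfer-function /
# angular-spectrum form) over distance z for field u0 sampled at pitch dx.
def fresnel_propagate(u0, wavelength, z, dx):
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)              # spatial frequencies
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# Square aperture illuminated by 0.5 um light, propagated 10 cm
n, dx = 256, 10e-6
u0 = np.zeros((n, n), dtype=complex)
u0[96:160, 96:160] = 1.0
u1 = fresnel_propagate(u0, 0.5e-6, 0.1, dx)
```

Because the transfer function has unit modulus, the propagation conserves total power, a convenient sanity check on any FFT-based diffraction code.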
Structural basis for the antibody neutralization of Herpes simplex virus
Lee, Cheng-Chung; Lin, Li-Ling; Chan, Woan-Eng; Ko, Tzu-Ping; Lai, Jiann-Shiun; Wang, Andrew H.-J.
2013-10-01
The gD–E317-Fab complex crystal revealed the conformational epitope of human mAb E317 on HSV gD, providing a molecular basis for understanding the viral neutralization mechanism. Glycoprotein D (gD) of Herpes simplex virus (HSV) binds to a host cell surface receptor, which is required to trigger membrane fusion for virion entry into the host cell. gD has become a validated anti-HSV target for therapeutic antibody development. The highly inhibitory human monoclonal antibody E317 (mAb E317) was previously raised against HSV gD for viral neutralization. To understand the structural basis of antibody neutralization, crystals of the gD ectodomain bound to the E317 Fab domain were obtained. The structure of the complex reveals that E317 interacts with gD mainly through the heavy chain, which covers a large area for epitope recognition on gD, with a flexible N-terminal and C-terminal conformation. The epitope core structure maps to the external surface of gD, corresponding to the binding sites of two receptors, herpesvirus entry mediator (HVEM) and nectin-1, which mediate HSV infection. E317 directly recognizes the gD–nectin-1 interface and occludes the HVEM contact site of gD to block its binding to either receptor. The binding of E317 to gD also prohibits the formation of the N-terminal hairpin of gD for HVEM recognition. The major E317-binding site on gD overlaps with either the nectin-1-binding residues or the neutralizing antigenic sites identified thus far (Tyr38, Asp215, Arg222 and Phe223). The epitopes of gD for E317 binding are highly conserved between two types of human herpesvirus (HSV-1 and HSV-2). This study enables the virus-neutralizing epitopes to be correlated with the receptor-binding regions. The results further strengthen the previously demonstrated therapeutic and diagnostic potential of the E317 antibody.
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2011-04-04
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at the U.S. Department of Energy (DOE) Hanford site. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with requirements of 10 CFR 835, the DOE Laboratory Accreditation Program, the DOE Richland Operations Office, DOE Office of River Protection, DOE Pacific Northwest Office of Science, and Hanford's DOE contractors. The dosimetry system is operated by the Pacific Northwest National Laboratory (PNNL) Hanford External Dosimetry Program which provides dosimetry services to PNNL and all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee which is chartered and chaired by DOE-RL and serves as a means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since its inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL's Electronic Records & Information Capture Architecture database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving significant changes to all chapters in the document. Revision
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2010-04-01
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at the U.S. Department of Energy (DOE) Hanford site. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with requirements of 10 CFR 835, the DOE Laboratory Accreditation Program, the DOE Richland Operations Office, DOE Office of River Protection, DOE Pacific Northwest Office of Science, and Hanford's DOE contractors. The dosimetry system is operated by the Pacific Northwest National Laboratory (PNNL) Hanford External Dosimetry Program which provides dosimetry services to PNNL and all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee which is chartered and chaired by DOE-RL and serves as a means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since its inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. The first revision to be released through PNNL's Electronic Records & Information Capture Architecture database was designated Revision 0. Revision numbers that are whole numbers reflect major revisions typically involving significant changes to all chapters in the document. Revision
Hanford External Dosimetry Technical Basis Manual PNL-MA-842
Rathbone, Bruce A.
2007-03-12
The Hanford External Dosimetry Technical Basis Manual PNL-MA-842 documents the design and implementation of the external dosimetry system used at Hanford. The manual describes the dosimeter design, processing protocols, dose calculation methodology, radiation fields encountered, dosimeter response characteristics, limitations of dosimeter design under field conditions, and makes recommendations for effective use of the dosimeters in the field. The manual describes the technical basis for the dosimetry system in a manner intended to help ensure defensibility of the dose of record at Hanford and to demonstrate compliance with 10 CFR 835, DOELAP, DOE-RL, ORP, PNSO, and Hanford contractor requirements. The dosimetry system is operated by PNNL's Hanford External Dosimetry Program (HEDP) which provides dosimetry services to all Hanford contractors. The primary users of this manual are DOE and DOE contractors at Hanford using the dosimetry services of PNNL. Development and maintenance of this manual is funded directly by DOE and DOE contractors. Its contents have been reviewed and approved by DOE and DOE contractors at Hanford through the Hanford Personnel Dosimetry Advisory Committee (HPDAC) which is chartered and chaired by DOE-RL and serves as a means of coordinating dosimetry practices across contractors at Hanford. This manual was established in 1996. Since inception, it has been revised many times and maintained by PNNL as a controlled document with controlled distribution. Rev. 0 marks the first revision to be released through PNNL's Electronic Records & Information Capture Architecture (ERICA) database. Revision numbers that are whole numbers reflect major revisions typically involving changes to all chapters in the document. Revision numbers that include a decimal fraction reflect minor revisions, usually restricted to selected chapters or selected pages in the document. Revision Log: Rev. 0 (2/25/2005) Major revision and expansion. Rev. 0.1 (3/12/2007) Minor
Charles, P. H. Crowe, S. B.; Langton, C. M.; Trapp, J. V.; Cranmer-Sargison, G.; Thwaites, D. I.; Kairn, T.; Knight, R. T.; Kenny, J.
2014-04-15
Purpose: This work introduces the concept of very small field size. Output factor (OPF) measurements at these field sizes require extremely careful experimental methodology including the measurement of dosimetric field size at the same time as each OPF measurement. Two quantifiable scientific definitions of the threshold of very small field size are presented. Methods: A practical definition was established by quantifying the effect that a 1 mm error in field size or detector position had on OPFs and setting acceptable uncertainties on OPF at 1%. Alternatively, for a theoretical definition of very small field size, the OPFs were separated into additional factors to investigate the specific effects of lateral electronic disequilibrium, photon scatter in the phantom, and source occlusion. The dominant effect was established and formed the basis of a theoretical definition of very small fields. Each factor was obtained using Monte Carlo simulations of a Varian iX linear accelerator for various square field sizes of side length from 4 to 100 mm, using a nominal photon energy of 6 MV. Results: According to the practical definition established in this project, field sizes ≤15 mm were considered to be very small for 6 MV beams for maximal field size uncertainties of 1 mm. If the acceptable uncertainty in the OPF was increased from 1.0% to 2.0%, or field size uncertainties are 0.5 mm, field sizes ≤12 mm were considered to be very small. Lateral electronic disequilibrium in the phantom was the dominant cause of change in OPF at very small field sizes. Thus the theoretical definition of very small field size coincided with the field size at which lateral electronic disequilibrium clearly caused a greater change in OPF than any other effect. This was found to occur at field sizes ≤12 mm. Source occlusion also caused a large change in OPF for field sizes ≤8 mm. Based on the results of this study, field sizes ≤12 mm were considered to be theoretically very small for 6 MV beams.
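The practical criterion above (a field size is "very small" once a 1 mm field-size error shifts the OPF by more than 1%) can be sketched numerically. The OPF curve below is a hypothetical illustrative function, not the paper's measured or simulated data, so the threshold it yields is a property of that curve only.

```python
import numpy as np

# Hedged numerical sketch of the practical "very small field" criterion,
# using an assumed, illustrative OPF(field size) curve.
sizes = np.linspace(4.0, 40.0, 400)               # field size, mm
opf = 1.0 - 0.55 * np.exp(-sizes / 8.0)           # illustrative OPF curve

def relative_shift(s, delta=1.0):
    # relative OPF change caused by a field-size error of `delta` mm
    o = np.interp(s, sizes, opf)
    o_err = np.interp(s + delta, sizes, opf)
    return abs(o_err - o) / o

very_small = [s for s in sizes if relative_shift(s) > 0.01]
threshold = max(very_small)                       # largest "very small" size
```

With real OPF data in place of the assumed curve, the same scan reproduces the kind of threshold analysis that led to the ≤15 mm practical definition.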
Algorithmic Techniques for Massive Data Sets
Moses Charikar
2006-04-03
This report describes the progress made during the Early Career Principal Investigator (ECPI) project on Algorithmic Techniques for Large Data Sets. Research was carried out in the areas of dimension reduction, clustering and finding structure in data, aggregating information from different sources and designing efficient methods for similarity search for high dimensional data. A total of nine different research results were obtained and published in leading conferences and journals.
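One dimension-reduction primitive from this research area, a Gaussian random projection in the Johnson-Lindenstrauss style, can be sketched briefly. This is a generic illustration of the technique, not the project's specific algorithms; the dimensions and seed are assumptions.

```python
import numpy as np

# Sketch of Gaussian random projection: shrink d-dimensional points to k
# dimensions while roughly preserving pairwise Euclidean distances.
rng = np.random.default_rng(0)
n, d, k = 50, 1000, 200
X = rng.normal(size=(n, d))                   # n points in dimension d
R = rng.normal(size=(d, k)) / np.sqrt(k)      # scaled projection matrix
Y = X @ R                                     # projected points

# distance distortion between two sample points
orig = np.linalg.norm(X[0] - X[1])
proj = np.linalg.norm(Y[0] - Y[1])
ratio = proj / orig
```

Distance-preserving projections like this are a standard building block for the similarity-search and clustering problems in high dimensions mentioned above.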
Automated Algorithm for MFRSR Data Analysis
U.S. Department of Energy (DOE) all webpages (Extended Search)
M. D. Alexandrov and B. Cairns (Columbia University and NASA Goddard Institute for Space Studies, New York, New York); A. A. Lacis and B. E. Carlson (NASA Goddard Institute for Space Studies, New York, New York); A. Marshak (NASA Goddard Space Flight Center, Greenbelt, Maryland). We present a substantial upgrade of our previously developed
Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids
Forest, Mark Gregory
2014-05-06
The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments, a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the long-time behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.
Madduri, Kamesh; Ediger, David; Jiang, Karl; Bader, David A.; Chavarria-Miranda, Daniel
2009-02-15
We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million, respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to the analysis of massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
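The exact betweenness-centrality kernel these implementations parallelize is commonly expressed via Brandes' accumulation scheme. The following sequential pure-Python sketch is illustrative only; it omits the lock-free data structures, approximation, and multithreading described above, and the function name is our own.

```python
from collections import deque

def betweenness_centrality(adj):
    """Exact betweenness centrality via Brandes' algorithm.

    adj: dict mapping each vertex to a list of neighbours
    (unweighted graph). Returns raw (unnormalized) scores."""
    bc = {v: 0.0 for v in adj}
    for s in adj:
        # Phase 1: BFS from s, counting shortest paths.
        stack = []
        pred = {v: [] for v in adj}    # predecessors on shortest paths
        sigma = {v: 0 for v in adj}    # number of shortest paths from s
        dist = {v: -1 for v in adj}
        sigma[s] = 1
        dist[s] = 0
        queue = deque([s])
        while queue:
            v = queue.popleft()
            stack.append(v)
            for w in adj[v]:
                if dist[w] < 0:
                    dist[w] = dist[v] + 1
                    queue.append(w)
                if dist[w] == dist[v] + 1:
                    sigma[w] += sigma[v]
                    pred[w].append(v)
        # Phase 2: accumulate dependencies in reverse BFS order.
        delta = {v: 0.0 for v in adj}
        while stack:
            w = stack.pop()
            for v in pred[w]:
                delta[v] += (sigma[v] / sigma[w]) * (1.0 + delta[w])
            if w != s:
                bc[w] += delta[w]
    return bc
```

For an undirected graph each pair is counted from both endpoints, so scores are conventionally halved; the parallel algorithms above differ in how the per-source accumulations are distributed across threads.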
Climate Change: The Physical Basis and Latest Results
None
2016-07-12
The 2007 Assessment Report of the Intergovernmental Panel on Climate Change (IPCC) concludes: "Warming in the climate system is unequivocal." Without the contribution of Physics to climate science over many decades, such a statement would not have been possible. Experimental physics enables us to read climate archives such as polar ice cores and so provides the context for the current changes. For example, today the concentration of CO2 in the atmosphere, the second most important greenhouse gas, is 28% higher than at any time during the last 800,000 years. Classical fluid mechanics and numerical mathematics are the basis of climate models from which estimates of future climate change are obtained. But major instabilities and surprises in the Earth System are still unknown. These are also to be considered when the climatic consequences of proposals for geo-engineering are estimated. Only Physics will permit us to further improve our understanding in order to provide the foundation for policy decisions facing the global climate change challenge.
Hanford Technical Basis for Multiple Dosimetry Effective Dose Methodology
Hill, Robin L.; Rathbone, Bruce A.
2010-08-01
The current method at Hanford for dealing with the results from multiple dosimeters worn during non-uniform irradiation is to use a compartmentalization method to calculate the effective dose (E). The method, as documented in the current version of Section 6.9.3 in the 'Hanford External Dosimetry Technical Basis Manual, PNL-MA-842,' is based on the compartmentalization method presented in the 1997 ANSI/HPS N13.41 standard, 'Criteria for Performing Multiple Dosimetry.' With the adoption of the ICRP 60 methodology in the 2007 revision to 10 CFR 835 came changes that have a direct effect on the compartmentalization method described in the 1997 ANSI/HPS N13.41 standard and, thus, on the method used at Hanford. The ANSI/HPS N13.41 standard committee is in the process of updating the standard, but the changes to the standard have not yet been approved, and the drafts of the revised standard tend to align more with ICRP 60 than with the changes specified in the 2007 revision to 10 CFR 835. Therefore, a revised method for calculating effective dose from non-uniform external irradiation using a compartmental method was developed using the tissue weighting factors and remainder organs specified in 10 CFR 835 (2007).
Development of engineering technology basis for industrialization of pyrometallurgical reprocessing
Koyama, Tadafumi; Hijikata, Takatoshi; Yokoo, Takeshi; Inoue, Tadashi
2007-07-01
Development of the engineering technology basis of pyrometallurgical reprocessing is a key issue for industrialization. To develop technologies for transporting molten salt and liquid cadmium at around 500 deg. C, a salt transport test rig and a metal transport test rig were installed in an Ar glove box. The functions of a centrifugal pump and 1/2' declined tubing were confirmed with LiCl-KCl molten salt. The transport behavior of the molten salt was found to follow that of water. The functions of a centrifugal pump, vacuum sucking, and 1/2' declined tubing were confirmed with liquid Cd. Employing these transport technologies, an electro-refiner applicable to industrialization was newly designed, and an engineering-scale model was fabricated in an Ar glove box. The electro-refiner has a semi-continuous liquid Cd cathode instead of the conventional one used in small-scale tests. Using actinide-simulating elements, demonstration of industrial-scale throughput will be carried out in this electro-refiner for a more precise evaluation of the industrialization potential of pyrometallurgical reprocessing. (authors)
Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems
Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj; Haglin, David J.
2012-07-03
We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these.
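As an illustration of the sequential Cyclic CD special case applied to an ℓ1-regularized objective, here is a hypothetical pure-Python lasso sketch; the function names and defaults are our own, and the parallel variants above (Shotgun, Thread-Greedy, Coloring-Based) differ mainly in how coordinate updates are scheduled across threads.

```python
def soft_threshold(z, t):
    """Soft-thresholding operator, the proximal map of the l1 norm."""
    if z > t:
        return z - t
    if z < -t:
        return z + t
    return 0.0

def cyclic_cd_lasso(X, y, lam, n_iter=200):
    """Cyclic coordinate descent for
    min_w 0.5*||Xw - y||^2 + lam*||w||_1.

    X: list of rows, y: list of targets. Pure-Python sketch."""
    n, d = len(X), len(X[0])
    w = [0.0] * d
    col_sq = [sum(X[i][j] ** 2 for i in range(n)) for j in range(d)]
    resid = [y[i] - sum(X[i][j] * w[j] for j in range(d)) for i in range(n)]
    for _ in range(n_iter):
        for j in range(d):
            if col_sq[j] == 0.0:
                continue
            # Correlation of column j with the partial residual
            # (residual with coordinate j's contribution added back).
            rho = sum(X[i][j] * (resid[i] + X[i][j] * w[j]) for i in range(n))
            w_new = soft_threshold(rho, lam) / col_sq[j]
            delta = w_new - w[j]
            if delta != 0.0:
                for i in range(n):
                    resid[i] -= X[i][j] * delta
                w[j] = w_new
    return w
```

Each coordinate update is an exact one-dimensional minimization, which is what makes the scheduling order (cyclic, stochastic, greedy, or colored-parallel) the main design axis.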
Zhang, Gang; Harichandran, Ronald S.; Ramuhalli, Pradeep
2011-09-13
Delamination is a commonly observed distress in concrete bridge decks. Among delamination detection methods, acoustic methods have the advantages of being fast and inexpensive. In traditional acoustic inspection methods, the inspector drags a chain along or hammers on the bridge deck and detects delamination from the 'hollowness' of the sounds. The signals are often contaminated by ambient traffic noise, and the detection of delamination is highly subjective. This paper describes the performance of an impact-based acoustic NDE method in which the traffic noise was filtered by employing a noise-cancelling algorithm and subjectivity was eliminated by introducing feature extraction and pattern recognition algorithms. Different algorithms were compared and the best one was selected in each category. The comparison showed that the modified independent component analysis (ICA) algorithm was most effective in cancelling the traffic noise, and features consisting of mel-frequency cepstral coefficients (MFCCs) had the best performance in terms of repeatability and separability. The condition of the bridge deck was then detected by a radial basis function (RBF) neural network. The performance of the system was evaluated using both experimental and field data. The results show that the selected algorithms increase the noise robustness of acoustic methods and perform satisfactorily if the training data are representative.
New algorithms for the symmetric tridiagonal eigenvalue computation
Pan, V.
1994-12-31
The author presents new algorithms that accelerate the bisection method for the symmetric eigenvalue problem. The algorithms rely on some new techniques, which include acceleration of Newton's iteration and can also be further applied to the acceleration of some other iterative processes, in particular, iterative algorithms for approximating polynomial zeros.
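The bisection method being accelerated works by counting eigenvalues below a trial shift via a Sturm-sequence recurrence and halving the bracketing interval. A minimal, unaccelerated Python sketch of that baseline (function names ours, for illustration only):

```python
def count_less_than(d, e, x):
    """Sturm count: number of eigenvalues of the symmetric tridiagonal
    matrix with diagonal d and off-diagonal e that lie below shift x."""
    count = 0
    q = 1.0
    for k in range(len(d)):
        ek2 = e[k - 1] ** 2 if k > 0 else 0.0
        if q == 0.0:
            q = 1e-300  # guard against exact breakdown of the recurrence
        q = d[k] - x - ek2 / q
        if q < 0.0:
            count += 1
    return count

def bisect_eigenvalue(d, e, k, lo, hi, tol=1e-10):
    """k-th smallest eigenvalue (0-based) by bisection on the Sturm count.
    [lo, hi] must bracket the whole spectrum (e.g. Gershgorin bounds)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_less_than(d, e, mid) > k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Plain bisection gains one bit of accuracy per count evaluation; the acceleration techniques in the paper aim to replace many of these halving steps with faster-converging iterations such as Newton's.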
Basis for seismic provisions of DOE-STD-1020
Kennedy, R.C.; Short, S.A.
1994-04-01
DOE-STD-1020 provides a graded approach for the seismic design and evaluation of DOE structures, systems, and components (SSC). Each SSC is assigned to a Performance Category (PC) with a performance description and an approximate annual probability of seismic-induced unacceptable performance, P{sub F}. Seismic annual probability performance goals are specified for PC 1 through 4, for which specific seismic design and evaluation criteria are presented. DOE-STD-1020 also provides a seismic design and evaluation procedure applicable to achieving any seismic performance goal (annual probability of unacceptable performance) specified by the user. The desired seismic performance goal is achieved by defining the seismic hazard in terms of a site-specific design/evaluation response spectrum (called herein the Design/Evaluation Basis Earthquake, DBE). Probabilistic seismic hazard estimates are used to establish the DBE. The resulting seismic hazard curves define the amplitude of the ground motion as a function of the annual probability of exceedance, P{sub H}, of the specified seismic hazard. Once the DBE is defined, the SSC is designed or evaluated for this DBE using adequately conservative deterministic acceptance criteria. To be adequately conservative, the acceptance criteria must introduce an additional reduction in the risk of unacceptable performance below the annual risk of exceeding the DBE. The ratio of the seismic hazard exceedance probability P{sub H} to the performance goal probability P{sub F} is defined herein as the risk reduction ratio. The required degree of conservatism in the deterministic acceptance criteria is a function of the specified risk reduction ratio.
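As a numerical illustration of the risk reduction ratio defined above (the probability values here are hypothetical, not taken from the standard): a DBE set at a hazard exceedance probability P{sub H} = 4e-4/yr against a performance goal P{sub F} = 1e-4/yr requires the deterministic acceptance criteria to supply a risk reduction ratio of 4.

```python
def risk_reduction_ratio(p_hazard_exceedance, p_performance_goal):
    """Risk reduction ratio R = P_H / P_F: the factor by which the
    deterministic acceptance criteria must reduce the annual risk of
    unacceptable performance below the DBE exceedance probability."""
    return p_hazard_exceedance / p_performance_goal

# Hypothetical values: P_H = 4e-4/yr, P_F = 1e-4/yr.
ratio = risk_reduction_ratio(4e-4, 1e-4)
```

The higher this ratio, the more conservatism the deterministic criteria must build in beyond the hazard definition itself.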
Practical auxiliary basis implementation of Rung 3.5 functionals
Janesko, Benjamin G.; Scalmani, Giovanni; Frisch, Michael J.
2014-07-21
Approximate exchange-correlation functionals for Kohn-Sham density functional theory often benefit from incorporating exact exchange. Exact exchange is constructed from the noninteracting reference system's nonlocal one-particle density matrix γ(r, r′). Rung 3.5 functionals attempt to balance the strengths and limitations of exact exchange using a new ingredient, a projection of γ(r, r′) onto a semilocal model density matrix γ{sub SL}(ρ(r), ∇ρ(r), r − r′). γ{sub SL} depends on the electron density ρ(r) at reference point r, and is closely related to semilocal model exchange holes. We present a practical implementation of Rung 3.5 functionals, expanding the r − r′ dependence of γ{sub SL} in an auxiliary basis set. Energies and energy derivatives are obtained from 3D numerical integration as in standard semilocal functionals. We also present numerical tests of a range of properties, including molecular thermochemistry and kinetics, geometries and vibrational frequencies, and bandgaps and excitation energies. Rung 3.5 functionals typically provide accuracy intermediate between semilocal and hybrid approximations. Nonlocal potential contributions from γ{sub SL} yield interesting successes and failures for band structures and excitation energies. The results enable and motivate continued exploration of Rung 3.5 functional forms.
Safety evaluation of MHTGR licensing basis accident scenarios
Kroeger, P.G.
1989-04-01
The safety potential of the Modular High-Temperature Gas Reactor (MHTGR) was evaluated, based on the Preliminary Safety Information Document (PSID), as submitted by the US Department of Energy to the US Nuclear Regulatory Commission. The relevant reactor safety codes were extended for this purpose and applied to this new reactor concept, searching primarily for potential accident scenarios that might lead to fuel failures due to excessive core temperatures and/or to vessel damage, due to excessive vessel temperatures. The design basis accident scenario leading to the highest vessel temperatures is the depressurized core heatup scenario without any forced cooling and with decay heat rejection to the passive Reactor Cavity Cooling System (RCCS). This scenario was evaluated, including numerous parametric variations of input parameters, like material properties and decay heat. It was found that significant safety margins exist, but that high confidence levels in the core effective thermal conductivity, the reactor vessel and RCCS thermal emissivities and the decay heat function are required to maintain this safety margin. Severe accident extensions of this depressurized core heatup scenario included the cases of complete RCCS failure, cases of massive air ingress, core heatup without scram and cases of degraded RCCS performance due to absorbing gases in the reactor cavity. Except for no-scram scenarios extending beyond 100 hr, the fuel never reached the limiting temperature of 1600 °C, below which measurable fuel failures are not expected. In some of the scenarios, excessive vessel and concrete temperatures could lead to investment losses but are not expected to lead to any source term beyond that from the circulating inventory. 19 refs., 56 figs., 11 tabs.
Evaluation of machine learning algorithms for prediction of regions of high RANS uncertainty
Ling, Julia; Templeton, Jeremy Alan
2015-08-04
Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
A Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-06-24
In this paper, we propose a new unified differential evolution (uDE) algorithm for single objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
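For context, the conventional DE/rand/1/bin baseline against which uDE is compared can be sketched as follows. This is a generic textbook variant with hypothetical parameter defaults, not the authors' unified mutation equation.

```python
import random

def de_rand_1_bin(f, bounds, np_=20, F=0.8, CR=0.9, gens=200, seed=1):
    """Conventional DE/rand/1/bin minimizer of f over box bounds."""
    rng = random.Random(seed)
    dim = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    cost = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # Three distinct random members, none equal to the target i.
            a, b, c = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(dim)  # guarantees one mutated coordinate
            trial = []
            for j in range(dim):
                if rng.random() < CR or j == jrand:
                    v = pop[a][j] + F * (pop[b][j] - pop[c][j])  # mutation
                    lo, hi = bounds[j]
                    v = min(max(v, lo), hi)  # clip to the search box
                else:
                    v = pop[i][j]
                trial.append(v)
            fc = f(trial)
            if fc <= cost[i]:  # greedy one-to-one selection
                pop[i], cost[i] = trial, fc
    best = min(range(np_), key=lambda i: cost[i])
    return pop[best], cost[best]
```

The mutation line `pop[a] + F*(pop[b] - pop[c])` is one of the many strategy choices that uDE folds into a single parameterized expression.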
Grant, C W; Lenderman, J S; Gansemer, J D
2011-02-24
This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect revised deliverables resulting from delays in obtaining a database refresh, and describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes, and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).
An Adaptive Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-11-03
In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
Algorithmic crystal chemistry: A cellular automata approach
Krivovichev, S. V.
2012-01-15
Atomic-molecular mechanisms of crystal growth can be modeled based on crystallochemical information using cellular automata (a particular case of finite deterministic automata). In particular, the formation of heteropolyhedral layered complexes in uranyl selenates can be modeled by applying a one-dimensional three-colored cellular automaton. The use of the theory of computation (in particular, the theory of automata) in crystallography allows one to interpret crystal growth as a computational process (the realization of an algorithm or program with a finite number of steps).
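A one-dimensional three-colored (three-state) cellular automaton of the kind mentioned can be sketched generically as below. The totalistic rule used here is a hypothetical example for illustration, not the specific rule derived for uranyl selenate layers.

```python
def evolve_ca(cells, rule, steps):
    """One-dimensional, three-state cellular automaton with
    nearest-neighbour interaction and periodic boundaries.

    rule: dict mapping (left, centre, right) triples of states {0,1,2}
    to a new state. Returns the list of generations, initial included."""
    n = len(cells)
    history = [list(cells)]
    for _ in range(steps):
        cells = [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
                 for i in range(n)]
        history.append(list(cells))
    return history

# Hypothetical totalistic rule: new state = (left + centre + right) mod 3.
rule = {(a, b, c): (a + b + c) % 3
        for a in range(3) for b in range(3) for c in range(3)}
```

Each generation of the automaton corresponds, in the crystal-growth interpretation, to one layer of polyhedra attached according to local crystallochemical constraints.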
Algorithms for Contact in a Mulitphysics Environment
Energy Science and Technology Software Center (OSTI)
2001-12-19
Many codes require either a contact capability or a need to determine geometric proximity of non-connected topological entities (which is a subset of what contact requires). ACME is a library to provide services to determine contact forces and/or geometric proximity interactions. This includes generic capabilities such as determining points in Cartesian volumes, finding faces in Cartesian volumes, etc. ACME can be run in single or multi-processor mode (the basic algorithms have been tested up to 4500 processors).
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E.; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi–Dirac distribution function, and scattering is via a Pauli blocked binary collision approximation. The algorithm is tested against degenerate electron–ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
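One simple way to initialise particle energies according to the Fermi–Dirac distribution, as the abstract describes, is rejection sampling against the 3D density of states. The sketch below is a generic illustration under assumed units (energies in units of the chemical potential), not the paper's actual initialisation procedure.

```python
import math
import random

def sample_fermi_dirac_energy(mu, T, e_max, rng):
    """Rejection-sample a particle energy e on [0, e_max] with density
    proportional to sqrt(e) / (exp((e - mu)/T) + 1), i.e. the 3D density
    of states weighted by Fermi-Dirac occupation."""
    def f(e):
        return math.sqrt(e) / (math.exp((e - mu) / T) + 1.0)
    # Constant envelope: maximum of the unnormalized density on a grid.
    f_max = max(f(e_max * k / 1000.0 + 1e-12) for k in range(1001))
    while True:
        e = rng.uniform(0.0, e_max)
        if rng.uniform(0.0, f_max) <= f(e):
            return e
```

In the strongly degenerate limit (T much less than mu) the accepted energies fill the Fermi sphere, with mean energy approaching (3/5) mu, which gives a quick sanity check on the sampler.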
Theoretical model for plasma expansion generated by hypervelocity impact
Ju, Yuanyuan; Zhang, Qingming; Zhang, Dongjiang; Long, Renrong; Chen, Li; Huang, Fenglei; Gong, Zizheng
2014-09-15
Hypervelocity impact experiments of a spherical LY12 aluminum projectile (diameter 6.4 mm) on a LY12 aluminum target (thickness 23 mm) have been conducted using a two-stage light gas gun. The impact velocities of the projectile were 5.2, 5.7, and 6.3 km/s. The experimental results show that the plasma phase transition appears under the current experimental conditions, and that the plasma expansion consists of accumulation, equilibrium, and attenuation stages. The plasma characteristic parameters decrease as the plasma expands outward and are proportional to the third power of the impact velocity, i.e., (T{sub e}, n{sub e}) ∝ v{sub p}{sup 3}. Based on the experimental results, a theoretical model of the plasma expansion is developed, and the theoretical results are consistent with the experimental data.
Theoretical evaluation of the optimal performance of a thermoacoustic refrigerator
Minner, B.L.; Braun, J.E.; Mongeau, L.G.
1997-12-31
Theoretical models were integrated with a design optimization tool to allow estimates of the maximum coefficient of performance for thermoacoustic cooling systems. The system model was validated using experimental results for a well-documented prototype. The optimization tool was then applied to this prototype to demonstrate the benefits of systematic optimization. A twofold increase in performance was predicted through the variation of component dimensions alone, while a threefold improvement was estimated when the working fluid parameters were also considered. Devices with a similar configuration were optimized for operating requirements representative of a home refrigerator. The results indicate that the coefficients of performance are comparable to those of existing vapor-compression equipment for this application. In addition to the choice of working fluid, the heat exchanger configuration was found to be a critical design factor affecting performance. Further experimental work is needed to confirm the theoretical predictions presented in this paper.
Materials for electrochemical capacitors: Theoretical and experimental constraints
Sarangapani, S.; Tilak, B.V.; Chen, C.P.
1996-11-01
Electrochemical capacitors, also called supercapacitors, are unique devices exhibiting 20 to 200 times greater capacitance than conventional capacitors. The large capacitance exhibited by these systems has been demonstrated to arise from a combination of the double-layer capacitance and pseudocapacitance associated with surface redox-type reactions. The purpose of this review is to survey the published data of available electrode materials possessing high specific double-layer or pseudocapacitance and examine their reported performance data in relation to their theoretical expectations.
Instrument design and optimization using genetic algorithms
Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter
2006-10-15
This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.
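The canonical GA loop referred to above (selection, crossover, mutation over a quantified figure of merit) can be sketched generically. This is a hypothetical minimal version, maximizing a user-supplied fitness over binary chromosomes; it is not the WASP design code, and all parameter defaults are illustrative.

```python
import random

def canonical_ga(fitness, n_bits, pop_size=40, pc=0.9, pm=0.02,
                 gens=100, seed=3):
    """Canonical GA: binary chromosomes, roulette-wheel selection,
    one-point crossover, bitwise mutation, elitism of one."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    for _ in range(gens):
        fits = [fitness(ind) for ind in pop]
        best = pop[max(range(pop_size), key=lambda i: fits[i])][:]
        total = sum(fits)

        def select():
            # Roulette wheel: probability proportional to fitness.
            if total == 0:
                return pop[rng.randrange(pop_size)]
            r = rng.uniform(0, total)
            acc = 0.0
            for ind, fit in zip(pop, fits):
                acc += fit
                if acc >= r:
                    return ind
            return pop[-1]

        nxt = [best]  # elitism: carry the best design over unchanged
        while len(nxt) < pop_size:
            p1, p2 = select()[:], select()[:]
            if rng.random() < pc:
                cut = rng.randrange(1, n_bits)  # one-point crossover
                p1 = p1[:cut] + p2[cut:]
            for j in range(n_bits):
                if rng.random() < pm:
                    p1[j] ^= 1  # bit-flip mutation
            nxt.append(p1)
        pop = nxt
    fits = [fitness(ind) for ind in pop]
    i = max(range(pop_size), key=lambda k: fits[k])
    return pop[i], fits[i]
```

For instrument design, the fitness function would decode the chromosome into physical parameters (coil geometries, distances, etc.) and return the quantified performance figure of merit.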
"Greenbook Algorithms and Hardware Needs Analysis"
De Jong, Wibe A.; Oehmen, Chris S.; Baxter, Douglas J.
2007-01-09
This document describes the algorithms and hardware balance requirements needed to enable the solution of real scientific problems in the DOE core mission areas of environmental and subsurface chemistry, computational and systems biology, and climate science. The MSCF scientific drivers have been outlined in the Greenbook, which is available online at http://mscf.emsl.pnl.gov/docs/greenbook_for_web.pdf . Historically, the primary science driver has been the chemical and the molecular dynamics of the biological science area, whereas the remaining applications in the biological and environmental systems science areas have been occupying a smaller segment of the available hardware resources. To go from science drivers to hardware balance requirements, the major applications were identified. Major applications on the MSCF resources are low- to high-accuracy electronic structure methods, molecular dynamics, regional climate modeling, subsurface transport, and computational biology. The algorithms of these applications were analyzed to identify the computational kernels in both sequential and parallel execution. This analysis shows that a balanced architecture is needed with respect to processor speed, peak flop rate, peak integer operation rate, memory hierarchy, interprocessor communication, and disk access and storage. A single architecture can satisfy the needs of all of the science areas, although some areas may take greater advantage of certain aspects of the architecture.
TECHNICAL BASIS FOR VENTILATION REQUIREMENTS IN TANK FARMS OPERATING SPECIFICATIONS DOCUMENTS
BERGLIN, E J
2003-06-23
This report provides the technical basis for high-efficiency particulate air (HEPA) filter requirements for Hanford tank farm ventilation systems (sometimes known as heating, ventilation, and air conditioning [HVAC] systems) to support limits defined in Process Engineering Operating Specification Documents (OSDs). This technical basis includes a review of the older technical basis and provides clarifications, as necessary, to justify revisions to the technical basis limits. This document provides an updated technical basis for tank farm ventilation systems related to the OSDs for double-shell tanks (DSTs), single-shell tanks (SSTs), double-contained receiver tanks (DCRTs), catch tanks, and various other miscellaneous facilities.
Daylighting simulation: methods, algorithms, and resources
Carroll, William L.
1999-12-01
This document presents work conducted as part of Subtask C, ''Daylighting Design Tools'', Subgroup C2, ''New Daylight Algorithms'', of the IEA SHC Task 21 and the ECBCS Program Annex 29 ''Daylight in Buildings''. The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: Geometry; Light modeling; Characterization of the natural illumination resource; Materials and components properties, representations; and Usability issues (interfaces, interoperability, representation of analysis results, etc). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: The commercial computer graphics community (commerce, entertainment); The lighting industry; Architectural rendering and visualization for projects; and Academia: Course materials, research. This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of
CRITICALITY SAFETY CONTROLS AND THE SAFETY BASIS AT PFP
Kessler, S
2009-04-21
While reviewing documents used in classifying controls for Nuclear Safety, it was noted that DOE-HDBK-1188, 'Glossary of Environment, Health, and Safety Terms', defines an Administrative Control (AC) in terms that are different than those typically used in Criticality Safety. As part of this CCR, a new term, Criticality Administrative Control (CAC), was defined to clarify the difference between an AC used for criticality safety and an AC used for nuclear safety. In Nuclear Safety terms, an AC is a provision relating to organization and management, procedures, recordkeeping, assessment, and reporting necessary to ensure safe operation of a facility. A CAC was defined as an administrative control derived in a criticality safety analysis that is implemented to ensure double contingency. According to criterion 2 of Section IV, 'Linkage to the Documented Safety Analysis', of DOE-STD-3007-2007, the consequence of a criticality should be examined for the purposes of classifying the significance of a control or component. HNF-PRO-700, 'Safety Basis Development', provides control selection criteria based on consequence and risk that may be used in the development of a Criticality Safety Evaluation (CSE) to establish the classification of a component as a design feature; as safety class or safety significant, i.e., an Engineered Safety Feature (ESF); as equipment important to safety; or as merely providing defense-in-depth. Similar logic is applied to the CACs. Criterion 8C of DOE-STD-3007-2007, as written, added to the confusion of using the basic CCR from HNF-7098. The PFP CCR attempts to clarify this criterion by revising it to say 'Programmatic commitments or general references to control philosophy (e.g., mass control or spacing control or concentration control as an overall control strategy for the process without specific quantification of individual limits) are included in the PFP DSA'. Table 1 shows the PFP methodology for evaluating CACs.
This evaluation process has been in use since
Critical dynamics of cluster algorithms in the dilute Ising model
Hennecke, M. (Universitaet Karlsruhe); Heyken, U.
1993-08-01
Autocorrelation times for thermodynamic quantities at T{sub c} are calculated from Monte Carlo simulations of the site-diluted simple cubic Ising model, using the Swendsen-Wang and Wolff cluster algorithms. The results show that for these algorithms the autocorrelation times decrease when reducing the concentration of magnetic sites from 100% down to 40%. This is of crucial importance when estimating static properties of the model, since the variances of these estimators increase with autocorrelation time. The dynamical critical exponents are calculated for both algorithms, observing pronounced finite-size effects in the energy autocorrelation data for the algorithm of Wolff. It is concluded that, when applied to the dilute Ising model, cluster algorithms become even more effective than local algorithms, for which increasing autocorrelation times are expected. 33 refs., 5 figs., 2 tabs.
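The integrated autocorrelation time that governs these variance estimates can be computed from a Monte Carlo time series with the standard self-consistent windowing procedure. The sketch below validates the estimator on an AR(1) process with a known autocorrelation time; the window factor c = 5 is a conventional choice, not a value taken from the paper.

```python
import numpy as np

def integrated_autocorr_time(x, c=5.0):
    """Integrated autocorrelation time tau = 1 + 2*sum_t rho(t), using the
    standard self-consistent window: stop once the window W >= c * tau."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    n = len(x)
    f = np.fft.rfft(x, 2 * n)                     # autocovariance via FFT
    acf = np.fft.irfft(f * np.conj(f))[:n]
    acf /= acf[0]                                 # normalize to rho(0) = 1
    tau = 1.0
    for w in range(1, n):
        tau = 1.0 + 2.0 * np.sum(acf[1:w + 1])
        if w >= c * tau:
            break
    return tau

# Validate on an AR(1) chain, for which tau = (1 + a) / (1 - a) = 19 exactly
rng = np.random.default_rng(0)
a = 0.9
x = np.empty(100_000)
x[0] = 0.0
for i in range(1, len(x)):
    x[i] = a * x[i - 1] + rng.standard_normal()
tau = integrated_autocorr_time(x)
```

Since the effective number of independent samples is roughly N/(2*tau), halving tau (as the cluster algorithms do at lower site concentrations) halves the variance of static estimators at fixed run length.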
Theoretical hot methane line lists up to T = 2000 K for astrophysical applications
Rey, M.; Tyuterev, Vl. G.; Nikitin, A. V.
2014-07-01
The paper describes the construction of complete sets of hot methane lines based on accurate ab initio potential and dipole moment surfaces and extensive first-principle calculations. Four line lists spanning the [0-5000] cm{sup –1} infrared region were built at T = 500, 1000, 1500, and 2000 K. For each of these four temperatures, we have constructed two versions of line lists: a version for high-resolution applications containing strong and medium lines and a full version appropriate for low-resolution opacity calculations. A comparison with available empirical databases is discussed in detail for both cold and hot bands, giving very good agreement: typically <0.1-0.5 cm{sup –1} for line positions and ≈5% for intensities of strong lines. Together with numerical tests using various basis sets, this confirms the computational convergence of our results for the most important lines, which is the major issue for theoretical spectra predictions. We showed that transitions with lower state energies up to 14,000 cm{sup –1} could give significant contributions to the methane opacity and have to be systematically taken into account. Our list at 2000 K calculated up to J = 50 contains 11.5 billion transitions for I > 10{sup –29} cm mol{sup –1}. These new lists are expected to be quantitatively accurate with respect to the precision of available and currently planned observations of astrophysical objects with improved spectral resolution.
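The claim that lower-state energies up to 14,000 cm{sup –1} matter at T = 2000 K follows directly from the Boltzmann factor exp(-c2 E''/T), with c2 = hc/k ≈ 1.4388 cm K the second radiation constant. A quick check of the orders of magnitude:

```python
import math

# Second radiation constant c2 = h*c/k_B in cm*K
C2 = 1.4388

def boltzmann_factor(e_lower_cm, temperature_k):
    """Relative population of a lower state at energy E'' (in cm^-1),
    ignoring degeneracy and the partition-function normalization."""
    return math.exp(-C2 * e_lower_cm / temperature_k)

# A lower state at E'' = 14,000 cm^-1 keeps a small but non-negligible
# population at 2000 K, while being utterly frozen out at room temperature.
f_hot = boltzmann_factor(14_000.0, 2000.0)
f_cold = boltzmann_factor(14_000.0, 296.0)
```

A relative population of a few times 10{sup –5} per state is individually tiny, but summed over billions of hot-band transitions it contributes measurably to the opacity, which is why such lines must be retained in the full lists.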
Laser cooling of MgCl and MgBr in theoretical approach
Wan, Mingjie; Shao, Juxiang; Huang, Duohui; Yang, Junsheng; Cao, Qilong; Jin, Chengguo; Wang, Fanhou; Gao, Yufeng
2015-07-14
Ab initio calculations for three low-lying electronic states (X{sup 2}Σ{sup +}, A{sup 2}Π, and 2{sup 2}Π) of MgCl and MgBr molecules, including spin-orbit coupling, are performed using the multi-reference configuration interaction plus Davidson correction method. The calculations involve all-electron basis sets and the Douglas–Kroll scalar relativistic correction. Spectroscopic parameters agree well with available theoretical and experimental data. Highly diagonally distributed Franck-Condon factors f{sub 00} for A{sup 2}Π{sub 3/2,1/2} (υ′ = 0) → X{sup 2}Σ{sup +}{sub 1/2} (υ″ = 0) are determined for both MgCl and MgBr molecules. Suitable radiative lifetimes τ of the A{sup 2}Π{sub 3/2,1/2} (υ′ = 0) states for rapid laser cooling are also obtained. The proposed scheme drives the A{sup 2}Π{sub 3/2} (υ′ = 0) → X{sup 2}Σ{sup +}{sub 1/2} (υ″ = 0) transition using three wavelengths (main pump laser λ{sub 00}; two repumping lasers λ{sub 10} and λ{sub 21}). These results indicate the feasibility of laser cooling MgCl and MgBr molecules.
Component evaluation testing and analysis algorithms.
Hart, Darren M.; Merchant, Bion John
2011-10-01
The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. In order to guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. Other reports document the testing procedures that are in place (Kromer, 2007). This document serves to provide a comprehensive overview of the analysis and the algorithms that are applied to the Component Evaluation testing. A brief summary of each test is included to provide the context for the analysis that is to be performed.
Automated DNA Base Pair Calling Algorithm
Energy Science and Technology Software Center (OSTI)
1999-07-07
The procedure solves the problem of calling the DNA base pair sequence from two channel electropherogram separations in an automated fashion. The core of the program involves a peak picking algorithm based upon first, second, and third derivative spectra for each electropherogram channel, signal levels as a function of time, peak spacing, base pair signal to noise sequence patterns, frequency vs ratio of the two channel histograms, and confidence levels generated during the run. The ratios of the two channels at peak centers can be used to accurately and reproducibly determine the base pair sequence. A further enhancement is a novel Gaussian deconvolution used to determine the peak heights used in generating the ratio.
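A minimal sketch of derivative-based peak picking of the kind described (a first-derivative sign change combined with a negative second derivative, above a noise floor). The Gaussian trace and thresholds below are synthetic illustrations, not the program's actual criteria.

```python
import numpy as np

def pick_peaks(signal, min_height=0.1):
    """Peak centers: first derivative crosses from >=0 to <0 while the second
    derivative is negative and the signal exceeds a noise floor."""
    s = np.asarray(signal, dtype=float)
    d1 = np.gradient(s)
    d2 = np.gradient(d1)
    return [i for i in range(1, len(s) - 1)
            if d1[i] >= 0 > d1[i + 1] and d2[i] < 0 and s[i] >= min_height]

# Two synthetic Gaussian peaks standing in for base-pair signals on a time axis
t = np.arange(200, dtype=float)
trace = np.exp(-0.5 * ((t - 60.0) / 4.0) ** 2) \
      + 0.7 * np.exp(-0.5 * ((t - 130.0) / 4.0) ** 2)
peaks = pick_peaks(trace)
```

In the real algorithm the located peak centers would then feed the two-channel ratio calculation that determines the base call.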
Neurons to algorithms LDRD final report.
Rothganger, Fredrick H.; Aimone, James Bradley; Warrender, Christina E.; Trumbo, Derek
2013-09-01
Over the last three years the Neurons to Algorithms (N2A) LDRD project team has built infrastructure to discover computational structures in the brain. This consists of a modeling language, a tool that enables model development and simulation in that language, and initial connections with the Neuroinformatics community, a group working toward similar goals. The approach of N2A is to express large complex systems like the brain as populations of discrete part types that have specific structural relationships with each other, along with internal and structural dynamics. Such an evolving mathematical system may be able to capture the essence of neural processing, and ultimately of thought itself. This final report is a cover for the actual products of the project: the N2A Language Specification, the N2A Application, and a journal paper summarizing our methods.
Dual pricing algorithm in ISO markets
O'Neill, Richard P.; Castillo, Anya; Eldridge, Brent; Hytowitz, Robin Broder
2016-10-10
The challenge to create efficient market clearing prices in centralized day-ahead electricity markets arises from inherent non-convexities in unit commitment problems. When this aspect is ignored, marginal prices may result in economic losses to market participants who are part of the welfare maximizing solution. In this essay, we present an axiomatic approach to efficient prices and cost allocation for a revenue neutral and non-confiscatory day-ahead market. Current cost allocation practices do not adequately attribute costs based on transparent cost causation criteria. Instead we propose an ex post multi-part pricing scheme, which we refer to as the Dual Pricing Algorithm. Lastly, our approach can be incorporated into current day-ahead markets without altering the market equilibrium.
Solar and Moon Position Algorithm (SAMPA) - Energy Innovation Portal
U.S. Department of Energy (DOE) all webpages (Extended Search)
Solar and Moon Position Algorithm (SAMPA), National Renewable Energy Laboratory. This algorithm calculates the solar and lunar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees for the Sun and +/- 0.003 degrees for the Moon, based on the date, time, and location on Earth.
New Design Methods and Algorithms for Multi-component Distillation
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Processes | Department of Energy. multicomponent.pdf (517.32 KB). More Documents & Publications: Development of Method and Algorithms To Identify Easily Implementable Energy-Efficient Low-Cost Multicomponent Distillation Column Trains With Large Energy Savings For Wide Number of Separations; CX-100137 Categorical Exclusion Determination; ITP
Microbial enhancement of non-Darcy flow: Theoretical consideration
Shi, Jianxin; Schneider, D.R.
1995-12-31
In the near well-bore region and perforations, petroleum fluids usually flow at high velocities and may exhibit non-Darcy-flow behavior. Microorganisms can increase permeability and porosity by removing paraffin or asphaltene accumulations. They can also reduce interfacial tension by producing biosurfactants. These changes can significantly affect non-Darcy flow behavior. Theoretical analysis shows that microbial activities can enhance production by decreasing the turbulence pressure drop and in some cases increasing the drag force exerted on the oil phase. This implies that the effects of microbial activities on non-Darcy flow are important and should be considered in the evaluation of microbial well stimulation and enhanced oil recovery.
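Non-Darcy flow of this kind is commonly modeled with the Forchheimer equation, in which the Darcy (viscous) term is augmented by an inertial term that dominates at high velocity. The sketch below shows how increasing permeability and reducing the non-Darcy coefficient, as microbial treatment might, lowers the pressure gradient; all property values are illustrative assumptions, not data from the paper.

```python
def forchheimer_gradient(mu, rho, v, k, beta):
    """Pressure gradient dp/dx = (mu/k)*v + beta*rho*v**2: a Darcy (viscous)
    term plus an inertial term that dominates at high near-wellbore velocity."""
    return mu * v / k + beta * rho * v * v

# Illustrative near-wellbore values (assumed): oil viscosity, density, velocity
mu, rho, v = 1.0e-3, 850.0, 0.1          # Pa*s, kg/m^3, m/s
before = forchheimer_gradient(mu, rho, v, k=1.0e-13, beta=1.0e8)
# Microbial removal of paraffin: permeability doubled, beta halved (assumed)
after = forchheimer_gradient(mu, rho, v, k=2.0e-13, beta=0.5e8)
```

Because the inertial term scales with v squared, the benefit of reducing beta grows rapidly toward the wellbore, where velocities are highest.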
Theoretical and experimental research on multi-beam klystron
Ding Yaogen; Peng Jun; Zhu Yunshu; Shi Shaoming [Institute of Electronics, Chinese Academy of Sciences, Beijing 100080 (China)
1999-05-07
Theoretical and experimental research work on the multi-beam klystron (MBK) conducted at the Institute of Electronics, Chinese Academy of Sciences (IECAS) is described in this paper. Research progress on the interaction between multiple electron beams and the microwave electric field, multi-beam cavities, a filter-loaded double-gap cavity broadband output circuit, multi-beam electron guns, and periodic reversal focusing systems is presented. Performance and measurement results for five types of MBK are also given. The key technical problems for present MBKs are discussed.
Subotnik, Joseph E. Ouyang, Wenjun; Landry, Brian R.
2013-12-07
In this article, we demonstrate that Tully's fewest-switches surface hopping (FSSH) algorithm approximately obeys the mixed quantum-classical Liouville equation (QCLE), provided that several conditions are satisfied – some major conditions, and some minor. The major conditions are: (1) nuclei must be moving quickly with large momenta; (2) there cannot be explicit recoherences or interference effects between nuclear wave packets; (3) force-based decoherence must be added to the FSSH algorithm, and the trajectories can no longer rigorously be independent (though approximations for independent trajectories are possible). We furthermore expect that FSSH (with decoherence) will be most robust when nonadiabatic transitions in an adiabatic basis are dictated primarily by derivative couplings that are presumably localized to crossing regions, rather than by small but pervasive off-diagonal force matrix elements. In the end, our results emphasize the strengths of and possibilities for the FSSH algorithm when decoherence is included, while also demonstrating the limitations of the FSSH algorithm and its inherent inability to follow the QCLE exactly.
Numerical Analysis of Fixed Point Algorithms in the Presence of Hardware Faults
Office of Scientific and Technical Information (OSTI)
DEVELOPMENT OF METHOD AND ALGORITHMS TO IDENTIFY EASILY IMPLEMENTABLE...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Purdue researchers will team with an IT company to develop a user-friendly interface for the algorithm and to better address software development and commercialization issues. The ...
NREL: Awards and Honors - Current Interrupt Charging Algorithm...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Current Interrupt Charging Algorithm for Lead-Acid Batteries Developers: Matthew A. Keyser, Ahmad A. Pesaran, and Mark M. Mihalic, National Renewable Energy Laboratory; Robert F....
A sequential implicit algorithm of chemo-thermo-poro-mechanics for fractured geothermal reservoirs
Office of Scientific and Technical Information (OSTI)
Authors: Kim, Jihoon; ...
PREPRINT An Efficient Algorithm for Geocentric to Geodetic Coordinate...
Office of Scientific and Technical Information (OSTI)
Correlation, Datum Transformation, Modeling and Simulation Interoperability. ... This algorithm is discussed in the context of machines that have FPUs and legacy machines ...
Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series Codes for Stochastic-Media Simulations
Office of Scientific and Technical Information (OSTI)
Algorithm for Finding Similar Shapes in Large Molecular Structures Libraries
Energy Science and Technology Software Center (OSTI)
1994-10-19
The SHAPES software consists of methods and algorithms for representing and rapidly comparing molecular shapes. Molecular shape algorithms are a class of algorithms derived and applied for recognizing when two three-dimensional shapes share common features. They proceed from the notion that the shapes to be compared are regions in three-dimensional space. The algorithms allow recognition of when localized subregions from two or more different shapes could never be superimposed by any rigid-body motion. Rigid-body motions are arbitrary combinations of translations and rotations.
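One classical invariant behind such rigid-body screening is the multiset of pairwise distances, which no combination of translations and rotations can change. The sketch below is a generic necessary-condition filter of this kind, not the SHAPES implementation.

```python
import itertools
import math

def distance_signature(points):
    """Sorted multiset of pairwise distances: invariant under every rigid-body
    motion, so two shapes with different signatures can never be superimposed.
    Equal signatures are necessary, not sufficient, for a match."""
    return sorted(math.dist(p, q) for p, q in itertools.combinations(points, 2))

def could_match(a, b, tol=1e-6):
    sa, sb = distance_signature(a), distance_signature(b)
    return len(sa) == len(sb) and all(abs(x - y) <= tol for x, y in zip(sa, sb))

# A toy shape and a rigidly rotated-and-translated copy of it
shape = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 2.0, 0.0)]
moved = [(1.0, 0.0, 5.0), (1.0, 1.0, 5.0), (-1.0, 0.0, 5.0)]
# A uniformly stretched copy, which no rigid motion can reproduce
stretched = [(0.0, 0.0, 0.0), (2.0, 0.0, 0.0), (0.0, 4.0, 0.0)]
```

Filters of this sort are cheap to evaluate over a large library, so expensive superposition attempts are reserved for the candidates that survive.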
Use of a Radon Stripping Algorithm for Retrospective Assessment...
Office of Scientific and Technical Information (OSTI)
and beta spectroscopy system employing a passive implanted planar silicon (PIPS) detector. ... MODIFICATIONS; PROGENY; RADON; SILICON air monitoring, radon, algorithm, PIPS, ...
Problems Found Using a Radon Stripping Algorithm for Retrospective...
Office of Scientific and Technical Information (OSTI)
and beta spectroscopy system employing a passive implanted planar silicon (PIPS) detector. ... MODIFICATIONS; PROGENY; RADON; SILICON air monitoring, radon, algorithm, PIPS, ...
A modern solver framework to manage solution algorithms in the Community Earth System Model
Office of Scientific and Technical Information (OSTI)
An optimal point spread function subtraction algorithm for high-contrast imaging: a ...
Office of Scientific and Technical Information (OSTI)
This image is built as a linear combination of all available images and is optimized ...
Development of an Outdoor Temperature-Based Control Algorithm for Residential Mechanical Ventilation Control
Office of Scientific and Technical Information (OSTI)
A robust return-map algorithm for general multisurface plasticity
Adhikary, Deepak P.; Jayasundara, Chandana T.; Podgorney, Robert K.; Wilkins, Andy H.
2016-06-16
Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without using the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm results in complete success. Since it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. Furthermore, the main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting.
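As a minimal illustration of the elastic-predictor/plastic-corrector structure underlying such return-map algorithms, the single-surface von Mises case with linear isotropic hardening admits a closed-form consistency solution. The multisurface algorithm of the paper generalizes this to many interacting nonlinear surfaces; all moduli below are illustrative assumptions.

```python
def radial_return(sigma_trial_eq, sigma_y0, hardening_h, shear_g):
    """Single-surface von Mises return map with linear isotropic hardening:
    elastic predictor, then a closed-form plastic corrector that returns the
    trial stress to the updated yield surface."""
    f_trial = sigma_trial_eq - sigma_y0
    if f_trial <= 0.0:
        return sigma_trial_eq, 0.0               # trial state is admissible
    dgamma = f_trial / (3.0 * shear_g + hardening_h)   # consistency condition
    return sigma_trial_eq - 3.0 * shear_g * dgamma, dgamma

# A trial stress beyond yield is returned exactly onto the hardened surface
sig, dgamma = radial_return(sigma_trial_eq=300.0, sigma_y0=250.0,
                            hardening_h=1000.0, shear_g=80_000.0)
```

With several nonlinear surfaces active at once, no such closed form exists, which is why the paper's Newton-Raphson iteration, line-search residual, and handling of linearly dependent flow directions become necessary.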
Kim, Jeongnim; Reboredo, Fernando A
2014-01-01
The self-healing diffusion Monte Carlo method for complex functions [F. A. Reboredo, J. Chem. Phys. 136, 204101 (2012)] and some ideas of the correlation function Monte Carlo approach [D. M. Ceperley and B. Bernu, J. Chem. Phys. 89, 6316 (1988)] are blended to obtain a method for the calculation of thermodynamic properties of many-body systems at low temperatures. In order to allow the evolution in imaginary time to describe the density matrix, we remove the fixed-node restriction using complex antisymmetric trial wave functions. A statistical method is derived for the calculation of finite temperature properties of many-body systems near the ground state. In the process we also obtain a parallel algorithm that optimizes the many-body basis of a small subspace of the many-body Hilbert space. This small subspace is optimized to have maximum overlap with the one spanned by the lower energy eigenstates of a many-body Hamiltonian. We show in a model system that the Helmholtz free energy is minimized within this subspace as the iteration number increases. We show that the subspace spanned by the small basis systematically converges towards the subspace spanned by the lowest energy eigenstates. Possible applications of this method to calculate the thermodynamic properties of many-body systems near the ground state are discussed. The resulting basis can also be used to accelerate the calculation of the ground or excited states with quantum Monte Carlo.
Theoretical Minimum Energies to Produce Steel for Selected Conditions
Fruehan, R.J.; Fortini, O.; Paxton, H.W.; Brindle, R.
2000-05-01
The energy used to produce liquid steel in today's integrated and electric arc furnace (EAF) facilities is significantly higher than the theoretical minimum energy requirements. This study presents the absolute minimum energy required to produce steel from ore and from mixtures of scrap and scrap alternatives. Additional cases in which the assumptions are changed to more closely approximate actual operating conditions are also analyzed. The results, summarized in Table E-1, should give insight into the theoretical and practical potentials for reducing steelmaking energy requirements. The energy values have also been converted to carbon dioxide (CO{sub 2}) emissions in order to indicate the potential for reduction in emissions of this greenhouse gas (Table E-2). The study showed that increasing scrap melting has the largest impact on energy consumption. However, scrap should be viewed as having ''invested'' energy, since at one time it was produced by reducing ore. Increasing scrap melting in the BOF may or may not decrease energy if the ''invested'' energy in scrap is considered.
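The flavor of such a theoretical-minimum calculation can be reproduced with a back-of-envelope enthalpy balance for melting scrap (sensible plus latent heat). The property values below are rough textbook numbers, not the study's Table E-1 figures.

```python
# Back-of-envelope theoretical minimum energy to heat and melt scrap steel.
# Property values are rough assumptions, not the report's figures.
cp_avg = 0.7                 # kJ/(kg*K), average heat capacity of steel (assumed)
latent = 247.0               # kJ/kg, heat of fusion of iron
T0, T_tap = 25.0, 1600.0     # degrees C: room temperature to tapping temperature

e_min_kj_per_kg = cp_avg * (T_tap - T0) + latent   # kJ/kg (numerically = MJ/tonne)
e_min_gj_per_tonne = e_min_kj_per_kg / 1000.0
kwh_per_tonne = e_min_kj_per_kg / 3.6              # 1 kWh = 3600 kJ
```

This kind of estimate lands in the neighborhood of 1.3-1.4 GJ per tonne, far below actual EAF practice, which is precisely the theoretical-versus-practical gap the study quantifies.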
The magnetic flywheel flow meter: Theoretical and experimental contributions
Buchenau, D. Galindo, V.; Eckert, S.
2014-06-02
The development of contactless flow meters is an important issue for the monitoring and control of processes in different application fields, like metallurgy, liquid metal casting, or cooling systems for nuclear reactors and transmutation machines. Shercliff described in his book "The Theory of Electromagnetic Flow Measurement" (Cambridge University Press, 1962) a simple and robust device for contact-less measurement of liquid metal flow rates which is known as the magnetic flywheel. The sensor consists of several permanent magnets attached to a rotatable soft iron plate. This arrangement is placed close to the liquid metal flow to be measured, so that the field of the permanent magnets penetrates into the fluid volume. The flywheel is accelerated by the Lorentz force arising from the interaction between the magnetic field and the moving liquid. Steady rotation rates of the flywheel can be taken as a measure of the mean flow rate inside the fluid channel. The present paper provides a detailed theoretical description of the sensor in order to gain better insight into the functional principle of the magnetic flywheel. Theoretical predictions are confirmed by corresponding laboratory experiments. For that purpose, a laboratory model of such a flow meter was built and tested on a GaInSn loop under various test conditions.
Analysis of the theoretical bias in dark matter direct detection
Catena, Riccardo
2014-09-01
Fitting the model ''A'' to dark matter direct detection data, when the model that underlies the data is ''B'', introduces a theoretical bias in the fit. We perform a quantitative study of the theoretical bias in dark matter direct detection, with a focus on assumptions regarding the dark matter interactions and velocity distribution. We address this problem within the effective theory of isoscalar dark matter-nucleon interactions mediated by a heavy spin-1 or spin-0 particle. We analyze 24 benchmark points in the parameter space of the theory, using frequentist and Bayesian statistical methods. First, we simulate the data of future direct detection experiments assuming a momentum/velocity dependent dark matter-nucleon interaction, and an anisotropic dark matter velocity distribution. Then, we fit a constant scattering cross section, and an isotropic Maxwell-Boltzmann velocity distribution to the simulated data, thereby introducing a bias in the analysis. The best fit values of the dark matter particle mass differ from their benchmark values up to 2 standard deviations. The best fit values of the dark matter-nucleon coupling constant differ from their benchmark values up to several standard deviations. We conclude that common assumptions in dark matter direct detection are a source of potentially significant bias.
An Experimental and Theoretical High Energy Physics Program
Shipsey, Ian
2012-07-31
The Purdue High Energy Physics Group conducts research in experimental and theoretical elementary particle physics and experimental high energy astrophysics. Our goals, which we share with high energy physics colleagues around the world, are to understand at the most fundamental level the nature of matter, energy, space and time, and to explain the birth, evolution and fate of the Universe. The experiments in which we are currently involved are: CDF, CLEO-c, CMS, LSST, and VERITAS. We have been instrumental in establishing two major in-house facilities: The Purdue Particle Physics Microstructure Detector Facility (P3MD) in 1995 and the CMS Tier-2 center in 2005. The research efforts of the theory group span phenomenological and theoretical aspects of the Standard Model as well as many of its possible extensions. Recent work includes phenomenological consequences of supersymmetric models, string theory and applications of gauge/gravity duality, the cosmological implications of massive gravitons, and the physics of extra dimensions.
Enterprise Assessments Targeted Review of the Safety Basis at the Savannah
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
River Site F-Area Central Laboratory Facility - January 2016 | Department of Energy. January 2016 Review of the Safety Basis, F-Area Central Laboratory Facility at the Savannah River Site. The Office of Nuclear Safety and Environmental Assessments, within the U.S. Department of
Basis for Section 3116 Determination for the Idaho Nuclear Technology and
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Engineering Center Tank Farm Facility at the Idaho National Laboratory | Department of Energy. This 3116 Basis Document addresses the disposal of stabilized residuals in the TFF, and the TFF tank system, and
CRAD, Safety Basis Upgrade Review (DOE-STD-3009-2014) - May 15, 2015 (EA
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
CRAD 31-03, Rev. 1) | Department of Energy. May 2015. This Criteria Review and Approach Document (EA CRAD 31-03, Rev. 1) provides objectives, criteria, and approaches for establishing and maintaining the safety basis at nuclear facilities.
Technical Basis and Considerations for DOE M 435.1-1 (Appendix A)
Directives, Delegations, and Requirements [Office of Management (MA)]
1999-07-09
This appendix establishes the technical basis of the order revision process and of each of the requirements included in the revised radioactive waste management order.
Report to the Secretary of Energy on Beyond Design Basis Event Pilot Evaluations, Results and Recommendations for Improvements...
BDBEReportfinal.pdf
A Brief Review of the Basis for, and the Procedures Currently Utilized in, Gross Gamma-Ray Log Calibration (October 1976)
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Improving Department of Energy Capabilities for Mitigating Beyond Design Basis Events
This is a level 1 operating experience document providing direction for Improving Department of Energy Capabilities for Mitigating Beyond Design Basis Events. [OE-1: 2013-01
Structural basis for the prion-like MAVS filaments in antiviral innate immunity
Office of Scientific and Technical Information (OSTI)
Xu, Hui; He, ...
2010 DOE National Science Bowl® Photos - Basis Charter School...
O'Brien, M. J.; Brantley, P. S.
2015-01-20
In order to run Monte Carlo particle transport calculations on new supercomputers with hundreds of thousands or millions of processors, care must be taken to implement scalable algorithms. This means that the algorithms must continue to perform well as the processor count increases. In this paper, we examine the scalability of: (1) globally resolving the particle locations on the correct processor, (2) deciding that particle streaming communication has finished, and (3) efficiently coupling neighbor domains together with different replication levels. We have run domain decomposed Monte Carlo particle transport on up to 2^{21} = 2,097,152 MPI processes on the IBM BG/Q Sequoia supercomputer and observed scalable results that agree with our theoretical predictions. These calculations were carefully constructed to have the same amount of work on every processor, i.e. the calculation is already load balanced. We also examine load imbalanced calculations where each domain's replication level is proportional to its particle workload. In this case we show how to efficiently couple together adjacent domains to maintain within-workgroup load balance and minimize memory usage.
Zhuo, Ye
2011-01-01
In this thesis, we theoretically study electromagnetic wave propagation in several passive and active optical components and devices, including 2-D photonic crystals, straight and curved waveguides, and organic light-emitting diodes (OLEDs). Several optical designs are also presented, such as organic photovoltaic (OPV) cells and solar concentrators. The first part of the thesis focuses on theoretical investigation. First, the plane-wave-based transfer (scattering) matrix method (TMM) is briefly described, with a short review of photonic crystals and other numerical methods used to study them (Chapters 1 and 2). Next, the TMM itself is investigated in detail and extended to deal with more complex optical systems. In chapter 3, TMM is extended in curvilinear coordinates to study curved nanoribbon waveguides. The problem of a curved structure is transformed into an equivalent one of a straight structure with spatially dependent tensors of dielectric constant and magnetic permeability. In chapter 4, a new set of localized basis orbitals is introduced to locally represent the electromagnetic field in photonic crystals, as an alternative to the plane-wave basis. The second part of the thesis focuses on the design of optical devices. First, two examples of TMM applications are given. The first example is the design of metal grating structures as replacements for ITO to enhance optical absorption in OPV cells (chapter 6). The second is the design of the same structure to enhance the light extraction of OLEDs (chapter 7). Next, two design examples using the ray tracing method are given: applying a microlens array to enhance the light extraction of OLEDs (chapter 5) and an all-angle, wide-wavelength design of a solar concentrator (chapter 8). In summary, this dissertation has extended TMM, making it capable of treating complex optical systems. Several optical designs by TMM and the ray tracing method are also given as a full complement of this
Nuclear magnetic resonance implementation of a quantum clock synchronization algorithm
Zhang Jingfu; Long, G.C.; Liu Wenzhang; Deng Zhiwei; Lu Zhiheng
2004-12-01
The quantum clock synchronization (QCS) algorithm proposed by Chuang [Phys. Rev. Lett. 85, 2006 (2000)] has been implemented in a three qubit nuclear magnetic resonance quantum system. The time difference between two separated clocks can be determined by measuring the output states. The experimental realization of the QCS algorithm also demonstrates an application of the quantum phase estimation.
Adaptive path planning algorithm for cooperating unmanned air vehicles
Cunningham, C T; Roberts, R S
2001-02-08
An adaptive path planning algorithm is presented for cooperating Unmanned Air Vehicles (UAVs) that are used to deploy and operate land-based sensor networks. The algorithm employs a global cost function to generate paths for the UAVs, and adapts the paths to exceptions that might occur. Examples are provided of the paths and adaptation.
An Adaptive Path Planning Algorithm for Cooperating Unmanned Air Vehicles
Cunningham, C.T.; Roberts, R.S.
2000-09-12
An adaptive path planning algorithm is presented for cooperating Unmanned Air Vehicles (UAVs) that are used to deploy and operate land-based sensor networks. The algorithm employs a global cost function to generate paths for the UAVs, and adapts the paths to exceptions that might occur. Examples are provided of the paths and adaptation.
Development of Speckle Interferometry Algorithm and System
Shamsir, A. A. M.; Jafri, M. Z. M.; Lim, H. S.
2011-05-25
Electronic speckle pattern interferometry (ESPI) is a whole-field, non-destructive measurement method widely used in industry, for example in the detection of defects on metal bodies, the detection of defects in integrated circuits in digital electronic components, and the preservation of priceless artwork. In this research field, the method is widely used to develop algorithms and new laboratory setups for implementing speckle pattern interferometry. In speckle interferometry, an optically rough test surface is illuminated with an expanded laser beam, creating a laser speckle pattern in the space surrounding the illuminated region. The speckle pattern is optically mixed with a second coherent light field that is either another speckle pattern or a smooth light field. This produces an interferometric speckle pattern that is detected by a sensor to measure the change in the speckle pattern due to the applied force. In this project, an experimental ESPI setup is proposed to analyze a stainless steel plate using 632.8 nm (red) laser light.
An efficient parallel algorithm for matrix-vector multiplication
Hendrickson, B.; Leland, R.; Plimpton, S.
1993-03-01
The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/[radical]p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
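The data layout underlying most parallel matrix-vector algorithms, including hypercube variants like the one in this abstract, is a partition of the rows of A among processors. As a hedged illustration only (the abstract's actual hypercube algorithm and its O(n/√p + log p) communication scheme are not detailed here), the following sketch simulates a block-row partition serially; the function name and structure are illustrative, not from the paper:

```python
# Illustrative block-row parallel mat-vec sketch (NOT the paper's hypercube
# implementation). Each of p simulated "processors" owns a contiguous block
# of rows of A and computes its slice of y = A x. On a real machine, the
# vector x would be communicated among processors; the abstract reports an
# O(n/sqrt(p) + log(p)) communication cost for its hypercube algorithm.

def matvec_blockrow(A, x, p):
    """Compute y = A x by splitting the rows of A among p workers (serial simulation)."""
    n = len(A)
    rows_per = (n + p - 1) // p          # ceiling division: rows per worker
    y = [0.0] * n
    for worker in range(p):
        lo, hi = worker * rows_per, min((worker + 1) * rows_per, n)
        for i in range(lo, hi):          # each worker handles only its own row block
            y[i] = sum(A[i][j] * x[j] for j in range(len(x)))
    return y

A = [[1.0, 2.0], [3.0, 4.0]]
x = [1.0, 1.0]
print(matvec_blockrow(A, x, 2))  # [3.0, 7.0]
```

The row partition makes each worker's share independent of the others, which is why the only parallel cost left is distributing x and combining results.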
Structural and Electronic Properties of Isolated Nanodiamonds: A Theoretical Perspective
Raty, J; Galli, G
2004-09-09
Nanometer sized diamond has been found in meteorites, proto-planetary nebulae and interstellar dust, as well as in residues of detonation and in diamond films. Remarkably, the size distribution of diamond nanoparticles appears to be peaked around 2-5 nm, and to be largely independent of preparation conditions. Using ab-initio calculations, we have shown that in this size range nanodiamond has a fullerene-like surface and, unlike silicon and germanium, exhibits very weak quantum confinement effects. We called these carbon nanoparticles bucky-diamonds: their atomic structure, predicted by simulations, is consistent with many experimental findings. In addition, we carried out calculations of the stability of nanodiamond which provided a unifying explanation of its size distribution in extra-terrestrial samples and in ultra-crystalline diamond films. Here we present a summary of our theoretical results and briefly outline work in progress on doping of nanodiamond with nitrogen.
Theoretical collapse pressures for two pressurized torispherical heads
Kalnins, A.; Updike, D.P.; Rana, M.D.
1995-12-01
In order to determine the pressures at which real torispherical heads fail upon a single application of pressure, two heads were pressurized in recent Praxair tests, and displacements and strains were recorded at various locations. In this paper, theoretical results for the two test heads are presented in the form of curves of pressure versus crown deflection, using the available geometry and material parameters. From these curves, limit and collapse pressures are calculated using procedures permitted by the ASME B and PV Code Section 8/Div. 2. These pressures are shown to vary widely, depending on the method and model used to calculate them. The effect of no stress relief on the behavior of the Praxair test heads is also evaluated and found to be of no significance for either the objectives of the tests or the objectives of this paper. The results of this paper are submitted as an enhancement to the experimental results recorded during the Praxair tests.
Theoretical approach to heterogeneous catalysis using large finite crystals
Salem, L.
1985-12-19
A theoretical approach is described for heterogeneous catalysis using large finite crystals and an exactly soluble model. First, some themes that are well known to physicists but need a translation into chemical language are reviewed: wave vectors, the tight-binding model, and energy bands. Next, a description of the finite simple cubic crystal and its analytical wave functions and energies in the Hueckel scheme is given. The analytical Hueckel wave functions for a finite face-centered cubic (FCC) crystal cut along square, (100)-type faces are also described. Then the calculation of the perturbation interaction energy between H{sub 2} and large finite (simple cubic or FCC) crystals of Ni atoms, having up to 13,824 atoms, is described. The interaction energy is shown to be independent of crystal size, whatever the position of attack of the H{sub 2} molecule. 28 references, 9 figures, 8 tables.
A theoretical analysis of rotating cavitation in inducers
Tsujimoto, Y.; Kamijo, K. (National Aerospace Lab., Miyagi, (Japan)); Yoshida, Y. (Osaka Univ., Toyonaka, (Japan). Engineering Science)
1993-03-01
Rotating cavitation was analyzed using an actuator disk method. Quasi-steady pressure performance of the impeller, mass flow gain factor, and cavitation compliance of the cavity were taken into account. Three types of destabilizing modes were predicted: rotating cavitation propagating faster than the rotational speed of the impeller, rotating cavitation propagating in the direction opposite that of the impeller, and rotating stall propagating slower than the rotational speed of the impeller. It was shown that both types of rotating cavitation were caused by the positive mass flow gain factor, while the rotating stall was caused by the positive slope of the pressure performance. Stability and propagation velocity maps are presented for the two types of rotating cavitation in the mass flow gain factor-cavitation compliance plane. The correlation between theoretical results and experimental observations is discussed.
Theoretical analysis of sound transmission loss through graphene sheets
Natsuki, Toshiaki; Ni, Qing-Qing
2014-11-17
We examine the potential of using graphene sheets (GSs) as sound insulating materials for nano-devices because of their small size and superior electronic and mechanical properties. In this study, a theoretical analysis is proposed to predict the sound transmission loss through multi-layered GSs, which are formed by stacks of GS bound together by van der Waals (vdW) forces between individual layers. The result shows that resonant frequencies of the sound transmission loss occur in the multi-layered GSs and the values are very high. Based on the present analytical solution, we predict the acoustic insulation property for various layers of sheets under both a normal incident wave and an acoustic field from a random incidence source. The scheme could be useful in vibration absorption applications of nano devices and materials.
Experimental And Theoretical High Energy Physics Research At UCLA
Cousins, Robert D.
2013-07-22
This is the final report of the UCLA High Energy Physics DOE Grant No. DE-FG02-91ER40662. This report covers the last grant project period, namely the three years beginning January 15, 2010, plus extensions through April 30, 2013. The report describes the broad range of our experimental research spanning direct dark matter detection searches using both liquid xenon (XENON) and liquid argon (DARKSIDE); present (ICARUS) and R&D for future (LBNE) neutrino physics; ultra-high-energy neutrino and cosmic ray detection (ANITA); and the highest-energy accelerator-based physics with the CMS experiment and CERN's Large Hadron Collider. For our theory group, the report describes frontier activities including particle astrophysics and cosmology; neutrino physics; LHC interaction cross section calculations now feasible due to breakthroughs in theoretical techniques; and advances in the formal theory of supergravity.
Modeling an Application's Theoretical Minimum and Average Transactional Response Times
Paiz, Mary Rose
2015-04-01
The theoretical minimum transactional response time of an application serves as a basis for the expected response time. The lower threshold for the minimum response time represents the minimum amount of time that the application should take to complete a transaction. Knowing the lower threshold is beneficial in detecting anomalies that are results of unsuccessful transactions. Conversely, when an application's response time falls above an upper threshold, there is likely an anomaly in the application that is causing unusual performance issues in the transaction. This report explains how the non-stationary Generalized Extreme Value distribution is used to estimate the lower threshold of an application's daily minimum transactional response time. It also explains how the seasonal Autoregressive Integrated Moving Average time series model is used to estimate the upper threshold for an application's average transactional response time.
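The thresholding idea described above can be sketched in a few lines. Note this is a deliberately simplified stand-in: the report fits a non-stationary Generalized Extreme Value distribution to daily minima, whereas the sketch below uses a plain empirical quantile of hypothetical data, and all names and numbers are invented for illustration:

```python
# Simplified stand-in for the report's method: estimate a lower threshold for
# transactional response times from an empirical quantile of observed daily
# minima (the report instead fits a non-stationary GEV distribution).
# Transactions completing faster than the threshold are flagged as anomalies.

def lower_threshold(daily_minima, q=0.01):
    """Return the (approximate) q-quantile of daily minimum response times."""
    xs = sorted(daily_minima)
    k = max(0, min(len(xs) - 1, int(q * len(xs))))
    return xs[k]

def is_anomalous(response_time, threshold):
    """A suspiciously fast transaction suggests it did not complete its work."""
    return response_time < threshold

minima = [120, 115, 130, 118, 122, 119, 121, 117, 116, 125]  # ms, hypothetical
t = lower_threshold(minima, q=0.1)
print(t, is_anomalous(90, t))  # 116 True
```

The GEV family is the natural choice for the real method because minima of many transactions per day are block extremes, which is exactly what extreme value theory models.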
Theoretic base of Edge Local Mode triggering by vertical displacements
Wang, Z. T.; He, Z. X.; Wang, Z. H.; Wu, N.; Tang, C. J.
2015-05-15
Vertical instability is studied with R-dependent displacement. For Solovev's configuration, the stability boundary of the vertical instability is calculated. The pressure gradient is a destabilizing factor, which is contrary to Rebhan's result. Equilibrium parallel current density, j{sub //}, at the plasma boundary is a drive of the vertical instability similar to peeling-ballooning modes; however, the vertical instability cannot be stabilized by the magnetic shear, which tends towards infinity near the separatrix. The induced current observed in the Edge Local Mode (ELM) triggering experiment by vertical modulation is derived. The theory provides some theoretical explanation for the mitigation of type-I ELMs on ASDEX Upgrade. The principle could also be used for ITER.
Womack, E.A.; Kelly, J.J.; Elliott, N.S.
1980-01-01
The Abnormal Transient Operating Guidelines (ATOG) Program is intended to ''close the loop'' on a continuing basis between the engineering designers/performance analysts and the operators who control the plant. It will make the technical basis for operation responsive to information from the study of actual plant transients, as well as new developments in engineering.
Establishing the Technical Basis for Disposal of Heat-generating Waste in Salt
The report summarizes available historic tests and the developed technical basis for disposal of heat-generating waste in salt, and the means by which a safety case for disposal of heat generating waste at a generic salt site can be initiated from the existing technical basis.
Theoretical Research in Cosmology, High-Energy Physics and String Theory
Ng, Y Jack; Dolan, Louise; Mersini-Houghton, Laura; Frampton, Paul
2013-07-29
The research was in the area of Theoretical Physics: Cosmology, High-Energy Physics and String Theory
Criteria Document for B-plant Surveillance and Maintenance Phase Safety Basis Document
SCHWEHR, B.A.
1999-08-31
This document is required by the Project Hanford Managing Contractor (PHMC) procedure, HNF-PRO-705, Safety Basis Planning, Documentation, Review, and Approval. This document specifies the criteria that shall be in the B Plant surveillance and maintenance phase safety basis in order to obtain approval of the DOE-RL. This CD describes the criteria to be addressed in the S&M Phase safety basis for the deactivated Waste Fractionization Facility (B Plant) on the Hanford Site in Washington state. This criteria document describes: the document type and format that will be used for the S&M Phase safety basis, the requirements documents that will be invoked for the document development, the deactivated condition of the B Plant facility, and the scope of issues to be addressed in the S&M Phase safety basis document.
Wang, Lin-Wang
2006-12-01
Quantum mechanical ab initio calculation constitutes the biggest portion of the computer time in materials science and chemical science simulations. For a computer center like NERSC to better serve these communities, it would be very useful to have a prediction of the future trends of ab initio calculations in these areas. Such a prediction can help us decide what future computer architecture will be most useful for these communities, and what should be emphasized in future supercomputer procurement. As the size of the computer and the size of the simulated physical systems increase, there is a renewed interest in using the real space grid method in electronic structure calculations. This is fueled by two factors. First, it is generally assumed that the real space grid method is more suitable for parallel computation due to its limited communication requirement, compared with the spectrum method where a global FFT is required. Second, as the size N of the calculated system increases together with the computer power, O(N) scaling approaches become more favorable than the traditional direct O(N{sup 3}) scaling methods. These O(N) methods are usually based on localized orbitals in real space, which can be described more naturally by a real space basis. In this report, the author compares the real space methods versus the traditional plane wave (PW) spectrum methods, for their technical pros and cons and the possible future trends. For the real space method, the author focuses on the regular grid finite difference (FD) method and the finite element (FE) method. These are the methods used mostly in materials science simulation. As for chemical science, the predominant methods are still the Gaussian basis method and sometimes the atomic orbital basis method. These two basis sets are localized in real space, and there is no indication that their roles in quantum chemical simulation will change anytime soon. The author focuses on the density functional theory (DFT), which is the
Shimojo, Fuyuki; Hattori, Shinnosuke; Department of Physics, Kumamoto University, Kumamoto 860-8555 ; Kalia, Rajiv K.; Mou, Weiwei; Nakano, Aiichiro; Nomura, Ken-ichi; Rajak, Pankaj; Vashishta, Priya; Kunaseth, Manaschai; National Nanotechnology Center, Pathumthani 12120 ; Ohmura, Satoshi; Department of Physics, Kumamoto University, Kumamoto 860-8555; Department of Physics, Kyoto University, Kyoto 606-8502 ; Shimamura, Kohei; Department of Physics, Kumamoto University, Kumamoto 860-8555; Department of Applied Quantum Physics and Nuclear Engineering, Kyushu University, Fukuoka 819-0395
2014-05-14
We introduce an extension of the divide-and-conquer (DC) algorithmic paradigm called divide-conquer-recombine (DCR) to perform large quantum molecular dynamics (QMD) simulations on massively parallel supercomputers, in which interatomic forces are computed quantum mechanically in the framework of density functional theory (DFT). In DCR, the DC phase constructs globally informed, overlapping local-domain solutions, which in the recombine phase are synthesized into a global solution encompassing large spatiotemporal scales. For the DC phase, we design a lean divide-and-conquer (LDC) DFT algorithm, which significantly reduces the prefactor of the O(N) computational cost for N electrons by applying a density-adaptive boundary condition at the peripheries of the DC domains. Our globally scalable and locally efficient solver is based on a hybrid real-reciprocal space approach that combines: (1) a highly scalable real-space multigrid to represent the global charge density; and (2) a numerically efficient plane-wave basis for local electronic wave functions and charge density within each domain. Hybrid space-band decomposition is used to implement the LDC-DFT algorithm on parallel computers. A benchmark test on an IBM Blue Gene/Q computer exhibits an isogranular parallel efficiency of 0.984 on 786,432 cores for a 50.3 × 10{sup 6}-atom SiC system. As a test of production runs, LDC-DFT-based QMD simulation involving 16,661 atoms is performed on the Blue Gene/Q to study on-demand production of hydrogen gas from water using LiAl alloy particles. As an example of the recombine phase, LDC-DFT electronic structures are used as a basis set to describe global photoexcitation dynamics with nonadiabatic QMD (NAQMD) and kinetic Monte Carlo (KMC) methods. The NAQMD simulations are based on the linear response time-dependent density functional theory to describe electronic excited states and a surface-hopping approach to describe transitions between the excited states. A series of
Two linear time, low overhead algorithms for graph layout
Energy Science and Technology Software Center (OSTI)
2008-01-10
The software comprises two algorithms designed to perform a 2D layout of a graph structure in time linear with respect to the vertices and edges in the graph, whereas most other layout algorithms have a running time that is quadratic with respect to the number of vertices or greater. Although these layout algorithms run in a fraction of the time as their competitors, they provide competitive results when applied to most real-world graphs. These algorithms also have a low constant running time and small memory footprint, making them useful for small to large graphs.
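The summary does not spell out the two algorithms, so as a hedged illustration of how a layout can be produced in time linear in vertices plus edges, here is one common approach (not necessarily the package's): a single breadth-first search assigns each vertex a (within-layer offset, depth) position:

```python
# Hedged illustration of an O(V + E) graph layout (not the package's actual
# algorithms): one BFS pass assigns each vertex an (x, y) position, where
# y is BFS depth and x is discovery order within that depth layer.
from collections import deque

def bfs_layout(adj, root):
    """Assign (x, y) positions by BFS depth and discovery order: O(V + E)."""
    pos = {}                      # vertex -> (x, y)
    order_in_layer = {}           # depth  -> next free x offset
    depth = {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        d = depth[v]
        x = order_in_layer.get(d, 0)
        order_in_layer[d] = x + 1
        pos[v] = (x, d)
        for w in adj.get(v, []):  # each edge is examined once: linear time
            if w not in depth:
                depth[w] = d + 1
                queue.append(w)
    return pos

adj = {"a": ["b", "c"], "b": ["d"], "c": ["d"]}
print(bfs_layout(adj, "a"))  # {'a': (0, 0), 'b': (0, 1), 'c': (1, 1), 'd': (0, 2)}
```

Because every vertex is dequeued once and every edge inspected once, the running time is linear, matching the complexity class the summary claims, though the visual quality of real layout algorithms comes from additional heuristics.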
PAC learning algorithms for functions approximated by feedforward networks
Rao, N.S.V.; Protopopescu, V.
1996-06-01
The authors present a class of efficient algorithms for PAC learning continuous functions and regressions that are approximated by feedforward networks. The algorithms are applicable to networks with unknown weights located only in the output layer and are obtained by utilizing the potential function methods of Aizerman et al. Conditions relating the sample sizes to the error bounds are derived using martingale-type inequalities. For concreteness, the discussion is presented in terms of neural networks, but the results are applicable to general feedforward networks, in particular to wavelet networks. The algorithms can be directly adapted to concept learning problems.
Genetic Algorithm Based Neural Networks for Nonlinear Optimization
Energy Science and Technology Software Center (OSTI)
1994-09-28
This software develops a novel approach to nonlinear optimization using genetic algorithm based neural networks. To our best knowledge, this approach represents the first attempt at applying both neural network and genetic algorithm techniques to solve a nonlinear optimization problem. The approach constructs a neural network structure and an appropriately shaped energy surface whose minima correspond to optimal solutions of the problem. A genetic algorithm is employed to perform a parallel and powerful search of the energy surface.
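The abstract's core idea is a genetic search over an energy surface whose minima encode optimal solutions. As a hedged sketch of that general idea only (the package's actual neural-network energy construction is not shown; the quadratic surface, parameters, and function names below are all invented for illustration):

```python
# Hedged sketch of the general idea (NOT this package's implementation):
# a tiny genetic algorithm searching an "energy surface" whose minimum
# corresponds to the optimal solution. Here the surface is a stand-in
# quadratic with its minimum at x = 3.
import random

def energy(x):
    return (x - 3.0) ** 2

def genetic_minimize(generations=60, pop_size=20, seed=1):
    rng = random.Random(seed)
    pop = [rng.uniform(-10, 10) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=energy)                   # selection: keep the fittest half
        parents = pop[: pop_size // 2]
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            child = 0.5 * (a + b)              # crossover: midpoint of two parents
            child += rng.gauss(0, 0.1)         # mutation: small random perturbation
            children.append(child)
        pop = parents + children
    return min(pop, key=energy)

best = genetic_minimize()
print(best)  # converges close to 3.0
```

The population-based search is what the abstract means by "parallel and powerful": many candidate solutions explore the surface at once, so the method does not depend on gradients and is less prone to getting stuck in a single basin.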
Safety basis academy summary of project implementation from 2007-2009
Johnston, Julie A
2009-01-01
During fiscal years 2007 through 2009, in accordance with Performance Based Incentives with DOE/NNSA Los Alamos Site Office, Los Alamos National Security (LANS) implemented and operated a Safety Basis Academy (SBA) to facilitate uniformity in technical qualifications of safety basis professionals across the nuclear weapons complex. The implementation phase of the Safety Basis Academy required developing, delivering, and finalizing a set of 23 courses. The courses developed are capable of supporting qualification efforts for both federal and contractor personnel throughout the DOE/NNSA Complex. The LANS Associate Director for Nuclear and High Hazard Operations (AD-NHHO) delegated project responsibility to the Safety Basis Division. The project was assigned to the Safety Basis Technical Services (SB-TS) Group at Los Alamos National Laboratory (LANL). The main tasks were project needs analysis, design, development, implementation of instructional delivery, and evaluation of SBA courses. DOE/NNSA responsibility for oversight of the SBA project was assigned to the Chief of Defense for Nuclear Safety, and delegated to the Authorization Basis Senior Advisor, Continuous Learning Chair (CDNS-ABSA/CLC). NNSA developed a memorandum of agreement with LANS AD-NHHO. Through a memorandum of agreement initiated by NNSA, the DOE National Training Center (NTC) will maintain the set of Safety Basis Academy courses and is able to facilitate course delivery throughout the DOE Complex.
Theoretical and Experimental Studies of Elementary Particle Physics
Evans, Harold G; Kostelecky, V Alan; Musser, James A
2013-07-29
The elementary particle physics research program at Indiana University spans a broad range of the most interesting topics in this fundamental field, including important contributions to each of the frontiers identified in the recent report of HEPAP's Particle Physics Project Prioritization Panel: the Energy Frontier, the Intensity Frontier, and the Cosmic Frontier. Experimentally, we contribute to knowledge at the Energy Frontier through our work on the D0 and ATLAS collaborations. We work at the Intensity Frontier on the MINOS and NOvA experiments and participate in R&D for LBNE. We are also very active on the theoretical side of each of these areas with internationally recognized efforts in phenomenology both in and beyond the Standard Model and in lattice QCD. Finally, although not part of this grant, members of the Indiana University particle physics group have strong involvement in several astrophysics projects at the Cosmic Frontier. Our research efforts are divided into three task areas. The Task A group works on D0 and ATLAS; Task B is our theory group; and Task C contains our MINOS, NOvA, and LBNE (LArTPC) research. Each task includes contributions from faculty, senior scientists, postdocs, graduate and undergraduate students, engineers, technicians, and administrative personnel. This work was supported by DOE Grant DE-FG02-91ER40661. In the following, we describe progress made in the research of each task during the final period of the grant, from November 1, 2009 to April 30, 2013.
Inference of Mix from Experimental Data and Theoretical Mix Models
Welser-Sherrill, L.; Haynes, D. A.; Cooley, J. H.; Mancini, R. C.; Haan, S. W.; Golovkin, I. E.
2007-08-02
The mixing between fuel and shell materials in Inertial Confinement Fusion implosion cores is a topic of great interest. Mixing due to hydrodynamic instabilities can affect implosion dynamics and could also go so far as to prevent ignition. We have demonstrated that it is possible to extract information on mixing directly from experimental data using spectroscopic arguments. In order to compare this data-driven analysis to a theoretical framework, two independent mix models, Youngs' phenomenological model and the Haan saturation model, have been implemented in conjunction with a series of clean hydrodynamic simulations that model the experiments. The first tests of these methods were carried out based on a set of indirect drive implosions at the OMEGA laser. We now focus on direct drive experiments, and endeavor to approach the problem from another perspective. In the current work, we use Youngs' and Haan's mix models in conjunction with hydrodynamic simulations in order to design experimental platforms that exhibit measurably different levels of mix. Once the experiments are completed based on these designs, the results of a data-driven mix analysis will be compared to the levels of mix predicted by the simulations. In this way, we aim to increase our confidence in the methods used to extract mixing information from the experimental data, as well as to study sensitivities and the range of validity of the mix models.
Windmill wake turbulence decay: a preliminary theoretical model
Bossanyi, E.A.
1983-02-01
The results are given of initial theoretical attempts to predict dynamic wake characteristics, particularly turbulence decay, downstream of wind turbine generators in order to assess the potential for acoustic noise generation in clusters or arrays of turbines. These results must be considered preliminary, because the model described is at least partially based on the assumption of isotropy in the turbine wakes; however, anisotropic conditions may actually exist, particularly in the near-wake regions. The results indicate that some excess spectral energy may still exist. The turbine-generated turbulence from one machine can reach the next machine in the cluster and, depending on the turbulent wavelengths critical for acoustic noise production and perhaps structural excitation, this may be a cause for concern. Such a situation is most likely to occur in the evening or morning, during the transition from the daytime to the nocturnal boundary layer and vice versa, particularly at more elevated sites where the winds tend to increase after dark.
Incipient Transient Detection in Reactor Systems: Experimental and Theoretical Investigation
Lefteri H. Tsoukalas; S.T. Revankar; X Wang; R. Sattuluri
2005-09-27
The main goal of this research was to develop a method for detecting reactor system transients at the earliest possible time through a comprehensive experimental, testing and benchmarking program. This approach holds strong promise for developing new diagnostic technologies that are non-intrusive, generic and highly portable across different systems. It will help in the design of new generation nuclear power reactors, which utilize passive safety systems, with a reliable and non-intrusive multiphase flow diagnostic system to monitor the function of the passive safety systems. The main objective of this research was to develop an improved fuzzy-logic-based detection method, grounded in a comprehensive experimental testing program, to detect reactor transients at the earliest possible time, practically at their birth moment. A fuzzy logic and neural network based transient identification methodology, implemented in a computer code called PROTREN, was considered in this research and compared with SPRT (Sequential Probability Ratio Testing) decision and Bayesian inference. The project involved experiment, theoretical modeling and a thermal-hydraulic code assessment. It involved graduate and undergraduate student participation, providing them with exposure and training in advanced reactor concepts and safety systems. In this final report, the main tasks performed during the project period are summarized and selected results are presented. Detailed descriptions of the tasks and results are presented in previous yearly reports (Revankar et al 2003 and Revankar et al 2004).
Theoretical studies of potential energy surfaces and computational methods
Shepard, R.
1993-12-01
This project involves the development, implementation, and application of theoretical methods for the calculation and characterization of potential energy surfaces involving molecular species that occur in hydrocarbon combustion. These potential energy surfaces require an accurate and balanced treatment of reactants, intermediates, and products. This difficult challenge is met with general multiconfiguration self-consistent-field (MCSCF) and multireference single- and double-excitation configuration interaction (MRSDCI) methods. In contrast to the more common single-reference electronic structure methods, this approach is capable of describing accurately molecular systems that are highly distorted away from their equilibrium geometries, including reactant, fragment, and transition-state geometries, and of describing regions of the potential surface that are associated with electronic wave functions of widely varying nature. The MCSCF reference wave functions are designed to be sufficiently flexible to describe qualitatively the changes in the electronic structure over the broad range of geometries of interest. The necessary mixing of ionic, covalent, and Rydberg contributions, along with the appropriate treatment of the different electron-spin components (e.g. closed shell, high-spin open-shell, low-spin open shell, radical, diradical, etc.) of the wave functions, are treated correctly at this level. Further treatment of electron correlation effects is included using large scale multireference CI wave functions, particularly including the single and double excitations relative to the MCSCF reference space. This leads to the most flexible and accurate large-scale MRSDCI wave functions that have been used to date in global PES studies.
Theoretical rate coefficients for allyl + HO2 and allyloxy decomposition
Goldsmith, C. F.; Klippenstein, S. J.; Green, W. H.
2011-01-01
The kinetics of the allyl + HO{sub 2} bimolecular reaction, the thermal decomposition of C{sub 3}H{sub 5}OOH, and the unimolecular reactions of C{sub 3}H{sub 5}O are studied theoretically. High-level ab initio calculations of the C{sub 3}H{sub 5}OOH and C{sub 3}H{sub 5}O potential energy surfaces are coupled with RRKM master equation methods to compute the temperature- and pressure-dependence of the rate coefficients. Variable reaction coordinate transition state theory is used to characterize the barrierless transition states for the allyl + HO{sub 2} and C{sub 3}H{sub 5}O + OH reactions. The predicted rate coefficients for allyl + HO{sub 2} → C{sub 3}H{sub 5}OOH → products are in good agreement with experimental values. The calculations for allyl + HO{sub 2} → C{sub 3}H{sub 6} + O{sub 2} underpredict the observed rate. The new rate coefficients suggest that the reaction of allyl + HO{sub 2} will promote chain-branching significantly more than previous models suggest.
Protonated Forms of Monoclinic Zirconia: A Theoretical Study
Mantz, Yves A.; Gemmen, Randall S.
2010-05-06
In various materials applications of zirconia, protonated forms of monoclinic zirconia may be formed, motivating their study within the framework of density-functional theory. Using the HCTH/120 exchange-correlation functional, the equations of state of yttria and of the three low-pressure zirconia polymorphs are computed, to verify our approach. Next, the favored charge state of a hydrogen atom in monoclinic zirconia is shown to be positive for all Fermi-level energies in the band gap, by the computation of defect formation energies. This result is consistent with a single previous theoretical prediction at midgap as well as muonium spectroscopy experiments. For the formally positively (+1e) charged system of a proton in monoclinic zirconia (with a homogeneous neutralizing background charge density implicitly included), modeled using up to a 3 x 3 x 3 arrangement of unit cells, different stable and metastable structures are identified. They are similar to those structures previously proposed for the neutral system of hydrogen-doped monoclinic zirconia, at a similar level of theory. As predicted using the HCTH/120 functional, the lowest-energy structure of the proton bonded to one of the two available oxygen atom types, O1, is favored by 0.39 eV compared to that of the proton bonded to O2. The rate of proton transfer between O1 ions is slower than that for hydrogen-doped monoclinic zirconia, whose transition-state structures may be lowered in energy by the extra electron.
Theoretical and computer models of detonation in solid explosives
Tarver, C.M.; Urtiew, P.A.
1997-10-01
Recent experimental and theoretical advances in understanding energy transfer and chemical kinetics have led to improved models of detonation waves in solid explosives. The Nonequilibrium Zeldovich-von Neumann-Doering (NEZND) model is supported by picosecond laser experiments and molecular dynamics simulations of the multiphonon up-pumping and internal vibrational energy redistribution (IVR) processes by which the unreacted explosive molecules are excited to the transition state(s) preceding reaction behind the leading shock front(s). High temperature, high density transition state theory calculates the induction times measured by laser interferometric techniques. Exothermic chain reactions form product gases in highly excited vibrational states, which have been demonstrated to rapidly equilibrate via supercollisions. Embedded gauge and Fabry-Perot techniques measure the rates of reaction product expansion as thermal and chemical equilibrium is approached. Detonation reaction zone lengths in carbon-rich condensed phase explosives depend on the relatively slow formation of solid graphite or diamond. The Ignition and Growth reactive flow model based on pressure dependent reaction rates and Jones-Wilkins-Lee (JWL) equations of state has reproduced this nanosecond time resolved experimental data and thus has yielded accurate average reaction zone descriptions in one-, two-, and three-dimensional hydrodynamic code calculations. The next generation reactive flow model requires improved equations of state and temperature dependent chemical kinetics. Such a model is being developed for the ALE3D hydrodynamic code, in which heat transfer and Arrhenius kinetics are intimately linked to the hydrodynamics.
Brown, James; Carrington, Tucker
2015-07-28
Although phase-space localized Gaussians are themselves poor basis functions, they can be used to effectively contract a discrete variable representation basis [A. Shimshovitz and D. J. Tannor, Phys. Rev. Lett. 109, 070402 (2012)]. This works despite the fact that elements of the Hamiltonian and overlap matrices labelled by discarded Gaussians are not small. By formulating the matrix problem as a regular (i.e., not a generalized) matrix eigenvalue problem, we show that it is possible to use an iterative eigensolver to compute vibrational energy levels in the Gaussian basis.
Effect of cosolvent on protein stability: A theoretical investigation
Chalikian, Tigran V.
2014-12-14
We developed a statistical thermodynamic algorithm for analyzing solvent-induced folding/unfolding transitions of proteins. The energetics of protein transitions is governed by the interplay between the cavity formation contribution and the term reflecting direct solute-cosolvent interactions. The latter is viewed as an exchange reaction in which the binding of a cosolvent to a solute is accompanied by release of waters of hydration to the bulk. Our model clearly differentiates between the stoichiometric and non-stoichiometric interactions of solvent or cosolvent molecules with a solute. We analyzed the urea- and glycine betaine (GB)-induced conformational transitions of model proteins of varying size which are geometrically approximated by a sphere in their native state and a spherocylinder in their unfolded state. The free energy of cavity formation and its changes accompanying protein transitions were computed based on the concepts of scaled particle theory. The free energy of direct solute-cosolvent interactions was analyzed using empirical parameters previously determined for urea and GB interactions with low molecular weight model compounds. Our computations correctly capture the mode of action of urea and GB and yield realistic numbers for (∂ΔG°/∂a{sub 3}){sub T,P}, which are related to the m-values of protein denaturation. Urea is characterized by negative values of (∂ΔG°/∂a{sub 3}){sub T,P} within the entire range of urea concentrations analyzed. At concentrations below ~1 M, GB exhibits positive values of (∂ΔG°/∂a{sub 3}){sub T,P} which turn negative at higher GB concentrations. The balance between the thermodynamic contributions of cavity formation and direct solute-cosolvent interactions that, ultimately, defines the mode of cosolvent action is extremely subtle. A 20% increase or decrease in the equilibrium constant for solute-cosolvent binding may change the sign of (∂ΔG°/∂a{sub 3}){sub T,P} thereby altering the mode of cosolvent action (stabilizing
Gacs quantum algorithmic entropy in infinite dimensional Hilbert spaces
Benatti, Fabio; Oskouei, Samad Khabbazi; Deh Abad, Ahmad Shafiei
2014-08-15
We extend the notion of Gacs quantum algorithmic entropy, originally formulated for finitely many qubits, to infinite dimensional quantum spin chains and investigate the relation of this extension with two quantum dynamical entropies that have been proposed in recent years.
Wavelet Algorithm for Feature Identification and Image Analysis
Energy Science and Technology Software Center (OSTI)
2005-10-01
WVL is a set of Python scripts based on the algorithm described in "A novel 3D wavelet-based filter for visualizing features in noisy biological data," W. C. Moss et al., J. Microsc. 219, 43-49 (2005).
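The cited 3D filter is described in Moss et al. (2005); as a minimal illustration of the underlying idea only (transform, threshold small detail coefficients, invert), here is a generic one-level 1D Haar sketch, not the WVL scripts themselves:

```python
def haar_fwd(x):
    """One level of the orthonormal Haar wavelet transform (even-length input)."""
    s = 2 ** -0.5
    approx = [(x[2 * i] + x[2 * i + 1]) * s for i in range(len(x) // 2)]
    detail = [(x[2 * i] - x[2 * i + 1]) * s for i in range(len(x) // 2)]
    return approx, detail

def haar_inv(approx, detail):
    """Inverse of haar_fwd."""
    s = 2 ** -0.5
    x = []
    for a, d in zip(approx, detail):
        x += [(a + d) * s, (a - d) * s]
    return x

def denoise(x, thresh):
    """Zero out small detail coefficients (presumed noise), keep the rest."""
    a, d = haar_fwd(x)
    d = [c if abs(c) > thresh else 0.0 for c in d]
    return haar_inv(a, d)
```

Small alternating perturbations land entirely in the detail coefficients, so thresholding them recovers a piecewise-constant signal exactly; the published 3D filter applies the same principle with a 3D wavelet and a more sophisticated coefficient-selection rule.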
Visualizing and improving the robustness of phase retrieval algorithms
Tripathi, Ashish; Leyffer, Sven; Munson, Todd; Wild, Stefan M.
2015-06-01
Coherent x-ray diffractive imaging is a novel imaging technique that utilizes phase retrieval and nonlinear optimization methods to image matter at nanometer scales. We explore the convergence properties of a popular phase retrieval algorithm, Fienup's HIO, by introducing a reduced-dimensionality problem that allows us to visualize and quantify convergence to local minima and to the globally optimal solution. We then introduce generalizations of HIO that improve upon the original algorithm's ability to converge to the globally optimal solution.
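Fienup's HIO alternates between imposing the measured Fourier magnitudes and an object-domain support/non-negativity constraint. A minimal 1D sketch (pure-Python DFT; the problem size, feedback parameter, and the short error-reduction polish at the end are all illustrative choices, not the authors' setup):

```python
import cmath
import random

def dft(x):
    """Naive O(n^2) discrete Fourier transform (fine for tiny examples)."""
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)) / n
            for k in range(n)]

def mag_error(g, mag):
    """Relative mismatch between |DFT(g)| and the measured magnitudes."""
    G = dft(g)
    num = sum((abs(G[j]) - mag[j]) ** 2 for j in range(len(mag)))
    den = sum(m * m for m in mag)
    return (num / den) ** 0.5

def hio(mag, support, n_iter=300, beta=0.9, seed=7):
    """Hybrid input-output iterations, finished with error-reduction steps."""
    rng = random.Random(seed)
    g = [rng.random() if s else 0.0 for s in support]
    for it in range(n_iter):
        G = dft(g)
        # Fourier-domain step: keep current phases, impose measured magnitudes
        G2 = [mag[j] * cmath.exp(1j * cmath.phase(G[j])) for j in range(len(g))]
        g2 = [z.real for z in idft(G2)]
        if it < n_iter - 50:
            # HIO object-domain step: feedback where constraints are violated
            g = [g2[k] if (support[k] and g2[k] >= 0) else g[k] - beta * g2[k]
                 for k in range(len(g))]
        else:
            # error-reduction polish: hard projection onto the constraint set
            g = [g2[k] if (support[k] and g2[k] >= 0) else 0.0
                 for k in range(len(g))]
    return g
```

On a toy signal whose Fourier magnitudes are known, the iterations drive the magnitude mismatch down while keeping the reconstruction zero outside the support (up to the usual trivial ambiguities of 1D phase retrieval).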
Graphical representation of parallel algorithmic processes. Master's thesis
Williams, E.M.
1990-12-01
Algorithm animation is a visualization method used to enhance understanding of the functioning of an algorithm or program. Visualization is used for many purposes, including education, algorithm research, performance analysis, and program debugging. This research applies algorithm animation techniques to programs developed for parallel architectures, with specific focus on the Intel iPSC/2 hypercube. While both P-time and NP-time algorithms can potentially benefit from using visualization techniques, the set of NP-complete problems provides fertile ground for developing parallel applications, since the combinatoric nature of the problems makes finding the optimum solution impractical. The primary goals for this visualization system are: data should be displayed as it is generated; the interface to the target program should be transparent, allowing the animation of existing programs; and flexibility - the system should be able to animate any algorithm. The resulting system incorporates and extends two AFIT products: the AFIT Algorithm Animation Research Facility (AAARF) and the Parallel Resource Analysis Software Environment (PRASE). AAARF is an algorithm animation system developed primarily for sequential programs, but is easily adaptable for use with parallel programs. PRASE is an instrumentation package that extracts system performance data from programs on the Intel hypercubes. Since performance data is an essential part of analyzing any parallel program, views of the performance data are provided as an elementary part of the system. Custom software is designed to interface these systems and to display the program data. The program chosen as the example for this study is a member of the NP-complete problem set; it is a parallel implementation of a general.
Reducing Uncertainty in the Seismic Design Basis for the Waste Treatment Plant, Hanford, Washington
Brouns, Thomas M.; Rohay, Alan C.; Reidel, Steve; Gardner, Martin G.
2007-02-27
The seismic design basis for the Waste Treatment Plant (WTP) at the Department of Energy’s (DOE) Hanford Site near Richland was re-evaluated in 2005, resulting in an increase by up to 40% in the seismic design basis. The original seismic design basis for the WTP was established in 1999 based on a probabilistic seismic hazard analysis completed in 1996. The 2005 analysis was performed to address questions raised by the Defense Nuclear Facilities Safety Board (DNFSB) about the assumptions used in developing the original seismic criteria and adequacy of the site geotechnical surveys. The updated seismic response analysis used existing and newly acquired seismic velocity data, statistical analysis, expert elicitation, and ground motion simulation to develop interim design ground motion response spectra which enveloped the remaining uncertainties. The uncertainties in these response spectra were enveloped at approximately the 84th percentile to produce conservative design spectra, which contributed significantly to the increase in the seismic design basis.
Basis for Interim Operation for the K-Reactor in Cold Standby
Shedrow, B.
1998-10-19
The Basis for Interim Operation (BIO) document for K Reactor in Cold Standby and the L- and P-Reactor Disassembly Basins was prepared in accordance with the draft DOE standard for BIO preparation (dated October 26, 1993).
NSS 18.3 Verification of Authorization Basis Documentation 12/8/03
The objective of this surveillance is for the Facility Representative to verify that the facility's configuration and operations remain consistent with the authorization basis. As defined in DOE...
Incremental k-core decomposition: Algorithms and evaluation
Sariyuce, Ahmet Erdem; Gedik, Bugra; Jacques-SIlva, Gabriela; Wu, Kun -Lung; Catalyurek, Umit V.
2016-02-01
A k-core of a graph is a maximal connected subgraph in which every vertex is connected to at least k vertices in the subgraph. k-core decomposition is often used in large-scale network analysis, such as community detection, protein function prediction, visualization, and solving NP-hard problems on real networks efficiently, like maximal clique finding. In many real-world applications, networks change over time. As a result, it is essential to develop efficient incremental algorithms for dynamic graph data. In this paper, we propose a suite of incremental k-core decomposition algorithms for dynamic graph data. These algorithms locate a small subgraph that is guaranteed to contain the list of vertices whose maximum k-core values have changed and efficiently process this subgraph to update the k-core decomposition. We present incremental algorithms for both insertion and deletion operations, and propose auxiliary vertex state maintenance techniques that can further accelerate these operations. Our results show a significant reduction in runtime compared to non-incremental alternatives. We illustrate the efficiency of our algorithms on different types of real and synthetic graphs, at varying scales. Furthermore, for a graph of 16 million vertices, we observe relative throughputs reaching a million times that of the non-incremental algorithms.
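For reference, a static k-core decomposition can be computed by the classic peeling procedure (repeatedly remove a minimum-degree vertex); the incremental algorithms above maintain these same core numbers under edge insertions and deletions without recomputing from scratch. A minimal sketch of the static baseline, not the authors' implementation:

```python
from collections import defaultdict

def core_numbers(edges):
    """Peeling-style k-core decomposition of an undirected graph.
    Returns {vertex: core number}."""
    adj = defaultdict(set)
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    deg = {v: len(ns) for v, ns in adj.items()}
    core = {}
    remaining = set(adj)
    k = 0
    while remaining:
        v = min(remaining, key=deg.get)   # peel a minimum-degree vertex
        k = max(k, deg[v])                # core number never decreases
        core[v] = k
        remaining.remove(v)
        for u in adj[v]:                  # removing v lowers neighbors' degrees
            if u in remaining:
                deg[u] -= 1
    return core
```

On a triangle with a pendant vertex, the triangle's vertices form the 2-core while the pendant only belongs to the 1-core; a bucket-based variant of this peeling runs in O(E) time.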
Monitoring and Commissioning Verification Algorithms for CHP Systems
Brambley, Michael R.; Katipamula, Srinivas; Jiang, Wei
2008-03-31
This document provides the algorithms for CHP system performance monitoring and commissioning verification (CxV). It starts by presenting system-level and component-level performance metrics, followed by descriptions of algorithms for performance monitoring and commissioning verification that use the metrics presented earlier. Verification of commissioning is accomplished essentially by comparing actual measured performance to benchmarks for performance provided by the system integrator and/or component manufacturers. The results of these comparisons are then automatically interpreted to provide conclusions regarding whether the CHP system and its components have been properly commissioned and, where problems are found, guidance is provided for corrections. A discussion of uncertainty handling is then provided, followed by a description of how simulation models can be used to generate data for testing the algorithms. A model is described for simulating a CHP system consisting of a micro-turbine, an exhaust-gas heat recovery unit that produces hot water, an absorption chiller, and a cooling tower. The process for using this model to generate data for testing the algorithms for a selected set of faults is described. The next section applies the algorithms developed to CHP laboratory and field data to illustrate their use. The report concludes with a discussion of the need for laboratory testing of the algorithms on physical CHP systems and identification of the recommended next steps.
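As a minimal illustration of the benchmark-comparison idea only (not the report's actual algorithms), a single component check might compare a measured efficiency against its benchmark with an uncertainty band; the 5% band here is an illustrative assumption:

```python
def cx_verify(measured_eff, benchmark_eff, rel_uncertainty=0.05):
    """Flag a CHP component whose measured efficiency falls below its
    benchmark by more than the assumed measurement uncertainty band."""
    threshold = benchmark_eff * (1.0 - rel_uncertainty)
    return "pass" if measured_eff >= threshold else "investigate"
```

A shortfall inside the uncertainty band passes; a larger shortfall is flagged for investigation rather than declared a fault outright, mirroring the interpretation step described above.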
Impacts of Time Delays on Distributed Algorithms for Economic Dispatch
Yang, Tao; Wu, Di; Sun, Yannan; Lian, Jianming
2015-07-26
The economic dispatch problem (EDP) is an important problem in power systems. It can be formulated as an optimization problem whose objective is to minimize the total generation cost subject to the power balance constraint and generator capacity limits. Recently, several consensus-based algorithms have been proposed to solve the EDP in a distributed manner. However, the impacts of communication time delays on these distributed algorithms are not fully understood, especially when the communication network is directed, i.e., the information exchange is unidirectional. This paper investigates communication time delay effects on a distributed algorithm for directed communication networks. The algorithm has been tested by applying time delays to different types of information exchange. Several case studies are carried out to evaluate the effectiveness and performance of the algorithm in the presence of time delays in communication networks. It is found that time delays negatively affect the convergence rate and can even cause the algorithm to converge to an incorrect value or to fail to converge altogether.
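For context, consensus-based dispatch algorithms converge to the classical equal-incremental-cost solution of the EDP. A centralized sketch of that optimality condition, solved by bisection on the common incremental cost lambda (the quadratic cost coefficients and limits below are illustrative, not from the paper):

```python
def dispatch(gens, demand, tol=1e-6):
    """Solve the EDP for generators with cost a*p^2 + b*p and limits
    [pmin, pmax] by bisection on the common incremental cost lambda.
    gens: list of (a, b, pmin, pmax). Returns (outputs, lambda)."""
    def outputs(lam):
        out = []
        for a, b, pmin, pmax in gens:
            p = (lam - b) / (2.0 * a)          # from dC/dp = 2*a*p + b = lam
            out.append(min(max(p, pmin), pmax))  # clip at capacity limits
        return out
    lo, hi = 0.0, 1000.0                        # bracket for lambda
    while hi - lo > tol:
        lam = (lo + hi) / 2.0
        if sum(outputs(lam)) < demand:          # total output rises with lambda
            lo = lam
        else:
            hi = lam
    lam = (lo + hi) / 2.0
    return outputs(lam), lam
```

For two generators with costs 0.1p² + 2p and 0.2p² + p serving 30 MW, the balance condition gives lambda = 17/3 and outputs of about 18.33 MW and 11.67 MW; a delay-free consensus iteration would drive every local lambda estimate to this same value.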
Technical Basis Spent Nuclear Fuel (SNF) Project Radiation and Contamination Trending Program
KURTZ, J.E.
2000-05-10
This report documents the technical basis for the Spent Nuclear Fuel (SNF) Program radiation and contamination trending program. The program consists of standardized radiation and contamination surveys of the KE Basin, radiation surveys of the KW Basin, and radiation surveys of the Cold Vacuum Drying Facility (CVD), with the associated tracking. This report also discusses the remainder of the radiological areas within the SNFP that do not have standardized trending programs and the basis for not having this program in those areas.
Non-homogeneous solutions of a Coulomb Schrödinger equation as basis set for scattering problems
Del Punta, J. A.; Ambrosio, M. J.; Gasaneo, G.; Zaytsev, S. A.; Ancarani, L. U.
2014-05-15
We introduce and study two-body Quasi Sturmian functions which are proposed as basis functions for applications in three-body scattering problems. They are solutions of a two-body non-homogeneous Schrödinger equation. We present different analytic expressions, including asymptotic behaviors, for the pure Coulomb potential with a driven term involving either Slater-type or Laguerre-type orbitals. The efficiency of Quasi Sturmian functions as basis set is numerically illustrated through a two-body scattering problem.
Theoretical and experimental study on regenerative rotary displacer Stirling engine
Raggi, L.; Katsuta, Masafumi; Isshiki, Naotsugu; Isshiki, Seita
1997-12-31
Recently a quite new type of hot air engine called the rotary displacer engine, in which the displacer is a rotating disk enclosed in a cylinder, has been conceived and developed. The working gas, contained in a notch excavated in the disk, is heated and cooled alternately on account of the heat transferred through the enclosing cylinder, which is heated at one side and cooled at the opposite one. The gas temperature oscillations cause pressure fluctuations that extract mechanical power through a power piston. To attempt to increase the performance of this kind of engine, the authors propose three different regeneration methods. The first comprises two coaxial disks that, revolving in opposite directions, cause a temperature gradient on the cylinder wall and regenerative axial heat conduction through fins shaped on the cylinder inner wall. The other two methods are based on heat transferred by a closed circuit that in one case has a circulating liquid inside and in the other is formed by several heat pipes, each working at a different temperature. An engine based on the first principle, the Regenerative Tandem Contra-Rotary Displacer Stirling Engine, has been built and tested. In this paper experimental results with and without regeneration are reported, together with a detailed description of the unit. A basic explanation of the working principle of this engine and a theoretical analysis investigating the main parameters influencing the regenerative effect are presented. These new rotating displacer Stirling engines, owing to their simplicity, are expected to attain high rotational speeds, especially for applications as demonstration and hobby units.
THEORETICAL EVOLUTION OF OPTICAL STRONG LINES ACROSS COSMIC TIME
Kewley, Lisa J.; Dopita, Michael A.; Sutherland, Ralph; Leitherer, Claus; Dave, Romeel; Allen, Mark; Groves, Brent
2013-09-10
We use the chemical evolution predictions of cosmological hydrodynamic simulations with our latest theoretical stellar population synthesis, photoionization, and shock models to predict the strong line evolution of ensembles of galaxies from z = 3 to the present day. In this paper, we focus on the brightest optical emission-line ratios, [N II]/H{alpha} and [O III]/H{beta}. We use the optical diagnostic Baldwin-Phillips-Terlevich (BPT) diagram as a tool for investigating the spectral properties of ensembles of active galaxies. We use four redshift windows chosen to exploit new near-infrared multi-object spectrographs. We predict how the BPT diagram will appear in these four redshift windows given different sets of assumptions. We show that the position of star-forming galaxies on the BPT diagram traces the interstellar medium conditions and radiation field in galaxies at a given redshift. Galaxies containing an active galactic nucleus (AGN) form a mixing sequence with purely star-forming galaxies. This mixing sequence may change dramatically with cosmic time, due to the metallicity sensitivity of the optical emission lines. Furthermore, the position of the mixing sequence may probe metallicity gradients in galaxies as a function of redshift, depending on the size of the AGN narrow-line region. We apply our latest slow shock models for gas shocked by galactic-scale winds. We show that at high redshift, galactic wind shocks are clearly separated from AGN in line ratio space. Instead, shocks from galactic winds mimic high metallicity starburst galaxies. We discuss our models in the context of future large near-infrared spectroscopic surveys.
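At z = 0, the star-forming/AGN separation on the BPT diagram is commonly drawn with the Kewley et al. (2001) maximum-starburst line; a minimal classifier sketch using that published relation (for illustration only; the paper's point is precisely that such boundaries shift with redshift):

```python
def bpt_class(log_n2ha, log_o3hb):
    """Classify a galaxy on the [N II]/Ha vs [O III]/Hb BPT diagram using
    the Kewley et al. (2001) maximum-starburst curve:
    log([O III]/Hb) = 0.61 / (log([N II]/Ha) - 0.47) + 1.19."""
    if log_n2ha >= 0.47:          # right of the curve's asymptote: AGN-like
        return "AGN"
    kewley = 0.61 / (log_n2ha - 0.47) + 1.19
    return "AGN" if log_o3hb > kewley else "star-forming"
```

A typical low-metallicity star-forming galaxy (log [N II]/Ha around -0.5, log [O III]/Hb around 0) falls below the curve, while high-excitation objects above it are flagged as AGN-dominated.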
Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms
2002-05-01
We developed new pattern recognition (PR) algorithms based on a human visual perception model. We named these algorithms Visual Empirical Region of Influence (VERI) algorithms. To compare the new algorithms' effectiveness against other PR algorithms, we benchmarked their clustering capabilities with a standard set of two-dimensional data that is well known in the PR community. The VERI algorithm succeeded in clustering all the data correctly. No existing algorithm had previously clustered all the patterns in the data set successfully. The commands to execute VERI algorithms are quite difficult to master when executed from a DOS command line, and the algorithm requires several parameters to operate correctly. From our own experience we realized that if we wanted to provide a new data analysis tool to the PR community, we would have to make the tool powerful, yet easy and intuitive to use. That was our motivation for developing graphical user interfaces (GUIs) to the VERI algorithms. We developed GUIs to control the VERI algorithm in a single-pass mode and in an optimization mode. We also developed a visualization technique that allows users to graphically animate and visually inspect multi-dimensional data after it has been classified by the VERI algorithms. The visualization package is integrated into the single-pass interface. Both the single-pass interface and the optimization interface are part of the PR software package we have developed and make available to other users. The single-pass mode only finds PR results for the sets of features in the data set that are manually requested by the user. The optimization mode uses a brute-force method of searching through the combinations of features in a data set for features that produce the
Comparison of Theoretical Efficiencies of Multi-junction Concentrator Solar Cells
Kurtz, S.; Myers, D.; McMahon, W. E.; Geisz, J.; Steiner, M.
2008-01-01
Champion concentrator cell efficiencies have surpassed 40% and now many are asking whether the efficiencies will surpass 50%. Theoretical efficiencies of >60% are described for many approaches, but there is often confusion about the theoretical efficiency for a specific structure. The detailed balance approach to calculating theoretical efficiency gives an upper bound that can be independent of material parameters and device design. Other models predict efficiencies that are closer to those that have been achieved. Changing reference spectra and the choice of concentration further complicate comparison of theoretical efficiencies. This paper provides a side-by-side comparison of theoretical efficiencies of multi-junction solar cells calculated with the detailed balance approach and a common one-dimensional-transport model for different spectral and irradiance conditions. Also, historical experimental champion efficiencies are compared with the theoretical efficiencies.
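As a simplified illustration of the detailed balance idea (the single-junction "ultimate efficiency" limit, in which every photon above the gap is absorbed and delivers exactly the gap energy, under a 6000 K blackbody sun; this is far cruder than the multi-junction models compared in the paper, and all numerical settings below are illustrative):

```python
import math

K_B_EV = 8.617333e-5  # Boltzmann constant in eV/K

def ultimate_efficiency(eg, t_sun=6000.0, e_max=12.0, n=6000):
    """Detailed-balance 'ultimate' efficiency for band gap eg (in eV):
    output = eg * (photon flux above eg), input = total radiated power."""
    kt = K_B_EV * t_sun
    de = e_max / n
    grid = [(i + 0.5) * de for i in range(n)]       # midpoint integration grid

    def photon_flux(e):
        # Planck photon flux density is proportional to E^2 / (exp(E/kT) - 1)
        return e * e / math.expm1(e / kt)

    p_in = sum(e * photon_flux(e) * de for e in grid)
    p_out = eg * sum(photon_flux(e) * de for e in grid if e >= eg)
    return p_out / p_in
```

Sweeping the band gap reproduces the textbook result that this upper bound peaks near 44% for a gap around 1.1 eV; real detailed-balance limits (which also subtract radiative recombination) and one-dimensional transport models sit progressively below it, which is the gap the paper quantifies.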
New Theoretical Model of the Complex Edge Region of Fusion Plasmas...
New Theoretical Model of the Complex Edge Region of Fusion Plasmas Proves Accurate Fusion Energy Sciences (FES) FES Home About Research Facilities Science Highlights Benefits of ...
Numerical Study of Velocity Shear Stabilization of 3D and Theoretical...
Office of Scientific and Technical Information (OSTI)
We studied the feasibility of resonantly driving GAMs in tokamaks. A numerical simulation ... Theoretical support was provided for the Maryland Centrifugal Experiment, funded in a ...
Theoretical investigations of defects in a Si-based digital ferromagnetic heterostructure - a spintronic material
Office of Scientific and Technical Information (OSTI)
Impacts of the Niigataken Chūetsu-Oki Earthquake to the Kashiwazaki-Kariwa Nuclear Power Plant, Post-Earthquake Response, and Lessons Learned: U.S. Perspective for Design Basis Earthquakes and Beyond Design Basis Earthquakes
2D/3D registration algorithm for lung brachytherapy
Zvonarev, P. S.; Farrell, T. J.; Hunter, R.; Wierzbicki, M.; Hayward, J. E.; Sur, R. K.
2013-02-15
Purpose: A 2D/3D registration algorithm is proposed for registering orthogonal x-ray images with a diagnostic CT volume for high dose rate (HDR) lung brachytherapy. Methods: The algorithm utilizes a rigid registration model based on a pixel/voxel intensity matching approach. To achieve accurate registration, a robust similarity measure combining normalized mutual information, image gradient, and intensity difference was developed. The algorithm was validated using simple-body and anthropomorphic phantoms. Transfer catheters were placed inside the phantoms to simulate the unique image features observed during treatment. The algorithm's sensitivity to various degrees of initial misregistration and to the presence of foreign objects, such as ECG leads, was evaluated. Results: The mean registration error was 2.2 and 1.9 mm for the simple-body and anthropomorphic phantoms, respectively. The error was comparable to the interoperator catheter digitization error of 1.6 mm. Preliminary analysis of data acquired from four patients indicated a mean registration error of 4.2 mm. Conclusions: Results obtained using the proposed algorithm are clinically acceptable, especially considering the complications normally encountered when imaging during lung HDR brachytherapy.
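The combined similarity measure can be illustrated with a toy sketch. The weights, the use of plain (rather than normalized) mutual information, and the 1-D intensity profiles below are illustrative assumptions, not the authors' implementation:

```python
import math
from collections import Counter

def mutual_information(a, b, bins=8):
    """Histogram-based mutual information between two intensity lists."""
    lo_a, hi_a = min(a), max(a)
    lo_b, hi_b = min(b), max(b)
    qa = [min(int((x - lo_a) / (hi_a - lo_a + 1e-12) * bins), bins - 1) for x in a]
    qb = [min(int((x - lo_b) / (hi_b - lo_b + 1e-12) * bins), bins - 1) for x in b]
    n = len(a)
    pab, pa, pb = Counter(zip(qa, qb)), Counter(qa), Counter(qb)
    mi = 0.0
    for (i, j), c in pab.items():
        p = c / n
        mi += p * math.log(p * n * n / (pa[i] * pb[j]))
    return mi

def similarity(a, b, w=(0.5, 0.3, 0.2)):
    """Weighted mix of mutual information, gradient agreement, and
    (negative) mean intensity difference for two 1-D profiles."""
    grad = lambda v: [v[i + 1] - v[i] for i in range(len(v) - 1)]
    ga, gb = grad(a), grad(b)
    # gradient term: fraction of positions where gradients agree in sign
    grad_term = sum(1 for x, y in zip(ga, gb) if x * y > 0) / max(len(ga), 1)
    diff_term = -sum(abs(x - y) for x, y in zip(a, b)) / len(a)
    return w[0] * mutual_information(a, b) + w[1] * grad_term + w[2] * diff_term

img = [0, 1, 4, 9, 4, 1, 0]
shifted = [1, 4, 9, 4, 1, 0, 0]
print(similarity(img, img) > similarity(img, shifted))  # -> True
```

In 2D/3D registration the same idea would be applied between a digitally reconstructed radiograph and the acquired x-ray inside an optimization loop over the rigid-body parameters.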
Mesh Algorithms for PDE with Sieve I: Mesh Distribution
Knepley, Matthew G.; Karpeev, Dmitry A.
2009-01-01
We have developed a new programming framework, called Sieve, to support parallel numerical partial differential equation(s) (PDE) algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or arrows, the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete description of an algorithm to encode a mesh partition and then distribute a mesh, which is independent of the mesh dimension, element shape, or embedding. Moreover, data associated with the mesh can be similarly distributed with exactly the same algorithm. The use of a high level of abstraction within the Sieve leads to several benefits in terms of code reuse, simplicity, and extensibility. We discuss these benefits and compare our approach to other existing mesh libraries.
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Chitra Sivaraman; Connor Flynn
2004-10-01
1-minute Raman Lidar: aerosol scattering ratio and backscattering coefficient profiles, from first Ferrare algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Newsom, Rob; Goldsmith, John
1998-03-01
10-minute Raman Lidar: aerosol scattering ratio and backscattering coefficient profiles, from first Ferrare algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Laura Riihimaki
1997-03-21
SIRS: derived, correction of downwelling shortwave diffuse hemispheric measurements using Dutton and full algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Chitra Sivaraman; Connor Flynn
1998-03-01
10-minute Raman Lidar: aerosol extinction profiles and aerosol optical thickness, from first Ferrare algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Chitra Sivaraman; Connor Flynn
2004-10-01
10-second Raman Lidar: aerosol scattering ratio and backscattering coefficient profiles, from first Ferrare algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Chitra Sivaraman; Connor Flynn
2004-10-01
1-minute Raman Lidar: aerosol extinction profiles and aerosol optical thickness, from first Ferrare algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Sivaraman, Chitra; Flynn, Connor
2004-10-01
2-minute Raman Lidar: aerosol scattering ratio and backscattering coefficient profiles, from first Ferrare algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Chitra Sivaraman; Connor Flynn
10-second Raman Lidar: aerosol scattering ratio and backscattering coefficient profiles, from first Ferrare algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Chitra Sivaraman; Connor Flynn
10-minute Raman Lidar: aerosol scattering ratio and backscattering coefficient profiles, from first Ferrare algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Chitra Sivaraman; Connor Flynn
10-minute Raman Lidar: aerosol extinction profiles and aerosol optical thickness, from first Ferrare algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Sivaraman, Chitra; Flynn, Connor
2-minute Raman Lidar: aerosol scattering ratio and backscattering coefficient profiles, from first Ferrare algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Chitra Sivaraman; Connor Flynn
1-minute Raman Lidar: aerosol extinction profiles and aerosol optical thickness, from first Ferrare algorithm
DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]
Chitra Sivaraman; Connor Flynn
1-minute Raman Lidar: aerosol scattering ratio and backscattering coefficient profiles, from first Ferrare algorithm
Time-dependent density functional theory quantum transport simulation in non-orthogonal basis
Kwok, Yan Ho; Xie, Hang; Yam, Chi Yung; Chen, Guan Hua; Zheng, Xiao
2013-12-14
Building on earlier work on the hierarchical equations of motion for quantum transport, we present in this paper a first-principles scheme for time-dependent quantum transport by combining time-dependent density functional theory (TDDFT) and Keldysh's non-equilibrium Green's function formalism. This scheme goes beyond the wide-band-limit approximation and is directly applicable to the case of a non-orthogonal basis without the need for a basis transformation. The overlap between the basis functions in the lead and the device region is treated properly by including it in the self-energy, and it can be shown that this approach is equivalent to a lead-device orthogonalization. The scheme has been implemented at both the TDDFT and density functional tight-binding levels. Simulation results are presented to demonstrate our method, and a comparison with the wide-band-limit approximation is made. Finally, the sparsity of the matrices and the computational complexity of the method are analyzed.
Hamiltonian Light-Front Field Theory in a Basis Function Approach
Vary, J.P.; Honkanen, H.; Li, Jun; Maris, P.; Brodsky, S.J.; Harindranath, A.; de Teramond, G.F.; Sternberg, P.; Ng, E.G.; Yang, C.
2009-05-15
Hamiltonian light-front quantum field theory constitutes a framework for the non-perturbative solution of invariant masses and correlated parton amplitudes of self-bound systems. By choosing the light-front gauge and adopting a basis function representation, we obtain a large, sparse, Hamiltonian matrix for mass eigenstates of gauge theories that is solvable by adapting the ab initio no-core methods of nuclear many-body theory. Full covariance is recovered in the continuum limit, the infinite matrix limit. There is considerable freedom in the choice of the orthonormal and complete set of basis functions with convenience and convergence rates providing key considerations. Here, we use a two-dimensional harmonic oscillator basis for transverse modes that corresponds with eigensolutions of the soft-wall AdS/QCD model obtained from light-front holography. We outline our approach, present illustrative features of some non-interacting systems in a cavity and discuss the computational challenges.
Analysis of the Multi-Phase Copying Garbage Collection Algorithm
Podhorszki, Norbert
2009-01-01
The multi-phase copying garbage collection was designed to avoid the need for the large amount of reserved memory usually required by copying garbage collection algorithms. The collection is performed in multiple phases using the available free memory. This paper proves that the number of phases depends on the size of the reserved memory and on the ratio of garbage to accessible objects. The performance of the implemented algorithm is tested in a fine-grained parallel Prolog system. We find that reserving only 10% of memory for garbage collection is sufficient for good performance in practice. Additionally, an improvement of the generic algorithm specifically for the tested parallel Prolog system is described.
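The stated dependence of the phase count on reserved memory and the live/garbage ratio can be sketched as a back-of-envelope formula. This is an illustration of the claimed relationship, not the paper's implementation:

```python
import math

def copying_phases(heap_size, live_fraction, reserved_size):
    """Estimate the number of copying phases needed.

    Live objects are copied in phases, each phase bounded by the
    reserved copy-target space, so phases = ceil(live / reserved).

    heap_size     -- total heap size (any consistent unit)
    live_fraction -- fraction of the heap holding accessible objects
    reserved_size -- memory reserved as the copy target for each phase
    """
    live = heap_size * live_fraction
    return max(1, math.ceil(live / reserved_size))

# With 10% of a 100-unit heap reserved and half the objects live,
# the live data is copied in ceil(50 / 10) = 5 phases.
print(copying_phases(100, 0.5, 10))  # -> 5
```

A single-phase copying collector would instead need reserved space equal to the entire live set, which is the overhead the multi-phase scheme avoids.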
EAGLE: 'EAGLE Is an Algorithmic Graph Library for Exploration'
Energy Science and Technology Software Center (OSTI)
2015-01-16
The Resource Description Framework (RDF) and SPARQL Protocol and RDF Query Language (SPARQL) were introduced about a decade ago to enable flexible, schema-free data interchange on the Semantic Web. Today data scientists use the framework as a scalable graph representation for integrating, querying, exploring, and analyzing data sets hosted at different sources. With increasing adoption, the need for graph mining capabilities for the Semantic Web has emerged. Today there are no tools to conduct "graph mining" on RDF standard data sets. We address that need through implementation of popular iterative graph mining algorithms (triangle count, connected component analysis, degree distribution, diversity degree, PageRank, etc.). We implement these algorithms as SPARQL queries, wrapped within Python scripts, and call our software tool EAGLE. In RDF style, EAGLE stands for "EAGLE 'Is an' Algorithmic Graph Library for Exploration." EAGLE is like 'MATLAB' for 'Linked Data.'
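The abstract lists triangle counting among the kernels EAGLE implements as SPARQL queries. A plain-Python version of the same kernel (a stand-in, since executing SPARQL requires an RDF store) looks like:

```python
def triangle_count(edges):
    """Count undirected triangles in a graph given as (u, v) pairs."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    count = 0
    # deduplicate edges regardless of orientation
    for u, v in {tuple(sorted(e)) for e in edges}:
        # every common neighbor of u and v closes a triangle on edge (u, v)
        count += len(adj[u] & adj[v])
    return count // 3  # each triangle is counted once per edge

print(triangle_count([("a", "b"), ("b", "c"), ("c", "a"), ("c", "d")]))  # -> 1
```

In the EAGLE setting the same join-of-three-edge-patterns would be expressed as a SPARQL `SELECT` over the graph's triple store rather than over Python sets.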
Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment
Energy Science and Technology Software Center (OSTI)
1997-08-05
A derivative-free, grid-refinement algorithm for nonlinear optimization was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches; most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.
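A minimal sketch of the derivative-free grid-refinement idea follows. The grid size and zoom rule are illustrative assumptions, and unlike OPTIMIZE, which reports extremal regions, this toy version returns a single point:

```python
def grid_refine(f, lo, hi, levels=6, pts=11):
    """Minimize f on [lo, hi] by repeatedly evaluating a uniform grid
    and zooming in around the best grid point; no derivatives are used."""
    for _ in range(levels):
        step = (hi - lo) / (pts - 1)
        xs = [lo + i * step for i in range(pts)]
        best = min(xs, key=f)
        # shrink the search interval to the cells adjacent to the best point
        lo, hi = max(lo, best - step), min(hi, best + step)
    return best

x = grid_refine(lambda x: (x - 1.7) ** 2, -10.0, 10.0)
print(abs(x - 1.7) < 1e-3)  # -> True
```

Because each level only needs function values on a grid, the evaluations are independent and trivially parallelizable, which matches the abstract's claim that the method is inherently parallel.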
Fast Nonlinear Seismic SSI Analysis for Low-Rise Concrete Shearwall Buildings for Design-Basis and Beyond Design Applications
Protocol for Enhanced Evaluations of Beyond Design Basis Events Supporting Implementation of Operating Experience Report 2013-01
Gradient maintenance: A new algorithm for fast online replanning
Ahunbay, Ergun E.; Li, X. Allen
2015-06-15
Purpose: Clinical use of online adaptive replanning has been hampered by the impractically long time required to delineate volumes based on the image of the day. The authors propose a new replanning algorithm, named gradient maintenance (GM), which does not require the delineation of organs at risk (OARs), and can enhance automation, drastically reducing planning time and improving consistency and throughput of online replanning. Methods: The proposed GM algorithm is based on the hypothesis that if the dose gradient toward each OAR in the daily anatomy can be maintained the same as that in the original plan, the intended quality of the original plan will be preserved in the adaptive plan. The algorithm requires a series of partial concentric rings (PCRs) to be automatically generated around the target toward each OAR on the planning and daily images. The PCRs are used in the daily optimization objective function. The PCR dose constraints are generated with dose-volume data extracted from the original plan. To demonstrate this idea, GM plans generated using daily images acquired with an in-room CT were compared to regular optimization and image guided radiation therapy repositioning plans for representative prostate and pancreatic cancer cases. Results: The adaptive replanning using the GM algorithm, requiring only the target contour from the CT of the day, can be completed within 5 min without using high-power hardware. The obtained adaptive plans were almost as good as the regular optimization plans and were better than the repositioning plans for the cases studied. Conclusions: The newly proposed GM replanning algorithm, requiring only target delineation, not full delineation of OARs, substantially increased planning speed for online adaptive replanning. The preliminary results indicate that the GM algorithm may be a solution to improve the ability for automation and may be especially suitable for sites with small-to-medium size targets surrounded by
Basis for Identification of Disposal Options for R and D for Spent Nuclear Fuel and High-Level Waste
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
The Used Fuel Disposition campaign (UFD) is selecting a set of geologic media for further study that spans a suite of behavior characteristics that impose a broad range of potential conditions on the design of the repository, the engineered
Technical Basis Spent Nuclear Fuel (SNF) Project Radiation and Contamination Trending Program
ELGIN, J.C.
2000-10-02
This report documents the technical basis for the Spent Nuclear Fuel (SNF) Program radiation and contamination trending program. The program consists of standardized radiation and contamination surveys of the KE Basin, radiation surveys of the KW Basin, radiation surveys of the Cold Vacuum Drying Facility (CVD), and radiation surveys of the Canister Storage Building (CSB), with the associated tracking. This report also discusses the remainder of radiological areas within the SNFP that do not have standardized trending programs and the basis for not having this program in those areas.
Electron Anomalous Magnetic Moment in Basis Light-Front Quantization Approach
Zhao, Xingbo; Honkanen, Heli; Maris, Pieter; Vary, James P.; Brodsky, Stanley J.; /SLAC
2012-02-17
We apply the Basis Light-Front Quantization (BLFQ) approach to the Hamiltonian field theory of Quantum Electrodynamics (QED) in free space. We solve for the mass eigenstates corresponding to an electron interacting with a single photon in light-front gauge. Based on the resulting non-perturbative ground state light-front amplitude we evaluate the electron anomalous magnetic moment. The numerical results from extrapolating to the infinite basis limit reproduce the perturbative Schwinger result with relative deviation less than 1.2%. We report significant improvements over previous works including the development of analytic methods for evaluating the vertex matrix elements of QED.
Universal basis of two-center functions. Test computations of certain diatomic molecules and ions
Kirnos, V.F.; Samsonov, B.F.; Cheglokov, E.I.
1987-05-01
It is shown that the basis of two-center functions is universal. The dependence on the charges of the nuclei of the atoms comprising the molecule and on the internuclear spacing is separated explicitly in the integrals used in analyzing diatomic molecules. Once constructed, the basis integrals permitted rapid and effective computation of the ground-state potential curves for a number of electron systems: H/sub 2/, He/sub 2//sup 2 +/, HeH/sup +/, He/sub 2/, LiH, Li/sub 2/, HeB/sup +/, Be/sub 2/.
Basis for Section 3116 Determination for Salt Waste Disposal at the
Office of Environmental Management (EM)
Analytic matrix elements for the two-electron atomic basis with logarithmic terms
Liverts, Evgeny Z.; Barnea, Nir
2014-08-01
The two-electron problem for the helium-like atoms in the S-state is considered. The basis containing the integer powers of ln r, where r is a radial variable of the Fock expansion, is studied. In this basis, the analytic expressions for the matrix elements of the corresponding Hamiltonian are presented. These expressions include only elementary and special functions, which enables very fast and accurate computation of the matrix elements. The decisive contribution of the correct logarithmic terms to the behavior of the two-electron wave function in the vicinity of the triple-coalescence point is reaffirmed.
AN ALGORITHM FOR PARALLEL SN SWEEPS ON UNSTRUCTURED MESHES
S. D. PAUTZ
2000-12-01
We develop a new algorithm for performing parallel S{sub n} sweeps on unstructured meshes. The algorithm uses a low-complexity list ordering heuristic to determine a sweep ordering on any partitioned mesh. For typical problems and with ''normal'' mesh partitionings we have observed nearly linear speedups on up to 126 processors. This is an important and desirable result, since although analyses of structured meshes indicate that parallel sweeps will not scale with normal partitioning approaches, we do not observe any severe asymptotic degradation in the parallel efficiency with modest ({le}100) levels of parallelism. This work is a fundamental step in the development of parallel S{sub n} methods.
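The abstract does not spell out the low-complexity list-ordering heuristic, so as a hedge, here is a generic dependence-respecting ordering (a topological sort of the cell-to-cell flux graph for one sweep direction) that captures what any valid sweep ordering must provide:

```python
from collections import deque

def sweep_order(n_cells, downstream):
    """Order cells for a transport sweep.

    downstream[c] lists the cells that receive flux from cell c for the
    chosen sweep direction. Returns an ordering in which every cell
    appears after all of its upstream cells (Kahn's algorithm).
    """
    indeg = [0] * n_cells
    for c in range(n_cells):
        for d in downstream[c]:
            indeg[d] += 1
    ready = deque(c for c in range(n_cells) if indeg[c] == 0)
    order = []
    while ready:
        c = ready.popleft()
        order.append(c)
        for d in downstream[c]:
            indeg[d] -= 1
            if indeg[d] == 0:
                ready.append(d)
    return order

# a tiny 4-cell mesh: cell 0 feeds 1 and 2, which both feed 3
print(sweep_order(4, {0: [1, 2], 1: [3], 2: [3], 3: []}))  # -> [0, 1, 2, 3]
```

On a partitioned mesh, each processor would apply such an ordering to its local cells and exchange boundary fluxes with neighbors, which is where the paper's scalability analysis comes in.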
Incremental Clustering Algorithm For Earth Science Data Mining
Vatsavai, Raju
2009-01-01
Remote sensing data plays a key role in understanding complex geographic phenomena. Clustering is a useful tool for discovering interesting patterns and structures within multivariate geospatial data. One of the key issues in clustering is the specification of the appropriate number of clusters, which is not obvious in many practical situations. In this paper we provide an extension of the G-means algorithm which automatically learns the number of clusters present in the data and avoids overestimation of the number of clusters. Experimental evaluation on simulated and remotely sensed image data shows the effectiveness of our algorithm.
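The G-means idea the paper extends can be condensed as: keep a cluster if its points look Gaussian, otherwise split it with 2-means and recurse. The sketch below is illustrative; in particular, the normality check is a crude within-one-standard-deviation stand-in, not the Anderson-Darling test real G-means uses:

```python
from statistics import NormalDist, fmean, pstdev

def two_means(xs, iters=20):
    """Plain 2-means on a 1-D list; returns one or two clusters."""
    c1, c2 = min(xs), max(xs)
    a = b = [xs[0]]
    for _ in range(iters):
        a = [x for x in xs if abs(x - c1) <= abs(x - c2)]
        b = [x for x in xs if abs(x - c1) > abs(x - c2)]
        if not a or not b:
            return [xs]
        c1, c2 = fmean(a), fmean(b)
    return [a, b]

def looks_gaussian(xs):
    # crude stand-in for a normality test: a Gaussian cluster keeps
    # about 68% of its points within one standard deviation of the mean
    mu, sd = fmean(xs), pstdev(xs)
    return sum(abs(x - mu) <= sd for x in xs) / len(xs) >= 0.60

def g_means(xs):
    if len(xs) < 8 or looks_gaussian(xs):
        return [xs]
    parts = two_means(xs)
    if len(parts) == 1:
        return parts
    return [c for p in parts for c in g_means(p)]

def gauss_sample(mu, n=100):  # deterministic quantile sample of N(mu, 1)
    return [NormalDist(mu, 1).inv_cdf((i + 0.5) / n) for i in range(n)]

data = gauss_sample(0) + gauss_sample(10)
print(len(g_means(data)))  # learns that there are 2 clusters
```

The paper's contribution concerns avoiding the overestimation such recursive splitting can produce; the sketch only shows the baseline growth loop.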
Structure-preserving Geometric Algorithms & Exascale Computing | Princeton Plasma Physics Lab
U.S. Department of Energy (DOE) all webpages (Extended Search)
November 16, 2016, 4:15pm to 5:30pm, Colloquia, MBG Auditorium. Dr. Hong Qin, PPPL and University of Science and Technology of China. It is difficult for the standard numerical algorithms currently adopted by the plasma physics community to meet the long-term accuracy and fidelity requirements in large-scale numerical studies of the multi-scale, complex dynamics of plasmas in space and laboratory. To overcome this
COLLOQUIUM: Introduction to Quantum Algorithms | Princeton Plasma Physics Lab
U.S. Department of Energy (DOE) all webpages (Extended Search)
December 9, 2015, 4:15pm to 5:30pm, MBG Auditorium. Dr. Nadya Shirokova, University of Santa Clara. Quantum computers are not an abstraction anymore: Google, NASA, and USRA recently announced the formation of the Quantum Artificial Intelligence Lab, equipped with a 1,000-qubit quantum computer. In this talk we will focus on quantum algorithms such as Deutsch's, Shor's, and Grover's, and will discuss why they are faster than classical ones. We will also
ITP Steel: Theoretical Minimum Energies to Produce Steel for Selected Conditions, March 2000
The absolute theoretical minimum energies to produce liquid steel from idealized scrap (100% Fe) and ore (100% Fe2O3) are much lower than consumed in practice, as are the theoretical minimum energies to roll the steel into its final shape.
SPEQTACLE: An automated generalized fuzzy C-means algorithm for tumor delineation in PET
Lapuyade-Lahorgue, Jérôme; Visvikis, Dimitris; Hatt, Mathieu; Pradier, Olivier; Cheze Le Rest, Catherine
2015-10-15
Purpose: Accurate tumor delineation in positron emission tomography (PET) images is crucial in oncology. Although recent methods achieved good results, there is still room for improvement regarding tumors with complex shapes, low signal-to-noise ratio, and high levels of uptake heterogeneity. Methods: The authors developed and evaluated an original clustering-based method called spatial positron emission quantification of tumor - Automatic Lp-norm estimation (SPEQTACLE), based on the fuzzy C-means (FCM) algorithm with a generalization exploiting a Hilbertian norm to more accurately account for the fuzzy and non-Gaussian distributions of PET images. An automatic and reproducible estimation scheme of the norm on an image-by-image basis was developed. Robustness was assessed by studying the consistency of results obtained on multiple acquisitions of the NEMA phantom on three different scanners with varying acquisition parameters. Accuracy was evaluated using classification errors (CEs) on simulated and clinical images. SPEQTACLE was compared to another FCM implementation, fuzzy local information C-means (FLICM), and fuzzy locally adaptive Bayesian (FLAB). Results: SPEQTACLE demonstrated a level of robustness similar to FLAB (variability of 14% ± 9% vs 14% ± 7%, p = 0.15) and higher than FLICM (45% ± 18%, p < 0.0001), and improved accuracy with lower CE (14% ± 11%) over both FLICM (29% ± 29%) and FLAB (22% ± 20%) on simulated images. Improvement was significant for the more challenging cases with CE of 17% ± 11% for SPEQTACLE vs 28% ± 22% for FLAB (p = 0.009) and 40% ± 35% for FLICM (p < 0.0001). For the clinical cases, SPEQTACLE outperformed FLAB and FLICM (15% ± 6% vs 37% ± 14% and 30% ± 17%, p < 0.004). Conclusions: SPEQTACLE benefitted from the fully automatic estimation of the norm on a case-by-case basis. This promising approach will be extended to multimodal images and multiclass estimation in future developments.
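For reference, the standard FCM iteration that SPEQTACLE generalizes can be sketched as follows, shown in 1-D with the usual Euclidean distance and fuzzifier m = 2 rather than the paper's automatically estimated Hilbertian norm:

```python
def fcm(xs, c=2, m=2.0, iters=100):
    """Fuzzy C-means on a 1-D dataset; returns centers and memberships."""
    lo, hi = min(xs), max(xs)
    # initialize centers spread across the data range
    centers = [lo + (hi - lo) * (j + 0.5) / c for j in range(c)]
    u = [[0.0] * c for _ in xs]
    for _ in range(iters):
        # membership update: u_ij = 1 / sum_k (d_ij / d_ik)^(2/(m-1))
        for i, x in enumerate(xs):
            d = [abs(x - cj) + 1e-12 for cj in centers]
            for j in range(c):
                u[i][j] = 1.0 / sum((d[j] / dk) ** (2 / (m - 1)) for dk in d)
        # center update: mean weighted by u_ij^m
        for j in range(c):
            w = [u[i][j] ** m for i in range(len(xs))]
            centers[j] = sum(wi * x for wi, x in zip(w, xs)) / sum(w)
    return centers, u

centers, u = fcm([0.0, 0.1, 0.2, 5.0, 5.1, 5.2])
print(sorted(round(cc, 1) for cc in centers))  # -> [0.1, 5.1]
```

SPEQTACLE's generalization replaces the distance in the membership update with an Lp-style norm whose parameter is estimated per image, which is what lets it adapt to non-Gaussian uptake distributions.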
Madduri, Kamesh; Bader, David A.
2009-02-15
Graph-theoretic abstractions are extensively used to analyze massive data sets. Temporal data streams from socioeconomic interactions, social networking web sites, communication traffic, and scientific computing can be intuitively modeled as graphs. We present the first study of novel high-performance combinatorial techniques for analyzing large-scale information networks, encapsulating dynamic interaction data in the order of billions of entities. We present new data structures to represent dynamic interaction networks, and discuss algorithms for processing parallel insertions and deletions of edges in small-world networks. With these new approaches, we achieve an average performance rate of 25 million structural updates per second and a parallel speedup of nearly 28 on a 64-way Sun UltraSPARC T2 multicore processor, for insertions and deletions to a small-world network of 33.5 million vertices and 268 million edges. We also design parallel implementations of fundamental dynamic graph kernels related to connectivity and centrality queries. Our implementations are freely distributed as part of the open-source SNAP (Small-world Network Analysis and Partitioning) complex network analysis framework.
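As a minimal illustration of the kind of dynamic interaction network described above (not SNAP's actual data structures, which are highly optimized parallel implementations), an adjacency-set structure with insert, delete, and a connectivity query might look like:

```python
from collections import deque

class DynamicGraph:
    """Undirected graph supporting edge insertions and deletions."""

    def __init__(self):
        self.adj = {}

    def insert(self, u, v):
        self.adj.setdefault(u, set()).add(v)
        self.adj.setdefault(v, set()).add(u)

    def delete(self, u, v):
        self.adj.get(u, set()).discard(v)
        self.adj.get(v, set()).discard(u)

    def connected(self, s, t):
        # breadth-first search; high-performance systems would instead
        # maintain an incremental connectivity index
        seen, q = {s}, deque([s])
        while q:
            u = q.popleft()
            if u == t:
                return True
            for w in self.adj.get(u, ()):
                if w not in seen:
                    seen.add(w)
                    q.append(w)
        return False

g = DynamicGraph()
for e in [(1, 2), (2, 3), (4, 5)]:
    g.insert(*e)
print(g.connected(1, 3), g.connected(1, 5))  # -> True False
g.insert(3, 4)
print(g.connected(1, 5))  # -> True
```

The paper's contribution is making exactly these operations scale to hundreds of millions of edges with parallel batched updates; the sketch only fixes the interface.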
CRAD, Safety Basis- Y-12 Enriched Uranium Operations Oxide Conversion Facility
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for a January 2005 assessment of the Safety Basis at the Y-12 - Enriched Uranium Operations Oxide Conversion Facility.
CRAD, Safety Basis- Oak Ridge National Laboratory High Flux Isotope Reactor
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for a February 2007 assessment of the Safety Basis in preparation for restart of the Oak Ridge National Laboratory High Flux Isotope Reactor.
CRAD, Safety Basis- Oak Ridge National Laboratory High Flux Isotope Reactor Contractor ORR
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for a February 2007 assessment of the Safety Basis portion of an Operational Readiness Review of the Oak Ridge National Laboratory High Flux Isotope Reactor.
Preliminary tank characterization report for single-shell tank 241-A-104: best basis inventory
Hodgson, K.M.
1997-07-01
An effort is underway to provide waste inventory estimates that will serve as standard characterization source terms for the various waste management activities. As part of this effort, an evaluation of available information for single-shell tank 241-A-104 was performed, and a best-basis inventory was established. This work follows the methodology that was established by the standard inventory task.
CRAD, Safety Basis- Los Alamos National Laboratory TA 55 SST Facility
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for an assessment of the Safety Basis at the Los Alamos National Laboratory TA 55 SST Facility.
MANGAN, D.
2003-03-20
The purpose of this Technical Basis Document is to determine the consequences and frequency of aboveground structure failures. These failures include drops of contained equipment, such as a pump, from an SST or DST, and a crane failure resulting in a load drop onto a HEPA filter. These failures can result in an uncontrolled release of radiological and toxicological material.
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for an assessment of the Safety Basis portion of an Operational Readiness Review at the Los Alamos National Laboratory Waste Characterization, Reduction, and Repackaging Facility.
CRAD, Safety Basis- Oak Ridge National Laboratory TRU ALPHA LLWT Project
A section of Appendix C to DOE G 226.1-2 "Federal Line Management Oversight of Department of Energy Nuclear Facilities." Consists of Criteria Review and Approach Documents (CRADs) used for a November 2003 assessment of the Safety Basis portion of an Operational Readiness Review of the Oak Ridge National Laboratory TRU ALPHA LLWT Project.
The Haldane–Shastry spin chain and the topological basis realization
Sun, Chunfang; Gou, Lidan; Wang, Gangcheng; Du, Guijiao; Zhou, Chengcheng; Xue, Kang
2013-06-15
In this paper, based on the topological basis states, we investigate the Hamiltonian family (H{sub 2}, H{sub 3}, H{sub 4}) of a closed four-qubit Haldane–Shastry spin chain. Not only the two-qubit interaction form, but also the three-qubit and four-qubit interaction forms are presented in terms of spin operators. Meanwhile, we explore some particular properties of the topological basis states in these systems. With Yangian algebra, the symmetry of the systems and the transitions between the eigenstates have been investigated. We find a particularly useful property of the Y(sl(2)) operators (J{sub ±}, J{sub 3}): they can describe the transitions between the spin singlet state and the spin triplet states. Furthermore, we construct a new Hamiltonian whose energy degeneracies can be changed by adjusting the strengths of the two-qubit, three-qubit, and four-qubit interactions and the external magnetic field. -- Highlights: • We study the Haldane–Shastry model based on the topological basis realization. • Useful effects of Yangian operators are found. • We explore some particular properties of the topological basis in these systems. • We construct a new Hamiltonian whose energy degeneracy can be changed.
Margin of Safety Definition and Examples Used in Safety Basis Documents and the USQ Process
Beaulieu, R. A.
2013-10-03
The Nuclear Safety Management final rule, 10 CFR 830, provides an undefined term, margin of safety (MOS). Safe harbors listed in 10 CFR 830, Table 2, such as DOE-STD-3009, use but do not define the term, and this lack of definition has created the need for one. This paper provides a definition of MOS and documents examples of MOS as applied in a U.S. Department of Energy (DOE) approved safety basis for an existing nuclear facility. Understanding what an MOS looks like with respect to Technical Safety Requirements (TSR) parameters helps us distinguish it from parameters that do not involve an MOS. This paper also documents parameters that are not MOS; these criteria could be used to determine whether an MOS exists in safety basis documents. This paper helps DOE, including the National Nuclear Security Administration (NNSA), and its contractors responsible for the safety basis improve safety basis documents and the unreviewed safety question (USQ) process with respect to MOS.
Tank Waste Remediation System (TWRS) Retrieval Authorization Basis Amendment Task Plan
HARRIS, J.P.
1999-08-31
This task plan is a documented agreement between Nuclear Safety and Licensing and Retrieval Engineering. The purpose of this task plan is to identify the scope of work, tasks and deliverables, responsibilities, manpower, and schedules associated with an authorization basis amendment as a result of the Waste Feed Delivery Program, Project W-211, Project W-521, and Project W-522.
Tank Waste Remediation System (TWRS) Retrieval Authorization Basis Amendment Task Plan
HARRIS, J.P.
2000-03-27
This task plan is a documented agreement between Nuclear Safety and Licensing and Retrieval Engineering. The purpose of this task plan is to identify the scope of work, tasks and deliverables, responsibilities, manpower, and schedules associated with an authorization basis amendment as a result of the Waste Feed Delivery Program, Project W-211, Project W-521, and Project W-522.
Sensitivity of the Properties of Ruthenium “Blue Dimer” to Method, Basis Set, and Continuum Model
Ozkanlar, Abdullah; Clark, Aurora E.
2012-05-23
The ruthenium “blue dimer” [(bpy)2RuIIIOH2]2O4+ is best known as the first well-defined molecular catalyst for water oxidation. It has been subject to numerous computational studies, primarily employing density functional theory. However, those studies have been limited in the functionals, basis sets, and continuum models employed. The controversy in the calculated electronic structure and the reaction energetics of this catalyst highlights the necessity of benchmark calculations that explore the role of density functionals, basis sets, and continuum models upon the essential features of blue-dimer reactivity. In this paper, we report Kohn-Sham complete basis set (KS-CBS) limit extrapolations of the electronic structure of the “blue dimer” using GGA (BPW91 and BP86), hybrid-GGA (B3LYP), and meta-GGA (M06-L) density functionals. The dependence of solvation free energy corrections on the different cavity types (UFF, UA0, UAHF, UAKS, Bondi, and Pauling) within the polarizable and conductor-like polarizable continuum models has also been investigated. The most common basis sets of double-zeta quality are shown to yield results close to the KS-CBS limit; however, large variations are observed in the reaction energetics as a function of the density functional and continuum cavity model employed.
Prasad, Rajiv; Hibler, Lyle F.; Coleman, Andre M.; Ward, Duane L.
2011-11-01
The purpose of this document is to describe approaches and methods for estimation of the design-basis flood at nuclear power plant sites. Chapter 1 defines the design-basis flood and lists the U.S. Nuclear Regulatory Commission's (NRC) regulations that require estimation of the design-basis flood. For comparison, the design-basis flood estimation methods used by other Federal agencies are also described. A brief discussion of the recommendations of the International Atomic Energy Agency for estimation of the design-basis floods in its member States is also included.
Multi-jagged: A scalable parallel spatial partitioning algorithm
Deveci, Mehmet; Rajamanickam, Sivasankaran; Devine, Karen D.; Catalyurek, Umit V.
2015-03-18
Geometric partitioning is fast and effective for load-balancing dynamic applications, particularly those requiring geometric locality of data (particle methods, crash simulations). We present, to our knowledge, the first parallel implementation of a multidimensional-jagged geometric partitioner. In contrast to the traditional recursive coordinate bisection algorithm (RCB), which recursively bisects subdomains perpendicular to their longest dimension until the desired number of parts is obtained, our algorithm does recursive multi-section with a given number of parts in each dimension. By computing multiple cut lines concurrently and intelligently deciding when to migrate data while computing the partition, we minimize data movement compared to efficient implementations of recursive bisection. We demonstrate the algorithm's scalability and quality relative to the RCB implementation in Zoltan on both real and synthetic datasets. Our experiments show that the proposed algorithm performs and scales better than RCB in terms of run-time without degrading the load balance. Lastly, our implementation partitions 24 billion points into 65,536 parts within a few seconds and exhibits near perfect weak scaling up to 6K cores.
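The multi-section step at the heart of the algorithm can be sketched serially (a toy two-dimensional version of the idea, not the parallel Zoltan implementation; the function name and the fixed two-level recursion are assumptions for illustration):

```python
def multijagged_partition(points, parts_per_dim):
    """Toy multi-section: cut dimension 0 into p0 slabs at equal-count
    quantiles, then cut each slab into p1 strips along dimension 1.
    Contrast with RCB, which would bisect one subdomain at a time."""
    p0, p1 = parts_per_dim
    n = len(points)
    idx = sorted(range(n), key=lambda i: points[i][0])  # order by x
    part = [0] * n
    for slab in range(p0):
        lo, hi = slab * n // p0, (slab + 1) * n // p0
        # within the slab, order by y and split into p1 equal strips
        slab_idx = sorted(idx[lo:hi], key=lambda i: points[i][1])
        m = len(slab_idx)
        for strip in range(p1):
            for i in slab_idx[strip * m // p1:(strip + 1) * m // p1]:
                part[i] = slab * p1 + strip
    return part
```

On a uniform 8x8 grid with parts_per_dim=(2, 2) this yields four equally loaded parts; the production algorithm additionally computes the cut lines concurrently and decides when to migrate data.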
PSO algorithm enhanced with Lozi Chaotic Map - Tuning experiment
Pluhacek, Michal; Senkerik, Roman; Zelinka, Ivan
2015-03-10
This paper investigates the effect of tuning the control parameters of the Lozi chaotic map employed as a chaotic pseudo-random number generator for the particle swarm optimization (PSO) algorithm. Three benchmark functions are selected from the IEEE CEC 2013 competition benchmark set. The Lozi map is extensively tuned and the performance of PSO is evaluated.
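For reference, the Lozi map is a simple two-dimensional piecewise-linear chaotic system. A minimal sketch of its use as a pseudo-random sequence generator follows (the parameter values a = 1.7, b = 0.5 are the classic chaotic regime; the min-max rescaling step is a simplification, not necessarily the paper's normalization):

```python
def lozi_sequence(a=1.7, b=0.5, x0=0.1, y0=0.1, n=1000):
    """Iterate the Lozi map x' = 1 - a|x| + y, y' = b*x and return
    the x-values rescaled to [0, 1] for use as pseudo-random numbers."""
    xs = []
    x, y = x0, y0
    for _ in range(n):
        x, y = 1.0 - a * abs(x) + y, b * x
        xs.append(x)
    lo, hi = min(xs), max(xs)
    return [(v - lo) / (hi - lo) for v in xs]
```

Tuning in the paper's sense would vary a and b (and the seed point) and measure the downstream PSO performance for each setting.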
Optimization of reliability allocation strategies through use of genetic algorithms
Campbell, J.E.; Painton, L.A.
1996-08-01
This paper examines a novel optimization technique called genetic algorithms and its application to the optimization of reliability allocation strategies. Reliability allocation should occur in the initial stages of design, when the objective is to determine an optimal breakdown or allocation of reliability to certain components or subassemblies in order to meet system specifications. The reliability allocation optimization is applied to the design of a cluster tool, a highly complex piece of equipment used in semiconductor manufacturing. The problem formulation is presented, including decision variables, performance measures and constraints, and genetic algorithm parameters. Piecewise "effort curves" specifying the amount of effort required to achieve a certain level of reliability for each component or subassembly are defined. The genetic algorithm evolves or picks those combinations of "effort" or reliability levels for each component which optimize the objective of maximizing Mean Time Between Failures while staying within a budget. The results show that the genetic algorithm is very efficient at finding a set of robust solutions. A time history of the optimization is presented, along with histograms of the solution space fitness, MTBF, and cost for comparative purposes.
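The evolutionary loop described above can be sketched as follows (a toy GA with hypothetical discretized effort curves; `cost[c][l]` and `mtbf_gain[c][l]` stand in for the paper's piecewise effort curves, and all parameter values are illustrative assumptions):

```python
import random

def ga_allocate(cost, mtbf_gain, budget, pop_size=40, gens=60,
                mut_rate=0.05, seed=1):
    """Toy GA: choose an effort level (0..levels-1) per component to
    maximize total MTBF gain while total cost stays within budget."""
    rng = random.Random(seed)
    n, levels = len(cost), len(cost[0])

    def fitness(ind):
        c = sum(cost[i][ind[i]] for i in range(n))
        g = sum(mtbf_gain[i][ind[i]] for i in range(n))
        return g if c <= budget else -c  # penalize over-budget designs

    pop = [[rng.randrange(levels) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(gens):
        nxt = []
        for _ in range(pop_size):
            p1 = max(rng.sample(pop, 2), key=fitness)  # tournament selection
            p2 = max(rng.sample(pop, 2), key=fitness)
            cut = rng.randrange(1, n)                  # one-point crossover
            child = p1[:cut] + p2[cut:]
            for i in range(n):                         # per-gene mutation
                if rng.random() < mut_rate:
                    child[i] = rng.randrange(levels)
            nxt.append(child)
        pop = nxt
    return max(pop, key=fitness)
```

With three components, linear effort curves, and a budget that rules out maximum effort everywhere, the GA settles on a feasible high-MTBF allocation.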
Developing and Implementing the Data Mining Algorithms in RAVEN
Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian
2015-09-01
The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e., recognizing patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
Numerical Optimization Algorithms and Software for Systems Biology
Saunders, Michael
2013-02-02
The basic aims of this work are: to develop reliable algorithms for solving optimization problems involving large stoichiometric matrices; to investigate cyclic dependency between metabolic and macromolecular biosynthetic networks; and to quantify the significance of thermodynamic constraints on prokaryotic metabolism.
A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database
Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; Shephard, Mark S.
2013-01-01
Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm of creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n number of ghost layers up to a point where the whole partitioned mesh can be ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.
Finding the needle in the haystack: Algorithms for conformational optimization
Andricioaei, I.; Straub, J.E.
1996-09-01
Algorithms are given for conformational optimization of proteins. The protein folding problem is regarded as a problem of global energy minimization. Since proteins have hundreds of atoms, finding the lowest-energy conformation in a many-dimensional configuration space becomes a computationally demanding problem. © American Institute of Physics.
A subzone reconstruction algorithm for efficient staggered compatible remapping
Starinshak, D.P.; Owen, J.M.
2015-09-01
Staggered-grid Lagrangian hydrodynamics algorithms frequently make use of subzonal discretization of state variables for the purposes of improved numerical accuracy, generality to unstructured meshes, and exact conservation of mass, momentum, and energy. For Arbitrary Lagrangian–Eulerian (ALE) methods using a geometric overlay, it is difficult to remap subzonal variables in an accurate and efficient manner due to the number of subzone–subzone intersections that must be computed. This becomes prohibitive in the case of 3D, unstructured, polyhedral meshes. A new procedure is outlined in this paper to avoid direct subzonal remapping. The new algorithm reconstructs the spatial profile of a subzonal variable using remapped zonal and nodal representations of the data. The reconstruction procedure is cast as an under-constrained optimization problem. Enforcing conservation at each zone and node on the remapped mesh provides the set of equality constraints; the objective function corresponds to a quadratic variation per subzone between the values to be reconstructed and a set of target reference values. Numerical results for various pure-remapping and hydrodynamics tests are provided. Ideas for extending the algorithm to staggered-grid radiation-hydrodynamics are discussed as well as ideas for generalizing the algorithm to include inequality constraints.
Genetic algorithms and their use in Geophysical Problems
Parker, Paul B.
1999-04-01
Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or ''fittest'' models from a ''population'' and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low mutation rate (about half of the inverse of the population size) is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used, such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca Mountain using gravity data, the second an inversion for velocity structure in the crust of the South Island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems with reasonably large numbers of free
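The two selection schemes compared above can be sketched side by side (a minimal illustration; the `scale` parameter plays the role of the fitness-scaling factor discussed in the abstract, and the linear scaling formula is one common choice, not necessarily the author's):

```python
import random

def tournament_select(pop, fitness, rng, k=2):
    """Tournament selection: sample k individuals, keep the fittest.
    'Autoscaling': only the rank of fitness matters, not its magnitude."""
    return max(rng.sample(pop, k), key=fitness)

def roulette_select(pop, fitness, rng, scale=2.0):
    """Roulette-wheel selection with linear fitness scaling; without
    scaling, near-equal fitnesses give nearly uniform selection."""
    f = [fitness(p) for p in pop]
    lo, hi = min(f), max(f)
    if hi == lo:
        return rng.choice(pop)
    # linear scaling: weights run from 1.0 (worst) to `scale` (best)
    w = [1.0 + (scale - 1.0) * (fi - lo) / (hi - lo) for fi in f]
    r = rng.random() * sum(w)
    acc = 0.0
    for p, wi in zip(pop, w):
        acc += wi
        if acc >= r:
            return p
    return pop[-1]
```

Both functions bias selection toward fitter individuals; the tournament does so with no tunable scaling, which is the simplicity advantage the abstract notes.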
An algorithm to estimate the object support in truncated images
Hsieh, Scott S.; Nett, Brian E.; Cao, Guangzhi; Pelc, Norbert J.
2014-07-15
Purpose: Truncation artifacts in CT occur if the object to be imaged extends past the scanner field of view (SFOV). These artifacts impede diagnosis and could possibly introduce errors in dose plans for radiation therapy. Several approaches exist for correcting truncation artifacts, but existing correction algorithms do not accurately recover the skin line (or support) of the patient, which is important in some dose planning methods. The purpose of this paper was to develop an iterative algorithm that recovers the support of the object. Methods: The authors assume that the truncated portion of the image is made up of soft tissue of uniform CT number and attempt to find a shape consistent with the measured data. Each known measurement in the sinogram is interpreted as an estimate of missing mass along a line. An initial estimate of the object support is generated by thresholding a reconstruction made using a previous truncation artifact correction algorithm (e.g., water cylinder extrapolation). This object support is iteratively deformed to reduce the inconsistency with the measured data. The missing data are estimated using this object support to complete the dataset. The method was tested on simulated and experimentally truncated CT data. Results: The proposed algorithm produces a better defined skin line than water cylinder extrapolation. On the experimental data, the RMS error of the skin line is reduced by about 60%. For moderately truncated images, some soft tissue contrast is retained near the SFOV. As the extent of truncation increases, the soft tissue contrast outside the SFOV becomes unusable although the skin line remains clearly defined, and in reformatted images it varies smoothly from slice to slice as expected. Conclusions: The support recovery algorithm provides a more accurate estimate of the patient outline than thresholded, basic water cylinder extrapolation, and may be preferred in some radiation therapy applications.
Williams, M. G.; Mouser, M. R.; Simon, J. B.
2012-07-01
The AP1000{sup R} plant is an 1100-MWe pressurized water reactor with passive safety features and extensive plant simplifications that enhance construction, operation, maintenance, safety and cost. The passive safety features are designed to function without safety-grade support systems such as component cooling water, service water, compressed air or HVAC. The AP1000 passive safety features achieve and maintain safe shutdown in case of a design-basis accident for 72 hours without need for operator action, meeting the expectations provided in the European Utility Requirements and the Utility Requirement Document for passive plants. Limited operator actions may be required to maintain safe conditions in the spent fuel pool (SFP) via passive means. This safety approach therefore minimizes the reliance on operator action for accident mitigation, and this paper examines the operator interaction with the Human-System Interface (HSI) as the severity of an accident increases from an anticipated transient to a design basis accident and finally, to a beyond-design-basis event. The AP1000 Control Room design provides an extremely effective environment for addressing the first 72 hours of design-basis events and transients, providing ease of information dissemination and minimal reliance upon operator actions. Symptom-based procedures including Emergency Operating Procedures (EOPs), Abnormal Operating Procedures (AOPs) and Alarm Response Procedures (ARPs) are used to mitigate design basis transients and accidents. Use of the Computerized Procedure System (CPS) aids the operators during mitigation of the event. The CPS provides cues and direction to the operators as the event progresses. If the event becomes progressively worse or lasts longer than 72 hours, and depending upon the nature of failures that may have occurred, minimal operator actions may be required outside of the control room in areas that have been designed to be accessible using components that have been
Control algorithms for effective operation of variable-speed wind turbines
Not Available
1993-10-01
This report describes a computer code, called ASYM, and provides results from its application in simulating the control of the 34-m Test Bed vertical-axis wind turbine (VAWT) in Bushland, Texas. The code synthesizes dynamic wind speeds on a second-by-second basis in the time domain. The wind speeds conform to a predetermined spectral content governed by the hourly average wind speed that prevails at each hour of the simulation. The hourly average values are selected in a probabilistic sense through the application of Markov chains, but their cumulative frequency of occurrence conforms to a Rayleigh distribution that is governed by the mean annual wind speed of the site selected. The simulated wind speeds then drive a series of control algorithms that enable the code to predict key operational parameters such as number of annual starts and stops, annual energy production, and annual fatigue damage at a critically stressed joint on the wind turbine. This report also presents results from the application of ASYM that pertain to low wind speed cut-in and cut-out conditions and controlled operation near critical speed ranges that excite structural vibrations that can lead to accelerated fatigue damage.
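The Markov-chain/Rayleigh idea can be illustrated with a much-reduced sketch (a hypothetical 5-bin Markov chain over Rayleigh quantile speeds; ASYM's actual second-by-second spectral synthesis and its fitted transition matrices are not reproduced here):

```python
import math
import random

def synthesize_hourly_means(mean_annual, hours, persistence=0.7, seed=2):
    """Sketch of Markov-chain hourly wind synthesis: a small chain
    walks among wind-speed bins whose representative speeds are the
    Rayleigh quantiles implied by the site's mean annual wind speed."""
    rng = random.Random(seed)
    # Rayleigh with mean m has scale sigma = m * sqrt(2/pi); use the
    # quantile function at bin midpoints for representative speeds.
    sigma = mean_annual * math.sqrt(2.0 / math.pi)
    nbins = 5
    speeds = [sigma * math.sqrt(-2.0 * math.log(1.0 - (i + 0.5) / nbins))
              for i in range(nbins)]
    state = nbins // 2
    out = []
    for _ in range(hours):
        if rng.random() > persistence:       # otherwise persist in bin
            state = max(0, min(nbins - 1, state + rng.choice([-1, 1])))
        out.append(speeds[state])
    return out
```

Over a long run the bin occupancy is roughly uniform, so the long-term mean of the synthesized hourly speeds tracks the specified mean annual wind speed.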
Correlation consistent basis sets for actinides. I. The Th and U atoms
Peterson, Kirk A.
2015-02-21
New correlation consistent basis sets based on both pseudopotential (PP) and all-electron Douglas-Kroll-Hess (DKH) Hamiltonians have been developed from double- to quadruple-zeta quality for the actinide atoms thorium and uranium. Sets for valence electron correlation (5f6s6p6d), cc-pVnZ-PP and cc-pVnZ-DK3, as well as outer-core correlation (valence + 5s5p5d), cc-pwCVnZ-PP and cc-pwCVnZ-DK3, are reported (n = D, T, Q). The -PP sets are constructed in conjunction with small-core, 60-electron PPs, while the -DK3 sets utilize the 3rd-order Douglas-Kroll-Hess scalar relativistic Hamiltonian. Both series of basis sets show systematic convergence towards the complete basis set limit, both at the Hartree-Fock and correlated levels of theory, making them amenable to standard basis set extrapolation techniques. To assess the utility of the new basis sets, extensive coupled cluster composite thermochemistry calculations of ThF{sub n} (n = 2-4), ThO{sub 2}, and UF{sub n} (n = 4-6) have been carried out. After accurately accounting for valence and outer-core correlation, spin-orbit coupling, and even Lamb shift effects, the final 298 K atomization enthalpies of ThF{sub 4}, ThF{sub 3}, ThF{sub 2}, and ThO{sub 2} are all within their experimental uncertainties. Bond dissociation energies of ThF{sub 4} and ThF{sub 3}, as well as UF{sub 6} and UF{sub 5}, were similarly accurate. The derived enthalpies of formation for these species also showed a very satisfactory agreement with experiment, demonstrating that the new basis sets allow for the use of accurate composite schemes just as in molecular systems composed only of lighter atoms. The differences between the PP and DK3 approaches were found to increase with the change in formal oxidation state on the actinide atom, approaching 5-6 kcal/mol for the atomization enthalpies of ThF{sub 4} and ThO{sub 2}. The DKH3 atomization energy of ThO{sub 2} was calculated to be smaller than the DKH2
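The "standard basis set extrapolation techniques" mentioned above are typified by two-point inverse-cubic extrapolation of correlation energies, E(n) = E_CBS + A/n^3; the sketch below shows that common scheme as an illustration, not necessarily the paper's exact recipe:

```python
def cbs_extrapolate(e_n, e_m, n, m):
    """Two-point complete-basis-set extrapolation assuming the model
    E(x) = E_CBS + A / x**3 for cardinal numbers n and m (n != m).
    Solving the two equations for E_CBS eliminates the constant A."""
    wn, wm = n ** 3, m ** 3
    return (wn * e_n - wm * e_m) / (wn - wm)
```

Feeding in triple- and quadruple-zeta correlation energies (m = 3, n = 4) recovers the modeled CBS limit exactly when the energies follow the assumed inverse-cubic form.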
Brambley, Michael R.; Katipamula, Srinivas
2006-10-06
Pacific Northwest National Laboratory (PNNL) is assisting the U.S. Department of Energy (DOE) Distributed Energy (DE) Program by developing advanced control algorithms that would lead to development of tools to enhance performance and reliability, and reduce emissions of distributed energy technologies, including combined heat and power technologies. This report documents phase 2 of the program, providing a detailed functional specification for algorithms for performance monitoring and commissioning verification, scheduled for development in FY 2006. The report identifies the systems for which algorithms will be developed, the specific functions of each algorithm, metrics which the algorithms will output, and inputs required by each algorithm.
105-K Basin material design basis feed description for spent nuclear fuel project facilities
Praga, A.N.
1998-01-08
Revisions 0 and 0A of this document provided estimated chemical and radionuclide inventories of spent nuclear fuel and sludge currently stored within the Hanford Site's 105-K Basins. This Revision (Rev. 1) incorporates the following changes into Revision 0A: (1) updates the tables to reflect: improved cross section data, a decision to use accountability data as the basis for total Pu, a corrected methodology for selection of the heat generation basis fee, and a revised decay date; (2) adds section 3.3.3.1 to expand the description of the approach used to calculate the inventory values and explain why that approach yields conservative results; (3) changes the pre-irradiation braze beryllium value.
FY2001 Tank Characterization Technical Sampling Basis & Waste Information Requirements Document
ADAMS, M.R.
2000-08-02
The Fiscal Year 2001 Tank Characterization Technical Sampling Basis and Waste Information Requirements Document (TSB-WIRD) has the following purposes: (1) To identify and integrate sampling and analysis needs for fiscal year (FY) 2001 and beyond. (2) To describe the overall drivers that require characterization information and to document their source. (3) To describe the process for identifying, prioritizing, and weighting issues that require characterization information to resolve. (4) To define the method for determining sampling priorities and to present the sampling priorities on a tank-by-tank basis. (5) To define how the characterization program is going to satisfy the drivers, close issues, and report progress. (6) To describe deliverables and acceptance criteria for characterization deliverables.
Development of Technical Basis for Burnup Credit Regulatory Guidance in the United States
Parks, Cecil V; Wagner, John C; Mueller, Don; Gauld, Ian C
2011-01-01
In the United States (U.S.), there has been and continues to be considerable interest in the increased use of burnup credit as part of the safety basis for spent nuclear fuel (SNF) systems, and this interest has motivated numerous technical studies related to the application of burnup credit for maintaining subcriticality. Responding to industry requests and needs, the U.S. Nuclear Regulatory Commission initiated a burnup credit research program, with support from the Oak Ridge National Laboratory, to develop regulatory guidance and the supporting technical basis for allowing and expanding the use of burnup credit in pressurized-water reactor SNF storage and transport applications. The objective of this paper is to summarize the work and significant accomplishments, with references to the technical reports and publications for complete details.
Comparison of CRBR design-basis events with those of foreign LMFBR plants
Agrawal, A.K.
1983-04-01
As part of the Construction Permit (CP) review of the Clinch River Breeder Reactor Plant (CRBR), the Brookhaven National Laboratory was asked to compare the Design Basis Accidents that are considered in CRBR Preliminary Safety Analysis Report with those of the foreign contemporary plants (PHENIX, SUPER-PHENIX, SNR-300, PFR, and MONJU). A brief introductory review of any special or unusual characteristics of these plants is given. This is followed by discussions of the design basis accidents and their acceptance criteria. In spite of some discrepancies due either to semantics or to licensing decisions, there appears to be a considerable degree of unanimity in the selection (definition) of DBAs in all of these plants.
Predicting the detectability of thin gaseous plumes in hyperspectral images using basis vectors
Anderson, Kevin K.; Tardiff, Mark F.; Chilton, Lawrence
2010-09-01
This paper describes a new method for predicting the detectability of thin gaseous plumes in hyperspectral images. The novelty of this method is the use of basis vectors for each of the spectral channels of a collection instrument to calculate noise-equivalent concentration-pathlengths instead of matching scene pixels to absorbance spectra of gases in a library. This method provides insight into regions of the spectrum where gas detection will be relatively easier or harder, as influenced by ground emissivity, temperature contrast, and the atmosphere. We relate a three-layer physics-based radiance model to basis vector noise-equivalent concentration-pathlengths, to signal-to-noise ratios, and finally to minimum detectable concentration-pathlengths. We illustrate the method using an Airborne Hyperspectral Imager image. Our results show that data collection planning could be influenced by information about when potential plumes are likely to be over background segments that are most conducive to detection.
Houghton, W.J.
1980-06-01
A probabilistic risk assessment (PRA) approach is proposed to be used to scrutinize selection of accident sequences. A technique is described in this Licensing Topical Report to identify candidates for Design Basis Accidents (DBAs) utilizing the risk assessment results. As a part of this technique, it is proposed that events with frequencies below a specified limit would not be candidates. The use of the methodology described is supplementary to the traditional, deterministic approach and may result, in some cases, in the selection of multiple failure sequences as DBAs; it may also provide a basis for not considering some traditionally postulated events as being DBAs. A process is then described for selecting a list of DBAs based on the candidates from PRA as supplementary to knowledge and judgments from past licensing practice. These DBAs would be the events considered in Chapter 15 of Safety Analysis Reports of high-temperature gas-cooled reactors (HTGRs).
Structural Basis for the Coevolution of a Viral RNA-Protein Complex
Chao,J.; Patskovsky, Y.; Almo, S.; Singer, R.
2008-01-01
The cocrystal structure of the PP7 bacteriophage coat protein in complex with its translational operator identifies a distinct mode of sequence-specific RNA recognition when compared to the well-characterized MS2 coat protein-RNA complex. The structure reveals the molecular basis of the PP7 coat protein's ability to selectively bind its cognate RNA, and it demonstrates that the conserved beta-sheet surface is a flexible architecture that can evolve to recognize diverse RNA hairpins.
Tank waste remediation system retrieval and disposal mission authorization basis amendment task plan
Goetz, T.G.
1998-01-08
This task plan is a documented agreement between Nuclear Safety and Licensing and the Process Development group within the Waste Feed Delivery organization. The purpose of this task plan is to identify the scope of work, tasks and deliverables, responsibilities, manpower, and schedules associated with an authorization basis amendment as a result of the Waste Feed Delivery Program, Project W-211, and Project W-TBD.
Technical Basis for U. S. Department of Energy Nuclear Safety Policy, DOE Policy 420.1
This document provides the technical basis for the Department of Energy (DOE) Policy (P) 420.1, Nuclear Safety Policy, dated 2-8-2011. It includes an analysis of the revised Policy to determine whether it provides the necessary and sufficient high-level expectations that will lead DOE to establish and implement appropriate requirements to assure protection of the public, workers, and the environment from the hazards of DOE's operation of nuclear facilities.
Technical Basis for Work Place Air Monitoring for the Plutonium Finishing Plan (PFP)
JONES, R.A.
1999-10-06
This document establishes the basis for the Plutonium Finishing Plant's (PFP) work place air monitoring program in accordance with the following requirements: Title 10, Code of Federal Regulations (CFR), Part 835, ''Occupational Radiation Protection''; Hanford Site Radiological Control Manual (HSRCM-1); HNF-PRO-331, Work Place Air Monitoring; WHC-SD-CP-SAR-021, Plutonium Finishing Plant Final Safety Analysis Report; and applicable recognized national standards invoked by DOE Orders and Policies.
Slide Presentation by Rich Davies, Kami Lowry, Mike Schlender, Pacific Northwest National Laboratory (PNNL) and Ted Pietrok, Pacific Northwest Site Office (PNSO). Integrated Safety Management System as the Basis for Work Planning and Control for Research and Development. Work Planning and Control (WP&C) is essential to assuring the safety of workers and the public regardless of the scope of work. Research and Development (R&D) activities are no exception.
Brumburgh, G.
1994-08-31
The Lawrence Livermore National Laboratory (LLNL) Plutonium Facility conducts numerous operations involving plutonium, including device fabrication, development of fabrication techniques, metallurgy research, and laser isotope separation. A Safety Analysis Report (SAR) for the Building 332 Plutonium Facility was completed to demonstrate rational safety and acceptable risk to employees, the public, government property, and the environment. This paper outlines the PRA analysis of the Evaluation Basis Fire (EBF) operational accident. The EBF postulates the worst-case programmatic impact event for the Plutonium Facility.