Deterministic algorithms for 2-d convex programming and 3-d online linear programming
Chan, T.M.
1997-06-01
We present a deterministic algorithm for solving two-dimensional convex programs with a linear objective function. The algorithm requires O(k log k) primitive operations for k constraints; if a feasible point is given, the bound reduces to O(k log k / log log k). As a consequence, we can decide whether k convex n-gons in the plane have a common intersection in O(k log n min(log k, log log n)) worst-case time. Furthermore, we can solve the three-dimensional online linear programming problem in O(log^3 n) worst-case time per operation.
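As a hedged illustration of the problem the entry addresses (not Chan's algorithm), the sketch below solves a tiny 2-D convex program with linear constraints by the naive O(k^2) baseline that near-linear algorithms improve upon: enumerate the intersection points of pairs of constraint boundaries and keep the feasible one with the best objective. The constraint encoding and function name are assumptions made for this example.

```python
from itertools import combinations

def solve_2d_lp(c, constraints):
    """Minimize c.x subject to a.x <= b for each (a, b) in constraints,
    by enumerating intersections of pairs of constraint boundaries.
    This is the quadratic baseline, not the paper's O(k log k) method."""
    eps = 1e-9
    best = None
    for (a1, b1), (a2, b2) in combinations(constraints, 2):
        det = a1[0] * a2[1] - a1[1] * a2[0]
        if abs(det) < eps:
            continue  # parallel boundaries never form a vertex
        x = (b1 * a2[1] - b2 * a1[1]) / det
        y = (a1[0] * b2 - a2[0] * b1) / det
        if all(a[0] * x + a[1] * y <= b + eps for a, b in constraints):
            val = c[0] * x + c[1] * y
            if best is None or val < best[0]:
                best = (val, (x, y))
    return best

# minimize x + y over the box 1 <= x <= 3, 1 <= y <= 3
cons = [((-1, 0), -1), ((0, -1), -1), ((1, 0), 3), ((0, 1), 3)]
print(solve_2d_lp((1, 1), cons))  # optimum value 2 at vertex (1, 1)
```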
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
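As a toy illustration of the problem class this entry decomposes (not the enhanced Benders algorithm itself), the sketch below solves a two-scenario, two-stage stochastic LP via its deterministic equivalent, the "extensive form" whose growth with scenario count is what motivates decomposition. All costs and demands are invented for the example.

```python
from scipy.optimize import linprog

# First stage: order x at unit cost 1.  Second stage: in scenario s with
# demand d_s, buy recourse y_s at unit cost 3.  Two equally likely
# scenarios with demands d_1 = 1 and d_2 = 3.
c = [1.0, 0.5 * 3.0, 0.5 * 3.0]          # x, y_1, y_2 (probability-weighted)
A_ub = [[-1.0, -1.0, 0.0],               # x + y_1 >= d_1 = 1
        [-1.0, 0.0, -1.0]]               # x + y_2 >= d_2 = 3
b_ub = [-1.0, -3.0]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
print(round(res.fun, 6), round(res.x[0], 6))  # expected cost 3.0, order x = 3.0
```

With many scenarios this matrix gains one recourse block per scenario, which is exactly the structure Benders decomposition exploits by cutting on the first-stage variable alone.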
Two linear time, low overhead algorithms for graph layout
Energy Science and Technology Software Center (OSTI)
2008-01-10
The software comprises two algorithms designed to perform a 2D layout of a graph structure in time linear with respect to the vertices and edges in the graph, whereas most other layout algorithms have a running time that is quadratic with respect to the number of vertices or greater. Although these layout algorithms run in a fraction of the time of their competitors, they provide competitive results when applied to most real-world graphs. These algorithms also have a low constant running time and a small memory footprint, making them useful for small to large graphs.
APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION
Musson, John C.; Seaton, Chad; Spata, Mike F.; Yan, Jianxun
2012-11-01
Stripline BPM sensors contain inherent non-linearities, as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an "activation layer," is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.
Planning under uncertainty: solving large-scale stochastic linear programs
Infanger, G. (Dept. of Operations Research; Technische Univ., Vienna, Inst. fuer Energiewirtschaft)
1992-12-01
For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to implement the methodology efficiently on a parallel multicomputer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
Comparison of open-source linear programming solvers.
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin D.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph
2013-10-01
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
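For readers unfamiliar with the problem shape the surveyed solvers all accept, here is a minimal LP in standard inequality form. It is solved with SciPy's bundled HiGHS backend purely for illustration; HiGHS post-dates this survey and is not one of the four solvers tested.

```python
from scipy.optimize import linprog

# minimize  -x - 2y          (i.e., maximize x + 2y)
# s.t.       x +  y <= 4
#            x + 3y <= 6,    x, y >= 0
res = linprog(c=[-1, -2], A_ub=[[1, 1], [1, 3]], b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, res.fun)  # optimum at (3, 1) with objective -5
```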
Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson
2006-08-01
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
Brabec, Jiri; Lin, Lin; Shao, Meiyue; Govind, Niranjan; Yang, Chao; Saad, Yousef; Ng, Esmond
2015-10-06
We present two iterative algorithms for approximating the absorption spectrum of molecules within linear response of time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly. They take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly. They only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing the eigenvalues of this matrix. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods that are based on the exact diagonalization of the linear response matrix. We show that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
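The key access pattern this entry describes, needing only a routine that multiplies the linear response matrix by a vector, can be sketched in miniature with SciPy's `LinearOperator`. The operator below is a stand-in diagonal map, not the TDDFT response matrix; the Lanczos-based `eigsh` sees it only through matrix-vector products and never forms it explicitly.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

n = 200
diag = np.linspace(1.0, 2.0, n)

def matvec(v):
    # Action of the operator on a vector; the matrix itself is never built.
    return diag * v

A = LinearOperator((n, n), matvec=matvec, dtype=float)
# A few largest eigenvalues from matvecs alone (Lanczos iteration).
vals = eigsh(A, k=3, which='LA', return_eigenvectors=False)
print(np.sort(vals))
```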
Grant, C W; Lenderman, J S; Gansemer, J D
2011-02-24
This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect revised deliverables resulting from delays in obtaining a database refresh, and describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes, and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).
Shang, Yu; Yu, Guoqiang
2014-09-29
Conventional semi-infinite analytical solutions of the correlation diffusion equation may lead to errors when calculating blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in homogeneous tissues with arbitrary geometries for extraction of BFI (i.e., αD_B). The purpose of this study is to extend the capability of the Nth-order linear algorithm to extract BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different types of tissues simultaneously by utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogeneous solution in a computer model of the adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in deep brain, we assigned ten levels of αD_B in the brain layer with a step decrement of 10% while keeping αD_B values constant in the other layers. Simulation results demonstrate the accuracy (errors < 3%) of the high-order (N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in deep brain. By contrast, the semi-infinite homogeneous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and in BFIs in different layers. The Nth-order linear model simplifies data analysis, thus allowing for online data processing and display. A future study will test this linear algorithm in heterogeneous tissues with different levels of blood flow variation and noise.
Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming
Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-05-23
This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule, lowering expected energy bills given fuel cell outages, with potential savings exceeding 6%.
Object detection utilizing a linear retrieval algorithm for thermal infrared imagery
Ramsey, M.S. [Arizona State Univ., Tempe, AZ (United States)]
1996-11-01
Thermal infrared (TIR) spectroscopy and remote sensing have been proven to be extremely valuable tools for mineralogic discrimination. One technique for sub-pixel detection and data reduction, known as a spectral retrieval or unmixing algorithm, will prove useful in the analysis of data from scheduled TIR orbital instruments. This study represents the first quantitative attempt to identify the limits of the model, specifically concentrating on the TIR. The algorithm was written and applied to laboratory data, testing the effects of particle size, noise, and multiple endmembers, then adapted to operate on airborne Thermal Infrared Multispectral Scanner data of the Kelso Dunes, CA, Meteor Crater, AZ, and Medicine Lake Volcano, CA. Results indicate that linear spectral unmixing can produce accurate endmember detection to within an average of 5%. In addition, the effects of vitrification and textural variations were modeled. The ability to predict mineral or rock abundances becomes extremely useful in tracking sediment transport, desertification, and potential hazard assessment in remote volcanic regions. 26 refs., 3 figs.
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
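The baseline this entry's combinatorial algorithm accelerates can be sketched as follows: solve a nonnegativity-constrained least-squares problem independently for every observation vector (column of B). The paper's algorithm instead groups columns whose active/passive constraint sets coincide and reuses factorizations across each group; this hedged sketch shows only the naive column-by-column approach being improved upon, with invented data.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.random((6, 3))                      # design matrix
X_true = np.array([[1.0, 0.0],
                   [0.0, 2.0],
                   [0.5, 0.0]])             # known nonnegative coefficients
B = A @ X_true                              # two observation vectors

# Naive approach: one independent NNLS solve per observation vector.
X = np.column_stack([nnls(A, B[:, j])[0] for j in range(B.shape[1])])
print(np.round(X, 6))                       # recovers X_true
```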
Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas
2014-08-01
The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.
Sixth SIAM conference on applied linear algebra: Final program and abstracts. Final technical report
1997-12-31
Linear algebra plays a central role in mathematics and applications. The analysis and solution of problems from an amazingly wide variety of disciplines depend on the theory and computational techniques of linear algebra. In turn, the diversity of disciplines depending on linear algebra also serves to focus and shape its development. Some problems have special properties (numerical, structural) that can be exploited. Some are simply so large that conventional approaches are impractical. New computer architectures motivate new algorithms, and fresh ways to look at old ones. The pervasive nature of linear algebra in analyzing and solving problems means that people from a wide spectrum--universities, industrial and government laboratories, financial institutions, and many others--share an interest in current developments in linear algebra. This conference aims to bring them together for their mutual benefit. Abstracts of papers presented are included.
Application and implementation of transient algorithms in computer programs
Benson, D.J.
1985-07-01
This presentation gives a brief introduction to the nonlinear finite element programs developed at Lawrence Livermore National Laboratory by the Methods Development Group in the Mechanical Engineering Department. The four programs are DYNA3D and DYNA2D, which are explicit hydrocodes, and NIKE3D and NIKE2D, which are implicit programs. The presentation concentrates on DYNA3D with asides about the other programs. During the past year several new features were added to DYNA3D, and major improvements were made in the computational efficiency of the shell and beam elements. Most of these new features and improvements will eventually make their way into the other programs. The emphasis in our computational mechanics effort has always been, and continues to be, efficiency. To get the most out of our supercomputers, all Crays, we have vectorized the programs as much as possible. Several of the more interesting capabilities of DYNA3D will be described and their impact on efficiency will be discussed. Some of the recent work on NIKE3D and NIKE2D will also be presented. In the belief that a single example is worth a thousand equations, we are skipping the theory entirely and going directly to the examples.
Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs
Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; Watson, Jean -Paul; Wets, Roger J.-B.; Woodruff, David L.
2016-04-02
We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. In conclusion, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
Refining and end use study of coal liquids II - linear programming analysis
Lowe, C.; Tam, S.
1995-12-31
A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes. This work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.
SLFP: A stochastic linear fractional programming approach for sustainable waste management
Zhu, H.; Huang, G.H.
2011-12-15
Highlights:
- A new fractional programming (SLFP) method is developed for waste management.
- SLFP can solve ratio optimization problems associated with random inputs.
- A case study of waste flow allocation demonstrates its applicability.
- SLFP helps compare objectives of two aspects and reflect system efficiency.
- This study supports in-depth analysis of tradeoffs among multiple system criteria.

Abstract: A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk.
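The deterministic core of a ratio objective like SLFP's can be handled with the classical Charnes-Cooper transformation, which turns a linear fractional program into an ordinary LP. The sketch below applies it to an invented example (it omits the chance constraints that make the paper's method stochastic):

```python
from scipy.optimize import linprog

# maximize (2*x1 + x2) / (x1 + x2 + 1)  s.t.  x1 + x2 <= 3,  x >= 0
# Charnes-Cooper: substitute y = t*x, t = 1/(x1 + x2 + 1), giving the LP
#   maximize 2*y1 + y2
#   s.t.     y1 + y2 - 3*t <= 0        (scaled original constraint)
#            y1 + y2 + t    = 1        (normalizes the denominator)
#            y, t >= 0
res = linprog(c=[-2, -1, 0],
              A_ub=[[1, 1, -3]], b_ub=[0],
              A_eq=[[1, 1, 1]], b_eq=[1],
              bounds=[(0, None)] * 3)
y1, y2, t = res.x
print(round(-res.fun, 6), round(y1 / t, 6), round(y2 / t, 6))  # ratio 1.5 at x = (3, 0)
```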
Djukanovic, M.; Babic, B.; Milosevic, B.; Sobajic, D.J.; Pao, Y.H.
1996-05-01
In this paper the blending/transloading facilities are modeled using interactive fuzzy linear programming (FLP), in order to allow the decision-maker to address the uncertainty of input information within fuel scheduling optimization. An interactive decision-making process is formulated in which the decision-maker can learn to recognize good solutions by considering all possibilities of fuzziness. The application of the fuzzy formulation is accompanied by a careful examination of the definition of fuzziness, the appropriateness of the membership function, and the interpretation of results. The proposed concept provides a decision support system with integration-oriented features, whereby the decision-maker can learn to recognize the relative importance of factors in the specific domain of the optimal fuel scheduling (OFS) problem. The formulation of a fuzzy linear programming problem to obtain a reasonable nonfuzzy solution under consideration of the ambiguity of parameters, represented by fuzzy numbers, is introduced. An additional advantage of the FLP formulation is its ability to deal with multi-objective problems.
Library of Continuation Algorithms
Energy Science and Technology Software Center (OSTI)
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
Frenkel, G.; Paterson, T.S.; Smith, M.E.
1988-04-01
The Institute for Defense Analyses (IDA) has collected and analyzed information on battle management algorithm technology that is relevant to Battle Management/Command, Control and Communications (BM/C3). This Memorandum Report represents a program plan that will provide the BM/C3 Directorate of the Strategic Defense Initiative Organization (SDIO) with administrative and technical insight into algorithm technology. This program plan focuses on current activity in algorithm development and provides information and analysis to the SDIO to be used in formulating budget requirements for FY 1988 and beyond. Based upon analysis of algorithm requirements and ongoing programs, recommendations have been made for research areas that should be pursued, including both the continuation of current work and the initiation of new tasks. This final report includes all relevant material from interim reports as well as new results.
1995-03-01
A model was developed for use in the Bechtel PIMS (Process Industry Modeling System) linear programming software to simulate a generic Midwest (PADD II) petroleum refinery of the future. This "petroleum-only" version of the model establishes the size and complexity of the refinery after the year 2000 and prior to the introduction of coal liquids. It should be noted that no assumption has been made on when a plant can be built to produce coal liquids, except that it will be after the year 2000. The year 2000 was chosen because it is the latest year for which fuel property and emission standards have been set by the Environmental Protection Agency. It assumes the refinery has been modified to accept crudes that are heavier in gravity and higher in sulfur than today's average crude mix. In addition, the refinery has also been modified to produce a product slate of transportation fuels of the future (i.e., 40% reformulated gasolines). This model will be used as a basis for determining the optimum scheme for processing coal liquids in a petroleum refinery. This report summarizes the design basis for this "petroleum-only" LP refinery model. A report detailing the refinery configuration when coal liquids are processed will be provided at a later date.
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lower quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
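For context, the classic 1/2-approximation baseline for weighted matching is the greedy algorithm sketched below: scan edges in order of decreasing weight and keep an edge iff both endpoints are still free. It carries the same guarantee as the paper's algorithm but relies on a global sort, which is part of what the paper's approach avoids in order to parallelize; the edge encoding here is an assumption for the example.

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weight matching.
    edges: iterable of (u, v, weight) tuples."""
    matched = set()
    matching = []
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v, w))
    return matching

# Path a-b-c-d with weights 2, 3, 2: greedy takes only (b, c) for weight 3,
# while the optimum {(a, b), (c, d)} has weight 4 -- within the 1/2 bound.
edges = [('a', 'b', 2), ('b', 'c', 3), ('c', 'd', 2)]
m = greedy_matching(edges)
print(m, sum(w for _, _, w in m))
```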
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
U.S. Department of Energy (DOE) all webpages (Extended Search)
Linear Accelerator (LINAC) The core of the LANSCE facility is one of the nation's most powerful proton linear accelerators, or LINACs. The LINAC at LANSCE has served the nation since 1972, providing the beam current required by all the experimental areas that support NNSA-DP and other DOE missions. The LINAC's capability to reliably deliver beam current is the key to LANSCE's ability to do research, and thus the key to meeting NNSA and DOE mission deliverables.
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles in the field. The results of the foregoing is to achieve a beam which spirals about the axis of the acceleration path. The combination of the electric fields and angular motion of the particles cooperate to provide a stable and focused particle beam.
Final Report-Optimization Under Uncertainty and Nonconvexity: Algorithms and Software
Jeff Linderoth
2008-10-10
The goal of this research was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problems classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state-of-the-art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems.
Belos Block Linear Solvers Package
Energy Science and Technology Software Center (OSTI)
2004-03-01
Belos is an extensible and interoperable framework for large-scale, iterative methods for solving systems of linear equations with multiple right-hand sides. The motivation for this framework is to provide a generic interface to a collection of algorithms for solving large-scale linear systems. Belos is interoperable because both the matrix and vectors are considered to be opaque objects--only knowledge of the matrix and vectors via elementary operations is necessary. An implementation of Belos is accomplished via the use of interfaces. One of the goals of Belos is to allow the user flexibility in specifying the data representation for the matrix and vectors and so leverage any existing software investment. The algorithms that will be included in the package are Krylov-based linear solvers, like Block GMRES (Generalized Minimal RESidual) and Block CG (Conjugate Gradient).
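The "opaque matrix" idea Belos is built around, that the solver needs only the matrix's action on vectors, has a miniature analogue in SciPy's `LinearOperator` with `gmres`. The sketch below is a single-right-hand-side illustration of that matrix-free pattern, not Belos itself (the block variants Belos provides have no direct SciPy analogue):

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

n = 50

def matvec(v):
    # Action of the tridiagonal stencil [-1, 2, -1] without storing the matrix.
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

A = LinearOperator((n, n), matvec=matvec, dtype=float)
b = np.ones(n)
x, info = gmres(A, b)        # GMRES sees A only through matvec calls
print(info, float(np.linalg.norm(matvec(x) - b)))  # info == 0 means converged
```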
Voila: A visual object-oriented iterative linear algebra problem solving environment
Edwards, H.C.; Hayes, L.J.
1994-12-31
Application of iterative methods to solve a large linear system of equations currently involves writing a program which calls iterative method subprograms from a large software package. These subprograms have complex interfaces which are difficult to use and even more difficult to program. A problem solving environment specifically tailored to the development and application of iterative methods is needed. This need will be fulfilled by Voila, a problem solving environment which provides a visual programming interface to object-oriented iterative linear algebra kernels. Voila will provide several quantum improvements over current iterative method problem solving environments. First, programming and applying iterative methods is considerably simplified through Voila's visual programming interface. Second, iterative method algorithm implementations are independent of any particular sparse matrix data structure through Voila's object-oriented kernels. Third, the compile-link-debug process is eliminated as Voila operates as an interpreter.
Miller, Naomi J.; Perrin, Tess E.; Royer, Michael P.; Wilkerson, Andrea M.; Beeson, Tracy A.
2014-05-20
Although lensed troffers are numerous, there are many other types of optical systems as well. This report looked at the performance of three linear (T8) LED lamps chosen primarily based on their luminous intensity distributions (narrow, medium, and wide beam angles) as well as a benchmark fluorescent lamp in five different troffer types. Also included are the results of a subjective evaluation. Results show that linear (T8) LED lamps can improve luminaire efficiency in K12-lensed and parabolic-louvered troffers, effect little change in volumetric and high-performance diffuse-lensed type luminaires, but reduce efficiency in recessed indirect troffers. These changes can be accompanied by visual appearance and visual comfort consequences, especially when LED lamps with clear lenses and narrow distributions are installed. Linear (T8) LED lamps with diffuse apertures exhibited wider beam angles, performed more similarly to fluorescent lamps, and received better ratings from observers. Guidance is provided on which luminaires are the best candidates for retrofitting with linear (T8) LED lamps.
U.S. Department of Energy (DOE) all webpages (Extended Search)
Programming: Compiling and linking programs on Euclid. Compiling Codes: how to compile and link MPI codes on Euclid. Using the ACML Math Library: how to compile and link a code with the ACML library and include the $ACML environment variable. Process Limits: the hard and soft process limits are listed.
Programming: The genepool system has a diverse set of software development tools and a rich environment for delivering their functionality to users. Genepool has adopted a modular system adapted from the Programming Environments provided on the Cray systems at NERSC. The Programming Environment is managed by a meta-module with a name similar to "PrgEnv-gnu/4.6". The "gnu" indicates that it is providing the GNU environment, principally GCC,
Programming Tuning Options: tips for tuning performance on the Hopper system ... The ACML library is also supported on Hopper and Franklin. PGAS Language ...
Each programming environment contains the full set of compatible compilers and libraries. ...
Positrons for linear colliders
Ecklund, S.
1987-11-01
The requirements of a positron source for a linear collider are briefly reviewed, followed by methods of positron production and production of photons by electromagnetic cascade showers. Cross sections for the electromagnetic cascade shower processes of positron-electron pair production and Compton scattering are compared. A program used for Monte Carlo analysis of electromagnetic cascades is briefly discussed, and positron distributions obtained from several runs of the program are discussed. Photons from synchrotron radiation and from channeling are also mentioned briefly, as well as positron collection, transverse focusing techniques, and longitudinal capture. Computer ray tracing is then briefly discussed, followed by space-charge effects and thermal heating and stress due to showers. (LEW)
using MPI and OpenMP on NERSC systems, the same does not always exist for other supported parallel programming models such as UPC or Chapel. At the same time, we know that these...
An optimal point spread function subtraction algorithm for high...
Office of Scientific and Technical Information (OSTI)
An optimal point spread function subtraction algorithm for high-contrast imaging: a ... This image is built as a linear combination of all available images and is optimized ...
Gropp, William D.
2014-06-23
With the coming end of Moore's law, it has become essential to develop new algorithms and techniques that can provide the performance needed by demanding computational science applications, especially those that are part of the DOE science mission. This work was part of a multi-institution, multi-investigator project that explored several approaches to develop algorithms that would be effective at the extreme scales and with the complex processor architectures that are expected at the end of this decade. The work by this group developed new performance models that have already helped guide the development of highly scalable versions of an algebraic multigrid solver, new programming approaches designed to support numerical algorithms on heterogeneous architectures, and a new, more scalable version of conjugate gradient, an important algorithm in the solution of very large linear systems of equations.
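The abstract mentions a new, more scalable version of conjugate gradient but gives no detail. As background only, a textbook unpreconditioned CG iteration for a symmetric positive-definite system (the classical algorithm, not the authors' scalable variant) can be sketched as:

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
    """Textbook CG for a symmetric positive-definite matrix A."""
    x = np.zeros_like(b, dtype=float)
    r = b - A @ x          # initial residual
    p = r.copy()           # initial search direction
    rs_old = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs_old / (p @ Ap)      # step length along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs_old) * p  # next A-conjugate direction
        rs_old = rs_new
    return x

# Example: a small SPD system
A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = conjugate_gradient(A, b)
```

Each iteration needs one matrix-vector product and two dot products; scalable variants mainly restructure the dot-product communication, not this basic recurrence.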
FPGA-based Klystron linearization implementations in scope of ILC
Omet, M.; Michizono, S.; Matsumoto, T.; Miura, T.; Qiu, F.; Chase, B.; Varghese, P.; Schlarb, H.; Branlard, J.; Cichalewski, W.
2015-01-23
We report the development and implementation of four FPGA-based predistortion-type klystron linearization algorithms. Klystron linearization is essential for the realization of the ILC, since the klystrons are required to operate at a power 7% below saturation. The work presented was performed in international collaborations at the Fermi National Accelerator Laboratory (FNAL), USA, and the Deutsches Elektronen-Synchrotron (DESY), Germany. With the newly developed algorithms, the generation of correction factors on the FPGA was improved compared to past algorithms, avoiding quantization and decreasing memory requirements. At FNAL, three algorithms were tested at the Advanced Superconducting Test Accelerator (ASTA), demonstrating a successful implementation for one algorithm and a proof of principle for two algorithms. Furthermore, the functionality of the algorithm implemented at DESY was demonstrated successfully in a simulation.
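The predistortion idea common to these algorithms can be shown with a deliberately simple model. Here the amplifier is assumed to saturate like tanh (purely illustrative, not a klystron model), so applying the inverse curve ahead of it makes the cascade linear:

```python
import numpy as np

# Toy saturation model for an amplifier (NOT a real klystron model):
# output = tanh(drive), which compresses near saturation.
def amplifier(drive):
    return np.tanh(drive)

# Predistortion: apply the inverse of the saturation curve to the
# requested output, so the cascade predistort -> amplifier is linear.
def predistort(requested):
    requested = np.clip(requested, -0.99, 0.99)  # stay below hard saturation
    return np.arctanh(requested)

requested = np.linspace(-0.9, 0.9, 7)
linearized = amplifier(predistort(requested))   # tracks the request
raw = amplifier(requested)                      # compressed near the ends
```

Real implementations replace the analytic inverse with correction factors computed on the FPGA, which is where the quantization and memory trade-offs discussed above arise.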
Daniel, David J; Mc Pherson, Allen; Thorp, John R; Barrett, Richard; Clay, Robert; De Supinski, Bronis; Dube, Evi; Heroux, Mike; Janssen, Curtis; Langer, Steve; Laros, Jim
2011-01-14
A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.
Linear induction accelerator parameter options
Birx, D.L.; Caporaso, G.J.; Reginato, L.L.
1986-04-21
The principal undertaking of the Beam Research Program over the past decade has been the investigation of propagating intense self-focused beams. Recently, the major activity of the program has shifted toward the investigation of converting high quality electron beams directly to laser radiation. During the early years of the program, accelerator development was directed toward the generation of very high current (>10 kA), high energy beams (>50 MeV). In its new mission, the program has shifted the emphasis toward the production of lower current beams (>3 kA) with high brightness (>10^6 A/(rad-cm)^2) at very high average power levels. In efforts to produce these intense beams, the state of the art of linear induction accelerators (LIA) has been advanced to the point of satisfying not only the current requirements but also future national needs.
Translation and integration of numerical atomic orbitals in linear molecules
Heinäsmäki, Sami
2014-02-14
We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.
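As a much-reduced illustration of the quadrature side of this approach, a one-dimensional Gauss-Legendre rule evaluates the overlap of two Gaussian orbitals to near machine precision (the paper itself uses two-dimensional quadratures and numerical radial functions; the Gaussians and interval here are assumptions for the sketch):

```python
import numpy as np

# Overlap of two 1-D Gaussian orbitals centered at 0 and d, evaluated
# with Gauss-Legendre quadrature on a truncated interval [-L, L].
def overlap(d, n=80, half_width=6.0):
    x, w = np.polynomial.legendre.leggauss(n)   # nodes/weights on [-1, 1]
    x = half_width * x                          # map nodes to [-L, L]
    w = half_width * w                          # scale weights accordingly
    phi_a = np.exp(-x**2)
    phi_b = np.exp(-(x - d)**2)
    return np.sum(w * phi_a * phi_b)

# Analytic value: ∫ exp(-x²) exp(-(x-d)²) dx = sqrt(pi/2) * exp(-d²/2)
approx = overlap(1.0)
exact = np.sqrt(np.pi / 2.0) * np.exp(-0.5)
```

The spectral accuracy of Gaussian quadrature on smooth integrands is what makes this family of methods attractive for multicenter overlap and Coulomb integrals.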
Algorithms for builder guidelines
Balcomb, J.D.; Lekov, A.B.
1989-06-01
The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar load ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets is also presented. 5 refs., 3 tabs.
Asynchronous parallel generating set search for linearly-constrained optimization.
Lewis, Robert Michael; Griffin, Joshua D.; Kolda, Tamara Gibson
2006-08-01
Generating set search (GSS) is a family of direct search methods that encompasses generalized pattern search and related methods. We describe an algorithm for asynchronous linearly-constrained GSS, which has some complexities that make it different from both the asynchronous bound-constrained case as well as the synchronous linearly-constrained case. The algorithm has been implemented in the APPSPACK software framework and we present results from an extensive numerical study using CUTEr test problems. We discuss the results, both positive and negative, and conclude that GSS is a reliable method for solving small-to-medium sized linearly-constrained optimization problems without derivatives.
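A minimal unconstrained compass search, the simplest member of the GSS family, conveys the basic poll-and-contract structure (the paper's algorithm additionally handles linear constraints and asynchronous evaluation):

```python
import numpy as np

# Compass search: poll the +/- coordinate directions; move on the first
# improving point, otherwise contract the step. A minimal GSS instance.
def compass_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    x = np.asarray(x0, dtype=float)
    n = x.size
    directions = np.vstack([np.eye(n), -np.eye(n)])  # generating set
    for _ in range(max_iter):
        improved = False
        for d in directions:
            trial = x + step * d
            if f(trial) < f(x):
                x = trial
                improved = True
                break
        if not improved:
            step *= 0.5          # contract after an unsuccessful poll
            if step < tol:
                break
    return x

# Minimize a smooth test function without derivatives
x = compass_search(lambda v: (v[0] - 1.0)**2 + (v[1] + 2.0)**2, [0.0, 0.0])
```

No gradients are evaluated anywhere, which is exactly why direct search methods suit simulation-based objectives.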
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Snapshot: Linear Lamps (TLEDs)
A report using LED Lighting Facts data to examine the current state of the market for linear fluorescent lamps. (8 pages, July 2016)
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.
Inpainting with sparse linear combinations of exemplars
Wohlberg, Brendt
2008-01-01
We introduce a new exemplar-based inpainting algorithm based on representing the region to be inpainted as a sparse linear combination of blocks extracted from similar parts of the image being inpainted. This method is conceptually simple, being computed by functional minimization, and avoids the complexity of correctly ordering the filling in of missing regions of other exemplar-based methods. Initial performance comparisons on small inpainting regions indicate that this method provides similar or better performance than other recent methods.
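A toy version of the representation idea: fit coefficients of exemplar blocks on the known pixels, then fill in the missing ones. Plain least squares stands in here for the paper's sparsity-penalized functional, purely to keep the sketch short; the data are synthetic:

```python
import numpy as np

# Synthetic setup: a block that truly is a linear combination of exemplars.
rng = np.random.default_rng(0)
exemplars = rng.normal(size=(16, 5))        # 5 exemplar blocks, 16 px each
true_coeff = np.array([0.5, 0.0, -1.0, 0.0, 2.0])
block = exemplars @ true_coeff              # block to be inpainted
known = np.arange(16) % 2 == 0              # half the pixels are known

# Fit coefficients on the known pixels only, then reconstruct all pixels.
coeff, *_ = np.linalg.lstsq(exemplars[known], block[known], rcond=None)
reconstruction = exemplars @ coeff
```

Replacing the least-squares fit with an l1-penalized one yields the sparse combinations the paper advocates, at the cost of an iterative solver.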
Linearly polarized fiber amplifier
Kliner, Dahv A.; Koplow, Jeffery P.
2004-11-30
Optically pumped rare-earth-doped polarizing fibers exhibit significantly higher gain for one linear polarization state than for the orthogonal state. Such a fiber can be used to construct a single-polarization fiber laser, amplifier, or amplified-spontaneous-emission (ASE) source without the need for additional optical components to obtain stable, linearly polarized operation.
Rios, A. B.; Valda, A.; Somacal, H.
2007-10-26
Usually, a tomographic procedure requires a set of projections around the object under study and mathematical processing of those projections through reconstruction algorithms. An accurate reconstruction requires a proper number of projections (angular sampling) and a proper number of elements in each projection (linear sampling). In several practical cases, however, it is not possible to fulfill these conditions, leading to the so-called problem of few projections. In this case, iterative reconstruction algorithms are more suitable than analytic ones. In this work we present a program written in C++ that provides an environment for two iterative algorithm implementations, one algebraic and the other statistical. The software gives the user full control over the acquisition and reconstruction geometries used by the reconstruction algorithms, and can also perform projection and backprojection operations. A set of analysis tools was implemented for the characterization of the convergence process. We analyze the performance of the algorithms on numerical phantoms and present the reconstruction of experimental data with few projections coming from transmission X-ray and micro-PIXE (Particle-Induced X-ray Emission) images.
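The algebraic family of iterative reconstruction methods can be illustrated by a Kaczmarz/ART sweep, which repeatedly projects the image estimate onto the hyperplane of each projection equation (a sketch on a tiny consistent system, not the C++ implementation described above):

```python
import numpy as np

# One ART/Kaczmarz iteration cycles through the equations a_i . x = b_i,
# each time projecting the current estimate onto that hyperplane.
def art(A, b, sweeps=200):
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x += (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

# Tiny consistent "few projections" system
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 1.0]])
x_true = np.array([1.0, 2.0, 3.0])
x_rec = art(A, A @ x_true)
```

For consistent systems the sweep converges to a solution even when rows are processed one at a time, which is why ART tolerates sparse angular sampling better than filtered backprojection.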
Energy Science and Technology Software Center (OSTI)
002651IBMPC00 Algorithm for Accounting for the Interactions of Multiple Renewable Energy Technologies in Estimation of Annual Performance
PC Basic Linear Algebra Subroutines
Energy Science and Technology Software Center (OSTI)
1992-03-09
PC-BLAS is a highly optimized version of the Basic Linear Algebra Subprograms (BLAS), a standardized set of thirty-eight routines that perform low-level operations on vectors of numbers in single- and double-precision real and complex arithmetic. Routines are included to find the index of the largest component of a vector, apply a Givens or modified Givens rotation, multiply a vector by a constant, determine the Euclidean length, perform a dot product, swap and copy vectors, and find the norm of a vector. The BLAS have been carefully written to minimize numerical problems such as loss of precision and underflow, and are designed so that the computation is independent of the interface with the calling program. This independence is achieved through judicious use of Assembly language macros. Interfaces are provided for Lahey Fortran 77, Microsoft Fortran 77, and Ryan-McFarland IBM Professional Fortran.
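For reference, the semantics of a few of the level-1 routines named above can be written in plain Python (illustrative only; PC-BLAS itself is optimized assembly and Fortran):

```python
import math

# Reference semantics of several BLAS level-1 operations (no optimization).
def daxpy(alpha, x, y):
    """y := alpha*x + y"""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Dot product of two vectors."""
    return sum(xi * yi for xi, yi in zip(x, y))

def dnrm2(x):
    """Euclidean length of a vector."""
    return math.sqrt(sum(xi * xi for xi in x))

def idamax(x):
    """Index of the component with largest magnitude."""
    return max(range(len(x)), key=lambda i: abs(x[i]))

v = daxpy(2.0, [1.0, 2.0], [3.0, 4.0])
```

A production `dnrm2` additionally scales to avoid overflow and underflow, one of the numerical concerns the abstract mentions.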
A cooperative control algorithm for camera based observational systems.
Young, Joseph G.
2012-01-01
Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
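A minimal one-dimensional constant-velocity Kalman filter of the kind that could feed position predictions and uncertainty estimates to such a camera scheduler (the noise values and measurements here are toy assumptions, not the paper's):

```python
import numpy as np

F = np.array([[1.0, 1.0], [0.0, 1.0]])   # state transition (position, velocity)
H = np.array([[1.0, 0.0]])               # we observe position only
Q = 0.01 * np.eye(2)                     # process noise covariance
R = np.array([[0.25]])                   # measurement noise covariance

x = np.array([0.0, 0.0])                 # initial state estimate
P = np.eye(2)                            # initial covariance

for z in [1.1, 2.0, 2.9, 4.2, 5.0]:     # noisy positions of a moving target
    # Predict step
    x = F @ x
    P = F @ P @ F.T + Q
    # Update step
    S = H @ P @ H.T + R                  # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)       # Kalman gain
    x = x + (K @ (np.array([z]) - H @ x))
    P = (np.eye(2) - K @ H) @ P
```

The covariance P is the "uncertainty estimate" the abstract refers to: a scheduler can point cameras at targets whose predicted covariance has grown large.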
Bosamykin, V.S.; Pavlovskiy, A.I.
1984-03-01
A linear induction accelerator for charged particles, containing inductors and an acceleration circuit, is described. To increase the power of the accelerator, each inductor is made in the form of a toroidal line with distributed parameters; a ring commutator is connected in the gap at one end of the line, and a resistor is connected at the other end.
Combustion powered linear actuator
Fischer, Gary J.
2007-09-04
The present invention provides robotic vehicles having wheeled and hopping mobilities that are capable of traversing (e.g. by hopping over) obstacles that are large in size relative to the robot and, are capable of operation in unpredictable terrain over long range. The present invention further provides combustion powered linear actuators, which can include latching mechanisms to facilitate pressurized fueling of the actuators, as can be used to provide wheeled vehicles with a hopping mobility.
Buttram, M.T.; Ginn, J.W.
1988-06-21
A linear induction accelerator includes a plurality of adder cavities arranged in a series and provided in a structure which is evacuated so that a vacuum inductance is provided between each adder cavity and the structure. An energy storage system for the adder cavities includes a pulsed current source and a respective plurality of bipolar converting networks connected thereto. The bipolar high-voltage, high-repetition-rate square pulse train sets and resets the cavities. 4 figs.
Emma, P.
1995-06-01
The Stanford Linear Collider (SLC) is the first and only high-energy e+e- linear collider in the world. Its most remarkable features are high intensity, submicron-sized, polarized (e-) beams at a single interaction point. The main challenges posed by these unique characteristics include machine-wide emittance preservation, consistent high intensity operation, polarized electron production and transport, and the achievement of a high degree of beam stability on all time scales. In addition to serving as an important machine for the study of Z^0 boson production and decay using polarized beams, the SLC is also an indispensable source of hands-on experience for future linear colliders. Each new year of operation has been highlighted with a marked improvement in performance. The most significant improvements for the 1994-95 run include new low impedance vacuum chambers for the damping rings, an upgrade to the optics and diagnostics of the final focus systems, and a higher degree of polarization from the electron source. As a result, the average luminosity has nearly doubled over the previous year with peaks approaching 10^30 cm^-2 s^-1 and an 80% electron polarization at the interaction point. These developments as well as the remaining identifiable performance limitations will be discussed.
Graphical representation of parallel algorithmic processes. Master's thesis
Williams, E.M.
1990-12-01
Algorithm animation is a visualization method used to enhance understanding of the functioning of an algorithm or program. Visualization is used for many purposes, including education, algorithm research, performance analysis, and program debugging. This research applies algorithm animation techniques to programs developed for parallel architectures, with specific emphasis on the Intel iPSC/2 hypercube. While both P-time and NP-time algorithms can potentially benefit from using visualization techniques, the set of NP-complete problems provides fertile ground for developing parallel applications, since the combinatoric nature of the problems makes finding the optimum solution impractical. The primary goals for this visualization system are: data should be displayed as it is generated; the interface to the target program should be transparent, allowing the animation of existing programs; and flexibility - the system should be able to animate any algorithm. The resulting system incorporates and extends two AFIT products: the AFIT Algorithm Animation Research Facility (AAARF) and the Parallel Resource Analysis Software Environment (PRASE). AAARF is an algorithm animation system developed primarily for sequential programs, but is easily adaptable for use with parallel programs. PRASE is an instrumentation package that extracts system performance data from programs on the Intel hypercubes. Since performance data is an essential part of analyzing any parallel program, views of the performance data are provided as an elementary part of the system. Custom software is designed to interface these systems and to display the program data. The program chosen as the example for this study is a member of the NP-complete problem set; it is a parallel implementation of a general.
Parallelism of the SANDstorm hash algorithm.
Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree
2009-09-01
Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed and they are not able to take advantage of the current trend of being run on multi-core platforms. Being limited in speed limits their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrates a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
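The parallelization claim rests on a tree-structured hashing mode. That structure can be illustrated with off-the-shelf SHA-256 as the compression primitive (this is only an illustration of tree hashing, not SANDstorm itself): chunks are hashed independently, so the leaf digests can be computed on separate cores, and only the short final combine is serial:

```python
import hashlib

def tree_hash(data: bytes, chunk_size: int = 64) -> str:
    """Two-level tree hash: independent leaf digests, then one combine."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    # Each leaf digest depends only on its own chunk, so this list
    # comprehension is the part that could run on separate cores.
    leaves = [hashlib.sha256(c).digest() for c in chunks]
    # The combine step hashes the short concatenation of leaf digests.
    return hashlib.sha256(b"".join(leaves)).hexdigest()

digest = tree_hash(b"x" * 1000)
```

By contrast, a chained Merkle-Damgard hash makes block i's input depend on block i-1's output, which is exactly the serial dependency that prevents parallel speedup.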
Confirming the Lanchestrian linear-logarithmic model of attrition
Hartley, D.S. III.
1990-12-01
This paper is the fourth in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based in part on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms have been validated. The results of this paper confirm the results of earlier papers, using a large database of historical results. The homogeneous linear-logarithmic Lanchestrian attrition model is validated to the extent possible with current initial and final force size data and is consistent with the Iwo Jima data. A particular differential linear-logarithmic model is described that fits the data very well. A version of Helmbold's victory predicting parameter is also confirmed, with an associated probability function. 37 refs., 73 figs., 68 tabs.
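For context, the classical Lanchester square law referred to above can be integrated with a simple Euler step (the linear-logarithmic model validated in the paper is a different, empirically fitted variant; the force sizes and attrition coefficients below are arbitrary):

```python
# Euler integration of the Lanchester square law:
#   dB/dt = -a*R,  dR/dt = -b*B
# where B, R are force sizes and a, b are attrition coefficients.
def lanchester_square(blue, red, a, b, dt=0.01, steps=1000):
    for _ in range(steps):
        blue, red = blue - a * red * dt, red - b * blue * dt
        if blue <= 0 or red <= 0:
            break
    return blue, red

blue, red = lanchester_square(1000.0, 800.0, a=0.01, b=0.01)
```

With equal coefficients the quantity B^2 - R^2 is conserved by the exact dynamics, which is the "square law" advantage of the initially larger force.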
Linear Fresnel | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Concentrating Solar Power » Linear Fresnel Linear Fresnel DOE funds solar research and development (R&D) in linear Fresnel systems as one of four CSP technologies aiming to meet the goals of the SunShot Initiative. Linear Fresnel systems, which are a type of linear concentrator, are active in Germany, Spain, Australia, India, and the United States. The SunShot Initiative funds R&D on linear Fresnel systems and related aspects within the industry, national laboratories and universities
Van Atta, C.M.; Beringer, R.; Smith, L.
1959-01-01
A linear accelerator of heavy ions is described. The basic contributions of the invention consist of a method and apparatus for obtaining high energy particles of an element with an increased charge-to-mass ratio. The method comprises the steps of ionizing the atoms of an element, accelerating the resultant ions to an energy substantially equal to one MeV per nucleon, stripping orbital electrons from the accelerated ions by passing the ions through a curtain of elemental vapor disposed transversely of the path of the ions to provide a second charge-to-mass ratio, and finally accelerating the resultant stripped ions to a final energy of at least ten MeV per nucleon.
Energy Science and Technology Software Center (OSTI)
2006-11-17
Software that simulates and inverts electromagnetic field data for subsurface electrical properties (electrical conductivity) of geological media. The software treats data produced by a time-harmonic source field excitation arising from the following antenna geometries: loops and grounded bipoles, as well as point electric and magnetic dipoles. The inversion process is carried out using a non-linear conjugate gradient optimization scheme, which minimizes the misfit between field data and model data using a least squares criterion. The software is an upgrade of the code NLCGCS_MP ver 1.0. The upgrade includes the following components: incorporation of new 1-D field sourcing routines to more accurately simulate the 3-D electromagnetic field for arbitrary geological media, and treatment for generalized finite-length transmitting antenna geometry (antennas with vertical and horizontal component directions). In addition, the software has been upgraded to treat transverse anisotropy in electrical conductivity.
History of Proton Linear Accelerators
DOE R&D Accomplishments [OSTI]
Alvarez, L. W.
1987-01-01
Some personal recollections are presented that relate to the author's experience developing linear accelerators, particularly for protons. (LEW)
Brambley, Michael R.; Katipamula, Srinivas
2006-10-06
Pacific Northwest National Laboratory (PNNL) is assisting the U.S. Department of Energy (DOE) Distributed Energy (DE) Program by developing advanced control algorithms that would lead to development of tools to enhance performance and reliability, and reduce emissions of distributed energy technologies, including combined heat and power technologies. This report documents phase 2 of the program, providing a detailed functional specification for algorithms for performance monitoring and commissioning verification, scheduled for development in FY 2006. The report identifies the systems for which algorithms will be developed, the specific functions of each algorithm, metrics which the algorithms will output, and inputs required by each algorithm.
Kliman, G.B.; Brynsvold, G.V.; Jahns, T.M.
1989-08-22
A winding and method of winding for a submersible linear pump for pumping liquid sodium are disclosed. The pump includes a stator having a central cylindrical duct preferably vertically aligned. The central vertical duct is surrounded by a system of coils in slots. These slots are interleaved with magnetic flux conducting elements, these magnetic flux conducting elements forming a continuous magnetic field conduction path along the stator. The central duct has placed therein a cylindrical magnetic conducting core, this core having a cylindrical diameter less than the diameter of the cylindrical duct. The core, once placed in the duct, defines a cylindrical interstitial pumping volume of the pump. This cylindrical interstitial pumping volume preferably defines an inlet at the bottom of the pump, and an outlet at the top of the pump. Pump operation occurs by static windings in the outer stator sequentially conveying toroidal fields from the pump inlet at the bottom of the pump to the pump outlet at the top of the pump. The winding apparatus and method of winding disclosed use multiple slots per pole per phase with parallel winding legs on each phase equal to or less than the number of slots per pole per phase. The slot sequence per pole per phase is chosen to equalize the variations in flux density of the pump sodium as it passes into the pump at the pump inlet with little or no flux and acquires magnetic flux in passage through the pump to the pump outlet. 4 figs.
Kliman, Gerald B.; Brynsvold, Glen V.; Jahns, Thomas M.
1989-01-01
A winding and method of winding for a submersible linear pump for pumping liquid sodium are disclosed. The pump includes a stator having a central cylindrical duct preferably vertically aligned. The central vertical duct is surrounded by a system of coils in slots. These slots are interleaved with magnetic flux conducting elements, these magnetic flux conducting elements forming a continuous magnetic field conduction path along the stator. The central duct has placed therein a cylindrical magnetic conducting core, this core having a cylindrical diameter less than the diameter of the cylindrical duct. The core, once placed in the duct, defines a cylindrical interstitial pumping volume of the pump. This cylindrical interstitial pumping volume preferably defines an inlet at the bottom of the pump, and an outlet at the top of the pump. Pump operation occurs by static windings in the outer stator sequentially conveying toroidal fields from the pump inlet at the bottom of the pump to the pump outlet at the top of the pump. The winding apparatus and method of winding disclosed use multiple slots per pole per phase with parallel winding legs on each phase equal to or less than the number of slots per pole per phase. The slot sequence per pole per phase is chosen to equalize the variations in flux density of the pump sodium as it passes into the pump at the pump inlet with little or no flux and acquires magnetic flux in passage through the pump to the pump outlet.
Meisner, John W.; Moore, Robert M.; Bienvenue, Louis L.
1985-03-19
Electromagnetic linear induction pump for liquid metal which includes a unitary pump duct. The duct comprises two substantially flat parallel spaced-apart wall members, one being located above the other and two parallel opposing side members interconnecting the wall members. Located within the duct are a plurality of web members interconnecting the wall members and extending parallel to the side members whereby the wall members, side members and web members define a plurality of fluid passageways, each of the fluid passageways having substantially the same cross-sectional flow area. Attached to an outer surface of each side member is an electrically conductive end bar for the passage of an induced current therethrough. A multi-phase, electrical stator is located adjacent each of the wall members. The duct, stators, and end bars are enclosed in a housing which is provided with an inlet and outlet in fluid communication with opposite ends of the fluid passageways in the pump duct. In accordance with a preferred embodiment, the inlet and outlet include a transition means which provides for a transition from a round cross-sectional flow path to a substantially rectangular cross-sectional flow path defined by the pump duct.
Energy Science and Technology Software Center (OSTI)
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
Linear Collider Physics Resource Book Snowmass 2001
Ronan, M.T.
2001-06-01
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e{sup +}e{sup -} linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e{sup +}e{sup -} linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e{sup +}e{sup -} linear collider; in any scenario that is now discussed, physics will benefit from the new information that e{sup +}e{sup -} experiments can provide. This last point merits further emphasis. If a new accelerator could be designed and
A robust return-map algorithm for general multisurface plasticity
Adhikary, Deepak P.; Jayasundara, Chandana T.; Podgorney, Robert K.; Wilkins, Andy H.
2016-06-16
Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without using the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm results in complete success. Since it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. Furthermore, the main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting.
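The elastic-predictor/plastic-corrector structure underlying such return-map algorithms can be illustrated with a minimal one-dimensional sketch (perfect plasticity and a single yield surface; the names and values are illustrative, not taken from the paper):

```python
def radial_return_1d(eps_total, eps_p, E, sigma_y):
    """Return map for 1-D perfect plasticity: elastic predictor, then a
    plastic corrector that projects the trial stress back onto the yield
    surface |sigma| <= sigma_y."""
    sigma_trial = E * (eps_total - eps_p)     # elastic predictor
    f = abs(sigma_trial) - sigma_y            # yield function at the trial state
    if f <= 0.0:
        return sigma_trial, eps_p             # elastic step: no correction
    dgamma = f / E                            # plastic multiplier from consistency
    sign = 1.0 if sigma_trial > 0.0 else -1.0
    return sigma_trial - E * dgamma * sign, eps_p + dgamma * sign

# A step loaded well past yield returns exactly to the yield surface.
sigma, eps_p = radial_return_1d(0.02, 0.0, 100.0, 1.0)
```

The multisurface case of the paper replaces the scalar projection with a constrained solve over the set of active yield surfaces, which is where the initially-active-constraint selection discussed above enters.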
Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems
Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; Thornquist, Heidi
2012-01-01
Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
The TESLA superconducting linear collider
the TESLA Collaboration
1997-03-01
This paper summarizes the present status of the studies for a superconducting Linear Collider (TESLA). © 1997 American Institute of Physics.
2d PDE Linear Symmetric Matrix Solver
Energy Science and Technology Software Center (OSTI)
1983-10-01
ICCG2 (Incomplete Cholesky factorized Conjugate Gradient algorithm for 2d symmetric problems) was developed to solve a linear symmetric matrix system arising from a 9-point discretization of two-dimensional elliptic and parabolic partial differential equations found in plasma physics applications, such as resistive MHD, spatial diffusive transport, and phase space transport (Fokker-Planck equation) problems. These problems share the common feature of being stiff and requiring implicit solution techniques. When these parabolic or elliptic PDEs are discretized with finite-difference or finite-element methods, the resulting matrix system is frequently of block-tridiagonal form. To use ICCG2, the discretization of the two-dimensional partial differential equation and its boundary conditions must result in a block-tridiagonal supermatrix composed of elementary tridiagonal matrices. The incomplete Cholesky conjugate gradient algorithm is used to solve the linear symmetric matrix equation. Loops are arranged to vectorize on the Cray-1 with the CFT compiler, wherever possible. Recursive loops, which cannot be vectorized, are written for optimum scalar speed. For matrices lacking symmetry, ILUCG2 should be used. Similar methods in three dimensions are available in ICCG3 and ILUCG3. A general source containing extensions and macros, which must be processed by a pre-compiler to obtain the standard FORTRAN source, is provided along with the standard FORTRAN source because it is believed to be more readable. The pre-compiler is not included, but pre-compilation may be performed by a text editor as described in the UCRL-88746 Preprint.
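A minimal sketch of the incomplete-Cholesky preconditioned conjugate gradient iteration at the heart of ICCG2, using dense NumPy arithmetic and a 1-D tridiagonal test matrix in place of the vectorized block-tridiagonal FORTRAN implementation (all names here are illustrative):

```python
import numpy as np

def incomplete_cholesky(A):
    """IC(0): a Cholesky factor restricted to the sparsity pattern of A."""
    n = A.shape[0]
    L = np.zeros_like(A)
    for i in range(n):
        for j in range(i + 1):
            if A[i, j] == 0.0:
                continue                      # no fill outside A's pattern
            s = A[i, j] - L[i, :j] @ L[j, :j]
            L[i, j] = np.sqrt(s) if i == j else s / L[j, j]
    return L

def pcg(A, b, L, tol=1e-10, maxit=200):
    """Conjugate gradients preconditioned with M = L L^T."""
    def apply_Minv(v):                        # two triangular solves
        return np.linalg.solve(L.T, np.linalg.solve(L, v))
    x = np.zeros_like(b)
    r = b - A @ x
    z = apply_Minv(r)
    p, rz = z.copy(), r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = apply_Minv(r)
        rz, rz_old = r @ z, rz
        p = z + (rz / rz_old) * p
    return x

# Demo: a 1-D Laplacian (tridiagonal, SPD); the 9-point 2-D case is analogous.
n = 20
A = np.diag(2.0 * np.ones(n)) + np.diag(-np.ones(n - 1), 1) + np.diag(-np.ones(n - 1), -1)
b = np.ones(n)
x = pcg(A, b, incomplete_cholesky(A))
```

For a tridiagonal matrix the incomplete factor coincides with the exact one, so the demo converges immediately; on the 9-point stencil the pattern restriction is what keeps the preconditioner cheap.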
Nonlinear Global Optimization Using Curdling Algorithm
Energy Science and Technology Software Center (OSTI)
1996-03-01
An algorithm for performing curdling optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
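The general idea of derivative-free grid refinement that keeps extremal regions rather than single points can be sketched as follows (a 1-D toy with assumed parameters, not the Curdling code itself):

```python
def grid_refine_minimize(f, lo, hi, levels=6, pts=9, keep=3):
    """Derivative-free 1-D grid refinement: sample each cell on a uniform
    grid, keep cells around the best samples, refine, repeat.  The centers
    of the surviving cells are returned, so several extremal regions (not
    just one point) can be reported."""
    cells = [(lo, hi)]
    for _ in range(levels):
        scored = []
        for a, b in cells:
            h = (b - a) / (pts - 1)
            xs = [a + i * h for i in range(pts)]
            scored.extend((f(x), x, h) for x in xs)
        scored.sort()
        cells = [(x - h, x + h) for _, x, h in scored[:keep]]
    return sorted(set(round((a + b) / 2.0, 6) for a, b in cells))

# Both global minima of (x^2 - 1)^2 survive as separate extremal regions.
mins = grid_refine_minimize(lambda x: (x * x - 1.0) ** 2, -2.0, 2.0)
```

No derivatives are evaluated at any point, which is the source of the numerical robustness claimed above; each refinement level is also trivially parallel across cells.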
Statistics of voltage drop in distribution circuits: a dynamic programming approach
Turitsyn, Konstantin S
2010-01-01
We analyze a power distribution line with high penetration of distributed generation and strong variations of power consumption and generation levels. In the presence of uncertainty the statistical description of the system is required to assess the risks of power outages. In order to find the probability of exceeding the constraints for voltage levels we introduce the probability distribution of maximal voltage drop and propose an algorithm for finding this distribution. The algorithm is based on the assumption of random but statistically independent distribution of loads on buses. Linear complexity in the number of buses is achieved through the dynamic programming technique. We illustrate the performance of the algorithm by analyzing a simple 4-bus system with high variations of load levels.
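A toy version of the linear-in-buses recursion can be sketched for a consumption-only radial feeder with discretized, independent loads, in which case the largest drop occurs at the feeder end; the paper's algorithm additionally handles distributed generation and the true maximal drop (the load model and names here are assumptions):

```python
from collections import defaultdict

def feeder_drop_pmf(n_buses, r=1.0, load_pmf=None):
    """Probability mass function of the voltage drop at the end of a radial
    feeder with independent discrete loads.  One sweep over the buses with a
    (current, drop) state table gives complexity linear in the number of
    buses, which is the dynamic-programming idea."""
    if load_pmf is None:
        load_pmf = {0: 0.5, 1: 0.5}           # assumed toy load model
    states = {(0.0, 0.0): 1.0}                # (downstream current, drop) -> prob
    for _ in range(n_buses):                  # sweep from the far end to the source
        nxt = defaultdict(float)
        for (cur, drop), p in states.items():
            for load, q in load_pmf.items():
                cur2 = cur + load             # this bus's load joins the current
                nxt[(cur2, drop + r * cur2)] += p * q   # drop across one segment
        states = nxt
    pmf = defaultdict(float)
    for (_, drop), p in states.items():
        pmf[drop] += p
    return dict(pmf)

pmf = feeder_drop_pmf(2)   # two buses, unit segment resistance
```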
Energy Science and Technology Software Center (OSTI)
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and can be used with thermal or visual tracking as well as other tracking methods such as radio frequency tags.
DOE - Office of Legacy Management -- Stanford Linear Accelerator Center
Office of Legacy Management (LM)
FUSRAP Considered Sites. Site: Stanford Linear Accelerator Center (005). More information at www.slac.stanford.edu. Designated Name: Not Designated under FUSRAP. Alternate Name: SLAC. Location: Palo Alto, California. Evaluation Year: Not considered for FUSRAP - in another program. Site Operations: Research. Site Disposition: Remediation completed by DOE Office of Environmental Management in 2014. DOE Office of Science is responsible for long-term
Acoustic emission linear pulse holography
Collins, H.D.; Busse, L.J.; Lemon, D.K.
1983-10-25
This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.
Linear Accelerator | Advanced Photon Source
U.S. Department of Energy (DOE) all webpages (Extended Search)
Linear Accelerator Producing brilliant x-ray beams at the APS begins with electrons emitted from a cathode heated to 1100 °C. The electrons are accelerated by high-voltage...
Automating linear accelerator quality assurance
Eckhause, Tobias; Thorwarth, Ryan; Moran, Jean M.; Al-Hallaq, Hania; Farrey, Karl; Ritter, Timothy; DeMarco, John; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Park, SungYong; Perez, Mario; Booth, Jeremy T.
2015-10-15
Purpose: The purpose of this study was 2-fold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish as a reference for other centers. Methods: The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac to include jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. Results: For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. The
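The threshold check applied to logged component positions can be sketched generically (a hypothetical helper, not the consortium's software):

```python
def flag_deviations(expected, actual, tolerance):
    """Indices of samples whose logged position deviates from the expected
    position by more than the tolerance (e.g. MLC leaf positions read from
    a trajectory log)."""
    return [i for i, (e, a) in enumerate(zip(expected, actual))
            if abs(e - a) > tolerance]

bad = flag_deviations([0.0, 1.0, 2.0], [0.0, 1.3, 2.01], 0.2)
```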
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Energy Science and Technology Software Center (OSTI)
1998-07-01
GenOpt is a generic optimization program for nonlinear, constrained optimization. For evaluating the objective function, any simulation program that communicates over text files can be coupled to GenOpt without code modification. No analytic properties of the objective function are used by GenOpt. Optimization algorithms and numerical methods can be implemented in a library and shared among users. GenOpt offers an interface between the optimization algorithm and its kernel to make the implementation of new algorithms fast and easy. Different algorithms of constrained and unconstrained minimization can be added to a library. Algorithms for approximating derivatives and performing line searches will be implemented. The objective function is evaluated as a black-box function by an external simulation program. The kernel of GenOpt deals with the data I/O, result storage and reporting, the interface to the external simulation program, and error handling. An abstract optimization class offers methods to interface the GenOpt kernel and the optimization algorithm library.
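The black-box pattern, in which the optimizer only evaluates the objective and never differentiates it, can be illustrated with a simple derivative-free pattern search (a generic sketch; GenOpt's own algorithm library differs):

```python
def hooke_jeeves(f, x0, step=1.0, shrink=0.5, tol=1e-6, maxit=500):
    """Derivative-free pattern search: probe each coordinate in turn,
    accept improving moves, and shrink the step when no probe improves.
    The objective f is treated purely as a black box, as GenOpt treats
    an external simulation program."""
    x, fx = list(x0), f(x0)
    for _ in range(maxit):
        improved = False
        for i in range(len(x)):
            for d in (step, -step):
                trial = x.copy()
                trial[i] += d
                ft = f(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
                    break
        if not improved:
            step *= shrink                    # refine the mesh
            if step < tol:
                break
    return x, fx

xmin, fmin = hooke_jeeves(lambda v: (v[0] - 3.0) ** 2 + (v[1] + 1.0) ** 2, [0.0, 0.0])
```

In a GenOpt-style coupling, the lambda above would be replaced by a routine that writes an input file, runs the simulation, and parses the objective value back from a text file.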
MineSeis - A MATLAB GUI Program to
Office of Scientific and Technical Information (OSTI)
MineSeis - A MATLAB GUI Program to Calculate Synthetic Seismograms from a Linear, ... The program was written with the MATLAB Graphical User Interface (GUI) technique ...
Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)
Energy Science and Technology Software Center (OSTI)
2012-05-31
The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
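As a minimal illustration of dynamic programming for maximum weighted independent set, here is the standard two-state recursion on a tree, which is the width-one special case of the tree-decomposition approach (a sketch, not INDDGO code):

```python
from collections import defaultdict

def mwis_tree(n, edges, weight):
    """Maximum weighted independent set on a tree by dynamic programming:
    for each vertex keep the best value of its subtree without and with
    the vertex itself."""
    adj = defaultdict(list)
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    parent, order, stack = [-1] * n, [], [0]
    seen = [False] * n
    seen[0] = True
    while stack:                              # iterative DFS to get an order
        u = stack.pop()
        order.append(u)
        for w in adj[u]:
            if not seen[w]:
                seen[w] = True
                parent[w] = u
                stack.append(w)
    without = [0] * n
    withv = list(weight)
    for u in reversed(order):                 # fold children into parents
        p = parent[u]
        if p >= 0:
            without[p] += max(without[u], withv[u])
            withv[p] += without[u]
    return max(without[0], withv[0])

best = mwis_tree(4, [(0, 1), (1, 2), (2, 3)], [1, 10, 1, 10])   # picks vertices 1 and 3
```

On a general graph the same fold runs over the bags of a tree decomposition, with one table entry per independent subset of a bag; the table size, and hence the cost, is exponential only in the decomposition width.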
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
2d PDE Linear Asymmetric Matrix Solver
Energy Science and Technology Software Center (OSTI)
1983-10-01
ILUCG2 (Incomplete LU factorized Conjugate Gradient algorithm for 2d problems) was developed to solve a linear asymmetric matrix system arising from a 9-point discretization of two-dimensional elliptic and parabolic partial differential equations found in plasma physics applications, such as plasma diffusion, equilibria, and phase space transport (Fokker-Planck equation) problems. These equations share the common feature of being stiff and requiring implicit solution techniques. When these parabolic or elliptic PDEs are discretized with finite-difference or finite-element methods, the resulting matrix system is frequently of block-tridiagonal form. To use ILUCG2, the discretization of the two-dimensional partial differential equation and its boundary conditions must result in a block-tridiagonal supermatrix composed of elementary tridiagonal matrices. A generalization of the incomplete Cholesky conjugate gradient algorithm is used to solve the matrix equation. Loops are arranged to vectorize on the Cray-1 with the CFT compiler, wherever possible. Recursive loops, which cannot be vectorized, are written for optimum scalar speed. For problems having a symmetric matrix, ICCG2 should be used since it runs up to four times faster and uses approximately 30% less storage. Similar methods in three dimensions are available in ICCG3 and ILUCG3. A general source, containing extensions and macros, which must be processed by a pre-compiler to obtain the standard FORTRAN source, is provided along with the standard FORTRAN source because it is believed to be more readable. The pre-compiler is not included, but pre-compilation may be performed by a text editor as described in the UCRL-88746 Preprint.
MEMORANDUM OF UNDERSTANDING Between The Numerical Algorithms Group Ltd
U.S. Department of Energy (DOE) all webpages (Extended Search)
Between The Numerical Algorithms Group Ltd and The University of California, as Management and Operating Contractor for Lawrence Berkeley National Laboratory on a Visitor Exchange Program This Memorandum of Understanding (MOU) is by and between the Numerical Algorithms Group Ltd (NAG) with a registered address at: Wilkinson House, Jordan Hill Road, Oxford, UK and the University of California, as Management and Operating Contractor for Lawrence Berkeley National Laboratory, including its
Improved multiprocessor garbage collection algorithms
Newman, I.A.; Stallard, R.P.; Woodward, M.C.
1983-01-01
Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.
Henry, J.J.
1961-09-01
A linear count-rate meter is designed to provide a highly linear output while receiving counting rates from one cycle per second to 100,000 cycles per second. Input pulses enter a linear discriminator and then are fed to a trigger circuit which produces positive pulses of uniform width and amplitude. The trigger circuit is connected to a one-shot multivibrator. The multivibrator output pulses have a selected width. Feedback means are provided for preventing transistor saturation in the multivibrator which improves the rise and decay times of the output pulses. The multivibrator is connected to a diode-switched, constant current metering circuit. A selected constant current is switched to an averaging circuit for each pulse received, and for a time determined by the received pulse width. The average output meter current is proportional to the product of the counting rate, the constant current, and the multivibrator output pulse width.
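The stated proportionality, average output current equal to the product of counting rate, constant current, and pulse width, can be written directly (valid while pulses do not overlap; the values below are illustrative):

```python
def average_meter_current(rate_hz, i_const, pulse_width_s):
    """Time-averaged output current of the count-rate meter: each count
    switches the constant current on for one pulse width, so the average
    is rate * I * width.  Valid while rate * width < 1 (non-overlapping
    pulses)."""
    duty = rate_hz * pulse_width_s
    if duty >= 1.0:
        raise ValueError("pulses would overlap")
    return i_const * duty

i_avg = average_meter_current(1000.0, 2e-3, 1e-4)   # 1 kHz, 2 mA, 100 us pulses
```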
A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials
Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A
2008-12-04
We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
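The inner implicit step can be illustrated in miniature: one backward-Euler (BDF1) step solved by Newton iteration, with a dense direct solve standing in for the preconditioned GMRES of a true Newton-Krylov method (an illustrative sketch, not the paper's solver):

```python
import numpy as np

def bdf1_step(f, jac, y0, dt, tol=1e-12, maxit=20):
    """One backward-Euler (BDF1) step for y' = f(y): solve the nonlinear
    residual y - y0 - dt*f(y) = 0 by Newton iteration.  A dense direct
    solve stands in for the preconditioned GMRES of a Newton-Krylov
    implementation."""
    y = y0.astype(float).copy()
    for _ in range(maxit):
        r = y - y0 - dt * f(y)
        if np.linalg.norm(r) < tol:
            break
        J = np.eye(len(y)) - dt * jac(y)      # Jacobian of the residual
        y -= np.linalg.solve(J, r)
    return y

# Demo on y' = -y: one implicit step reproduces y0 / (1 + dt).
y0 = np.array([1.0, 2.0])
y1 = bdf1_step(lambda y: -y, lambda y: -np.eye(len(y)), y0, 0.1)
```

In the phase-field setting the state additionally carries the quaternion orientation components, which the paper's coordinate projection renormalizes to unit length after each step.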
Linear Fresnel Power Plant Illustration
With this concentrating solar power (CSP) graphic, flat or slightly curved mirrors mounted on trackers on the ground are configured to reflect sunlight onto a receiver tube fixed in space above these mirrors. A small parabolic mirror is sometimes added atop the receiver to further focus the sunlight. Linear CSP collectors capture the sun's energy with large mirrors that reflect and focus the sunlight onto a linear receiver tube. The receiver contains a fluid that is heated by the sunlight and then used to create superheated steam that spins a turbine that drives a generator to produce electricity.
Linear electric field mass spectrometry
McComas, D.J.; Nordholt, J.E.
1992-12-01
A mass spectrometer and methods for mass spectrometry are described. The apparatus is compact and of low weight and has a low power requirement, making it suitable for use on a space satellite and as a portable detector for the presence of substances. High mass resolution measurements are made by timing ions moving through a gridless cylindrically symmetric linear electric field. 8 figs.
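The physics behind timing ions in a linear electric field can be sketched: for a field linear in the axial displacement, the restoring force is harmonic, so the oscillation period depends only on the mass-to-charge ratio (an illustrative calculation with assumed values, not the patent's design):

```python
import math

def oscillation_period(m_kg, q_c, k):
    """Axial oscillation period of an ion in a linear electric field
    E_z = -k*z: the force -q*k*z gives simple harmonic motion with
    period 2*pi*sqrt(m/(q*k)), independent of amplitude, so a timing
    measurement determines m/q."""
    return 2.0 * math.pi * math.sqrt(m_kg / (q_c * k))

# Quadrupling the mass doubles the period (period scales as sqrt(m/q)).
ratio = oscillation_period(4e-26, 1.6e-19, 1e3) / oscillation_period(1e-26, 1.6e-19, 1e3)
```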
Linear electric field mass spectrometry
McComas, David J.; Nordholt, Jane E.
1992-01-01
A mass spectrometer and methods for mass spectrometry. The apparatus is compact and of low weight and has a low power requirement, making it suitable for use on a space satellite and as a portable detector for the presence of substances. High mass resolution measurements are made by timing ions moving through a gridless cylindrically symmetric linear electric field.
Solar Power Ramp Events Detection Using an Optimized Swinging Door Algorithm
Cui, Mingjian; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang
2015-08-05
Solar power ramp events (SPREs) significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetrations in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) to enhance the state of the art in SPRE detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring in clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA can significantly improve the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.
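The swinging door segmentation step can be sketched as follows: the admissible-slope "doors" from the segment start are narrowed by each new point, and a new segment begins when they close (a generic SDA sketch with an assumed tolerance, not the OpSDA code):

```python
def swinging_door(values, eps):
    """Swinging door segmentation of a uniformly sampled signal: the
    admissible slopes from the segment start are narrowed by every new
    point; when the upper and lower 'doors' cross, a new segment starts
    at the previous point.  Returns the start index of each segment."""
    starts, s = [0], 0
    up, lo = float("inf"), float("-inf")
    for i in range(1, len(values)):
        dx = i - s
        up = min(up, (values[i] + eps - values[s]) / dx)
        lo = max(lo, (values[i] - eps - values[s]) / dx)
        if lo > up:                           # doors closed: break the segment
            s = i - 1
            starts.append(s)
            up = (values[i] + eps - values[s]) / (i - s)
            lo = (values[i] - eps - values[s]) / (i - s)
    return starts

# A ramp followed by a plateau splits into two linear segments.
segs = swinging_door([0, 1, 2, 3, 4, 4, 4, 4, 4], 0.1)
```

OpSDA's dynamic-programming stage would then merge adjacent segments of like slope into significant ramps when the decision thresholds are met.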
Solar Power Ramp Events Detection Using an Optimized Swinging Door Algorithm: Preprint
Cui, Mingjian; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang
2015-08-07
Solar power ramp events (SPREs) are those that significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetrations in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) to improve SPRE detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring in clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA can significantly improve the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.
International Linear Collider Technical Design Report - Volume...
Office of Scientific and Technical Information (OSTI)
International Linear Collider Technical Design Report - Volume 2: Physics
Algorithmic crystal chemistry: A cellular automata approach
Krivovichev, S. V.
2012-01-15
Atomic-molecular mechanisms of crystal growth can be modeled based on crystallochemical information using cellular automata (a particular case of finite deterministic automata). In particular, the formation of heteropolyhedral layered complexes in uranyl selenates can be modeled applying a one-dimensional three-colored cellular automaton. The use of the theory of computation (in particular, automata theory) in crystallography allows one to interpret crystal growth as a computational process (the realization of an algorithm or program with a finite number of steps).
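A one-dimensional three-colored (three-state) cellular automaton of the kind mentioned can be sketched directly; the update rule below is an arbitrary totalistic example, not the actual uranyl-selenate model:

```python
def ca_step(cells, rule):
    """One synchronous update of a 1-D three-state cellular automaton with
    periodic boundaries; `rule` maps each (left, self, right) triple of
    states in {0, 1, 2} to a new state."""
    n = len(cells)
    return [rule[(cells[(i - 1) % n], cells[i], cells[(i + 1) % n])]
            for i in range(n)]

# Illustrative totalistic rule: new state = (left + self + right) mod 3.
rule = {(a, b, c): (a + b + c) % 3
        for a in range(3) for b in range(3) for c in range(3)}

history = [[0, 0, 1, 0, 0]]                   # a single seed cell
for _ in range(3):
    history.append(ca_step(history[-1], rule))
```

Successive rows of `history` play the role of successive layers of the growing crystal: a local rule applied in parallel generates the global layered pattern.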
Cast dielectric composite linear accelerator
Sanders, David M.; Sampayan, Stephen; Slenes, Kirk; Stoller, H. M.
2009-11-10
A linear accelerator having cast dielectric composite layers integrally formed with conductor electrodes in a solventless fabrication process, with the cast dielectric composite preferably having a nanoparticle filler in an organic polymer such as a thermosetting resin. By incorporating this cast dielectric composite, the dielectric constant of critical insulating layers of the transmission lines of the accelerator is increased while high dielectric strength is simultaneously maintained.
Segmented rail linear induction motor
Cowan, M. Jr.; Marder, B.M.
1996-09-03
A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces. 6 figs.
Segmented rail linear induction motor
Cowan, Jr., Maynard; Marder, Barry M.
1996-01-01
A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces.
Precision linear ramp function generator
Jatko, W. Bruce (Knoxville, TN); McNeilly, David R. (Maryville, TN); Thacker, Louis H. (Knoxville, TN)
1986-01-01
A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
Precision linear ramp function generator
Jatko, W.B.; McNeilly, D.R.; Thacker, L.H.
1984-08-01
A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
St Aubin, J. Keyvanloo, A.; Fallone, B. G.; Vassiliev, O.
2015-02-15
Purpose: Accurate radiotherapy dose calculation algorithms are essential to any successful radiotherapy program, considering the high level of dose conformity and modulation in many of today's treatment plans. As technology continues to progress, as is the case with novel MRI-guided radiotherapy systems, the necessity for dose calculation algorithms to accurately predict delivered dose in increasingly challenging scenarios is vital. To this end, a novel deterministic solution to the first-order linear Boltzmann transport equation has been developed which accurately calculates x-ray based radiotherapy doses in the presence of magnetic fields. Methods: The deterministic formalism discussed here, with the inclusion of magnetic fields, is outlined mathematically using a discrete ordinates angular discretization in an attempt to leverage existing deterministic codes. It is compared against the EGSnrc Monte Carlo code, utilizing the emf-macros addition, which calculates the effects of electromagnetic fields. This comparison is performed in an inhomogeneous phantom that was designed to present a challenging calculation for deterministic methods, in 0, 0.6, and 3 T magnetic fields oriented parallel and perpendicular to the radiation beam. The accuracy of the formalism against Monte Carlo was evaluated with a gamma comparison using a standard 2%/2 mm and a more stringent 1%/1 mm criterion for a standard reference 10 × 10 cm{sup 2} field as well as a smaller 2 × 2 cm{sup 2} field. Results: Greater than 99.8% (94.8%) of all points analyzed passed a 2%/2 mm (1%/1 mm) gamma criterion for all magnetic field strengths and orientations investigated. All dosimetric changes resulting from the inclusion of magnetic fields were accurately calculated using the deterministic formalism. However, despite the algorithm's high degree of accuracy, this formalism was found not to be unconditionally stable using a discrete ordinates angular discretization.
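The gamma comparison used in the evaluation can be sketched in one dimension. This is a simplified global-normalization version on a shared grid, not the evaluation code used in the study; the tolerances mirror the 2%/2 mm criterion mentioned above:

```python
import numpy as np

def gamma_pass_rate(ref, evalu, x, dose_tol=0.02, dist_tol=2.0):
    """1-D global gamma analysis (e.g. 2%/2 mm): fraction of points with gamma <= 1.

    ref, evalu: dose arrays on the common coordinate grid x (mm).
    dose_tol is a fraction of the reference maximum (global normalization);
    dist_tol is the distance-to-agreement tolerance in mm.
    """
    dmax = ref.max()
    passed = 0
    for xi, de in zip(x, evalu):
        # combined distance / dose-difference term against every reference point
        gamma2 = ((x - xi) / dist_tol) ** 2 + ((ref - de) / (dose_tol * dmax)) ** 2
        if np.sqrt(gamma2.min()) <= 1.0:
            passed += 1
    return passed / len(x)
```

Identical distributions pass everywhere (gamma = 0), and a pass rate such as "99.8% at 2%/2 mm" is this fraction computed over all analyzed points.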
Cubit Adaptive Meshing Algorithm Library
Energy Science and Technology Software Center (OSTI)
2004-09-01
CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad, and tetrahedral meshing. A simple Application Programmer's Interface (API) takes a discrete boundary definition, and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.
Acoustic emission linear pulse holography
Collins, H. Dale; Busse, Lawrence J.; Lemon, Douglas K.
1985-01-01
Defects in a structure are imaged as they propagate, using their emitted acoustic energy as a monitored source. Short bursts of acoustic energy propagate through the structure to a discrete element receiver array. A reference timing transducer located between the array and the inspection zone initiates a series of time-of-flight measurements. A resulting series of time-of-flight measurements are then treated as aperture data and are transferred to a computer for reconstruction of a synthetic linear holographic image. The images can be displayed and stored as a record of defect growth.
Nonferromagnetic linear variable differential transformer
Ellis, James F.; Walstrom, Peter L.
1977-06-14
A nonferromagnetic linear variable differential transformer for accurately measuring mechanical displacements in the presence of high magnetic fields is provided. The device utilizes a movable primary coil inside a fixed secondary coil that consists of two series-opposed windings. Operation is such that the secondary output voltage is maintained in phase (depending on polarity) with the primary voltage. The transducer is well-suited to long cable runs and is useful for measuring small displacements in the presence of high or alternating magnetic fields.
Energy Science and Technology Software Center (OSTI)
2013-07-24
Version 00. Calculation of the decay heat is of great importance for the design of shielding for discharged fuel, the design and transport of fuel-storage flasks, and the management of the resulting radioactive waste. These are relevant to safety and have large economic and legislative consequences. In the HEATKAU code, a new approach has been proposed to evaluate the decay heat power after a fission burst of a fissile nuclide for short cooling times. This method is based on the numerical solution of coupled linear differential equations that describe the decays and build-ups of the minor fission product (MFP) nuclides. HEATKAU is written entirely in the MATLAB programming environment. The MATLAB data can be stored in a standard, fast, easy-access, platform-independent binary format which is easy to visualize.
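The coupled linear decay/build-up equations can be illustrated with a toy two-member chain A → B → (stable). The decay constants and mean decay energies below are made-up values for illustration; HEATKAU itself is a MATLAB code handling many minor fission products:

```python
import numpy as np

# Hypothetical chain A -> B -> stable: dN/dt = M N, the same linear form
# that decay-heat codes integrate over the full set of MFP nuclides.
lam_a, lam_b = 0.5, 0.1          # decay constants (1/s), made-up values
q_a, q_b = 1.2, 0.8              # mean decay energies (MeV), made-up values

M = np.array([[-lam_a, 0.0],
              [ lam_a, -lam_b]])

def decay_power(n0, t, steps=100000):
    """Integrate dN/dt = M N with forward Euler and return the decay heat
    P(t) = sum_i q_i * lam_i * N_i(t)."""
    n = np.asarray(n0, dtype=float)
    dt = t / steps
    for _ in range(steps):
        n = n + dt * (M @ n)
    return q_a * lam_a * n[0] + q_b * lam_b * n[1]
```

For this small chain the result can be checked against the analytic Bateman solution; production codes instead integrate the full coupled system numerically.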
DOE Publishes CALiPER Report on Linear (T8) LED Lamps in Recessed Troffers
The U.S. Department of Energy's CALiPER program has released Report 21.2, which is part of a series of investigations on linear LED lamps. Report 21.2 focuses on the performance of three linear (T8...
Wang, C. L.
2016-05-17
On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly-resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
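The linearization trick behind such methods can be sketched for a noise-free Gaussian LRF. Writing ln N(x) = ln A − (x − x₀)²/(2σ²), the x² term can be moved to the left side, leaving an expression linear in x whose slope is x₀/σ². This is a generic illustration of the FluoroBancroft-style idea, not the exact WLSF algorithms of the paper:

```python
import numpy as np

def gaussian_center(pixels, counts, sigma):
    """Estimate the peak position of a Gaussian light-response function by
    linearization: ln N(x) + x^2/(2 sigma^2) = a + (x0/sigma^2) x,
    so a linear least-squares fit yields x0 = slope * sigma^2."""
    y = np.log(counts) + pixels ** 2 / (2.0 * sigma ** 2)  # now linear in x
    slope, _ = np.polyfit(pixels, y, 1)
    return slope * sigma ** 2
```

With noiseless data the fit recovers the center exactly; with Poisson photon noise the same linear solve gives the fast sub-pixel estimate that replaces an iterative nonlinear fit.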
International linear collider reference design report
Aarons, G.
2007-06-22
The International Linear Collider will give physicists a new cosmic doorway to explore energy regimes beyond the reach of today's accelerators. A proposed electron-positron collider, the ILC will complement the Large Hadron Collider, a proton-proton collider at the European Center for Nuclear Research (CERN) in Geneva, Switzerland, together unlocking some of the deepest mysteries in the universe. With LHC discoveries pointing the way, the ILC -- a true precision machine -- will provide the missing pieces of the puzzle. Consisting of two linear accelerators that face each other, the ILC will hurl some 10 billion electrons and their anti-particles, positrons, toward each other at nearly the speed of light. Superconducting accelerator cavities operating at temperatures near absolute zero give the particles more and more energy until they smash in a blazing crossfire at the centre of the machine. Stretching approximately 35 kilometres in length, the beams collide 14,000 times every second at extremely high energies -- 500 billion-electron-volts (GeV). Each spectacular collision creates an array of new particles that could answer some of the most fundamental questions of all time. The current baseline design allows for an upgrade to a 50-kilometre, 1 trillion-electron-volt (TeV) machine during the second stage of the project. This reference design provides the first detailed technical snapshot of the proposed future electron-positron collider, defining in detail the technical parameters and components that make up each section of the 31-kilometer long accelerator. The report will guide the development of the worldwide R&D program, motivate international industrial studies and serve as the basis for the final engineering design needed to make an official project proposal later this decade.
Reticle stage based linear dosimeter
Berger, Kurt W.
2007-03-27
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for a photolithography system that includes: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.
Reticle stage based linear dosimeter
Berger, Kurt W.
2005-06-14
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for a photolithography system that includes: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.
Belief network algorithms: A study of performance
Jitnah, N.
1996-12-31
This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.
Optimized Algorithms Boost Combustion Research
U.S. Department of Energy (DOE) all webpages (Extended Search)
Optimized Algorithms Boost Combustion Research: Methane Flame Simulations Run 6x Faster on NERSC's Hopper Supercomputer. November 25, 2014. Contact: Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov. Turbulent combustion simulations, which provide input to the design of more fuel-efficient combustion systems, have gotten their own efficiency boost, thanks to researchers from the Computational Research Division (CRD) at Lawrence Berkeley National Laboratory.
Modeling patterns in data using linear and related models
Engelhardt, M.E.
1996-06-01
This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models.
High Performance Preconditioners and Linear Solvers
Energy Science and Technology Software Center (OSTI)
2006-07-27
Hypre is a software library focused on the solution of large, sparse linear systems of equations on massively parallel computers.
Berkeley Algorithms Help Researchers Understand Dark Energy
U.S. Department of Energy (DOE) all webpages (Extended Search)
Berkeley Algorithms Help Researchers Understand Dark Energy. November 24, 2014. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov ...
The Computational Physics Program of the national MFE Computer Center
Mirin, A.A.
1989-01-01
Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.
A Linac Simulation Code for Macro-Particles Tracking and Steering Algorithm Implementation
Sun, Yipeng
2012-05-03
In this paper, a linac simulation code written in Fortran90 is presented and several simulation examples are given. The code is optimized for implementing linac alignment and steering algorithms and for evaluating accelerator errors such as RF phase and acceleration-gradient errors and quadrupole and BPM misalignments. It can track a single particle or a bunch of particles through standard linear accelerator elements such as quadrupoles, RF cavities, dipole correctors, and drift spaces. A one-to-one steering algorithm and a global alignment (steering) algorithm are implemented in this code.
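One-to-one steering can be sketched with a toy lower-triangular orbit-response model: corrector i is set so that BPM i reads exactly zero, working downstream in order. This is an illustration of the strategy, not the Fortran90 code described above:

```python
import numpy as np

def one_to_one_steering(response, readings):
    """Set corrector i to zero BPM i exactly, in order.

    response[i][j]: reading change at BPM j per unit kick of corrector i
    (zero for j < i, since a kick only acts downstream). `readings` holds
    the initial BPM readings. Returns the kicks and the corrected readings.
    """
    n = len(readings)
    kicks = np.zeros(n)
    x = np.array(readings, dtype=float)
    for i in range(n):
        kicks[i] = -x[i] / response[i][i]                # zero BPM i
        x = x + kicks[i] * np.asarray(response[i], dtype=float)  # propagate downstream
    return kicks, x
```

One-to-one steering zeroes every BPM but can build up large corrector strengths in a misaligned lattice, which is why global (least-squares) alignment algorithms are implemented alongside it.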
GPU Accelerated Event Detection Algorithm
Energy Science and Technology Software Center (OSTI)
2011-05-25
Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and the detection of anomalies within the power grid. State-of-the-art algorithms are not suited to the demands of streaming data analysis: (i) event detection algorithms are needed that can scale with the size of the data; (ii) algorithms are needed that can not only handle the multi-dimensional nature of the data but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms are needed that can operate in an online fashion on streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
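Step (a) of the approach, collapsing a multivariate sequence into a univariate change series via SVD, can be sketched as follows. This is a minimal dense-SVD version; the scalable variant described above uses incremental SVD and tensor decompositions on GPUs:

```python
import numpy as np

def window_change_series(data, w):
    """Convert a multivariate sequence into a univariate series by taking
    the largest singular value of each sliding window of length w.

    data: (T, d) array of T time steps with d measurements each.
    Returns an array of length T - w + 1; jumps in this series flag
    windows whose dominant structure changed, i.e. candidate anomalies.
    """
    return np.array([np.linalg.svd(data[t:t + w], compute_uv=False)[0]
                     for t in range(data.shape[0] - w + 1)])
```

A univariate detector (threshold, CUSUM, etc.) is then applied to the returned series rather than to the raw multi-dimensional stream.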
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-24
We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.
Frolov, Vladimir; Backhaus, Scott N.; Chertkov, Michael
2014-01-14
We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l_{1} norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.
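The l1-norm reduction to a linear program can be illustrated with the standard variable-splitting trick. The sketch below uses a toy linearized flow model with an assumed sensitivity matrix `S` (flow change per unit inductance change); it is not the paper's network formulation or its cutting-plane acceleration:

```python
import numpy as np
from scipy.optimize import linprog

def plan_sc(S, f0, fmax):
    """Toy l1 placement LP: find inductance changes db minimizing ||db||_1
    subject to linearized thermal limits |f0 + S db| <= fmax.

    Split db = p - n with p, n >= 0, so ||db||_1 = sum(p) + sum(n) and the
    problem becomes a standard LP.
    """
    m, k = S.shape
    c = np.ones(2 * k)                       # minimize sum(p) + sum(n)
    A = np.vstack([np.hstack([S, -S]),       #  f0 + S(p - n) <= fmax
                   np.hstack([-S, S])])      # -(f0 + S(p - n)) <= fmax
    b = np.concatenate([fmax - f0, fmax + f0])
    res = linprog(c, A_ub=A, b_ub=b, bounds=[(0, None)] * (2 * k))
    p, n = res.x[:k], res.x[k:]
    return p - n
```

Because the objective is an l1 norm, the LP solution tends to be sparse, matching the paper's observation that SC devices land on only a few lines.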
Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms
2002-05-01
best pattern recognition results. With a small number of features in a data set an exact solution can be determined. However, the number of possible combinations increases exponentially with the number of features and an alternate means of finding a solution must be found. We developed and implemented a technique for finding solutions in data sets with both small and large numbers of features. The VERI interface tools were written using the Tcl/Tk GUI programming language, version 8.1. Although the Tcl/Tk packages are designed to run on multiple computer platforms, we have concentrated our efforts to develop a user interface for the ubiquitous DOS environment. The VERI algorithms are compiled, executable programs. The interfaces run the VERI algorithms in Leave-One-Out mode using the Euclidean metric.
Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms
Energy Science and Technology Software Center (OSTI)
2002-05-01
the best pattern recognition results. With a small number of features in a data set an exact solution can be determined. However, the number of possible combinations increases exponentially with the number of features and an alternate means of finding a solution must be found. We developed and implemented a technique for finding solutions in data sets with both small and large numbers of features. The VERI interface tools were written using the Tcl/Tk GUI programming language, version 8.1. Although the Tcl/Tk packages are designed to run on multiple computer platforms, we have concentrated our efforts to develop a user interface for the ubiquitous DOS environment. The VERI algorithms are compiled, executable programs. The interfaces run the VERI algorithms in Leave-One-Out mode using the Euclidean metric.
Automated DNA Base Pair Calling Algorithm
Energy Science and Technology Software Center (OSTI)
1999-07-07
The procedure solves the problem of calling the DNA base pair sequence from two-channel electropherogram separations in an automated fashion. The core of the program involves a peak-picking algorithm based upon first, second, and third derivative spectra for each electropherogram channel, signal levels as a function of time, peak spacing, base pair signal-to-noise sequence patterns, frequency vs. ratio of the two-channel histograms, and confidence levels generated during the run. The ratios of the two channels at peak centers can be used to accurately and reproducibly determine the base pair sequence. A further enhancement is a novel Gaussian deconvolution used to determine the peak heights used in generating the ratio.
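The derivative-based part of the peak picker can be sketched with first differences: a peak is where the slope changes sign from positive to non-positive. This is a much-simplified stand-in for the multi-criteria algorithm described above; the threshold parameter is illustrative:

```python
import numpy as np

def pick_peaks(signal, min_height=0.0):
    """Derivative-based peak picking: report interior indices where the
    first difference changes sign from + to - and the sample clears an
    (illustrative) height threshold."""
    d1 = np.diff(signal)
    return [i for i in range(1, len(signal) - 1)
            if d1[i - 1] > 0 and d1[i] <= 0 and signal[i] >= min_height]
```

In the full procedure, each picked peak center would then be scored by the two-channel intensity ratio to call the base.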
Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment
Energy Science and Technology Software Center (OSTI)
1997-08-05
An algorithm for performing optimization which is a derivative-free, grid-refinement approach to nonlinear optimization was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.
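A derivative-free grid-refinement search can be sketched in one dimension: evaluate the function on a coarse grid, keep the best few points, and refine around them. This is a generic illustration of the approach's flavor, not the OPTIMIZE/OPTIMIZE-M code; in particular it tracks a single minimum rather than all extremal regions:

```python
def grid_refine_min(f, lo, hi, levels=8, keep=3):
    """Minimize f on [lo, hi] by repeatedly evaluating an 11-point grid,
    keeping the `keep` best points, and refining around them.
    Uses only function values: no derivatives or gradients."""
    for _ in range(levels):
        xs = [lo + (hi - lo) * i / 10.0 for i in range(11)]
        best = sorted(xs, key=f)[:keep]
        lo, hi = min(best), max(best)
        pad = (hi - lo) * 0.5       # widen so the optimum cannot sit on an edge
        lo, hi = lo - pad, hi + pad
    return min(xs, key=f)
```

Each level shrinks the bracket geometrically, and because only function evaluations are used, the grid points at each level are trivially evaluable in parallel.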
Optimized Algorithm for Collision Probability Calculations in Cubic Geometry
Garcia, R.D.M.
2004-06-15
An optimized algorithm for implementing a recently developed method of computing collision probabilities (CPs) in three dimensions is reported in this work for the case of a homogeneous cube. Use is made of the geometrical regularity of the domain to rewrite, in a very compact way, the approximate formulas for calculating CPs in general three-dimensional geometry that were derived in a previous work by the author. The ensuing gain in computation time is found to be substantial: While the computation time associated with the general formulas increases as K{sup 2}, where K is the number of elements used in the calculation, that of the specific formulas increases only linearly with K. Accurate numerical results are given for several test cases, and an extension of the algorithm for computing the self-collision probability for a hexahedron is reported at the end of the work.
International Workshop on Linear Colliders 2010
None
2011-10-06
IWLC2010 International Workshop on Linear Colliders 2010. ECFA-CLIC-ILC joint meeting: Monday 18 October - Friday 22 October 2010. Venue: CERN and CICG (International Conference Centre Geneva, Switzerland). This year, the International Workshop on Linear Colliders organized by the European Committee for Future Accelerators (ECFA) will study the physics, detectors and accelerator complex of a linear collider covering both CLIC and ILC options. Contact: Workshop Secretariat. IWLC2010 is hosted by CERN.
Developing and Implementing the Data Mining Algorithms in RAVEN
Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian
2015-09-01
The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e., to recognize patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
International Linear Collider Technical Design Report - Volume 2: Physics
Office of Scientific and Technical Information (OSTI)
Baer, Howard; Barklow, Tim; Fujii, Keisuke; Gao, Yuanning; Hoang, Andre; Kanemura, Shinya; List, Jenny; Logan, Heather...
LED Replacements for Linear Fluorescent Lamps Webcast
In this June 20, 2011 webcast on LED products marketed as replacements for linear fluorescent lamps, Jason Tuenge of the Pacific Northwest National Laboratory (PNNL) discussed current Lighting...
Linear Thermite Charge - Energy Innovation Portal
U.S. Department of Energy (DOE) all webpages (Extended Search)
The Linear Thermite Charge (LTC) is designed to rapidly cut through concrete and steel ... Can cut both concrete and steel at one time, making rebar/concrete structural elements ...
Ultra-high vacuum photoelectron linear accelerator
Yu, David U.L.; Luo, Yan
2013-07-16
An rf linear accelerator for producing an electron beam. The outer wall of the rf cavity of said linear accelerator being perforated to allow gas inside said rf cavity to flow to a pressure chamber surrounding said rf cavity and having means of ultra high vacuum pumping of the cathode of said rf linear accelerator. Said rf linear accelerator is used to accelerate polarized or unpolarized electrons produced by a photocathode, or to accelerate thermally heated electrons produced by a thermionic cathode, or to accelerate rf heated field emission electrons produced by a field emission cathode.
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
The conventional semi-infinite solution for extracting blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD{sub B}) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD{sub B}. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo mouse model of stroke. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD{sub B} (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for the different tissue models. Although adding random noise to the DCS data resulted in αD{sub B} variations, the mean errors in extracting αD{sub B} were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD{sub B} using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Adaptive protection algorithm and system
Hedrick, Paul (Pittsburgh, PA); Toms, Helen L. (Irwin, PA); Miller, Roger M. (Mars, PA)
2009-04-28
An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
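The rank-then-set idea lends itself to a short sketch. The fragment below is a hypothetical illustration only (the record does not publish the actual ranking formula or set-point rule): breakers nearer the source receive higher rank and therefore higher trip set points, so the breaker closest to a fault trips first.

```python
def assign_trip_points(tree, base_trip, margin):
    """Rank breakers by distance from the source and derive trip set points.
    tree maps breaker -> upstream (parent) breaker, None for the source.
    All names and the set-point formula are illustrative assumptions."""
    def depth(b):
        return 0 if tree[b] is None else 1 + depth(tree[b])

    d = {b: depth(b) for b in tree}
    max_d = max(d.values())
    # Breakers closer to the source (smaller depth) get larger trip points,
    # so downstream breakers clear a fault before upstream ones.
    return {b: base_trip * (1 + margin * (max_d - di)) for b, di in d.items()}

# Hypothetical radial feeder: main -> feeder1 -> branch.
trips = assign_trip_points({"main": None, "feeder1": "main", "branch": "feeder1"},
                           base_trip=100.0, margin=0.5)
# trips["main"] > trips["feeder1"] > trips["branch"]
```

The monotone ordering of set points is the essential coordination property: any fault current large enough to trip an upstream breaker would already have tripped every breaker between it and the fault.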
Mesh Algorithms for PDE with Sieve I: Mesh Distribution
Knepley, Matthew G.; Karpeev, Dmitry A.
2009-01-01
We have developed a new programming framework, called Sieve, to support parallel numerical partial differential equation(s) (PDE) algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or arrows, the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete description of an algorithm to encode a mesh partition and then distribute a mesh, which is independent of the mesh dimension, element shape, or embedding. Moreover, data associated with the mesh can be similarly distributed with exactly the same algorithm. The use of a high level of abstraction within the Sieve leads to several benefits in terms of code reuse, simplicity, and extensibility. We discuss these benefits and compare our approach to other existing mesh libraries.
Parallel Algorithms for Graph Optimization using Tree Decompositions
Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S
2012-06-01
Although many $\cal{NP}$-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
Finite element analyses of a linear-accelerator electron gun
Iqbal, M. E-mail: muniqbal@ihep.ac.cn; Wasy, A.; Islam, G. U.; Zhou, Z.
2014-02-15
Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in a computer-aided three-dimensional interactive application for finite element analyses through the ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results from the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, in the BEPCII linear accelerator.
Swiler, Laura Painton; Eldred, Michael Scott
2009-09-01
This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
Program Evaluation: Program Life Cycle
In general, different types of evaluation are carried out over different parts of a program's life cycle (e.g., Creating a program, Program is underway, or Closing out or end of program)....
On Parallel Push-Relabel based Algorithms for Bipartite Maximum Matching
Langguth, Johannes; Azad, Md Ariful; Halappanavar, Mahantesh; Manne, Fredrik
2014-07-01
We study multithreaded push-relabel based algorithms for computing maximum cardinality matching in bipartite graphs. Matching is a fundamental combinatorial (graph) problem with applications in a wide variety of problems in science and engineering. We are motivated by its use in the context of sparse linear solvers for computing the maximum transversal of a matrix. We implement and test our algorithms on several multi-socket multicore systems and compare their performance to state-of-the-art augmenting path-based serial and parallel algorithms using a test set comprising a wide range of real-world instances. Building on several heuristics for enhancing performance, we demonstrate good scaling for the parallel push-relabel algorithm. We show that it is comparable to the best augmenting path-based algorithms for bipartite matching. To the best of our knowledge, this is the first extensive study of multithreaded push-relabel based algorithms. In addition to a direct impact on the applications using matching, the proposed algorithmic techniques can be extended to preflow-push based algorithms for computing maximum flow in graphs.
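For context, the augmenting path-based serial baseline that the parallel push-relabel codes are measured against can be sketched in a few lines (Kuhn's algorithm); the small instance at the bottom is hypothetical.

```python
def max_bipartite_matching(adj, n_right):
    """Serial augmenting-path maximum cardinality matching (Kuhn's algorithm).
    adj[u] lists the right-side neighbors of left vertex u."""
    match_r = [-1] * n_right          # match_r[v] = left vertex matched to v

    def try_augment(u, seen):
        # Depth-first search for an augmenting path starting at left vertex u.
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u    # (re)match edge u-v along the path
                    return True
        return False

    size = 0
    for u in range(len(adj)):
        if try_augment(u, [False] * n_right):
            size += 1
    return size, match_r

# Hypothetical 3x3 instance; a perfect matching exists.
size, match_r = max_bipartite_matching([[0, 1], [0], [1, 2]], 3)
# size == 3
```

Each search is O(edges), giving O(V*E) overall; push-relabel methods restructure this work into local operations that parallelize far better, which is the point of the study.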
Voltage regulation in linear induction accelerators
Parsons, William M.
1992-01-01
Improvement in voltage regulation in a Linear Induction Accelerator wherein a varistor, such as a metal oxide varistor, is placed in parallel with the beam accelerating cavity and the magnetic core. The non-linear properties of the varistor result in a more stable voltage across the beam accelerating cavity than with a conventional compensating resistance.
Voltage regulation in linear induction accelerators
Parsons, W.M.
1992-12-29
Improvement in voltage regulation in a linear induction accelerator wherein a varistor, such as a metal oxide varistor, is placed in parallel with the beam accelerating cavity and the magnetic core is disclosed. The non-linear properties of the varistor result in a more stable voltage across the beam accelerating cavity than with a conventional compensating resistance. 4 figs.
Practical application of equivalent linearization approaches to nonlinear piping systems
Park, Y.J.; Hofmayer, C.H.
1995-05-01
The use of mechanical energy absorbers as an alternative to conventional hydraulic and mechanical snubbers for piping supports has attracted wide interest among researchers and practitioners in the nuclear industry. The basic design concept of energy absorbers (EAs) is to dissipate the vibration energy of piping systems through nonlinear hysteretic action of the EAs under design seismic loads. Therefore, some type of nonlinear analysis needs to be performed in the seismic design of piping systems with EA supports. The equivalent linearization approach (ELA) can be a practical analysis tool for this purpose, particularly when the response spectrum approach (RSA) is also incorporated in the analysis formulations. In this paper, the following ELA/RSA methods are presented and compared with each other regarding their practicality and numerical accuracy: response spectrum approach using the square-root-of-sum-of-squares (SRSS) approximation (denoted RS in this paper); classical ELA based on modal combinations and linear random vibration theory (denoted CELA in this paper); and stochastic ELA based on direct solution of the response covariance matrix (denoted SELA in this paper). New algorithms to convert response spectra to equivalent power spectral density (PSD) functions are presented for both the CELA and SELA methods. The numerical accuracy of the three ELA methods is studied through a parametric error analysis. Finally, the practicality of the presented analysis is demonstrated in two application examples for piping systems with EA supports.
Time Variant Floating Mean Counting Algorithm
Energy Science and Technology Software Center (OSTI)
1999-06-03
This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company and a provisional patent has been filed on the algorithm. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.
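The patented WSRC algorithm itself is not described in this record. As a rough illustration of the general idea of a floating (sliding) mean count-rate estimate only, one might write:

```python
from collections import deque

def floating_mean_rate(event_times, window):
    """Count rate from a floating mean over a fixed sliding time window.
    Illustration of the generic concept only; the WSRC algorithm additionally
    varies the averaging window with time and is not reproduced here."""
    q = deque()
    rates = []
    for t in event_times:       # event_times must be non-decreasing
        q.append(t)
        while q and q[0] < t - window:
            q.popleft()         # drop events that fell out of the window
        rates.append(len(q) / window)
    return rates

# 10 events, one per second, averaged over a 5 s window.
r = floating_mean_rate([float(i) for i in range(10)], window=5.0)
```

A fixed window trades statistical noise against response time; making the window time-variant is one way to track rapidly changing count rates while keeping low-rate estimates smooth.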
Daylighting simulation: methods, algorithms, and resources
Carroll, William L.
1999-12-01
This document presents work conducted as part of Subtask C, "Daylighting Design Tools", Subgroup C2, "New Daylight Algorithms", of the IEA SHC Task 21 and the ECBCS Program Annex 29 "Daylight in Buildings". The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and components properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and are used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of
Optically isolated signal coupler with linear response
Kronberg, James W.
1994-01-01
An optocoupler for isolating electrical signals that translates an electrical input signal linearly to an electrical output signal. The optocoupler comprises a light emitter, a light receiver, and a light transmitting medium. The light emitter, preferably a blue, silicon carbide LED, is of the type that provides linear, electro-optical conversion of electrical signals within a narrow wavelength range. Correspondingly, the light receiver, which converts light signals to electrical signals and is preferably a cadmium sulfide photoconductor, is linearly responsive to light signals within substantially the same wavelength range as the blue LED.
LINEAR COLLIDER PHYSICS RESOURCE BOOK FOR SNOWMASS 2001.
Abe, T.; Dawson, S.; Heinemeyer, S.; Marciano, W.; Paige, F.; Turcot, A.S.; et al.
2001-05-03
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e{sup +}e{sup {minus}} linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e{sup +}e{sup {minus}} linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e{sup +}e{sup {minus}} linear collider; in any scenario that is now discussed, physics will benefit from the new information that e{sup +}e{sup {minus}} experiments can provide.
Directives, Delegations, and Requirements [Office of Management (MA)]
1997-08-21
This volume describes program administration that establishes and maintains effective organizational management and control of the emergency management program. Canceled by DOE G 151.1-3.
Status of the SLC (Stanford Linear Collider)
Coupal, D.P.
1989-07-01
This report presents a brief review of the status of the Stanford Linear Collider. Topics covered are: Beam luminosity, Detectors and backgrounds; and Future prospects. 3 refs., 8 figs., 1 tab. (LSP)
Constant-complexity stochastic simulation algorithm with optimal binning
Sanft, Kevin R.; Othmer, Hans G.
2015-08-21
At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
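The linear-in-M cost of the original direct method is easy to see in a sketch: every step rebuilds the full propensity list over all M reaction channels. The reversible isomerization example below is hypothetical.

```python
import random

def ssa_direct(x0, reactions, t_end, seed=1):
    """Gillespie direct method. Each step scans all M channels to build the
    propensity list, hence O(M) work per step (the linear scaling the
    abstract refers to). reactions = [(propensity_fn, update_fn), ...]."""
    rng = random.Random(seed)
    t, x = 0.0, list(x0)
    while t < t_end:
        props = [a(x) for a, _ in reactions]   # a_j(x) for every channel
        a0 = sum(props)
        if a0 == 0.0:
            break                              # no reaction can fire
        t += rng.expovariate(a0)               # exponential waiting time
        r, acc = rng.random() * a0, 0.0
        for (_, update), a in zip(reactions, props):
            acc += a
            if r <= acc:                       # channel j chosen w.p. a_j/a0
                update(x)
                break
    return x

# Hypothetical reversible isomerization A <-> B with unit rate constants.
def a_to_b(x): x[0] -= 1; x[1] += 1
def b_to_a(x): x[0] += 1; x[1] -= 1
reactions = [(lambda x: x[0], a_to_b), (lambda x: x[1], b_to_a)]
x = ssa_direct([100, 0], reactions, t_end=5.0)
# copy number is conserved: x[0] + x[1] == 100
```

The constant-complexity method in the abstract replaces the linear scan with a table of event-time bins, so the per-step cost no longer grows with the number of channels for weakly coupled networks.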
A fast contour descriptor algorithm for supernova imageclassification
Aragon, Cecilia R.; Aragon, David Bradburn
2006-07-16
We describe a fast contour descriptor algorithm and its application to a distributed supernova detection system (the Nearby Supernova Factory) that processes 600,000 candidate objects in 80 GB of image data per night. Our shape-detection algorithm reduced the number of false positives generated by the supernova search pipeline by 41% while producing no measurable impact on running time. Fourier descriptors are an established method of numerically describing the shapes of object contours, but transform-based techniques are ordinarily avoided in this type of application due to their computational cost. We devised a fast contour descriptor implementation for supernova candidates that meets the tight processing budget of the application. Using the lowest-order descriptors (F{sub 1} and F{sub -1}) and the total variance in the contour, we obtain one feature representing the eccentricity of the object and another denoting its irregularity. Because the number of Fourier terms to be calculated is fixed and small, the algorithm runs in linear time, rather than the O(n log n) time of an FFT. Constraints on object size allow further optimizations so that the total cost of producing the required contour descriptors is about 4n addition/subtraction operations, where n is the length of the contour.
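Because only F{sub 1} and F{sub -1} are needed, the descriptors can be computed by two direct O(n) sums instead of an FFT. The sketch below illustrates that idea; the exact feature definitions used by the pipeline may differ from the proxies shown here, which are assumptions.

```python
import cmath

def low_order_descriptors(contour):
    """Compute only F_{+1}, F_{-1}, and the total variance of a closed
    contour by direct summation: O(n), since the number of Fourier terms
    is fixed and small. contour: list of (x, y) boundary points."""
    z = [complex(x, y) for x, y in contour]
    n = len(z)
    mean = sum(z) / n
    z = [p - mean for p in z]                     # remove translation (F_0)
    f_pos = sum(p * cmath.exp(-2j * cmath.pi * k / n) for k, p in enumerate(z)) / n
    f_neg = sum(p * cmath.exp(+2j * cmath.pi * k / n) for k, p in enumerate(z)) / n
    total_var = sum(abs(p) ** 2 for p in z) / n   # total contour variance
    return f_pos, f_neg, total_var

def features(contour):
    """Hypothetical feature pair: an elongation proxy from |F_1|, |F_-1|
    and an irregularity proxy from the power outside F_{+-1}."""
    f1, fm1, var = low_order_descriptors(contour)
    a, b = abs(f1), abs(fm1)
    ecc = min(a, b) / max(a, b) if max(a, b) > 0 else 0.0
    irr = 1.0 - (a * a + b * b) / var if var > 0 else 0.0
    return ecc, irr

# A circle is the "regular" reference shape: both proxies are near zero.
import math
circle = [(math.cos(2 * math.pi * k / 64), math.sin(2 * math.pi * k / 64))
          for k in range(64)]
ecc, irr = features(circle)
```

With the term count fixed, the remaining cost is dominated by the contour traversal itself, which is how the implementation stays within the pipeline's processing budget.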
An efficient algorithm for incompressible N-phase flows
Dong, S.
2014-11-01
We present an efficient algorithm within the phase field framework for simulating the motion of a mixture of N (N≥2) immiscible incompressible fluids, with possibly very different physical properties such as densities, viscosities, and pairwise surface tensions. The algorithm employs a physical formulation for the N-phase system that honors the conservations of mass and momentum and the second law of thermodynamics. We present a method for uniquely determining the mixing energy density coefficients involved in the N-phase model based on the pairwise surface tensions among the N fluids. Our numerical algorithm has several attractive properties that make it computationally very efficient: (i) it has completely de-coupled the computations for different flow variables, and has also completely de-coupled the computations for the (N−1) phase field functions; (ii) the algorithm only requires the solution of linear algebraic systems after discretization, and no nonlinear algebraic solve is needed; (iii) for each flow variable the linear algebraic system involves only constant and time-independent coefficient matrices, which can be pre-computed during pre-processing, despite the variable density and variable viscosity of the N-phase mixture; (iv) within a time step the semi-discretized system involves only individual de-coupled Helmholtz-type (including Poisson) equations, despite the strongly-coupled phase-field system of fourth spatial order at the continuum level; (v) the algorithm is suitable for large density contrasts and large viscosity contrasts among the N fluids. Extensive numerical experiments have been presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts. In particular, we compare our simulations with the de Gennes theory, and demonstrate that our method produces physically accurate results for multiple fluid phases. We also demonstrate the significant and sometimes dramatic effects of the gravity
Visiting Faculty Program Program Description
U.S. Department of Energy (DOE) all webpages (Extended Search)
Visiting Faculty Program Program Description The Visiting Faculty Program seeks to increase the research competitiveness of faculty members and their students at institutions historically underrepresented in the research community in order to expand the workforce vital to Department of Energy mission areas. As part of the program, selected university/college faculty members collaborate with DOE laboratory research staff on a research project of mutual interest. Program Objective The program is
Kok, J.
1988-01-01
To the human programmer the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. But with a particular language it is also important whether the possibilities of one or more parallel architectures can efficiently be addressed by available language constructs. In this paper the possibilities are discussed of the high-level language Ada and in particular of its tasking concept as a descriptional tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.
Linac Alignment Algorithm: Analysis on 1-to-1 Steering
Sun, Yipeng; Adolphsen, Chris (SLAC)
2011-08-19
In a linear accelerator, it is important to achieve a good alignment between all of its components (such as quadrupoles, RF cavities, beam position monitors, etc.), in order to better preserve the beam quality during acceleration. After the survey of the main linac components, there are several beam-based alignment (BBA) techniques to be applied to further optimize the beam trajectory and calculate the corresponding steering magnet strengths. Among these techniques the simplest and most straightforward one is the one-to-one (1-to-1) steering technique, which steers the beam from quad center to center, and removes the betatron oscillation from quad focusing. For a future linear collider such as the International Linear Collider (ILC), the initial beam emittance is very small in the vertical plane (flat beam with {gamma}{epsilon}{sub y} = 20-40nm), which means the alignment requirement is very tight. In this note, we evaluate the emittance growth with the one-to-one correction algorithm employed, both analytically and numerically. Then the ILC main linac accelerator is taken as an example to compare the vertical emittance growth after 1-to-1 steering, both from analytical formulae and from multi-particle tracking simulation. It is demonstrated that the estimated emittance growth from the derived formulae agrees well with the results from numerical simulation, with and without acceleration, respectively.
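In a simplified model, 1-to-1 correction amounts to solving a triangular linear system: with one corrector per BPM, choose kicks theta such that R @ theta = -y, where R is the orbit response matrix (lower triangular, since a kick only moves the beam at downstream BPMs) and y the measured BPM readings. A minimal sketch with a hypothetical 3-BPM toy lattice:

```python
def one_to_one_steering(R, y):
    """Solve R @ theta = -y by forward substitution, zeroing every BPM
    reading. R: lower-triangular response matrix (list of rows); y: BPM
    readings. Toy model only; real BBA works with measured responses."""
    n = len(y)
    theta = [0.0] * n
    for i in range(n):
        s = sum(R[i][j] * theta[j] for j in range(i))   # upstream kicks
        theta[i] = (-y[i] - s) / R[i][i]                # zero BPM i
    return theta

# Hypothetical 3-BPM lattice with a pre-existing orbit error y.
R = [[1.0, 0.0, 0.0],
     [0.5, 1.0, 0.0],
     [0.2, 0.5, 1.0]]
y = [0.3, -0.1, 0.4]
theta = one_to_one_steering(R, y)   # R @ theta + y is zero at every BPM
```

Zeroing the BPM readings steers the beam through the (assumed) quad centers; the residual emittance growth studied in the note comes from BPM-to-quad offsets and dispersive errors that this simple correction cannot see.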
Computer programs for multilocus haplotyping of general pedigrees
Weeks, D.E.; O'Connell, J.R.; Sobel, E.
1995-06-01
We have recently developed and implemented three different computer algorithms for accurate haplotyping with large numbers of codominant markers. Each of these algorithms employs likelihood criteria that correctly incorporate all intermarker recombination fractions. The three programs, HAPLO, SIMCROSS, and SIMWALK, are now available for haplotyping general pedigrees. The HAPLO program will be distributed as part of the Programs for Pedigree Analysis package by Kenneth Lange. The SIMCROSS and SIMWALK programs are available by anonymous ftp from watson.hgen.pitt.edu. Each program is written in FORTRAN 77 and is distributed as source code. 15 refs.
Brau, James E
2013-04-22
The U.S. Linear Collider Detector R&D program, supported by the DOE and NSF umbrella grants to the University of Oregon, made significant advances on many critical aspects of the ILC detector program. Progress advanced on vertex detector sensor development, silicon and TPC tracking, calorimetry on candidate technologies, and muon detection, as well as on beamline measurements of luminosity, energy, and polarization.
Slattery, Stuart R.
2015-12-02
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
Dual-range linearized transimpedance amplifier system
Wessendorf, Kurt O.
2010-11-02
A transimpedance amplifier system is disclosed which simultaneously generates a low-gain output signal and a high-gain output signal from an input current signal using a single transimpedance amplifier having two different feedback loops with different amplification factors to generate two different output voltage signals. One of the feedback loops includes a resistor, and the other feedback loop includes another resistor in series with one or more diodes. The transimpedance amplifier system includes a signal linearizer to linearize one or both of the low- and high-gain output signals by scaling and adding the two output voltage signals from the transimpedance amplifier. The signal linearizer can be formed either as an analog device using one or two summing amplifiers, or alternately can be formed as a digital device using two analog-to-digital converters and a digital signal processor (e.g. a microprocessor or a computer).
Linear transformer driver for pulse generation
Kim, Alexander A; Mazarakis, Michael G; Sinebryukhov, Vadim A; Volkov, Sergey N; Kondratiev, Sergey S; Alexeenko, Vitaly M; Bayol, Frederic; Demol, Gauthier; Stygar, William A
2015-04-07
A linear transformer driver includes at least one ferrite ring positioned to accept a load. The linear transformer driver also includes a first power delivery module that includes a first charge storage device and a first switch. The first power delivery module sends a first energy in the form of a first pulse to the load. The linear transformer driver also includes a second power delivery module including a second charge storage device and a second switch. The second power delivery module sends a second energy in the form of a second pulse to the load. The second pulse has a frequency that is approximately three times the frequency of the first pulse. The at least one ferrite ring is positioned to force the first pulse and the second pulse to the load by temporarily isolating the first pulse and the second pulse from an electrical ground.
An implementation analysis of the linear discontinuous finite element method
Becker, T. L.
2013-07-01
This paper provides an implementation analysis of the linear discontinuous finite element method (LD-FEM) that spans the space of (l, x, y, z). A practical implementation of LD includes 1) selecting a computationally efficient algorithm to solve the 4 x 4 matrix system Ax = b that describes the angular flux in a mesh element, and 2) choosing how to store the data used to construct the matrix A and the vector b to either reduce memory consumption or increase computational speed. To analyze the first of these, three algorithms were selected to solve the 4 x 4 matrix equation: Cramer's rule, a streamlined implementation of Gaussian elimination, and LAPACK's Gaussian elimination subroutine dgesv. The results indicate that Cramer's rule and the streamlined Gaussian elimination algorithm perform nearly equivalently and outperform LAPACK's implementation of Gaussian elimination by a factor of 2. To analyze the second implementation detail, three formulations of the discretized LD-FEM equations were provided for implementation in a transport solver: 1) a low-memory formulation, which relies heavily on 'on-the-fly' calculations and less on the storage of pre-computed data, 2) a high-memory formulation, which pre-computes much of the data used to construct A and b, and 3) a reduced-memory formulation, which lies between the low- and high-memory formulations. These three formulations were assessed in the Jaguar transport solver based on relative memory footprint and computational speed for increasing mesh size and quadrature order. The results indicated that the memory savings of the low-memory formulation were not sufficient to warrant its implementation. The high-memory formulation resulted in a significant speed advantage over the reduced-memory option (10-50%), but also resulted in a proportional increase in memory consumption (5-45%) for increasing quadrature order and mesh count; therefore, the practitioner should weigh the system memory constraints against any
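The per-element 4 x 4 solve compared above can be sketched with Cramer's rule. This is only an illustrative reconstruction in Python/NumPy, not the paper's implementation; the tridiagonal test matrix is an arbitrary stand-in for a real angular-flux system.

```python
import numpy as np

def cramer_4x4(A, b):
    """Solve a 4x4 system Ax = b via Cramer's rule.

    Each component is det(A_i)/det(A), where A_i is A with column i
    replaced by b. For fixed-size 4x4 systems this avoids pivot
    searches, which is why it can compete with Gaussian elimination.
    """
    det_A = np.linalg.det(A)
    x = np.empty(4)
    for i in range(4):
        Ai = A.copy()
        Ai[:, i] = b          # replace column i by the right-hand side
        x[i] = np.linalg.det(Ai) / det_A
    return x

# Illustrative system (not from the paper)
A = np.array([[4.0, 1, 0, 0],
              [1, 4, 1, 0],
              [0, 1, 4, 1],
              [0, 0, 1, 4]])
b = np.array([1.0, 2, 3, 4])
x = cramer_4x4(A, b)
assert np.allclose(A @ x, b)  # agrees with the original system
```

In practice a hand-unrolled determinant expansion (rather than `np.linalg.det`) is what makes Cramer's rule fast for a fixed 4 x 4 size.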
Linear Transformation Method for Multinuclide Decay Calculation
Ding Yuan
2010-12-29
A linear transformation method for generic multinuclide decay calculations is presented together with its properties and implications. The method takes advantage of the linear form of the decay solution N(t) = F(t)N{sub 0}, where N(t) is a column vector that represents the numbers of atoms of the radioactive nuclides in the decay chain, N{sub 0} is the initial value vector of N(t), and F(t) is a lower triangular matrix whose time-dependent elements are independent of the initial values of the system.
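The linear form N(t) = F(t)N{sub 0} above can be made concrete for a two-member chain (parent decaying to daughter). This is a hedged sketch with classic Bateman-style matrix entries and illustrative decay constants, not code or data from the paper.

```python
import numpy as np

def decay_matrix(lam1, lam2, t):
    """Lower triangular F(t) for a chain N1 -> N2 (requires lam1 != lam2).

    The entries depend only on the decay constants and t, never on the
    initial inventory N0 -- the property the linear transformation
    method exploits.
    """
    f11 = np.exp(-lam1 * t)
    f22 = np.exp(-lam2 * t)
    f21 = lam1 / (lam2 - lam1) * (f11 - f22)   # Bateman off-diagonal term
    return np.array([[f11, 0.0],
                     [f21, f22]])

lam1, lam2, t = 0.1, 0.05, 3.0        # illustrative decay constants
N0 = np.array([1000.0, 0.0])          # start with pure parent
N = decay_matrix(lam1, lam2, t) @ N0  # N(t) = F(t) N0
assert np.isclose(decay_matrix(lam1, lam2, t)[0, 1], 0.0)       # lower triangular
assert np.allclose(decay_matrix(lam1, lam2, 0.0), np.eye(2))    # F(0) = I
```

Longer chains extend the same pattern: F(t) stays lower triangular, and column j holds the response of the chain to a unit inventory of nuclide j.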
Linear and angular retroreflecting interferometric alignment target
Maxey, L. Curtis
2001-01-01
The present invention provides a method and apparatus for measuring both the linear displacement and angular displacement of an object using a linear interferometer system and an optical target comprising a lens, a reflective surface and a retroreflector. The lens, reflecting surface and retroreflector are specifically aligned and fixed in optical connection with one another, creating a single optical target which moves as a unit that provides multi-axis displacement information for the object with which it is associated. This displacement information is useful in many applications including machine tool control systems and laser tracker systems, among others.
Linear Concentrator Solar Power Plant Illustration
This graphic illustrates linear concentrating solar power (CSP) collectors that capture the sun's energy with large mirrors that reflect and focus the sunlight onto a linear receiver tube. The receiver contains a fluid that is heated by the sunlight and then used to create superheated steam that spins a turbine that drives a generator to produce electricity. Alternatively, steam can be generated directly in the solar field, eliminating the need for costly heat exchangers. In a parabolic trough system, the receiver tube is positioned along the focal line of each parabola-shaped reflector.
Beamstrahlung spectra in next generation linear colliders
Barklow, T.; Chen, P.; Kozanecki, W.
1992-04-01
For the next generation of linear colliders, the energy loss due to beamstrahlung during the collision of the e{sup +}e{sup {minus}} beams is expected to substantially influence the effective center-of-mass energy distribution of the colliding particles. In this paper, we first derive analytical formulae for the electron and photon energy spectra under multiple beamstrahlung processes, and for the e{sup +}e{sup {minus}} and {gamma}{gamma} differential luminosities. We then apply our formulation to various classes of 500 GeV e{sup +}e{sup {minus}} linear collider designs currently under study.
DOE Publishes CALiPER Report on Cost-Effectiveness of Linear (T8) LED Lamps
The U.S. Department of Energy's CALiPER program has released Report 21.3, which is part of a series of investigations on linear LED lamps. Report 21.3 details a set of life-cycle cost simulations...
Student's algorithm solves real-world problem
U.S. Department of Energy (DOE) all webpages (Extended Search)
Supercomputing Challenge: student's algorithm solves real-world problem. Students learn how to use powerful computers to analyze, model, and solve real-world problems. April 3, 2012. Jordon Medlock of Albuquerque's Manzano High School won the 2012 Lab-sponsored Supercomputing Challenge by creating a computer algorithm that automates the process of
Hager, Robert; Yoon, E. S.; Ku, S.; D'Azevedo, E. F.; Worley, P. H.; Chang, C. S.
2016-04-04
Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. The non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. Moreover, the finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. As a result, the collision operator's good weak and strong scaling behavior is shown.
Java implementation of Class Association Rule algorithms
Energy Science and Technology Software Center (OSTI)
2007-08-30
Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. NETCAR is a novel algorithm developed by Makio Tamura. The algorithm is discussed in a paper, UCRL-JRNL-232466-DRAFT, and will be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix and a phenotype profile is represented by a binary vector. The present application of this software is in genome analysis; however, it could be applied more generally.
Generation of attributes for learning algorithms
Hu, Yuh-Jyh; Kibler, D.
1996-12-31
Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step before standard machine learning algorithms are applied. We present results that demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron, and backpropagation.
KLU2 Direct Linear Solver Package
Energy Science and Technology Software Center (OSTI)
2012-01-04
KLU2 is a direct sparse solver for solving unsymmetric linear systems. It is related to the existing KLU solver (in the Amesos package, and also available as a stand-alone package from the University of Florida), but provides template support for scalar and ordinal types. It uses a left-looking LU factorization method.
Physics Case for the International Linear Collider
Fujii, Keisuke; Grojean, Christophe; Peskin, Michael E.; Barklow, Tim; Gao, Yuanning; Kanemura, Shinya; Kim, Hyungdo; List, Jenny; Nojiri, Mihoko; Perelstein, Maxim; Poeschl, Roman; Reuter, Juergen; Simon, Frank; Tanabe, Tomohiko; Yu, Jaehoon; Wells, James D.; Murayama, Hitoshi; Yamamoto, Hitoshi; /Tohoku U.
2015-06-23
We summarize the physics case for the International Linear Collider (ILC). We review the key motivations for the ILC presented in the literature, updating the projected measurement uncertainties for the ILC experiments in accord with the expected schedule of operation of the accelerator and the results of the most recent simulation studies.
Notes on beam dynamics in linear accelerators
Gluckstern, R.L.
1980-09-01
A collection of notes, on various aspects of beam dynamics in linear accelerators, which were produced by the author during five years (1975 to 1980) of consultation for the LASL Accelerator Technology (AT) Division and Medium-Energy Physics (MP) Division is presented.
A microcomputer-controlled linear heater
Schuck, V.; Rahimi, S. )
1991-10-01
In this note the circuits and principles of operation of a relatively simple and inexpensive linear temperature ramp generator are described. The upper-temperature limit and the heating rate are controlled by an Apple II microcomputer. The temperature versus time is displayed on the screen and may be plotted by an x-y plotter.
Finite Element Interface to Linear Solvers
Energy Science and Technology Software Center (OSTI)
2005-03-18
Sparse systems of linear equations arise in many engineering applications, including finite elements, finite volumes, and others. The solution of linear systems is often the most computationally intensive portion of the application. Depending on the complexity of problems addressed by the application, there may be no single solver capable of solving all of the linear systems that arise. This motivates the desire to switch an application from one solver library to another, depending on the problem being solved. The interfaces provided by solver libraries differ greatly, making it difficult to switch an application code from one library to another. The amount of library-specific code in an application can be greatly reduced by having an abstraction layer between solver libraries and the application, putting a common "face" on various solver libraries. One such abstraction layer is the Finite Element Interface to Linear Solvers (FEI), which has seen significant use by finite element applications at Sandia National Laboratories and Lawrence Livermore National Laboratory.
Linear and non-linear forced response of a conical, ducted, laminar premixed flame
Karimi, Nader; Brear, Michael J.; Jin, Seong-Ho; Monty, Jason P. [Department of Mechanical Engineering, University of Melbourne, Parkville, 3010 Vic. (Australia)
2009-11-15
This paper presents an experimental study on the dynamics of a ducted, conical, laminar premixed flame subjected to acoustic excitation of varying amplitudes. The flame transfer function is measured over a range of forcing frequencies and equivalence ratios. In keeping with previous works, the measured flame transfer function is in good agreement with that predicted by linear kinematic theory at low amplitudes of acoustic velocity excitation. However, a systematic departure from linear behaviour is observed as the amplitude of the velocity forcing upstream of the flame increases. This non-linearity is mostly in the phase of the transfer function and manifests itself as a roughly constant phase at high forcing amplitude. Nonetheless, as predicted by non-linear kinematic arguments, the response always remains close to linear at low forcing frequencies, regardless of the forcing amplitude. The origin of this phase behaviour is then sought through optical data post-processing. (author)
SuperLU{_}DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems
Li, Xiaoye S.; Demmel, James W.
2002-03-27
In this paper, we present the main algorithmic features in the software package SuperLU{_}DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.
Phase and Radial Motion in Ion Linear Accelerators
Energy Science and Technology Software Center (OSTI)
2007-03-29
Parmila is an ion-linac particle-dynamics code. The name comes from the phrase "Phase and Radial Motion in Ion Linear Accelerators." The code generates DTL, CCDTL, and CCL accelerating cells and, using a "drift-kick" method, transforms the beam, represented by a collection of particles, through the linac. The code includes 2-D and 3-D space-charge calculations. Parmila uses data generated by the Poisson Superfish postprocessor SEC. This version of Parmila was written by Harunori Takeda and was supported through February 2006 by James H. Billen. Setup installs the executable programs Parmila.EXE, Lingraf.EXE, and ReadPMI.EXE in the LANL directory. The directory LANL\Examples\Parmila contains several subdirectories with sample files for Parmila.
An overview of SuperLU: Algorithms, implementation, and user interface
Li, Xiaoye S.
2003-09-30
We give an overview of the algorithms, design philosophy, and implementation techniques in the software SuperLU, for solving sparse unsymmetric linear systems. In particular, we highlight the differences between the sequential SuperLU (including its multithreaded extension) and parallel SuperLU_DIST. These include the numerical pivoting strategy, the ordering strategy for preserving sparsity, the order in which the updating tasks are performed, the numerical kernel, and the parallelization strategy. Because of the scalability concern, the parallel code is drastically different from the sequential one. We describe the user interfaces of the libraries, and illustrate how to use the libraries most efficiently depending on some matrix characteristics. Finally, we give some examples of how the solver has been used in large-scale scientific applications, and the performance.
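A minimal usage sketch of a sparse direct solve through SciPy's `splu`, which wraps the sequential SuperLU discussed above; the unsymmetric matrix and right-hand side are arbitrary illustrative values, not from the paper.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Small unsymmetric sparse system, the kind of input SuperLU targets.
A = sp.csc_matrix(np.array([[10.0, 0, 2, 0],
                            [3, 9, 0, 0],
                            [0, 7, 8, 7],
                            [3, 0, 0, 5]]))
b = np.array([1.0, 2, 3, 4])

lu = splu(A)        # sparse LU factorization (SuperLU under the hood)
x = lu.solve(b)     # triangular solves reuse the factorization
assert np.allclose(A @ x, b)
```

Factoring once and calling `solve` repeatedly is the usual pattern when many right-hand sides share one matrix, which is where direct solvers pay off.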
A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas
Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q
2007-04-18
A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.
Petascale algorithms for reactor hydrodynamics.
Fischer, P.; Lottes, J.; Pointer, W. D.; Siegel, A.
2008-01-01
We describe recent algorithmic developments that have enabled large eddy simulations of reactor flows on up to P = 65,000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. Petascale computing is expected to play a pivotal role in the design and analysis of next-generation nuclear reactors. Argonne's SHARP project is focused on advanced reactor simulation, with a current emphasis on modeling coupled neutronics and thermal-hydraulics (TH). The TH modeling comprises a hierarchy of computational fluid dynamics approaches ranging from detailed turbulence computations, using DNS (direct numerical simulation) and LES (large eddy simulation), to full core analysis based on RANS (Reynolds-averaged Navier-Stokes) and subchannel models. Our initial study is focused on LES of sodium-cooled fast reactor cores. The aim is to leverage petascale platforms at DOE's Leadership Computing Facilities (LCFs) to provide detailed information about heat transfer within the core and to provide baseline data for less expensive RANS and subchannel models.
Initial borehole acoustic televiewer data processing algorithms
Moore, T.K.
1988-06-01
With the development of a new digital televiewer, several algorithms have been developed in support of off-line data processing. This report describes the initial set of utilities developed to support data handling as well as data display. Functional descriptions, implementation details, and instructions for use of the seven algorithms are provided. 5 refs., 33 figs., 1 tab.
Close, E.; Fong, C; Lee, E.
1991-10-30
Although this report is called a program document, it is not simply a user's guide to running HILDA, nor is it a programmer's guide to maintaining and updating HILDA. It is a guide to HILDA as a program and as a model for designing and costing a heavy ion fusion (HIF) driver. HILDA represents the work and ideas of many people, as does the model upon which it is based. The project was initiated by Denis Keefe, the leader of the LBL HIFAR project. He suggested the name HILDA, which is an acronym for Heavy Ion Linac Driver Analysis. The conventions and style of development of the HILDA program are based on the original goals. It was desired to have a computer program that could estimate the cost and find an optimal design for heavy ion fusion induction linac drivers. This program should model near-term machines as well as full-scale drivers. The code objectives were: (1) a relatively detailed, but easily understood model; (2) modular, structured code to facilitate making changes in the model, the analysis reports, and the user interface; (3) documentation that defines and explains the system model, cost algorithm, program structure, and generated reports. With this tool a knowledgeable user would be able to examine an ensemble of drivers and find the driver that is minimum in cost, subject to stated constraints. This document contains a report section that describes how to use HILDA, some simple illustrative examples, and descriptions of the models used for the beam dynamics and component design. Associated with this document, as files on floppy disks, are the complete HILDA source code, much information that is needed to maintain and update HILDA, and some complete examples. These examples illustrate that the present version of HILDA can generate much useful information about the design of a HIF driver. They also serve as guides to what features would be useful to include in future updates. The HPD represents the current state of development of this project.
Non-Linear Seismic Soil Structure Interaction (SSI) Method for Developing Non-Linear Seismic SSI Analysis Techniques
Coleman, Justin, P.E.
2011-10-25
Linear optics measurements and corrections using an AC dipole in RHIC
Wang, G.; Bai, M.; Yang, L.
2010-05-23
We report recent experimental results on linear optics measurements and corrections using an AC dipole. In the RHIC 2009 run, the SVD correction algorithm was tested at injection energy, both for identifying artificial gradient errors and for correcting them using the trim quadrupoles. The measured phase beatings were reduced by 30% and 40%, respectively, in two dedicated experiments. In the RHIC 2010 run, the AC dipole was used to measure {beta}* and the chromatic {beta} function. For the 0.65 m {beta}* lattice, we observed a factor of 3 discrepancy between the model and the measured chromatic {beta} function in the Yellow ring.
Enhanced dielectric-wall linear accelerator
Sampayan, S.E.; Caporaso, G.J.; Kirbie, H.C.
1998-09-22
A dielectric-wall linear accelerator is enhanced by a high-voltage, fast rise-time switch that includes a pair of electrodes between which are laminated alternating layers of isolated conductors and insulators. A high voltage is placed between the electrodes sufficient to stress the voltage breakdown of the insulator on command. A light trigger, such as a laser, is focused along at least one line along the edge surface of the laminated alternating layers of isolated conductors and insulators extending between the electrodes. The laser is energized to initiate a surface breakdown by a fluence of photons, thus causing the electrical switch to close very promptly. Such insulators and lasers are incorporated in a dielectric wall linear accelerator with Blumlein modules, and phasing is controlled by adjusting the length of fiber optic cables that carry the laser light to the insulator surface. 6 figs.
Noise in phase-preserving linear amplifiers
Pandey, Shashank; Jiang, Zhang; Combes, Joshua; Caves, Carlton M.
2014-12-04
The purpose of a phase-preserving linear amplifier is to make a small signal larger, so that it can be perceived by instruments incapable of resolving the original signal, while sacrificing as little as possible in signal-to-noise. Quantum mechanics limits how well this can be done: the noise added by the amplifier, referred to the input, must be at least half a quantum at the operating frequency. This well-known quantum limit only constrains the second moments of the added noise. Here we provide the quantum constraints on the entire distribution of added noise: any phase-preserving linear amplifier is equivalent to a parametric amplifier with a physical state {sigma} for the ancillary mode; {sigma} determines the properties of the added noise.
Enhanced dielectric-wall linear accelerator
Sampayan, Stephen E.; Caporaso, George J.; Kirbie, Hugh C.
1998-01-01
A dielectric-wall linear accelerator is enhanced by a high-voltage, fast rise-time switch that includes a pair of electrodes between which are laminated alternating layers of isolated conductors and insulators. A high voltage is placed between the electrodes sufficient to stress the voltage breakdown of the insulator on command. A light trigger, such as a laser, is focused along at least one line along the edge surface of the laminated alternating layers of isolated conductors and insulators extending between the electrodes. The laser is energized to initiate a surface breakdown by a fluence of photons, thus causing the electrical switch to close very promptly. Such insulators and lasers are incorporated in a dielectric wall linear accelerator with Blumlein modules, and phasing is controlled by adjusting the length of fiber optic cables that carry the laser light to the insulator surface.
Jimenez, Edward Steven,
2013-09-01
The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
TRACC: Algorithm for Predicting and Tracking Barges on Inland Waterways
Energy Science and Technology Software Center (OSTI)
2010-04-23
The algorithm developed in this work is used to predict the location and estimate the traveling speed of a barge moving in an inland waterway network. Measurements obtained from GPS or other systems are corrupted with measurement noise and reported at large, irregular time intervals, creating uncertainty about the current location of the barge and reducing the effectiveness of emergency response activities in case of an accident or act of terrorism. Developing a prediction algorithm becomes a non-trivial problem because estimating the speed is challenging, owing to the complex interactions between the multiple systems involved in the process. This software uses a systems approach in modeling the motion dynamics of the barge and estimates the location and speed of the barge at the next, user-defined, time interval. In this work, to estimate the speed, a non-linear stochastic modeling technique was first developed that takes into account local variations and interactions existing in the system. The output speed is then used as an observation in a statistically optimal filtering technique, the Kalman filter, formulated in state space to minimize the numerous errors observed in the system. The combined system synergistically fuses the locally available information with the measurements obtained to accurately predict the location and traveling speed of the barge.
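The filtering step described above can be sketched as a one-dimensional constant-velocity Kalman filter over irregularly spaced fixes. This is a generic textbook sketch, not the TRACC software; the state model, noise levels `q` and `r`, and the synthetic track are all illustrative assumptions.

```python
import numpy as np

def kalman_track(times, z, q=0.01, r=25.0):
    """Track position and speed from noisy fixes at irregular intervals.

    State is [position, speed]; `q` and `r` are assumed process and
    measurement noise levels, chosen here only for illustration.
    """
    x = np.array([z[0], 0.0])            # initial state from the first fix
    P = np.diag([r, 1.0])                # initial state covariance
    H = np.array([[1.0, 0.0]])           # we observe position only
    for k in range(1, len(z)):
        dt = times[k] - times[k - 1]     # irregular reporting interval
        F = np.array([[1.0, dt], [0.0, 1.0]])
        Q = q * np.array([[dt**3 / 3, dt**2 / 2], [dt**2 / 2, dt]])
        x = F @ x                        # predict state forward
        P = F @ P @ F.T + Q
        y = z[k] - H @ x                 # innovation
        S = H @ P @ H.T + r
        K = (P @ H.T) / S                # Kalman gain
        x = x + (K * y).ravel()          # update state
        P = (np.eye(2) - K @ H) @ P
    return x

# Synthetic barge moving at 2 units/s, fixes with sigma = 5 noise
times = np.array([0.0, 60, 130, 180, 260])
true_speed = 2.0
rng = np.random.default_rng(0)
z = true_speed * times + rng.normal(0, 5, size=times.size)
x = kalman_track(times, z)
assert abs(x[1] - true_speed) < 1.0      # filtered speed near the truth
```

The irregular `dt` enters both the transition matrix and the process noise, which is what lets the filter cope with the large, uneven reporting gaps the abstract mentions.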
Radio frequency quadrupole resonator for linear accelerator
Moretti, Alfred
1985-01-01
An RFQ resonator for a linear accelerator having a reduced level of interfering modes and producing a quadrupole mode for focusing, bunching and accelerating beams of heavy charged particles, with the construction being characterized by four elongated resonating rods within a cylinder with the rods being alternately shorted and open electrically to the shell at common ends of the rods to provide an LC parallel resonant circuit when activated by a magnetic field transverse to the longitudinal axis.
Communications circuit including a linear quadratic estimator
Ferguson, Dennis D.
2015-07-07
A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements of a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
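The patent record does not give the estimator's equations; for a static quantity, weighting measurements by their uncertainties as described reduces to the classic inverse-variance weighted average, sketched below (function name is illustrative).

```python
def inverse_variance_average(measurements, variances):
    """Minimum-variance linear combination of independent measurements:
    each measurement is weighted by the inverse of its variance, so
    less certain measurements contribute less to the average."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    return sum(w * m for w, m in zip(weights, measurements)) / total
```

With equal variances this is the ordinary mean; a measurement nine times noisier receives one ninth of the weight.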
High gradient accelerators for linear light sources
Barletta, W.A.
1988-09-26
Ultra-high gradient radio frequency linacs powered by relativistic klystrons appear to be able to provide compact sources of radiation at XUV and soft x-ray wavelengths with a duration of 1 picosecond or less. This paper provides a tutorial review of the physics applicable to scaling the present experience of the accelerator community to the regime applicable to compact linear light sources. 22 refs., 11 figs., 21 tabs.
The Next Linear Collider: NLC2001
D. Burke et al.
2002-01-14
Recent studies in elementary particle physics have made the need for an e{sup +}e{sup -} linear collider able to reach energies of 500 GeV and above with high luminosity more compelling than ever [1]. Observations and measurements completed in the last five years at the SLC (SLAC), LEP (CERN), and the Tevatron (FNAL) can be explained only by the existence of at least one particle or interaction that has not yet been directly observed in experiment. The Higgs boson of the Standard Model could be that particle. The data point strongly to a mass for the Higgs boson that is just beyond the reach of existing colliders. This brings great urgency and excitement to the potential for discovery at the upgraded Tevatron early in this decade, and almost assures that later experiments at the LHC will find new physics. But the next generation of experiments to be mounted by the world-wide particle physics community must not only find this new physics, they must find out what it is. These experiments must also define the next important threshold in energy. The need is to understand physics at the TeV energy scale as well as the physics at the 100-GeV energy scale is now understood. This will require both the LHC and a companion linear electron-positron collider. A first Zeroth-Order Design Report (ZDR) [2] for a second-generation electron-positron linear collider, the Next Linear Collider (NLC), was published five years ago. The NLC design is based on a high-frequency room-temperature rf accelerator. Its goal is exploration of elementary particle physics at the TeV center-of-mass energy, while learning how to design and build colliders at still higher energies. Many advances in accelerator technologies and improvements in the design of the NLC have been made since 1996. This Report is a brief update of the ZDR.
Towards a Future Linear Collider and The Linear Collider Studies at CERN
None
2011-10-06
During the week of 18-22 October, more than 400 physicists will meet at CERN and in the CICG (International Conference Centre Geneva) to review the global progress towards a future linear collider. The 2010 International Workshop on Linear Colliders will study the physics, detectors and accelerator complex of a linear collider, covering both the CLIC and ILC options. Among the topics presented and discussed will be the progress towards the CLIC Conceptual Design Report in 2011, the ILC Technical Design Report in 2012, physics and detector studies linked to these reports, and an increasing number of common working-group activities. The seminar will give an overview of these topics and also of CERN's linear collider studies, focusing on current activities and initial plans for the period 2011-16. N.B.: The Council Chamber is also reserved for this colloquium, with a live transmission from the Main Auditorium.
Directives, Delegations, and Requirements [Office of Management (MA)]
1992-09-04
To establish the policies, procedures, and specific responsibilities for the Department of Energy (DOE) Counterintelligence (CI) Program. This directive does not cancel any other directive.
Directives, Delegations, and Requirements [Office of Management (MA)]
1997-05-21
This chapter addresses plans for the acquisition and installation of operating environment hardware and software and design of a training program.
Directives, Delegations, and Requirements [Office of Management (MA)]
2004-12-10
The Order establishes Counterintelligence Program requirements and responsibilities for the Department of Energy, including the National Nuclear Security Administration. Supersedes DOE 5670.3.
Performance Models for the Spike Banded Linear System Solver
Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; Grama, Ananth
2011-01-01
With the availability of large-scale parallel platforms comprised of tens of thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing the performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing the performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners, compared to the state-of-the-art ILU family of preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver; (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver; (iii) we show excellent prediction capabilities of our model, based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. All of our results are validated
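The Spike algorithm itself is not reproduced in the record; as a minimal sketch of the banded-solve kernel it builds on, here is the Thomas algorithm for the simplest banded case (tridiagonal), which runs in O(n). In Spike-style solvers, the band is partitioned across processors and each partition is handled by a sequential kernel of this kind, with a small reduced system coupling the partitions. Function name and conventions are illustrative.

```python
def solve_tridiagonal(a, b, c, d):
    """Thomas algorithm: O(n) solve of a tridiagonal system.
    a = sub-diagonal (len n-1), b = diagonal (len n),
    c = super-diagonal (len n-1), d = right-hand side (len n).
    Assumes the system needs no pivoting (e.g. diagonally dominant)."""
    n = len(b)
    cp, dp = [0.0] * n, [0.0] * n
    cp[0], dp[0] = c[0] / b[0], d[0] / b[0]
    for i in range(1, n):                      # forward elimination
        m = b[i] - a[i - 1] * cp[i - 1]
        cp[i] = c[i] / m if i < n - 1 else 0.0
        dp[i] = (d[i] - a[i - 1] * dp[i - 1]) / m
    x = [0.0] * n
    x[-1] = dp[-1]
    for i in range(n - 2, -1, -1):             # back substitution
        x[i] = dp[i] - cp[i] * x[i + 1]
    return x
```

The sequential data dependence of the two sweeps is exactly what Spike-type reformulations break to expose parallelism.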
Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
Student Internship Programs Program Description
U.S. Department of Energy (DOE) all webpages (Extended Search)
The objective of the Laboratory's student internship programs is to provide students with opportunities for meaningful hands-on experience supporting educational progress in their selected scientific or professional fields. The most significant impact of these internship experiences is observed in the intellectual growth experienced by the participants. Student interns are able to appreciate the practical value of their education efforts in their
The design of a parallel adaptive paving all-quadrilateral meshing algorithm
Tautges, T.J.; Lober, R.R.; Vaughan, C.
1995-08-01
Adaptive finite element analysis demands a great deal of computational resources, and as such is most appropriately solved in a massively parallel computer environment. This analysis will require other parallel algorithms before it can fully utilize MP computers, one of which is parallel adaptive meshing. A version of the paving algorithm is being designed which operates in parallel but which also retains the robustness and other desirable features present in the serial algorithm. Adaptive paving in a production mode is demonstrated using a Babuska-Rheinboldt error estimator on a classic linearly elastic plate problem. The design of the parallel paving algorithm is described, and is based on the decomposition of a surface into "virtual" surfaces. The topology of the virtual surface boundaries is defined using mesh entities (mesh nodes and edges) so as to allow movement of these boundaries with smoothing and other operations. This arrangement allows the use of the standard paving algorithm on subdomain interiors, after the negotiation of the boundary mesh.
Program Description | Robotics Internship Program
U.S. Department of Energy (DOE) all webpages (Extended Search)
March 4, 2016. Apply Now for the Robotics Internship About the Internship Program Description Start of Appointment Renewal of Appointment End of Appointment Stipend Information...
Microfabricated linear Paul-Straubel ion trap
Mangan, Michael A.; Blain, Matthew G.; Tigges, Chris P.; Linker, Kevin L.
2011-04-19
An array of microfabricated linear Paul-Straubel ion traps can be used for mass spectrometric applications. Each ion trap comprises two parallel inner RF electrodes and two parallel outer DC control electrodes symmetric about a central trap axis and suspended over an opening in a substrate. Neighboring ion traps in the array can share a common outer DC control electrode. The ions are confined transversely by an RF quadrupole electric-field potential well on the ion trap axis. The array can trap a wide variety of ions.
Micromechanism linear actuator with capillary force sealing
Sniegowski, Jeffry J.
1997-01-01
A class of micromachine linear actuators whose function is based on gas driven pistons in which capillary forces are used to seal the gas behind the piston. The capillary forces also increase the amount of force transmitted from the gas pressure to the piston. In a major subclass of such devices, the gas bubble is produced by thermal vaporization of a working fluid. Because of their dependence on capillary forces for sealing, such devices are only practical on the sub-mm size scale, but in that regime they produce very large force times distance (total work) values.
Rf power sources for linear colliders
Allen, M.A.; Callin, R.S.; Caryotakis, G.; Deruyter, H.; Eppley, K.R.; Fant, K.S.; Farkas, Z.D.; Fowkes, W.R.; Hoag, H.A.; Feinstein, J.; Ko, K.; Koontz, R.F.; Kroll, N.M.; Lavine, T.L.; Lee, T.G.; Loew, G.A.; Miller, R.H.; Nelson, E.M.; Ruth, R.D.; Vlieks, A.E.; Wang, J.W.; Wilson, P.B. ); Boyd, J.K.; Houk, T.; Ryne, R.D.; Westenskow, G.A.; Yu, S.S. (Lawrence Live
1990-06-01
The next generation of linear colliders requires peak power sources of over 200 MW per meter at frequencies above 10 GHz at pulse widths of less than 100 nsec. Several power sources are under active development, including a conventional klystron with rf pulse compression, a relativistic klystron (RK), and a crossed-field amplifier. Power from one of these has energized a 0.5-meter two-section High Gradient Accelerator (HGA) and accelerated a beam at over 80 MeV per meter. Results of tests with these experimental devices are presented here.
Genetic algorithms at UC Davis/LLNL
Vemuri, V.R.
1993-12-31
A tutorial introduction to genetic algorithms is given. This brief tutorial should serve the purpose of introducing the subject to the novice. The tutorial is followed by a brief commentary on the term project reports that follow.
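The tutorial itself is not included in the record; in the spirit of an introduction for the novice, here is a minimal generational genetic algorithm over bit strings (tournament selection, one-point crossover, bit-flip mutation), applied to the standard "one-max" toy objective. All parameter values are illustrative defaults, not drawn from the tutorial.

```python
import random

def genetic_algorithm(fitness, n_bits=16, pop_size=30, n_gen=60,
                      p_cross=0.9, p_mut=0.02, seed=1):
    """Minimal generational GA over fixed-length bit strings."""
    rng = random.Random(seed)
    pop = [[rng.randint(0, 1) for _ in range(n_bits)] for _ in range(pop_size)]
    best = max(pop, key=fitness)
    for _ in range(n_gen):
        nxt = []
        while len(nxt) < pop_size:
            # Tournament selection: best of 3 random individuals, twice.
            p1 = max(rng.sample(pop, 3), key=fitness)
            p2 = max(rng.sample(pop, 3), key=fitness)
            c1, c2 = p1[:], p2[:]
            if rng.random() < p_cross:          # one-point crossover
                cut = rng.randrange(1, n_bits)
                c1, c2 = p1[:cut] + p2[cut:], p2[:cut] + p1[cut:]
            for child in (c1, c2):              # bit-flip mutation
                for i in range(n_bits):
                    if rng.random() < p_mut:
                        child[i] ^= 1
                nxt.append(child)
        pop = nxt[:pop_size]
        best = max(pop + [best], key=fitness)   # simple elitism
    return best

# Toy objective ("one-max"): maximize the number of 1-bits.
champion = genetic_algorithm(sum)
```

Swapping in a problem-specific fitness function and encoding is all that separates this skeleton from a practical application.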
Advanced Imaging Algorithms for Radiation Imaging Systems
Marleau, Peter
2015-10-01
The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step toward achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
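The MLEM algorithm referenced above has a standard multiplicative update; the following is a generic pure-Python sketch of it for a small system matrix, not the project's implementation (the imaging geometry, matrix, and names here are illustrative).

```python
def mlem(A, y, n_iter=50):
    """Maximum-likelihood expectation-maximization (MLEM) reconstruction
    for system matrix A (detectors x pixels) and measured counts y.
    Iterates the standard update  x <- x * A^T(y / Ax) / A^T 1."""
    n_det, n_pix = len(A), len(A[0])
    # Sensitivity image A^T 1 (normalization denominator).
    sens = [sum(A[i][j] for i in range(n_det)) for j in range(n_pix)]
    x = [1.0] * n_pix                   # uniform nonnegative start
    for _ in range(n_iter):
        proj = [sum(A[i][j] * x[j] for j in range(n_pix)) for i in range(n_det)]
        ratio = [y[i] / proj[i] if proj[i] > 0 else 0.0 for i in range(n_det)]
        back = [sum(A[i][j] * ratio[i] for i in range(n_det)) for j in range(n_pix)]
        x = [x[j] * back[j] / sens[j] for j in range(n_pix)]
    return x
```

The cost per iteration is dominated by the forward and back projections, which is why the abstract emphasizes the need for large increases in calculation speed.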
Advanced CHP Control Algorithms: Scope Specification
Katipamula, Srinivas; Brambley, Michael R.
2006-04-28
The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.
Drainage Algorithm for Geospatial Knowledge
Energy Science and Technology Software Center (OSTI)
2006-08-15
The Pacific Northwest National Laboratory (PNNL) has developed a prototype stream extraction algorithm that semi-automatically extracts and characterizes streams using a variety of multisensor imagery and digital terrain elevation data (DTED). The system is currently optimized for three types of single-band imagery: radar, visible, and thermal. Method of Solution: DRAGON (1) classifies pixels into clumps of water objects based on the classification of water pixels by spectral signatures and neighborhood relationships, (2) uses the morphology operations (erosion and dilation) to separate out large lakes (or embayments), isolated lakes, ponds, wide rivers, and narrow rivers, and (3) translates the river objects into vector objects. In detail, the process can be broken down into the following steps. A. Water pixels are initially identified based on the extent range and slope values (if an optional DEM file is available). B. Erode to the distance that defines a large water body and then dilate back. The resulting mask can be used to identify large lake and embayment objects, which are then removed from the image. Since this operation can be time consuming, it is only performed if a simple test (i.e., a large box can be found somewhere in the image that contains only water pixels) indicates a large water body is present. C. All water pixels are "clumped" (in Imagine terminology, clumping is when pixels of a common classification that touch are connected) and clumps that do not contain pure water pixels (e.g., dark cloud shadows) are removed. D. The resulting true water pixels are clumped, and water objects that are too small (e.g., ponds) or isolated lakes (i.e., isolated objects with a small compactness ratio) are removed. Note that at this point lakes have been identified as a byproduct of the filtering process and can be output as vector layers if needed. E. At this point only river pixels
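The erosion/dilation step (B above) can be illustrated with minimal binary morphology on a grid; this is a generic sketch, not DRAGON's code, and the 3x3 structuring element is an assumption.

```python
def erode(grid):
    """Binary erosion, 3x3 structuring element: a cell survives only
    if it and all eight neighbors are set (border cells are cleared)."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(1, rows - 1):
        for c in range(1, cols - 1):
            if all(grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)):
                out[r][c] = 1
    return out

def dilate(grid):
    """Binary dilation: a cell is set if it or any 8-neighbor is set."""
    rows, cols = len(grid), len(grid[0])
    out = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if any(0 <= r + dr < rows and 0 <= c + dc < cols
                   and grid[r + dr][c + dc]
                   for dr in (-1, 0, 1) for dc in (-1, 0, 1)):
                out[r][c] = 1
    return out
```

Eroding n times then dilating n times (a morphological "opening") removes water features narrower than about 2n+1 pixels while restoring wide ones, which is how narrow rivers are separated from large lakes and embayments.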
Limits on linearity of missile allocation optimization
Canavan, G.H.
1997-12-01
Optimizations of missile allocation based on linearized exchange equations produce accurate allocations, but the limits of validity of the linearization are not known. These limits are explored in the context of the upload of weapons by one side to initially small, equal forces of vulnerable and survivable weapons. The analysis compares analytic and numerical optimizations and stability indices based on aggregated interactions of the two missile forces, the first and second strikes they could deliver, and the resulting costs. This note discusses the costs and stability indices induced by unilateral uploading of weapons to an initially symmetrical low force configuration. These limits are quantified for forces with a few hundred missiles by comparing analytic and numerical optimizations of first strike costs. For forces of 100 vulnerable and 100 survivable missiles on each side, the analytic optimization agrees closely with the numerical solution. For 200 vulnerable and 200 survivable missiles on each side, the analytic optimization agrees with the indices to within about 10%, but disagrees with the allocation of the side with more weapons by about 50%. The disagreement comes from the interaction of the possession of more weapons with the shift of allocation from missiles to value that they induce.
Terahertz-driven linear electron acceleration
Nanni, Emilio A.; Huang, Wenqian R.; Hong, Kyung-Han; Ravi, Koustuban; Fallahi, Arya; Moriena, Gustavo; Dwayne Miller, R. J.; Kärtner, Franz X.
2015-10-06
The cost, size and availability of electron accelerators are dominated by the achievable accelerating gradient. Conventional high-brightness radio-frequency accelerating structures operate with 30–50 MeVm^{-1 }gradients. Electron accelerators driven with optical or infrared sources have demonstrated accelerating gradients orders of magnitude above that achievable with conventional radio-frequency structures. However, laser-driven wakefield accelerators require intense femtosecond sources and direct laser-driven accelerators suffer from low bunch charge, sub-micron tolerances and sub-femtosecond timing requirements due to the short wavelength of operation. Here we demonstrate linear acceleration of electrons with keV energy gain using optically generated terahertz pulses. Terahertz-driven accelerating structures enable high-gradient electron/proton accelerators with simple accelerating structures, high repetition rates and significant charge per bunch. As a result, these ultra-compact terahertz accelerators with extremely short electron bunches hold great potential to have a transformative impact for free electron lasers, linear colliders, ultrafast electron diffraction, X-ray science and medical therapy with X-rays and electron beams.
Linear dimensions and volumes of human lungs
Hickman, David P.
2012-03-30
Total Lung Capacity is defined as “the inspiratory capacity plus the functional residual capacity; the volume of air contained in the lungs at the end of a maximal inspiration; also equals vital capacity plus residual volume” (from MediLexicon.com). Within the Results and Discussion section of their April 2012 Health Physics paper, Kramer et al. briefly noted that the lungs of their experimental subjects were “not fully inflated.” By definition, and by failure to obtain maximal inspiration, Kramer et al. did not measure Total Lung Capacity (TLC). The TLC equation generated from this work will tend to underestimate TLC and does not improve or update the total lung capacity data provided by the ICRP and others. Likewise, the five linear measurements performed by Kramer et al. are only representative of the conditions of the measurement (i.e., not at-rest volume, but not fully inflated either). While significant work was performed and the data are interesting, the data do not represent a maximal situation, a minimal situation, or an at-rest situation. Moreover, while interesting, the linear data generated by this study are limited by the conditions of the experiment and may not be fully comparable with other lung or inspiratory parameters, measures, or physical dimensions.
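The two equivalent definitions quoted above can be written out explicitly; the numbers below are illustrative textbook-scale values in liters, not data from the study.

```python
# Two equivalent definitions of total lung capacity (TLC):
#   TLC = IC + FRC   (inspiratory capacity + functional residual capacity)
#   TLC = VC + RV    (vital capacity + residual volume)
IC, FRC = 3.5, 2.5      # assumed example values, not measured data
RV = 1.2                # assumed residual volume
VC = IC + FRC - RV      # vital capacity implied by equating the two forms
TLC_1 = IC + FRC
TLC_2 = VC + RV
```

The identity VC = IC + FRC - RV makes clear why a measurement taken short of maximal inspiration systematically underestimates TLC: the shortfall enters IC directly.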
General Transient Fluid Flow Algorithm
Energy Science and Technology Software Center (OSTI)
1992-03-12
SALE2D calculates two-dimensional fluid flows at all speeds, from the incompressible limit to highly supersonic. An implicit treatment of the pressure calculation similar to that in the Implicit Continuous-fluid Eulerian (ICE) technique provides this flow speed flexibility. In addition, the computing mesh may move with the fluid in a typical Lagrangian fashion, be held fixed in an Eulerian manner, or move in some arbitrarily specified way to provide a continuous rezoning capability. This latitude results from use of an Arbitrary Lagrangian-Eulerian (ALE) treatment of the mesh. The partial differential equations solved are the Navier-Stokes equations and the mass and internal energy equations. The fluid pressure is determined from an equation of state and supplemented with an artificial viscous pressure for the computation of shock waves. The computing mesh consists of a two-dimensional network of quadrilateral cells for either cylindrical or Cartesian coordinates, and a variety of user-selectable boundary conditions are provided in the program.
A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction
Kumar, B.; Huang, C.-H.; Sadayappan, P.; Johnson, R. W.
1995-01-01
In this article, we present a program generation strategy for Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier transform and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated to high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7^n) for multiplying 2^n × 2^n matrices. We present a modified formulation in which the working storage requirement is reduced to O(4^n). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
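The tensor-product formulation itself is not reproduced here; for reference, this is the underlying recursive Strassen scheme the paper reformulates, in a straightforward pure-Python sketch for 2^n × 2^n matrices (the article's contribution is precisely to avoid this recursion and its working storage).

```python
def strassen(A, B):
    """Strassen's algorithm: multiply two 2^n x 2^n matrices using
    seven recursive half-size products instead of eight."""
    n = len(A)
    if n == 1:
        return [[A[0][0] * B[0][0]]]
    h = n // 2
    def quad(M):  # split into four h x h blocks
        return ([r[:h] for r in M[:h]], [r[h:] for r in M[:h]],
                [r[:h] for r in M[h:]], [r[h:] for r in M[h:]])
    def add(X, Y):
        return [[x + y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    def sub(X, Y):
        return [[x - y for x, y in zip(rx, ry)] for rx, ry in zip(X, Y)]
    a, b, c, d = quad(A)        # A11, A12, A21, A22
    e, f, g, hh = quad(B)       # B11, B12, B21, B22
    m1 = strassen(add(a, d), add(e, hh))
    m2 = strassen(add(c, d), e)
    m3 = strassen(a, sub(f, hh))
    m4 = strassen(d, sub(g, e))
    m5 = strassen(add(a, b), hh)
    m6 = strassen(sub(c, a), add(e, f))
    m7 = strassen(sub(b, d), add(g, hh))
    # C11 = m1+m4-m5+m7, C12 = m3+m5, C21 = m2+m4, C22 = m1-m2+m3+m6
    top = [l + r for l, r in zip(add(sub(add(m1, m4), m5), m7), add(m3, m5))]
    bot = [l + r for l, r in zip(add(m2, m4), add(sub(add(m1, m3), m2), m6))]
    return top + bot
```

Seven products per level gives the O(n^{log2 7}) ≈ O(n^{2.81}) operation count; the naive recursion keeps all seven half-size products live, which is the storage behavior the paper's nonrecursive formulation improves.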
Energy Science and Technology Software Center (OSTI)
1999-02-18
The program is suitable for many applications in applied mathematics, experimental physics, signal-analysis systems, and some engineering applications, e.g., spectrum deconvolution, signal analysis, and system-property analysis.
The computational physics program of the National MFE Computer Center
Mirin, A.A.
1988-01-01
The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The computational physics group is involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. A second major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to compact toroids. A third is the investigation of kinetic instabilities using a 3-D particle code. This work is often coupled with the task of numerically generating equilibria that model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence are being examined. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers.
A flexible uncertainty quantification method for linearly coupled multi-physics systems
Chen, Xiao; Ng, Brenda; Sun, Yunwei; Tong, Charles
2013-09-01
Highlights: We propose a modular hybrid UQ methodology suitable for independent development of module-based multi-physics simulations. Our algorithmic framework allows each module to have its own UQ method (either intrusive or non-intrusive). Information from each module is combined systematically to propagate global uncertainty. The proposed approach allows easy swapping of new methods for any module without the need to address incompatibilities. We demonstrate the framework on a practical application involving a multi-species reactive transport model. -- Abstract: This paper presents a novel approach to building an integrated uncertainty quantification (UQ) methodology suitable for the modern-day component-based approach to multi-physics simulation development. Our hybrid UQ methodology supports independent development of the most suitable UQ method, intrusive or non-intrusive, for each physics module by providing an algorithmic framework to couple these stochastic modules for propagating global uncertainties. We address algorithmic and computational issues associated with the construction of this hybrid framework. We demonstrate the utility of such a framework on a practical application involving a linearly coupled multi-species reactive transport model.
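A minimal sketch of the non-intrusive side of such a coupling. The two "physics modules" below are hypothetical stand-ins (not the paper's reactive transport model): each is treated as a black box, and Monte Carlo samples of an uncertain input are pushed through the coupled chain to estimate the global output uncertainty.

```python
import random
import statistics

# Hypothetical physics modules, treated as black boxes (non-intrusive UQ).
# module_a: a toy reaction step; module_b: a toy transport step, linearly
# coupled to module_a's output. Both are illustrative assumptions.
def module_a(rate):               # uncertain input: reaction rate
    return 2.0 * rate             # concentration produced (toy model)

def module_b(concentration):      # consumes module_a's output
    return 0.5 * concentration + 1.0  # transported amount (toy model)

random.seed(0)
# Sample the uncertain parameter, push each sample through the coupled chain.
samples = [random.gauss(1.0, 0.1) for _ in range(10000)]
outputs = [module_b(module_a(r)) for r in samples]

mean = statistics.fmean(outputs)
std = statistics.stdev(outputs)
print(round(mean, 2), round(std, 2))
```

In a hybrid framework, an intrusive module would instead expose its internal stochastic expansion; the coupling layer's job is to translate between the two representations.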
TLD linearity vs. beam energy and modality
Troncalli, Andrew J.; Chapman, Jane
2002-12-31
Thermoluminescent dosimetry (TLD) is considered a valuable dosimetric tool for determining patient dose. Lithium fluoride doped with magnesium and titanium (TLD-100) is widely used, as it does not display widely divergent energy dependence. For many years, we have known that TLD-100 shows supralinearity to dose. In a radiotherapy clinic, there are beams of multiple energies and modalities. This work investigates whether individual linearity corrections must be used for each beam or whether a single correction can be applied to all beams. The response of TLD as a function of dose was measured from 25 cGy to 1000 cGy for both electron and photon beams from 6 to 18 MeV. This work shows that, within our measurement uncertainty, TLD-100 exhibits supralinearity at all megavoltage energies and modalities.
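A supralinearity correction of the kind at issue can be sketched numerically. The quadratic response model and its coefficient below are assumptions for illustration, not fitted values from this study:

```python
from math import sqrt

K = 5e-5  # per cGy; assumed (hypothetical) supralinearity coefficient

def reading(dose_cgy):
    # Toy supralinear TLD response: reading grows faster than dose.
    return dose_cgy * (1.0 + K * dose_cgy)

def corrected_dose(r):
    # Invert r = D + K*D^2 (positive root of the quadratic formula).
    return (-1.0 + sqrt(1.0 + 4.0 * K * r)) / (2.0 * K)

for d in (25.0, 250.0, 1000.0):
    r = reading(d)
    print(d, round(r, 1), round(corrected_dose(r), 1))
```

The question the paper asks is whether one such K suffices for all beam energies and modalities, or whether each beam needs its own.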
Shortcuts to adiabaticity from linear response theory
Acconcia, Thiago V.; Bonança, Marcus V. S.; Deffner, Sebastian
2015-10-23
A shortcut to adiabaticity is a finite-time process that produces the same final state as would result from infinitely slow driving. We show that such shortcuts can be found for weak perturbations from linear response theory. Moreover, with the help of phenomenological response functions, a simple expression for the excess work is found—quantifying the nonequilibrium excitations. For two specific examples, i.e., the quantum parametric oscillator and the spin 1/2 in a time-dependent magnetic field, we show that finite-time zeros of the excess work indicate the existence of shortcuts. We finally propose a degenerate family of protocols, which facilitates shortcuts to adiabaticity for specific and very short driving times.
Linear nozzle with tailored gas plumes
Leon, David D.; Kozarek, Robert L.; Mansour, Adel; Chigier, Norman
2001-01-01
There is claimed a method for depositing fluid material from a linear nozzle in a substantially uniform manner across and along a surface. The method includes directing gaseous medium through said nozzle to provide a gaseous stream at the nozzle exit that entrains fluid material supplied to the nozzle, said gaseous stream being provided with a velocity profile across the nozzle width that compensates for the gaseous medium's tendency to assume an axisymmetric configuration after leaving the nozzle and before reaching the surface. There is also claimed a nozzle divided into respective side-by-side zones, or preferably chambers, through which a gaseous stream can be delivered in various velocity profiles across the width of said nozzle to compensate for the tendency of this gaseous medium to assume an axisymmetric configuration.
Linear nozzle with tailored gas plumes
Kozarek, Robert L.; Straub, William D.; Fischer, Joern E.; Leon, David D.
2003-01-01
There is claimed a method for depositing fluid material from a linear nozzle in a substantially uniform manner across and along a surface. The method includes directing gaseous medium through said nozzle to provide a gaseous stream at the nozzle exit that entrains fluid material supplied to the nozzle, said gaseous stream being provided with a velocity profile across the nozzle width that compensates for the gaseous medium's tendency to assume an axisymmetric configuration after leaving the nozzle and before reaching the surface. There is also claimed a nozzle divided into respective side-by-side zones, or preferably chambers, through which a gaseous stream can be delivered in various velocity profiles across the width of said nozzle to compensate for the tendency of this gaseous medium to assume an axisymmetric configuration.
Radio frequency focused interdigital linear accelerator
Swenson, Donald A.; Starling, W. Joel
2006-08-29
An interdigital (Wideroe) linear accelerator employing drift tubes, and associated support stems that couple to both the longitudinal and support stem electromagnetic fields of the linac, creating rf quadrupole fields along the axis of the linac to provide transverse focusing for the particle beam. Each drift tube comprises two separate electrodes operating at different electrical potentials as determined by cavity rf fields. Each electrode supports two fingers, pointing towards the opposite end of the drift tube, forming a four-finger geometry that produces an rf quadrupole field distribution along its axis. The fundamental periodicity of the structure is equal to one half of the particle wavelength βλ, where β is the particle velocity in units of the velocity of light and λ is the free-space wavelength of the rf. Particles are accelerated in the gaps between drift tubes. The particle beam is focused in regions inside the drift tubes.
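The βλ/2 periodicity fixes the drift-tube spacing once the rf frequency and particle velocity are known. A small sketch (the rf frequency here is an assumed illustrative value, not taken from the patent):

```python
# Fundamental period of the interdigital structure: beta * lambda / 2,
# where lambda = c / f_rf is the free-space rf wavelength.
C = 299_792_458.0   # speed of light, m/s
F_RF = 200e6        # assumed rf frequency (hypothetical), Hz

def cell_length(beta):
    lam = C / F_RF            # free-space rf wavelength, m
    return beta * lam / 2.0   # drift-tube period, m

# Example: a particle at 5% of the speed of light.
print(round(cell_length(0.05) * 100, 2), "cm")
```

As the beam accelerates, β grows along the structure, so successive cells lengthen in proportion.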
Faraday rotation assisted by linearly polarized light
Choi, Jai Min; Kim, Jang Myun; Cho, D.
2007-11-15
We demonstrate a type of chiral effect of an atomic medium. Polarization rotation of a probe beam is observed only when both a magnetic field and a linearly polarized coupling beam are present. We compare it with other chiral effects like optical activity, the Faraday effect, and the optically induced Faraday effect from the viewpoint of spatial inversion and time reversal transformations. As a theoretical model we consider a five-level configuration involving the cesium D2 transition. We use spin-polarized cold cesium atoms trapped in a magneto-optical trap to measure the polarization rotation versus probe detuning. The result shows reasonable agreement with a calculation from the master equation of the five-level configuration.
DOE Publishes CALiPER Report on Cost-Effectiveness of Linear (T8) LED Lamps
The U.S. Department of Energy's CALiPER program has released Report 21.3, which is part of a series of investigations on linear LED lamps. Report 21.3 details a set of life-cycle cost simulations that compared a two-lamp troffer using LED lamps (38W total power draw) or fluorescent lamps (51W total power draw) over a 10-year study period.
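The shape of such a life-cycle comparison can be reproduced in outline from the power draws the report cites. The operating hours and electricity price below are assumptions for illustration, not values from Report 21.3, and lamp purchase and maintenance costs are omitted:

```python
# Simple life-cycle energy-cost comparison for a two-lamp troffer,
# using the power draws cited above (38 W LED vs 51 W fluorescent).
HOURS_PER_YEAR = 3000    # assumed annual burn hours (hypothetical)
PRICE_PER_KWH = 0.11     # assumed electricity price, $/kWh (hypothetical)
YEARS = 10               # study period from the report

def energy_cost(watts):
    kwh = watts / 1000.0 * HOURS_PER_YEAR * YEARS
    return kwh * PRICE_PER_KWH

led = energy_cost(38)
fluorescent = energy_cost(51)
print(round(led, 2), round(fluorescent, 2), round(fluorescent - led, 2))
```

The full simulations in the report additionally weigh lamp price, rated life, and relamping labor against this energy saving.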
Linear-array systems for aerospace NDE
Smith, Robert A.; Willsher, Stephen J.; Bending, Jamie M.
1999-12-02
Rapid large-area inspection of composite structures for impact damage and multi-layered aluminum skins for corrosion has been a recognized priority for several years in both military and civil aerospace applications. Approaches to this requirement have followed two clearly different routes: the development of novel large-area inspection systems, and the enhancement of current ultrasonic or eddy-current methods to reduce inspection times. Ultrasonic inspection is possible with standard flaw detection equipment, but the addition of a linear ultrasonic array could reduce inspection times considerably. In order to investigate their potential, 9-element and 17-element linear ultrasonic arrays for composites, and 64-element arrays for aluminum skins, have been developed to DERA specifications for use with the ANDSCAN area scanning system. A 5 m² composite wing surface has been scanned with a scan resolution of approximately 3 mm in 6 hours. With subsequent software and hardware improvements, all four composite wing surfaces (top/bottom, left/right) of a military fighter aircraft can potentially be inspected in less than a day. Array technology has been very widely used in the medical ultrasound field, although rarely above 10 MHz, whereas lap-joint inspection requires a pulse center frequency of 12 to 20 MHz in order to resolve the separate interfaces in the lap joint. A 128 mm long multi-element array of 5 mm x 2 mm ultrasonic elements for use with the ANDSCAN scanning software was produced to a DERA specification by an NDT manufacturer with experience in the medical imaging field. This paper analyses the performance of the transducers that have been produced and evaluates their use in scanning systems of different configurations.
Application Program Interface for Engineering and Scientific Applications
Energy Science and Technology Software Center (OSTI)
2001-10-18
An Application Program Interface (API) for engineering and scientific applications. This system allows application developers to write to a single uniform interface, obtaining access to all solvers in the Trilinos framework. This includes linear solvers, eigensolvers, nonlinear solvers, and time-dependent solvers.
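The value of such a uniform interface is that application code targets one solver contract while concrete solvers plug in behind it. The following is a hypothetical sketch of the idea only; the class and method names are invented and are NOT the real Trilinos API:

```python
# Hypothetical uniform solver interface (illustrative, not Trilinos).
class Solver:
    def solve(self, a, b):
        raise NotImplementedError

class GaussSeidelSolver(Solver):
    """Toy iterative solver for diagonally dominant 2x2 systems."""
    def solve(self, a, b, iters=200):
        x = [0.0, 0.0]
        for _ in range(iters):
            x[0] = (b[0] - a[0][1] * x[1]) / a[0][0]
            x[1] = (b[1] - a[1][0] * x[0]) / a[1][1]
        return x

class DirectSolver(Solver):
    """Toy direct solver (Cramer's rule) behind the same interface."""
    def solve(self, a, b):
        det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
        return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
                (a[0][0] * b[1] - b[0] * a[1][0]) / det]

# Application code is written once against Solver and never changes
# when the backend (iterative vs direct) is swapped.
a, b = [[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0]
for solver in (GaussSeidelSolver(), DirectSolver()):
    print([round(v, 4) for v in solver.solve(a, b)])
```

The real framework applies the same pattern across linear, eigen, nonlinear, and time-dependent solvers.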
Scenario Decomposition for 0-1 Stochastic Programs: Improvements and Asynchronous Implementation
Ryan, Kevin; Rajan, Deepak; Ahmed, Shabbir
2016-05-01
Our recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions that are discovered by solving scenario subproblems. In this work, we develop an asynchronous, distributed implementation of the algorithm which has computational advantages over existing synchronous implementations. Improvements to both the synchronous and asynchronous algorithms are proposed. We test the results on well-known stochastic 0-1 programs from the SIPLIB test library and are able to solve one previously unsolved instance from the test set.
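The evaluate-and-remove loop can be illustrated on a toy instance. The knapsack data and scenarios below are invented (not from SIPLIB), and brute-force enumeration stands in for the MIP solves of the scenario subproblems:

```python
from itertools import product

# Toy stochastic 0-1 program: pick binary x maximizing E_s[c_s . x]
# under one knapsack constraint. Data are illustrative only.
weights = [3, 4, 2]
limit = 6
scenarios = [([4, 3, 1], 0.5), ([1, 5, 4], 0.5)]  # (profits c_s, probability)

def value(c, x):
    return sum(ci * xi for ci, xi in zip(c, x))

def feasible(x):
    return sum(w * xi for w, xi in zip(weights, x)) <= limit

def expected(x):
    return sum(p * value(c, x) for c, p in scenarios)

excluded = set()
best_x, best_val = None, float("-inf")
while True:
    candidates, bound = set(), 0.0
    for c, p in scenarios:
        # Scenario subproblem: best non-excluded feasible x for this
        # scenario alone (brute force stands in for a MIP solve).
        xs = max((x for x in product((0, 1), repeat=3)
                  if feasible(x) and x not in excluded),
                 key=lambda x: value(c, x))
        candidates.add(xs)
        bound += p * value(c, xs)
    for x in candidates:              # evaluate candidates on the full problem
        if expected(x) > best_val:
            best_x, best_val = x, expected(x)
    excluded |= candidates            # remove evaluated solutions
    if bound <= best_val:             # scenario bound certifies optimality
        break
print(best_x, best_val)
```

The asynchronous variant in the paper lets workers solve subproblems and evaluate candidates concurrently instead of in lockstep rounds.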
CALiPER Application Summary Report 19. LED Linear Pendants
none,
2012-10-01
Report 19 reviews the independently tested performance of nine LED linear pendants and also evaluates a collection of 11 linear pendant products available in both an LED and fluorescent version.
Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids
Forest, Mark Gregory
2014-05-06
The team for this project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the long-time behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.
McHugh, P.R.
1995-10-01
Fully coupled, Newton-Krylov algorithms are investigated for solving strongly coupled, nonlinear systems of partial differential equations arising in the field of computational fluid dynamics. Primitive variable forms of the steady incompressible and compressible Navier-Stokes and energy equations that describe the flow of a laminar Newtonian fluid in two dimensions are specifically considered. Numerical solutions are obtained by first integrating over discrete finite volumes that compose the computational mesh. The resulting system of nonlinear algebraic equations is linearized using Newton's method. Preconditioned Krylov subspace based iterative algorithms then solve these linear systems on each Newton iteration. Selected Krylov algorithms include the Arnoldi-based Generalized Minimal RESidual (GMRES) algorithm, and the Lanczos-based Conjugate Gradients Squared (CGS), Bi-CGSTAB, and Transpose-Free Quasi-Minimal Residual (TFQMR) algorithms. Both Incomplete Lower-Upper (ILU) factorization and domain-based additive and multiplicative Schwarz preconditioning strategies are studied. Numerical techniques such as mesh sequencing, adaptive damping, pseudo-transient relaxation, and parameter continuation are used to improve the solution efficiency, while algorithm implementation is simplified using a numerical Jacobian evaluation. The capabilities of standard Newton-Krylov algorithms are demonstrated via solutions to both incompressible and compressible flow problems. Incompressible flow problems include natural convection in an enclosed cavity, and mixed/forced convection past a backward-facing step.
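The outer loop of such a method, with the numerical (finite-difference) Jacobian the abstract mentions, can be sketched on a tiny system. For brevity the inner preconditioned Krylov solve is replaced here by a direct 2x2 solve; the test problem is invented:

```python
# Newton's method with a finite-difference Jacobian on a small
# 2-equation nonlinear system (illustrative stand-in for the PDE residuals).
def residual(x):
    return [x[0] ** 2 + x[1] ** 2 - 4.0,   # circle of radius 2
            x[0] - x[1]]                   # line y = x

def jacobian_fd(f, x, eps=1e-7):
    # Column-by-column forward differences, as a numerical Jacobian.
    f0 = f(x)
    jac = [[0.0, 0.0], [0.0, 0.0]]
    for j in range(2):
        xp = list(x)
        xp[j] += eps
        fp = f(xp)
        for i in range(2):
            jac[i][j] = (fp[i] - f0[i]) / eps
    return jac

def solve2(a, b):
    # Direct 2x2 solve; a Krylov method (e.g. GMRES) would go here.
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [(b[0] * a[1][1] - a[0][1] * b[1]) / det,
            (a[0][0] * b[1] - b[0] * a[1][0]) / det]

x = [1.0, 0.5]                      # initial guess
for _ in range(20):
    r = residual(x)
    if max(abs(v) for v in r) < 1e-10:
        break
    step = solve2(jacobian_fd(residual, x), [-v for v in r])
    x = [xi + si for xi, si in zip(x, step)]
print([round(v, 6) for v in x])     # root near (sqrt(2), sqrt(2))
```

The matrix-free appeal of Newton-Krylov is that the Krylov solver only needs Jacobian-vector products, which the same finite-difference idea supplies without ever forming the Jacobian.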
Support Vector Machine algorithm for regression and classification
Energy Science and Technology Software Center (OSTI)
2001-08-01
The software is an implementation of the Support Vector Machine (SVM) algorithm that was invented and developed by Vladimir Vapnik and his co-workers at AT&T Bell Laboratories. The specific implementation reported here is an Active Set method for solving a quadratic optimization problem that forms the major part of any SVM program. The implementation is tuned to the specific constraints generated in SVM learning, and is thus more efficient than general-purpose quadratic optimization programs. A decomposition method has been implemented in the software that enables processing of large data sets; the size of the learning data is virtually unlimited by the capacity of the computer's physical memory. The software is flexible and extensible. Two upper bounds are implemented to regulate SVM learning for classification, which allow users to adjust the false positive and false negative rates. The software can be used either as a standalone, general-purpose SVM regression or classification program, or be embedded into a larger software system.
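The quadratic program at the heart of SVM learning can be shown on a toy scale. The sketch below trains a hard-margin SVM by projected gradient ascent on the dual, a far simpler stand-in for the active-set solver described above; the bias is folded in via an appended constant feature, which removes the dual equality constraint and leaves only alpha >= 0. The four training points are invented:

```python
# Toy hard-margin SVM via projected gradient ascent on the dual QP.
X = [(2.0, 2.0, 1.0), (3.0, 3.0, 1.0), (0.0, 0.0, 1.0), (1.0, 0.0, 1.0)]
Y = [1.0, 1.0, -1.0, -1.0]   # last coordinate of each x is the bias feature

def dot(a, b):
    return sum(ai * bi for ai, bi in zip(a, b))

n = len(X)
q = [[Y[i] * Y[j] * dot(X[i], X[j]) for j in range(n)] for i in range(n)]

alpha = [0.0] * n
eta = 0.01                    # step size, small enough for convergence here
for _ in range(5000):
    grad = [1.0 - sum(q[i][j] * alpha[j] for j in range(n)) for i in range(n)]
    alpha = [max(0.0, alpha[i] + eta * grad[i]) for i in range(n)]  # project

w = [sum(alpha[i] * Y[i] * X[i][k] for i in range(n)) for k in range(3)]
predictions = [1.0 if dot(w, x) > 0 else -1.0 for x in X]
print(predictions == Y)
```

An active-set method reaches the same optimum far faster by tracking exactly which alpha are at their bounds, which is why it pays off on large constraint sets.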
Close, E.; Fong, C; Lee, E.
1991-10-30
Although this report is called a program document, it is not simply a user's guide to running HILDA, nor is it a programmer's guide to maintaining and updating HILDA. It is a guide to HILDA as a program and as a model for designing and costing a heavy ion fusion (HIF) driver. HILDA represents the work and ideas of many people, as does the model upon which it is based. The project was initiated by Denis Keefe, the leader of the LBL HIFAR project. He suggested the name HILDA, an acronym for Heavy Ion Linac Driver Analysis. The conventions and style of development of the HILDA program are based on the original goals. It was desired to have a computer program that could estimate the cost and find an optimal design for heavy ion fusion induction linac drivers. This program should model near-term machines as well as full-scale drivers. The code objectives were: (1) a relatively detailed, but easily understood, model; (2) modular, structured code to facilitate making changes in the model, the analysis reports, and the user interface; (3) documentation that defines and explains the system model, cost algorithm, program structure, and generated reports. With this tool a knowledgeable user would be able to examine an ensemble of drivers and find the driver that is minimum in cost, subject to stated constraints. This document contains a report section that describes how to use HILDA, some simple illustrative examples, and descriptions of the models used for the beam dynamics and component design. Associated with this document, as files on floppy disks, are the complete HILDA source code, much information that is needed to maintain and update HILDA, and some complete examples. These examples illustrate that the present version of HILDA can generate much useful information about the design of a HIF driver. They also serve as guides to what features would be useful to include in future updates. The HPD represents the current state of development of this project.
Factorization using the quadratic sieve algorithm
Davis, J.A.; Holdridge, D.B.
1983-01-01
Since the cryptosecurity of the RSA two key cryptoalgorithm is no greater than the difficulty of factoring the modulus (product of two secret primes), a code that implements the Quadratic Sieve factorization algorithm on the CRAY I computer has been developed at the Sandia National Laboratories to determine as sharply as possible the current state-of-the-art in factoring. Because all viable attacks on RSA thus far proposed are equivalent to factorization of the modulus, sharper bounds on the computational difficulty of factoring permit improved estimates for the size of RSA parameters needed for given levels of cryptosecurity. Analysis of the Quadratic Sieve indicates that it may be faster than any previously published general purpose algorithm for factoring large integers. The high speed of the CRAY I coupled with the capability of the CRAY to pipeline certain vectorized operations make this algorithm (and code) the front runner in current factoring techniques.
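The sieve's endgame is a congruence of squares: find x and y with x² ≡ y² (mod n) and x ≢ ±y (mod n), so that gcd(x - y, n) yields a nontrivial factor. A toy sketch, with brute-force search standing in for the actual sieving stage (the modulus below is a classic small example, not an RSA-sized number):

```python
from math import gcd, isqrt

def congruence_of_squares(n):
    # Search for x with x^2 mod n a perfect square y^2, x != +-y (mod n);
    # then gcd(x - y, n) splits n. Sieving finds such x far more cleverly.
    for x in range(isqrt(n) + 1, n):
        y2 = (x * x) % n
        y = isqrt(y2)
        if y * y == y2 and (x - y) % n != 0 and (x + y) % n != 0:
            return gcd(x - y, n)
    return None

n = 8051                    # toy modulus: 8051 = 83 * 97
factor = congruence_of_squares(n)
print(factor, n // factor)
```

The quadratic sieve's contribution is building such congruences from many smooth values of x² - n combined by linear algebra over GF(2), which is what made it competitive on the CRAY I's vector hardware.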
Bootstrap performance profiles in stochastic algorithms assessment
Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro
2015-03-10
Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
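The core bootstrap step is resampling the observed runs with replacement to estimate the sampling distribution of a chosen statistic. A minimal sketch, with synthetic "run results" standing in for real solver outputs:

```python
import random
import statistics

# Bootstrap a 95% percentile interval for the median best-objective-value
# from 20 stochastic-solver runs (synthetic data, for illustration only).
random.seed(1)
runs = [random.gauss(10.0, 1.0) for _ in range(20)]   # observed run results

B = 2000
medians = []
for _ in range(B):
    resample = [random.choice(runs) for _ in runs]    # sample with replacement
    medians.append(statistics.median(resample))

medians.sort()
lo, hi = medians[int(0.025 * B)], medians[int(0.975 * B)]
print(round(lo, 2), round(hi, 2))
```

Because the statistic is arbitrary, the same loop yields profiles for the median, quartiles, or worst-case performance, which is what lets bootstrap profiles expose the accuracy/precision trade-off that mean-based profiles hide.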
Resistive Network Optimal Power Flow: Uniqueness and Algorithms
Tan, CW; Cai, DWH; Lou, X
2015-01-01
The optimal power flow (OPF) problem minimizes the power loss in an electrical network by optimizing the voltage and power delivered at the network buses, and is a nonconvex problem that is generally hard to solve. By leveraging a recent development on the zero duality gap of OPF, we propose a second-order cone programming convex relaxation of the resistive network OPF, and study the uniqueness of the optimal solution using differential topology, especially the Poincaré-Hopf Index Theorem. We characterize the global uniqueness for different network topologies, e.g., line, radial, and mesh networks. This serves as a starting point to design distributed local algorithms with global behaviors that have low complexity, are computationally fast, and can run under synchronous and asynchronous settings in practical power grids.
Governance of the International Linear Collider Project
Foster, B.; Barish, B.; Delahaye, J.P.; Dosselli, U.; Elsen, E.; Harrison, M.; Mnich, J.; Paterson, J.M.; Richard, F.; Stapnes, S.; Suzuki, A.; Wormser, G.; Yamada, S.; /KEK, Tsukuba
2012-05-31
Governance models for the International Linear Collider Project are examined in the light of experience from similar international projects around the world. Recommendations for one path which could be followed to realize the ILC successfully are outlined. The International Linear Collider (ILC) is a unique endeavour in particle physics; fully international from the outset, it has no 'host laboratory' to provide infrastructure and support. The realization of this project therefore presents unique challenges, in scientific, technical and political arenas. This document outlines the main questions that need to be answered if the ILC is to become a reality. It describes the methodology used to harness the wisdom displayed and lessons learned from current and previous large international projects. From this basis, it suggests both general principles and outlines a specific model to realize the ILC. It recognizes that there is no unique model for such a laboratory and that there are often several solutions to a particular problem. Nevertheless it proposes concrete solutions that the authors believe are currently the best choices in order to stimulate discussion and catalyze proposals as to how to bring the ILC project to fruition. The ILC Laboratory would be set up by international treaty and be governed by a strong Council to whom a Director General and an associated Directorate would report. Council would empower the Director General to give strong management to the project. It would take its decisions in a timely manner, giving appropriate weight to the financial contributions of the member states. The ILC Laboratory would be set up for a fixed term, capable of extension by agreement of all the partners. The construction of the machine would be based on a Work Breakdown Structure and value engineering and would have a common cash fund sufficiently large to allow the management flexibility to optimize the project's construction. Appropriate contingency, clearly
Graph algorithms in the titan toolkit.
McLendon, William Clarence, III; Wylie, Brian Neil
2009-10-01
Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need of making these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically we describe the design and implementation of an open source toolkit for doing graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.
Speckle imaging algorithms for planetary imaging
Johansson, E.
1994-11-15
I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.
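The reconstruction step described here, estimating the Fourier magnitude and phase separately and then recombining before an inverse transform, can be shown in one dimension. In this toy both estimates come from the same signal, so the recombination recovers it exactly; a pure-Python DFT stands in for an FFT library:

```python
import cmath

def dft(x):
    n = len(x)
    return [sum(x[t] * cmath.exp(-2j * cmath.pi * k * t / n) for t in range(n))
            for k in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[k] * cmath.exp(2j * cmath.pi * k * t / n) for k in range(n)) / n
            for t in range(n)]

signal = [0.0, 1.0, 3.0, 2.0, 0.0, 0.0, 1.0, 0.0]
spectrum = dft(signal)
magnitude = [abs(v) for v in spectrum]       # estimated e.g. from power spectrum
phase = [cmath.phase(v) for v in spectrum]   # estimated e.g. from the bispectrum

# Recombine the two separate estimates and inverse-transform.
combined = [m * cmath.exp(1j * p) for m, p in zip(magnitude, phase)]
recovered = [v.real for v in idft(combined)]
print([round(v, 6) for v in recovered])
```

In speckle imaging the two estimates come from different averages over many short-exposure frames, which is what makes the separate magnitude/phase treatment necessary.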
Berkeley Algorithms Help Researchers Understand Dark Energy
November 24, 2014: Scientists believe that dark energy, the mysterious force that is accelerating cosmic expansion, makes up about 70 percent of the mass and energy of the universe. But because they don't know what it is, they cannot observe it directly. To unlock the mystery of dark energy and its influence on the universe, researchers
Linear air-fuel sensor development
Garzon, F.; Miller, C.
1996-12-14
The electrochemical zirconia solid electrolyte oxygen sensor is extensively used for monitoring oxygen concentrations in various fields. Such sensors are currently utilized in automobiles to monitor the exhaust gas composition and control the air-to-fuel ratio, thus reducing harmful emission components and improving fuel economy. Zirconia oxygen sensors are divided into two classes of devices: (1) potentiometric or logarithmic air/fuel sensors; and (2) amperometric or linear air/fuel sensors. The potentiometric sensors are ideally suited to monitor the air-to-fuel ratio close to the complete combustion stoichiometry, a value of about 14.8 to 1 parts by volume. This occurs because the oxygen concentration changes by many orders of magnitude as the air/fuel ratio is varied through the stoichiometric value. However, the potentiometric sensor is not very sensitive to changes in oxygen partial pressure away from the stoichiometric point, due to the logarithmic dependence of the output voltage signal on the oxygen partial pressure. It is often advantageous to operate gasoline-powered piston engines with excess combustion air; this improves fuel economy and reduces hydrocarbon emissions. To maintain stable combustion away from stoichiometry, and enable engines to operate in the excess oxygen (lean burn) region, several limiting-current amperometric sensors have been reported. These sensors are based on the electrochemical oxygen ion pumping of a zirconia electrolyte. They typically show reproducible limiting-current plateaus with an applied voltage, caused by the gas diffusion overpotential at the cathode.
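The logarithmic dependence behind the potentiometric sensor's behavior is the Nernst relation, E = (RT/4F) ln(p_ref/p_O2). A quick numerical check (the operating temperature is an assumed typical value):

```python
from math import log

R = 8.314      # gas constant, J/(mol K)
F = 96485.0    # Faraday constant, C/mol
T = 1000.0     # K, assumed typical sensor operating temperature
P_REF = 0.209  # atm, O2 partial pressure of the reference air

def nernst_voltage(p_o2):
    # Nernst output of a potentiometric zirconia cell.
    return R * T / (4.0 * F) * log(P_REF / p_o2)

# Rich side (~1e-20 atm O2) gives nearly a volt; lean side (~1e-2 atm)
# gives only tens of millivolts, hence the insensitivity away from
# stoichiometry that motivates the amperometric (linear) design.
print(round(nernst_voltage(1e-20), 3), round(nernst_voltage(1e-2), 3))
```

A twentyfold change in lean-side oxygen moves the output by only ~60-70 mV at this temperature, while crossing stoichiometry swings it by nearly a volt.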
VINETA II: A linear magnetic reconnection experiment
Bohlin, H.; Von Stechow, A.; Rahbarnia, K.; Grulke, O.; Klinger, T. (Ernst-Moritz-Arndt University, Domstr. 11, 17489 Greifswald)
2014-02-15
A linear experiment dedicated to the study of driven magnetic reconnection is presented. The new device (VINETA II) is suitable for investigating both collisional and near collisionless reconnection. Reconnection is achieved by externally driving magnetic field lines towards an X-point, inducing a current in the background plasma which consequently modifies the magnetic field topology. Owing to the open field line configuration of the experiment, the current is limited by the axial sheath boundary conditions. A plasma gun is used as an additional electron source in order to counterbalance the charge separation effects and supply the required current. Two drive methods are used in the device. First, an oscillating current through two parallel conductors drive the reconnection. Second, a stationary X-point topology is formed by the parallel conductors, and the drive is achieved by an oscillating current through a third conductor. In the first setup, the magnetic field of the axial plasma current dominates the field topology near the X-point throughout most of the drive. The second setup allows for the amplitude of the plasma current as well as the motion of the flux to be set independently of the X-point topology of the parallel conductors.
Precision envelope detector and linear rectifier circuitry
Davis, Thomas J.
1980-01-01
Disclosed is a method and apparatus for the precise linear rectification and envelope detection of oscillatory signals. The signal is applied to a voltage-to-current converter which supplies current to a constant current sink. The connection between the converter and the sink is also applied through a diode and an output load resistor to a ground connection. The connection is also connected to ground through a second diode of opposite polarity from the diode in series with the load resistor. Very small amplitude voltage signals applied to the converter will cause a small change in the output current of the converter, and the difference between the output current and the constant current sink will be applied either directly to ground through the single diode, or across the output load resistor, dependent upon the polarity. Disclosed also is a full-wave rectifier utilizing constant current sinks and voltage-to-current converters. Additionally, disclosed is a combination of the voltage-to-current converters with differential integrated circuit preamplifiers to boost the initial signal amplitude, and with low pass filtering applied so as to obtain a video or signal envelope output.
Liquid cooled, linear focus solar cell receiver
Kirpich, A.S.
1983-12-08
Separate structures for electrical insulation and thermal conduction are established within a liquid cooled, linear focus solar cell receiver for use with parabolic or Fresnel optical concentrators. The receiver includes a V-shaped aluminum extrusion having a pair of outer faces each formed with a channel receiving a string of solar cells in thermal contact with the extrusion. Each cell string is attached to a continuous glass cover secured within the channel with spring clips to isolate the string from the external environment. Repair or replacement of solar cells is effected simply by detaching the spring clips to remove the cover/cell assembly without interrupting circulation of coolant fluid through the receiver. The lower surface of the channel in thermal contact with the cells of the string is anodized to establish a suitable standoff voltage capability between the cells and the extrusion. Primary electrical insulation is provided by a dielectric tape disposed between the coolant tube and extrusion. Adjacent solar cells are soldered to interconnect members designed to accommodate thermal expansion and mismatches. The coolant tube is clamped into the extrusion channel with a releasably attachable clamping strip to facilitate easy removal of the receiver from the coolant circuit.
Liquid cooled, linear focus solar cell receiver
Kirpich, Aaron S.
1985-01-01
Separate structures for electrical insulation and thermal conduction are established within a liquid cooled, linear focus solar cell receiver for use with parabolic or Fresnel optical concentrators. The receiver includes a V-shaped aluminum extrusion having a pair of outer faces each formed with a channel receiving a string of solar cells in thermal contact with the extrusion. Each cell string is attached to a continuous glass cover secured within the channel with spring clips to isolate the string from the external environment. Repair or replacement of solar cells is effected simply by detaching the spring clips to remove the cover/cell assembly without interrupting circulation of coolant fluid through the receiver. The lower surface of the channel in thermal contact with the cells of the string is anodized to establish a suitable standoff voltage capability between the cells and the extrusion. Primary electrical insulation is provided by a dielectric tape disposed between the coolant tube and extrusion. Adjacent solar cells are soldered to interconnect members designed to accommodate thermal expansion and mismatches. The coolant tube is clamped into the extrusion channel with a releasably attachable clamping strip to facilitate easy removal of the receiver from the coolant circuit.
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
& DEVELOPMENT: PROGRAM ABSTRACTS. Energy Efficiency and Renewable Energy, Office of Transportation Technologies, Office of Advanced Automotive Technologies. Polymer Electrolyte Membrane Fuel Cells for Transportation (fuel cell stack; PEM stack and stack components: catalyst layer, bipolar plate, electrode backing layers; integrated systems: air management system, fuel processor system). June 1999.
Drift problems in the automatic analysis of gamma-ray spectra using associative memory algorithms
Olmos, P.; Diaz, J.C.; Perez, J.M.; Aguayo, P.; Gomez, P.; Rodellar, V.
1994-06-01
Perturbations affecting nuclear radiation spectrometers during their operation frequently spoil the accuracy of automatic analysis methods. One problem usually found in practice is fluctuation of the spectrum gain and zero, produced by drifts in the detector and nuclear electronics. The pattern acquired under these conditions may differ significantly from that expected with stable instrumentation, thus complicating the identification and quantification of the radionuclides present. In this work, the performance of Associative Memory algorithms when dealing with spectra affected by drifts is explored, considering a linear energy-calibration function. The formulation of the extended algorithm, constructed to quantify the possible presence of drifts in the spectrometer, is deduced, and the results obtained from its application to several practical cases are discussed.
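The linear gain/zero drift model above can be illustrated with a short sketch. This is an illustrative toy only, not the paper's associative-memory formulation; the spectrum, drift values, and helper name are made up:

```python
import numpy as np

# A linear gain/zero drift remaps reference channel c to gain*c + zero in
# the acquired spectrum. A drifted spectrum can be compared against a
# stable reference by resampling it back onto the reference calibration.
def undrift(spectrum, gain, zero):
    """Resample a drifted spectrum back onto the reference channel axis."""
    ch = np.arange(len(spectrum), dtype=float)
    return np.interp(gain * ch + zero, ch, spectrum)

# reference spectrum: a single Gaussian peak at channel 200
ref = np.exp(-0.5 * ((np.arange(512) - 200.0) / 3.0) ** 2)
# simulate a 2% gain drift and a 5-channel zero shift (peak moves to ~209)
drifted = np.interp((np.arange(512) - 5.0) / 1.02, np.arange(512.0), ref)
rec = undrift(drifted, 1.02, 5.0)
assert abs(int(np.argmax(rec)) - 200) <= 1  # peak restored to channel 200
```

In practice the gain and zero are unknown and must themselves be estimated, which is the role of the extended algorithm described in the abstract.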
Linear Fixed-Field Multi-Pass Arcs for Recirculating Linear Accelerators
V.S. Morozov, S.A. Bogacz, Y.R. Roblin, K.B. Beard
2012-06-01
Recirculating Linear Accelerators (RLAs) provide a compact and efficient way of accelerating particle beams to medium and high energies by reusing the same linac for multiple passes. In the conventional scheme, after each pass, the different energy beams coming out of the linac are separated and directed into appropriate arcs for recirculation, with each pass requiring a separate fixed-energy arc. In this paper we present a concept of an RLA return arc based on linear combined-function magnets, in which two and potentially more consecutive passes with very different energies are transported through the same string of magnets. By adjusting the dipole and quadrupole components of the constituent linear combined-function magnets, the arc is designed to be achromatic and to have zero initial and final reference orbit offsets for all transported beam energies. We demonstrate the concept by developing a design for a droplet-shaped return arc for a dog-bone RLA capable of transporting two beam passes with momenta different by a factor of two. We present the results of tracking simulations of the two passes and lay out the path to an end-to-end design and simulation of a complete dog-bone RLA.
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-08-19
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, where the number of measurements is often large and the model parameters are numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate- to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
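The damped linear solve at the core of any Levenberg-Marquardt iteration can be sketched as follows. This shows only the basic damped normal-equations step on a made-up one-parameter fit; the Krylov-subspace projection and subspace recycling that make the paper's method fast are omitted:

```python
import numpy as np

def lm_step(J, r, lam):
    """One Levenberg-Marquardt step: solve (J^T J + lam*I) dp = -J^T r.

    In the paper's method this dense solve is replaced by a projection
    onto a Krylov subspace that is recycled across damping parameters.
    """
    A = J.T @ J + lam * np.eye(J.shape[1])
    return np.linalg.solve(A, -J.T @ r)

# fit y = exp(a*x) to synthetic data (illustrative, not from the paper)
x = np.linspace(0.0, 1.0, 20)
y = np.exp(0.7 * x)
a, lam = 0.0, 1e-2
for _ in range(50):
    r = np.exp(a * x) - y                   # residual vector
    J = (x * np.exp(a * x)).reshape(-1, 1)  # Jacobian d r / d a
    a += lm_step(J, r, lam)[0]
assert abs(a - 0.7) < 1e-6  # recovers the true parameter
```

The damping parameter `lam` interpolates between a Gauss-Newton step (small `lam`) and a short gradient-descent step (large `lam`); the cost the paper attacks is the linear solve, which grows quickly with the number of parameters.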
Gamma-ray Spectral Analysis Algorithm Library
Energy Science and Technology Software Center (OSTI)
1997-09-25
The routines of the Gauss Algorithms library are used to implement special purpose products that need to analyze gamma-ray spectra from Ge semiconductor detectors as a part of their function. These routines provide the ability to calibrate energy, calibrate peakwidth, search for peaks, search for regions, and fit the spectral data in a given region to locate gamma rays.
Gamma-ray spectral analysis algorithm library
Energy Science and Technology Software Center (OSTI)
2013-05-06
The routines of the Gauss Algorithms library are used to implement special purpose products that need to analyze gamma-ray spectra from Ge semiconductor detectors as a part of their function. These routines provide the ability to calibrate energy, calibrate peakwidth, search for peaks, search for regions, and fit the spectral data in a given region to locate gamma rays.
PDES. FIPS Standard Data Encryption Algorithm
Nessett, D.N.
1992-03-03
PDES performs the National Bureau of Standards FIPS Pub. 46 data encryption/decryption algorithm used for the cryptographic protection of computer data. The DES algorithm is designed to encipher and decipher blocks of data consisting of 64 bits under control of a 64-bit key. The key is generated in such a way that each of the 56 bits used directly by the algorithm is random, and the remaining 8 error-detecting bits are set to make the parity of each 8-bit byte of the key odd, i.e., there is an odd number of 1 bits in each 8-bit byte. Each member of a group of authorized users of encrypted computer data must have the key that was used to encipher the data in order to use it. Data can be recovered from cipher only by using exactly the same key used to encipher it, but with the schedule of addressing the key bits altered so that the deciphering process is the reverse of the enciphering process. A block of data to be enciphered is subjected to an initial permutation, then to a complex key-dependent computation, and finally to a permutation which is the inverse of the initial permutation. Two PDES routines are included; both perform the same calculation. One, identified as FDES.MAR, is designed to achieve speed in execution, while the other, identified as PDES.MAR, presents a clearer view of how the algorithm is executed.
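The odd-parity convention for DES key bytes described above can be shown in a few lines. The helper name is ours, not part of PDES; only the FIPS Pub. 46 parity rule itself is taken from the abstract:

```python
def set_odd_parity(key_bytes):
    """Force each 8-bit byte of a DES key to odd parity.

    The low-order bit of each byte is the error-detecting bit; it is set
    so that the byte contains an odd number of 1 bits, as FIPS Pub. 46
    requires. The 7 high-order key bits are left untouched.
    """
    out = bytearray()
    for b in key_bytes:
        ones = bin(b >> 1).count("1")  # count the 7 key bits
        # choose the parity bit so the total number of 1 bits is odd
        out.append((b & 0xFE) | (0 if ones % 2 == 1 else 1))
    return bytes(out)

key = set_odd_parity(bytes(range(8)))
assert all(bin(b).count("1") % 2 == 1 for b in key)  # every byte odd parity
```

A byte whose key bits are all zero, for example, gets its parity bit set: `set_odd_parity(b"\x00")` yields `b"\x01"`.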
Control algorithms for autonomous robot navigation
Jorgensen, C.C.
1985-09-20
This paper examines control algorithm requirements for autonomous robot navigation outside laboratory environments. Three aspects of navigation are considered: navigation control in explored terrain, environment interactions with robot sensors, and navigation control in unanticipated situations. Major navigation methods are presented and relevance of traditional human learning theory is discussed. A new navigation technique linking graph theory and incidental learning is introduced.
High-gradient compact linear accelerator
Carder, B.M.
1998-05-26
A high-gradient linear accelerator comprises a solid-state stack in a vacuum of five sets of disc-shaped Blumlein modules each having a center hole through which particles are sequentially accelerated. Each Blumlein module is a sandwich of two outer conductive plates that bracket an inner conductive plate positioned between two dielectric plates with different thicknesses and dielectric constants. A third dielectric core in the shape of a hollow cylinder forms a casing down the series of center holes, and it has a dielectric constant different from that of the two dielectric plates that sandwich the inner conductive plate. In operation, all the inner conductive plates are charged to the same DC potential relative to the outer conductive plates. Next, all the inner conductive plates are simultaneously shorted to the outer conductive plates at the outer diameters. The signal short will propagate to the inner diameters at two different rates in each Blumlein module. The faster wave reaches the third dielectric core first, crossing the dielectric plates with the closer spacing and lower dielectric constant. When the faster wave reaches the inner extents of the outer and inner conductive plates, it reflects back outward and reverses the field in that segment of the dielectric core. All the field segments in the dielectric core are then in unipolar agreement until the slower wave finally propagates to the third dielectric core across the dielectric plates with the wider spacing and higher dielectric constant. During such unipolar agreement, particles in the core are accelerated with gradients that exceed twenty megavolts per meter. 10 figs.
High-gradient compact linear accelerator
Carder, Bruce M.
1998-01-01
A high-gradient linear accelerator comprises a solid-state stack in a vacuum of five sets of disc-shaped Blumlein modules each having a center hole through which particles are sequentially accelerated. Each Blumlein module is a sandwich of two outer conductive plates that bracket an inner conductive plate positioned between two dielectric plates with different thicknesses and dielectric constants. A third dielectric core in the shape of a hollow cylinder forms a casing down the series of center holes, and it has a dielectric constant different from that of the two dielectric plates that sandwich the inner conductive plate. In operation, all the inner conductive plates are charged to the same DC potential relative to the outer conductive plates. Next, all the inner conductive plates are simultaneously shorted to the outer conductive plates at the outer diameters. The signal short will propagate to the inner diameters at two different rates in each Blumlein module. The faster wave reaches the third dielectric core first, crossing the dielectric plates with the closer spacing and lower dielectric constant. When the faster wave reaches the inner extents of the outer and inner conductive plates, it reflects back outward and reverses the field in that segment of the dielectric core. All the field segments in the dielectric core are then in unipolar agreement until the slower wave finally propagates to the third dielectric core across the dielectric plates with the wider spacing and higher dielectric constant. During such unipolar agreement, particles in the core are accelerated with gradients that exceed twenty megavolts per meter.
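The two propagation rates in each Blumlein module follow from the wave speed in a dielectric, v = c/sqrt(eps_r). A minimal sketch of the resulting transit-time ordering; the radius and permittivity values are illustrative, not figures from the patent:

```python
import math

C = 299792458.0  # speed of light in vacuum, m/s

# The shorting wave crosses a dielectric plate radially at v = c/sqrt(eps_r),
# so the plate with the lower relative permittivity delivers its wave to the
# central dielectric core first.
def transit_time(radial_extent_m, eps_r):
    """Time for the shorting wave to cross a plate of given radial extent."""
    return radial_extent_m * math.sqrt(eps_r) / C

fast = transit_time(0.10, 2.2)  # closer-spaced plate, lower dielectric constant
slow = transit_time(0.10, 9.0)  # wider-spaced plate, higher dielectric constant
assert fast < slow  # fast wave arrives, reflects, then the slow wave follows
```

The acceleration window described in the patent is the interval between these two arrivals, during which all core field segments point the same way.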
THE LEVENBERG-MARQUARDT ALGORITHM: IMPLEMENTATION AND THEORY
Office of Scientific and Technical Information (OSTI)
... Since F(x+p) is usually a nonlinear function of p, we linearize F(x+p) and obtain the linear least squares problem min_p ||F(x) + F'(x)p||. Of course, this linearization is not valid ...
Berkolaiko, G.; Kuipers, J.
2013-12-15
Electronic transport through chaotic quantum dots exhibits universal behaviour which can be understood through the semiclassical approximation. Within the approximation, calculation of transport moments reduces to codifying classical correlations between scattering trajectories. These can be represented as ribbon graphs and we develop an algorithmic combinatorial method to generate all such graphs with a given genus. This provides an expansion of the linear transport moments for systems both with and without time reversal symmetry. The computational implementation is then able to progress several orders further than previous semiclassical formulae as well as those derived from an asymptotic expansion of random matrix results. The patterns observed also suggest a general form for the higher orders.
U.S. Department of Energy (DOE) all webpages (Extended Search)
Develops Diagnostic Test Cases To Improve Building Energy Simulation Programs The National Renewable Energy Laboratory (NREL) Residential and Commercial Buildings research groups developed a set of diagnostic test cases for building energy simulations. Eight test cases were developed to test surface conduction heat transfer algorithms of building envelopes in building energy simulation programs. These algorithms are used to predict energy flow through external opaque surfaces such as walls,
Hart, W.E.; Istrail, S. [Sandia National Labs., Albuquerque, NM (United States). Algorithms and Discrete Mathematics Dept.
1996-08-09
This paper considers the protein structure prediction problem for lattice and off-lattice protein folding models that explicitly represent side chains. Lattice models of proteins have proven to be extremely useful tools for reasoning, through analogy, about protein folding in unrestricted continuous space. This paper provides the first illustration of how rigorous algorithmic analyses of lattice models can lead to rigorous algorithmic analyses of off-lattice models. The authors consider two side chain models: a lattice model that generalizes the HP model (Dill 85) to explicitly represent side chains on the cubic lattice, and a new off-lattice model, the HP Tangent Spheres Side Chain model (HP-TSSC), that generalizes this model further by representing the backbone and side chains of proteins with tangent spheres. They describe algorithms for both of these models with mathematically guaranteed error bounds. In particular, the authors describe a linear-time approximation algorithm for the HP side chain model with a performance guarantee: it constructs conformations whose energy is better than 86% of optimal in a face-centered cubic lattice, and they demonstrate how this provides a 70% performance guarantee for the HP-TSSC model. This is the first algorithm in the literature for off-lattice protein structure prediction that has a rigorous performance guarantee. The analysis of the HP-TSSC model builds on the work of Dancik and Hannenhalli, who have developed a 16/30 approximation algorithm for the HP model on the hexagonal close packed lattice. Further, the analysis provides a mathematical methodology for transferring performance guarantees on lattices to off-lattice models. These results partially answer the open question of Karplus et al. concerning the complexity of protein folding models that include side chains.
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
QP001, Revision 0, effective October 15, 2001. QUALITY PROGRAM, prepared by Electric Transportation Applications. Prepared by: Jude M. Clark. Approved by: Donald B. Karner. Procedure ETA-QP001, Revision 0. 2001 Electric Transportation Applications, All Rights Reserved. Table of contents: 1.0 Objectives; 2.0 Scope; 3.0 Documentation; 4.0 Prerequisites; 5.0 Exclusions; 6.0 Quality
Atencio, Julian J.
2014-05-01
This presentation covers how to go about developing a human reliability program. In particular, it touches on conceptual thinking, raising awareness in an organization, the actions that go into developing a plan. It emphasizes evaluating all positions, eliminating positions from the pool due to mitigating factors, and keeping the process transparent. It lists components of the process and objectives in process development. It also touches on the role of leadership and the necessity for audit.
Semi-Implicit Reversible Algorithms for Rigid Body Rotational Dynamics
Nukala, Phani K; Shelton Jr, William Allison
2006-09-01
This paper presents two semi-implicit algorithms based on a splitting methodology for rigid body rotational dynamics. The first algorithm is a variation of the partitioned Runge-Kutta (PRK) methodology that can be formulated as a splitting method. The second algorithm is akin to a multiple time stepping scheme and is based on a modified Crouch-Grossman (MCG) methodology, which can also be expressed as a splitting algorithm. These algorithms are second-order accurate and time-reversible; however, they are not Poisson integrators, i.e., they are non-symplectic. They conserve some of the first integrals of motion but not others; the fluctuations in the unconserved invariants, however, remain bounded over exponentially long time intervals. These algorithms exhibit excellent long-term behavior because of their reversibility and their (approximate) Poisson structure preserving property. The numerical results indicate that the proposed algorithms exhibit superior performance compared to some currently well known algorithms such as the Simo-Wong algorithm, the Newmark algorithm, the discrete Moser-Veselov algorithm, the Lewis-Simo algorithm, and the LIEMID[EA] algorithm.
Maryland Efficiency Program Options
Office of Energy Efficiency and Renewable Energy (EERE)
Maryland Efficiency Program Options, from the Tool Kit Framework: Small Town University Energy Program (STEP).
STEP Program Benchmark Report, from the Tool Kit Framework: Small Town University Energy Program (STEP).
Program Evaluation: Program Logic | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Program Logic Program Evaluation: Program Logic Step four will help you develop a logical model for your program (learn more about the other steps in general program evaluations): What is a Logic Model? Benefits of Using Logic Modeling Pitfalls and How to Avoid Them Steps to Developing a Logic Model What is a Logic Model? Logic modeling is a thought process program evaluators have found to be useful for at least forty years and has become increasingly popular with program managers during the
A garbage collection algorithm for shared memory parallel processors
Crammond, J.
1988-12-01
This paper describes a technique for adapting the Morris sliding garbage collection algorithm to execute on parallel machines with shared memory. The algorithm is described within the framework of an implementation of the parallel logic language Parlog. However, the algorithm is a general one and can easily be adapted to parallel Prolog systems and to other languages. The performance of the algorithm executing a few simple Parlog benchmarks is analyzed. Finally, it is shown how the technique for parallelizing the sequential algorithm can be adapted for a semi-space copying algorithm.
Electronic Non-Contacting Linear Position Measuring System
Post, Richard F.
2005-06-14
A non-contacting linear position location system employs a special transmission line to encode and transmit magnetic signals to a receiver on the object whose position is to be measured. The invention is useful as a non-contact linear locator of moving objects, e.g., to determine the location of a magnetic-levitation train for the operation of the linear-synchronous motor drive system.
The Linear Engine Pathway of Transformation | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
The Linear Engine Pathway of Transformation The Linear Engine Pathway of Transformation This poster highlights the major milestones in the history of the linear engine in terms of technological advances, novel designs, and economic/social impact. p-06_covington.pdf (214.04 KB) More Documents & Publications Difficulty of Measuring Emissions from Heavy-Duty Engines Equipped with SCR and DPF Development of a Stand-Alone Urea-SCR System for NOx Reduction in Marine Diesel Engines Modeling the
Knot Undulator to Generate Linearly Polarized Photons with Low...
Office of Scientific and Technical Information (OSTI)
Heat load on beamline optics is a serious problem to generate pure linearly polarized ... Language: English Subject: 43 PARTICLE ACCELERATORS; OPTICS; PERMANENT MAGNETS; PHOTONS; ...
Vibronic coupling simulations for linear and nonlinear optical...
Office of Scientific and Technical Information (OSTI)
optical processes: Theory Citation Details In-Document Search Title: Vibronic coupling simulations for linear and nonlinear optical processes: Theory A comprehensive vibronic ...
DOE - Office of Legacy Management -- Stanford Linear Accelerator...
Office of Legacy Management (LM)
The Stanford Linear Accelerator Center was established in 1962 as a research facility for high energy particle physics. The Environmental Management mission at this site is to ...
Optimizing minimum free-energy crossing points in solution: Linear...
Office of Scientific and Technical Information (OSTI)
Optimizing minimum free-energy crossing points in solution: Linear-response free energy/spin-flip density functional theory approach Citation Details In-Document Search Title:...
Linear electric field time-of-flight ion mass spectrometer
Funsten, Herbert O.; Feldman, William C.
2008-06-10
A linear electric field ion mass spectrometer having an evacuated enclosure with means for generating a linear electric field located in the evacuated enclosure and means for injecting a sample material into the linear electric field. A source of pulsed ionizing radiation injects ionizing radiation into the linear electric field to ionize atoms or molecules of the sample material, and timing means determine the time elapsed between ionization of atoms or molecules and arrival of an ion out of the ionized atoms or molecules at a predetermined position.
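In a linear electric field E(z) = kz the ion's equation of motion is harmonic, so the flight time from ionization to the detection position is amplitude-independent and scales as sqrt(m/q), which is what makes the timing measurement above a mass measurement. A minimal sketch under that assumption; the field-gradient constant is illustrative, not from the patent:

```python
import math

def tof(m_over_q, k=1.0e6):
    """Half-period of harmonic ion motion in a linear electric field.

    With restoring force F = -q*k*z the angular frequency is
    omega = sqrt(q*k/m), so t = pi/omega = pi * sqrt((m/q)/k),
    independent of where in the field the ion was born.
    """
    return math.pi * math.sqrt(m_over_q / k)

# doubling the mass-to-charge ratio lengthens the flight time by sqrt(2)
assert math.isclose(tof(2.0) / tof(1.0), math.sqrt(2))
```

The amplitude independence is the practical point: ions created anywhere in the ionization volume with the same m/q arrive together, so the elapsed time identifies the species.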
Top Quark Anomalous Couplings at the International Linear Collider...
Office of Scientific and Technical Information (OSTI)
to a precision of approximately 1% for each of two choices of beam polarization. ... INTERMEDIATE BOSONS; LINEAR COLLIDERS; POLARIZATION; PROBES; QUARKS; SILICON; SIMULATION; ...
The Klynac: An Integrated Klystron and Linear Accelerator
Potter, J. M., Schwellenbach, D., Meidinger, A.
2012-08-07
The Klynac concept integrates an electron gun, a radio frequency (RF) power source, and a coupled-cavity linear accelerator into a single resonant system.
Knot Undulator to Generate Linearly Polarized Photons with Low...
Office of Scientific and Technical Information (OSTI)
pure linearly polarized photons in the third generation synchrotron radiation facilities. ... Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 43 ...
A posteriori error analysis of parameterized linear systems using...
Office of Scientific and Technical Information (OSTI)
Journal Article: A posteriori error analysis of parameterized linear systems using spectral methods. Citation Details In-Document Search Title: A posteriori error analysis of ...
Self-Sustained Micromechanical Oscillator with Linear Feedback...
Office of Scientific and Technical Information (OSTI)
Publisher's Accepted Manuscript: Self-Sustained Micromechanical Oscillator with Linear Feedback This content will become publicly available on July 1, 2017 Prev Next Title: ...
Entropy-based separation of linear chain molecules by exploiting...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Entropy-based separation of linear chain molecules by exploiting differences in the saturation capacities in cage-type zeolites Previous Next List Rajamani Krishna, Jasper M. van...
Simultaneous linear optics and coupling correction for storage...
Office of Scientific and Technical Information (OSTI)
Journal Article: Simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data Citation Details In-Document Search Title:...
Unexpected Angular Dependence of X-Ray Magnetic Linear Dichroism
U.S. Department of Energy (DOE) all webpages (Extended Search)
Unexpected Angular Dependence of X-Ray Magnetic Linear Dichroism Print Using spectroscopic ... The effect is unique in that it allows us to distinguish which atomic species magnetism ...
A Linear Theory of Microwave Instability in Electron Storage...
Office of Scientific and Technical Information (OSTI)
Title: A Linear Theory of Microwave Instability in Electron Storage Rings The well-known Haissinski distribution provides a stable equilibrium of longitudinal beam distribution in ...
Linearly Polarized Thermal Emitter for More Efficient Thermophotovolta...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Linearly Polarized Thermal Emitter for More Efficient Thermophotovoltaic Devices Ames ... that can be used to create more efficient thermophotovoltaic devices for power generation. ...
Intergovernmental Programs | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Intergovernmental Programs The Office of Environmental Management supports, by means of grants and cooperative agreements, a number of
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
April-June 2014 issue of the U.S. Department of Energy (DOE) Office of Legacy Management (LM) Program Update. This publication is designed to provide a status of activities within LM. Please direct all comments and inquiries to lm@hq.doe.gov. April-June 2014 Visit us at http://energy.gov/lm/ Goal 4 Optimizing the Use of Federal Lands Through Disposition The foundation of the U.S. Department of Energy (DOE) Office of Legacy Management's (LM) Goal 4, "Optimize the use of land and
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
January-March 2015 issue of the U.S. Department of Energy (DOE) Office of Legacy Management (LM) Program Update. This publication is designed to provide a status of activities within LM. Please direct all comments and inquiries to lm@hq.doe.gov. January-March 2015 Visit us at http://energy.gov/lm/ Goal 4 Successful Transition from Mound Site to Mound Business Park Continues The Mound Business Park attracts a variety of businesses to the former U.S. Department of Energy (DOE) Mound, Ohio, Site in Miamisburg. In
Fast computation algorithms for speckle pattern simulation
Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru
2013-11-13
We present our development of a series of efficient computation algorithms, generally usable to calculate light diffraction and particularly for speckle pattern simulation. We use mainly the scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They are able to evaluate the diffraction formula much faster than by direct computation, and we have circumvented the restrictions regarding the relative sizes of the input and output domains encountered in commonly used procedures. Moreover, the input and output planes can be tilted with respect to each other, and the output domain can be shifted off-axis.
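The convolution-theorem approach described above can be sketched in its simplest form: Fresnel propagation as one FFT, a multiply by the free-space transfer function, and one inverse FFT. This is the standard textbook scheme, not the paper's extended algorithms; grid size, wavelength, and distance are illustrative:

```python
import numpy as np

# Fresnel propagation by the convolution theorem.
N, dx = 256, 10e-6           # samples per side and sample pitch (m)
wl, z = 633e-9, 0.05         # wavelength (m) and propagation distance (m)

f = np.fft.fftfreq(N, dx)    # spatial frequencies
FX, FY = np.meshgrid(f, f)
# Fresnel transfer function of free space (unit modulus)
H = np.exp(-1j * np.pi * wl * z * (FX**2 + FY**2))

u0 = np.zeros((N, N), complex)
u0[N//2-8:N//2+8, N//2-8:N//2+8] = 1.0    # square aperture as the input field
u1 = np.fft.ifft2(np.fft.fft2(u0) * H)    # field propagated a distance z

# |H| = 1, so the propagated field conserves energy (Parseval)
assert np.isclose(np.sum(np.abs(u0)**2), np.sum(np.abs(u1)**2))
```

The cost is two FFTs per propagation instead of an O(N^4) direct summation of the diffraction integral; the paper's contribution is lifting this scheme's usual constraints on the relative sizes and placement of the input and output windows.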
Resource-Efficient Generation of Linear Cluster States by Linear Optics with postselection
Uskov, Dmitry B; Alsing, Paul; Fanto, Michael; Kaplan, Lev; Kim, R; Szep, Atilla; Smith IV, Amos M
2015-01-01
We report on theoretical research in photonic cluster-state computing. Finding optimal schemes of generating non-classical photonic states is of critical importance for this field as physically implementable photon-photon entangling operations are currently limited to measurement-assisted stochastic transformations. A critical parameter for assessing the efficiency of such transformations is the success probability of a desired measurement outcome. At present there are several experimental groups that are capable of generating multi-photon cluster states carrying more than eight qubits. Separate photonic qubits or small clusters can be fused into a single cluster state by a probabilistic optical CZ gate conditioned on simultaneous detection of all photons with 1/9 success probability for each gate. This design mechanically follows the original theoretical scheme of cluster state generation proposed more than a decade ago by Raussendorf, Browne, and Briegel. The optimality of the destructive CZ gate in application to linear optical cluster state generation has not been analyzed previously. Our results reveal that this method is far from the optimal one. Employing numerical optimization we have identified that the maximal success probability of fusing n unentangled dual-rail optical qubits into a linear cluster state is equal to 1/2^(n-1); an m-tuple of photonic Bell pair states, commonly generated via spontaneous parametric down-conversion, can be fused into a single cluster with the maximal success probability of 1/4^(m-1).
Resource-efficient generation of linear cluster states by linear optics with postselection
Uskov, D. B.; Alsing, P. M.; Fanto, M. L.; Kaplan, L.; Kim, R.; Szep, A.; Smith, A. M.
2015-01-30
Here we report on theoretical research in photonic cluster-state computing. Finding optimal schemes of generating non-classical photonic states is of critical importance for this field as physically implementable photon-photon entangling operations are currently limited to measurement-assisted stochastic transformations. A critical parameter for assessing the efficiency of such transformations is the success probability of a desired measurement outcome. At present there are several experimental groups that are capable of generating multi-photon cluster states carrying more than eight qubits. Separate photonic qubits or small clusters can be fused into a single cluster state by a probabilistic optical CZ gate conditioned on simultaneous detection of all photons with 1/9 success probability for each gate. This design mechanically follows the original theoretical scheme of cluster state generation proposed more than a decade ago by Raussendorf, Browne, and Briegel. The optimality of the destructive CZ gate in application to linear optical cluster state generation has not been analyzed previously. Our results reveal that this method is far from the optimal one. Employing numerical optimization we have identified that the maximal success probability of fusing n unentangled dual-rail optical qubits into a linear cluster state is equal to 1/2^(n-1); an m-tuple of photonic Bell pair states, commonly generated via spontaneous parametric down-conversion, can be fused into a single cluster with the maximal success probability of 1/4^(m-1).
Algorithmic Techniques for Massive Data Sets
Moses Charikar
2006-04-03
This report describes the progress made during the Early Career Principal Investigator (ECPI) project on Algorithmic Techniques for Large Data Sets. Research was carried out in the areas of dimension reduction, clustering and finding structure in data, aggregating information from different sources and designing efficient methods for similarity search for high dimensional data. A total of nine different research results were obtained and published in leading conferences and journals.
Automated Algorithm for MFRSR Data Analysis
U.S. Department of Energy (DOE)
M. D. Alexandrov and B. Cairns (Columbia University and NASA Goddard Institute for Space Studies, New York, New York); A. A. Lacis and B. E. Carlson (NASA Goddard Institute for Space Studies, New York, New York); A. Marshak (NASA Goddard Space Flight Center, Greenbelt, Maryland). We present a substantial upgrade of our previously developed
Solar receiver heliostat reflector having a linear drive and position information system
Horton, Richard H.
1980-01-01
A heliostat for a solar receiver system comprises an improved drive and control system for the heliostat reflector assembly. The heliostat reflector assembly is controllably driven in a predetermined way by a light-weight drive system so as to be angularly adjustable in both elevation and azimuth to track the sun and efficiently continuously reflect the sun's rays to a focal zone, i.e., heat receiver, which forms part of a solar energy utilization system, such as a solar energy fueled electrical power generation system. The improved drive system includes linear stepping motors which comprise low weight, low cost, electronic pulse driven components. One embodiment comprises linear stepping motors controlled by a programmed, electronic microprocessor. Another embodiment comprises a tape driven system controlled by a position control magnetic tape.
Weglein, Arthur B.; Stolt, Bob H.
2012-03-01
Extracting information from seismic data requires knowledge of seismic wave propagation and reflection. The commonly used method involves solving linearly for a reflectivity at every point within the Earth, but this book follows an alternative approach which invokes inverse scattering theory. By developing the theory of seismic imaging from basic principles, the authors relate the different models of seismic propagation, reflection and imaging - thus providing links to reflectivity-based imaging on the one hand and to nonlinear seismic inversion on the other. The comprehensive and physically complete linear imaging foundation developed presents new results at the leading edge of seismic processing for target location and identification. This book serves as a fundamental guide to seismic imaging principles and algorithms and their foundation in inverse scattering theory and is a valuable resource for working geoscientists, scientific programmers and theoretical physicists.
Calculation of excitation energies from the CC2 linear response theory using Cholesky decomposition
Baudin, Pablo (qLEAP Center for Theoretical Chemistry, Department of Chemistry, Aarhus University, Langelandsgade 140, DK-8000 Aarhus C); Marín, José Sánchez; Cuesta, Inmaculada García; Sánchez de Merás, Alfredo M. J.
2014-03-14
A new implementation of the approximate coupled cluster singles and doubles (CC2) linear response model is reported. It employs a Cholesky decomposition of the two-electron integrals that significantly reduces the computational cost and the storage requirements of the method compared to standard implementations. Our algorithm also exploits a partitioned form of the CC2 equations which reduces the dimension of the problem and avoids the storage of doubles amplitudes. We present calculations of excitation energies of benzene using a hierarchy of basis sets and compare the results with conventional CC2 calculations. The reduction of the scaling is evaluated, as is the effect of the Cholesky decomposition parameter on the quality of the results. The new algorithm is used to perform a complete-basis-set extrapolation investigation on the spectroscopically interesting benzylallene conformers. A set of calculations on medium-sized molecules is carried out to check the dependence of the accuracy of the results on the decomposition thresholds. Moreover, CC2 singlet excitation energies of the free base porphin are also presented.
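The storage reduction described above rests on the fact that the two-electron integral matrix is symmetric positive semidefinite and numerically low rank, so a pivoted Cholesky factorization truncated at a threshold captures it with few vectors. The following sketch (illustrative names; not the paper's production code) shows the idea on a generic PSD matrix:

```python
import numpy as np

def pivoted_cholesky(M, tol=1e-8):
    """Incomplete (pivoted) Cholesky factorization M ~= L @ L.T for a
    symmetric positive semidefinite matrix, stopping when the largest
    remaining diagonal element falls below tol.  Illustrative of how a
    few Cholesky vectors capture a numerically low-rank matrix."""
    M = M.astype(float)
    n = M.shape[0]
    d = np.diag(M).copy()       # residual diagonal
    L, piv = [], []
    while d.max() > tol:
        p = int(np.argmax(d))   # pivot on largest residual diagonal
        piv.append(p)
        v = (M[:, p] - sum(u * u[p] for u in L)) / np.sqrt(d[p])
        L.append(v)
        d -= v * v              # downdate the residual diagonal
        d[d < 0] = 0.0          # guard against round-off
    return np.column_stack(L) if L else np.zeros((n, 0))
```

For a rank-r matrix the loop terminates after r vectors, which is exactly the compression that makes the integral lists tractable.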
Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems
Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj; Haglin, David J.
2012-07-03
We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these.
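The sequential Cyclic CD baseline that the parallel variants above generalize can be sketched for the lasso objective; the update for each coordinate is a one-dimensional soft-thresholding step. This is a minimal illustrative sketch (names and defaults are ours), not the Shotgun or Thread-Greedy implementation:

```python
import numpy as np

def soft_threshold(x, t):
    """Proximal operator of t*|.|."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def cyclic_cd_lasso(A, b, lam, n_iter=200):
    """Sequential cyclic coordinate descent for
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    n = A.shape[1]
    x = np.zeros(n)
    col_sq = (A * A).sum(axis=0)       # per-coordinate curvature
    r = b - A @ x                      # running residual
    for _ in range(n_iter):
        for j in range(n):
            if col_sq[j] == 0:
                continue
            r += A[:, j] * x[j]        # remove coordinate j's contribution
            x[j] = soft_threshold(A[:, j] @ r, lam) / col_sq[j]
            r -= A[:, j] * x[j]        # add the updated contribution back
    return x
```

The parallel variants in the paper differ mainly in which coordinates are updated concurrently and how update conflicts are managed.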
Geothermal Technologies Program Overview - Peer Review Program
Milliken, JoAnn
2011-06-06
This Geothermal Technologies Program presentation was delivered on June 6, 2011 at a Program Peer Review meeting. It contains annual budget, Recovery Act, funding opportunities, upcoming program activities, and more.
Evaluating cloud retrieval algorithms with the ARM BBHRP framework
Mlawer, E.; Dunn, M.; Shippert, T.; Troyan, D.; Johnson, K. L.; Miller, M. A.; Delamere, J.; Turner, D. D.; Jensen, M. P.; Flynn, C.; Shupe, M.; Comstock, J.; Long, C. N.; Clough, S. T.; Sivaraman, C.; Khaiyer, M.; Xie, S.; Rutan, D.; Minnis, P.
2008-03-10
Climate and weather prediction models require accurate calculations of vertical profiles of radiative heating. Although heating rate calculations cannot be directly validated due to the lack of corresponding observations, surface and top-of-atmosphere measurements can indirectly establish the quality of computed heating rates through validation of the calculated irradiances at the atmospheric boundaries. The ARM Broadband Heating Rate Profile (BBHRP) project, a collaboration of all the working groups in the program, was designed with these heating rate validations as a key objective. Given the large dependence of radiative heating rates on cloud properties, a critical component of BBHRP radiative closure analyses has been the evaluation of cloud microphysical retrieval algorithms. This evaluation is an important step in establishing the necessary confidence in the continuous profiles of computed radiative heating rates produced by BBHRP at the ARM Climate Research Facility (ACRF) sites that are needed for modeling studies. This poster details the continued effort to evaluate cloud property retrieval algorithms within the BBHRP framework, a key focus of the project this year. A requirement for the computation of accurate heating rate profiles is a robust cloud microphysical product that captures the occurrence, height, and phase of clouds above each ACRF site. Various approaches to retrieve the microphysical properties of liquid, ice, and mixed-phase clouds have been processed in BBHRP for the ACRF Southern Great Plains (SGP) and the North Slope of Alaska (NSA) sites. These retrieval methods span a range of assumptions concerning the parameterization of cloud location, particle density, size, shape, and involve different measurement sources. We will present the radiative closure results from several different retrieval approaches for the SGP site, including those from Microbase, the current 'reference' retrieval approach in BBHRP. At the NSA, mixed-phase clouds and
Variable-energy drift-tube linear accelerator
Swenson, Donald A.; Boyd, Jr., Thomas J.; Potter, James M.; Stovall, James E.
1984-01-01
A linear accelerator system includes a plurality of post-coupled drift-tubes wherein each post coupler is bistably positionable to either of two positions which result in different field distributions. With binary control over a plurality of post couplers, a significant cumulative effect in the resulting field distribution is achieved, yielding a variable-energy drift-tube linear accelerator.
Drift tube suspension for high intensity linear accelerators
Liska, Donald J.; Schamaun, Roger G.; Clark, Donald C.; Potter, R. Christopher; Frank, Joseph A.
1982-01-01
The disclosure relates to a drift tube suspension for high intensity linear accelerators. The system comprises a series of box-section girders independently adjustably mounted on a linear accelerator. A plurality of drift tube holding stems are individually adjustably mounted on each girder.
Drift tube suspension for high intensity linear accelerators
Liska, D.J.; Schamaun, R.G.; Clark, D.C.; Potter, R.C.; Frank, J.A.
1980-03-11
The disclosure relates to a drift tube suspension for high intensity linear accelerators. The system comprises a series of box-section girders independently adjustably mounted on a linear accelerator. A plurality of drift tube holding stems are individually adjustably mounted on each girder.
Differentially pumped dual linear quadrupole ion trap mass spectrometer
Owen, Benjamin C.; Kenttamaa, Hilkka I.
2015-10-20
The present disclosure provides a new tandem mass spectrometer and methods of using the same for analyzing charged particles. The differentially pumped dual linear quadrupole ion trap mass spectrometer of the present disclosure includes a combination of two linear quadrupole ion trap (LQIT) mass spectrometers with differentially pumped vacuum chambers.
Linear Concentrator System Basics for Concentrating Solar Power
Linear concentrating solar power (CSP) collectors capture the sun's energy with large mirrors that reflect and focus the sunlight onto a linear receiver tube. The receiver contains a fluid that is heated by the sunlight and then used to drive a traditional power cycle, in which a turbine spins a generator to produce electricity.
Variable-energy drift-tube linear accelerator
Swenson, D.A.; Boyd, T.J. Jr.; Potter, J.M.; Stovall, J.E.
A linear accelerator system includes a plurality of post-coupled drift-tubes wherein each post coupler is bistably positionable to either of two positions which result in different field distributions. With binary control over a plurality of post couplers, a significant cumulative effect in the resulting field distribution is achieved, yielding a variable-energy drift-tube linear accelerator.
Cable Damage Detection System and Algorithms Using Time Domain Reflectometry
Clark, G A; Robbins, C L; Wade, K A; Souza, P R
2009-03-24
This report describes the hardware system and the set of algorithms we have developed for detecting damage in cables for the Advanced Development and Process Technologies (ADAPT) Program. This program is part of the W80 Life Extension Program (LEP). The system could be generalized for application to other systems in the future. Critical cables can undergo various types of damage (e.g. short circuits, open circuits, punctures, compression) that manifest as changes in the dielectric/impedance properties of the cables. For our specific problem, only one end of the cable is accessible, and no exemplars of actual damage are available. This work addresses the detection of dielectric/impedance anomalies in transient time domain reflectometry (TDR) measurements on the cables. The approach is to interrogate the cable using TDR techniques, in which a known pulse is inserted into the cable and reflections from the cable are measured. The key operating principle is that any important cable damage will manifest itself as an electrical impedance discontinuity that can be measured in the TDR response signal. Machine learning classification algorithms are effectively eliminated from consideration because only a small number of cables is available for testing, so a sufficient sample size is not attainable. Nonetheless, a key requirement is to achieve very high probability of detection and very low probability of false alarm. The approach is to compare TDR signals from possibly damaged cables to signals or an empirical model derived from reference cables that are known to be undamaged. This requires that the TDR signals are reasonably repeatable from test to test on the same cable, and from cable to cable. Empirical studies show that the repeatability issue is the 'long pole in the tent' for damage detection, because it has been difficult to achieve reasonable repeatability. This one factor dominated the project. The two-step model-based approach is
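The reference-comparison principle described above can be sketched very simply: subtract the undamaged-cable reference trace from the measured trace and gate the residual against a noise-scaled threshold. This is a hedged illustration of residual-based anomaly gating (names and the 5-sigma gate are our choices), not the report's actual detector:

```python
import numpy as np

def tdr_anomaly_score(trace, reference, noise_sigma, k=5.0):
    """Compare a measured TDR trace against a reference (undamaged)
    trace.  Returns per-sample z-scores of the residual and a boolean
    detection mask where the residual exceeds a k-sigma gate.
    Illustrative sketch; thresholds here are assumptions."""
    residual = trace - reference
    z = np.abs(residual) / noise_sigma
    return z, z > k
```

An impedance discontinuity appears as a localized excursion in the residual; the z-score localizes it along the cable (via the round-trip time of the reflected pulse).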
Nonlinear vs. linear biasing in Trp-cage folding simulations
Spiwok, Vojtěch; Oborský, Pavel; Králová, Blanka; Pazúriková, Jana
2015-03-21
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low-dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
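The linear embedding compared above, principal component analysis, amounts to projecting centered configurations onto the leading singular vectors of the data matrix. A minimal sketch (illustrative names; not the simulation code, and without the Isomap counterpart):

```python
import numpy as np

def pca_project(X, n_components=2):
    """Project samples X (n_samples x n_features) onto their leading
    principal components via SVD of the centered data matrix."""
    Xc = X - X.mean(axis=0)                       # center the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:n_components].T               # low-dimensional coordinates
```

In metadynamics these projected coordinates would serve as the collective variables along which the bias potential is deposited.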
New algorithms for the symmetric tridiagonal eigenvalue computation
Pan, V.
1994-12-31
The author presents new algorithms that accelerate the bisection method for the symmetric eigenvalue problem. The algorithms rely on some new techniques, which include acceleration of Newton's iteration and can also be further applied to acceleration of some other iterative processes, in particular, of iterative algorithms for approximating polynomial zeros.
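The baseline bisection method that the paper accelerates counts, via a Sturm sequence, how many eigenvalues of a symmetric tridiagonal matrix lie below a trial value, and bisects on that count. A minimal sketch (our names; Gershgorin discs give the initial bracket):

```python
import numpy as np

def sturm_count(d, e, x):
    """Number of eigenvalues below x for the symmetric tridiagonal
    matrix with diagonal d (length n) and off-diagonal e (length n-1),
    counted as sign changes of the Sturm sequence."""
    count, q = 0, 1.0
    for i in range(len(d)):
        q = d[i] - x - (e[i - 1] ** 2 / q if i > 0 else 0.0)
        if q == 0.0:
            q = 1e-300          # avoid division by zero at exact hits
        if q < 0:
            count += 1
    return count

def bisect_eigenvalue(d, e, k, tol=1e-10):
    """Approximate the k-th smallest eigenvalue (k = 1, 2, ...) by
    bisection on the Sturm count."""
    e_ext = np.concatenate(([0.0], np.abs(e), [0.0]))
    lo = min(d[i] - e_ext[i] - e_ext[i + 1] for i in range(len(d)))
    hi = max(d[i] + e_ext[i] + e_ext[i + 1] for i in range(len(d)))
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if sturm_count(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Each bisection step halves the interval, so the cost is O(n log(1/tol)) per eigenvalue; the paper's contribution is replacing some of these linearly convergent steps with faster iterations.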
Flexible Language Constructs for Large Parallel Programs
Rosing, Matt; Schnabel, Robert
1994-01-01
The goal of the research described in this article is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. In this article, we give an overview of the language and discuss some of the critical implementation details.
Linear-scaling implementation of the direct random-phase approximation
Kállay, Mihály
2015-05-28
We report the linear-scaling implementation of the direct random-phase approximation (dRPA) for closed-shell molecular systems. As a bonus, linear-scaling algorithms are also presented for the second-order screened exchange extension of dRPA as well as for the second-order Møller-Plesset (MP2) method and its spin-scaled variants. Our approach is based on an incremental scheme which is an extension of our previous local correlation method [Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The approach extensively uses local natural orbitals to reduce the size of the molecular orbital basis of local correlation domains. In addition, we also demonstrate that using natural auxiliary functions [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], the size of the auxiliary basis of the domains and thus that of the three-center Coulomb integral lists can be reduced by an order of magnitude, which results in significant savings in computation time. The new approach is validated by extensive test calculations for energies and energy differences. Our benchmark calculations also demonstrate that the new method enables dRPA calculations for molecules with more than 1000 atoms and 10,000 basis functions on a single processor.
Expanded studies of linear collider final focus systems at the Final Focus Test Beam
Tenenbaum, P.G.
1995-12-01
In order to meet their luminosity goals, linear colliders operating in the center-of-mass energy range from 350 to 1,500 GeV will need to deliver beams which are as small as a few nanometers tall, with x:y aspect ratios as large as 100. The Final Focus Test Beam (FFTB) is a prototype for the final focus demanded by these colliders: its purpose is to provide demagnifications equivalent to those in a future linear collider, which corresponds to a focused spot size in the FFTB of 1.7 microns (horizontal) by 60 nanometers (vertical). In order to achieve the desired spot sizes, the FFTB beam optics must be tuned to eliminate aberrations and other errors, and to ensure that the optics conform to the desired final conditions and the measured initial conditions of the beam. Using a combination of incoming-beam diagnostics, beam-based local diagnostics, and global tuning algorithms, the FFTB beam size has been reduced to a stable final size of 1.7 microns by 70 nanometers. In addition, the chromatic properties of the FFTB have been studied using two techniques and found to be acceptable. Descriptions of the hardware and techniques used in these studies are presented, along with results and suggestions for future research.
Machinist Pipeline/Apprentice Program Program Description
U.S. Department of Energy (DOE)
cost effective than previous time-based programs Moves apprentices to journeyworker status more quickly Program Coordinator: Heidi Hahn Email: hahn@lanl.gov Phone number:...
EECBG Financing Program Annual ...
Office of Energy Efficiency and Renewable Energy (EERE)
Additional cost share required to administer the program Process Metrics-Underlying ... administering the Program and carrying out underlying activities supported by the Program. ...
Existing Facilities Rebate Program
The NYSERDA Existing Facilities program merges the former Peak Load Reduction and Enhanced Commercial and Industrial Performance programs. The new program offers a broad array of different...
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithms development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project begun with an investigation on how to practically update a preconditioner obtained from an ILU-type factorization, when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from Multigrid with algebraic multilevel methods. 9. We have released a new version on our parallel solver - called pARMS [new version is version 3]. As part of this we have tested the code in complex settings - including the solution of Maxwell and Helmholtz equations and for a problem of crystal growth.10. As an application of polynomial preconditioning we considered the
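The preconditioned Krylov iteration at the heart of the solvers surveyed above can be sketched with the simplest useful case: conjugate gradients with a Jacobi (diagonal) preconditioner for an SPD system. This is a minimal stand-in under stated assumptions; the project's ILU, multilevel, and GPU machinery (e.g. pARMS) is not reproduced here:

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, max_iter=500):
    """Preconditioned conjugate gradients for SPD A x = b, with the
    preconditioner given as the inverse diagonal (Jacobi).  Sketch of
    the generic preconditioned Krylov loop, not a robust solver."""
    x = np.zeros_like(b)
    r = b - A @ x                 # residual
    z = M_inv_diag * r            # preconditioned residual
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x
```

Swapping `M_inv_diag * r` for an ILU triangular solve gives the stronger preconditioners studied in the project; the surrounding loop is unchanged.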
INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization
Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P
2012-10-01
It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
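The width-1 special case of the tree-decomposition DP above is maximum weighted independent set on a tree, which shows the recurrence in its simplest form: each node keeps two table entries, the best total with the node taken and with it skipped. A small illustrative sketch (data layout and names are ours):

```python
def tree_mwis(children, weight, root=0):
    """Maximum weighted independent set on a rooted tree via dynamic
    programming.  children maps a node to its child list; weight maps
    a node to its (non-negative) weight."""
    in_w, out_w = {}, {}   # best totals with node taken / skipped

    def solve(v):
        in_w[v] = weight[v]
        out_w[v] = 0
        for c in children.get(v, []):
            solve(c)
            in_w[v] += out_w[c]                  # v taken: children must be out
            out_w[v] += max(in_w[c], out_w[c])   # v skipped: free choice per child
        return max(in_w[v], out_w[v])

    return solve(root)
```

On a general tree decomposition the two-entry table becomes one entry per independent subset of a bag, which is the exponential-in-width cost the paper's memory-saving techniques target.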
Evaluation of machine learning algorithms for prediction of regions of high RANS uncertainty
Ling, Julia; Templeton, Jeremy Alan
2015-08-04
Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
Better Buildings Neighborhood Program Business Models Guide: Program Administrator Business Models, Program Administrator Description.
Krumel, L.J.
1996-12-31
The Atmospheric Radiation Measurement (ARM) Program is a multi-laboratory, interagency program that serves as DOE's principal entry into the US Global Change Research Program. Two issues addressed are the radiation budget and its spectral dependence, and the radiative and other properties of clouds. Measurements of solar flux divergence and energy exchanges between clouds, the earth, its oceans, and the atmosphere through various altitudes are sought. Additionally, the program seeks to provide measurements to calibrate satellite radiance products and validate their associated flux retrieval algorithms. Unmanned Aerospace Vehicles fly long, extended missions, and MPIR is one of the primary instruments on the ARM-UAV campaigns. A shutter mechanism has been developed and flown as part of an airborne imaging radiometer, with application to spacecraft or other systems requiring low vibration, high reliability, and long life. The device could be employed in other cases where a reciprocating platform is needed. Typical shutters and choppers use a spinning disc or, in very small instruments, a vibrating vane to continually interrupt incident light or radiation entering the system. A spinning disc requires bearings, which usually have limited life and at a minimum introduce reliability issues; friction, lubrication, and contamination remain critical areas of concern, as does the need for power to operate. Dual vibrating vanes can be dynamically balanced as a set and are frictionless, but they are limited in size in a practical sense, and multiples of these devices are difficult to synchronize.
A Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-06-24
In this paper, we propose a new unified differential evolution (uDE) algorithm for single-objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
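For context, the conventional baseline that the unified algorithm replaces looks like the standard DE/rand/1/bin scheme sketched below on a simple sphere function; this is a generic textbook sketch, not the paper's single unified mutation equation:

```python
import numpy as np

def de(f, bounds, pop_size=20, F=0.8, CR=0.9, gens=200, seed=1):
    """Conventional DE/rand/1/bin minimizer over box-bounded variables."""
    rng = np.random.default_rng(seed)
    dim = len(bounds)
    lo, hi = np.array(bounds, dtype=float).T
    pop = rng.uniform(lo, hi, size=(pop_size, dim))
    fit = np.array([f(x) for x in pop])
    for _ in range(gens):
        for i in range(pop_size):
            a, b, c = pop[rng.choice([j for j in range(pop_size) if j != i],
                                     size=3, replace=False)]
            mutant = np.clip(a + F * (b - c), lo, hi)   # DE/rand/1 mutation
            cross = rng.random(dim) < CR                # binomial crossover mask
            cross[rng.integers(dim)] = True             # keep at least one mutant gene
            trial = np.where(cross, mutant, pop[i])
            ft = f(trial)
            if ft <= fit[i]:                            # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = fit.argmin()
    return pop[best], fit[best]

sphere = lambda x: float(np.sum(x * x))
x, fx = de(sphere, [(-5, 5)] * 3)
print(round(fx, 6))
```

The uDE idea is to fold the choice among such mutation variants into one parameterized expression rather than hard-coding one scheme.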
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E.; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi-Dirac distribution function, and scattering is via a Pauli-blocked binary collision approximation. The algorithm is tested against degenerate electron-ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first-order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
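The initialisation step described in the abstract can be illustrated by rejection-sampling particle energies from a Fermi-Dirac distribution. The chemical potential, temperature, energy cutoff, and units below are arbitrary choices for the sketch, not values from the paper:

```python
import math, random

def fermi_dirac_sample(mu, T, e_max, rng):
    """Rejection-sample an energy E with density ~ sqrt(E)/(exp((E-mu)/T)+1)."""
    def dens(e):
        return math.sqrt(e) / (math.exp((e - mu) / T) + 1.0)
    # crude envelope: uniform in [0, e_max], scaled by the density's grid maximum
    peak = max(dens(e_max * k / 1000.0) for k in range(1, 1001))
    while True:
        e = rng.uniform(0.0, e_max)
        if rng.uniform(0.0, peak) < dens(e):
            return e

rng = random.Random(42)
samples = [fermi_dirac_sample(mu=1.0, T=0.1, e_max=3.0, rng=rng) for _ in range(2000)]
below = sum(1 for e in samples if e < 1.0) / len(samples)
print(f"fraction below mu: {below:.2f}")  # strongly degenerate (T << mu): most states below mu are filled
```

A production code would sample momenta rather than energies and combine this with Pauli-blocked collisions, but the acceptance test above captures the Fermi-Dirac initialisation idea.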
Algorithms for Contact in a Multiphysics Environment
Energy Science and Technology Software Center (OSTI)
2001-12-19
Many codes require either a contact capability or a need to determine geometric proximity of non-connected topological entities (which is a subset of what contact requires). ACME is a library that provides services to determine contact forces and/or geometric proximity interactions. This includes generic capabilities such as determining points in Cartesian volumes, finding faces in Cartesian volumes, etc. ACME can be run in single- or multi-processor mode (the basic algorithms have been tested up to 4500 processors).
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
... seeks to evaluate a home's fixed characteristics, while holding occupant-determined ... algorithms & data: Sherman Air-leakage database, FSEC, RECS, Building America, NREL, ...
Human Reliability Program Overview
Bodin, Michael
2012-09-25
This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.
Vehicle Technologies Program Overview
none,
2006-09-05
Overview of the Vehicle Technologies Program including external assessment and market view; internal assessment, program history and progress; program justification and federal role; program vision, mission, approach, strategic goals, outputs, and outcomes; and performance goals.
Non-linear stochastic growth rates and redshift space distortions
Jennings, Elise; Jennings, David
2015-04-09
The linear growth rate is commonly defined through a simple deterministic relation between the velocity divergence and the matter overdensity in the linear regime. We introduce a formalism that extends this to a non-linear, stochastic relation between θ = ∇ ∙ v(x,t)/aH and δ. This provides a new phenomenological approach that examines the conditional mean <θ|δ>, together with the fluctuations of θ around this mean. We also measure these stochastic components using N-body simulations and find they are non-negative and increase with decreasing scale from ~10 per cent at k < 0.2 h Mpc⁻¹ to 25 per cent at k ~ 0.45 h Mpc⁻¹ at z = 0. Both the stochastic relation and non-linearity are more pronounced for haloes, M ≤ 5 × 10¹² M⊙ h⁻¹, compared to the dark matter at z = 0 and 1. Non-linear growth effects manifest themselves as a rotation of the mean <θ|δ> away from the linear theory prediction -fLTδ, where fLT is the linear growth rate. This rotation increases with wavenumber, k, and we show that it can be well described by second-order Lagrangian perturbation theory (2LPT) for k < 0.1 h Mpc⁻¹. The stochasticity in the θ – δ relation, however, is not so simply described by 2LPT, and we discuss its impact on measurements of fLT from two-point statistics in redshift space. Given that the relationship between δ and θ is stochastic and non-linear, this has implications for the interpretation and precision of fLT extracted using models which assume a linear, deterministic expression.
Easy and hard testbeds for real-time search algorithms
Koenig, S.; Simmons, R.G.
1996-12-31
Although researchers have studied which factors influence the behavior of traditional search algorithms, currently not much is known about how domain properties influence the performance of real-time search algorithms. In this paper we demonstrate, both theoretically and experimentally, that Eulerian state spaces (a superset of undirected state spaces) are very easy for some existing real-time search algorithms to solve: even real-time search algorithms that can be intractable, in general, are efficient for Eulerian state spaces. Because traditional real-time search testbeds (such as the eight puzzle and gridworlds) are Eulerian, they cannot be used to distinguish between efficient and inefficient real-time search algorithms. It follows that one has to use non-Eulerian domains to demonstrate the general superiority of a given algorithm. To this end, we present two classes of hard-to-search state spaces and demonstrate the performance of various real-time search algorithms on them.
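A minimal example of the class of algorithms under test: LRTA* with a zero initial heuristic on a tiny undirected graph (undirected state spaces being the familiar special case of the Eulerian spaces discussed above). The graph, unit edge costs, and goal are toy choices:

```python
def lrta_star(graph, start, goal, max_steps=100):
    """Run LRTA* with zero initial heuristic; return the path of states visited."""
    h = {v: 0 for v in graph}           # learned heuristic values
    path, s = [start], start
    for _ in range(max_steps):
        if s == goal:
            return path
        # one-step lookahead: cost 1 per edge plus the learned heuristic
        succ = min(graph[s], key=lambda v: 1 + h[v])
        h[s] = max(h[s], 1 + h[succ])   # update rule: h(s) <- max(h(s), min successor cost)
        s = succ
        path.append(s)
    return path

# a small undirected graph: chain 0-1-2-3 with a dead-end detour 1-4
graph = {0: [1], 1: [0, 2, 4], 2: [1, 3], 3: [2], 4: [1]}
path = lrta_star(graph, start=0, goal=3)
print(path)
```

On Eulerian/undirected spaces such agents cannot get badly trapped, which is the paper's point about why these testbeds are "easy".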
An Adaptive Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-11-03
In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
Utility Partnerships Program Overview
2014-10-03
Document describes the Utility Partnerships Program within the U.S. Department of Energy's Federal Energy Management Program.
STEM Education Program Inventory
U.S. Department of Energy (DOE) all webpages (Extended Search)
Web form for submitting an entry to the STEM Education Program Inventory. Fields include the title of the program, requestor contact information (name, phone number, e-mail, fax number), institution name, program description, leading organization, location of program/event, program address, program website, and type of program (workforce development, student programs, public engagement in lifelong learning).
This document is the HQ Mediation Program's brochure. It generally discusses the services the program offers.
Residential Buildings Integration Program
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
... Program Existing Homes HUD The residential program is grounded on technology and research. ... * Quantitative (reporting) * Qualitative (account management, peer exchange ...
Structure/Function Studies of Proteins Using Linear Scaling Quantum Mechanical Methodologies
Merz, K. M.
2004-07-19
We developed a linear-scaling semiempirical quantum mechanical (QM) program (DivCon). Using DivCon we can now routinely carry out calculations at the fully QM level on systems containing up to about 15 thousand atoms. We also implemented a Poisson-Boltzmann (PM) method into DivCon in order to compute solvation free energies and electrostatic properties of macromolecules in solution. This new suite of programs has allowed us to bring the power of quantum mechanics to bear on important biological problems associated with protein folding, drug design and enzyme catalysis. Hence, we have garnered insights into biological systems that have been heretofore impossible to obtain using classical simulation techniques.
Explicit 2-D Hydrodynamic FEM Program
Energy Science and Technology Software Center (OSTI)
1996-08-07
DYNA2D* is a vectorized, explicit, two-dimensional, axisymmetric and plane strain finite element program for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. DYNA2D* contains 13 material models and 9 equations of state (EOS) to cover a wide range of material behavior. The material models implemented in all machine versions are: elastic, orthotropic elastic, kinematic/isotropic elastic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, rubber, high explosive burn, isotropic elastic-plastic, temperature-dependent elastic-plastic. The isotropic and temperature-dependent elastic-plastic models determine only the deviatoric stresses. Pressure is determined by one of 9 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, and tabulated.
Instrument design and optimization using genetic algorithms
Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter
2006-10-15
This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.
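A minimal canonical GA in the spirit of the abstract, with a toy bit-count figure of merit standing in for an instrument performance metric; the encoding, operators, and parameters are illustrative assumptions, not the WASP design study's actual setup:

```python
import random

def evolve(bits=20, pop_size=30, gens=60, cx=0.9, mut=0.02, seed=3):
    """Canonical GA: tournament selection, one-point crossover, bit-flip mutation."""
    rng = random.Random(seed)
    fom = lambda ind: sum(ind)                 # toy figure of merit: count of 1-bits
    pop = [[rng.randint(0, 1) for _ in range(bits)] for _ in range(pop_size)]
    for _ in range(gens):
        def pick():                            # binary tournament selection
            a, b = rng.sample(pop, 2)
            return list(a if fom(a) >= fom(b) else b)
        nxt = []
        while len(nxt) < pop_size:
            p1, p2 = pick(), pick()
            if rng.random() < cx:              # one-point crossover
                c = rng.randrange(1, bits)
                p1, p2 = p1[:c] + p2[c:], p2[:c] + p1[c:]
            for p in (p1, p2):                 # per-bit mutation
                nxt.append([b ^ 1 if rng.random() < mut else b for b in p])
        pop = nxt[:pop_size]
    return max(pop, key=fom)

best = evolve()
print(sum(best))
```

In the instrument-design setting, the bitstring would encode design parameters (e.g. coil geometry) and the figure of merit would come from a physics simulation of the spectrometer.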
Machinist Pipeline/Apprentice Program Program Description
U.S. Department of Energy (DOE) all webpages (Extended Search)
The Machinist Pipeline Program was created by the Prototype Fabrication Division to fill a critical need for skilled journeyworker machinists. It is based on a program developed by the National Institute for Metalworking Skills (NIMS) in conjunction with metalworking trade associations to develop and maintain a globally competitive U.S. workforce. The goal is to develop and implement apprenticeship programs that are aligned with
Scalable System Software for Parallel Programming | Argonne Leadership Computing Facility
U.S. Department of Energy (DOE) all webpages (Extended Search)
Streamlines from an early time step of the Rayleigh-Taylor instability depend on scalable storage, communication, and data analysis algorithms developed at extreme scale using INCITE resources. (Image: Tom Peterka.) PI Name: Robert Latham; PI Email: robl@mcs.anl.gov; Institution: Argonne National Laboratory; Allocation Program: INCITE; Allocation Hours at ALCF: 20 Million; Year: 2013; Research Domain: Computer Science. The purpose of this
Scalable System Software for Parallel Programming | Argonne Leadership Computing Facility
U.S. Department of Energy (DOE) all webpages (Extended Search)
(Image caption: ...depend on scalable storage, communication, and data analysis algorithms developed at extreme scale using INCITE resources. Tom Peterka, Argonne National Laboratory.) PI Name: Robert Latham; PI Email: robl@mcs.anl.gov; Institution: Argonne National Laboratory; Allocation Program: INCITE; Allocation Hours at ALCF: 25 Million; Year: 2014; Research Domain: Computer Science. As hardware complexity in Leadership Class Facility systems
Scalable System Software for Parallel Programming | Argonne Leadership Computing Facility
U.S. Department of Energy (DOE) all webpages (Extended Search)
Streamlines from an early time step of the Rayleigh-Taylor instability depend on scalable storage, communication, and data analysis algorithms developed at extreme scale using INCITE resources. (Credit: Tom Peterka, Argonne National Laboratory.) PI Name: Robert Latham; PI Email: robl@mcs.anl.gov; Institution: Argonne National Laboratory; Allocation Program: INCITE; Allocation Hours at ALCF: 25 Million; Year: 2015; Research Domain:
Effective Yukawa couplings and flavor-changing Higgs boson decays at linear colliders
Gabrielli, E.; Mele, B.
2011-04-01
We analyze the advantages of a linear-collider program for testing a recent theoretical proposal where the Higgs boson Yukawa couplings are radiatively generated, keeping unchanged the standard-model mechanism for electroweak-gauge-symmetry breaking. Fermion masses arise at a large energy scale through an unknown mechanism, and the standard model at the electroweak scale is regarded as an effective field theory. In this scenario, Higgs boson decays into photons and electroweak gauge-boson pairs are considerably enhanced for a light Higgs boson, which makes a signal observation at the LHC straightforward. On the other hand, the clean environment of a linear collider is required to directly probe the radiative fermionic sector of the Higgs boson couplings. Also, we show that the flavor-changing Higgs boson decays are dramatically enhanced with respect to the standard model. In particular, we find a measurable branching ratio in the range 10⁻⁴–10⁻³ for the decay H → bs for a Higgs boson lighter than 140 GeV, depending on the high-energy scale where the Yukawa couplings vanish. We present a detailed analysis of the Higgs boson production cross sections at linear colliders for interesting decay signatures, as well as branching-ratio correlations for different flavor-conserving/nonconserving fermionic decays.
Linear beam-beam tune shift calculations for the Tevatron Collider
Johnson, D.
1989-01-12
A realistic estimate of the linear beam-beam tune shift is necessary for the selection of an optimum working point in the tune diagram. Estimates of the beam-beam tune shift using the "Round Beam Approximation" (RBA) have overestimated the tune shift for the Tevatron. For a hadron machine with unequal lattice functions and beam sizes, an explicit calculation using the beam size at the crossings is required. Calculations for various Tevatron lattices used in Collider operation are presented, along with comparisons between the RBA and the explicit calculation for elliptical beams. This paper discusses the calculation of the linear tune shift using the program SYNCH and the selection of a working point. The magnitude of the tune shift is influenced by the choice of crossing points in the lattice as determined by the pbar "cogging" effects. Current cogging procedures are also discussed, and results of calculations for tune shifts at various crossing points in the lattice are presented. Finally, a comparison of early pbar tune measurements with the present linear tune shift calculations is given. 17 refs., 13 figs., 3 tabs.
Linear Scaling Electronic Structure Methods with Periodic Boundary Conditions
Gustavo E. Scuseria
2008-02-08
This project carried out the methodological development and computational implementation of linear-scaling quantum chemistry methods for the accurate calculation of the electronic structure and properties of periodic systems (solids, surfaces, and polymers), and their application to chemical problems of DOE relevance.
Spin relaxation and linear-in-electric-field frequency shift...
Office of Scientific and Technical Information (OSTI)
Spin relaxation and linear-in-electric-field frequency shift in an arbitrary, time-independent magnetic field.
Linear Scaling of the Exciton Binding Energy versus the Band...
Office of Scientific and Technical Information (OSTI)
Linear Scaling of the Exciton Binding Energy versus the Band Gap of Two-Dimensional Materials. This content will become publicly available on August 6, 2016.
Linear and cubic response to the initial eccentricity in heavy...
Office of Scientific and Technical Information (OSTI)
Linear and cubic response to the initial eccentricity in heavy-ion collisions. This content will become publicly available on January 21, 2017.
Method and apparatus of highly linear optical modulation
DeRose, Christopher; Watts, Michael R.
2016-05-03
In a new optical intensity modulator, a nonlinear change in refractive index is used to balance the nonlinearities in the optical transfer function in a way that leads to highly linear optical intensity modulation.
Physics Case for the International Linear Collider (Technical...
Office of Scientific and Technical Information (OSTI)
We summarize the physics case for the International Linear Collider (ILC). We review the ... in accord with the expected schedule of operation of the accelerator and the results of ...
Fourth order resonance of a high intensity linear accelerator...
Office of Scientific and Technical Information (OSTI)
For a high intensity beam, the 4ν1 resonance of a linear accelerator is manifested through the octupolar term of the space charge potential when the depressed phase advance σ ...
Linear Concentrator System Basics for Concentrating Solar Power...
may be integrated with existing or new combined-cycle natural-gas- and coal-fired plants. ... Illustration of a linear concentrator power plant using parabolic trough collectors. ...
Soewono, C. N.; Takaki, N.
2012-07-01
In this work, a genetic algorithm was proposed to solve the fuel loading pattern optimization problem in a thorium-fueled heavy water reactor. The objectives of the optimization were to maximize the conversion ratio and minimize the power peaking factor. These objectives were simultaneously optimized using a non-dominated Pareto-based population ranking method. Members of the non-dominated population were assigned selection probabilities based on their rankings, in a manner similar to Baker's single-criterion ranking selection procedure. A selected non-dominated member was bred through simple mutation or one-point crossover to produce a new member. The genetic algorithm program was developed in FORTRAN 90, while the neutronic calculation and analysis were done by the COREBN code, a module for core burn-up calculation in SRAC. (authors)
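The Pareto-based ranking step can be sketched for the two stated objectives, maximizing conversion ratio (CR) and minimizing power peaking factor (PPF); the candidate values below are made up for illustration:

```python
def dominates(a, b):
    """a dominates b if a's CR >= b's and PPF <= b's, strictly better in at least one."""
    cr_a, ppf_a = a
    cr_b, ppf_b = b
    return (cr_a >= cr_b and ppf_a <= ppf_b) and (cr_a > cr_b or ppf_a < ppf_b)

def non_dominated_front(pop):
    """Return the members not dominated by any other member (the rank-1 front)."""
    return [p for p in pop if not any(dominates(q, p) for q in pop if q is not p)]

# (conversion ratio, power peaking factor) for hypothetical loading patterns
pop = [(0.92, 1.45), (0.95, 1.60), (0.90, 1.30), (0.93, 1.55), (0.91, 1.50)]
front = non_dominated_front(pop)
print(sorted(front))
```

Selection probabilities would then be assigned by rank (rank-1 front first), so that trade-off solutions along the CR/PPF frontier are preferred over dominated ones such as (0.91, 1.50) above.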
Optimized Swinging Door Algorithm for Wind Power Ramp Event Detection: Preprint
Cui, Mingjian; Zhang, Jie; Florita, Anthony R.; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang
2015-08-06
Significant wind power ramp events (WPREs) are those that influence the integration of wind power, and they are a concern to the continued reliable operation of the power grid. As wind power penetration has increased in recent years, so has the importance of wind power ramps. In this paper, an optimized swinging door algorithm (SDA) is developed to improve ramp detection performance. Wind power time series data are segmented by the original SDA, and then all significant ramps are detected and merged through a dynamic programming algorithm. An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas (ERCOT) are used to evaluate the proposed optimized SDA.
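The original swinging door segmentation that the paper optimizes can be sketched as follows, assuming a fixed tolerance eps; ramps are then read off as the slopes of the retained segments. The dynamic-programming merge step of the optimized SDA is not shown:

```python
def swinging_door(y, eps):
    """Greedy piecewise-linear segmentation; return indices of segment endpoints."""
    ends = [0]
    i = 0                       # anchor (current segment start)
    up, lo = float("inf"), float("-inf")
    for j in range(1, len(y)):
        dt = j - i
        up = min(up, (y[j] + eps - y[i]) / dt)   # upper "door" slope
        lo = max(lo, (y[j] - eps - y[i]) / dt)   # lower "door" slope
        if lo > up:             # doors crossed: close segment at the previous point
            ends.append(j - 1)
            i = j - 1
            up = y[j] + eps - y[i]               # re-open doors (dt == 1 for new anchor)
            lo = y[j] - eps - y[i]
    ends.append(len(y) - 1)
    return ends

# ramp up, plateau, ramp down (synthetic "wind power" signal)
y = [0, 1, 2, 3, 3, 3, 3, 2, 1, 0]
print(swinging_door(y, eps=0.1))
```

On this toy series the compressor recovers the up-ramp, the flat plateau, and the down-ramp as three segments; tuning eps is exactly the parameter-selection problem the paper's optimization addresses.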
International Linear Collider Technical Design Report - Volume 2: Physics
Office of Scientific and Technical Information (OSTI)
International Linear Collider Technical Design Report - Volume 2: Physics. Authors: Baer, Howard; Barklow, Tim; Fujii, Keisuke; Gao, Yuanning; Hoang, Andre; Kanemura, Shinya; List, Jenny; Logan, Heather E.; Nomerotski, Andrei; Perelstein, Maxim; Peskin, Michael E.; Poschl, Roman; Reuter, Jurgen; Riemann, Sabine; Savoy-Navarro,
Unexpected Angular Dependence of X-Ray Magnetic Linear Dichroism
U.S. Department of Energy (DOE) all webpages (Extended Search)
Using spectroscopic information for magnetometry and magnetic microscopy obviously requires detailed theoretical understanding of the spectral shape and magnitude of dichroism signals. A research team at ALS Beamline 4.0.2 has now shown unambiguously that, contrary to common belief, the spectral shape and magnitude of x-ray magnetic linear dichroism (XMLD) are not only determined by the relative orientation of magnetic moments and
JLab Supports International Linear Collider Cavity Development Work
U.S. Department of Energy (DOE) all webpages (Extended Search)
NEWPORT NEWS, Va., Feb. 12, 2008 - It's not often that major-league baseball and nuclear physics get to share the limelight, but that's what's happening at the Department of Energy's Jefferson Lab. The baseball connection involves a nine-cell niobium cavity developed by KEK accelerator scientists in Japan as one of several designs being tested for
Free piston variable-stroke linear-alternator generator
Haaland, Carsten M.
1998-01-01
A free-piston variable stroke linear-alternator AC power generator for a combustion engine. An alternator mechanism and oscillator system generates AC current. The oscillation system includes two oscillation devices each having a combustion cylinder and a flying turnbuckle. The flying turnbuckle moves in accordance with the oscillation device. The alternator system is a linear alternator coupled between the two oscillation devices by a slotted connecting rod.
Producing Linear Alpha Olefins From Biomass - Energy Innovation Portal
U.S. Department of Energy (DOE) all webpages (Extended Search)
Great Lakes Bioenergy Research Center. Linear alpha olefins (LAOs) are valuable commodity chemicals traditionally derived from petroleum. They are versatile building blocks for making a range of chemical products like polyethylene, synthetic oils, plasticizers, detergents and oilfield fluids. Relying on fossil fuel to manufacture LAOs is problematic. Not only are the standard methods
Direct Probes of Linearly Polarized Gluons inside Unpolarized Hadrons
Boer, Daniel; Brodsky, Stanley J.; Mulders, Piet J.; Pisano, Cristian; /Cagliari U. /INFN, Cagliari
2011-02-07
We show that the unmeasured distribution of linearly polarized gluons inside unpolarized hadrons can be directly probed in jet or heavy quark pair production both in electron-hadron and hadron-hadron collisions. We present expressions for the simplest cos 2{phi} asymmetries and estimate their maximal value in the particular case of electron-hadron collisions. Measurements of the linearly polarized gluon distribution in the proton should be feasible in future EIC or LHeC experiments.
Accessing the Distribution of Linearly Polarized Gluons in Unpolarized Hadrons
Boer, Daniel; Brodsky, Stanley J.; Mulders, Piet J.; Pisano, Cristian; /Cagliari U. /INFN, Cagliari
2011-08-19
Gluons inside unpolarized hadrons can be linearly polarized provided they have a nonzero transverse momentum. The simplest and theoretically safest way to probe this distribution of linearly polarized gluons is through cos2{phi} asymmetries in heavy quark pair or dijet production in electron-hadron collisions. Future Electron-Ion Collider (EIC) or Large Hadron electron Collider (LHeC) experiments are ideally suited for this purpose. Here we estimate the maximum asymmetries for EIC kinematics.
Free piston variable-stroke linear-alternator generator
Haaland, C.M.
1998-12-15
A free-piston variable stroke linear-alternator AC power generator for a combustion engine is described. An alternator mechanism and oscillator system generates AC current. The oscillation system includes two oscillation devices each having a combustion cylinder and a flying turnbuckle. The flying turnbuckle moves in accordance with the oscillation device. The alternator system is a linear alternator coupled between the two oscillation devices by a slotted connecting rod. 8 figs.
Proceedings of the Oak Ridge Electron Linear Accelerator (ORELA) Workshop
Dunn, M.E.
2006-02-27
The Oak Ridge National Laboratory (ORNL) organized a workshop at ORNL July 14-15, 2005, to highlight the unique measurement capabilities of the Oak Ridge Electron Linear Accelerator (ORELA) facility and to emphasize the important role of ORELA for performing differential cross-section measurements in the low-energy resonance region that is important for nuclear applications such as nuclear criticality safety, nuclear reactor and fuel cycle analysis, stockpile stewardship, weapons research, medical diagnosis, and nuclear astrophysics. The ORELA workshop (hereafter referred to as the Workshop) provided the opportunity to exchange ideas and information pertaining to nuclear cross-section measurements and their importance for nuclear applications from a variety of perspectives throughout the U.S. Department of Energy (DOE). Approximately 50 people, representing DOE, universities, and seven U.S. national laboratories, attended the Workshop. The objective of the Workshop was to emphasize the technical community endorsement for ORELA in meeting nuclear data challenges in the years to come. The Workshop further emphasized the need for a better understanding of the gaps in basic differential nuclear measurements and identified the efforts needed to return ORELA to a reliable functional measurement facility. To accomplish the Workshop objective, nuclear data experts from national laboratories and universities were invited to provide talks emphasizing the unique and vital role of the ORELA facility for addressing nuclear data needs. ORELA is operated on a full cost-recovery basis with no single sponsor providing complete base funding for the facility. Consequently, different programmatic sponsors benefit by receiving accurate cross-section data measurements at a reduced cost to their respective programs; however, leveraging support for a complex facility such as ORELA has a distinct disadvantage in that the programmatic funds are only used to support program
Some problems in sequencing and scheduling utilizing branch and bound algorithms
Gim, B.
1988-01-01
This dissertation deals with branch-and-bound algorithms applied to two problems: the two-machine flow-shop problem with sparse precedence constraints, and the optimal sequencing and scheduling of multiple feedstocks in a batch-type digester. The first problem is to find a schedule which minimizes the maximum flow time subject to the requirement that the schedule does not violate a set of sparse precedence constraints. This research provides a branch-and-bound algorithm which employs a lower bounding rule based on an adjustment of the sequence obtained by applying Johnson's algorithm. It is demonstrated that this lower bounding procedure, in conjunction with Kurisu's branching rule, is effective for the sparse precedence constraints case. Biomass-to-methane production systems have the potential of supplying 25% of the national gas demand. The optimal operation of a batch digester system requires the sequencing and scheduling of all batches from multiple feedstocks during a fixed time horizon. A significant characteristic of these systems is that the feedstock decays in storage before use in the digester system. The operational problem is to determine the time to allocate to each batch of several feedstocks and then sequence the individual batches so as to maximize biogas production for a single batch-type digester over a fixed planning horizon. This research provides a branch-and-bound algorithm for sequencing and a two-step hierarchical dynamic programming procedure for time allocation scheduling. An efficient heuristic algorithm is developed for large problems and demonstrated to yield excellent results.
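Johnson's algorithm, the starting point for the lower bounding rule described above, sequences two-machine flow-shop jobs optimally when there are no precedence constraints. A minimal, hypothetical Python sketch (an illustration only, not the dissertation's code):

```python
def johnsons_sequence(jobs):
    """jobs: list of (p1, p2) processing times on machines 1 and 2.
    Returns a makespan-minimizing order of job indices (Johnson's rule)."""
    front = sorted((i for i, (p1, p2) in enumerate(jobs) if p1 < p2),
                   key=lambda i: jobs[i][0])               # short machine-1 jobs first
    back = sorted((i for i, (p1, p2) in enumerate(jobs) if p1 >= p2),
                  key=lambda i: jobs[i][1], reverse=True)  # short machine-2 jobs last
    return front + back

def makespan(jobs, order):
    """Maximum flow time of the given job order."""
    t1 = t2 = 0
    for i in order:
        t1 += jobs[i][0]                # machine 1 finishes job i
        t2 = max(t2, t1) + jobs[i][1]   # machine 2 starts when both are free
    return t2
```

The branch-and-bound of the dissertation adjusts such a sequence to respect the precedence constraints, using the unconstrained makespan as a lower bound.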
High School Internship Program
U.S. Department of Energy (DOE) all webpages (Extended Search)
High School Internship Program High School Internship Program Point your career towards Los Alamos Lab: work with the best minds on the planet in an inclusive environment that is rich in intellectual vitality and opportunities for growth. Contact Program Manager Scott Robbins Student Programs (505) 667-3639 Email Program Coordinator Brenda Montoya Student Programs (505) 667-4866 Email Opportunities for Northern New Mexico high school seniors The High School Internship Program provides qualified
DOE Technical Assistance Program
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Designing Effective Renewables Programs Cheryl Jenkins Vermont Energy Investment Corporation DOE Technical Assistance Program Team 4 - Program & Project Development & Implementation September 28, 2010 2 | Designing Effective Renewables Programs eere.energy.gov Webinar Overview * Technical Assistance Project (TAP) Overview * The Framework for an Effective Program * Effective Program Design Approaches * Resources * Q&A 3 | Designing Effective Renewables Programs eere.energy.gov What is
Component evaluation testing and analysis algorithms.
Hart, Darren M.; Merchant, Bion John
2011-10-01
The Ground-Based Monitoring R&E Component Evaluation project performs testing on the hardware components that make up Seismic and Infrasound monitoring systems. The majority of the testing is focused on the Digital Waveform Recorder (DWR), Seismic Sensor, and Infrasound Sensor. To guarantee consistency, traceability, and visibility into the results of the testing process, it is necessary to document the test and analysis procedures that are in place. The testing procedures themselves are documented elsewhere (Kromer, 2007). This document provides a comprehensive overview of the analysis and the algorithms that are applied to Component Evaluation testing. A brief summary of each test is included to provide context for the analysis to be performed.
Neurons to algorithms LDRD final report.
Rothganger, Fredrick H.; Aimone, James Bradley; Warrender, Christina E.; Trumbo, Derek
2013-09-01
Over the last three years the Neurons to Algorithms (N2A) LDRD project team has built infrastructure to discover computational structures in the brain. This consists of a modeling language, a tool that enables model development and simulation in that language, and initial connections with the Neuroinformatics community, a group working toward similar goals. The approach of N2A is to express large complex systems like the brain as populations of discrete part types that have specific structural relationships with each other, along with internal and structural dynamics. Such an evolving mathematical system may be able to capture the essence of neural processing, and ultimately of thought itself. This final report is a cover for the actual products of the project: the N2A Language Specification, the N2A Application, and a journal paper summarizing our methods.
Critical dynamics of cluster algorithms in the dilute Ising model
Hennecke, M. (Universitaet Karlsruhe); Heyken, U.
1993-08-01
Autocorrelation times for thermodynamic quantities at T{sub c} are calculated from Monte Carlo simulations of the site-diluted simple cubic Ising model, using the Swendsen-Wang and Wolff cluster algorithms. The results show that for these algorithms the autocorrelation times decrease when reducing the concentration of magnetic sites from 100% down to 40%. This is of crucial importance when estimating static properties of the model, since the variances of these estimators increase with autocorrelation time. The dynamical critical exponents are calculated for both algorithms, observing pronounced finite-size effects in the energy autocorrelation data for the algorithm of Wolff. It is concluded that, when applied to the dilute Ising model, cluster algorithms become even more effective than local algorithms, for which increasing autocorrelation times are expected. 33 refs., 5 figs., 2 tabs.
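A single Wolff cluster update for the site-diluted model can be sketched compactly. The following is a minimal, hypothetical Python illustration (square lattice, J = 1, empty sites encoded as spin 0), not the simulation code used in the study:

```python
import math
import random

def wolff_step(spins, L, beta, rng):
    """One Wolff single-cluster update on an L x L site-diluted Ising lattice.
    spins[i] is +1 or -1 on magnetic sites and 0 on diluted (empty) sites."""
    occupied = [i for i in range(L * L) if spins[i] != 0]
    seed = rng.choice(occupied)
    p_add = 1.0 - math.exp(-2.0 * beta)          # bond-activation probability (J = 1)
    cluster, stack = {seed}, [seed]
    while stack:
        i = stack.pop()
        x, y = i % L, i // L
        neighbors = ((x + 1) % L + y * L, (x - 1) % L + y * L,
                     x + ((y + 1) % L) * L, x + ((y - 1) % L) * L)
        for j in neighbors:
            # empty sites (spin 0) never match a nonzero spin, so dilution
            # is handled automatically by the equal-spin test
            if j not in cluster and spins[j] == spins[i] and rng.random() < p_add:
                cluster.add(j)
                stack.append(j)
    for i in cluster:                            # flip the entire cluster at once
        spins[i] = -spins[i]
    return len(cluster)
```

Autocorrelation times such as those reported above would be measured over long sequences of such updates.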
New Design Methods and Algorithms for Multi-component Distillation
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
New Design Methods and Algorithms for Multi-component Distillation Processes multicomponent.pdf (517.32 KB) More Documents & Publications Development of Method and Algorithms To Identify Easily Implementable Energy-Efficient Low-Cost Multicomponent Distillation Column Trains With Large Energy Savings For Wide Number of Separations CX-100137 Categorical Exclusion Determination ITP
Solar and Moon Position Algorithm (SAMPA) - Energy Innovation Portal
U.S. Department of Energy (DOE) all webpages (Extended Search)
National Renewable Energy Laboratory Technology Marketing Summary This algorithm calculates the solar and lunar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees for the Sun and +/- 0.003 degrees for the Moon, based on the date, time, and location on Earth. Description The algorithm can be
State Energy Program Competitive Financial Assistance Program
State Energy Program (SEP) dedicates a portion of its funding each year to provide competitively awarded financial assistance to U.S. states and territories to advance policies, programs, and market strategies.
Sunaguchi, Naoki; Yuasa, Tetsuya; Gupta, Rajiv; Ando, Masami
2015-12-21
The main focus of this paper is reconstruction of a tomographic phase-contrast image from a set of projections. We propose an efficient reconstruction algorithm for differential phase-contrast computed tomography that can considerably reduce the number of projections required for reconstruction. The key result underlying this research is a projection theorem stating that the second derivative of the projection set is linearly related to the Laplacian of the tomographic image. The proposed algorithm first reconstructs the Laplacian image of the phase-shift distribution from the second derivative of the projections using total variation regularization. The second step is to obtain the phase-shift distribution by solving a Poisson equation whose source is the previously reconstructed Laplacian image, under the Dirichlet condition. We demonstrate the efficacy of this algorithm using both synthetically generated simulation data and projection data acquired experimentally at a synchrotron. The experimental phase data were acquired from a human coronary artery specimen using dark-field-imaging optics pioneered by our group. Our results demonstrate that the proposed algorithm can reduce the number of projections to approximately 33% of that required by the conventional filtered backprojection method, without any detrimental effect on image quality.
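The second step of the algorithm, recovering a phase map from its Laplacian under a Dirichlet boundary condition, amounts to a standard Poisson solve. A minimal, hypothetical Python sketch using Gauss-Seidel iteration on a small grid (the authors' TV-regularized first step is assumed already done; grid spacing h = 1):

```python
def solve_poisson_dirichlet(lap, boundary, n, sweeps=5000):
    """Solve the 5-point discrete Poisson equation on an n x n grid.
    lap[y][x] holds the Laplacian source at interior points; boundary[y][x]
    gives phi on the grid edge (the Dirichlet condition)."""
    phi = [row[:] for row in boundary]           # start from the boundary values
    for _ in range(sweeps):
        for y in range(1, n - 1):
            for x in range(1, n - 1):
                # Gauss-Seidel update: phi = (sum of neighbors - lap) / 4
                phi[y][x] = 0.25 * (phi[y][x + 1] + phi[y][x - 1]
                                    + phi[y + 1][x] + phi[y - 1][x]
                                    - lap[y][x])
    return phi
```

In practice a fast Poisson solver (FFT or multigrid) would replace the plain iteration, but the fixed point is the same.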
Development of an Outdoor Temperature-Based Control Algorithm...
Office of Scientific and Technical Information (OSTI)
Development of an Outdoor Temperature-Based Control Algorithm for Residential Mechanical Ventilation Control Citation Details In-Document Search Title: Development of an Outdoor ...
A modern solver framework to manage solution algorithms in the...
Office of Scientific and Technical Information (OSTI)
A modern solver framework to manage solution algorithms in the Community Earth System Model Citation Details In-Document Search Title: A modern solver framework to manage solution ...
Algorithm for Finding Similar Shapes in Large Molecular Structures Libraries
Energy Science and Technology Software Center (OSTI)
1994-10-19
The SHAPES software consists of methods and algorithms for representing and rapidly comparing molecular shapes. Molecular shapes algorithms are a class of algorithms derived and applied for recognizing when two three-dimensional shapes share common features. They proceed from the notion that the shapes to be compared are regions in three-dimensional space. The algorithms allow recognition of when localized subregions from two or more different shapes could never be superimposed by any rigid-body motion. Rigid-body motions are arbitrary combinations of translations and rotations.
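One cheap necessary condition behind such rigid-body comparisons: translations and rotations preserve all pairwise distances, so two point sets whose sorted distance multisets differ can never be superimposed. A hypothetical Python sketch of this filter (an illustration only, not the SHAPES implementation):

```python
def distance_signature(points, tol=1e-9):
    """Sorted multiset of pairwise distances, quantized to tolerance tol."""
    dists = sorted(((ax - bx) ** 2 + (ay - by) ** 2 + (az - bz) ** 2) ** 0.5
                   for i, (ax, ay, az) in enumerate(points)
                   for (bx, by, bz) in points[i + 1:])
    return [round(d / tol) for d in dists]

def possibly_superimposable(a, b):
    """False means no rigid-body motion can superimpose a onto b;
    True means the cheap distance test cannot rule it out."""
    return len(a) == len(b) and distance_signature(a) == distance_signature(b)
```

Passing this test does not prove superimposability (mirror images also share distances), which is why it serves only as a fast pruning step.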
DEVELOPMENT OF METHOD AND ALGORITHMS TO IDENTIFY EASILY IMPLEMENTABLE...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Purdue researchers will team with an IT company to develop a user-friendly interface for the algorithm and to better address software development and commercialization issues. The ...
A sequential implicit algorithm of chemo-thermo-poro-mechanics...
Office of Scientific and Technical Information (OSTI)
A sequential implicit algorithm of chemo-thermo-poro-mechanics for fractured geothermal reservoirs Authors: Kim, Jihoon ; ...
Numerical Analysis of Fixed Point Algorithms in the Presence...
Office of Scientific and Technical Information (OSTI)
in the Presence of Hardware Faults Citation Details In-Document Search Title: Numerical Analysis of Fixed Point Algorithms in the Presence of Hardware Faults You are ...
Evaluation of Monte Carlo Electron-Transport Algorithms in the...
Office of Scientific and Technical Information (OSTI)
Series Codes for Stochastic-Media Simulations. Citation Details In-Document Search Title: Evaluation of Monte Carlo Electron-Transport Algorithms in the Integrated Tiger Series ...
Use of a Radon Stripping Algorithm for Retrospective Assessment...
Office of Scientific and Technical Information (OSTI)
and beta spectroscopy system employing a passive implanted planar silicon (PIPS) detector. ... MODIFICATIONS; PROGENY; RADON; SILICON air monitoring, radon, algorithm, PIPS, ...
Problems Found Using a Radon Stripping Algorithm for Retrospective...
Office of Scientific and Technical Information (OSTI)
and beta spectroscopy system employing a passive implanted planar silicon (PIPS) detector. ... MODIFICATIONS; PROGENY; RADON; SILICON air monitoring, radon, algorithm, PIPS, ...
Use of a Radon Stripping Algorithm for Retrospective Assessment...
Office of Scientific and Technical Information (OSTI)
using a commercial alpha and beta spectroscopy system employing a passive implanted ... FLOW; ALGORITHMS; BETA SOURCES; BETA SPECTROSCOPY; EVALUATION; MODIFICATIONS; PROGENY; ...
This document provides background information and detail about the algorithms and calculations that drive the Energy Performance Indicator (EnPI) Tool.
NREL: Awards and Honors - Current Interrupt Charging Algorithm...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Current Interrupt Charging Algorithm for Lead-Acid Batteries Developers: Matthew A. Keyser, Ahmad A. Pesaran, and Mark M. Mihalic, National Renewable Energy Laboratory; Robert F....
PREPRINT An Efficient Algorithm for Geocentric to Geodetic Coordinate...
Office of Scientific and Technical Information (OSTI)
Correlation. Datum Transformation, Modeling and Simulation Interoperability ABSTRACT ... This algorithm is discussed in the context of machines that have FPUs and legacy machines ...
Saad, Yousef
2014-03-19
The master project under which this work is funded had as its main objective to develop computational methods for modeling electronic excited-state and optical properties of various nanostructures. The specific goals of the computer science group were primarily to develop effective numerical algorithms in Density Functional Theory (DFT) and Time-Dependent Density Functional Theory (TDDFT). There were essentially four distinct stated objectives. The first objective was to study and develop effective numerical algorithms for solving large eigenvalue problems such as those that arise in DFT methods. The second objective was to explore so-called linear scaling methods, or methods that avoid diagonalization. The third was to develop effective approaches for Time-Dependent DFT (TDDFT). Our fourth and final objective was to examine effective solution strategies for other problems in electronic excitations, such as the GW/Bethe-Salpeter method, and quantum transport problems.
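For the first objective, large eigenvalue problems, the simplest member of the iterative-eigensolver family is power iteration. A minimal, hypothetical Python sketch on a small symmetric matrix (production DFT codes use far more sophisticated Krylov-subspace methods such as Lanczos):

```python
def power_iteration(A, iters=500):
    """Dominant eigenpair of a symmetric matrix A (list of lists)."""
    n = len(A)
    v = [1.0 / n ** 0.5] * n                      # normalized starting vector
    lam = 0.0
    for _ in range(iters):
        w = [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]  # w = A v
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]                 # renormalize each step
        # Rayleigh quotient estimate of the dominant eigenvalue
        lam = sum(v[i] * sum(A[i][j] * v[j] for j in range(n)) for i in range(n))
    return lam, v
```

Convergence is geometric in the ratio of the two largest eigenvalue magnitudes, which is exactly why clustered spectra in DFT demand the subspace methods the project investigated.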
Higher-degree linear approximations of nonlinear systems
Karahan, S.
1989-01-01
In this dissertation, the author develops a new method for obtaining higher-degree linear approximations of nonlinear control systems. The standard approach in the analysis and synthesis of nonlinear systems is a first-order approximation by a linear model. This is usually performed by obtaining a series expansion of the system at some nominal operating point and retaining only the first-degree terms in the series. The accuracy of this approximation depends on how far the system moves away from the nominal point, and on the relative magnitudes of the higher-degree terms in the series expansion. The approximation is achieved by finding an appropriate nonlinear coordinate transformation-feedback pair to perform the higher-degree linearization. With the proposed method, one can improve the accuracy of the approximation up to arbitrarily high degrees, provided certain solvability conditions are satisfied. The Hunt-Su linearizability theorem makes these conditions precise. This approach is similar to Poincare's Normal Form Theorem in formulation, but different in its solution method. After some mathematical background, the author derives a set of equations (called the homological equations). A solution to this system of linear equations is equivalent to a solution of the problem of approximate linearization. However, it is generally not possible to solve the system of equations exactly. He outlines a method for systematically finding approximate solutions to these equations using singular value decomposition, while minimizing an error with respect to some defined norm.
Kamph, Jerome Henri; Robinson, Darren; Wetter, Michael
2009-09-01
There is an increasing interest in the use of computer algorithms to identify combinations of parameters which optimise the energy performance of buildings. For such problems, the objective function can be multi-modal and needs to be approximated numerically using building energy simulation programs. As these programs contain iterative solution algorithms, they introduce discontinuities in the numerical approximation to the objective function. Metaheuristics often work well for such problems, but their convergence to a global optimum cannot be established formally. Moreover, different algorithms tend to be suited to particular classes of optimization problems. To shed light on this issue we compared the performance of two metaheuristics, the hybrid CMA-ES/HDE and the hybrid PSO/HJ, in minimizing standard benchmark functions and real-world building energy optimization problems of varying complexity. From this we find that the CMA-ES/HDE performs well on more complex objective functions, but that the PSO/HJ more consistently identifies the global minimum for simpler objective functions. Both identified similar values in the objective functions arising from energy simulations, but with different combinations of model parameters. This may suggest that the objective function is multi-modal. The algorithms also correctly identified some non-intuitive parameter combinations that were caused by a simplified control sequence of the building energy system that does not represent actual practice, further reinforcing their utility.
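The particle swarm half of the comparison can be illustrated in miniature. A minimal, hypothetical plain PSO on the sphere benchmark (not the hybrid PSO/HJ algorithm studied, which adds a Hooke-Jeeves local search, nor a building-energy objective):

```python
import random

def pso(f, dim, n_particles=30, iters=500, seed=0):
    """Minimize f over [-5, 5]^dim with a basic particle swarm."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                  # per-particle best positions
    gbest = min(pbest, key=f)[:]                 # swarm-wide best position
    for _ in range(iters):
        for k in range(n_particles):
            for d in range(dim):
                vel[k][d] = (0.7 * vel[k][d]                          # inertia
                             + 1.5 * rng.random() * (pbest[k][d] - pos[k][d])
                             + 1.5 * rng.random() * (gbest[d] - pos[k][d]))
                pos[k][d] += vel[k][d]
            if f(pos[k]) < f(pbest[k]):
                pbest[k] = pos[k][:]
                if f(pbest[k]) < f(gbest):
                    gbest = pbest[k][:]
    return gbest

def sphere(x):
    """Classic smooth benchmark: global minimum 0 at the origin."""
    return sum(v * v for v in x)
```

Because PSO only ever samples the objective, it tolerates the discontinuities that iterative building-simulation solvers introduce, which is the motivation for using metaheuristics in this setting.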
Vehicle Technologies Program Implementation
none,
2009-06-19
The Vehicle Technologies Program takes a systematic approach to Program implementation. Elements of this approach include the evaluation of new technologies, competitive selection of projects and partners, review of Program and project improvement, project tracking, and portfolio management and adjustment.
U.S. Department of Energy (DOE) all webpages (Extended Search)
Program Contacts Dr. Elizabeth Hoffman LDRD Program Manager Elizabeth.Hoffman@srnl.doe.gov 803.725.5475 Nixon J. Peralta Program Manager, CEM Office of Laboratory Oversight U.S....
U.S. Department of Energy (DOE) all webpages (Extended Search)
Apprentice Program Over the years Y-12 has produced numerous training programs. Many of them have been developed at Y-12 to meet special needs. The training programs have ranged...
Algorithms for Mathematical Programming with Emphasis on Bi-level Models
Goldfarb, Donald; Iyengar, Garud
2014-05-22
The research supported by this grant was focused primarily on first-order methods for solving large scale and structured convex optimization problems and convex relaxations of nonconvex problems. These include optimal gradient methods, operator and variable splitting methods, alternating direction augmented Lagrangian methods, and block coordinate descent methods.
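Of the method families listed, the "optimal gradient" (accelerated) method is the easiest to sketch. A minimal, hypothetical Python illustration on a small smooth convex quadratic (the grant's work concerns far larger, structured problems; the example objective is our own):

```python
def nesterov(grad, x0, L, iters=2000):
    """Accelerated ('optimal') gradient method with step size 1/L,
    where L is the gradient's Lipschitz constant."""
    x = list(x0)
    y = list(x0)
    t = 1.0
    for _ in range(iters):
        g = grad(y)
        x_new = [y[i] - g[i] / L for i in range(len(y))]      # gradient step
        t_new = 0.5 * (1.0 + (1.0 + 4.0 * t * t) ** 0.5)       # momentum schedule
        y = [x_new[i] + (t - 1.0) / t_new * (x_new[i] - x[i])
             for i in range(len(x0))]                          # extrapolation
        x, t = x_new, t_new
    return x

# Example problem: f(x) = x0^2 + 0.5*x1^2 - 2*x0 - x1, minimized at (1, 1).
def grad_example(v):
    return [2.0 * v[0] - 2.0, v[1] - 1.0]
```

The scheme attains the O(1/k^2) objective-gap rate that is optimal for first-order methods on smooth convex problems, versus O(1/k) for plain gradient descent.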
New non-linear photovoltaic effect in uniform bipolar semiconductor
Volovichev, I.
2014-11-21
A linear theory of the new non-linear photovoltaic effect in a closed circuit consisting of a non-uniformly illuminated uniform bipolar semiconductor with neutral impurities is developed. The non-uniform photo-excitation of impurities results in a position-dependent current carrier mobility that breaks the semiconductor homogeneity and induces a photo-electromotive force (emf). As both the electron (or hole) mobility gradient and the current carrier generation rate depend on the light intensity, the photo-emf and the short-circuit current prove to be non-linear functions of the incident light intensity at arbitrarily low illumination. The influence of the sample size on the photovoltaic effect magnitude is studied. Physical relations and distinctions between the considered effect and the Dember and bulk photovoltaic effects are also discussed.
Starke, G.
1994-12-31
For nonselfadjoint elliptic boundary value problems which are preconditioned by a substructuring method, i.e., nonoverlapping domain decomposition, the author introduces and studies the concept of subspace orthogonalization. In subspace orthogonalization variants of Krylov methods, the computation of inner products and vector updates, and the storage of basis elements, is restricted to a (presumably small) subspace, in this case the edge and vertex unknowns with respect to the partitioning into subdomains. The author investigates subspace orthogonalization for two specific iterative algorithms, GMRES and the full orthogonalization method (FOM). This is intended to eliminate certain drawbacks of the Arnoldi-based Krylov subspace methods mentioned above. In particular, the length of the Arnoldi recurrences grows linearly with the iteration index, which is therefore restricted to the number of basis elements that can be held in memory. Restarts become necessary, and this often results in much slower convergence. The subspace orthogonalization methods, in contrast, require the storage of only the edge and vertex unknowns of each basis element, which means that one can iterate much longer before restarts become necessary. Moreover, the computation of inner products is also restricted to the edge and vertex points, which avoids the disturbance of the computational flow associated with the solution of subdomain problems. The author views subspace orthogonalization as an alternative to restarting or truncating Krylov subspace methods for nonsymmetric linear systems of equations. Instead of shortening the recurrences, one restricts them to a subset of the unknowns, which has to be carefully chosen in order to be able to extend this partial solution to the entire space. The author discusses the convergence properties of these iteration schemes and their advantages compared to restarted or truncated versions of Krylov methods applied to the full preconditioned system.
Magnetic levitation configuration incorporating levitation, guidance and linear synchronous motor
Coffey, Howard T.
1993-01-01
A propulsion and suspension system for an inductive-repulsion-type magnetically levitated vehicle, which is propelled and suspended by a system that includes propulsion windings forming a linear synchronous motor and conductive guideways adjacent to the propulsion windings, both of which combine to partially encircle the vehicle-borne superconducting magnets. A three-phase power source is used with the linear synchronous motor to produce a traveling magnetic wave which, in conjunction with the magnets, propels the vehicle. The conductive guideway combines with the superconducting magnets to provide for vehicle levitation.
Magnetic levitation configuration incorporating levitation, guidance and linear synchronous motor
Coffey, H.T.
1993-10-19
A propulsion and suspension system for an inductive-repulsion-type magnetically levitated vehicle, which is propelled and suspended by a system that includes propulsion windings forming a linear synchronous motor and conductive guideways adjacent to the propulsion windings, both of which combine to partially encircle the vehicle-borne superconducting magnets. A three-phase power source is used with the linear synchronous motor to produce a traveling magnetic wave which, in conjunction with the magnets, propels the vehicle. The conductive guideway combines with the superconducting magnets to provide for vehicle levitation. 3 figures.
LDRD final report : autotuning for scalable linear algebra.
Heroux, Michael Allen; Marker, Bryan
2011-09-01
This report summarizes the progress made as part of a one year lab-directed research and development (LDRD) project to fund the research efforts of Bryan Marker at the University of Texas at Austin. The goal of the project was to develop new techniques for automatically tuning the performance of dense linear algebra kernels. These kernels often represent the majority of computational time in an application. The primary outcome from this work is a demonstration of the value of model driven engineering as an approach to accurately predict and study performance trade-offs for dense linear algebra computations.
Search for Linear Polarization of the Cosmic Background Radiation
DOE R&D Accomplishments [OSTI]
Lubin, P. M.; Smoot, G. F.
1978-10-01
We present preliminary measurements of the linear polarization of the cosmic microwave background (3 deg K blackbody) radiation. These ground-based measurements are made at 9 mm wavelength. We find no evidence for linear polarization, and set an upper limit for a polarized component of 0.8 mdeg K with a 95% confidence level. This implies that the present rate of expansion of the Universe is isotropic to one part in 10{sup 6}, assuming no re-ionization of the primordial plasma after recombination.
Linear-array ultrasonic waveguide transducer for under sodium viewing.
Sheen, S. H.; Chien, H. T.; Wang, K.; Lawrence, W. P.; Engel, D.; Nuclear Engineering Division
2010-09-01
In this report, we first present the basic design of a low-noise waveguide and its performance, followed by a review of array transducer technology. The report then presents the concept and basic designs of arrayed waveguide transducers that can be applied to under-sodium viewing for in-service inspection of fast reactors. Depending on the application, the basic waveguide arrays consist of designs for sideways and downward viewing. For each viewing application, two array geometries, linear and circular, are included in the design analysis. Methods to scan a 2-D target using a linear-array waveguide transducer are discussed. Future plans to develop a laboratory array waveguide prototype are also presented.
Klystron switching power supplies for the International Linear Collider
Fraioli, Andrea; /Cassino U. /INFN, Pisa
2009-12-01
The International Linear Collider is a majestic High Energy Physics particle accelerator that will give physicists a new cosmic doorway to explore energy regimes beyond the reach of today's accelerators. The ILC will complement the Large Hadron Collider (LHC), a proton-proton collider at the European Center for Nuclear Research (CERN) in Geneva, Switzerland, by producing electron-positron collisions at a center-of-mass energy of about 500 GeV. In particular, the subject of this dissertation is the R&D for a solid-state Marx modulator and its associated switching power supply for the International Linear Collider main linac radio-frequency stations.
Beamstrahlung spectra in next generation linear colliders. Revision
Barklow, T.; Chen, P.; Kozanecki, W.
1992-04-01
For the next generation of linear colliders, the energy loss due to beamstrahlung during the collision of the e{sup +}e{sup {minus}} beams is expected to substantially influence the effective center-of-mass energy distribution of the colliding particles. In this paper, we first derive analytical formulae for the electron and photon energy spectra under multiple beamstrahlung processes, and for the e{sup +}e{sup {minus}} and {gamma}{gamma} differential luminosities. We then apply our formulation to various classes of 500 GeV e{sup +}e{sup {minus}} linear collider designs currently under study.
Direct Probes of Linearly Polarized Gluons inside Unpolarized Hadrons
Boer, Daniel; Brodsky, Stanley J.; Mulders, Piet J.; Pisano, Cristian
2011-04-01
We show that linearly polarized gluons inside unpolarized hadrons can be directly probed in jet or heavy quark pair production in electron-hadron collisions. We discuss the simplest cos2{phi} asymmetries and estimate their maximal value, concluding that measurements of the unknown linearly polarized gluon distribution in the proton should be feasible in future Electron-Ion Collider or Large Hadron electron Collider experiments. Analogous asymmetries in hadron-hadron collisions suffer from factorization breaking contributions and would allow us to quantify the importance of initial- and final-state interactions.
LDRD final report on a unified linear reference system
Espinoza, J. Jr.; Mackoy, R.D.; Fletcher, D.R.
1997-06-01
The purpose of the project was to describe existing deficiencies in Geographic Information Systems for transportation (GIS-T) applications and prescribe solutions that would benefit the transportation community in general. After an in-depth literature search and much consultation with noted transportation experts, the need for a common linear reference system that integrated and supported the planning and operational needs of the transportation community became very apparent. The focus of the project was set on a unified linear reference system and how to go about its requirements definition, design, implementation, and promulgation to the transportation community.
Beryllium Program - Hanford Site
U.S. Department of Energy (DOE) all webpages (Extended Search)
Site workers. Program Performance Assessments Beryllium Program inspection and corrective action documents Feedback & Suggestions A closely monitored area to submit questions,...
U.S. Department of Energy (DOE) all webpages (Extended Search)
Programming on Franklin Compiling Codes on Franklin Cray provides a convenient set of wrapper commands which should be used in almost all cases for compiling and...
Hydropower Program Technology Overview
Not Available
2001-10-01
New fact sheets for the DOE Office of Power Technologies (OPT) that provide technology overviews, description of DOE programs, and market potential for each OPT program area.
Utility Partnerships Program Overview
The Federal Energy Management Program (FEMP) Utility Partnerships Program fosters effective partnerships between federal agencies and their local serving utility. FEMP works to reduce the cost ...
U.S. Department of Energy (DOE) all webpages (Extended Search)
New Commercial Program Development Commercial Current Promotions Industrial Federal Agriculture EnergySmart Grocer Program Close-out BPA and CLEAResult have concluded negotiations...
Graduate Research Assistant Program
U.S. Department of Energy (DOE) all webpages (Extended Search)
... Living in Los Alamos Ombuds Program Scholarships Student Association Autobiographies Student Programs Advisory Committee (internal) 2015 Student Liaison Contact List (pdf)
NREL: Education Center - Programs
Education Center Printable Version Programs NREL's Education Center in Golden, Colorado, offers a variety of program topics and experiences for students and adult groups addressing...
The State Energy Program (SEP) has released the following guidance documents, listed chronologically below, that explain how states must report and manage SEP program funding.
Weapons Program Associate Directors
integration we have achieved between the various components of the program," said Bret Knapp, Principal Associate Director for Weapons Programs. "They have both done an...
The Department of Energy (DOE) Fire Protection Program provides published fire safety directives (orders, standards, and guidance documents), a range of oversight activities, and an annual fire protection program summary.
A complexity analysis of space-bounded learning algorithms for the constraint satisfaction problem
Bayardo, R.J. Jr.; Miranker, D.P.
1996-12-31
Learning during backtrack search is a space-intensive process that records information (such as additional constraints) in order to avoid redundant work. In this paper, we analyze the effects of polynomial-space-bounded learning on runtime complexity of backtrack search. One space-bounded learning scheme records only those constraints with limited size, and another records arbitrarily large constraints but deletes those that become irrelevant to the portion of the search space being explored. We find that relevance-bounded learning allows better runtime bounds than size-bounded learning on structurally restricted constraint satisfaction problems. Even when restricted to linear space, our relevance-bounded learning algorithm has runtime complexity near that of unrestricted (exponential space-consuming) learning schemes.
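The size-bounded scheme described above can be sketched as a backtrack search that, on reaching a dead end, records the failed partial assignment as a nogood, but only when it involves at most a fixed number of variables. The solver structure, constraint encoding, and MAX_SIZE bound below are illustrative assumptions for a minimal sketch, not the authors' algorithms:

```python
MAX_SIZE = 2  # size bound on recorded nogoods (size-bounded learning)

def consistent(assignment, constraints):
    """Check every constraint whose variables are all assigned."""
    for scope, pred in constraints:
        if all(v in assignment for v in scope):
            if not pred(*(assignment[v] for v in scope)):
                return False
    return True

def solve(variables, domains, constraints):
    nogoods = []  # forbidden partial assignments learned at dead ends

    def matches_nogood(assignment):
        return any(all(assignment.get(v) == val for v, val in ng.items())
                   for ng in nogoods)

    def backtrack(i, assignment):
        if i == len(variables):
            return dict(assignment)
        var = variables[i]
        for val in domains[var]:
            assignment[var] = val
            if consistent(assignment, constraints) and not matches_nogood(assignment):
                result = backtrack(i + 1, assignment)
                if result is not None:
                    return result
            del assignment[var]
        # Dead end: the current prefix admits no solution, so it is a valid
        # nogood; record it only if it is within the size bound.
        if 0 < i <= MAX_SIZE:
            nogoods.append({v: assignment[v] for v in variables[:i]})
        return None

    return backtrack(0, {})
```

A relevance-bounded variant would instead record nogoods of any size and delete those that no longer match the current search prefix.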
DOE Publishes CALiPER Report on Linear (T8) LED Lamps in a 2x4 K12-Lensed Troffer
The U.S. Department of Energy's CALiPER program has released Report 21.1, which is part of a series of investigations on linear LED lamps. Report 21.1 focuses on the performance of 31 types of...
Consistent satellite XCO2 retrievals from SCIAMACHY and GOSAT using the BESD algorithm
Heymann, J.; Reuter, M.; Hilker, M.; Buchwitz, M.; Schneising, O.; Bovensmann, H.; Burrows, J. P.; Kuze, A.; Suto, H.; Deutscher, N. M.; et al
2015-02-13
Consistent and accurate long-term data sets of global atmospheric concentrations of carbon dioxide (CO2) are required for carbon cycle and climate related research. However, global data sets based on satellite observations may suffer from inconsistencies originating from the use of products derived from different satellites as needed to cover a long enough time period. One reason for inconsistencies can be the use of different retrieval algorithms. We address this potential issue by applying the same algorithm, the Bremen Optimal Estimation DOAS (BESD) algorithm, to different satellite instruments, SCIAMACHY on-board ENVISAT (March 2002–April 2012) and TANSO-FTS on-board GOSAT (launched in January 2009), to retrieve XCO2, the column-averaged dry-air mole fraction of CO2. BESD has been initially developed for SCIAMACHY XCO2 retrievals. Here, we present the first detailed assessment of the new GOSAT BESD XCO2 product. GOSAT BESD XCO2 is a product generated and delivered to the MACC project for assimilation into ECMWF's Integrated Forecasting System (IFS). We describe the modifications of the BESD algorithm needed in order to retrieve XCO2 from GOSAT and present detailed comparisons with ground-based observations of XCO2 from the Total Carbon Column Observing Network (TCCON). We discuss detailed comparison results between all three XCO2 data sets (SCIAMACHY, GOSAT and TCCON). The comparison results demonstrate the good consistency between the SCIAMACHY and the GOSAT XCO2. For example, we found a mean difference for daily averages of −0.60 ± 1.56 ppm (mean difference ± standard deviation) for GOSAT-SCIAMACHY (linear correlation coefficient r = 0.82), −0.34 ± 1.37 ppm (r = 0.86) for GOSAT-TCCON and 0.10 ± 1.79 ppm (r = 0.75) for SCIAMACHY-TCCON. The remaining differences between GOSAT and SCIAMACHY are likely due to non-perfect collocation (±2 h, 10° × 10° around TCCON sites), i.e., the observed air masses are not exactly identical, but likely also
Development of Speckle Interferometry Algorithm and System
Shamsir, A. A. M.; Jafri, M. Z. M.; Lim, H. S.
2011-05-25
Electronic speckle pattern interferometry (ESPI) is a whole-field, non-destructive measurement method widely used in industry, for example to detect defects on metal bodies and in integrated circuits in digital electronic components, and in the preservation of priceless artwork. In this research, the method is used to develop algorithms and a new laboratory setup for implementing speckle pattern interferometry. In speckle interferometry, an optically rough test surface is illuminated with an expanded laser beam, creating a laser speckle pattern in the space surrounding the illuminated region. The speckle pattern is optically mixed with a second coherent light field that is either another speckle pattern or a smooth light field. This produces an interferometric speckle pattern that is detected by a sensor to track changes in the speckle pattern caused by an applied force. In this project, an experimental setup of ESPI is proposed to analyze a stainless steel plate using 632.8 nm (red) laser light.
Interactive Beam-Dynamics Program
Energy Science and Technology Software Center (OSTI)
2001-01-08
TRACE3D is an interactive program that calculates the envelopes of a bunched beam, including linear space-charge forces, through a user-defined system. The transport system may consist of the following elements: drift, thin lens, quadrupole, permanent magnet quadrupole, solenoid, doublet, triplet, bending magnet, edge angle (for bend), RF gap, radio-frequency-quadrupole cell, RF cavity, coupled-cavity tank, user-desired element, coordinate rotation, and identical element. The beam is represented by a 6×6 matrix defining a hyperellipsoid in six-dimensional phase space. The projection of this hyperellipsoid on any two-dimensional plane is an ellipse that defines the boundary of the beam in that plane.
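The final statement has a compact computational form: if the beam hyperellipsoid is {u : uᵀΣ⁻¹u ≤ 1} for a 6×6 matrix Σ, its projection onto a coordinate plane is the ellipse defined by the corresponding 2×2 submatrix of Σ, with area π·√(det S). The sketch below illustrates that relation only; the matrix values and function name are made up, and TRACE3D's actual conventions may differ:

```python
import math

def projected_ellipse_area(sigma, i, j):
    """Area of the projection of the hyperellipsoid {u : u^T Sigma^-1 u <= 1}
    onto the (i, j) coordinate plane.  The projection is the ellipse defined
    by the 2x2 submatrix S of Sigma at rows/columns (i, j), so its area is
    pi * sqrt(det S)."""
    s11, s12, s22 = sigma[i][i], sigma[i][j], sigma[j][j]
    return math.pi * math.sqrt(s11 * s22 - s12 * s12)
```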
Adaptive path planning algorithm for cooperating unmanned air vehicles
Cunningham, C T; Roberts, R S
2001-02-08
An adaptive path planning algorithm is presented for cooperating Unmanned Air Vehicles (UAVs) that are used to deploy and operate land-based sensor networks. The algorithm employs a global cost function to generate paths for the UAVs, and adapts the paths to exceptions that might occur. Examples are provided of the paths and adaptation.
An Adaptive Path Planning Algorithm for Cooperating Unmanned Air Vehicles
Cunningham, C.T.; Roberts, R.S.
2000-09-12
An adaptive path planning algorithm is presented for cooperating Unmanned Air Vehicles (UAVs) that are used to deploy and operate land-based sensor networks. The algorithm employs a global cost function to generate paths for the UAVs, and adapts the paths to exceptions that might occur. Examples are provided of the paths and adaptation.
Nuclear magnetic resonance implementation of a quantum clock synchronization algorithm
Zhang Jingfu; Long, G.C.; Liu Wenzhang; Deng Zhiwei; Lu Zhiheng
2004-12-01
The quantum clock synchronization (QCS) algorithm proposed by Chuang [Phys. Rev. Lett. 85, 2006 (2000)] has been implemented in a three qubit nuclear magnetic resonance quantum system. The time difference between two separated clocks can be determined by measuring the output states. The experimental realization of the QCS algorithm also demonstrates an application of the quantum phase estimation.
Assistance Program, State Energy Program, Energy Efficiency and...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Assistance Program, State Energy Program, Energy Efficiency and Conservation Block Grants ...
Adaptation of the CVT algorithm for catheter optimization in high dose rate brachytherapy
Poulin, Eric; Fekete, Charles-Antoine Collins; Beaulieu, Luc; Létourneau, Mélanie; Fenster, Aaron; Pouliot, Jean
2013-11-15
Purpose: An innovative, simple, and fast method to optimize the number and position of catheters is presented for prostate and breast high dose rate (HDR) brachytherapy, for both arbitrary templates and template-free implants (such as robotic templates). Methods: Eight clinical cases, previously treated in our clinic, were chosen randomly from a bank of patients to test our method. The 2D Centroidal Voronoi Tessellations (CVT) algorithm was adapted to distribute catheters uniformly in space, within the maximum external contour of the planning target volume. The catheter optimization procedure includes the inverse planning simulated annealing algorithm (IPSA). Complete treatment plans can then be generated by the algorithm for different numbers of catheters. The best plan is chosen using different dosimetry criteria and automatically provides the number of catheters and their positions. After the CVT algorithm parameters were optimized for speed and dosimetric results, the method was validated against prostate clinical cases, using clinically relevant dose parameters. The robustness to implantation error was also evaluated. Finally, the efficiency of the method was tested on breast interstitial HDR brachytherapy cases. Results: The effect of the number and locations of the catheters on prostate cancer patients was studied. Treatment plans with a better or equivalent dose distribution could be obtained with fewer catheters. A better or equal prostate V100 was obtained down to 12 catheters. Plans with nine or fewer catheters would not be clinically acceptable in terms of prostate V100 and D90. Implantation errors up to 3 mm were acceptable, since no statistical difference was found compared to 0 mm error (p > 0.05). No significant difference in dosimetric indices was observed for the different combinations of parameters within the CVT algorithm. A linear relation was found between the number of random points and the optimization time of the CVT algorithm. Because the
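The core placement step, distributing generators uniformly over a 2D region with a CVT, can be approximated by Monte Carlo Lloyd iteration: repeatedly move each generator to the centroid of its Voronoi cell, estimated from random samples of the region. A minimal sketch, with a unit disk standing in for the planning target contour; the region test, sample count, and iteration count are illustrative assumptions, not values from the paper:

```python
import random

def cvt(n_generators, in_region, n_samples=2000, n_iters=25, seed=0):
    """Monte Carlo Lloyd iteration for an approximate 2D CVT inside a region
    contained in the square [-1, 1] x [-1, 1]."""
    rng = random.Random(seed)

    def sample():
        # rejection-sample a point of the region
        while True:
            p = (rng.uniform(-1.0, 1.0), rng.uniform(-1.0, 1.0))
            if in_region(p):
                return p

    gens = [sample() for _ in range(n_generators)]
    for _ in range(n_iters):
        acc = [[0.0, 0.0, 0] for _ in gens]  # x-sum, y-sum, count per cell
        for _ in range(n_samples):
            p = sample()
            # assign the sample to its nearest generator (its Voronoi cell)
            k = min(range(len(gens)),
                    key=lambda i: (gens[i][0] - p[0]) ** 2 + (gens[i][1] - p[1]) ** 2)
            acc[k][0] += p[0]
            acc[k][1] += p[1]
            acc[k][2] += 1
        # move each generator to the estimated centroid of its cell
        gens = [(sx / c, sy / c) if c else g
                for (sx, sy, c), g in zip(acc, gens)]
    return gens
```

In the paper's setting, the resulting generator positions would then seed the catheter positions handed to the IPSA dose optimizer.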
ON THE VERIFICATION AND VALIDATION OF GEOSPATIAL IMAGE ANALYSIS ALGORITHMS
Roberts, Randy S.; Trucano, Timothy G.; Pope, Paul A.; Aragon, Cecilia R.; Jiang, Ming; Wei, Thomas; Chilton, Lawrence; Bakel, A. J.
2010-07-25
Verification and validation (V&V) of geospatial image analysis algorithms is a difficult task and is becoming increasingly important. While there are many types of image analysis algorithms, we focus on developing V&V methodologies for algorithms designed to provide textual descriptions of geospatial imagery. In this paper, we present a novel methodological basis for V&V that employs a domain-specific ontology, which provides a naming convention for a domain-bounded set of objects and a set of named relationships between these objects. We describe a validation process that proceeds by objectively comparing benchmark imagery, produced using the ontology, with algorithm results. As an example, we describe how the proposed V&V methodology would be applied to algorithms designed to provide textual descriptions of facilities.
An efficient parallel algorithm for matrix-vector multiplication
Hendrickson, B.; Leland, R.; Plimpton, S.
1993-03-01
The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high-performance parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n × n matrix on p processors, the communication cost of this algorithm is O(n/√p + log p), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
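The structure of a partitioned matrix-vector product can be illustrated with a sequential simulation: each of p "processors" owns a contiguous block of rows and computes its slice of y after obtaining the full input vector (the stand-in here for the communication step whose cost the paper bounds). This is a simplified 1D row partition for illustration only, not the paper's actual, more sophisticated decomposition:

```python
def parallel_matvec(A, x, p):
    """Simulate a row-partitioned parallel y = A x.  Each of p "processors"
    owns a contiguous block of rows; copying x_full stands in for the
    communication (gather) step that precedes the local multiplies."""
    n = len(A)
    bounds = [r * n // p for r in range(p + 1)]  # row-block boundaries
    x_full = list(x)  # every processor obtains the full input vector
    y = []
    for r in range(p):  # each iteration = one processor's local work
        for i in range(bounds[r], bounds[r + 1]):
            y.append(sum(A[i][j] * x_full[j] for j in range(n)))
    return y
```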
Program Assignments | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Balanced Scorecard Program Officer Contractual Services Budget Officer Headquarters Purchase Card Program Headquarters Contract Closeout Headquarters Data Mining Programs Strategic ...
Solving linear inequalities in a least squares sense
Bramley, R.; Winnicka, B.
1994-12-31
Let A ∈ ℝ^(m×n) be an arbitrary real matrix, and let b ∈ ℝ^m be a given vector. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ‖Ax − b‖, where ‖·‖ refers to the vector two-norm. Such an x* solves the normal equations A^T(Ax − b) = 0, and the optimal residual r* = b − Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax ≤ b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find x* minimizing ‖(Ax − b)_+‖, where the i-th component of the vector v_+ is the maximum of zero and the i-th component of v.
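Since only violated inequalities contribute, the function f(x) = ½‖(Ax − b)_+‖² is differentiable with gradient A^T(Ax − b)_+, so even plain gradient descent gives a minimal sketch of the problem. The step size and iteration count below are arbitrary illustrative choices, not the authors' method:

```python
def ls_inequalities(A, b, steps=5000, lr=0.01):
    """Gradient descent on f(x) = 0.5 * ||(Ax - b)_+||^2 for the system of
    inequalities Ax <= b (A as list of rows, b as list)."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(steps):
        # r = (Ax - b)_+ : only violated inequalities contribute
        r = [max(0.0, sum(A[i][j] * x[j] for j in range(n)) - b[i])
             for i in range(m)]
        # gradient of f is A^T r
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [x[j] - lr * g[j] for j in range(n)]
    return x
```

For example, the inconsistent pair x ≤ 1 and −x ≤ −2 (i.e., x ≥ 2) has least squares compromise x = 1.5, splitting the violation evenly between the two bounds.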
SIMULTANEOUS LINEAR AND CIRCULAR OPTICAL POLARIMETRY OF ASTEROID (4) VESTA
Wiktorowicz, Sloane J.; Nofi, Larissa A.
2015-02-10
From a single 3.8 hr observation of the asteroid (4) Vesta at 13.7° phase angle with the POlarimeter at Lick for Inclination Studies of Hot jupiters 2 (POLISH2) at the Lick Observatory Shane 3 m telescope, we confirm rotational modulation of linear polarization in the B and V bands. We measure the peak-to-peak modulation in the degree of linear polarization to be ΔP = (294 ± 35) × 10^−6 (ppm) and the time-averaged ΔP/P = 0.0575 ± 0.0069. After rotating the plane of linear polarization to the scattering plane, asteroidal rotational modulation is detected with 12σ confidence and observed solely in Stokes Q/I. POLISH2 simultaneously measures Stokes I, Q, U (linear polarization), and V (circular polarization), but we detect no significant circular polarization, with a 1σ upper limit of 78 ppm in the B band. Circular polarization is expected to arise from multiple scattering of sunlight by rough surfaces, and it has previously been detected in nearly all other classes of solar system bodies except for asteroids. Subsequent observations may be compared with surface albedo maps from the Dawn Mission, which may allow the identification of compositional variation across the asteroidal surface. These results demonstrate the high accuracy achieved by POLISH2 at the Lick 3 m telescope, which is designed to directly detect scattered light from spatially unresolvable exoplanets.
Tunneling control using classical non-linear oscillator
Kar, Susmita; Bhattacharyya, S. P.
2014-04-24
A quantum particle is placed in a symmetric double-well potential which is coupled to a classical non-linear oscillator via a coupling function. Depending on the spatial symmetry of the coupling and the control scheme applied, the tunneling of the quantum particle can be enhanced, suppressed, or totally destroyed.
Position sensor for linear synchronous motors employing halbach arrays
Post, Richard Freeman
2014-12-23
A position sensor suitable for use in linear synchronous motor (LSM) drive systems employing Halbach arrays to create their magnetic fields is described. The system has several advantages over previously employed ones, especially in its simplicity and its freedom from being affected by weather conditions, accumulated dirt, or electrical interference from the LSM system itself.
Annular linear induction pump with an externally supported duct
Craig, Edwin R.; Semken, Robert S.
1979-01-01
Several embodiments of an annular linear induction pump for pumping liquid metals are disclosed, featuring generally one-pass flow of the liquid metal through the pump and increased efficiency resulting from the use of thin duct walls to enclose the stator. The stator components of this pump are removable for repair and replacement.
Scalable Library for the Parallel Solution of Sparse Linear Systems
Energy Science and Technology Software Center (OSTI)
1993-07-14
BlockSolve is a scalable parallel software library for the solution of large sparse, symmetric systems of linear equations. It runs on a variety of parallel architectures and can easily be ported to others. BlockSolve is primarily intended for the solution of sparse linear systems that arise from physical problems having multiple degrees of freedom at each node point. For example, when the finite element method is used to solve practical problems in structural engineering, each node will typically have anywhere from 3-6 degrees of freedom associated with it. BlockSolve is written to take advantage of problems of this nature; however, it is still reasonably efficient for problems that have only one degree of freedom associated with each node, such as the three-dimensional Poisson problem. It does not require that the matrices have any particular structure other than being sparse and symmetric. BlockSolve is intended to be used within real application codes. It is designed to work best in the context of our experience, which indicates that most application codes solve the same linear systems with several different right-hand sides and/or linear systems with the same structure, but different matrix values, multiple times.