Deterministic algorithms for 2-d convex programming and 3-d online linear programming
Chan, T.M.
1997-06-01
We present a deterministic algorithm for solving two-dimensional convex programs with a linear objective function. The algorithm requires O(k log k) primitive operations for k constraints; if a feasible point is given, the bound reduces to O(k log k / log log k). As a consequence, we can decide whether k convex n-gons in the plane have a common intersection in O(k log n min(log k, log log n)) worst-case time. Furthermore, we can solve the three-dimensional online linear programming problem in o(log³ n) worst-case time per operation.
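The planar problem above can be stated concretely: minimize a linear objective over an intersection of k halfplanes. As a point of contrast with the O(k log k) method in the paper, here is a naive O(k³) vertex-enumeration sketch (a toy baseline, not the paper's algorithm; it assumes the optimum exists and is attained at a vertex):

```python
import itertools
import numpy as np

def lp2d_bruteforce(c, A, b):
    """Minimize c @ x over {x in R^2 : A @ x <= b} by enumerating
    vertices (pairwise constraint-boundary intersections) and keeping
    the best feasible one. O(k^3) for k constraints."""
    best, best_x = None, None
    for i, j in itertools.combinations(range(len(b)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue  # parallel boundaries never form a vertex
        x = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ x <= b + 1e-9):  # feasibility check
            v = c @ x
            if best is None or v < best:
                best, best_x = v, x
    return best, best_x
```

For example, minimizing x + y subject to x ≥ 1, y ≥ 1, x ≤ 5, y ≤ 5 returns the vertex (1, 1) with value 2.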
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
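The "manageable number of scenarios" case can be illustrated without Benders decomposition by solving the extensive form (deterministic equivalent) of a tiny two-stage problem directly; the costs and demands below are made up for illustration, not taken from the thesis:

```python
import numpy as np
from scipy.optimize import linprog

# Tiny two-stage stochastic LP in extensive form: order x units now at
# unit cost 1; each scenario s has demand d_s with probability p_s and a
# recourse shortage y_s penalized at unit cost 3.
order_cost, shortage_cost = 1.0, 3.0
demand = np.array([10.0, 20.0])
prob = np.array([0.5, 0.5])

# Variables: [x, y_1, y_2]; minimize x + sum_s p_s * shortage_cost * y_s.
c = np.concatenate([[order_cost], shortage_cost * prob])
# Shortage definition d_s - x <= y_s rewritten as -x - y_s <= -d_s.
A_ub = np.array([[-1.0, -1.0, 0.0],
                 [-1.0, 0.0, -1.0]])
b_ub = -demand
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(0, None)] * 3)
```

Here the expected shortage penalty (0.5 × 3 per unit) exceeds the order cost, so the optimal first-stage decision covers the worst scenario: x = 20 with total expected cost 20. Benders decomposition recovers the same solution by cutting on the recourse function instead of building the full extensive form.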
Two linear time, low overhead algorithms for graph layout
Energy Science and Technology Software Center (OSTI)
2008-01-10
The software comprises two algorithms designed to perform a 2D layout of a graph structure in time linear with respect to the vertices and edges in the graph, whereas most other layout algorithms have a running time that is quadratic with respect to the number of vertices or greater. Although these layout algorithms run in a fraction of the time of their competitors, they provide competitive results when applied to most real-world graphs. These algorithms also have a low constant running time and small memory footprint, making them useful for small to large graphs.
APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION
Musson, John C.; Seaton, Chad; Spata, Mike F.; Yan, Jianxun
2012-11-01
Stripline BPM sensors contain inherent non-linearities, as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an "activation layer," is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.
Toward portable programming of numerical linear algebra on manycore...
Office of Scientific and Technical Information (OSTI)
Planning under uncertainty solving large-scale stochastic linear programs
Infanger, G. (Dept. of Operations Research; Inst. fuer Energiewirtschaft, Technische Univ., Vienna)
1992-12-01
For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multi-computer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
Comparison of open-source linear programming solvers.
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin D.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph
2013-10-01
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5], and Modular In-core Nonlinear Optimization System (MINOS) [6]. No open-source solver outperformed CPLEX, demonstrating the power of commercial linear programming software. CLP was found to be the top-performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
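For a sense of what such a comparison exercises, a small LP can be posed to SciPy's linprog, which wraps HiGHS, another open-source solver in the same spirit as CLP and GLPK (HiGHS postdates this survey and was not among the solvers tested):

```python
from scipy.optimize import linprog

# Maximize 3x + 2y subject to x + y <= 4, x + 3y <= 6, x, y >= 0.
# linprog minimizes, so the objective is negated.
res = linprog(c=[-3, -2],
              A_ub=[[1, 1], [1, 3]],
              b_ub=[4, 6],
              bounds=[(0, None), (0, None)])
print(res.x, -res.fun)  # optimum at the vertex (4, 0) with objective 12
```

Benchmark suites like the one in the study run many such models, of much larger size, and compare solve times and failure counts across solvers.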
Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson
2006-08-01
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
Brabec, Jiri; Lin, Lin; Shao, Meiyue; Govind, Niranjan; Yang, Chao; Saad, Yousef; Ng, Esmond
2015-10-06
We present two iterative algorithms for approximating the absorption spectrum of molecules within linear response of time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix. They are designed to approximate the absorption spectrum as a function directly. They take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly. They only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing the eigenvalues of this matrix. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods that are based on the exact diagonalization of the linear response matrix. We show that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
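The matrix-free access pattern the abstract describes (multiplication of the response matrix with a vector, never the matrix itself) can be mimicked generically with SciPy's LinearOperator; the Lanczos-based eigsh call below is a stand-in illustration of that interface, not the authors' spectrum estimators:

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, eigsh

# Symmetric test matrix standing in for a linear response matrix.
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = (M + M.T) / 2

# The solver sees only a matvec, never the entries of A.
op = LinearOperator(shape=A.shape, matvec=lambda v: A @ v, dtype=float)
extreme = eigsh(op, k=4, which='LM', return_eigenvectors=False)
```

Methods that approximate the whole spectrum as a function (as in the paper) or the density of states similarly need only repeated applications of such a matvec.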
Grant, C W; Lenderman, J S; Gansemer, J D
2011-02-24
This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect modified deliverables reflecting delays in obtaining a database refresh. This document describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).
Shang, Yu; Yu, Guoqiang
2014-09-29
Conventional semi-infinite analytical solutions of the correlation diffusion equation may lead to errors when calculating blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migrations in homogeneous tissues with arbitrary geometries for extraction of BFI (i.e., αD_B). The purpose of this study is to extend the capability of the Nth-order linear algorithm to extracting BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different types of tissues simultaneously by utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogeneous solution in a computer model of an adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in deep brain, we assigned ten levels of αD_B in the brain layer with a step decrement of 10% while maintaining αD_B values constant in the other layers. Simulation results demonstrate the accuracy (errors < 3%) of the high-order (N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in deep brain. By contrast, the semi-infinite homogeneous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and in BFIs in different layers. The Nth-order linear model simplifies data analysis, thus allowing for online data processing and display. A future study will test this linear algorithm in heterogeneous tissues with different levels of blood flow variations and noise.
Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming
Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-05-23
This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that using a stochastic approach provides a conservative yet more lucrative battery schedule. Given fuel cell outages, the resulting expected energy bills are lower, with potential savings exceeding 6%.
Object detection utilizing a linear retrieval algorithm for thermal infrared imagery
Ramsey, M.S. [Arizona State Univ., Tempe, AZ (United States)]
1996-11-01
Thermal infrared (TIR) spectroscopy and remote sensing have proven to be extremely valuable tools for mineralogic discrimination. One technique for sub-pixel detection and data reduction, known as a spectral retrieval or unmixing algorithm, will prove useful in the analysis of data from scheduled TIR orbital instruments. This study represents the first quantitative attempt to identify the limits of the model, specifically concentrating on the TIR. The algorithm was written and applied to laboratory data, testing the effects of particle size, noise, and multiple endmembers, then adapted to operate on airborne Thermal Infrared Multispectral Scanner data of the Kelso Dunes, CA, Meteor Crater, AZ, and Medicine Lake Volcano, CA. Results indicate that linear spectral unmixing can produce accurate endmember detection to within an average of 5%. In addition, the effects of vitrification and textural variations were modeled. The ability to predict mineral or rock abundances becomes extremely useful in tracking sediment transport, desertification, and potential hazard assessment in remote volcanic regions. 26 refs., 3 figs.
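The core of linear spectral unmixing is a nonnegative least-squares fit of a measured spectrum to a library of endmember spectra. A synthetic sketch with made-up Gaussian "endmembers" (illustrative only, not the study's TIMS processing chain):

```python
import numpy as np
from scipy.optimize import nnls

# Mixed-pixel model: measured spectrum s = E @ a, with endmember
# spectra in the columns of E and nonnegative abundances a.
wavelengths = np.linspace(8.0, 12.0, 50)  # microns, a TIR band
E = np.column_stack([
    np.exp(-(wavelengths - 9.2) ** 2),   # hypothetical endmember 1
    np.exp(-(wavelengths - 10.5) ** 2),  # hypothetical endmember 2
])
true_abund = np.array([0.7, 0.3])
s = E @ true_abund  # noise-free synthetic "measurement"

abund, residual = nnls(E, s)  # recover abundances under a >= 0
```

On noise-free data the recovered abundances match exactly; the study's ~5% average detection error reflects real spectra, noise, and textural effects.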
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
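The baseline that such a combinatorial algorithm accelerates is solving each bounded least-squares problem independently, one observation vector at a time; a sketch with random data (the paper's speedup comes from reorganizing this batch so shared active-set work is reused):

```python
import numpy as np
from scipy.optimize import lsq_linear

# min ||A x - b_j|| subject to x >= 0, for many observation vectors b_j.
rng = np.random.default_rng(1)
A = rng.random((30, 4))
B = rng.random((30, 100))  # 100 observation vectors, one per column

# Naive batch solve: each column handled independently.
X = np.column_stack([
    lsq_linear(A, B[:, j], bounds=(0, np.inf)).x
    for j in range(B.shape[1])
])
```

In applications like spectral imaging the number of observation vectors (pixels) can reach the millions, which is where reorganizing the combinatorial work pays off.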
Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas
2014-08-01
The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.
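A plain multiple-right-hand-side BiCGStab loop is the baseline EigBiCG improves on, by harvesting eigenvector information during the early solves and deflating the later ones; the loop below is a SciPy illustration only, with no deflation:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import bicgstab

# Nonsymmetric tridiagonal test system (diagonally dominant, so
# BiCGStab converges without preconditioning).
n = 100
A = diags([-1.0, 2.5, -1.2], offsets=[-1, 0, 1], shape=(n, n)).tocsr()
rhs = [np.ones(n), np.arange(n, dtype=float)]  # multiple right-hand sides

solutions = []
for b in rhs:
    x, info = bicgstab(A, b)  # info == 0 signals convergence
    solutions.append(x)
```

Deflating with approximate eigenvectors of the small-magnitude spectrum, as in incremental EigBiCG, reduces the iteration count for every right-hand side after the first few.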
Sixth SIAM conference on applied linear algebra: Final program and abstracts. Final technical report
1997-12-31
Linear algebra plays a central role in mathematics and applications. The analysis and solution of problems from an amazingly wide variety of disciplines depend on the theory and computational techniques of linear algebra. In turn, the diversity of disciplines depending on linear algebra also serves to focus and shape its development. Some problems have special properties (numerical, structural) that can be exploited. Some are simply so large that conventional approaches are impractical. New computer architectures motivate new algorithms, and fresh ways to look at old ones. The pervasive nature of linear algebra in analyzing and solving problems means that people from a wide spectrum--universities, industrial and government laboratories, financial institutions, and many others--share an interest in current developments in linear algebra. This conference aims to bring them together for their mutual benefit. Abstracts of papers presented are included.
Application and implementation of transient algorithms in computer programs
Benson, D.J.
1985-07-01
This presentation gives a brief introduction to the nonlinear finite element programs developed at Lawrence Livermore National Laboratory by the Methods Development Group in the Mechanical Engineering Department. The four programs are DYNA3D and DYNA2D, which are explicit hydrocodes, and NIKE3D and NIKE2D, which are implicit programs. The presentation concentrates on DYNA3D with asides about the other programs. During the past year several new features were added to DYNA3D, and major improvements were made in the computational efficiency of the shell and beam elements. Most of these new features and improvements will eventually make their way into the other programs. The emphasis in our computational mechanics effort has always been, and continues to be, efficiency. To get the most out of our supercomputers, all Crays, we have vectorized the programs as much as possible. Several of the more interesting capabilities of DYNA3D will be described and their impact on efficiency will be discussed. Some of the recent work on NIKE3D and NIKE2D will also be presented. In the belief that a single example is worth a thousand equations, we are skipping the theory entirely and going directly to the examples.
Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; Watson, Jean -Paul; Wets, Roger J.-B.; Woodruff, David L.
2016-04-02
We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. In conclusion, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
Refining and end use study of coal liquids II - linear programming analysis
Lowe, C.; Tam, S.
1995-12-31
A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes. This work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, then the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.
SLFP: A stochastic linear fractional programming approach for sustainable waste management
Zhu, H.; Huang, G.H.
2011-12-15
Highlights:
- A new fractional programming (SLFP) method is developed for waste management.
- SLFP can solve ratio optimization problems associated with random inputs.
- A case study of waste flow allocation demonstrates its applicability.
- SLFP helps compare objectives of two aspects and reflect system efficiency.
- This study supports in-depth analysis of tradeoffs among multiple system criteria.
Abstract: A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. It has advantages in: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk.
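The deterministic core of a linear fractional program (before the chance constraints SLFP adds) can be solved exactly via the classic Charnes-Cooper transformation to an LP; the coefficients below are toy values for illustration:

```python
import numpy as np
from scipy.optimize import linprog

# Maximize (c@x) / (d@x + beta) over {A@x <= b, x >= 0}, assuming the
# denominator is positive on the feasible set. Charnes-Cooper substitutes
# y = t*x with t = 1/(d@x + beta), giving a plain LP in (y, t).
c = np.array([2.0, 1.0])               # numerator coefficients
d, beta = np.array([1.0, 1.0]), 1.0    # denominator coefficients
A, b = np.array([[1.0, 1.0]]), np.array([3.0])

# LP: max c@y  s.t.  A@y - b*t <= 0,  d@y + beta*t = 1,  y, t >= 0.
obj = -np.concatenate([c, [0.0]])          # linprog minimizes
A_ub = np.hstack([A, -b[:, None]])
A_eq = np.concatenate([d, [beta]])[None, :]
res = linprog(obj, A_ub=A_ub, b_ub=[0.0], A_eq=A_eq, b_eq=[1.0],
              bounds=[(0, None)] * 3)
y, t = res.x[:2], res.x[2]
x_opt, ratio = y / t, -res.fun  # recover the original variables and ratio
```

SLFP layers chance-constrained programming on top of this framework, so the constraint right-hand sides become reliability-dependent rather than fixed.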
Djukanovic, M.; Babic, B.; Milosevic, B.; Sobajic, D.J.; Pao, Y.H.
1996-05-01
In this paper the blending/transloading facilities are modeled using an interactive fuzzy linear programming (FLP), in order to allow the decision-maker to solve the problem of uncertainty of input information within the fuel scheduling optimization. An interactive decision-making process is formulated in which the decision-maker can learn to recognize good solutions by considering all possibilities of fuzziness. The application of the fuzzy formulation is accompanied by a careful examination of the definition of fuzziness, appropriateness of the membership function and interpretation of results. The proposed concept provides a decision support system with integration-oriented features, whereby the decision-maker can learn to recognize the relative importance of factors in the specific domain of the optimal fuel scheduling (OFS) problem. The formulation of a fuzzy linear programming problem to obtain a reasonable nonfuzzy solution under consideration of the ambiguity of parameters, represented by fuzzy numbers, is introduced. An additional advantage of the FLP formulation is its ability to deal with multi-objective problems.
Library of Continuation Algorithms
Energy Science and Technology Software Center (OSTI)
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
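The simplest idea LOCA generalizes is natural-parameter continuation: step a parameter and reuse the previous solution as the Newton starting guess at the next step. A scalar toy problem, not LOCA's actual C++ interface:

```python
import numpy as np

def newton(f, df, x0, tol=1e-12, maxit=50):
    """Basic scalar Newton iteration."""
    x = x0
    for _ in range(maxit):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

# Trace the solution branch of f(x; lam) = x - lam*cos(x) = 0 as lam
# grows, warm-starting each Newton solve from the previous solution.
branch = []
x = 0.0  # known solution at lam = 0
for lam in np.linspace(0.0, 2.0, 21):
    x = newton(lambda x: x - lam * np.cos(x),
               lambda x: 1 + lam * np.sin(x), x)
    branch.append((lam, x))
```

Bifurcation tracking adds machinery for points where this naive stepping fails, e.g. folds where the branch turns back in the parameter.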
Frenkel, G.; Paterson, T.S.; Smith, M.E.
1988-04-01
The Institute for Defense Analyses (IDA) has collected and analyzed information on battle management algorithm technology that is relevant to Battle Management/Command, Control and Communications (BM/C3). This Memorandum Report represents a program plan that will provide the BM/C3 Directorate of the Strategic Defense Initiative Organization (SDIO) with administrative and technical insight into algorithm technology. This program plan focuses on current activity in algorithm development and provides information and analysis to the SDIO to be used in formulating budget requirements for FY 1988 and beyond. Based upon analysis of algorithm requirements and ongoing programs, recommendations have been made for research areas that should be pursued, including both the continuation of current work and the initiation of new tasks. This final report includes all relevant material from interim reports as well as new results.
1995-03-01
A model was developed for use in the Bechtel PIMS (Process Industry Modeling System) linear programming software to simulate a generic Midwest (PADD II) petroleum refinery of the future. This "petroleum-only" version of the model establishes the size and complexity of the refinery after the year 2000 and prior to the introduction of coal liquids. It should be noted that no assumption has been made on when a plant can be built to produce coal liquids, except that it will be after the year 2000. The year 2000 was chosen because it is the latest year for which fuel property and emission standards have been set by the Environmental Protection Agency. It assumes the refinery has been modified to accept crudes that are heavier in gravity and higher in sulfur than today's average crude mix. In addition, the refinery has also been modified to produce a product slate of transportation fuels of the future (i.e., 40% reformulated gasolines). This model will be used as a basis for determining the optimum scheme for processing coal liquids in a petroleum refinery. This report summarizes the design basis for this "petroleum-only" LP refinery model. A report detailing the refinery configuration when coal liquids are processed will be provided at a later date.
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of lesser quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
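The classic sequential 1/2-approximation these algorithms compete with is greedy matching: scan edges in nonincreasing weight order and keep an edge iff both endpoints are still free. A sketch (the paper's algorithms avoid this global sort and parallelize):

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum weight matching.
    edges: iterable of (u, v, weight) tuples. Sorting makes this
    O(m log m); the matching's weight is at least half the optimum."""
    matched = set()
    matching, weight = [], 0.0
    for u, v, w in sorted(edges, key=lambda e: -e[2]):
        if u not in matched and v not in matched:
            matched.update((u, v))
            matching.append((u, v))
            weight += w
    return matching, weight
```

The 1/2 bound is tight: on the path a-b (5), b-c (6), c-d (5), greedy takes only (b, c) for weight 6, while the optimum takes the two outer edges for weight 10.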
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Linear Accelerator (LINAC) The core of the LANSCE facility is one of the nation's most powerful proton linear accelerators, or LINACs. The LINAC at LANSCE has served the nation since 1972, providing the beam current required by all the experimental areas that support NNSA-DP and other DOE missions. The LINAC's capability to reliably deliver beam current is key to LANSCE's ability to do research, and thus to meeting NNSA and DOE mission deliverables.
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles into the field. The result of the foregoing is a beam which spirals about the axis of the acceleration path. The combination of the electric fields and the angular motion of the particles cooperate to provide a stable and focused particle beam.
Final Report-Optimization Under Uncertainty and Nonconvexity: Algorithms and Software
Jeff Linderoth
2008-10-10
The goal of this research was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problems classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state-of-the-art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems.
Belos Block Linear Solvers Package
Energy Science and Technology Software Center (OSTI)
2004-03-01
Belos is an extensible and interoperable framework for large-scale, iterative methods for solving systems of linear equations with multiple right-hand sides. The motivation for this framework is to provide a generic interface to a collection of algorithms for solving large-scale linear systems. Belos is interoperable because both the matrix and vectors are considered to be opaque objects: only knowledge of the matrix and vectors via elementary operations is necessary. An implementation of Belos is accomplished via the use of interfaces. One of the goals of Belos is to allow the user flexibility in specifying the data representation for the matrix and vectors, and so leverage any existing software investment. The algorithms that will be included in the package are Krylov-based linear solvers, like Block GMRES (Generalized Minimal RESidual) and Block CG (Conjugate Gradient).
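Belos's opaque-object idea, where the solver touches the matrix only through apply operations, has a small-scale analogue in SciPy's LinearOperator; note that SciPy's gmres handles one right-hand side at a time, whereas Belos's block methods share Krylov work across the columns:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

# Nonsymmetric tridiagonal test operator; the solver never sees its
# entries, only the matvec callback.
n = 64
A = diags([-1.0, 3.0, -1.5], offsets=[-1, 0, 1], shape=(n, n)).tocsr()
op = LinearOperator((n, n), matvec=lambda v: A @ v, dtype=float)

# Two right-hand sides, solved column by column (no block sharing here).
B = np.column_stack([np.ones(n), np.linspace(0.0, 1.0, n)])
X = np.column_stack([gmres(op, B[:, j])[0] for j in range(B.shape[1])])
```

A block GMRES, as in Belos, builds one Krylov space for all columns of B at once, which typically reduces the total number of operator applications.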
Voila: A visual object-oriented iterative linear algebra problem solving environment
Edwards, H.C.; Hayes, L.J.
1994-12-31
Application of iterative methods to solve a large linear system of equations currently involves writing a program which calls iterative method subprograms from a large software package. These subprograms have complex interfaces which are difficult to use and even more difficult to program. A problem solving environment specifically tailored to the development and application of iterative methods is needed. This need will be fulfilled by Voila, a problem solving environment which provides a visual programming interface to object-oriented iterative linear algebra kernels. Voila will provide several quantum improvements over current iterative method problem solving environments. First, programming and applying iterative methods is considerably simplified through Voila's visual programming interface. Second, iterative method algorithm implementations are independent of any particular sparse matrix data structure through Voila's object-oriented kernels. Third, the compile-link-debug process is eliminated as Voila operates as an interpreter.
Miller, Naomi J.; Perrin, Tess E.; Royer, Michael P.; Wilkerson, Andrea M.; Beeson, Tracy A.
2014-05-20
Although lensed troffers are numerous, there are many other types of optical systems as well. This report looked at the performance of three linear (T8) LED lamps chosen primarily based on their luminous intensity distributions (narrow, medium, and wide beam angles) as well as a benchmark fluorescent lamp in five different troffer types. Also included are the results of a subjective evaluation. Results show that linear (T8) LED lamps can improve luminaire efficiency in K12-lensed and parabolic-louvered troffers, effect little change in volumetric and high-performance diffuse-lensed type luminaires, but reduce efficiency in recessed indirect troffers. These changes can be accompanied by visual appearance and visual comfort consequences, especially when LED lamps with clear lenses and narrow distributions are installed. Linear (T8) LED lamps with diffuse apertures exhibited wider beam angles, performed more similarly to fluorescent lamps, and received better ratings from observers. Guidance is provided on which luminaires are the best candidates for retrofitting with linear (T8) LED lamps.
Positrons for linear colliders
Ecklund, S.
1987-11-01
The requirements of a positron source for a linear collider are briefly reviewed, followed by methods of positron production and production of photons by electromagnetic cascade showers. Cross sections for the electromagnetic cascade shower processes of positron-electron pair production and Compton scattering are compared. A program used for Monte Carlo analysis of electromagnetic cascades is briefly discussed, and positron distributions obtained from several runs of the program are discussed. Photons from synchrotron radiation and from channeling are also mentioned briefly, as well as positron collection, transverse focusing techniques, and longitudinal capture. Computer ray tracing is then briefly discussed, followed by space-charge effects and thermal heating and stress due to showers. (LEW)
Gropp, William D.
2014-06-23
With the coming end of Moore's law, it has become essential to develop new algorithms and techniques that can provide the performance needed by demanding computational science applications, especially those that are part of the DOE science mission. This work was part of a multi-institution, multi-investigator project that explored several approaches to develop algorithms that would be effective at the extreme scales and with the complex processor architectures that are expected at the end of this decade. The work by this group developed new performance models that have already helped guide the development of highly scalable versions of an algebraic multigrid solver, new programming approaches designed to support numerical algorithms on heterogeneous architectures, and a new, more scalable version of conjugate gradient, an important algorithm in the solution of very large linear systems of equations.
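The conjugate gradient method mentioned above can be stated compactly. The following is a textbook sketch for a small dense symmetric positive-definite system, not the scalable variant developed in the project; it is included only to make the algorithm being restructured concrete.

```python
# Illustrative sketch only: textbook conjugate gradient for a small dense
# symmetric positive-definite (SPD) system, using plain Python lists.

def cg(A, b, tol=1e-10, max_iter=100):
    """Solve A x = b for an SPD matrix A (list of lists) by conjugate gradient."""
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x (x = 0 initially)
    p = r[:]                      # initial search direction
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x
```

The communication-reducing versions studied at extreme scale reorganize exactly these inner products and matrix-vector products.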
FPGA-based Klystron linearization implementations in scope of ILC
Omet, M.; Michizono, S.; Varghese, P.; Schlarb, H.; Branlard, J.; Cichalewski, W.
2015-01-23
We report the development and implementation of four FPGA-based predistortion-type klystron linearization algorithms. Klystron linearization is essential for the realization of the ILC, since the klystrons are required to operate 7% below saturation in power. The work presented was performed in international collaborations at the Fermi National Accelerator Laboratory (FNAL), USA and the Deutsches Elektronen-Synchrotron (DESY), Germany. With the newly developed algorithms, the generation of correction factors on the FPGA was improved compared to past algorithms, avoiding quantization and decreasing memory requirements. At FNAL, three algorithms were tested at the Advanced Superconducting Test Accelerator (ASTA), demonstrating a successful implementation for one algorithm and a proof of principle for two algorithms. Furthermore, the functionality of the algorithm implemented at DESY was demonstrated successfully in a simulation.
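The predistortion idea itself is simple to state: apply the inverse of the amplifier's saturation curve ahead of it so the cascade responds linearly. The tanh model below is purely an assumed stand-in for the tube's real transfer curve, and this floating-point sketch ignores the fixed-point FPGA concerns the abstract addresses.

```python
import math

# Hypothetical illustration of predistortion-type linearization. The
# klystron's amplitude saturation is modeled here as v_out = tanh(v_in)
# (an assumption, not the real tube curve); the predistorter applies the
# inverse so that klystron(predistort(v)) == v below saturation.

def klystron(v_in):
    """Toy saturating amplifier model (assumed for illustration)."""
    return math.tanh(v_in)

def predistort(v_cmd):
    """Correction: drive level that makes the cascade output equal v_cmd."""
    return math.atanh(v_cmd)  # valid for |v_cmd| < 1, i.e. below saturation

def linearized(v_cmd):
    return klystron(predistort(v_cmd))
```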
Daniel, David J; Mc Pherson, Allen; Thorp, John R; Barrett, Richard; Clay, Robert; De Supinski, Bronis; Dube, Evi; Heroux, Mike; Janssen, Curtis; Langer, Steve; Laros, Jim
2011-01-14
A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.
Linear induction accelerator parameter options
Birx, D.L.; Caporaso, G.J.; Reginato, L.L.
1986-04-21
The principal undertaking of the Beam Research Program over the past decade has been the investigation of propagating intense self-focused beams. Recently, the major activity of the program has shifted toward the investigation of converting high quality electron beams directly to laser radiation. During the early years of the program, accelerator development was directed toward the generation of very high current (>10 kA), high energy beams (>50 MeV). In its new mission, the program has shifted the emphasis toward the production of lower current beams (>3 kA) with high brightness (>10{sup 6} A/(rad-cm){sup 2}) at very high average power levels. In efforts to produce these intense beams, the state of the art of linear induction accelerators (LIA) has been advanced to the point of satisfying not only the current requirements but also future national needs.
Translation and integration of numerical atomic orbitals in linear molecules
Heinäsmäki, Sami
2014-02-14
We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.
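A two-dimensional Gaussian quadrature of the kind the abstract builds on can be sketched as a tensor product of one-dimensional rules. The sketch below uses the classical 3-point Gauss-Legendre nodes on [-1, 1] (exact for polynomials of degree at most 5 in each variable), whereas the paper constructs its rules from pseudospectral grids for general numerical radial meshes.

```python
import math

# Tensor-product 2-D Gauss-Legendre rule, shown with the classical 3-point
# nodes and weights on [-1, 1]. Exact for polynomials of degree <= 5 in
# each variable; illustrative only, not the paper's mesh-based quadrature.

NODES = [-math.sqrt(3.0 / 5.0), 0.0, math.sqrt(3.0 / 5.0)]
WEIGHTS = [5.0 / 9.0, 8.0 / 9.0, 5.0 / 9.0]

def gauss2d(f):
    """Integrate f(x, y) over [-1, 1] x [-1, 1] with the tensor-product rule."""
    return sum(wx * wy * f(x, y)
               for x, wx in zip(NODES, WEIGHTS)
               for y, wy in zip(NODES, WEIGHTS))
```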
Algorithms for builder guidelines
Balcomb, J.D.; Lekov, A.B.
1989-06-01
The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets is also presented. 5 refs., 3 tabs.
Asynchronous parallel generating set search for linearly-constrained optimization.
Lewis, Robert Michael; Griffin, Joshua D.; Kolda, Tamara Gibson
2006-08-01
Generating set search (GSS) is a family of direct search methods that encompasses generalized pattern search and related methods. We describe an algorithm for asynchronous linearly-constrained GSS, which has some complexities that make it different from both the asynchronous bound-constrained case as well as the synchronous linearly-constrained case. The algorithm has been implemented in the APPSPACK software framework and we present results from an extensive numerical study using CUTEr test problems. We discuss the results, both positive and negative, and conclude that GSS is a reliable method for solving small-to-medium sized linearly-constrained optimization problems without derivatives.
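The poll-and-contract pattern that defines GSS is easy to exhibit in its simplest member, synchronous compass search, shown here unconstrained for brevity. The asynchronous, linearly-constrained APPSPACK algorithm elaborates this same skeleton; the code below is only a minimal sketch of the family's core loop.

```python
# Minimal synchronous compass search: poll the generating set of coordinate
# directions, accept any improving point, and contract the step size on an
# unsuccessful poll. Unconstrained and sequential for illustration only.

def compass_search(f, x0, step=1.0, tol=1e-8, max_iter=10000):
    x = list(x0)
    n = len(x)
    # Generating set: the 2n coordinate directions +e_i and -e_i.
    dirs = [[1.0 if j == i else 0.0 for j in range(n)] for i in range(n)]
    dirs += [[-d for d in v] for v in dirs]
    fx = f(x)
    for _ in range(max_iter):
        improved = False
        for d in dirs:
            trial = [x[j] + step * d[j] for j in range(n)]
            ft = f(trial)
            if ft < fx:
                x, fx, improved = trial, ft, True
                break
        if not improved:
            step *= 0.5          # unsuccessful poll: contract the step
            if step < tol:
                break
    return x, fx
```

No derivatives are ever evaluated, which is exactly what makes the method attractive for the simulation-based objectives GSS targets.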
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Snapshot: Linear Lamps (TLEDs)
Broader source: Energy.gov [DOE]
A report using LED Lighting Facts data to examine the current state of the market for linear fluorescent lamps. (8 pages, July 2016)
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.
Inpainting with sparse linear combinations of exemplars
Wohlberg, Brendt
2008-01-01
We introduce a new exemplar-based inpainting algorithm based on representing the region to be inpainted as a sparse linear combination of blocks extracted from similar parts of the image being inpainted. This method is conceptually simple, being computed by functional minimization, and avoids the complexity of correctly ordering the filling in of missing regions of other exemplar-based methods. Initial performance comparisons on small inpainting regions indicate that this method provides similar or better performance than other recent methods.
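The core idea can be sketched as a least-squares fit: choose combination coefficients that match the exemplars to the patch on its known pixels, then use the same combination to fill the holes. This conceptual sketch uses two exemplars so the normal equations stay 2x2; it is not the paper's sparse functional-minimization solver.

```python
# Conceptual sketch of exemplar-based filling (assumed simplification, not
# the paper's algorithm): fit a*ex1 + b*ex2 to the patch by least squares
# over the known pixels only, then fill the missing pixels from the fit.

def inpaint(patch, known, ex1, ex2):
    """patch: pixel values (holes arbitrary); known: bool mask;
    ex1, ex2: exemplar patches of equal length. Returns the filled patch."""
    idx = [i for i, k in enumerate(known) if k]
    # Normal equations for min || a*ex1 + b*ex2 - patch || over known pixels.
    a11 = sum(ex1[i] * ex1[i] for i in idx)
    a12 = sum(ex1[i] * ex2[i] for i in idx)
    a22 = sum(ex2[i] * ex2[i] for i in idx)
    b1 = sum(ex1[i] * patch[i] for i in idx)
    b2 = sum(ex2[i] * patch[i] for i in idx)
    det = a11 * a22 - a12 * a12
    a = (b1 * a22 - b2 * a12) / det
    b = (a11 * b2 - a12 * b1) / det
    return [p if k else a * ex1[i] + b * ex2[i]
            for i, (p, k) in enumerate(zip(patch, known))]
```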
Linearly polarized fiber amplifier
Kliner, Dahv A.; Koplow, Jeffery P.
2004-11-30
Optically pumped rare-earth-doped polarizing fibers exhibit significantly higher gain for one linear polarization state than for the orthogonal state. Such a fiber can be used to construct a single-polarization fiber laser, amplifier, or amplified-spontaneous-emission (ASE) source without the need for additional optical components to obtain stable, linearly polarized operation.
Rios, A. B.; Valda, A.; Somacal, H.
2007-10-26
Usually, a tomographic procedure requires a set of projections around the object under study and a mathematical processing of such projections through reconstruction algorithms. An accurate reconstruction requires a proper number of projections (angular sampling) and a proper number of elements in each projection (linear sampling). However, in several practical cases it is not possible to fulfill these conditions, leading to the so-called problem of few projections. In this case, iterative reconstruction algorithms are more suitable than analytic ones. In this work we present a program written in C++ that provides an environment for two iterative algorithm implementations, one algebraic and the other statistical. The software allows the user a full definition of the acquisition and reconstruction geometries used for the reconstruction algorithms but also to perform projection and backprojection operations. A set of analysis tools was implemented for the characterization of the convergence process. We analyze the performance of the algorithms on numerical phantoms and present the reconstruction of experimental data with few projections coming from transmission X-ray and micro PIXE (Particle-Induced X-Ray Emission) images.
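The algebraic flavor of iterative reconstruction can be sketched with the Kaczmarz (ART) iteration: each projection equation a_i . x = b_i is enforced in turn by projecting the current image estimate onto its hyperplane. This is an illustration of the idea only, not the C++ implementation described in the abstract.

```python
# Kaczmarz / ART sketch: cyclically project the current estimate onto the
# hyperplane of each measurement equation a_i . x = b_i. Converges for
# consistent systems, which is why it tolerates few projections gracefully.

def kaczmarz(A, b, sweeps=50):
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            dot = sum(a * xj for a, xj in zip(a_i, x))
            norm2 = sum(a * a for a in a_i)
            lam = (b_i - dot) / norm2       # signed distance to the hyperplane
            x = [xj + lam * a for xj, a in zip(x, a_i)]
    return x
```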
Energy Science and Technology Software Center (OSTI)
002651IBMPC00 Algorithm for Accounting for the Interactions of Multiple Renewable Energy Technologies in Estimation of Annual Performance
PC Basic Linear Algebra Subroutines
Energy Science and Technology Software Center (OSTI)
1992-03-09
PC-BLAS is a highly optimized version of the Basic Linear Algebra Subprograms (BLAS), a standardized set of thirty-eight routines that perform low-level operations on vectors of numbers in single and double-precision real and complex arithmetic. Routines are included to find the index of the largest component of a vector, apply a Givens or modified Givens rotation, multiply a vector by a constant, determine the Euclidean length, perform a dot product, swap and copy vectors, and find the norm of a vector. The BLAS have been carefully written to minimize numerical problems such as loss of precision and underflow and are designed so that the computation is independent of the interface with the calling program. This independence is achieved through judicious use of Assembly language macros. Interfaces are provided for Lahey Fortran 77, Microsoft Fortran 77, and Ryan-McFarland IBM Professional Fortran.
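The contracts of a few of the level-1 operations the record lists can be written out in plain Python to show their semantics (PC-BLAS implements these in tuned assembly behind Fortran-callable interfaces; the names below follow the standard double-precision BLAS naming).

```python
import math

# Semantics of four standard level-1 BLAS operations, in plain Python.

def daxpy(alpha, x, y):
    """y <- alpha*x + y."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Dot product x . y."""
    return sum(xi * yi for xi, yi in zip(x, y))

def dnrm2(x):
    """Euclidean length; scaling (as in the reference BLAS) guards underflow."""
    scale = max((abs(xi) for xi in x), default=0.0)
    if scale == 0.0:
        return 0.0
    return scale * math.sqrt(sum((xi / scale) ** 2 for xi in x))

def idamax(x):
    """Index of the component with largest absolute value."""
    return max(range(len(x)), key=lambda i: abs(x[i]))
```

The scaling inside `dnrm2` illustrates the abstract's point about minimizing loss of precision and underflow: squaring `x_i / scale` instead of `x_i` keeps intermediate values near 1.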
A cooperative control algorithm for camera based observational systems.
Young, Joseph G.
2012-01-01
Over the last several years, there has been considerable growth in camera based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera based observational system. Specifically, we present a receding horizon control where we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
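The uncertainty-driven revisiting described above rests on a standard Kalman filter property: a target's variance grows during prediction steps while it goes unobserved and shrinks when it is measured, giving the scheduler a natural priority signal. The one-dimensional sketch below illustrates just that mechanism; the noise constants are illustrative, and the MILP scheduling layer is not shown.

```python
# 1-D Kalman filter sketch (static target). The variance p inflates by the
# process noise q at every unobserved step and deflates on a measurement,
# which is the signal a camera scheduler can use to revisit stale targets.

def predict(x, p, q=0.5):
    """Time update: state unchanged for a static target; uncertainty grows."""
    return x, p + q

def update(x, p, z, r=1.0):
    """Measurement update with observation z of variance r."""
    k = p / (p + r)                      # Kalman gain
    return x + k * (z - x), (1.0 - k) * p
```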
Bosamykin, V.S.; Pavlovskiy, A.I.
1984-03-01
A linear induction accelerator of charged particles, containing inductors and an acceleration circuit, characterized by the fact that, for the purpose of increasing the power of the accelerator, each inductor is made in the form of a toroidal line with distributed parameters, from one end of which in the gap of the line a ring commutator is included, and from the other end of the line a resistor is hooked up, is described.
Combustion powered linear actuator
Fischer, Gary J.
2007-09-04
The present invention provides robotic vehicles having wheeled and hopping mobilities that are capable of traversing (e.g. by hopping over) obstacles that are large in size relative to the robot and, are capable of operation in unpredictable terrain over long range. The present invention further provides combustion powered linear actuators, which can include latching mechanisms to facilitate pressurized fueling of the actuators, as can be used to provide wheeled vehicles with a hopping mobility.
Buttram, M.T.; Ginn, J.W.
1988-06-21
A linear induction accelerator includes a plurality of adder cavities arranged in a series and provided in a structure which is evacuated so that a vacuum inductance is provided between each adder cavity and the structure. An energy storage system for the adder cavities includes a pulsed current source and a respective plurality of bipolar converting networks connected thereto. The bipolar high-voltage, high-repetition-rate square pulse train sets and resets the cavities. 4 figs.
Emma, P.
1995-06-01
The Stanford Linear Collider (SLC) is the first and only high-energy e{sup +}e{sup {minus}} linear collider in the world. Its most remarkable features are high intensity, submicron sized, polarized (e{sup {minus}}) beams at a single interaction point. The main challenges posed by these unique characteristics include machine-wide emittance preservation, consistent high intensity operation, polarized electron production and transport, and the achievement of a high degree of beam stability on all time scales. In addition to serving as an important machine for the study of Z{sup 0} boson production and decay using polarized beams, the SLC is also an indispensable source of hands-on experience for future linear colliders. Each new year of operation has been highlighted with a marked improvement in performance. The most significant improvements for the 1994-95 run include new low impedance vacuum chambers for the damping rings, an upgrade to the optics and diagnostics of the final focus systems, and a higher degree of polarization from the electron source. As a result, the average luminosity has nearly doubled over the previous year with peaks approaching 10{sup 30} cm{sup {minus}2}s{sup {minus}1} and an 80% electron polarization at the interaction point. These developments as well as the remaining identifiable performance limitations will be discussed.
Graphical representation of parallel algorithmic processes. Master's thesis
Williams, E.M.
1990-12-01
Algorithm animation is a visualization method used to enhance understanding of the functioning of an algorithm or program. Visualization is used for many purposes, including education, algorithm research, performance analysis, and program debugging. This research applies algorithm animation techniques to programs developed for parallel architectures, with specific emphasis on the Intel iPSC/2 hypercube. While both P-time and NP-time algorithms can potentially benefit from using visualization techniques, the set of NP-complete problems provides fertile ground for developing parallel applications, since the combinatoric nature of the problems makes finding the optimum solution impractical. The primary goals for this visualization system are: Data should be displayed as it is generated. The interface to the target program should be transparent, allowing the animation of existing programs. Flexibility - the system should be able to animate any algorithm. The resulting system incorporates and extends two AFIT products: the AFIT Algorithm Animation Research Facility (AAARF) and the Parallel Resource Analysis Software Environment (PRASE). AAARF is an algorithm animation system developed primarily for sequential programs, but is easily adaptable for use with parallel programs. PRASE is an instrumentation package that extracts system performance data from programs on the Intel hypercubes. Since performance data is an essential part of analyzing any parallel program, views of the performance data are provided as an elementary part of the system. Custom software is designed to interface these systems and to display the program data. The program chosen as the example for this study is a member of the NP-complete problem set; it is a parallel implementation of a general.
Parallelism of the SANDstorm hash algorithm.
Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree
2009-09-01
Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed and they are not able to take advantage of the current trend of being run on multi-core platforms. Being limited in speed limits their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrates a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
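The structural trick that makes a hash parallelizable can be illustrated with a generic two-level tree hash: leaf blocks are hashed independently, hence concurrently, and a final hash over the concatenated digests binds them together. SANDstorm's actual construction is different and purpose-built; SHA-256 stands in below purely for illustration.

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

# Generic tree-hashing sketch (illustrative, not the SANDstorm algorithm):
# independent leaf hashes run concurrently, then one root hash combines the
# leaf digests. hashlib releases the GIL on large buffers, so threads help.

def tree_hash(data, block_size=1 << 16):
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    with ThreadPoolExecutor() as pool:
        digests = list(pool.map(lambda b: hashlib.sha256(b).digest(), blocks))
    return hashlib.sha256(b"".join(digests)).hexdigest()
```

Because each leaf is independent, the leaf stage scales linearly with cores, which is the behavior the OpenMP benchmarks in the report demonstrate for SANDstorm.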
Confirming the Lanchestrian linear-logarithmic model of attrition
Hartley, D.S. III.
1990-12-01
This paper is the fourth in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based in part on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms have been validated. The results of this paper confirm the results of earlier papers, using a large database of historical results. The homogeneous linear-logarithmic Lanchestrian attrition model is validated to the extent possible with current initial and final force size data and is consistent with the Iwo Jima data. A particular differential linear-logarithmic model is described that fits the data very well. A version of Helmbold's victory predicting parameter is also confirmed, with an associated probability function. 37 refs., 73 figs., 68 tabs.
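For readers unfamiliar with the model family, the baseline against which the linear-logarithmic model is compared is the classic Lanchester square law, dB/dt = -rR and dR/dt = -bB; the generalized models vary the exponents on the force sizes. A simple Euler integration of the square law, with illustrative coefficients, looks like this:

```python
# Euler integration of the Lanchester square law: each side's attrition rate
# is proportional to the opposing force size. The quantity b*B^2 - r*R^2 is
# conserved exactly in continuous time, hence the name "square law".

def lanchester_square(B0, R0, b, r, dt=0.001, t_end=1.0):
    B, R = float(B0), float(R0)
    for _ in range(int(t_end / dt)):
        dB = -r * R
        dR = -b * B
        B = max(B + dB * dt, 0.0)
        R = max(R + dR * dt, 0.0)
    return B, R
```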
Linear Fresnel | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Concentrating Solar Power » Linear Fresnel Linear Fresnel DOE funds solar research and development (R&D) in linear Fresnel systems as one of four CSP technologies aiming to meet the goals of the SunShot Initiative. Linear Fresnel systems, which are a type of linear concentrator, are active in Germany, Spain, Australia, India, and the United States. The SunShot Initiative funds R&D on linear Fresnel systems and related aspects within the industry, national laboratories and universities.
Van Atta, C.M.; Beringer, R.; Smith, L.
1959-01-01
A linear accelerator of heavy ions is described. The basic contributions of the invention consist of a method and apparatus for obtaining high energy particles of an element with an increased charge-to-mass ratio. The method comprises the steps of ionizing the atoms of an element, accelerating the resultant ions to an energy substantially equal to one Mev per nucleon, stripping orbital electrons from the accelerated ions by passing the ions through a curtain of elemental vapor disposed transversely of the path of the ions to provide a second charge-to-mass ratio, and finally accelerating the resultant stripped ions to a final energy of at least ten Mev per nucleon.
Energy Science and Technology Software Center (OSTI)
2006-11-17
Software that simulates and inverts electromagnetic field data for subsurface electrical properties (electrical conductivity) of geological media. The software treats data produced by a time harmonic source field excitation arising from the following antenna geometry: loops and grounded bipoles, as well as point electric and magnetic dipoles. The inversion process is carried out using a non-linear conjugate gradient optimization scheme, which minimizes the misfit between field data and model data using a least-squares criterion. The software is an upgrade from the code NLCGCS_MP ver 1.0. The upgrade includes the following components: incorporation of new 1D field sourcing routines to more accurately simulate the 3D electromagnetic field for arbitrary geological media, and treatment for generalized finite-length transmitting antenna geometry (antennas with vertical and horizontal component directions). In addition, the software has been upgraded to treat transverse anisotropy in electrical conductivity.
History of Proton Linear Accelerators
DOE R&D Accomplishments [OSTI]
Alvarez, L. W.
1987-01-01
Some personal recollections are presented that relate to the author's experience developing linear accelerators, particularly for protons. (LEW)
Brambley, Michael R.; Katipamula, Srinivas
2006-10-06
Pacific Northwest National Laboratory (PNNL) is assisting the U.S. Department of Energy (DOE) Distributed Energy (DE) Program by developing advanced control algorithms that would lead to development of tools to enhance performance and reliability, and reduce emissions of distributed energy technologies, including combined heat and power technologies. This report documents phase 2 of the program, providing a detailed functional specification for algorithms for performance monitoring and commissioning verification, scheduled for development in FY 2006. The report identifies the systems for which algorithms will be developed, the specific functions of each algorithm, metrics which the algorithms will output, and inputs required by each algorithm.
Kliman, G.B.; Brynsvold, G.V.; Jahns, T.M.
1989-08-22
A winding and method of winding for a submersible linear pump for pumping liquid sodium are disclosed. The pump includes a stator having a central cylindrical duct preferably vertically aligned. The central vertical duct is surrounded by a system of coils in slots. These slots are interleaved with magnetic flux conducting elements, these magnetic flux conducting elements forming a continuous magnetic field conduction path along the stator. The central duct has placed therein a cylindrical magnetic conducting core, this core having a cylindrical diameter less than the diameter of the cylindrical duct. The core once placed to the duct defines a cylindrical interstitial pumping volume of the pump. This cylindrical interstitial pumping volume preferably defines an inlet at the bottom of the pump, and an outlet at the top of the pump. Pump operation occurs by static windings in the outer stator sequentially conveying toroidal fields from the pump inlet at the bottom of the pump to the pump outlet at the top of the pump. The winding apparatus and method of winding disclosed uses multiple slots per pole per phase with parallel winding legs on each phase equal to or less than the number of slots per pole per phase. The slot sequence per pole per phase is chosen to equalize the variations in flux density of the pump sodium as it passes into the pump at the pump inlet with little or no flux and acquires magnetic flux in passage through the pump to the pump outlet. 4 figs.
Kliman, Gerald B.; Brynsvold, Glen V.; Jahns, Thomas M.
1989-01-01
A winding and method of winding for a submersible linear pump for pumping liquid sodium is disclosed. The pump includes a stator having a central cylindrical duct preferably vertically aligned. The central vertical duct is surrounded by a system of coils in slots. These slots are interleaved with magnetic flux conducting elements, these magnetic flux conducting elements forming a continuous magnetic field conduction path along the stator. The central duct has placed therein a cylindrical magnetic conducting core, this core having a cylindrical diameter less than the diameter of the cylindrical duct. The core once placed to the duct defines a cylindrical interstitial pumping volume of the pump. This cylindrical interstitial pumping volume preferably defines an inlet at the bottom of the pump, and an outlet at the top of the pump. Pump operation occurs by static windings in the outer stator sequentially conveying toroidal fields from the pump inlet at the bottom of the pump to the pump outlet at the top of the pump. The winding apparatus and method of winding disclosed uses multiple slots per pole per phase with parallel winding legs on each phase equal to or less than the number of slots per pole per phase. The slot sequence per pole per phase is chosen to equalize the variations in flux density of the pump sodium as it passes into the pump at the pump inlet with little or no flux and acquires magnetic flux in passage through the pump to the pump outlet.
Meisner, John W.; Moore, Robert M.; Bienvenue, Louis L.
1985-03-19
Electromagnetic linear induction pump for liquid metal which includes a unitary pump duct. The duct comprises two substantially flat parallel spaced-apart wall members, one being located above the other, and two parallel opposing side members interconnecting the wall members. Located within the duct are a plurality of web members interconnecting the wall members and extending parallel to the side members whereby the wall members, side members and web members define a plurality of fluid passageways, each of the fluid passageways having substantially the same cross-sectional flow area. Attached to an outer surface of each side member is an electrically conductive end bar for the passage of an induced current therethrough. A multi-phase, electrical stator is located adjacent each of the wall members. The duct, stators, and end bars are enclosed in a housing which is provided with an inlet and outlet in fluid communication with opposite ends of the fluid passageways in the pump duct. In accordance with a preferred embodiment, the inlet and outlet include a transition means which provides for a transition from a round cross-sectional flow path to a substantially rectangular cross-sectional flow path defined by the pump duct.
Energy Science and Technology Software Center (OSTI)
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
Linear Collider Physics Resource Book Snowmass 2001
Ronan , M.T.
2001-06-01
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e{sup +}e{sup -} linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e{sup +}e{sup -} linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e{sup +}e{sup -} linear collider; in any scenario that is now discussed, physics will benefit from the new information that e{sup +}e{sup -} experiments can provide. This last point merits further emphasis. If a new accelerator could be designed and
A robust return-map algorithm for general multisurface plasticity
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Adhikary, Deepak P.; Jayasundara, Chandana T.; Podgorney, Robert K.; Wilkins, Andy H.
2016-06-16
Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without using the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm results in complete success. Since it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. Furthermore, the main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting.
Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; Thornquist, Heidi
2012-01-01
Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
The TESLA superconducting linear collider
the TESLA Collaboration
1997-03-01
This paper summarizes the present status of the studies for a superconducting Linear Collider (TESLA). © 1997 American Institute of Physics.
2d PDE Linear Symmetric Matrix Solver
Energy Science and Technology Software Center (OSTI)
1983-10-01
ICCG2 (Incomplete Cholesky factorized Conjugate Gradient algorithm for 2d symmetric problems) was developed to solve a linear symmetric matrix system arising from a 9-point discretization of two-dimensional elliptic and parabolic partial differential equations found in plasma physics applications, such as resistive MHD, spatial diffusive transport, and phase space transport (Fokker-Planck equation) problems. These problems share the common feature of being stiff and requiring implicit solution techniques. When these parabolic or elliptic PDEs are discretized with finite-difference or finite-element methods, the resulting matrix system is frequently of block-tridiagonal form. To use ICCG2, the discretization of the two-dimensional partial differential equation and its boundary conditions must result in a block-tridiagonal supermatrix composed of elementary tridiagonal matrices. The incomplete Cholesky conjugate gradient algorithm is used to solve the linear symmetric matrix equation. Loops are arranged to vectorize on the Cray1 with the CFT compiler, wherever possible. Recursive loops, which cannot be vectorized, are written for optimum scalar speed. For matrices lacking symmetry, ILUCG2 should be used. Similar methods in three dimensions are available in ICCG3 and ILUCG3. A general source containing extensions and macros, which must be processed by a pre-compiler to obtain the standard FORTRAN source, is provided along with the standard FORTRAN source because it is believed to be more readable. The pre-compiler is not included, but pre-compilation may be performed by a text editor as described in the UCRL-88746 Preprint.
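The preconditioned conjugate gradient iteration that ICCG2 builds on can be sketched compactly. The pure-Python version below is a minimal illustration, not the package's FORTRAN implementation: it uses a dense list-of-lists matrix and a simple Jacobi (diagonal) preconditioner as a stand-in for the incomplete Cholesky factorization, and the function name `pcg` is assumed for illustration. The iteration structure is identical whichever preconditioner is plugged in.

```python
def pcg(A, b, x0=None, tol=1e-10, max_iter=200):
    """Preconditioned conjugate gradient for a symmetric positive-definite
    matrix (dense list-of-lists here).  ICCG replaces the diagonal (Jacobi)
    preconditioner used below with an incomplete Cholesky factorization;
    the CG iteration itself is unchanged."""
    n = len(b)
    x = list(x0) if x0 else [0.0] * n
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    r = [b[i] - Ax_i for i, Ax_i in enumerate(matvec(x))]   # initial residual
    M_inv = [1.0 / A[i][i] for i in range(n)]               # Jacobi preconditioner
    z = [M_inv[i] * r[i] for i in range(n)]
    p = list(z)
    rz = sum(r[i] * z[i] for i in range(n))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:           # converged
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(r[i] * z[i] for i in range(n))
        p = [z[i] + (rz_new / rz) * p[i] for i in range(n)]
        rz = rz_new
    return x
```

For an SPD 2x2 system the iteration converges in at most two steps, which makes the sketch easy to check by hand.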
Nonlinear Global Optimization Using Curdling Algorithm
Energy Science and Technology Software Center (OSTI)
1996-03-01
An algorithm for performing curdling optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
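The grid-refinement idea can be illustrated with a toy one-dimensional minimizer: evaluate the function on a coarse grid, keep the cell around the best point, refine, and repeat. This is a hedged sketch of the general derivative-free approach, not the Curdling program itself (which handles up to four dimensions, constraints, and extremal regions rather than single points); the function name and parameters are illustrative assumptions.

```python
def grid_refine_min(f, lo, hi, levels=8, pts=11):
    """Derivative-free grid-refinement minimization of a 1-d function:
    sample f on a uniform grid over [lo, hi], shrink the interval to the
    cell around the best sample, and repeat for a fixed number of levels."""
    for _ in range(levels):
        xs = [lo + (hi - lo) * i / (pts - 1) for i in range(pts)]
        fx = [f(x) for x in xs]
        k = fx.index(min(fx))            # index of the best grid point
        lo = xs[max(k - 1, 0)]           # keep the neighboring cell
        hi = xs[min(k + 1, pts - 1)]
    return 0.5 * (lo + hi)
```

Because no derivatives are formed, the scheme is numerically stable in the sense the abstract describes, at the cost of many function evaluations.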
Statistics of voltage drop in distribution circuits: a dynamic programming approach
Turitsyn, Konstantin S
2010-01-01
We analyze a power distribution line with high penetration of distributed generation and strong variations of power consumption and generation levels. In the presence of uncertainty the statistical description of the system is required to assess the risks of power outages. In order to find the probability of exceeding the constraints for voltage levels we introduce the probability distribution of maximal voltage drop and propose an algorithm for finding this distribution. The algorithm is based on the assumption of random but statistically independent distribution of loads on buses. Linear complexity in the number of buses is achieved through the dynamic programming technique. We illustrate the performance of the algorithm by analyzing a simple 4-bus system with high variations of load levels.
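A toy version of such a dynamic program, under the abstract's assumption of statistically independent loads, propagates the joint distribution of (cumulative drop, running maximum) bus by bus. The discrete per-bus drop increments and the function name below are illustrative assumptions, not the paper's actual model of a distribution feeder.

```python
from collections import defaultdict

def max_drop_distribution(increment_pmfs):
    """Exact distribution of the maximal cumulative voltage drop along a line.

    increment_pmfs: one dict {drop_increment: probability} per bus, the
    increments assumed statistically independent.  The DP state is the pair
    (cumulative drop, running maximum), so a single sweep over the buses
    (linear in their number) yields the distribution of the maximal drop."""
    states = {(0.0, 0.0): 1.0}  # (cumulative, max) -> probability
    for pmf in increment_pmfs:
        nxt = defaultdict(float)
        for (s, m), p in states.items():
            for d, q in pmf.items():
                s2 = s + d
                nxt[(s2, max(m, s2))] += p * q
        states = dict(nxt)
    dist = defaultdict(float)
    for (_, m), p in states.items():   # marginalize out the cumulative drop
        dist[m] += p
    return dict(dist)
```

With distributed generation a bus increment can be negative, which is why the running maximum, not just the endpoint drop, has to be tracked.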
Energy Science and Technology Software Center (OSTI)
2005-03-30
The Robotic Follow Algorithm allows any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems, with thermal or visual tracking, and with other tracking methods such as radio frequency tags.
DOE - Office of Legacy Management -- Stanford Linear Accelerator Center -
Office of Legacy Management (LM)
FUSRAP Considered Sites. Site: Stanford Linear Accelerator Center (005). More information at www.slac.stanford.edu. Designated Name: Not Designated under FUSRAP. Alternate Name: SLAC. Location: Palo Alto, California. Evaluation Year: Not considered for FUSRAP - in another program. Site Operations: Research. Site Disposition: Remediation completed by DOE Office of Environmental Management in 2014. DOE Office of Science is responsible for long-term
Acoustic emission linear pulse holography
Collins, H.D.; Busse, L.J.; Lemon, D.K.
1983-10-25
This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.
Linear Accelerator | Advanced Photon Source
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Linear Accelerator Producing brilliant x-ray beams at the APS begins with electrons emitted from a cathode heated to 1100 C. The electrons are accelerated by high-voltage...
Automating linear accelerator quality assurance
Eckhause, Tobias; Thorwarth, Ryan; Moran, Jean M.; Al-Hallaq, Hania; Farrey, Karl; Ritter, Timothy; DeMarco, John; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Park, SungYong; Perez, Mario; Booth, Jeremy T.
2015-10-15
Purpose: The purpose of this study was 2-fold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish a reference for other centers. Methods: The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac to include jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. Results: For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. The
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
Energy Science and Technology Software Center (OSTI)
1998-07-01
GenOpt is a generic optimization program for nonlinear, constrained optimization. For evaluating the objective function, any simulation program that communicates over text files can be coupled to GenOpt without code modification. No analytic properties of the objective function are used by GenOpt. Optimization algorithms and numerical methods can be implemented in a library and shared among users. GenOpt offers an interface between the optimization algorithm and its kernel to make the implementation of new algorithms fast and easy. Different algorithms of constrained and unconstrained minimization can be added to a library. Algorithms for approximating derivatives and performing line search will be implemented. The objective function is evaluated as a black-box function by an external simulation program. The kernel of GenOpt deals with data I/O, result storage and reporting, the interface to the external simulation program, and error handling. An abstract optimization class offers methods to interface the GenOpt kernel and the optimization algorithm library.
MineSeis - A MATLAB GUI Program to
Office of Scientific and Technical Information (OSTI)
MineSeis - A MATLAB GUI Program to Calculate Synthetic Seismograms from a Linear, ... The program was written with the MATLAB Graphical User Interface (GUI) technique ...
Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)
Energy Science and Technology Software Center (OSTI)
2012-05-31
The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
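The simplest instance of dynamic programming over a tree decomposition is the maximum weighted independent set problem on an actual tree, i.e. a width-1 decomposition. The sketch below illustrates that special case only; it is not INDDGO's general-width implementation, and the function name and adjacency-list format are illustrative assumptions.

```python
def mwis_tree(adj, weights, root=0):
    """Maximum weighted independent set (MWIS) on a tree via dynamic
    programming: incl[v]/excl[v] hold the best subtree weights with v
    included/excluded.  Iterative traversal avoids recursion limits."""
    parent = {root: None}
    order, stack = [], [root]
    while stack:                       # build a parent-before-child order
        v = stack.pop()
        order.append(v)
        for u in adj[v]:
            if u != parent[v]:
                parent[u] = v
                stack.append(u)
    incl, excl = {}, {}
    for v in reversed(order):          # children are processed before v
        incl[v] = weights[v]
        excl[v] = 0
        for u in adj[v]:
            if parent.get(u) == v:
                incl[v] += excl[u]
                excl[v] += max(incl[u], excl[u])
    return max(incl[root], excl[root])
```

A tree decomposition generalizes this table from single vertices to bags of vertices, which is where the framework's serial and parallel machinery comes in.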
A new augmentation based algorithm for extracting maximal chordal subgraphs
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices (a chord). Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms’ parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
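Algorithms in this family repeatedly need a chordality test. A standard one, maximum cardinality search (MCS) followed by a perfect-elimination-ordering check, can be sketched as follows; this is the textbook subroutine, not the paper's augmentation algorithm, and the dict-of-sets graph format is an illustrative choice.

```python
def is_chordal(adj):
    """Chordality test: run maximum cardinality search (MCS), then verify
    that the reverse of the visit order is a perfect elimination ordering.
    adj: {vertex: set_of_neighbors}."""
    weight = {v: 0 for v in adj}
    order, numbered = [], set()
    for _ in range(len(adj)):          # MCS: repeatedly take the vertex with
        v = max((u for u in adj if u not in numbered),
                key=lambda u: weight[u])   # most already-numbered neighbors
        order.append(v)
        numbered.add(v)
        for u in adj[v]:
            if u not in numbered:
                weight[u] += 1
    peo = list(reversed(order))        # candidate perfect elimination order
    pos = {v: i for i, v in enumerate(peo)}
    for v in peo:
        later = [u for u in adj[v] if pos[u] > pos[v]]
        if later:
            w = min(later, key=lambda u: pos[u])   # nearest later neighbor
            if any(u != w and u not in adj[w] for u in later):
                return False                       # a chord is missing
    return True
```

A 4-cycle fails the check until a chord is added, which matches the definition above.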
2d PDE Linear Asymmetric Matrix Solver
Energy Science and Technology Software Center (OSTI)
1983-10-01
ILUCG2 (Incomplete LU factorized Conjugate Gradient algorithm for 2d problems) was developed to solve a linear asymmetric matrix system arising from a 9-point discretization of two-dimensional elliptic and parabolic partial differential equations found in plasma physics applications, such as plasma diffusion, equilibria, and phase space transport (Fokker-Planck equation) problems. These equations share the common feature of being stiff and requiring implicit solution techniques. When these parabolic or elliptic PDEs are discretized with finite-difference or finite-element methods, the resulting matrix system is frequently of block-tridiagonal form. To use ILUCG2, the discretization of the two-dimensional partial differential equation and its boundary conditions must result in a block-tridiagonal supermatrix composed of elementary tridiagonal matrices. A generalization of the incomplete Cholesky conjugate gradient algorithm is used to solve the matrix equation. Loops are arranged to vectorize on the Cray1 with the CFT compiler, wherever possible. Recursive loops, which cannot be vectorized, are written for optimum scalar speed. For problems having a symmetric matrix, ICCG2 should be used since it runs up to four times faster and uses approximately 30% less storage. Similar methods in three dimensions are available in ICCG3 and ILUCG3. A general source, containing extensions and macros, which must be processed by a pre-compiler to obtain the standard FORTRAN source, is provided along with the standard FORTRAN source because it is believed to be more readable. The pre-compiler is not included, but pre-compilation may be performed by a text editor as described in the UCRL-88746 Preprint.
MEMORANDUM OF UNDERSTANDING Between The Numerical Algorithms Group Ltd
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Between The Numerical Algorithms Group Ltd and The University of California, as Management and Operating Contractor for Lawrence Berkeley National Laboratory on a Visitor Exchange Program This Memorandum of Understanding (MOU) is by and between the Numerical Algorithms Group Ltd (NAG) with a registered address at: Wilkinson House, Jordan hill Road, Oxford, UK and the University of California, as Management and Operating Contractor for Lawrence Berkeley National Laboratory, including its
Improved multiprocessor garbage collection algorithms
Newman, I.A.; Stallard, R.P.; Woodward, M.C.
1983-01-01
Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.
Henry, J.J.
1961-09-01
A linear count-rate meter is designed to provide a highly linear output while receiving counting rates from one cycle per second to 100,000 cycles per second. Input pulses enter a linear discriminator and then are fed to a trigger circuit which produces positive pulses of uniform width and amplitude. The trigger circuit is connected to a one-shot multivibrator. The multivibrator output pulses have a selected width. Feedback means are provided for preventing transistor saturation in the multivibrator which improves the rise and decay times of the output pulses. The multivibrator is connected to a diode-switched, constant current metering circuit. A selected constant current is switched to an averaging circuit for each pulse received, and for a time determined by the received pulse width. The average output meter current is proportional to the product of the counting rate, the constant current, and the multivibrator output pulse width.
A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials
Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A
2008-12-04
We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
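The implicit BDF-plus-Newton structure can be seen in a scalar toy: backward Euler (BDF1) for du/dt = f(u) with a Newton solve at each step. For the PFM system the scalar Jacobian division below becomes the preconditioned GMRES solve of the Jacobian linear system; everything here (function names, tolerances, iteration caps) is an illustrative assumption, not the paper's solver.

```python
def bdf1_newton(f, dfdu, u0, dt, nsteps, tol=1e-12):
    """Backward-difference (BDF1 / implicit Euler) time stepping with a
    Newton iteration at each step.  The scalar division by the Jacobian
    (1 - dt * f'(v)) stands in for the Krylov solve used for large systems."""
    u = u0
    for _ in range(nsteps):
        u_prev, v = u, u
        for _ in range(50):                      # Newton iteration
            g = v - u_prev - dt * f(v)           # residual of the BDF1 equation
            if abs(g) < tol:
                break
            v -= g / (1.0 - dt * dfdu(v))        # Newton update (scalar Jacobian)
        u = v
    return u
```

For the linear decay f(u) = -u, each step reduces u by exactly the factor 1/(1 + dt), which gives a closed form to check against.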
Linear Fresnel Power Plant Illustration
Broader source: Energy.gov [DOE]
This concentrating solar power (CSP) graphic shows flat or slightly curved mirrors mounted on trackers on the ground, configured to reflect sunlight onto a receiver tube fixed in space above the mirrors. A small parabolic mirror is sometimes added atop the receiver to further focus the sunlight. Linear CSP collectors capture the sun's energy with large mirrors that reflect and focus the sunlight onto a linear receiver tube. The receiver contains a fluid that is heated by the sunlight and then used to create superheated steam that spins a turbine that drives a generator to produce electricity.
Linear electric field mass spectrometry
McComas, D.J.; Nordholt, J.E.
1992-12-01
A mass spectrometer and methods for mass spectrometry are described. The apparatus is compact and of low weight and has a low power requirement, making it suitable for use on a space satellite and as a portable detector for the presence of substances. High mass resolution measurements are made by timing ions moving through a gridless cylindrically symmetric linear electric field. 8 figs.
Linear electric field mass spectrometry
McComas, David J.; Nordholt, Jane E.
1992-01-01
A mass spectrometer and methods for mass spectrometry. The apparatus is compact and of low weight and has a low power requirement, making it suitable for use on a space satellite and as a portable detector for the presence of substances. High mass resolution measurements are made by timing ions moving through a gridless cylindrically symmetric linear electric field.
Solar Power Ramp Events Detection Using an Optimized Swinging Door Algorithm
Cui, Mingjian; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang
2015-08-05
Solar power ramp events (SPREs) significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetrations in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) to enhance the state of the art in SPRE detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring in clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA can significantly improve the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.
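The first stage described above, swinging door segmentation, can be sketched as follows. This is a generic SDA compression sketch under a symmetric tolerance band, not NREL's OpSDA code; the function name, the (start, end) return convention, and the tolerance parameter are assumptions for illustration.

```python
def swinging_door(times, values, eps):
    """Segment a signal into piecewise-linear runs with the swinging door
    algorithm: a segment ends when no single line from its anchor point can
    stay within +/- eps of every intervening sample.
    Returns a list of (start_index, end_index) segments."""
    segments, i, n = [], 0, len(values)
    while i < n - 1:
        anchor_t, anchor_v = times[i], values[i]
        up = float("inf")    # tightest allowed slope from above ("upper door")
        lo = float("-inf")   # tightest allowed slope from below ("lower door")
        j, end = i + 1, i + 1
        while j < n:
            dt = times[j] - anchor_t
            up = min(up, (values[j] + eps - anchor_v) / dt)
            lo = max(lo, (values[j] - eps - anchor_v) / dt)
            if lo > up:      # the doors have swung shut: start a new segment
                break
            end = j
            j += 1
        segments.append((i, end))
        i = end
    return segments
```

A flat run followed by a steep rise splits into two segments at the ramp onset, which is exactly the boundary the dynamic-programming stage then merges into significant ramps.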
Solar Power Ramp Events Detection Using an Optimized Swinging Door Algorithm: Preprint
Cui, Mingjian; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang
2015-08-07
Solar power ramp events (SPREs) significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetrations in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) to enhance the state of the art in SPRE detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring in clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA can significantly improve the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.
International Linear Collider Technical Design Report - Volume...
Office of Scientific and Technical Information (OSTI)
International Linear Collider Technical Design Report - Volume 2: Physics ...
Algorithmic crystal chemistry: A cellular automata approach
Krivovichev, S. V.
2012-01-15
Atomic-molecular mechanisms of crystal growth can be modeled based on crystallochemical information using cellular automata (a particular case of finite deterministic automata). In particular, the formation of heteropolyhedral layered complexes in uranyl selenates can be modeled applying a one-dimensional three-colored cellular automaton. The use of the theory of calculations (in particular, the theory of automata) in crystallography allows one to interpret crystal growth as a computational process (the realization of an algorithm or program with a finite number of steps).
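A one-dimensional three-colored (three-state) cellular automaton of the kind described is straightforward to simulate. The sketch below assumes periodic boundaries and a user-supplied local rule; the totalistic rule used in the test is an arbitrary example, not the uranyl selenate growth model.

```python
def run_ca(rule, initial, steps):
    """Evolve a one-dimensional three-state (three-colored) cellular
    automaton with periodic boundaries.  `rule` maps a (left, cell, right)
    triple of states in {0, 1, 2} to the cell's next state.
    Returns the list of generations, each as a tuple of states."""
    row = list(initial)
    history = [tuple(row)]
    n = len(row)
    for _ in range(steps):
        # every cell updates simultaneously from its previous neighborhood
        row = [rule(row[(i - 1) % n], row[i], row[(i + 1) % n])
               for i in range(n)]
        history.append(tuple(row))
    return history
```

Interpreting each generation as one layer of the growing crystal gives the "computation as growth" reading the abstract describes.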
Cast dielectric composite linear accelerator
Sanders, David M.; Sampayan, Stephen; Slenes, Kirk; Stoller, H. M.
2009-11-10
A linear accelerator having cast dielectric composite layers integrally formed with conductor electrodes in a solventless fabrication process, with the cast dielectric composite preferably having a nanoparticle filler in an organic polymer such as a thermosetting resin. By incorporating this cast dielectric composite the dielectric constant of critical insulating layers of the transmission lines of the accelerator are increased while simultaneously maintaining high dielectric strengths for the accelerator.
Segmented rail linear induction motor
Cowan, M. Jr.; Marder, B.M.
1996-09-03
A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces. 6 figs.
Segmented rail linear induction motor
Cowan, Jr., Maynard; Marder, Barry M.
1996-01-01
A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces.
Precision linear ramp function generator
Jatko, W. Bruce (Knoxville, TN); McNeilly, David R. (Maryville, TN); Thacker, Louis H. (Knoxville, TN)
1986-01-01
A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
Precision linear ramp function generator
Jatko, W.B.; McNeilly, D.R.; Thacker, L.H.
1984-08-01
A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
St Aubin, J. Keyvanloo, A.; Fallone, B. G.; Vassiliev, O.
2015-02-15
Purpose: Accurate radiotherapy dose calculation algorithms are essential to any successful radiotherapy program, considering the high level of dose conformity and modulation in many of today's treatment plans. As technology continues to progress, such as is the case with novel MRI-guided radiotherapy systems, the need for dose calculation algorithms to accurately predict delivered dose in increasingly challenging scenarios is vital. To this end, a novel deterministic solution to the first-order linear Boltzmann transport equation has been developed which accurately calculates x-ray based radiotherapy doses in the presence of magnetic fields. Methods: The deterministic formalism discussed here, with the inclusion of magnetic fields, is outlined mathematically using a discrete ordinates angular discretization in an attempt to leverage existing deterministic codes. It is compared against the EGSnrc Monte Carlo code, utilizing the emf-macros addition, which calculates the effects of electromagnetic fields. This comparison is performed in an inhomogeneous phantom designed to present a challenging case for deterministic calculations, in 0, 0.6, and 3 T magnetic fields oriented parallel and perpendicular to the radiation beam. The accuracy of the formalism against Monte Carlo was evaluated with a gamma comparison using a standard 2%/2 mm criterion and a more stringent 1%/1 mm criterion, for a standard reference 10 x 10 cm{sup 2} field as well as a smaller 2 x 2 cm{sup 2} field. Results: Greater than 99.8% (94.8%) of all points analyzed passed the 2%/2 mm (1%/1 mm) gamma criterion for all magnetic field strengths and orientations investigated. All dosimetric changes resulting from the inclusion of magnetic fields were accurately calculated using the deterministic formalism. However, despite the algorithm's high degree of accuracy, it was noticed that this formalism was not unconditionally stable using a discrete ordinates angular discretization
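The gamma comparison used for validation can be sketched in one dimension as below. This is a simplified global gamma index over discrete points; clinical implementations work in 3-D and interpolate between sample positions.

```python
import numpy as np

def gamma_1d(x, dose_ref, dose_eval, dd=0.02, dta=2.0):
    """1-D global gamma index: for each reference point, the minimum over
    evaluated points of sqrt((dist/dta)^2 + (dose_diff/(dd*Dmax))^2).
    dd is the dose criterion as a fraction of the max reference dose;
    dta is the distance-to-agreement in the units of x (e.g. mm)."""
    dmax = dose_ref.max()
    g = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        dist = (x - xi) / dta                     # scaled distances
        ddiff = (dose_eval - di) / (dd * dmax)    # scaled dose differences
        g[i] = np.sqrt(dist ** 2 + ddiff ** 2).min()
    return g
```

The pass rate reported in such comparisons is then `(g <= 1).mean()`.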
Cubit Adaptive Meshing Algorithm Library
Energy Science and Technology Software Center (OSTI)
2004-09-01
CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad, and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition, and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.
Acoustic emission linear pulse holography
Collins, H. Dale; Busse, Lawrence J.; Lemon, Douglas K.
1985-01-01
Defects in a structure are imaged as they propagate, using their emitted acoustic energy as a monitored source. Short bursts of acoustic energy propagate through the structure to a discrete-element receiver array. A reference timing transducer located between the array and the inspection zone initiates a series of time-of-flight measurements. The resulting time-of-flight measurements are then treated as aperture data and are transferred to a computer for reconstruction of a synthetic linear holographic image. The images can be displayed and stored as a record of defect growth.
Nonferromagnetic linear variable differential transformer
Ellis, James F.; Walstrom, Peter L.
1977-06-14
A nonferromagnetic linear variable differential transformer for accurately measuring mechanical displacements in the presence of high magnetic fields is provided. The device utilizes a movable primary coil inside a fixed secondary coil that consists of two series-opposed windings. Operation is such that the secondary output voltage is maintained in phase (depending on polarity) with the primary voltage. The transducer is well-suited to long cable runs and is useful for measuring small displacements in the presence of high or alternating magnetic fields.
Energy Science and Technology Software Center (OSTI)
2013-07-24
Version 00. Calculation of decay heat is of great importance for the design of the shielding of discharged fuel, the design and transport of fuel-storage flasks, and the management of the resulting radioactive waste. These are relevant to safety and have large economic and legislative consequences. In the HEATKAU code, a new approach has been proposed to evaluate the decay heat power after a fission burst of a fissile nuclide for short cooling times. This method is based on the numerical solution of coupled linear differential equations that describe the decays and build-ups of the minor fission product (MFP) nuclides. HEATKAU is written entirely in the MATLAB programming environment. The MATLAB data can be stored in a standard, fast and easy-access, platform-independent binary format which is easy to visualize.
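The coupled linear decay/build-up system can be sketched for a hypothetical two-member chain A -> B -> (stable). The decay constants and mean decay energies below are invented for illustration; HEATKAU uses actual MFP nuclide data, and this sketch is in Python rather than MATLAB.

```python
import numpy as np

lam = np.array([0.1, 0.02])    # hypothetical decay constants (1/s)
energy = np.array([1.5, 0.8])  # hypothetical mean energy per decay (MeV)
A = np.array([[-lam[0], 0.0],
              [ lam[0], -lam[1]]])  # coupled linear decay/build-up matrix

def decay_heat(n0, t, steps=10000):
    """Integrate dN/dt = A N with classical RK4 and return the nuclide
    inventory N(t) plus the decay heat P = sum_i lambda_i * E_i * N_i."""
    h = t / steps
    n = np.asarray(n0, float)
    for _ in range(steps):
        k1 = A @ n
        k2 = A @ (n + 0.5 * h * k1)
        k3 = A @ (n + 0.5 * h * k2)
        k4 = A @ (n + h * k3)
        n = n + (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    return n, float((lam * energy) @ n)
```

For this small chain the result matches the analytic Bateman solution to machine precision.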
DOE Publishes CALiPER Report on Linear (T8) LED Lamps in Recessed Troffers
Broader source: Energy.gov [DOE]
The U.S. Department of Energy's CALiPER program has released Report 21.2, which is part of a series of investigations on linear LED lamps. Report 21.2 focuses on the performance of three linear (T8...
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Wang, C. L.
2016-05-17
On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly-resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the non-linear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
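The linearization idea can be sketched for a Gaussian LRF: the logarithm of the photon counts is quadratic in pixel index, so the peak position follows from an ordinary linear least-squares fit. This illustrates the principle only; it is not the published FluoroBancroft algorithm or a calibrated detector LRF.

```python
import numpy as np

def gaussian_peak_position(pixels, counts):
    """Estimate the source position by linearising a Gaussian LRF:
    log(counts) = c2*x^2 + c1*x + c0, so the centre is x0 = -c1/(2*c2)."""
    mask = counts > 0               # avoid log of empty pixels
    c2, c1, _ = np.polyfit(pixels[mask], np.log(counts[mask]), 2)
    return -c1 / (2 * c2)
```

Unlike a maximum-photon estimate, which is quantized to whole pixels, the fit recovers sub-pixel positions.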
International linear collider reference design report
Aarons, G.
2007-06-22
The International Linear Collider will give physicists a new cosmic doorway to explore energy regimes beyond the reach of today's accelerators. A proposed electron-positron collider, the ILC will complement the Large Hadron Collider, a proton-proton collider at the European Center for Nuclear Research (CERN) in Geneva, Switzerland, together unlocking some of the deepest mysteries in the universe. With LHC discoveries pointing the way, the ILC -- a true precision machine -- will provide the missing pieces of the puzzle. Consisting of two linear accelerators that face each other, the ILC will hurl some 10 billion electrons and their anti-particles, positrons, toward each other at nearly the speed of light. Superconducting accelerator cavities operating at temperatures near absolute zero give the particles more and more energy until they smash in a blazing crossfire at the centre of the machine. In the machine, which stretches approximately 35 kilometres in length, the beams collide 14,000 times every second at extremely high energies -- 500 billion electron volts (GeV). Each spectacular collision creates an array of new particles that could answer some of the most fundamental questions of all time. The current baseline design allows for an upgrade to a 50-kilometre, 1 trillion-electron-volt (TeV) machine during the second stage of the project. This reference design provides the first detailed technical snapshot of the proposed future electron-positron collider, defining in detail the technical parameters and components that make up each section of the 31-kilometer long accelerator. The report will guide the development of the worldwide R&D program, motivate international industrial studies and serve as the basis for the final engineering design needed to make an official project proposal later this decade.
Reticle stage based linear dosimeter
Berger, Kurt W.
2007-03-27
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for a photolithography system that includes: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.
Reticle stage based linear dosimeter
Berger, Kurt W.
2005-06-14
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for a photolithography system that includes: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.
Belief network algorithms: A study of performance
Jitnah, N.
1996-12-31
This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.
Optimized Algorithms Boost Combustion Research
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Optimized Algorithms Boost Combustion Research: Methane Flame Simulations Run 6x Faster on NERSC's Hopper Supercomputer. November 25, 2014. Contact: Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov. Turbulent combustion simulations, which provide input to the design of more fuel-efficient combustion systems, have gotten their own efficiency boost, thanks to researchers from the Computational Research Division (CRD) at Lawrence Berkeley National Laboratory.
Modeling patterns in data using linear and related models
Engelhardt, M.E.
1996-06-01
This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models.
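The one-covariate (time-trend) case discussed in the report can be sketched with ordinary least squares. The yearly event-rate data below are invented for illustration, and the sketch uses Python rather than the report's SAS programs.

```python
import numpy as np

# Hypothetical yearly event rates; fit rate = b0 + b1*year to test for a trend.
years = np.array([1.0, 2, 3, 4, 5, 6, 7, 8])
rate = np.array([5.1, 4.8, 4.9, 4.2, 4.0, 3.8, 3.5, 3.2])

X = np.column_stack([np.ones_like(years), years])        # design matrix
beta, *_ = np.linalg.lstsq(X, rate, rcond=None)          # OLS estimates
resid = rate - X @ beta
s2 = resid @ resid / (len(years) - 2)                    # residual variance
se_b1 = np.sqrt(s2 / ((years - years.mean()) ** 2).sum())  # slope std. error
t_stat = beta[1] / se_b1                                 # trend t-statistic
```

A large negative t-statistic, as here, indicates a statistically significant downward time trend.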
High Performance Preconditioners and Linear Solvers
Energy Science and Technology Software Center (OSTI)
2006-07-27
Hypre is a software library focused on the solution of large, sparse linear systems of equations on massively parallel computers.
Berkeley Algorithms Help Researchers Understand Dark Energy
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Berkeley Algorithms Help Researchers Understand Dark Energy. November 24, 2014. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov ...
The Computational Physics Program of the national MFE Computer Center
Mirin, A.A.
1989-01-01
Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.
A Linac Simulation Code for Macro-Particles Tracking and Steering Algorithm Implementation
Sun, Yipeng
2012-05-03
In this paper, a linac simulation code written in Fortran90 is presented and several simulation examples are given. This code is optimized to implement linac alignment and steering algorithms, and to evaluate the effects of accelerator errors such as RF phase and acceleration gradient errors and quadrupole and BPM misalignment. It can track a single particle or a bunch of particles through normal linear accelerator elements such as quadrupoles, RF cavities, dipole correctors and drift spaces. A one-to-one steering algorithm and a global alignment (steering) algorithm are implemented in this code.
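The single-particle tracking described above can be sketched with 2x2 transfer matrices in one transverse plane. This is a thin-lens, drift-plus-quadrupole simplification for illustration; the actual code is in Fortran90 and handles thick elements, RF cavities and correctors.

```python
import numpy as np

def drift(L):
    """Transfer matrix of a field-free drift of length L."""
    return np.array([[1.0, L], [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (focusing for f > 0)."""
    return np.array([[1.0, 0.0], [-1.0 / f, 1.0]])

def track(state, elements):
    """Propagate a (x, x') phase-space vector through a beamline."""
    for m in elements:
        state = m @ state
    return state
```

For example, a parallel ray at x = 1 through a quadrupole of focal length 2 followed by a drift of length 2 arrives on axis, as expected for a lens focusing at its focal distance.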
GPU Accelerated Event Detection Algorithm
Energy Science and Technology Software Center (OSTI)
2011-05-25
Smart grids require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: there is a need for (i) event detection algorithms that can scale with the size of the data; (ii) algorithms that can not only handle the multi-dimensional nature of the data but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; and (iii) algorithms that can operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the proposed approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge for the proposed approach is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. We used recent advances in tensor decomposition techniques, which reduce computational complexity, to monitor the change between successive windows and detect anomalies in the same manner as described above. We therefore propose to develop parallel solutions on many-core systems such as GPUs, because these algorithms involve many numerical operations and are highly data-parallelizable.
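The window-to-window SVD reduction in step (a) can be sketched as follows. This is illustrative only: it compares leading right singular vectors of successive non-overlapping windows, whereas GAEDA's actual implementation uses incremental SVD and tensor decompositions on GPUs.

```python
import numpy as np

def svd_change_series(data, window=20):
    """Reduce a multi-dimensional time series (rows = time, cols = sensors)
    to a univariate 'change' series: 1 - |cos angle| between the leading
    right singular vectors of successive windows (larger = bigger change)."""
    scores = []
    prev = None
    for start in range(0, len(data) - window + 1, window):
        seg = data[start:start + window]
        _, _, vt = np.linalg.svd(seg - seg.mean(axis=0), full_matrices=False)
        v = vt[0]                       # dominant cross-sensor direction
        if prev is not None:
            scores.append(1.0 - abs(prev @ v))
        prev = v
    return np.array(scores)
```

A threshold on this univariate series then flags windows whose correlation structure changed abruptly.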
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-24
We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l1 norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop—a speed-up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.
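The l1 minimization at the heart of the master problem reduces to an LP via the standard variable split x = u - v with u, v >= 0. The sketch below shows this reduction on a generic constraint set A x >= b; it is not the MatPower-scale implementation, and the sparsity of LP vertex solutions is what makes the recovered device placements sparse.

```python
import numpy as np
from scipy.optimize import linprog

def l1_min(A, b):
    """Minimise ||x||_1 subject to A x >= b via the split x = u - v,
    u, v >= 0, so the objective sum(u) + sum(v) equals the l1 norm."""
    m, n = A.shape
    c = np.ones(2 * n)
    A_ub = np.hstack([-A, A])        # -A(u - v) <= -b  <=>  A x >= b
    res = linprog(c, A_ub=A_ub, b_ub=-b, bounds=[(0, None)] * (2 * n))
    return res.x[:n] - res.x[n:]
```

Because LP solvers return vertex solutions, the minimiser typically has few nonzero components.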
Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms
2002-05-01
best pattern recognition results. With a small number of features in a data set an exact solution can be determined. However, the number of possible combinations increases exponentially with the number of features and an alternate means of finding a solution must be found. We developed and implemented a technique for finding solutions in data sets with both small and large numbers of features. The VERI interface tools were written using the Tcl/Tk GUI programming language, version 8.1. Although the Tcl/Tk packages are designed to run on multiple computer platforms, we have concentrated our efforts to develop a user interface for the ubiquitous DOS environment. The VERI algorithms are compiled, executable programs. The interfaces run the VERI algorithms in Leave-One-Out mode using the Euclidean metric.
Automated DNA Base Pair Calling Algorithm
Energy Science and Technology Software Center (OSTI)
1999-07-07
The procedure solves the problem of calling the DNA base pair sequence from two-channel electropherogram separations in an automated fashion. The core of the program involves a peak picking algorithm based upon first, second, and third derivative spectra for each electropherogram channel, signal levels as a function of time, peak spacing, base pair signal-to-noise sequence patterns, frequency vs. ratio of the two-channel histograms, and confidence levels generated during the run. The ratios of the two channels at peak centers can be used to accurately and reproducibly determine the base pair sequence. A further enhancement is a novel Gaussian deconvolution used to determine the peak heights used in generating the ratio.
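A stripped-down version of derivative-based peak picking, using only first and second derivatives rather than the full multi-criteria picker described above, might look like:

```python
import numpy as np

def pick_peaks(signal, min_height=0.0):
    """Flag a peak where the first derivative changes sign from positive
    to non-positive and the curvature (second derivative) is negative."""
    d1 = np.gradient(signal)
    d2 = np.gradient(d1)
    return [i for i in range(len(signal) - 1)
            if d1[i] > 0 >= d1[i + 1] and d2[i] < 0
            and signal[i] >= min_height]
```

In the real base-calling setting, the two-channel count ratio would then be evaluated at each flagged peak centre.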
Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment
Energy Science and Technology Software Center (OSTI)
1997-08-05
A derivative-free, grid-refinement algorithm for nonlinear optimization was developed and implemented in software as OPTIMIZE. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.
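A grid-refinement search of this kind can be sketched in one dimension as follows. This toy version tracks a single extremal region around the best grid cell, whereas OPTIMIZE retains all extremal regions; the grid size and iteration count are arbitrary choices.

```python
def grid_refine(f, lo, hi, iters=30, pts=9):
    """Derivative-free 1-D minimisation: evaluate f on a coarse grid,
    keep the cell around the best point, and refine within it."""
    for _ in range(iters):
        xs = [lo + (hi - lo) * i / (pts - 1) for i in range(pts)]
        best = min(range(pts), key=lambda i: f(xs[i]))
        step = (hi - lo) / (pts - 1)
        lo = max(lo, xs[best] - step)   # shrink the bracket around the
        hi = min(hi, xs[best] + step)   # best grid point
    return (lo + hi) / 2
```

Since no derivatives are formed, the iteration is insensitive to noise in f and cannot diverge; each pass shrinks the bracket by a constant factor.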
Optimized Algorithm for Collision Probability Calculations in Cubic Geometry
Garcia, R.D.M.
2004-06-15
An optimized algorithm for implementing a recently developed method of computing collision probabilities (CPs) in three dimensions is reported in this work for the case of a homogeneous cube. Use is made of the geometrical regularity of the domain to rewrite, in a very compact way, the approximate formulas for calculating CPs in general three-dimensional geometry that were derived in a previous work by the author. The ensuing gain in computation time is found to be substantial: While the computation time associated with the general formulas increases as K{sup 2}, where K is the number of elements used in the calculation, that of the specific formulas increases only linearly with K. Accurate numerical results are given for several test cases, and an extension of the algorithm for computing the self-collision probability for a hexahedron is reported at the end of the work.
International Workshop on Linear Colliders 2010
None
2011-10-06
IWLC2010 International Workshop on Linear Colliders 2010. ECFA-CLIC-ILC joint meeting: Monday 18 October - Friday 22 October 2010. Venue: CERN and CICG (International Conference Centre Geneva, Switzerland). This year, the International Workshop on Linear Colliders organized by the European Committee for Future Accelerators (ECFA) will study the physics, detectors and accelerator complex of a linear collider covering both CLIC and ILC options. Contact: Workshop Secretariat. IWLC2010 is hosted by CERN.
Developing and Implementing the Data Mining Algorithms in RAVEN
Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian
2015-09-01
The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation, while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling a set of parameter values. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e., recognizing patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
LED Replacements for Linear Fluorescent Lamps Webcast
Broader source: Energy.gov [DOE]
In this June 20, 2011 webcast on LED products marketed as replacements for linear fluorescent lamps, Jason Tuenge of the Pacific Northwest National Laboratory (PNNL) discussed current Lighting...
Linear Thermite Charge - Energy Innovation Portal
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
The Linear Thermite Charge (LTC) is designed to rapidly cut through concrete and steel ... Can cut both concrete and steel at one time making rebar/concrete structural elements ...
Ultra-high vacuum photoelectron linear accelerator
Yu, David U.L.; Luo, Yan
2013-07-16
An rf linear accelerator for producing an electron beam. The outer wall of the rf cavity of said linear accelerator is perforated to allow gas inside said rf cavity to flow to a pressure chamber surrounding said rf cavity, which has means of ultra-high-vacuum pumping of the cathode of said rf linear accelerator. Said rf linear accelerator is used to accelerate polarized or unpolarized electrons produced by a photocathode, to accelerate thermally heated electrons produced by a thermionic cathode, or to accelerate rf-heated field emission electrons produced by a field emission cathode.
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
The conventional semi-infinite solution for extracting blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD{sub B}) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migration in tissue for the extraction of αD{sub B}. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied to an in vivo mouse stroke model. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD{sub B} (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noise to the DCS data resulted in αD{sub B} variations, the mean errors in extracting αD{sub B} were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD{sub B} using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not depend on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
Adaptive protection algorithm and system
Hedrick, Paul (Pittsburgh, PA); Toms, Helen L. (Irwin, PA); Miller, Roger M. (Mars, PA)
2009-04-28
An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
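The abstract describes tracing power flow, ranking each breaker, and setting trip points from the ranks. A minimal sketch of that idea follows; the breadth-first trace is a plausible reading of "traces the flow of power", and the halving rule, the feeder topology, and all numbers are hypothetical, not taken from the patented system.

```python
from collections import deque

def rank_breakers(feeders, source):
    """Breadth-first trace of power flow from the source; rank = hop count."""
    rank = {source: 0}
    queue = deque([source])
    while queue:
        node = queue.popleft()
        for child in feeders.get(node, []):
            if child not in rank:
                rank[child] = rank[node] + 1
                queue.append(child)
    return rank

def trip_setpoints(rank, base_amps=1000.0, step=0.5):
    """Hypothetical rule: each level downstream trips at half the level above."""
    return {b: base_amps * (step ** r) for b, r in rank.items()}

# Toy radial feeder: main breaker feeds two feeders, one of which feeds a load.
feeders = {"main": ["fdr1", "fdr2"], "fdr1": ["load1"], "fdr2": []}
ranks = rank_breakers(feeders, "main")
points = trip_setpoints(ranks)
```

Graded setpoints of this kind let a downstream fault trip the nearest breaker before any upstream one, which is the coordination goal the record describes.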
Mesh Algorithms for PDE with Sieve I: Mesh Distribution
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Knepley, Matthew G.; Karpeev, Dmitry A.
2009-01-01
We have developed a new programming framework, called Sieve, to support parallel numerical partial differential equation(s) (PDE) algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or arrows, the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete description of an algorithm to encode a mesh partition and then distribute a mesh, which is independent of the mesh dimension, element shape, or embedding. Moreover, data associated with the mesh can be similarly distributed with exactly the same algorithm. The use of a high level of abstraction within the Sieve leads to several benefits in terms of code reuse, simplicity, and extensibility. We discuss these benefits and compare our approach to other existing mesh libraries.
Parallel Algorithms for Graph Optimization using Tree Decompositions
Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S
2012-06-01
Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
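The dynamic program underlying this work is easiest to see in the width-1 special case, where the tree decomposition is the tree itself. The sketch below solves maximum weighted independent set on a tree with the include/exclude recurrence; it is only the simplest instance of the bounded-tree-width technique, not the paper's general-width parallel algorithm.

```python
import sys

def mwis_tree(adj, weight, root=0):
    """Max weighted independent set on a tree: the width-1 case of the
    tree-decomposition dynamic program."""
    sys.setrecursionlimit(100000)
    def solve(v, parent):
        incl, excl = weight[v], 0
        for u in adj[v]:
            if u == parent:
                continue
            ci, ce = solve(u, v)
            incl += ce           # v in the set: its children must stay out
            excl += max(ci, ce)  # v out: each child subtree chooses freely
        return incl, excl
    return max(solve(root, None))

# Path 0-1-2 with weights 3, 5, 3: taking the two endpoints (3 + 3) beats the middle (5).
adj = {0: [1], 1: [0, 2], 2: [1]}
best = mwis_tree(adj, [3, 5, 3])
```

For width w, the same idea generalizes to tables with up to 2^(w+1) entries per decomposition bag, which is the memory pressure the abstract refers to.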
Finite element analyses of a linear-accelerator electron gun
Iqbal, M. E-mail: muniqbal@ihep.ac.cn; Wasy, A.; Islam, G. U.; Zhou, Z.
2014-02-15
Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in computer aided three-dimensional interactive application for finite element analyses through ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning without any thermally induced failures in the BEPCII linear accelerator.
Swiler, Laura Painton; Eldred, Michael Scott
2009-09-01
This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
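The combination described above, sampling for the aleatory statistics nested inside an optimization (here a simple sweep) over the epistemic interval, can be sketched as follows. The model, the interval [-1, 1], the grid search standing in for interval optimization, and all sample sizes are assumptions for illustration, not the report's stochastic-expansion machinery.

```python
import random
import statistics

def response(x, e):
    # Hypothetical model: aleatory input x, epistemic (interval-valued) parameter e.
    return (x + e) ** 2

def mean_response(e, n=2000, seed=1):
    """Inner loop: sampling-based estimate of the aleatory mean for fixed e."""
    rng = random.Random(seed)
    return statistics.fmean(response(rng.gauss(0.0, 1.0), e) for _ in range(n))

# Outer loop: sweep the epistemic interval [-1, 1] to bound the mean response.
grid = [-1.0 + 0.25 * i for i in range(9)]
means = [mean_response(e) for e in grid]
lower, upper = min(means), max(means)
```

The output is an interval [lower, upper] on the aleatory statistic rather than a single number, which is the defining feature of a mixed aleatory-epistemic analysis.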
SEP Program Planning Template ("Program Planning Template") ...
Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site
Program Evaluation: Program Life Cycle
Broader source: Energy.gov [DOE]
In general, different types of evaluation are carried out over different parts of a program's life cycle (e.g., Creating a program, Program is underway, or Closing out or end of program)....
On Parallel Push-Relabel based Algorithms for Bipartite Maximum Matching
Langguth, Johannes; Azad, Md Ariful; Halappanavar, Mahantesh; Manne, Fredrik
2014-07-01
We study multithreaded push-relabel based algorithms for computing maximum cardinality matching in bipartite graphs. Matching is a fundamental combinatorial (graph) problem with applications in a wide variety of problems in science and engineering. We are motivated by its use in the context of sparse linear solvers for computing maximum transversal of a matrix. We implement and test our algorithms on several multi-socket multicore systems and compare their performance to state-of-the-art augmenting path-based serial and parallel algorithms using a test set composed of a wide range of real-world instances. Building on several heuristics for enhancing performance, we demonstrate good scaling for the parallel push-relabel algorithm. We show that it is comparable to the best augmenting path-based algorithms for bipartite matching. To the best of our knowledge, this is the first extensive study of multithreaded push-relabel based algorithms. In addition to a direct impact on the applications using matching, the proposed algorithmic techniques can be extended to preflow-push based algorithms for computing maximum flow in graphs.
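For concreteness, here is a minimal serial sketch of the augmenting-path baseline this study compares against (Kuhn's algorithm); the parallel push-relabel machinery of the paper is not reproduced, and the toy graph is an assumption for illustration.

```python
def max_bipartite_matching(adj, n_right):
    """Kuhn's augmenting-path algorithm for maximum cardinality matching.
    adj[u] lists the right-side neighbors of left vertex u."""
    match_right = [-1] * n_right  # match_right[v] = left vertex matched to v

    def try_augment(u, visited):
        for v in adj[u]:
            if v in visited:
                continue
            visited.add(v)
            # v is free, or its current partner can be re-matched elsewhere.
            if match_right[v] == -1 or try_augment(match_right[v], visited):
                match_right[v] = u
                return True
        return False

    return sum(try_augment(u, set()) for u in range(len(adj)))

# 3 left vertices, 3 right vertices; a perfect matching exists.
adj = [[0, 1], [0], [1, 2]]
size = max_bipartite_matching(adj, 3)
```

In the sparse-solver application mentioned above, the left and right vertex sets are the rows and columns of a matrix, and a maximum matching yields a maximum transversal of nonzero entries.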
Non-Linear Seismic Soil Structure Interaction (SSI) Method for...
Office of Environmental Management (EM)
Non-Linear Seismic Soil Structure Interaction (SSI) Method for Developing Non-Linear Seismic SSI Analysis Techniques
Voltage regulation in linear induction accelerators
Parsons, William M.
1992-01-01
Improvement in voltage regulation in a Linear Induction Accelerator wherein a varistor, such as a metal oxide varistor, is placed in parallel with the beam accelerating cavity and the magnetic core. The non-linear properties of the varistor result in a more stable voltage across the beam accelerating cavity than with a conventional compensating resistance.
Voltage regulation in linear induction accelerators
Parsons, W.M.
1992-12-29
Improvement in voltage regulation in a linear induction accelerator is disclosed, wherein a varistor, such as a metal oxide varistor, is placed in parallel with the beam accelerating cavity and the magnetic core. The non-linear properties of the varistor result in a more stable voltage across the beam accelerating cavity than with a conventional compensating resistance. 4 figs.
Practical application of equivalent linearization approaches to nonlinear piping systems
Park, Y.J.; Hofmayer, C.H.
1995-05-01
The use of mechanical energy absorbers as an alternative to conventional hydraulic and mechanical snubbers for piping supports has attracted wide interest among researchers and practitioners in the nuclear industry. The basic design concept of energy absorbers (EAs) is to dissipate the vibration energy of piping systems through nonlinear hysteretic action of the EAs under design seismic loads. Therefore, some type of nonlinear analysis needs to be performed in the seismic design of piping systems with EA supports. The equivalent linearization approach (ELA) can be a practical analysis tool for this purpose, particularly when the response spectrum approach (RSA) is also incorporated in the analysis formulations. In this paper, the following ELA/RSA methods are presented and compared to each other regarding their practicality and numerical accuracy: (1) the response spectrum approach using the square root of the sum of squares (SRSS) approximation (denoted RS in this paper); (2) classical ELA based on modal combinations and linear random vibration theory (denoted CELA in this paper); and (3) stochastic ELA based on direct solution of the response covariance matrix (denoted SELA in this paper). New algorithms to convert response spectra to equivalent power spectral density (PSD) functions are presented for both the CELA and SELA methods. The numerical accuracy of the three ELA methods is studied through a parametric error analysis. Finally, the practicality of the presented analysis is demonstrated in two application examples for piping systems with EA supports.
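The SRSS approximation used by the first method combines peak modal responses as the square root of the sum of their squares. A minimal sketch (the modal peak values are hypothetical):

```python
import math

def srss(peaks):
    """Square root of the sum of squares combination of peak modal responses."""
    return math.sqrt(sum(r * r for r in peaks))

# Peak responses of three modes (hypothetical units): combined peak estimate.
combined = srss([3.0, 4.0, 12.0])
```

The SRSS rule assumes the modal peaks are statistically independent; closely spaced modes generally call for a correlated combination rule instead, which is one reason the paper compares it against the CELA and SELA formulations.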
Time Variant Floating Mean Counting Algorithm
Energy Science and Technology Software Center (OSTI)
1999-06-03
This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company, and a provisional patent has been filed on it. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware. The test software was used to verify that the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.
Daylighting simulation: methods, algorithms, and resources
Carroll, William L.
1999-12-01
This document presents work conducted as part of Subtask C, ''Daylighting Design Tools'', Subgroup C2, ''New Daylight Algorithms'', of the IEA SHC Task 21 and the ECBCS Program Annex 29 ''Daylight in Buildings''. The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: Geometry; Light modeling; Characterization of the natural illumination resource; Materials and components properties, representations; and Usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: The commercial computer graphics community (commerce, entertainment); The lighting industry; Architectural rendering and visualization for projects; and Academia: Course materials, research. This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and are used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of
Optically isolated signal coupler with linear response
Kronberg, James W.
1994-01-01
An optocoupler for isolating electrical signals that translates an electrical input signal linearly to an electrical output signal. The optocoupler comprises a light emitter, a light receiver, and a light transmitting medium. The light emitter, preferably a blue, silicon carbide LED, is of the type that provides linear, electro-optical conversion of electrical signals within a narrow wavelength range. Correspondingly, the light receiver, which converts light signals to electrical signals and is preferably a cadmium sulfide photoconductor, is linearly responsive to light signals within substantially the same wavelength range as the blue LED.
LINEAR COLLIDER PHYSICS RESOURCE BOOK FOR SNOWMASS 2001.
ABE,T.; DAWSON,S.; HEINEMEYER,S.; MARCIANO,W.; PAIGE,F.; TURCOT,A.S.; ET AL
2001-05-03
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e{sup +}e{sup {minus}} linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e{sup +}e{sup {minus}} linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e{sup +}e{sup {minus}} linear collider; in any scenario that is now discussed, physics will benefit from the new information that e{sup +}e{sup {minus}} experiments can provide.
Broader source: Directives, Delegations, and Requirements [Office of Management (MA)]
1997-08-21
This volume describes program administration that establishes and maintains effective organizational management and control of the emergency management program. Canceled by DOE G 151.1-3.
Broader source: Energy.gov [DOE]
Residences participating in the Home Energy Rebate or New Home Rebate Program may not also participate in the Weatherization Program.
Status of the SLC (Stanford Linear Collider)
Coupal, D.P.
1989-07-01
This report presents a brief review of the status of the Stanford Linear Collider. Topics covered are: Beam luminosity, Detectors and backgrounds; and Future prospects. 3 refs., 8 figs., 1 tab. (LSP)
Constant-complexity stochastic simulation algorithm with optimal binning
Sanft, Kevin R.; Othmer, Hans G.
2015-08-21
At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
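For reference, the original direct method whose per-step cost the paper improves on can be sketched as follows; the reversible isomerization network, rates, and seed are assumptions for illustration, and the table-based constant-complexity channel selection of the paper is not reproduced.

```python
import math
import random

def ssa_direct(x, rates, stoich, propensity, t_end, seed=42):
    """Gillespie's direct method: each step costs O(number of reaction channels)."""
    rng = random.Random(seed)
    t = 0.0
    while True:
        a = [propensity(j, x, rates) for j in range(len(stoich))]
        a0 = sum(a)
        if a0 == 0.0:
            break  # no reaction can fire
        t += -math.log(1.0 - rng.random()) / a0  # exponential waiting time
        if t > t_end:
            break
        # Choose channel j with probability a[j] / a0 (the O(channels) linear scan).
        r, j = rng.random() * a0, 0
        while r > a[j]:
            r -= a[j]
            j += 1
        x = [xi + s for xi, s in zip(x, stoich[j])]
    return x

# Reversible isomerization A <-> B with unit rate constants.
stoich = [(-1, +1), (+1, -1)]
def prop(j, x, k):
    return k[j] * x[0] if j == 0 else k[j] * x[1]
final = ssa_direct([100, 0], [1.0, 1.0], stoich, prop, t_end=5.0)
```

The linear scan in the selection step is exactly what the binned table data structure of the abstract replaces with constant-time work per event.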
A fast contour descriptor algorithm for supernova image classification
Aragon, Cecilia R.; Aragon, David Bradburn
2006-07-16
We describe a fast contour descriptor algorithm and its application to a distributed supernova detection system (the Nearby Supernova Factory) that processes 600,000 candidate objects in 80 GB of image data per night. Our shape-detection algorithm reduced the number of false positives generated by the supernova search pipeline by 41% while producing no measurable impact on running time. Fourier descriptors are an established method of numerically describing the shapes of object contours, but transform-based techniques are ordinarily avoided in this type of application due to their computational cost. We devised a fast contour descriptor implementation for supernova candidates that meets the tight processing budget of the application. Using the lowest-order descriptors (F{sub 1} and F{sub -1}) and the total variance in the contour, we obtain one feature representing the eccentricity of the object and another denoting its irregularity. Because the number of Fourier terms to be calculated is fixed and small, the algorithm runs in linear time, rather than the O(n log n) time of an FFT. Constraints on object size allow further optimizations so that the total cost of producing the required contour descriptors is about 4n addition/subtraction operations, where n is the length of the contour.
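Because only F(1) and F(-1) are needed, the two terms can be computed directly in linear time instead of via an FFT, as the abstract notes. The sketch below illustrates this; normalization conventions and the exact mapping to the paper's eccentricity and irregularity features are assumptions.

```python
import cmath
import math

def low_order_descriptors(contour):
    """Compute only the F(1) and F(-1) Fourier terms of a closed contour
    directly, in O(n) time, plus the total variance about the centroid."""
    n = len(contour)
    z = [complex(x, y) for x, y in contour]
    mean = sum(z) / n
    f1 = sum((zk - mean) * cmath.exp(-2j * math.pi * k / n)
             for k, zk in enumerate(z)) / n
    fm1 = sum((zk - mean) * cmath.exp(2j * math.pi * k / n)
              for k, zk in enumerate(z)) / n
    var = sum(abs(zk - mean) ** 2 for zk in z) / n
    return abs(f1), abs(fm1), var

# A perfect circle puts all its energy in F(1); F(-1) vanishes.
circle = [(math.cos(2 * math.pi * k / 64), math.sin(2 * math.pi * k / 64))
          for k in range(64)]
a1, am1, var = low_order_descriptors(circle)
```

Each descriptor is a fixed number of multiply-adds per contour point, consistent with the roughly 4n-operation budget quoted in the abstract.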
An efficient algorithm for incompressible N-phase flows
Dong, S.
2014-11-01
We present an efficient algorithm within the phase field framework for simulating the motion of a mixture of N (N ≥ 2) immiscible incompressible fluids, with possibly very different physical properties such as densities, viscosities, and pairwise surface tensions. The algorithm employs a physical formulation for the N-phase system that honors the conservation of mass and momentum and the second law of thermodynamics. We present a method for uniquely determining the mixing energy density coefficients involved in the N-phase model based on the pairwise surface tensions among the N fluids. Our numerical algorithm has several attractive properties that make it computationally very efficient: (i) it has completely de-coupled the computations for different flow variables, and has also completely de-coupled the computations for the (N − 1) phase field functions; (ii) the algorithm only requires the solution of linear algebraic systems after discretization, and no nonlinear algebraic solve is needed; (iii) for each flow variable the linear algebraic system involves only constant and time-independent coefficient matrices, which can be pre-computed during pre-processing, despite the variable density and variable viscosity of the N-phase mixture; (iv) within a time step the semi-discretized system involves only individual de-coupled Helmholtz-type (including Poisson) equations, despite the strongly-coupled phase field system of fourth spatial order at the continuum level; (v) the algorithm is suitable for large density contrasts and large viscosity contrasts among the N fluids. Extensive numerical experiments have been presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts. In particular, we compare our simulations with the de Gennes theory, and demonstrate that our method produces physically accurate results for multiple fluid phases. We also demonstrate the significant and sometimes dramatic effects of gravity
Visiting Faculty Program Program Description
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Visiting Faculty Program Program Description The Visiting Faculty Program seeks to increase the research competitiveness of faculty members and their students at institutions historically underrepresented in the research community in order to expand the workforce vital to Department of Energy mission areas. As part of the program, selected university/college faculty members collaborate with DOE laboratory research staff on a research project of mutual interest. Program Objective The program is
Visiting Faculty Program Program Description
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
covers stipend and travel reimbursement for the 10-week program. Teacher/faculty participants: 1. Program Coordinator: Scott Robbins Email: srobbins@lanl.gov Phone number: 663-5621...
Kok, J.
1988-01-01
For the human programmer, the ease of coding distributed computations depends heavily on the suitability of the programming language employed. With a particular language, it is also important whether the capabilities of one or more parallel architectures can be addressed efficiently by the available language constructs. In this paper we discuss the possibilities of the high-level language Ada, and in particular of its tasking concept, as a descriptional tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Workforce Development & Education: As part of the Lab's education mission to inspire and prepare the next generation of scientists and engineers, Workforce Development & Education runs numerous education programs for all ages of students - from elementary
Linac Alignment Algorithm: Analysis on 1-to-1 Steering
Sun, Yipeng; Adolphsen, Chris; /SLAC
2011-08-19
In a linear accelerator, it is important to achieve good alignment between all of its components (such as quadrupoles, RF cavities, and beam position monitors) in order to better preserve the beam quality during acceleration. After the survey of the main linac components, several beam-based alignment (BBA) techniques can be applied to further optimize the beam trajectory and calculate the corresponding steering magnet strengths. Among these techniques the simplest and most straightforward is the one-to-one (1-to-1) steering technique, which steers the beam from quad center to quad center and removes the betatron oscillation from quad focusing. For a future linear collider such as the International Linear Collider (ILC), the initial beam emittance is very small in the vertical plane (flat beam with {gamma}{epsilon}{sub y} = 20-40 nm), which means the alignment requirement is very tight. In this note, we evaluate the emittance growth with the one-to-one correction algorithm employed, both analytically and numerically. The ILC main linac is then taken as an example to compare the vertical emittance growth after 1-to-1 steering, both from the analytical formulae and from multi-particle tracking simulation. It is demonstrated that the estimated emittance growth from the derived formulae agrees well with the results from numerical simulation, with and without acceleration, respectively.
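In its simplest linear-optics form, 1-to-1 steering reduces to solving R * kicks = -readings, where R is the orbit response matrix from correctors to BPMs, so that the corrected readings vanish at the quad centers. The sketch below illustrates this with a naive dense solve; the 2x2 response matrix, units, and readings are hypothetical, and a real machine would use the measured or modeled response of the full lattice.

```python
def one_to_one_kicks(response, readings):
    """Corrector kicks that zero the BPM readings: solve R * kicks = -readings
    by Gauss-Jordan elimination with partial pivoting (small dense system)."""
    n = len(readings)
    aug = [row[:] + [-readings[i]] for i, row in enumerate(response)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(n):
            if r != c and aug[r][c] != 0.0:
                f = aug[r][c] / aug[c][c]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[c])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

# Hypothetical 2x2 orbit response matrix (mm per mrad) and BPM readings (mm).
kicks = one_to_one_kicks([[2.0, 0.0], [1.0, 3.0]], [4.0, 5.0])
```

Zeroing the BPM readings this way is what removes the betatron oscillation from quad focusing, though, as the note analyzes, it does not by itself minimize the dispersive emittance growth.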
Computer programs for multilocus haplotyping of general pedigrees
Weeks, D.E.; O`Connell, J.R.; Sobel, E.
1995-06-01
We have recently developed and implemented three different computer algorithms for accurate haplotyping with large numbers of codominant markers. Each of these algorithms employs likelihood criteria that correctly incorporate all intermarker recombination fractions. The three programs, HAPLO, SIMCROSS, and SIMWALK, are now available for haplotyping general pedigrees. The HAPLO program will be distributed as part of the Programs for Pedigree Analysis package by Kenneth Lange. The SIMCROSS and SIMWALK programs are available by anonymous ftp from watson.hgen.pitt.edu. Each program is written in FORTRAN 77 and is distributed as source code. 15 refs.
Brau, James E
2013-04-22
The U.S. Linear Collider Detector R&D program, supported by DOE and NSF umbrella grants to the University of Oregon, made significant advances on many critical aspects of the ILC detector program. Progress was made on vertex detector sensor development, silicon and TPC tracking, calorimetry for candidate technologies, and muon detection, as well as on beamline measurements of luminosity, energy, and polarization.
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Slattery, Stuart R.
2015-12-02
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
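The interpolation variant described above can be sketched in its serial, one-dimensional form: build a system from a compactly supported radial basis function at the source points, solve for weights, and evaluate at the destination points. The Wendland C2 kernel, the support radius, the naive dense solve, and the 1D points are illustrative assumptions; the paper's contribution is the parallel sparse-linear-algebra version of this, which is not reproduced here.

```python
def wendland_c2(r, radius):
    """Compactly supported Wendland C2 radial basis function."""
    q = r / radius
    return (1.0 - q) ** 4 * (4.0 * q + 1.0) if q < 1.0 else 0.0

def solve(A, b):
    # Naive Gauss-Jordan elimination for the small dense RBF system.
    n = len(b)
    aug = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(aug[r][c]))
        aug[c], aug[p] = aug[p], aug[c]
        for r in range(n):
            if r != c:
                f = aug[r][c] / aug[c][c]
                aug[r] = [x - f * y for x, y in zip(aug[r], aug[c])]
    return [aug[i][n] / aug[i][i] for i in range(n)]

def rbf_transfer(src_pts, src_vals, dst_pts, radius=2.0):
    """Interpolate a source field onto destination points via RBF weights."""
    A = [[wendland_c2(abs(a - b), radius) for b in src_pts] for a in src_pts]
    w = solve(A, src_vals)
    return [sum(wi * wendland_c2(abs(x - b), radius) for wi, b in zip(w, src_pts))
            for x in dst_pts]

vals = rbf_transfer([0.0, 1.0, 2.0], [0.0, 1.0, 2.0], [0.0, 1.0, 2.0])
```

The compact support is what keeps the assembled system sparse at scale: each destination point only interacts with source points inside the radius, which is why the fast parallel radius search discussed in the abstract matters.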
Dual-range linearized transimpedance amplifier system
Wessendorf, Kurt O.
2010-11-02
A transimpedance amplifier system is disclosed which simultaneously generates a low-gain output signal and a high-gain output signal from an input current signal using a single transimpedance amplifier having two different feedback loops with different amplification factors to generate two different output voltage signals. One of the feedback loops includes a resistor, and the other feedback loop includes another resistor in series with one or more diodes. The transimpedance amplifier system includes a signal linearizer to linearize one or both of the low- and high-gain output signals by scaling and adding the two output voltage signals from the transimpedance amplifier. The signal linearizer can be formed either as an analog device using one or two summing amplifiers, or alternately can be formed as a digital device using two analog-to-digital converters and a digital signal processor (e.g. a microprocessor or a computer).
Linear transformer driver for pulse generation
Kim, Alexander A; Mazarakis, Michael G; Sinebryukhov, Vadim A; Volkov, Sergey N; Kondratiev, Sergey S; Alexeenko, Vitaly M; Bayol, Frederic; Demol, Gauthier; Stygar, William A
2015-04-07
A linear transformer driver includes at least one ferrite ring positioned to accept a load. The linear transformer driver also includes a first power delivery module that includes a first charge storage device and a first switch. The first power delivery module sends a first energy in the form of a first pulse to the load. The linear transformer driver also includes a second power delivery module including a second charge storage device and a second switch. The second power delivery module sends a second energy in the form of a second pulse to the load. The second pulse has a frequency that is approximately three times the frequency of the first pulse. The at least one ferrite ring is positioned to force the first pulse and the second pulse to the load by temporarily isolating the first pulse and the second pulse from an electrical ground.
An implementation analysis of the linear discontinuous finite element method
Becker, T. L.
2013-07-01
This paper provides an implementation analysis of the linear discontinuous finite element method (LD-FEM) that spans the space of (l, x, y, z). A practical implementation of LD includes 1) selecting a computationally efficient algorithm to solve the 4 x 4 matrix system Ax = b that describes the angular flux in a mesh element, and 2) choosing how to store the data used to construct the matrix A and the vector b to either reduce memory consumption or increase computational speed. To analyze the first of these, three algorithms were selected to solve the 4 x 4 matrix equation: Cramer's rule, a streamlined implementation of Gaussian elimination, and LAPACK's Gaussian elimination subroutine dgesv. The results indicate that Cramer's rule and the streamlined Gaussian elimination algorithm perform nearly equivalently and outperform LAPACK's implementation of Gaussian elimination by a factor of 2. To analyze the second implementation detail, three formulations of the discretized LD-FEM equations were provided for implementation in a transport solver: 1) a low-memory formulation, which relies heavily on 'on-the-fly' calculations and less on the storage of pre-computed data, 2) a high-memory formulation, which pre-computes much of the data used to construct A and b, and 3) a reduced-memory formulation, which lies between the low- and high-memory formulations. These three formulations were assessed in the Jaguar transport solver based on relative memory footprint and computational speed for increasing mesh size and quadrature order. The results indicated that the memory savings of the low-memory formulation were not sufficient to warrant its implementation. The high-memory formulation resulted in a significant speed advantage over the reduced-memory option (10-50%), but also resulted in a proportional increase in memory consumption (5-45%) for increasing quadrature order and mesh count; therefore, the practitioner should weigh the system memory constraints against any potential gains in computational speed.
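As a hedged sketch of the first implementation choice discussed above (not the Jaguar code itself), Cramer's rule for a 4 x 4 system can be written as explicit determinant ratios; the matrix and right-hand side below are purely illustrative:

```python
import numpy as np

def cramer_4x4(A, b):
    """Solve the 4 x 4 system Ax = b by Cramer's rule.

    Illustrative only: each solution component is a ratio of
    determinants, x_i = det(A_i) / det(A), where A_i is A with
    column i replaced by b.
    """
    det_a = np.linalg.det(A)
    x = np.empty(4)
    for i in range(4):
        a_i = A.copy()
        a_i[:, i] = b                      # swap in the right-hand side
        x[i] = np.linalg.det(a_i) / det_a
    return x

# Hypothetical well-conditioned test system
A = np.array([[4., 1., 0., 0.],
              [1., 4., 1., 0.],
              [0., 1., 4., 1.],
              [0., 0., 1., 4.]])
b = np.array([1., 2., 3., 4.])
x = cramer_4x4(A, b)
```

In practice, a streamlined Gaussian elimination with the 4 x 4 size hard-coded avoids the general-purpose overhead that the abstract attributes to dgesv.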
Light Water Reactor Sustainability Program - Integrated Program...
Office of Environmental Management (EM)
The Light Water Reactor Sustainability (LWRS) Program is a research and ...
Linear Transformation Method for Multinuclide Decay Calculation
Ding Yuan
2010-12-29
A linear transformation method for generic multinuclide decay calculations is presented together with its properties and implications. The method takes advantage of the linear form of the decay solution N(t) = F(t)N{sub 0}, where N(t) is a column vector that represents the numbers of atoms of the radioactive nuclides in the decay chain, N{sub 0} is the initial value vector of N(t), and F(t) is a lower triangular matrix whose time-dependent elements are independent of the initial values of the system.
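For a two-member chain 1 → 2 with decay constants {lambda}{sub 1} ≠ {lambda}{sub 2}, the lower triangular F(t) has a closed form (the classical Bateman solution). The sketch below is an illustrative special case of the linear form N(t) = F(t)N{sub 0} described above, not the paper's general algorithm; the numerical values are invented:

```python
import numpy as np

def F(t, lam1, lam2):
    """Lower-triangular transfer matrix for a two-member chain 1 -> 2.

    Bateman solution for distinct decay constants (lam1 != lam2):
    N(t) = F(t) @ N0, with F(t) independent of the initial values.
    """
    f11 = np.exp(-lam1 * t)
    f22 = np.exp(-lam2 * t)
    f21 = lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))
    return np.array([[f11, 0.0],
                     [f21, f22]])

N0 = np.array([1000.0, 0.0])            # start with pure parent
Nt = F(1.0, lam1=0.5, lam2=0.1) @ N0    # atom counts after one time unit
```

Note that F(0) is the identity, consistent with N(0) = N{sub 0}.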
Linear and angular retroreflecting interferometric alignment target
Maxey, L. Curtis
2001-01-01
The present invention provides a method and apparatus for measuring both the linear displacement and angular displacement of an object using a linear interferometer system and an optical target comprising a lens, a reflective surface and a retroreflector. The lens, reflecting surface and retroreflector are specifically aligned and fixed in optical connection with one another, creating a single optical target which moves as a unit that provides multi-axis displacement information for the object with which it is associated. This displacement information is useful in many applications including machine tool control systems and laser tracker systems, among others.
Linear Concentrator Solar Power Plant Illustration
Broader source: Energy.gov [DOE]
This graphic illustrates linear concentrating solar power (CSP) collectors that capture the sun's energy with large mirrors that reflect and focus the sunlight onto a linear receiver tube. The receiver contains a fluid that is heated by the sunlight and then used to create superheated steam that spins a turbine that drives a generator to produce electricity. Alternatively, steam can be generated directly in the solar field, eliminating the need for costly heat exchangers. In a parabolic trough system, the receiver tube is positioned along the focal line of each parabola-shaped reflector.
Beamstrahlung spectra in next generation linear colliders
Barklow, T.; Chen, P.; Kozanecki, W.
1992-04-01
For the next generation of linear colliders, the energy loss due to beamstrahlung during the collision of the e{sup +}e{sup {minus}} beams is expected to substantially influence the effective center-of-mass energy distribution of the colliding particles. In this paper, we first derive analytical formulae for the electron and photon energy spectra under multiple beamstrahlung processes, and for the e{sup +}e{sup {minus}} and {gamma}{gamma} differential luminosities. We then apply our formulation to various classes of 500 GeV e{sup +}e{sup {minus}} linear collider designs currently under study.
DOE Publishes CALiPER Report on Cost-Effectiveness of Linear (T8) LED Lamps
Broader source: Energy.gov [DOE]
The U.S. Department of Energy's CALiPER program has released Report 21.3, which is part of a series of investigations on linear LED lamps. Report 21.3 details a set of life-cycle cost simulations...
Student's algorithm solves real-world problem
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Supercomputing Challenge: Students learn how to use powerful computers to analyze, model, and solve real-world problems. April 3, 2012. Jordon Medlock of Albuquerque's Manzano High School won the 2012 Lab-sponsored Supercomputing Challenge by creating a computer algorithm that automates the process of
DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)
Hager, Robert; Yoon, E. S.; Ku, S.; D'Azevedo, E. F.; Worley, P. H.; Chang, C. S.
2016-04-04
Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. The non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. Moreover, the finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. As a result, the collision operator's good weak and strong scaling behavior is shown.
Solar Position Algorithm (SPA) - Energy Innovation Portal
Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)
Solar Position Algorithm (SPA), National Renewable Energy Laboratory ...
Java implementation of Class Association Rule algorithms
Energy Science and Technology Software Center (OSTI)
2007-08-30
Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. NETCAR is a novel algorithm developed by Makio Tamura. The algorithm is discussed in a paper, UCRL-JRNL-232466-DRAFT, and will be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix and a phenotype profile is represented by a binary vector. The present application of this software will be in genome analysis; however, it could be applied more generally.
Generation of attributes for learning algorithms
Hu, Yuh-Jyh; Kibler, D.
1996-12-31
Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as preprocessing step before standard machine learning algorithms are applied. We present the results which demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron and backpropagation.
Hybrid Discrete - Continuum Algorithms for Stochastic Reaction...
Office of Scientific and Technical Information (OSTI)
Abstract not provided.
KLU2 Direct Linear Solver Package
Energy Science and Technology Software Center (OSTI)
2012-01-04
KLU2 is a direct sparse solver for solving unsymmetric linear systems. It is related to the existing KLU solver, (in Amesos package and also as a stand-alone package from University of Florida) but provides template support for scalar and ordinal types. It uses a left looking LU factorization method.
Physics Case for the International Linear Collider
Fujii, Keisuke; Grojean, Christophe; Peskin, Michael E.; Barklow, Tim; Gao, Yuanning; Kanemura, Shinya; Kim, Hyungdo; List, Jenny; Nojiri, Mihoko; Perelstein, Maxim; Poeschl, Roman; Reuter, Juergen; Simon, Frank; Tanabe, Tomohiko; Yu, Jaehoon; Wells, James D.; Murayama, Hitoshi; Yamamoto, Hitoshi; /Tohoku U.
2015-06-23
We summarize the physics case for the International Linear Collider (ILC). We review the key motivations for the ILC presented in the literature, updating the projected measurement uncertainties for the ILC experiments in accord with the expected schedule of operation of the accelerator and the results of the most recent simulation studies.
Notes on beam dynamics in linear accelerators
Gluckstern, R.L.
1980-09-01
A collection of notes on various aspects of beam dynamics in linear accelerators, produced by the author during five years (1975 to 1980) of consultation for the LASL Accelerator Technology (AT) Division and Medium-Energy Physics (MP) Division, is presented.
A microcomputer-controlled linear heater
Schuck, V.; Rahimi, S.
1991-10-01
In this note the circuits and principles of operation of a relatively simple and inexpensive linear temperature ramp generator are described. The upper-temperature limit and the heating rate are controlled by an Apple II microcomputer. The temperature versus time is displayed on the screen and may be plotted by an {ital x}-{ital y} plotter.
Finite Element Interface to Linear Solvers
Energy Science and Technology Software Center (OSTI)
2005-03-18
Sparse systems of linear equations arise in many engineering applications, including finite elements, finite volumes, and others. The solution of linear systems is often the most computationally intensive portion of the application. Depending on the complexity of problems addressed by the application, there may be no single solver capable of solving all of the linear systems that arise. This motivates the desire to switch an application from one solver library to another, depending on the problem being solved. The interfaces provided by solver libraries differ greatly, making it difficult to switch an application code from one library to another. The amount of library-specific code in an application can be greatly reduced by having an abstraction layer between solver libraries and the application, putting a common "face" on various solver libraries. One such abstraction layer is the Finite Element Interface to Linear Solvers (FEI), which has seen significant use by finite element applications at Sandia National Laboratories and Lawrence Livermore National Laboratory.
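The abstraction-layer idea described above can be sketched in a few lines: application code talks to one interface while concrete backends wrap different solvers. The class names below (LinearSolver, NumpyDirectSolver, JacobiSolver) are invented for illustration and are not the FEI API:

```python
import numpy as np

class LinearSolver:
    """Common 'face' that application code programs against."""
    def solve(self, A, b):
        raise NotImplementedError

class NumpyDirectSolver(LinearSolver):
    """Backend wrapping a direct dense solver."""
    def solve(self, A, b):
        return np.linalg.solve(A, b)

class JacobiSolver(LinearSolver):
    """Backend wrapping a simple iterative method."""
    def __init__(self, iters=200):
        self.iters = iters
    def solve(self, A, b):
        x = np.zeros_like(b)
        D = np.diag(A)                     # diagonal entries
        R = A - np.diagflat(D)             # off-diagonal part
        for _ in range(self.iters):
            x = (b - R @ x) / D            # Jacobi sweep
        return x

def run_app(solver: LinearSolver):
    # Application code never names a concrete library.
    A = np.array([[4.0, 1.0], [1.0, 3.0]])
    b = np.array([1.0, 2.0])
    return solver.solve(A, b)
```

Swapping backends then changes one constructor call in the application, which is the library-specific footprint the abstraction layer aims to minimize.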
Linear and non-linear forced response of a conical, ducted, laminar premixed flame
Karimi, Nader; Brear, Michael J.; Jin, Seong-Ho; Monty, Jason P. [Department of Mechanical Engineering, University of Melbourne, Parkville, 3010 Vic. (Australia)
2009-11-15
This paper presents an experimental study on the dynamics of a ducted, conical, laminar premixed flame subjected to acoustic excitation of varying amplitudes. The flame transfer function is measured over a range of forcing frequencies and equivalence ratios. In keeping with previous works, the measured flame transfer function is in good agreement with that predicted by linear kinematic theory at low amplitudes of acoustic velocity excitation. However, a systematic departure from linear behaviour is observed as the amplitude of the velocity forcing upstream of the flame increases. This non-linearity is mostly in the phase of the transfer function and manifests itself as a roughly constant phase at high forcing amplitude. Nonetheless, as predicted by non-linear kinematic arguments, the response always remains close to linear at low forcing frequencies, regardless of the forcing amplitude. The origin of this phase behaviour is then sought through optical data post-processing. (author)
Neutrino mass, dark energy, and the linear growth factor (Journal...
Office of Scientific and Technical Information (OSTI)
We study the degeneracies between ...
Updates to the International Linear Collider Damping Rings Baseline...
Office of Scientific and Technical Information (OSTI)
International Linear Collider-A Technical Progress Report (Technical...
Office of Scientific and Technical Information (OSTI)
A Linear Theory of Microwave Instability in Electron Storage...
Office of Scientific and Technical Information (OSTI)
MHK Technologies/Ocean Current Linear Turbine | Open Energy Informatio...
Ocean Current Linear Turbine. Technology Profile: Primary...
SuperLU{_}DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems
Li, Xiaoye S.; Demmel, James W.
2002-03-27
In this paper, we present the main algorithmic features in the software package SuperLU{_}DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.
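SuperLU's sequential variant is accessible through SciPy, which gives a small-scale, hedged illustration of the sparse-direct workflow described above (factor once, then triangular solves); the distributed static-pivoting machinery of SuperLU_DIST is not exposed here, and the matrix is invented:

```python
import numpy as np
from scipy.sparse import csc_matrix
from scipy.sparse.linalg import splu

# Small unsymmetric sparse system; splu wraps sequential SuperLU.
A = csc_matrix(np.array([[4.0, 0.0, 1.0],
                         [2.0, 5.0, 0.0],
                         [0.0, 1.0, 3.0]]))
b = np.array([1.0, 2.0, 3.0])

lu = splu(A)     # sparse LU factorization (pivoting + fill-reducing ordering)
x = lu.solve(b)  # forward/backward triangular solves reuse the factors
```

Reusing `lu` for many right-hand sides amortizes the factorization cost, the same separation of concerns that makes the a priori data structures of static pivoting attractive at scale.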
Phase and Radial Motion in Ion Linear Accelerators
Energy Science and Technology Software Center (OSTI)
2007-03-29
Parmila is an ion-linac particle-dynamics code. The name comes from the phrase "Phase and Radial Motion in Ion Linear Accelerators." The code generates DTL, CCDTL, and CCL accelerating cells and, using a "drift-kick" method, transforms the beam, represented by a collection of particles, through the linac. The code includes 2-D and 3-D space-charge calculations. Parmila uses data generated by the Poisson Superfish postprocessor SFO. This version of Parmila was written by Harunori Takeda and was supported through Feb. 2006 by James H. Billen. Setup installs executable programs Parmila.EXE, Lingraf.EXE, and ReadPMI.EXE in the LANL directory. The directory LANL\Examples\Parmila contains several subdirectories with sample files for Parmila.
An overview of SuperLU: Algorithms, implementation, and userinterface
Li, Xiaoye S.
2003-09-30
We give an overview of the algorithms, design philosophy, and implementation techniques in the software SuperLU, for solving sparse unsymmetric linear systems. In particular, we highlight the differences between the sequential SuperLU (including its multithreaded extension) and parallel SuperLU_DIST. These include the numerical pivoting strategy, the ordering strategy for preserving sparsity, the order in which the updating tasks are performed, the numerical kernel, and the parallelization strategy. Because of the scalability concern, the parallel code is drastically different from the sequential one. We describe the user interfaces of the libraries, and illustrate how to use the libraries most efficiently depending on some matrix characteristics. Finally, we give some examples of how the solver has been used in large-scale scientific applications, and the performance.
A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas
Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q
2007-04-18
A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.
Petascale algorithms for reactor hydrodynamics.
Fischer, P.; Lottes, J.; Pointer, W. D.; Siegel, A.
2008-01-01
We describe recent algorithmic developments that have enabled large eddy simulations of reactor flows on up to P = 65,000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. Petascale computing is expected to play a pivotal role in the design and analysis of next-generation nuclear reactors. Argonne's SHARP project is focused on advanced reactor simulation, with a current emphasis on modeling coupled neutronics and thermal-hydraulics (TH). The TH modeling comprises a hierarchy of computational fluid dynamics approaches ranging from detailed turbulence computations, using DNS (direct numerical simulation) and LES (large eddy simulation), to full core analysis based on RANS (Reynolds-averaged Navier-Stokes) and subchannel models. Our initial study is focused on LES of sodium-cooled fast reactor cores. The aim is to leverage petascale platforms at DOE's Leadership Computing Facilities (LCFs) to provide detailed information about heat transfer within the core and to provide baseline data for less expensive RANS and subchannel models.
Initial borehole acoustic televiewer data processing algorithms
Moore, T.K.
1988-06-01
With the development of a new digital televiewer, several algorithms have been developed in support of off-line data processing. This report describes the initial set of utilities developed to support data handling as well as data display. Functional descriptions, implementation details, and instructions for use of the seven algorithms are provided. 5 refs., 33 figs., 1 tab.
Close, E.; Fong, C; Lee, E.
1991-10-30
Although this report is called a program document, it is not simply a user's guide to running HILDA nor is it a programmer's guide to maintaining and updating HILDA. It is a guide to HILDA as a program and as a model for designing and costing a heavy ion fusion (HIF) driver. HILDA represents the work and ideas of many people, as does the model upon which it is based. The project was initiated by Denis Keefe, the leader of the LBL HIFAR project. He suggested the name HILDA, which is an acronym for Heavy Ion Linac Driver Analysis. The conventions and style of development of the HILDA program are based on the original goals. It was desired to have a computer program that could estimate the cost and find an optimal design for heavy ion fusion induction linac drivers. This program should model near-term machines as well as full-scale drivers. The code objectives were: (1) a relatively detailed, but easily understood model; (2) modular, structured code to facilitate making changes in the model, the analysis reports, and the user interface; (3) documentation that defines and explains the system model, cost algorithm, program structure, and generated reports. With this tool, a knowledgeable user would be able to examine an ensemble of drivers and find the driver that is minimum in cost, subject to stated constraints. This document contains a report section that describes how to use HILDA, some simple illustrative examples, and descriptions of the models used for the beam dynamics and component design. Associated with this document, as files on floppy disks, are the complete HILDA source code, much information that is needed to maintain and update HILDA, and some complete examples. These examples illustrate that the present version of HILDA can generate much useful information about the design of a HIF driver. They also serve as guides to what features would be useful to include in future updates. The HPD represents the current state of development of this project.
Broader source: Energy.gov [DOE]
Non-Linear Seismic Soil Structure Interaction (SSI): Method for Developing Non-Linear Seismic SSI Analysis Techniques. Justin Coleman, P.E., October 25th, 2011.
Linear optics measurements and corrections using an AC dipole in RHIC
Wang, G.; Bai, M.; Yang, L.
2010-05-23
We report recent experimental results on linear optics measurements and corrections using an AC dipole. In the RHIC 2009 run, the concept of the SVD correction algorithm was tested at injection energy, both for identifying artificial gradient errors and for correcting them using the trim quadrupoles. The measured phase beatings were reduced by 30% and 40%, respectively, in two dedicated experiments. In the RHIC 2010 run, an AC dipole was used to measure {beta}* and the chromatic {beta} function. For the 0.65 m {beta}* lattice, we observed a factor of 3 discrepancy between the model and the measured chromatic {beta} function in the yellow ring.
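The SVD correction idea can be sketched in a few lines: build a response matrix from trim-quadrupole strengths to phase beating, then invert it in the least-squares sense via the pseudo-inverse (computed internally through an SVD). The matrix sizes and random data below are purely illustrative, not RHIC optics:

```python
import numpy as np

# Hypothetical response matrix: beating at 20 BPMs per unit strength
# of each of 6 trim quadrupoles, plus a measured beating vector.
rng = np.random.default_rng(0)
M = rng.normal(size=(20, 6))
beating = rng.normal(size=20)

# Least-squares trim strengths that best cancel the measured beating.
trims = -np.linalg.pinv(M) @ beating
residual = beating + M @ trims   # beating remaining after correction
```

Truncating small singular values in the pseudo-inverse is the usual way to keep the computed trim strengths from blowing up on poorly-measured directions.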
Enhanced dielectric-wall linear accelerator
Sampayan, S.E.; Caporaso, G.J.; Kirbie, H.C.
1998-09-22
A dielectric-wall linear accelerator is enhanced by a high-voltage, fast e-time switch that includes a pair of electrodes between which are laminated alternating layers of isolated conductors and insulators. A high voltage is placed between the electrodes sufficient to stress the voltage breakdown of the insulator on command. A light trigger, such as a laser, is focused along at least one line along the edge surface of the laminated alternating layers of isolated conductors and insulators extending between the electrodes. The laser is energized to initiate a surface breakdown by a fluence of photons, thus causing the electrical switch to close very promptly. Such insulators and lasers are incorporated in a dielectric wall linear accelerator with Blumlein modules, and phasing is controlled by adjusting the length of fiber optic cables that carry the laser light to the insulator surface. 6 figs.
Noise in phase-preserving linear amplifiers
Pandey, Shashank; Jiang, Zhang; Combes, Joshua; Caves, Carlton M.
2014-12-04
The purpose of a phase-preserving linear amplifier is to make a small signal larger, so that it can be perceived by instruments incapable of resolving the original signal, while sacrificing as little as possible in signal-to-noise. Quantum mechanics limits how well this can be done: the noise added by the amplifier, referred to the input, must be at least half a quantum at the operating frequency. This well-known quantum limit only constrains the second moments of the added noise. Here we provide the quantum constraints on the entire distribution of added noise: any phase-preserving linear amplifier is equivalent to a parametric amplifier with a physical state {sigma} for the ancillary mode; {sigma} determines the properties of the added noise.
Enhanced dielectric-wall linear accelerator
Sampayan, Stephen E.; Caporaso, George J.; Kirbie, Hugh C.
1998-01-01
A dielectric-wall linear accelerator is enhanced by a high-voltage, fast e-time switch that includes a pair of electrodes between which are laminated alternating layers of isolated conductors and insulators. A high voltage is placed between the electrodes sufficient to stress the voltage breakdown of the insulator on command. A light trigger, such as a laser, is focused along at least one line along the edge surface of the laminated alternating layers of isolated conductors and insulators extending between the electrodes. The laser is energized to initiate a surface breakdown by a fluence of photons, thus causing the electrical switch to close very promptly. Such insulators and lasers are incorporated in a dielectric wall linear accelerator with Blumlein modules, and phasing is controlled by adjusting the length of fiber optic cables that carry the laser light to the insulator surface.
Jimenez, Edward Steven
2013-09-01
The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm that is capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing (GPGPU) is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e. fast texture memory, pixel-pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
TRACC: Algorithm for Predicting and Tracking Barges on Inland Waterways
Energy Science and Technology Software Center (OSTI)
2010-04-23
The algorithm developed in this work predicts the location and estimates the traveling speed of a barge moving in an inland waterway network. Measurements obtained from GPS or other systems are corrupted with measurement noise and reported at large, irregular time intervals, creating uncertainty about the current location of the barge and reducing the effectiveness of emergency response activities in case of an accident or act of terrorism. Developing a prediction algorithm is a non-trivial problem because estimating the speed is challenging, owing to the complex interactions between the multiple systems involved in the process. This software uses a systems approach in modeling the motion dynamics of the barge and estimates the location and speed of the barge at the next, user-defined, time interval. In this work, to estimate the speed, a non-linear, stochastic modeling technique was first developed that takes into account local variations and interactions existing in the system. The output speed is then used as an observation in a statistically optimal filtering technique, the Kalman filter, formulated in state space to minimize the numerous errors observed in the system. The combined system synergistically fuses the available local information with the measurements obtained to predict the location and traveling speed of the barge accurately.
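The state-space filtering step described above can be sketched as a minimal constant-velocity Kalman filter over noisy position fixes. This is an illustrative sketch, not the TRACC implementation; the process and measurement variances q and r are assumed values:

```python
import numpy as np

def kalman_1d(z, dt=1.0, q=1e-3, r=4.0):
    """Minimal constant-velocity Kalman filter for position fixes.

    State is [position, speed]; only position is measured. q and r
    are hypothetical process/measurement noise variances.
    """
    F = np.array([[1.0, dt], [0.0, 1.0]])   # state transition
    H = np.array([[1.0, 0.0]])              # observe position only
    Q = q * np.eye(2)
    R = np.array([[r]])
    x = np.array([[z[0]], [0.0]])           # initial position, zero speed
    P = np.eye(2)                           # initial state covariance
    for zk in z[1:]:
        x = F @ x                           # predict state
        P = F @ P @ F.T + Q                 # predict covariance
        y = zk - (H @ x)[0, 0]              # innovation (residual)
        S = H @ P @ H.T + R                 # innovation covariance
        K = P @ H.T / S[0, 0]               # Kalman gain
        x = x + K * y                       # update state
        P = (np.eye(2) - K @ H) @ P         # update covariance
    return x.ravel()                        # [position, speed]
```

Fed a steady stream of fixes from a barge moving at constant speed, the speed estimate converges to the true value even though speed is never measured directly.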
Radio frequency quadrupole resonator for linear accelerator
Moretti, Alfred
1985-01-01
An RFQ resonator for a linear accelerator having a reduced level of interfering modes and producing a quadrupole mode for focusing, bunching, and accelerating beams of heavy charged particles. The construction is characterized by four elongated resonating rods within a cylinder, the rods being alternately shorted and open electrically to the shell at common ends of the rods so as to provide an LC parallel resonant circuit when activated by a magnetic field transverse to the longitudinal axis.
Communications circuit including a linear quadratic estimator
Ferguson, Dennis D.
2015-07-07
A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements of a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
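In the static, scalar case, weighting measurements by their respective uncertainties reduces to inverse-variance weighting, which yields the minimum-variance linear combination; a minimal sketch (function and variable names are illustrative, not taken from the patent):

```python
def weighted_average(measurements, variances):
    """Fuse measurements by weighting each with the inverse of its
    variance; this is the minimum-variance linear estimate."""
    weights = [1.0 / v for v in variances]
    total = sum(weights)
    estimate = sum(w * m for w, m in zip(weights, measurements)) / total
    fused_variance = 1.0 / total  # uncertainty of the fused estimate
    return estimate, fused_variance

# Three measurements of the same quantity; the noisier ones count less.
est, var = weighted_average([10.2, 9.6, 10.9], [1.0, 4.0, 9.0])
```

Note that the fused variance is smaller than any individual variance, which is why fusing even noisy measurements helps.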
High gradient accelerators for linear light sources
Barletta, W.A.
1988-09-26
Ultra-high gradient radio frequency linacs powered by relativistic klystrons appear to be able to provide compact sources of radiation at XUV and soft x-ray wavelengths with a duration of 1 picosecond or less. This paper provides a tutorial review of the physics applicable to scaling the present experience of the accelerator community to the regime applicable to compact linear light sources. 22 refs., 11 figs., 21 tabs.
The Next Linear Collider: NLC2001
D. Burke et al.
2002-01-14
Recent studies in elementary particle physics have made the need for an e{sup +}e{sup -} linear collider able to reach energies of 500 GeV and above with high luminosity more compelling than ever [1]. Observations and measurements completed in the last five years at the SLC (SLAC), LEP (CERN), and the Tevatron (FNAL) can be explained only by the existence of at least one particle or interaction that has not yet been directly observed in experiment. The Higgs boson of the Standard Model could be that particle. The data point strongly to a mass for the Higgs boson that is just beyond the reach of existing colliders. This brings great urgency and excitement to the potential for discovery at the upgraded Tevatron early in this decade, and almost assures that later experiments at the LHC will find new physics. But the next generation of experiments to be mounted by the world-wide particle physics community must not only find this new physics, they must find out what it is. These experiments must also define the next important threshold in energy. The need is to understand physics at the TeV energy scale as well as the physics at the 100-GeV energy scale is now understood. This will require both the LHC and a companion linear electron-positron collider. A first Zeroth-Order Design Report (ZDR) [2] for a second-generation electron-positron linear collider, the Next Linear Collider (NLC), was published five years ago. The NLC design is based on a high-frequency room-temperature rf accelerator. Its goal is exploration of elementary particle physics at the TeV center-of-mass energy, while learning how to design and build colliders at still higher energies. Many advances in accelerator technologies and improvements in the design of the NLC have been made since 1996. This Report is a brief update of the ZDR.
Towards a Future Linear Collider and The Linear Collider Studies at CERN
None
2011-10-06
During the week 18-22 October, more than 400 physicists will meet at CERN and in the CICG (International Conference Centre Geneva) to review the global progress towards a future linear collider. The 2010 International Workshop on Linear Colliders will study the physics, detectors and accelerator complex of a linear collider covering both the CLIC and ILC options. Among the topics presented and discussed will be the progress towards the CLIC Conceptual Design Report in 2011, the ILC Technical Design Report in 2012, physics and detector studies linked to these reports, and an increasing number of common working group activities. The seminar will give an overview of these topics and also CERN's linear collider studies, focusing on current activities and initial plans for the period 2011-16. N.B.: The Council Chamber is also reserved for this colloquium, with a live transmission from the Main Auditorium.
Performance Models for the Spike Banded Linear System Solver
Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; Grama, Ananth
2011-01-01
With the availability of large-scale parallel platforms comprising tens of thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners, compared to the state-of-the-art ILU family of preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver, (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver, (iii) we show excellent prediction capabilities of our model, based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. All of our results are validated
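A pseudo-analytical performance model of the kind described combines an analytical cost form with constants parameterized from measured runtimes. The sketch below fits an illustrative model T(n, p) = a*(n/p) + b*log2(p) + c to synthetic timings; the model form and coefficient values are assumptions for illustration, not those of the Spike solver:

```python
import numpy as np

# Illustrative cost model: T(n, p) = a*(n/p) + b*log2(p) + c, where
# a ~ per-element compute cost, b ~ communication cost, c ~ fixed overhead.
def model_matrix(ns, ps):
    return np.column_stack([ns / ps, np.log2(ps), np.ones_like(ns)])

# Synthetic "measured" runtimes generated from known constants plus noise,
# standing in for actual timings on a target platform.
rng = np.random.default_rng(1)
ns = np.array([1e6, 1e6, 4e6, 4e6, 16e6, 16e6])   # problem sizes
ps = np.array([8, 64, 8, 64, 16, 128])            # processor counts
true = np.array([2e-7, 3e-3, 0.05])
T = model_matrix(ns, ps) @ true + rng.normal(0, 1e-3, ns.size)

# Parameterize the analytical model with least squares.
coef, *_ = np.linalg.lstsq(model_matrix(ns, ps), T, rcond=None)

# The fitted model can now predict runtime at an untried (n, p).
pred = model_matrix(np.array([8e6]), np.array([32])) @ coef
```

Because each term of the model is tied to a phase of the computation, a poor fit or a dominant term directly points at the underlying bottleneck, which is how such models reveal serial and parallel bottlenecks.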
Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
The design of a parallel adaptive paving all-quadrilateral meshing algorithm
Tautges, T.J.; Lober, R.R.; Vaughan, C.
1995-08-01
Adaptive finite element analysis demands a great deal of computational resources, and as such is most appropriately solved in a massively parallel computer environment. This analysis will require other parallel algorithms before it can fully utilize MP computers, one of which is parallel adaptive meshing. A version of the paving algorithm is being designed which operates in parallel but which also retains the robustness and other desirable features present in the serial algorithm. Adaptive paving in a production mode is demonstrated using a Babuska-Rheinboldt error estimator on a classic linearly elastic plate problem. The design of the parallel paving algorithm is described, and is based on the decomposition of a surface into "virtual" surfaces. The topology of the virtual surface boundaries is defined using mesh entities (mesh nodes and edges) so as to allow movement of these boundaries with smoothing and other operations. This arrangement allows the use of the standard paving algorithm on subdomain interiors, after the negotiation of the boundary mesh.
Microfabricated linear Paul-Straubel ion trap
Mangan, Michael A.; Blain, Matthew G.; Tigges, Chris P.; Linker, Kevin L.
2011-04-19
An array of microfabricated linear Paul-Straubel ion traps can be used for mass spectrometric applications. Each ion trap comprises two parallel inner RF electrodes and two parallel outer DC control electrodes symmetric about a central trap axis and suspended over an opening in a substrate. Neighboring ion traps in the array can share a common outer DC control electrode. The ions are confined transversely by an RF quadrupole electric field potential well on the ion trap axis. The array can trap a wide variety of ions.
Micromechanism linear actuator with capillary force sealing
Sniegowski, Jeffry J.
1997-01-01
A class of micromachine linear actuators whose function is based on gas driven pistons in which capillary forces are used to seal the gas behind the piston. The capillary forces also increase the amount of force transmitted from the gas pressure to the piston. In a major subclass of such devices, the gas bubble is produced by thermal vaporization of a working fluid. Because of their dependence on capillary forces for sealing, such devices are only practical on the sub-mm size scale, but in that regime they produce very large force times distance (total work) values.
Rf power sources for linear colliders
Allen, M.A.; Callin, R.S.; Caryotakis, G.; Deruyter, H.; Eppley, K.R.; Fant, K.S.; Farkas, Z.D.; Fowkes, W.R.; Hoag, H.A.; Feinstein, J.; Ko, K.; Koontz, R.F.; Kroll, N.M.; Lavine, T.L.; Lee, T.G.; Loew, G.A.; Miller, R.H.; Nelson, E.M.; Ruth, R.D.; Vlieks, A.E.; Wang, J.W.; Wilson, P.B. ); Boyd, J.K.; Houk, T.; Ryne, R.D.; Westenskow, G.A.; Yu, S.S. (Lawrence Live
1990-06-01
The next generation of linear colliders requires peak power sources of over 200 MW per meter at frequencies above 10 GHz at pulse widths of less than 100 nsec. Several power sources are under active development, including a conventional klystron with rf pulse compression, a relativistic klystron (RK) and a crossed-field amplifier. Power from one of these has energized a 0.5-meter two-section High Gradient Accelerator (HGA) and accelerated a beam at over 80 MeV per meter. Results of tests with these experimental devices are presented here.
Genetic algorithms at UC Davis/LLNL
Vemuri, V.R.
1993-12-31
A tutorial introduction to genetic algorithms is given. This brief tutorial should serve the purpose of introducing the subject to the novice. The tutorial is followed by a brief commentary on the term project reports that follow.
MPI and OpenMP Paradigms on Clusters of SMP Architectures: the Vacancy Tracking Algorithm for Multi-Dimensional Array Transposition
Yun (Helen) He
SC2002
Advanced Imaging Algorithms for Radiation Imaging Systems
Marleau, Peter
2015-10-01
The intent of the proposed work, in collaboration with the University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an in-depth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
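The MLEM algorithm mentioned above uses a standard multiplicative update, scaling the current estimate by the back-projected ratio of measured to predicted counts; a minimal sketch on a toy system (the 3x2 response matrix is illustrative, not an actual imaging-system model):

```python
import numpy as np

def mlem(y, A, n_iter=500):
    """MLEM reconstruction: iteratively scale the nonnegative estimate
    by the back-projected ratio of measured to predicted counts.

    y : measured counts (m,); A : system response matrix (m, n).
    """
    x = np.ones(A.shape[1])            # flat nonnegative starting estimate
    sens = A.sum(axis=0)               # sensitivity image (column sums)
    for _ in range(n_iter):
        proj = A @ x                   # forward projection of estimate
        ratio = y / np.maximum(proj, 1e-12)
        x *= (A.T @ ratio) / np.maximum(sens, 1e-12)
    return x

# Toy problem: two sources viewed through a known 3x2 detector response.
A = np.array([[0.8, 0.1],
              [0.5, 0.5],
              [0.1, 0.8]])
x_true = np.array([4.0, 1.0])
y = A @ x_true                         # noiseless "measured" counts
x_hat = mlem(y, A)
```

The update preserves nonnegativity automatically, which is one reason MLEM is favored for count-limited radiation imaging; it is also why each iteration needs a full forward and back projection, the cost the abstract proposes to accelerate.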
Advanced CHP Control Algorithms: Scope Specification
Katipamula, Srinivas; Brambley, Michael R.
2006-04-28
The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.
Drainage Algorithm for Geospatial Knowledge
Energy Science and Technology Software Center (OSTI)
2006-08-15
The Pacific Northwest National Laboratory (PNNL) has developed a prototype stream extraction algorithm that semi-automatically extracts and characterizes streams using a variety of multisensor imagery and digital terrain elevation data (DTED®). The system is currently optimized for three types of single-band imagery: radar, visible, and thermal. Method of Solution: DRAGON (1) classifies pixels into clumps of water objects based on the classification of water pixels by spectral signatures and neighborhood relationships, (2) uses the morphology operations (erosion and dilation) to separate out large lakes (or embayments), isolated lakes, ponds, wide rivers, and narrow rivers, and (3) translates the river objects into vector objects. In detail, the process can be broken down into the following steps. A. Water pixels are initially identified using the expected range and slope values (if an optional DEM file is available). B. Erode to the distance that defines a large water body and then dilate back. The resulting mask can be used to identify large lake and embayment objects, which are then removed from the image. Since this operation can be time consuming, it is only performed if a simple test (i.e., a large box can be found somewhere in the image that contains only water pixels) indicates a large water body is present. C. All water pixels are "clumped" (in Imagine terminology, clumping is when pixels of a common classification that touch are connected) and clumps which do not contain pure water pixels (e.g., dark cloud shadows) are removed. D. The resulting true water pixels are clumped, and water objects which are too small (e.g., ponds) or isolated lakes (i.e., isolated objects with a small compactness ratio) are removed. Note that at this point lakes have been identified as a byproduct of the filtering process and can be output as vector layers if needed. E. At this point only river pixels
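The erode-then-dilate sequence of step B is a morphological opening; a minimal sketch with a 3x3 structuring element, using plain NumPy stand-ins for the Imagine operations DRAGON actually uses:

```python
import numpy as np

def erode(mask):
    """Binary erosion, 3x3 square element: a pixel survives only if its
    entire 3x3 neighborhood is water."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.ones_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out &= p[1 + dy : 1 + dy + mask.shape[0],
                     1 + dx : 1 + dx + mask.shape[1]]
    return out

def dilate(mask):
    """Binary dilation: a pixel becomes water if any 3x3 neighbor is."""
    p = np.pad(mask, 1, constant_values=False)
    out = np.zeros_like(mask)
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            out |= p[1 + dy : 1 + dy + mask.shape[0],
                     1 + dx : 1 + dx + mask.shape[1]]
    return out

# A wide blob (lake) and a one-pixel-wide line (narrow river):
water = np.zeros((7, 12), dtype=bool)
water[1:6, 1:6] = True          # 5x5 lake
water[3, 7:12] = True           # narrow river
opened = dilate(erode(water))   # erosion deletes the thin river; dilation
                                # restores the lake to its original extent
rivers = water & ~opened        # what opening removed: the narrow river
```

Eroding "to the distance that defines a large water body" corresponds to repeating the erosion enough times that only features wider than that distance survive; one pass suffices for this toy mask.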