Deterministic algorithms for 2-d convex programming and 3-d online linear programming
Chan, T.M.
1997-06-01
We present a deterministic algorithm for solving two-dimensional convex programs with a linear objective function. The algorithm requires O(k log k) primitive operations for k constraints; if a feasible point is given, the bound reduces to O(k log k / log log k). As a consequence, we can decide whether k convex n-gons in the plane have a common intersection in O(k log n min(log k, log log n)) worst-case time. Furthermore, we can solve the three-dimensional online linear programming problem in O(log^3 n) worst-case time per operation.
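For contrast with the O(k log k) algorithm above, a two-dimensional linear program admits a very short brute-force baseline that enumerates constraint-pair intersections in O(k^3) time. This is a hypothetical sketch for checking small instances, not the paper's method; the instance below is made up for illustration.

```python
import itertools
import numpy as np

def solve_2d_lp(c, A, b):
    """Minimize c . x subject to A x <= b for x in R^2 by enumerating
    constraint-pair intersections -- an O(k^3) baseline, in contrast to
    the O(k log k) deterministic algorithm described above."""
    A, b, c = np.asarray(A, float), np.asarray(b, float), np.asarray(c, float)
    best_val, best_x = None, None
    for i, j in itertools.combinations(range(len(A)), 2):
        M = A[[i, j]]
        if abs(np.linalg.det(M)) < 1e-12:
            continue                      # parallel constraints: no vertex
        x = np.linalg.solve(M, b[[i, j]])
        if np.all(A @ x <= b + 1e-9):     # vertex is feasible
            val = c @ x
            if best_val is None or val < best_val:
                best_val, best_x = val, x
    return best_val, best_x               # (None, None) if no feasible vertex

# maximize x + y (i.e. minimize -x - y) over the box 0 <= x <= 2, 0 <= y <= 1
val, x = solve_2d_lp([-1.0, -1.0],
                     [[1, 0], [0, 1], [-1, 0], [0, -1]],
                     [2, 1, 0, 0])
```

The optimum sits at the vertex (2, 1) with objective value -3.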
Algorithmic advances in stochastic programming
Morton, D.P.
1993-07-01
Practical planning problems with deterministic forecasts of inherently uncertain parameters often yield unsatisfactory solutions. Stochastic programming formulations allow uncertain parameters to be modeled as random variables with known distributions, but the size of the resulting mathematical programs can be formidable. Decomposition-based algorithms take advantage of special structure and provide an attractive approach to such problems. We consider two classes of decomposition-based stochastic programming algorithms. The first type of algorithm addresses problems with a "manageable" number of scenarios. The second class incorporates Monte Carlo sampling within a decomposition algorithm. We develop and empirically study an enhanced Benders decomposition algorithm for solving multistage stochastic linear programs within a prespecified tolerance. The enhancements include warm start basis selection, preliminary cut generation, the multicut procedure, and decision tree traversing strategies. Computational results are presented for a collection of "real-world" multistage stochastic hydroelectric scheduling problems. Recently, there has been an increased focus on decomposition-based algorithms that use sampling within the optimization framework. These approaches hold much promise for solving stochastic programs with many scenarios. A critical component of such algorithms is a stopping criterion to ensure the quality of the solution. With this as motivation, we develop a stopping rule theory for algorithms in which bounds on the optimal objective function value are estimated by sampling. Rules are provided for selecting sample sizes and terminating the algorithm under which asymptotic validity of confidence interval statements for the quality of the proposed solution can be verified. Issues associated with the application of this theory to two sampling-based algorithms are considered, and preliminary empirical coverage results are presented.
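The "manageable number of scenarios" regime can be illustrated with a toy two-stage recourse problem solved by direct enumeration rather than decomposition. The newsvendor instance below is a hypothetical sketch, not taken from the thesis.

```python
# Hypothetical newsvendor instance: order x units at unit cost c, sell
# at price p, with demand d following a known finite distribution --
# the "manageable number of scenarios" case solved by enumeration.
c, p = 1.0, 3.0
scenarios = [(20, 0.3), (40, 0.5), (60, 0.2)]   # (demand, probability)

def expected_profit(x):
    """Expected second-stage profit of ordering x units."""
    return sum(prob * (p * min(x, d) - c * x) for d, prob in scenarios)

# The objective is piecewise linear in x, so some demand scenario is an
# optimal order quantity; enumerate the candidates.
best_x = max((d for d, _ in scenarios), key=expected_profit)
```

Here ordering 40 units is optimal, with expected profit 62: hedging between the low- and high-demand scenarios beats planning for any single deterministic forecast.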
Two linear time, low overhead algorithms for graph layout
Energy Science and Technology Software Center
2008-01-10
The software comprises two algorithms designed to perform a 2D layout of a graph structure in time linear in the number of vertices and edges in the graph, whereas most other layout algorithms have a running time that is quadratic in the number of vertices or worse. Although these layout algorithms run in a fraction of the time required by their competitors, they provide competitive results when applied to most real-world graphs. These algorithms also have a low constant running time and a small memory footprint, making them useful for small to large graphs.
APPLICATION OF NEURAL NETWORK ALGORITHMS FOR BPM LINEARIZATION
Musson, John C.; Seaton, Chad; Spata, Mike F.; Yan, Jianxun
2012-11-01
Stripline BPM sensors contain inherent non-linearities, as a result of field distortions from the pickup elements. Many methods have been devised to facilitate corrections, often employing polynomial fitting. The cost of computation makes real-time correction difficult, particularly when integer math is utilized. The application of neural-network technology, particularly the multi-layer perceptron algorithm, is proposed as an efficient alternative for electrode linearization. A process of supervised learning is initially used to determine the weighting coefficients, which are subsequently applied to the incoming electrode data. A non-linear layer, known as an "activation layer," is responsible for the removal of saturation effects. Implementation of a perceptron in an FPGA-based software-defined radio (SDR) is presented, along with performance comparisons. In addition, efficient calculation of the sigmoidal activation function via the CORDIC algorithm is presented.
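A hedged sketch of the idea, not the paper's FPGA implementation: a tiny multi-layer perceptron is trained by supervised learning to invert a hypothetical saturating electrode response u = tanh(2·pos), which stands in for the stripline non-linearity. The network shape, data, and learning rate are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
pos = rng.uniform(-0.8, 0.8, size=(256, 1))   # "true" beam positions
u = np.tanh(2.0 * pos)                        # saturated electrode signal

# One hidden layer of 8 tanh units (the "activation layer"), linear output.
W1 = rng.normal(0.0, 1.0, (1, 8)); b1 = np.zeros(8)
W2 = rng.normal(0.0, 1.0, (8, 1)); b2 = np.zeros(1)
lr = 0.05
for _ in range(3000):                 # supervised learning of the weights
    h = np.tanh(u @ W1 + b1)          # forward pass: hidden layer
    out = h @ W2 + b2                 # forward pass: linear output
    err = out - pos                   # squared-error residual
    gW2 = h.T @ err / len(u); gb2 = err.mean(axis=0)
    gh = (err @ W2.T) * (1.0 - h**2)  # backprop through tanh
    gW1 = u.T @ gh / len(u); gb1 = gh.mean(axis=0)
    W2 -= lr * gW2; b2 -= lr * gb2; W1 -= lr * gW1; b1 -= lr * gb1

mse = float(np.mean((np.tanh(u @ W1 + b1) @ W2 + b2 - pos) ** 2))
```

After training, the learned weights are fixed and only the cheap forward pass runs on incoming electrode data, which is what makes the approach attractive for real-time correction.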
Planning under uncertainty solving large-scale stochastic linear programs
Infanger, G. (Dept. of Operations Research, Technische Univ., Vienna; Inst. fuer Energiewirtschaft)
1992-12-01
For many practical problems, solutions obtained from deterministic models are unsatisfactory because they fail to hedge against certain contingencies that may occur in the future. Stochastic models address this shortcoming, but until recently seemed to be intractable due to their size. Recent advances both in solution algorithms and in computer technology now allow us to solve important and general classes of practical stochastic problems. We show how large-scale stochastic linear programs can be efficiently solved by combining classical decomposition and Monte Carlo (importance) sampling techniques. We discuss the methodology for solving two-stage stochastic linear programs with recourse, present numerical results for large problems with numerous stochastic parameters, show how to efficiently implement the methodology on a parallel multicomputer, and derive the theory for solving a general class of multi-stage problems with dependency of the stochastic parameters within a stage and between different stages.
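The Monte Carlo importance-sampling ingredient can be sketched on a toy integrand (not a stochastic LP): to estimate E[f(X)] for X ~ N(0, 1) when f lives in the right tail, sample from a shifted proposal N(2, 1) and reweight by the likelihood ratio. The integrand and shift are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    # tail-concentrated integrand: (x - 2)^+ for a standard normal X
    return np.where(x > 2.0, x - 2.0, 0.0)

n = 100_000
x = rng.normal(2.0, 1.0, n)                          # proposal draws
weights = np.exp(-0.5 * x**2 + 0.5 * (x - 2.0)**2)   # N(0,1)/N(2,1) ratio
est = float(np.mean(f(x) * weights))
# analytic value: phi(2) - 2 * (1 - Phi(2)) ~ 0.008491
```

Naive sampling from N(0, 1) would waste almost all draws on the region where f is zero; the shifted proposal concentrates effort where the integrand matters, which is the same variance-reduction rationale the abstract applies inside decomposition.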
Comparison of open-source linear programming solvers.
Gearhart, Jared Lee; Adair, Kristin Lynn; Durfee, Justin David.; Jones, Katherine A.; Martin, Nathaniel; Detry, Richard Joseph
2013-10-01
When developing linear programming models, issues such as budget limitations, customer requirements, or licensing may preclude the use of commercial linear programming solvers. In such cases, one option is to use an open-source linear programming solver. A survey of linear programming tools was conducted to identify potential open-source solvers. From this survey, four open-source solvers were tested using a collection of linear programming test problems and the results were compared to IBM ILOG CPLEX Optimizer (CPLEX) [1], an industry standard. The solvers considered were: COIN-OR Linear Programming (CLP) [2], [3], GNU Linear Programming Kit (GLPK) [4], lp_solve [5] and Modular In-core Nonlinear Optimization System (MINOS) [6]. As no open-source solver outperforms CPLEX, this study demonstrates the power of commercial linear programming software. CLP was found to be the top performing open-source solver considered in terms of capability and speed. GLPK also performed well but cannot match the speed of CLP or CPLEX. lp_solve and MINOS were considerably slower and encountered issues when solving several test problems.
Lewis, Robert Michael (College of William and Mary, Williamsburg, VA); Torczon, Virginia Joanne (College of William and Mary, Williamsburg, VA); Kolda, Tamara Gibson
2006-08-01
We consider the solution of nonlinear programs in the case where derivatives of the objective function and nonlinear constraints are unavailable. To solve such problems, we propose an adaptation of a method due to Conn, Gould, Sartenaer, and Toint that proceeds by approximately minimizing a succession of linearly constrained augmented Lagrangians. Our modification is to use a derivative-free generating set direct search algorithm to solve the linearly constrained subproblems. The stopping criterion proposed by Conn, Gould, Sartenaer and Toint for the approximate solution of the subproblems requires explicit knowledge of derivatives. Such information is presumed absent in the generating set search method we employ. Instead, we show that stationarity results for linearly constrained generating set search methods provide a derivative-free stopping criterion, based on a step-length control parameter, that is sufficient to preserve the convergence properties of the original augmented Lagrangian algorithm.
LDRD final report on massively-parallel linear programming : the parPCx system.
Parekh, Ojas; Phillips, Cynthia Ann; Boman, Erik Gunnar
2005-02-01
This report summarizes the research and development performed from October 2002 to September 2004 at Sandia National Laboratories under the Laboratory-Directed Research and Development (LDRD) project ''Massively-Parallel Linear Programming''. We developed a linear programming (LP) solver designed to use a large number of processors. LP is the optimization of a linear objective function subject to linear constraints. Companies and universities have expended huge efforts over decades to produce fast, stable serial LP solvers. Previous parallel codes run on shared-memory systems and have little or no distribution of the constraint matrix. We have seen no reports of general LP solver runs on large numbers of processors. Our parallel LP code is based on an efficient serial implementation of Mehrotra's interior-point predictor-corrector algorithm (PCx). The computational core of this algorithm is the assembly and solution of a sparse linear system. We have substantially rewritten the PCx code and based it on Trilinos, the parallel linear algebra library developed at Sandia. Our interior-point method can use either direct or iterative solvers for the linear system. To achieve a good parallel data distribution of the constraint matrix, we use a (pre-release) version of a hypergraph partitioner from the Zoltan partitioning library. We describe the design and implementation of our new LP solver called parPCx and give preliminary computational results. We summarize a number of issues related to efficient parallel solution of LPs with interior-point methods, including data distribution, numerical stability, and solving the core linear system using both direct and iterative methods. We describe a number of applications of LP specific to US Department of Energy mission areas and we summarize our efforts to integrate parPCx (and parallel LP solvers in general) into Sandia's massively-parallel integer programming solver PICO (Parallel Integer and Combinatorial Optimizer).
Efficient Algorithms for Estimating the Absorption Spectrum within Linear Response TDDFT
Brabec, Jiri; Lin, Lin; Shao, Meiyue; Govind, Niranjan; Yang, Chao; Saad, Yousef; Ng, Esmond
2015-10-06
We present two iterative algorithms for approximating the absorption spectrum of molecules within the linear response time-dependent density functional theory (TDDFT) framework. These methods do not attempt to compute eigenvalues or eigenvectors of the linear response matrix; instead, they approximate the absorption spectrum directly as a function. They take advantage of the special structure of the linear response matrix. Neither method requires the linear response matrix to be constructed explicitly. They only require a procedure that performs the multiplication of the linear response matrix with a vector. These methods can also be easily modified to efficiently estimate the density of states (DOS) of the linear response matrix without computing the eigenvalues of this matrix. We show by computational experiments that the methods proposed in this paper can be much more efficient than methods based on the exact diagonalization of the linear response matrix. We show that they can also be more efficient than real-time TDDFT simulations. We compare the pros and cons of these methods in terms of their accuracy as well as their computational and storage cost.
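A hedged sketch of the matrix-free ingredient (not the paper's algorithms): spectral quantities of a symmetric operator can be estimated from matrix-vector products alone, here via Hutchinson's stochastic trace estimator. The matrix A is a random stand-in, not a TDDFT linear response matrix.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200
A = rng.normal(size=(n, n)); A = (A + A.T) / 2.0   # symmetric stand-in

def matvec(v):
    # the ONLY access to A the method needs -- A is never factored
    return A @ v

m = 400                                  # number of random probes
tr_A = tr_A2 = 0.0
for _ in range(m):
    z = rng.choice([-1.0, 1.0], size=n)  # Rademacher probe vector
    Az = matvec(z)
    tr_A += (z @ Az) / m                 # unbiased estimate of trace(A)
    tr_A2 += (Az @ Az) / m               # unbiased estimate of trace(A^2)
```

Higher moments trace(A^k), obtained the same way from repeated matvecs, determine increasingly fine approximations to the density of states without ever diagonalizing A.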
Grant, C W; Lenderman, J S; Gansemer, J D
2011-02-24
This document is an update to the 'ADIS Algorithm Evaluation Project Plan' specified in the Statement of Work for the US-VISIT Identity Matching Algorithm Evaluation Program, as deliverable II.D.1. The original plan was delivered in August 2010. This document modifies the plan to reflect revised deliverables resulting from delays in obtaining a database refresh, and describes the revised schedule of the program deliverables. The detailed description of the processes used, the statistical analysis processes, and the results of the statistical analysis will be described fully in the program deliverables. The US-VISIT Identity Matching Algorithm Evaluation Program is work performed by Lawrence Livermore National Laboratory (LLNL) under IAA HSHQVT-07-X-00002 P00004 from the Department of Homeland Security (DHS).
Microgrid Reliability Modeling and Battery Scheduling Using Stochastic Linear Programming
Cardoso, Goncalo; Stadler, Michael; Siddiqui, Afzal; Marnay, Chris; DeForest, Nicholas; Barbosa-Povoa, Ana; Ferrao, Paulo
2013-05-23
This paper describes the introduction of stochastic linear programming into Operations DER-CAM, a tool used to obtain optimal operating schedules for a given microgrid under local economic and environmental conditions. This application follows previous work on optimal scheduling of a lithium-iron-phosphate battery given the output uncertainty of a 1 MW molten carbonate fuel cell. Both are in the Santa Rita Jail microgrid, located in Dublin, California. This fuel cell has proven unreliable, partially justifying the consideration of storage options. Several stochastic DER-CAM runs are executed to compare different scenarios to values obtained by a deterministic approach. Results indicate that the stochastic approach provides a conservative yet more lucrative battery schedule: given fuel cell outages, lower expected energy bills result, with potential savings exceeding 6%.
Shang, Yu; Yu, Guoqiang
2014-09-29
Conventional semi-infinite analytical solutions of the correlation diffusion equation may lead to errors when calculating blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements in tissues with irregular geometries. Very recently, we created an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migrations in homogeneous tissues with arbitrary geometries for extraction of BFI (i.e., αD_B). The purpose of this study is to extend the capability of the Nth-order linear algorithm to extracting BFI in heterogeneous tissues with arbitrary geometries. The previous linear algorithm was modified to extract BFIs in different types of tissues simultaneously by utilizing DCS data at multiple source-detector separations. We compared the proposed linear algorithm with the semi-infinite homogeneous solution in a computer model of an adult head with heterogeneous tissue layers of scalp, skull, cerebrospinal fluid, and brain. To test the capability of the linear algorithm for extracting relative changes of cerebral blood flow (rCBF) in deep brain, we assigned ten levels of αD_B in the brain layer with a step decrement of 10% while maintaining αD_B values constant in other layers. Simulation results demonstrate the accuracy (errors < 3%) of the high-order (N ≥ 5) linear algorithm in extracting BFIs in different tissue layers and rCBF in deep brain. By contrast, the semi-infinite homogeneous solution resulted in substantial errors in rCBF (34.5% ≤ errors ≤ 60.2%) and BFIs in different layers. The Nth-order linear model simplifies data analysis, thus allowing for online data processing and display. Future study will test this linear algorithm in heterogeneous tissues with different levels of blood flow variations and noise.
EVOLVING RETRIEVAL ALGORITHMS WITH A GENETIC PROGRAMMING SCHEME
Theiler, J.; et al.
1999-06-01
The retrieval of scene properties (surface temperature, material type, vegetation health, etc.) from remotely sensed data is the ultimate goal of many earth observing satellites. The algorithms that have been developed for these retrievals are informed by physical models of how the raw data were generated. This includes models of radiation as emitted and/or reflected by the scene, propagated through the atmosphere, collected by the optics, detected by the sensor, and digitized by the electronics. To some extent, the retrieval is the inverse of this "forward" modeling problem. But in contrast to this forward modeling, the practical task of making inferences about the original scene usually requires some ad hoc assumptions, good physical intuition, and a healthy dose of trial and error. The standard MTI data processing pipeline will employ algorithms developed with this traditional approach. But we will discuss some preliminary research on the use of a genetic programming scheme to "evolve" retrieval algorithms. Such a scheme cannot compete with the physical intuition of a remote sensing scientist, but it may be able to automate some of the trial and error. In this scenario, a training set is used, which consists of multispectral image data and the associated "ground truth"; that is, a registered map of the desired retrieval quantity. The genetic programming scheme attempts to combine a core set of image processing primitives to produce an IDL (Interactive Data Language) program which estimates this retrieval quantity from the raw data.
Object detection utilizing a linear retrieval algorithm for thermal infrared imagery
Ramsey, M.S. [Arizona State Univ., Tempe, AZ (United States)]
1996-11-01
Thermal infrared (TIR) spectroscopy and remote sensing have been proven to be extremely valuable tools for mineralogic discrimination. One technique for sub-pixel detection and data reduction, known as a spectral retrieval or unmixing algorithm, will prove useful in the analysis of data from scheduled TIR orbital instruments. This study represents the first quantitative attempt to identify the limits of the model, specifically concentrating on the TIR. The algorithm was written and applied to laboratory data, testing the effects of particle size, noise, and multiple endmembers, then adapted to operate on airborne Thermal Infrared Multispectral Scanner data of the Kelso Dunes, CA, Meteor Crater, AZ, and Medicine Lake Volcano, CA. Results indicate that linear spectral unmixing can produce accurate endmember detection to within an average of 5%. In addition, the effects of vitrification and textural variations were modeled. The ability to predict mineral or rock abundances becomes extremely useful in tracking sediment transport, desertification, and potential hazard assessment in remote volcanic regions. 26 refs., 3 figs.
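A hedged sketch of linear spectral unmixing: a mixed pixel is modeled as a sum-to-one combination of endmember spectra, and the abundances are recovered by least squares. The 3-band spectra and 30/70 mixture below are made up for illustration; real TIR data would have more bands and a nonnegativity constraint as well.

```python
import numpy as np

# Columns of E are hypothetical endmember spectra (3 bands x 2 endmembers).
E = np.array([[0.2, 0.8],
              [0.5, 0.4],
              [0.9, 0.1]])
pixel = 0.3 * E[:, 0] + 0.7 * E[:, 1]   # synthetic 30/70 mixed pixel

# Impose the sum-to-one abundance constraint by appending it as a
# heavily weighted extra equation, then solve ordinary least squares.
w = 1e3
A = np.vstack([E, w * np.ones((1, 2))])
y = np.append(pixel, w)
abund, *_ = np.linalg.lstsq(A, y, rcond=None)
```

On this noise-free pixel the solver recovers the abundances (0.3, 0.7) exactly; with noisy data the residual of the fit indicates how well the chosen endmember set explains the pixel.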
Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems
Van Benthem, Mark H.; Keenan, Michael R.
2008-11-11
A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
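A hedged toy illustration of the combinatorial idea (not the paper's algorithm): in constrained least squares with many observation vectors, columns that end up with the same active set of constraints reduce to identical unconstrained subproblems, so one matrix factorization can serve the whole group of columns at once.

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
B = np.array([[1.0, 2.0],
              [1.0, 3.0],
              [2.0, 5.0]])        # two observation vectors as columns

# Both columns are assumed to share the same (full) passive set, so a
# single QR factorization of A solves both least-squares problems:
Q, R = np.linalg.qr(A)
X = np.linalg.solve(R, Q.T @ B)   # one cheap triangular solve per column
```

Grouping columns by active set and reusing the factorization is what lets the combinatorial algorithm amortize the dominant cost across thousands of observation vectors.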
Sixth SIAM conference on applied linear algebra: Final program and abstracts. Final technical report
1997-12-31
Linear algebra plays a central role in mathematics and applications. The analysis and solution of problems from an amazingly wide variety of disciplines depend on the theory and computational techniques of linear algebra. In turn, the diversity of disciplines depending on linear algebra also serves to focus and shape its development. Some problems have special properties (numerical, structural) that can be exploited. Some are simply so large that conventional approaches are impractical. New computer architectures motivate new algorithms, and fresh ways to look at old ones. The pervasive nature of linear algebra in analyzing and solving problems means that people from a wide spectrum--universities, industrial and government laboratories, financial institutions, and many others--share an interest in current developments in linear algebra. This conference aims to bring them together for their mutual benefit. Abstracts of papers presented are included.
The MARX Modulator Development Program for the International Linear Collider
Leyh, G.E. (SLAC)
2006-06-12
The ILC Marx Modulator Development Program at SLAC is working towards developing a full-scale ILC Marx "Reference Design" modulator prototype, with the goal of significantly reducing the size and cost of the ILC modulator while improving overall modulator efficiency and availability. The ILC Reference Design prototype will provide a proof-of-concept model to industry in advance of Phase II SBIR funding, and also allow operation of the new 10 MW L-Band klystron prototypes immediately upon their arrival at SLAC.
Abdel-Rehim, A M; Stathopoulos, Andreas; Orginos, Kostas
2014-08-01
The technique that was used to build the EigCG algorithm for sparse symmetric linear systems is extended to the nonsymmetric case using the BiCG algorithm. We show that, similarly to the symmetric case, we can build an algorithm that is capable of computing a few smallest magnitude eigenvalues and their corresponding left and right eigenvectors of a nonsymmetric matrix using only a small window of the BiCG residuals while simultaneously solving a linear system with that matrix. For a system with multiple right-hand sides, we give an algorithm that computes incrementally more eigenvalues while solving the first few systems and then uses the computed eigenvectors to deflate BiCGStab for the remaining systems. Our experiments on various test problems, including Lattice QCD, show the remarkable ability of EigBiCG to compute spectral approximations with accuracy comparable to that of the unrestarted, nonsymmetric Lanczos. Furthermore, our incremental EigBiCG followed by appropriately restarted and deflated BiCGStab provides a competitive method for systems with multiple right-hand sides.
Application and implementation of transient algorithms in computer programs
Benson, D.J.
1985-07-01
This presentation gives a brief introduction to the nonlinear finite element programs developed at Lawrence Livermore National Laboratory by the Methods Development Group in the Mechanical Engineering Department. The four programs are DYNA3D and DYNA2D, which are explicit hydrocodes, and NIKE3D and NIKE2D, which are implicit programs. The presentation concentrates on DYNA3D with asides about the other programs. During the past year several new features were added to DYNA3D, and major improvements were made in the computational efficiency of the shell and beam elements. Most of these new features and improvements will eventually make their way into the other programs. The emphasis in our computational mechanics effort has always been, and continues to be, efficiency. To get the most out of our supercomputers, all Crays, we have vectorized the programs as much as possible. Several of the more interesting capabilities of DYNA3D will be described and their impact on efficiency will be discussed. Some of the recent work on NIKE3D and NIKE2D will also be presented. In the belief that a single example is worth a thousand equations, we are skipping the theory entirely and going directly to the examples.
Refining and end use study of coal liquids II - linear programming analysis
Lowe, C.; Tam, S.
1995-12-31
A DOE-funded study is underway to determine the optimum refinery processing schemes for producing transportation fuels that will meet CAAA regulations from direct and indirect coal liquids. The study consists of three major parts: pilot plant testing of critical upgrading processes, linear programming analysis of different processing schemes, and engine emission testing of final products. Currently, fractions of a direct coal liquid produced from bituminous coal are being tested in a sequence of pilot plant upgrading processes. This work is discussed in a separate paper. The linear programming model, which is the subject of this paper, has been completed for the petroleum refinery and is being modified to handle coal liquids based on the pilot plant test results. Preliminary coal liquid evaluation studies indicate that, if a refinery expansion scenario is adopted, then the marginal value of the coal liquid (over the base petroleum crude) is $3-4/bbl.
Obtaining lower bounds from the progressive hedging algorithm for stochastic mixed-integer programs
Gade, Dinakar; Hackebeil, Gabriel; Ryan, Sarah M.; Watson, Jean -Paul; Wets, Roger J.-B.; Woodruff, David L.
2016-04-02
We present a method for computing lower bounds in the progressive hedging algorithm (PHA) for two-stage and multi-stage stochastic mixed-integer programs. Computing lower bounds in the PHA allows one to assess the quality of the solutions generated by the algorithm contemporaneously. The lower bounds can be computed in any iteration of the algorithm by using dual prices that are calculated during execution of the standard PHA. In conclusion, we report computational results on stochastic unit commitment and stochastic server location problem instances, and explore the relationship between key PHA parameters and the quality of the resulting lower bounds.
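A hedged toy illustration of the bound (two scenarios, discrete decisions, all numbers made up): scenario subproblems penalized with dual weights w_s satisfying E[w] = 0 yield a Lagrangian lower bound D(w) on the optimum of the nonanticipative problem, computable from quantities already available inside a PH iteration.

```python
p = [0.5, 0.5]                            # scenario probabilities
w = [1.0, -1.0]                           # dual prices from a PH iteration (E[w] = 0)
X = [0, 1, 2]                             # discrete first-stage decisions (toy)
cost = [lambda x: 3 * x + (2 - x) ** 2,   # scenario 0 cost
        lambda x: x + (1 - x) ** 2]       # scenario 1 cost

# Lower bound: each scenario minimizes independently with its penalty.
lower = sum(ps * min(c(x) + ws * x for x in X)
            for ps, ws, c in zip(p, w, cost))

# True optimum: one decision x must serve every scenario.
true_opt = min(sum(ps * c(x) for ps, c in zip(p, cost)) for x in X)
```

Here the bound evaluates to 2.0 against a true optimum of 2.5; tracking such bounds alongside the PH incumbent is what allows solution quality to be assessed contemporaneously.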
SLFP: A stochastic linear fractional programming approach for sustainable waste management
Zhu, H.; Huang, G.H.
2011-12-15
Highlights:
- A new fractional programming (SLFP) method is developed for waste management.
- SLFP can solve ratio optimization problems associated with random inputs.
- A case study of waste flow allocation demonstrates its applicability.
- SLFP helps compare objectives of two aspects and reflect system efficiency.
- This study supports in-depth analysis of tradeoffs among multiple system criteria.

Abstract: A stochastic linear fractional programming (SLFP) approach is developed for supporting sustainable municipal solid waste management under uncertainty. The SLFP method can solve ratio optimization problems associated with random information, where chance-constrained programming is integrated into a linear fractional programming framework. Its advantages include: (1) comparing objectives of two aspects, (2) reflecting system efficiency, (3) dealing with uncertainty expressed as probability distributions, and (4) providing optimal-ratio solutions under different system-reliability conditions. The method is applied to a case study of waste flow allocation within a municipal solid waste (MSW) management system. The obtained solutions are useful for identifying sustainable MSW management schemes with maximized system efficiency under various constraint-violation risks. The results indicate that SLFP can support in-depth analysis of the interrelationships among system efficiency, system cost and system-failure risk.
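A hedged one-dimensional sketch of the ratio-optimization core (illustrative coefficients, not the paper's model): a linear fractional objective over an interval is monotone, so its maximum lies at an endpoint, which makes the tiny case checkable by hand.

```python
def ratio(x, c=2.0, a=1.0, d=1.0, b=3.0):
    """Efficiency-style ratio (c*x + a) / (d*x + b); its derivative has
    the constant sign of (c*b - a*d), so it is monotone in x."""
    return (c * x + a) / (d * x + b)

lo, hi = 0.0, 5.0
best_x = max((lo, hi), key=ratio)   # maximize the ratio: check both endpoints
```

With these coefficients the ratio rises from 1/3 at x = 0 to 11/8 at x = 5, so best_x = 5. In higher dimensions the same structure is handled by transformations (e.g. Charnes-Cooper) that reduce the fractional program to a linear program.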
Djukanovic, M.; Babic, B.; Milosevic, B.; Sobajic, D.J.; Pao, Y.H.
1996-05-01
In this paper the blending/transloading facilities are modeled using interactive fuzzy linear programming (FLP), in order to allow the decision-maker to address the uncertainty of input information within fuel scheduling optimization. An interactive decision-making process is formulated in which the decision-maker can learn to recognize good solutions by considering all possibilities of fuzziness. The application of the fuzzy formulation is accompanied by a careful examination of the definition of fuzziness, the appropriateness of the membership function, and the interpretation of results. The proposed concept provides a decision support system with integration-oriented features, whereby the decision-maker can learn to recognize the relative importance of factors in the specific domain of the optimal fuel scheduling (OFS) problem. The formulation of a fuzzy linear programming problem to obtain a reasonable nonfuzzy solution under consideration of the ambiguity of parameters, represented by fuzzy numbers, is introduced. An additional advantage of the FLP formulation is its ability to deal with multi-objective problems.
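A hedged sketch of the basic fuzziness machinery: a triangular membership function grades how acceptable a value of an uncertain coefficient is (1 = fully acceptable, 0 = not at all). The parameters below are illustrative, not from the paper.

```python
def triangular(x, a, m, b):
    """Membership of x in the triangular fuzzy number (a, m, b):
    0 outside [a, b], rising linearly to 1 at the modal value m."""
    if x <= a or x >= b:
        return 0.0
    if x <= m:
        return (x - a) / (m - a)
    return (b - x) / (b - m)
```

In an interactive FLP run, memberships like this one for each fuzzy parameter are aggregated (typically by a minimum) into an overall satisfaction degree that the decision-maker trades off against cost.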
Averaging and Linear Programming in Some Singularly Perturbed Problems of Optimal Control
Gaitsgory, Vladimir; Rossomakhine, Sergey
2015-04-15
The paper aims at the development of an apparatus for analysis and construction of near optimal solutions of singularly perturbed (SP) optimal control problems (that is, problems of optimal control of SP systems) considered on the infinite time horizon. We mostly focus on problems with time discounting criteria, but a possibility of the extension of results to periodic optimization problems is discussed as well. Our consideration is based on earlier results on averaging of SP control systems and on linear programming formulations of optimal control problems. The idea that we exploit is to first asymptotically approximate a given problem of optimal control of the SP system by a certain averaged optimal control problem, then reformulate this averaged problem as an infinite-dimensional linear programming (LP) problem, and then approximate the latter by semi-infinite LP problems. We show that the optimal solutions of these semi-infinite LP problems and their duals (which can be found with the help of a modification of available LP software) allow one to construct near optimal controls of the SP system. We demonstrate the construction with two numerical examples.
Strout, Michelle
2015-08-15
Programming parallel machines is fraught with difficulties: the obfuscation of algorithms due to implementation details such as communication and synchronization, the need for transparency between language constructs and performance, the difficulty of performing program analysis to enable automatic parallelization techniques, and the existence of important "dusty deck" codes. The SAIMI project developed abstractions that enable the orthogonal specification of algorithms and implementation details within the context of existing DOE applications. The main idea is to enable the injection of small programming models such as expressions involving transcendental functions, polyhedral iteration spaces with sparse constraints, and task graphs into full programs through the use of pragmas. These smaller, more restricted programming models enable orthogonal specification of many implementation details such as how to map the computation onto parallel processors, how to schedule the computation, and how to allocate storage for the computation. At the same time, these small programming models enable the expression of the most computationally intense and communication-heavy portions of many scientific simulations. The ability to orthogonally manipulate the implementation for such computations will significantly ease performance programming efforts and expose transformation possibilities and parameters to automated approaches such as autotuning. At Colorado State University, the SAIMI project was supported through DOE grant DE-SC3956 from April 2010 through August 2015. The SAIMI project has contributed a number of important results to programming abstractions that enable the orthogonal specification of implementation details in scientific codes. This final report summarizes the research that was funded by the SAIMI project.
Library of Continuation Algorithms
Energy Science and Technology Software Center
2005-03-01
LOCA (Library of Continuation Algorithms) is scientific software written in C++ that provides advanced analysis tools for nonlinear systems. In particular, it provides parameter continuation algorithms, bifurcation tracking algorithms, and drivers for linear stability analysis. The algorithms are aimed at large-scale applications that use Newton's method for their nonlinear solve.
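As an illustration of parameter continuation, the first of the analysis tools named above, here is a minimal Python sketch on a scalar equation; the function names are hypothetical, and this is a sketch of the idea rather than LOCA's C++ API:

```python
def newton(f, df, x, tol=1e-12, max_iter=100):
    """Newton's method for a scalar equation f(x) = 0."""
    for _ in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            break
    return x

def continuation(f, df, x0, params):
    """Natural-parameter continuation: solve f(x; lam) = 0 for each
    parameter value, warm-starting Newton from the previous solution."""
    xs, x = [], x0
    for lam in params:
        x = newton(lambda t: f(t, lam), lambda t: df(t, lam), x)
        xs.append(x)
    return xs
```

Warm-starting from the previous solution is what lets the solver track a branch of solutions as the parameter sweeps, e.g. the positive root of x^2 - lam as lam varies.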
1995-03-01
A model was developed for use in the Bechtel PIMS (Process Industry Modeling System) linear programming software to simulate a generic Midwest (PADD II) petroleum refinery of the future. This "petroleum-only" version of the model establishes the size and complexity of the refinery after the year 2000 and prior to the introduction of coal liquids. It should be noted that no assumption has been made on when a plant can be built to produce coal liquids, except that it will be after the year 2000. The year 2000 was chosen because it is the latest year for which fuel property and emission standards have been set by the Environmental Protection Agency. The model assumes the refinery has been modified to accept crudes that are heavier in gravity and higher in sulfur than today's average crude mix. In addition, the refinery has also been modified to produce a product slate of transportation fuels of the future (i.e., 40% reformulated gasolines). This model will be used as a basis for determining the optimum scheme for processing coal liquids in a petroleum refinery. This report summarizes the design basis for this "petroleum-only" LP refinery model. A report detailing the refinery configuration when coal liquids are processed will be provided at a later date.
Frenkel, G.; Paterson, T.S.; Smith, M.E.
1988-04-01
The Institute for Defense Analyses (IDA) has collected and analyzed information on battle management algorithm technology that is relevant to Battle Management/Command, Control and Communications (BM/C3). This Memorandum Report represents a program plan that will provide the BM/C3 Directorate of the Strategic Defense Initiative Organization (SDIO) with administrative and technical insight into algorithm technology. This program plan focuses on current activity in algorithm development and provides information and analysis to the SDIO to be used in formulating budget requirements for FY 1988 and beyond. Based upon analysis of algorithm requirements and ongoing programs, recommendations have been made for research areas that should be pursued, including both the continuation of current work and the initiation of new tasks. This final report includes all relevant material from interim reports as well as new results.
Award DE-FG02-04ER52655 Final Technical Report: Interior Point Algorithms for Optimization Problems
O'Leary, Dianne P.; Tits, Andre
2014-04-03
Over the period of this award we developed an algorithmic framework for constraint reduction in linear programming (LP) and convex quadratic programming (QP), proved convergence of our algorithms, and applied them to a variety of applications, including entropy-based moment closure in gas dynamics.
Independent Oversight Inspection, Stanford Linear Accelerator...
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Safety, and Health Programs at the Stanford Linear Accelerator Center This report provides the results of an inspection of the environment, safety, and health programs at the ...
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
New Effective Multithreaded Matching Algorithms
Manne, Fredrik; Halappanavar, Mahantesh
2014-05-19
Matching is an important combinatorial problem with a number of applications in areas such as community detection, sparse linear algebra, and network alignment. Since computing optimal matchings can be very time consuming, several fast approximation algorithms, both sequential and parallel, have been suggested. Common to the algorithms giving the best solutions is that they tend to be sequential by nature, while algorithms more suitable for parallel computation give solutions of less quality. We present a new simple 1/2-approximation algorithm for the weighted matching problem. This algorithm is both faster than any other suggested sequential 1/2-approximation algorithm on almost all inputs and also scales better than previous multithreaded algorithms. We further extend this to a general scalable multithreaded algorithm that computes matchings of weight comparable with the best sequential algorithms. The performance of the suggested algorithms is documented through extensive experiments on different multithreaded architectures.
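For context, the classic sort-based greedy 1/2-approximation for weighted matching can be sketched in a few lines; this is the textbook baseline, not the paper's new algorithm, and the edge-list format is an assumption of the sketch:

```python
def greedy_matching(edges):
    """Greedy 1/2-approximation for maximum-weight matching.

    edges: list of (weight, u, v) tuples. Scanning edges in order of
    decreasing weight and taking any edge whose endpoints are both
    still free yields a matching of at least half the optimal weight.
    """
    matched = set()
    matching = []
    for w, u, v in sorted(edges, reverse=True):
        if u not in matched and v not in matched:
            matching.append((u, v, w))
            matched.update((u, v))
    return matching
```

The sort is what makes this variant inherently sequential, which is exactly the tension the abstract describes between solution quality and parallel scalability.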
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Linear Accelerator (LINAC) The core of the LANSCE facility is one of the nation's most powerful proton linear accelerators, or LINACs. The LINAC at LANSCE has served the nation since 1972, providing the beam current required by all the experimental areas that support NNSA-DP and other DOE missions. The LINAC's capability to reliably deliver beam current is the key to LANSCE's ability to do research, and thus the key to meeting NNSA and DOE mission deliverables.
Munguia, Lluis-Miquel; Oxberry, Geoffrey; Rajan, Deepak
2016-05-01
Stochastic mixed-integer programs (SMIPs) deal with optimization under uncertainty at many levels of the decision-making process. When solved as extensive formulation mixed-integer programs, problem instances can exceed available memory on a single workstation. In order to overcome this limitation, we present PIPS-SBB: a distributed-memory parallel stochastic MIP solver that takes advantage of parallelism at multiple levels of the optimization process. We also show promising results on the SIPLIB benchmark by combining methods known for accelerating Branch and Bound (B&B) methods with new ideas that leverage the structure of SMIPs. Finally, we expect the performance of PIPS-SBB to improve further as more functionality is added in the future.
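The prune-by-bound idea behind branch and bound can be illustrated at toy scale with a 0/1 knapsack solver that uses the fractional LP relaxation as its bound; this is a generic sketch of the B&B mechanism, not PIPS-SBB's distributed algorithm:

```python
def knapsack_bb(values, weights, capacity):
    """Depth-first branch and bound for 0/1 knapsack. The bound at each
    node is the greedy fractional (LP-relaxation) completion; nodes whose
    bound cannot beat the incumbent are pruned."""
    order = sorted(range(len(values)),
                   key=lambda i: values[i] / weights[i], reverse=True)
    v = [values[i] for i in order]
    w = [weights[i] for i in order]
    best = 0

    def bound(i, cap, val):
        # Fill greedily; the last item may enter fractionally.
        for j in range(i, len(v)):
            if w[j] <= cap:
                cap -= w[j]
                val += v[j]
            else:
                return val + v[j] * cap / w[j]
        return val

    def dfs(i, cap, val):
        nonlocal best
        if val > best:
            best = val
        if i == len(v) or bound(i, cap, val) <= best:
            return                       # prune: bound cannot improve incumbent
        if w[i] <= cap:
            dfs(i + 1, cap - w[i], val + v[i])   # branch: take item i
        dfs(i + 1, cap, val)                      # branch: skip item i
    dfs(0, capacity, 0)
    return best
```

In a parallel solver like the one described, the open nodes of this search tree are what get distributed across workers.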
Colgate, S.A.
1958-05-27
An improvement is presented in linear accelerators for charged particles with respect to the stable focusing of the particle beam. The improvement consists of providing a radial electric field transverse to the accelerating electric fields and angularly introducing the beam of particles in the field. The results of the foregoing is to achieve a beam which spirals about the axis of the acceleration path. The combination of the electric fields and angular motion of the particles cooperate to provide a stable and focused particle beam.
Voila: A visual object-oriented iterative linear algebra problem solving environment
Edwards, H.C.; Hayes, L.J.
1994-12-31
Application of iterative methods to solve a large linear system of equations currently involves writing a program which calls iterative method subprograms from a large software package. These subprograms have complex interfaces which are difficult to use and even more difficult to program. A problem solving environment specifically tailored to the development and application of iterative methods is needed. This need will be fulfilled by Voila, a problem solving environment which provides a visual programming interface to object-oriented iterative linear algebra kernels. Voila will provide several quantum improvements over current iterative method problem solving environments. First, programming and applying iterative methods is considerably simplified through Voila's visual programming interface. Second, iterative method algorithm implementations are independent of any particular sparse matrix data structure through Voila's object-oriented kernels. Third, the compile-link-debug process is eliminated as Voila operates as an interpreter.
Belos Block Linear Solvers Package
Energy Science and Technology Software Center
2004-03-01
Belos is an extensible and interoperable framework for large-scale, iterative methods for solving systems of linear equations with multiple right-hand sides. The motivation for this framework is to provide a generic interface to a collection of algorithms for solving large-scale linear systems. Belos is interoperable because both the matrix and vectors are considered to be opaque objects--only knowledge of the matrix and vectors via elementary operations is necessary. An implementation of Belos is accomplished via the use of interfaces. One of the goals of Belos is to allow the user flexibility in specifying the data representation for the matrix and vectors and so leverage any existing software investment. The algorithms that will be included in the package are Krylov-based linear solvers, like Block GMRES (Generalized Minimal RESidual) and Block CG (Conjugate Gradient).
Final Report-Optimization Under Uncertainty and Nonconvexity: Algorithms and Software
Jeff Linderoth
2008-10-10
The goal of this research was to develop new algorithmic techniques for solving large-scale numerical optimization problems, focusing on problems classes that have proven to be among the most challenging for practitioners: those involving uncertainty and those involving nonconvexity. This research advanced the state-of-the-art in solving mixed integer linear programs containing symmetry, mixed integer nonlinear programs, and stochastic optimization problems.
Independent Oversight Inspection, Stanford Linear Accelerator Center -
Office of Environmental Management (EM)
January 2007 | Department of Energy. Independent Oversight Inspection of Environment, Safety, and Health Programs at the Stanford Linear Accelerator Center. This report provides the results of an inspection of the environment, safety, and health programs at the Department of Energy's (DOE) Stanford Linear Accelerator Center. The inspection was conducted during October
Miller, Naomi J.; Perrin, Tess E.; Royer, Michael P.; Wilkerson, Andrea M.; Beeson, Tracy A.
2014-05-20
Although lensed troffers are numerous, there are many other types of optical systems as well. This report looked at the performance of three linear (T8) LED lamps chosen primarily based on their luminous intensity distributions (narrow, medium, and wide beam angles) as well as a benchmark fluorescent lamp in five different troffer types. Also included are the results of a subjective evaluation. Results show that linear (T8) LED lamps can improve luminaire efficiency in K12-lensed and parabolic-louvered troffers, effect little change in volumetric and high-performance diffuse-lensed type luminaires, but reduce efficiency in recessed indirect troffers. These changes can be accompanied by visual appearance and visual comfort consequences, especially when LED lamps with clear lenses and narrow distributions are installed. Linear (T8) LED lamps with diffuse apertures exhibited wider beam angles, performed more similarly to fluorescent lamps, and received better ratings from observers. Guidance is provided on which luminaires are the best candidates for retrofitting with linear (T8) LED lamps.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Programming. Compiling and linking programs on Euclid. Compiling Codes: how to compile and link MPI codes on Euclid. Using the ACML Math Library: how to compile and link a code with the ACML library and include the $ACML environment variable. Process Limits: the hard and soft process limits are listed.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Programming. Compiling Codes on Hopper: Cray provides a convenient set of wrapper commands that should be used in almost all cases for compiling and linking parallel programs. Invoking the wrappers will automatically link codes with the MPI libraries and other Cray system software libraries. All the MPI and Cray system include directories are also transparently imported. This page shows examples of how to compile codes on Franklin and Hopper.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Programming. The genepool system has a diverse set of software development tools and a rich environment for delivering their functionality to users. Genepool has adopted a modular system which has been adapted from the Programming Environments similar to those provided on the Cray systems at NERSC. The Programming Environment is managed by a meta-module named similar to "PrgEnv-gnu/4.6". The "gnu" indicates that it is providing the GNU environment, principally GCC,
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Each programming environment contains the full set of compatible compilers and libraries. ...
Adiabatic Quantum Programming: Minor Embedding With Hard Faults
Klymko, Christine F; Sullivan, Blair D; Humble, Travis S
2013-01-01
Adiabatic quantum programming defines the time-dependent mapping of a quantum algorithm into the hardware or logical fabric. An essential programming step is the embedding of problem-specific information into the logical fabric to define the quantum computational transformation. We present algorithms for embedding arbitrary instances of the adiabatic quantum optimization algorithm into a square lattice of specialized unit cells. Our methods are shown to be extensible in fabric growth, linear in time, and quadratic in logical footprint. In addition, we provide methods for accommodating hard faults in the logical fabric without invoking approximations to the original problem. These hard fault-tolerant embedding algorithms are expected to prove useful for benchmarking the adiabatic quantum optimization algorithm on existing quantum logical hardware. We illustrate this versatility through numerical studies of embeddability versus hard fault rates in square lattices of complete bipartite unit cells.
Positrons for linear colliders
Ecklund, S.
1987-11-01
The requirements of a positron source for a linear collider are briefly reviewed, followed by methods of positron production and production of photons by electromagnetic cascade showers. Cross sections for the electromagnetic cascade shower processes of positron-electron pair production and Compton scattering are compared. A program used for Monte Carlo analysis of electromagnetic cascades is briefly discussed, and positron distributions obtained from several runs of the program are discussed. Photons from synchrotron radiation and from channeling are also mentioned briefly, as well as positron collection, transverse focusing techniques, and longitudinal capture. Computer ray tracing is then briefly discussed, followed by space-charge effects and thermal heating and stress due to showers. (LEW)
U.S. Department of Energy (DOE) - all webpages (Extended Search)
using MPI and OpenMP on NERSC systems, the same does not always exist for other supported parallel programming models such as UPC or Chapel. At the same time, we know that these...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Programming. Compiling Codes: there are three compiler suites available on Carver: Portland Group (PGI), Intel, and GCC. The PGI compilers are the default, to provide compatibility with other NERSC platforms. Using MKL: Intel's Math Kernel Library (MKL) is a library of highly optimized, extensively threaded math routines optimized for Intel processors. Core math functions include BLAS, LAPACK, ScaLAPACK, Sparse Solvers, Fast Fourier Transforms, Vector Math, and more. It is
An optimal point spread function subtraction algorithm for high...
Office of Scientific and Technical Information (OSTI)
An optimal point spread function subtraction algorithm for high-contrast imaging: a ... This image is built as a linear combination of all available images and is optimized ...
Gropp, William D.
2014-06-23
With the coming end of Moore's law, it has become essential to develop new algorithms and techniques that can provide the performance needed by demanding computational science applications, especially those that are part of the DOE science mission. This work was part of a multi-institution, multi-investigator project that explored several approaches to develop algorithms that would be effective at the extreme scales and with the complex processor architectures that are expected at the end of this decade. The work by this group developed new performance models that have already helped guide the development of highly scalable versions of an algebraic multigrid solver, new programming approaches designed to support numerical algorithms on heterogeneous architectures, and a new, more scalable version of conjugate gradient, an important algorithm in the solution of very large linear systems of equations.
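The conjugate gradient method mentioned above can be sketched in plain Python for a small dense symmetric positive-definite system; scalable versions are preconditioned and matrix-free, so this only conveys the recurrence:

```python
def conjugate_gradient(A, b, tol=1e-10, max_iter=100):
    """Unpreconditioned CG for A x = b with A symmetric positive definite.
    A is a dense list-of-lists; tol bounds the squared residual norm."""
    n = len(b)
    mv = lambda M, v: [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
    dot = lambda u, v: sum(ui * vi for ui, vi in zip(u, v))
    x = [0.0] * n
    r = b[:]                      # residual b - A x for x = 0
    p = r[:]                      # initial search direction
    rs = dot(r, r)
    for _ in range(max_iter):
        Ap = mv(A, p)
        alpha = rs / dot(p, Ap)   # exact line search along p
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x
```

The two reductions (`dot` calls) per iteration are the global synchronization points that the "more scalable version of conjugate gradient" cited above is designed to reduce.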
A new hybrid algorithm for analysis of HVdc and FACTs systems
Anderson, G.W.J.; Watson, N.R.; Arnold, C.P.; Arrillaga, J.
1995-12-31
Hybrid stability programs use a transient stability analysis for ac systems, in conjunction with detailed state variable or EMTP type modelling for fast dynamic devices. This paper presents a new hybrid algorithm that uses optimized techniques based on previously proposed methods. The hybrid provides a useful analysis tool to examine systems incorporating fast dynamic non-linear components such as HVdc links and FACTs devices.
Daniel, David J; Mc Pherson, Allen; Thorp, John R; Barrett, Richard; Clay, Robert; De Supinski, Bronis; Dube, Evi; Heroux, Mike; Janssen, Curtis; Langer, Steve; Laros, Jim
2011-01-14
A programming model is a set of software technologies that support the expression of algorithms and provide applications with an abstract representation of the capabilities of the underlying hardware architecture. The primary goals are productivity, portability and performance.
FPGA-based Klystron linearization implementations in scope of ILC
Omet, M.; Michizono, S.; Varghese, P.; Schlarb, H.; Branlard, J.; Cichalewski, W.
2015-01-23
We report the development and implementation of four FPGA-based predistortion-type klystron linearization algorithms. Klystron linearization is essential for the realization of the ILC, since the klystrons are required to operate 7% below their saturation power. The work presented was performed in international collaborations at the Fermi National Accelerator Laboratory (FNAL), USA and the Deutsches Elektronen-Synchrotron (DESY), Germany. With the newly developed algorithms, the generation of correction factors on the FPGA was improved compared to past algorithms, avoiding quantization and decreasing memory requirements. At FNAL, three algorithms were tested at the Advanced Superconducting Test Accelerator (ASTA), demonstrating a successful implementation for one algorithm and a proof of principle for two algorithms. Furthermore, the functionality of the algorithm implemented at DESY was demonstrated successfully in a simulation.
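The predistortion principle can be illustrated with a toy saturation model; the tanh curve below is an illustrative assumption, not a measured klystron characteristic or the FPGA algorithms described:

```python
import math

def klystron(drive, sat=1.0):
    """Toy saturation model (illustrative assumption): the output
    compresses as tanh toward the saturation level."""
    return sat * math.tanh(drive / sat)

def predistort(wanted, sat=1.0):
    """Predistortion: apply the inverse of the saturation curve, so the
    cascade predistort -> klystron is linear below saturation."""
    x = max(min(wanted / sat, 0.999999), -0.999999)  # clamp atanh domain
    return sat * math.atanh(x)
```

Without predistortion the compressed output falls short of the requested amplitude; with it, the cascade reproduces the request until the drive approaches saturation.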
FPGA-based Klystron linearization implementations in scope of ILC
Omet, M.; Michizono, S.; Matsumoto, T.; Miura, T.; Qiu, F.; Chase, B.; Varghese, P.; Schlarb, H.; Branlard, J.; Cichalewski, W.
2015-01-23
We report the development and implementation of four FPGA-based predistortion-type klystron linearization algorithms. Klystron linearization is essential for the realization of the ILC, since the klystrons are required to operate 7% below their saturation power. The work presented was performed in international collaborations at the Fermi National Accelerator Laboratory (FNAL), USA and the Deutsches Elektronen-Synchrotron (DESY), Germany. With the newly developed algorithms, the generation of correction factors on the FPGA was improved compared to past algorithms, avoiding quantization and decreasing memory requirements. At FNAL, three algorithms were tested at the Advanced Superconducting Test Accelerator (ASTA), demonstrating a successful implementation for one algorithm and a proof of principle for two algorithms. Furthermore, the functionality of the algorithm implemented at DESY was demonstrated successfully in a simulation.
Linear induction accelerator parameter options
Birx, D.L.; Caporaso, G.J.; Reginato, L.L.
1986-04-21
The principal undertaking of the Beam Research Program over the past decade has been the investigation of propagating intense self-focused beams. Recently, the major activity of the program has shifted toward the investigation of converting high quality electron beams directly to laser radiation. During the early years of the program, accelerator development was directed toward the generation of very high current (>10 kA), high energy beams (>50 MeV). In its new mission, the program has shifted the emphasis toward the production of lower current beams (>3 kA) with high brightness (>10^6 A/(rad-cm)^2) at very high average power levels. In efforts to produce these intense beams, the state of the art of linear induction accelerators (LIA) has been advanced to the point of satisfying not only the current requirements but also future national needs.
Translation and integration of numerical atomic orbitals in linear molecules
Heinäsmäki, Sami
2014-02-14
We present algorithms for translation and integration of atomic orbitals for LCAO calculations in linear molecules. The method applies to arbitrary radial functions given on a numerical mesh. The algorithms are based on pseudospectral differentiation matrices in two dimensions and the corresponding two-dimensional Gaussian quadratures. As a result, multicenter overlap and Coulomb integrals can be evaluated effectively.
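Gaussian quadrature, the integration tool underlying the two-dimensional rules above, can be illustrated in one dimension; a two-point Gauss-Legendre rule is exact for polynomials up to degree three (this generic sketch is not the paper's two-dimensional scheme):

```python
import math

def gauss_legendre_2pt(f, a, b):
    """Two-point Gauss-Legendre quadrature on [a, b].
    Nodes are +/- 1/sqrt(3) on [-1, 1], mapped affinely to [a, b];
    both weights equal (b - a)/2. Exact for cubics."""
    x = 1.0 / math.sqrt(3.0)
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * (f(mid - half * x) + f(mid + half * x))
```

Two function evaluations integrating a cubic exactly is the efficiency that makes tensor-product Gaussian rules attractive for multicenter integrals.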
Asynchronous parallel generating set search for linearly-constrained optimization.
Lewis, Robert Michael; Griffin, Joshua D.; Kolda, Tamara Gibson
2006-08-01
Generating set search (GSS) is a family of direct search methods that encompasses generalized pattern search and related methods. We describe an algorithm for asynchronous linearly-constrained GSS, which has some complexities that make it different from both the asynchronous bound-constrained case as well as the synchronous linearly-constrained case. The algorithm has been implemented in the APPSPACK software framework and we present results from an extensive numerical study using CUTEr test problems. We discuss the results, both positive and negative, and conclude that GSS is a reliable method for solving small-to-medium sized linearly-constrained optimization problems without derivatives.
Fault tolerant linear actuator
Tesar, Delbert
2004-09-14
In varying embodiments, the fault tolerant linear actuator of the present invention is a new and improved linear actuator with fault tolerance and positional control that may incorporate velocity summing, force summing, or a combination of the two. In one embodiment, the invention offers a velocity summing arrangement with a differential gear between two prime movers driving a cage, which then drives a linear spindle screw transmission. Other embodiments feature two prime movers driving separate linear spindle screw transmissions, one internal and one external, in a totally concentric and compact integrated module.
Linear phase compressive filter
McEwan, Thomas E.
1995-01-01
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line.
Linear phase compressive filter
McEwan, T.E.
1995-06-06
A phase linear filter for soliton suppression is in the form of a laddered series of stages of non-commensurate low pass filters with each low pass filter having a series coupled inductance (L) and a reverse biased, voltage dependent varactor diode, to ground which acts as a variable capacitance (C). L and C values are set to levels which correspond to a linear or conventional phase linear filter. Inductance is mapped directly from that of an equivalent nonlinear transmission line and capacitance is mapped from the linear case using a large signal equivalent of a nonlinear transmission line. 2 figs.
Snapshot: Linear Lamps (TLEDs)
A report using LED Lighting Facts data to examine the current state of the market for linear fluorescent lamps. (8 pages, July 2016)
Focusing in Linear Accelerators
McMillan, E. M.
1950-08-24
Review of the theory of focusing in linear accelerators with comments on the incompatibility of phase stability and first-order focusing in a simple accelerator.
Algorithms for builder guidelines
Balcomb, J.D.; Lekov, A.B.
1989-06-01
The Builder Guidelines are designed to make simple, appropriate guidelines available to builders for their specific localities. Builders may select from passive solar and conservation strategies with different performance potentials. They can then compare the calculated results for their particular house design with a typical house in the same location. Algorithms used to develop the Builder Guidelines are described. The main algorithms used are the monthly solar ratio (SLR) method for winter heating, the diurnal heat capacity (DHC) method for temperature swing, and a new simplified calculation method (McCool) for summer cooling. This paper applies the algorithms to estimate the performance potential of passive solar strategies, and the annual heating and cooling loads of various combinations of conservation and passive solar strategies. The basis of the McCool method is described. All three methods are implemented in a microcomputer program used to generate the guideline numbers. Guidelines for Denver, Colorado, are used to illustrate the results. The structure of the guidelines and worksheet booklets are also presented. 5 refs., 3 tabs.
Linearly polarized fiber amplifier
Kliner, Dahv A.; Koplow, Jeffery P.
2004-11-30
Optically pumped rare-earth-doped polarizing fibers exhibit significantly higher gain for one linear polarization state than for the orthogonal state. Such a fiber can be used to construct a single-polarization fiber laser, amplifier, or amplified-spontaneous-emission (ASE) source without the need for additional optical components to obtain stable, linearly polarized operation.
Inpainting with sparse linear combinations of exemplars
Wohlberg, Brendt
2008-01-01
We introduce a new exemplar-based inpainting algorithm based on representing the region to be inpainted as a sparse linear combination of blocks extracted from similar parts of the image being inpainted. This method is conceptually simple, being computed by functional minimization, and avoids the complexity of correctly ordering the filling in of missing regions of other exemplar-based methods. Initial performance comparisons on small inpainting regions indicate that this method provides similar or better performance than other recent methods.
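A greedy stand-in for the sparse-combination idea is matching pursuit; the paper's own solver is a functional minimization, so this sketch (with hypothetical flat list-based blocks) only conveys the representation of a target block by a few exemplars:

```python
def matching_pursuit(target, exemplars, k=3):
    """Greedy sparse approximation: for k rounds, add the exemplar most
    correlated with the current residual and subtract its projection.
    target and exemplars are flat lists (flattened image blocks)."""
    dot = lambda a, b: sum(x * y for x, y in zip(a, b))
    residual = list(target)
    coeffs = {}
    for _ in range(k):
        # score each exemplar by squared correlation with the residual
        j = max(range(len(exemplars)),
                key=lambda j: dot(residual, exemplars[j]) ** 2
                              / dot(exemplars[j], exemplars[j]))
        e = exemplars[j]
        c = dot(residual, e) / dot(e, e)   # projection coefficient
        coeffs[j] = coeffs.get(j, 0.0) + c
        residual = [r - c * x for r, x in zip(residual, e)]
    return coeffs, residual
```

The returned coefficient dictionary is the sparse linear combination; a small residual norm indicates the missing block is well explained by few exemplars.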
Rios, A. B.; Valda, A.; Somacal, H.
2007-10-26
Usually, a tomographic procedure requires a set of projections around the object under study and mathematical processing of those projections through reconstruction algorithms. An accurate reconstruction requires a proper number of projections (angular sampling) and a proper number of elements in each projection (linear sampling). In several practical cases, however, it is not possible to fulfill these conditions, leading to the so-called problem of few projections. In this case, iterative reconstruction algorithms are more suitable than analytic ones. In this work we present a program written in C++ that provides an environment for two iterative algorithm implementations, one algebraic and the other statistical. The software allows the user a full definition of the acquisition and reconstruction geometries used by the reconstruction algorithms, and also supports projection and backprojection operations. A set of analysis tools was implemented for the characterization of the convergence process. We analyze the performance of the algorithms on numerical phantoms and present the reconstruction of experimental data with few projections coming from transmission X-ray and micro PIXE (Particle-Induced X-ray Emission) images.
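The algebraic family of iterative reconstruction methods is built on Kaczmarz-style row projections (the core of ART); a minimal sketch on a generic linear system, not the C++ program described:

```python
def kaczmarz(A, b, sweeps=50):
    """Kaczmarz iteration: cyclically project the current estimate onto
    each row's hyperplane a_i . x = b_i. In tomography each row encodes
    one ray measurement, so few projections means few rows."""
    n = len(A[0])
    x = [0.0] * n
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            norm2 = sum(a * a for a in a_i)
            c = (b_i - sum(a * xi for a, xi in zip(a_i, x))) / norm2
            x = [xi + c * a for xi, a in zip(x, a_i)]
    return x
```

For consistent systems the iterates converge to a solution even when the system is underdetermined, which is why such schemes remain usable in the few-projections regime.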
PC Basic Linear Algebra Subroutines
Energy Science and Technology Software Center
1992-03-09
PC-BLAS is a highly optimized version of the Basic Linear Algebra Subprograms (BLAS), a standardized set of thirty-eight routines that perform low-level operations on vectors of numbers in single- and double-precision real and complex arithmetic. Routines are included to find the index of the largest component of a vector, apply a Givens or modified Givens rotation, multiply a vector by a constant, determine the Euclidean length, perform a dot product, swap and copy vectors, and find the norm of a vector. The BLAS have been carefully written to minimize numerical problems such as loss of precision and underflow, and are designed so that the computation is independent of the interface with the calling program. This independence is achieved through judicious use of Assembly-language macros. Interfaces are provided for Lahey Fortran 77, Microsoft Fortran 77, and Ryan-McFarland IBM Professional Fortran.
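The semantics of a few of the Level-1 routines named above can be sketched in Python; real BLAS routines also take vector-stride arguments, and DNRM2 rescales internally to avoid overflow and underflow, which this naive version does not:

```python
import math

def daxpy(alpha, x, y):
    """y := alpha*x + y (the BLAS AXPY operation), returned as a new list."""
    return [alpha * xi + yi for xi, yi in zip(x, y)]

def ddot(x, y):
    """Dot product of two vectors (BLAS DOT)."""
    return sum(xi * yi for xi, yi in zip(x, y))

def dnrm2(x):
    """Euclidean length of a vector (BLAS NRM2), naive version."""
    return math.sqrt(ddot(x, x))
```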
Combustion powered linear actuator
Fischer, Gary J.
2007-09-04
The present invention provides robotic vehicles having wheeled and hopping mobilities that are capable of traversing (e.g. by hopping over) obstacles that are large in size relative to the robot and, are capable of operation in unpredictable terrain over long range. The present invention further provides combustion powered linear actuators, which can include latching mechanisms to facilitate pressurized fueling of the actuators, as can be used to provide wheeled vehicles with a hopping mobility.
Bosamykin, V.S.; Pavlovskiy, A.I.
1984-03-01
A linear induction accelerator of charged particles, containing inductors and an acceleration circuit, characterized by the fact that, for the purpose of increasing the power of the accelerator, each inductor is made in the form of a toroidal line with distributed parameters, with a ring commutator included in the gap at one end of the line and a resistor connected at the other end, is described.
Buttram, M.T.; Ginn, J.W.
1988-06-21
A linear induction accelerator includes a plurality of adder cavities arranged in a series and provided in a structure which is evacuated so that a vacuum inductance is provided between each adder cavity and the structure. An energy storage system for the adder cavities includes a pulsed current source and a respective plurality of bipolar converting networks connected thereto. The bipolar high-voltage, high-repetition-rate square pulse train sets and resets the cavities. 4 figs.
Emma, P.
1995-06-01
The Stanford Linear Collider (SLC) is the first and only high-energy e^+e^- linear collider in the world. Its most remarkable features are high intensity, submicron sized, polarized (e^-) beams at a single interaction point. The main challenges posed by these unique characteristics include machine-wide emittance preservation, consistent high intensity operation, polarized electron production and transport, and the achievement of a high degree of beam stability on all time scales. In addition to serving as an important machine for the study of Z^0 boson production and decay using polarized beams, the SLC is also an indispensable source of hands-on experience for future linear colliders. Each new year of operation has been highlighted with a marked improvement in performance. The most significant improvements for the 1994-95 run include new low impedance vacuum chambers for the damping rings, an upgrade to the optics and diagnostics of the final focus systems, and a higher degree of polarization from the electron source. As a result, the average luminosity has nearly doubled over the previous year with peaks approaching 10^30 cm^-2 s^-1 and an 80% electron polarization at the interaction point. These developments as well as the remaining identifiable performance limitations will be discussed.
Energy Science and Technology Software Center
002651IBMPC00: Algorithm for Accounting for the Interactions of Multiple Renewable Energy Technologies in Estimation of Annual Performance
A cooperative control algorithm for camera-based observational systems.
Young, Joseph G.
2012-01-01
Over the last several years, there has been considerable growth in camera-based observation systems for a variety of safety, scientific, and recreational applications. In order to improve the effectiveness of these systems, we frequently desire the ability to increase the number of observed objects, but solving this problem is not as simple as adding more cameras. Quite often, there are economic or physical restrictions that prevent us from adding additional cameras to the system. As a result, we require methods that coordinate the tracking of objects between multiple cameras in an optimal way. In order to accomplish this goal, we present a new cooperative control algorithm for a camera-based observational system. Specifically, we present a receding horizon control scheme in which we model the underlying optimal control problem as a mixed integer linear program. The benefit of this design is that we can coordinate the actions between each camera while simultaneously respecting its kinematics. In addition, we further improve the quality of our solution by coupling our algorithm with a Kalman filter. Through this integration, we not only add a predictive component to our control, but we use the uncertainty estimates provided by the filter to encourage the system to periodically observe any outliers in the observed area. This combined approach allows us to intelligently observe the entire region of interest in an effective and thorough manner.
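The coordination step described above is, at heart, an assignment of cameras to targets that minimizes a total tracking cost. As a toy illustration (not the paper's MILP formulation, and with a made-up cost matrix), a brute-force search over assignments makes the objective concrete; a mixed integer linear programming solver would replace the enumeration at realistic scales:

```python
from itertools import permutations

def assign_cameras(cost):
    """Exhaustively search camera-to-target assignments for the one that
    minimizes total tracking cost; cost[i][j] is the cost of camera i
    observing target j. Exact, but only viable for small instances;
    a MILP solver replaces this enumeration at realistic scales."""
    n = len(cost)
    best_perm, best_cost = None, float("inf")
    for perm in permutations(range(n)):
        c = sum(cost[i][perm[i]] for i in range(n))
        if c < best_cost:
            best_perm, best_cost = perm, c
    return best_perm, best_cost

# Three cameras, three targets; entries are made-up distance-like costs.
cost = [[4.0, 1.0, 3.0],
        [2.0, 0.0, 5.0],
        [3.0, 2.0, 2.0]]
assignment, total = assign_cameras(cost)
```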
Confirming the Lanchestrian linear-logarithmic model of attrition
Hartley, D.S. III.
1990-12-01
This paper is the fourth in a series of reports on the breakthrough research in historical validation of attrition in conflict. Significant defense policy decisions, including weapons acquisition and arms reduction, are based in part on models of conflict. Most of these models are driven by their attrition algorithms, usually forms of the Lanchester square and linear laws. None of these algorithms have been validated. The results of this paper confirm the results of earlier papers, using a large database of historical results. The homogeneous linear-logarithmic Lanchestrian attrition model is validated to the extent possible with current initial and final force size data and is consistent with the Iwo Jima data. A particular differential linear-logarithmic model is described that fits the data very well. A version of Helmbold's victory predicting parameter is also confirmed, with an associated probability function. 37 refs., 73 figs., 68 tabs.
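For context, the classical Lanchester square law that many attrition algorithms build on can be simulated in a few lines. This sketch is purely illustrative of the square law, not of the linear-logarithmic model the paper validates; force sizes and coefficients are invented:

```python
def lanchester_square(x0, y0, a, b, dt=0.01):
    """Forward-Euler integration of the Lanchester square law,
    dx/dt = -b*y, dy/dt = -a*x, until one side is annihilated.
    Returns the final strengths (the loser clamped to zero)."""
    x, y = x0, y0
    while x > 0.0 and y > 0.0:
        x, y = x - b * y * dt, y - a * x * dt
    return max(x, 0.0), max(y, 0.0)

# Equal effectiveness, 100 vs 80: the square-law invariant a*x^2 - b*y^2
# predicts sqrt(100**2 - 80**2) = 60 survivors on the stronger side.
xf, yf = lanchester_square(100.0, 80.0, 0.01, 0.01)
```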
Linear Fresnel | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
DOE funds solar research and development (R&D) in linear Fresnel systems as one of four CSP technologies aiming to meet the goals of the SunShot Initiative. Linear Fresnel systems, which are a type of linear concentrator, are active in Germany, Spain, Australia, India, and the United States. The SunShot Initiative funds R&D on linear Fresnel systems and related aspects within the industry, national laboratories, and universities.
Energy Science and Technology Software Center
2006-11-17
Software that simulates and inverts electromagnetic field data for subsurface electrical properties (electrical conductivity) of geological media. The software treats data produced by a time-harmonic source field excitation arising from the following antenna geometries: loops and grounded bipoles, as well as point electric and magnetic dipoles. The inversion process is carried out using a non-linear conjugate gradient optimization scheme, which minimizes the misfit between field data and model data using a least squares criterion. The software is an upgrade from the code NLCGCS_MP ver 1.0. The upgrade includes the following components: incorporation of new 1D field sourcing routines to more accurately simulate the 3D electromagnetic field for arbitrary geologic media, and treatment of generalized finite-length transmitting antenna geometry (antennas with vertical and horizontal component directions). In addition, the software has been upgraded to treat transverse anisotropy in electrical conductivity.
Van Atta, C.M.; Beringer, R.; Smith, L.
1959-01-01
A linear accelerator of heavy ions is described. The basic contributions of the invention consist of a method and apparatus for obtaining high energy particles of an element with an increased charge-to-mass ratio. The method comprises the steps of ionizing the atoms of an element, accelerating the resultant ions to an energy substantially equal to one Mev per nucleon, stripping orbital electrons from the accelerated ions by passing the ions through a curtain of elemental vapor disposed transversely of the path of the ions to provide a second charge-to-mass ratio, and finally accelerating the resultant stripped ions to a final energy of at least ten Mev per nucleon.
Graphical representation of parallel algorithmic processes. Master's thesis
Williams, E.M.
1990-12-01
Algorithm animation is a visualization method used to enhance understanding of the functioning of an algorithm or program. Visualization is used for many purposes, including education, algorithm research, performance analysis, and program debugging. This research applies algorithm animation techniques to programs developed for parallel architectures, with a specific focus on the Intel iPSC/2 hypercube. While both P-time and NP-time algorithms can potentially benefit from using visualization techniques, the set of NP-complete problems provides fertile ground for developing parallel applications, since the combinatoric nature of the problems makes finding the optimum solution impractical. The primary goals for this visualization system are: data should be displayed as it is generated; the interface to the target program should be transparent, allowing the animation of existing programs; and flexibility - the system should be able to animate any algorithm. The resulting system incorporates and extends two AFIT products: the AFIT Algorithm Animation Research Facility (AAARF) and the Parallel Resource Analysis Software Environment (PRASE). AAARF is an algorithm animation system developed primarily for sequential programs, but is easily adaptable for use with parallel programs. PRASE is an instrumentation package that extracts system performance data from programs on the Intel hypercubes. Since performance data is an essential part of analyzing any parallel program, views of the performance data are provided as an elementary part of the system. Custom software is designed to interface these systems and to display the program data. The program chosen as the example for this study is a member of the NP-complete problem set; it is a parallel implementation of a general.
Kliman, G.B.; Brynsvold, G.V.; Jahns, T.M.
1989-08-22
A winding and method of winding for a submersible linear pump for pumping liquid sodium are disclosed. The pump includes a stator having a central cylindrical duct preferably vertically aligned. The central vertical duct is surrounded by a system of coils in slots. These slots are interleaved with magnetic flux conducting elements, these magnetic flux conducting elements forming a continuous magnetic field conduction path along the stator. The central duct has placed therein a cylindrical magnetic conducting core, this core having a cylindrical diameter less than the diameter of the cylindrical duct. The core once placed to the duct defines a cylindrical interstitial pumping volume of the pump. This cylindrical interstitial pumping volume preferably defines an inlet at the bottom of the pump, and an outlet at the top of the pump. Pump operation occurs by static windings in the outer stator sequentially conveying toroidal fields from the pump inlet at the bottom of the pump to the pump outlet at the top of the pump. The winding apparatus and method of winding disclosed uses multiple slots per pole per phase with parallel winding legs on each phase equal to or less than the number of slots per pole per phase. The slot sequence per pole per phase is chosen to equalize the variations in flux density of the pump sodium as it passes into the pump at the pump inlet with little or no flux and acquires magnetic flux in passage through the pump to the pump outlet. 4 figs.
Kliman, Gerald B.; Brynsvold, Glen V.; Jahns, Thomas M.
1989-01-01
A winding and method of winding for a submersible linear pump for pumping liquid sodium is disclosed. The pump includes a stator having a central cylindrical duct preferably vertically aligned. The central vertical duct is surrounded by a system of coils in slots. These slots are interleaved with magnetic flux conducting elements, these magnetic flux conducting elements forming a continuous magnetic field conduction path along the stator. The central duct has placed therein a cylindrical magnetic conducting core, this core having a cylindrical diameter less than the diameter of the cylindrical duct. The core once placed to the duct defines a cylindrical interstitial pumping volume of the pump. This cylindrical interstitial pumping volume preferably defines an inlet at the bottom of the pump, and an outlet at the top of the pump. Pump operation occurs by static windings in the outer stator sequentially conveying toroidal fields from the pump inlet at the bottom of the pump to the pump outlet at the top of the pump. The winding apparatus and method of winding disclosed uses multiple slots per pole per phase with parallel winding legs on each phase equal to or less than the number of slots per pole per phase. The slot sequence per pole per phase is chosen to equalize the variations in flux density of the pump sodium as it passes into the pump at the pump inlet with little or no flux and acquires magnetic flux in passage through the pump to the pump outlet.
Berkeley Proton Linear Accelerator
Alvarez, L. W.; Bradner, H.; Franck, J.; Gordon, H.; Gow, J. D.; Marshall, L. C.; Oppenheimer, F. F.; Panofsky, W. K. H.; Richman, C.; Woodyard, J. R.
1953-10-13
A linear accelerator, which increases the energy of protons from a 4 Mev Van de Graaff injector, to a final energy of 31.5 Mev, has been constructed. The accelerator consists of a cavity 40 feet long and 39 inches in diameter, excited at resonance in a longitudinal electric mode with a radio-frequency power of about 2.2 x 10{sup 6} watts peak at 202.5 mc. Acceleration is made possible by the introduction of 46 axial "drift tubes" into the cavity, which is designed such that the particles traverse the distance between the centers of successive tubes in one cycle of the r.f. power. The protons are longitudinally stable as in the synchrotron, and are stabilized transversely by the action of converging fields produced by focusing grids. The electrical cavity is constructed like an inverted airplane fuselage and is supported in a vacuum tank. Power is supplied by 9 high powered oscillators fed from a pulse generator of the artificial transmission line type.
Meisner, John W.; Moore, Robert M.; Bienvenue, Louis L.
1985-03-19
Electromagnetic linear induction pump for liquid metal which includes a unitary pump duct. The duct comprises two substantially flat parallel spaced-apart wall members, one being located above the other, and two parallel opposing side members interconnecting the wall members. Located within the duct are a plurality of web members interconnecting the wall members and extending parallel to the side members, whereby the wall members, side members, and web members define a plurality of fluid passageways, each of the fluid passageways having substantially the same cross-sectional flow area. Attached to an outer surface of each side member is an electrically conductive end bar for the passage of an induced current therethrough. A multi-phase electrical stator is located adjacent each of the wall members. The duct, stators, and end bars are enclosed in a housing which is provided with an inlet and outlet in fluid communication with opposite ends of the fluid passageways in the pump duct. In accordance with a preferred embodiment, the inlet and outlet include a transition means which provides for a transition from a round cross-sectional flow path to a substantially rectangular cross-sectional flow path defined by the pump duct.
Parallelism of the SANDstorm hash algorithm.
Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree
2009-09-01
Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed, and they are not able to take advantage of the current trend of being run on multi-core platforms. This speed limitation in turn limits their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and to be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrate a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
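The SANDstorm design itself is not reproduced here, but the general mechanism that makes a hash parallelizable, splitting the message into independently hashed blocks whose digests are then combined, can be sketched with standard primitives. The leaf/root domain separation, block size, and use of SHA-256 are illustrative choices, not SANDstorm's actual construction:

```python
import hashlib
from concurrent.futures import ThreadPoolExecutor

BLOCK = 64  # bytes per leaf; an arbitrary illustrative choice

def _leaf(chunk: bytes) -> bytes:
    # Leaves are independent of one another, so they parallelize freely.
    return hashlib.sha256(b"leaf" + chunk).digest()

def tree_hash(message: bytes) -> bytes:
    """Two-level tree hash: hash fixed-size blocks in parallel, then hash
    the concatenated leaf digests serially (the combine step is short)."""
    chunks = [message[i:i + BLOCK] for i in range(0, max(len(message), 1), BLOCK)]
    with ThreadPoolExecutor() as pool:
        leaves = pool.map(_leaf, chunks)
    return hashlib.sha256(b"root" + b"".join(leaves)).digest()

digest = tree_hash(b"a parallelizable mode of operation")
```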
Linear Collider Physics Resource Book Snowmass 2001
Ronan , M.T.
2001-06-01
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e{sup +}e{sup -} linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e{sup +}e{sup -} linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e{sup +}e{sup -} linear collider; in any scenario that is now discussed, physics will benefit from the new information that e{sup +}e{sup -} experiments can provide. This last point merits further emphasis. If a new accelerator could be designed and
Brambley, Michael R.; Katipamula, Srinivas
2006-10-06
Pacific Northwest National Laboratory (PNNL) is assisting the U.S. Department of Energy (DOE) Distributed Energy (DE) Program by developing advanced control algorithms that would lead to development of tools to enhance performance and reliability, and reduce emissions of distributed energy technologies, including combined heat and power technologies. This report documents phase 2 of the program, providing a detailed functional specification for algorithms for performance monitoring and commissioning verification, scheduled for development in FY 2006. The report identifies the systems for which algorithms will be developed, the specific functions of each algorithm, metrics which the algorithms will output, and inputs required by each algorithm.
The TESLA superconducting linear collider
the TESLA Collaboration
1997-03-01
This paper summarizes the present status of the studies for a superconducting Linear Collider (TESLA). {copyright} {ital 1997 American Institute of Physics.}
Amesos2 and Belos: Direct and Iterative Solvers for Large Sparse Linear Systems
Bavier, Eric; Hoemmen, Mark; Rajamanickam, Sivasankaran; Thornquist, Heidi
2012-01-01
Solvers for large sparse linear systems come in two categories: direct and iterative. Amesos2, a package in the Trilinos software project, provides direct methods, and Belos, another Trilinos package, provides iterative methods. Amesos2 offers a common interface to many different sparse matrix factorization codes, and can handle any implementation of sparse matrices and vectors, via an easy-to-extend C++ traits interface. It can also factor matrices whose entries have arbitrary “Scalar” type, enabling extended-precision and mixed-precision algorithms. Belos includes many different iterative methods for solving large sparse linear systems and least-squares problems. Unlike competing iterative solver libraries, Belos completely decouples the algorithms from the implementations of the underlying linear algebra objects. This lets Belos exploit the latest hardware without changes to the code. Belos favors algorithms that solve higher-level problems, such as multiple simultaneous linear systems and sequences of related linear systems, faster than standard algorithms. The package also supports extended-precision and mixed-precision algorithms. Together, Amesos2 and Belos form a complete suite of sparse linear solvers.
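The design point that distinguishes these packages, a common solver interface behind which direct and iterative methods are interchangeable, can be mimicked in miniature. The two solver classes below are hypothetical stand-ins (dense Gaussian elimination and Jacobi iteration), not Amesos2 or Belos code:

```python
class DirectSolver:
    """Stand-in direct method: Gaussian elimination with partial pivoting."""
    def solve(self, A, b):
        n = len(A)
        M = [row[:] + [b[i]] for i, row in enumerate(A)]  # augmented matrix
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in range(n - 1, -1, -1):
            x[r] = (M[r][n] - sum(M[r][c] * x[c] for c in range(r + 1, n))) / M[r][r]
        return x

class JacobiSolver:
    """Stand-in iterative method: Jacobi sweeps (needs diagonal dominance)."""
    def __init__(self, iters=200):
        self.iters = iters
    def solve(self, A, b):
        n = len(A)
        x = [0.0] * n
        for _ in range(self.iters):
            x = [(b[i] - sum(A[i][j] * x[j] for j in range(n) if j != i)) / A[i][i]
                 for i in range(n)]
        return x

# Caller code is identical for either backend, as with Amesos2/Belos.
A = [[4.0, 1.0], [1.0, 3.0]]
b = [9.0, 5.0]
solutions = [s.solve(A, b) for s in (DirectSolver(), JacobiSolver())]
```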
2d PDE Linear Symmetric Matrix Solver
Energy Science and Technology Software Center
1983-10-01
ICCG2 (Incomplete Cholesky factorized Conjugate Gradient algorithm for 2d symmetric problems) was developed to solve a linear symmetric matrix system arising from a 9-point discretization of two-dimensional elliptic and parabolic partial differential equations found in plasma physics applications, such as resistive MHD, spatial diffusive transport, and phase space transport (Fokker-Planck equation) problems. These problems share the common feature of being stiff and requiring implicit solution techniques. When these parabolic or elliptic PDEs are discretized with finite-difference or finite-element methods, the resulting matrix system is frequently of block-tridiagonal form. To use ICCG2, the discretization of the two-dimensional partial differential equation and its boundary conditions must result in a block-tridiagonal supermatrix composed of elementary tridiagonal matrices. The incomplete Cholesky conjugate gradient algorithm is used to solve the linear symmetric matrix equation. Loops are arranged to vectorize on the Cray-1 with the CFT compiler, wherever possible. Recursive loops, which cannot be vectorized, are written for optimum scalar speed. For matrices lacking symmetry, ILUCG2 should be used. Similar methods in three dimensions are available in ICCG3 and ILUCG3. A general source containing extensions and macros, which must be processed by a pre-compiler to obtain the standard FORTRAN source, is provided along with the standard FORTRAN source because it is believed to be more readable. The pre-compiler is not included, but pre-compilation may be performed by a text editor as described in the UCRL-88746 Preprint.
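The conjugate gradient iteration at the core of ICCG2 is compact enough to sketch. The version below is unpreconditioned (no incomplete Cholesky factorization) and uses a dense toy matrix rather than the block-tridiagonal systems the package targets:

```python
def conjugate_gradient(A, b, tol=1e-12, max_iter=100):
    """Unpreconditioned conjugate gradient for a symmetric positive
    definite system A x = b (dense lists stand in for the sparse case)."""
    n = len(b)
    x = [0.0] * n
    r = b[:]            # residual b - A*x with x = 0
    p = r[:]            # initial search direction
    rs = sum(v * v for v in r)
    for _ in range(max_iter):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(v * v for v in r)
        if rs_new < tol:
            break
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

x = conjugate_gradient([[4.0, 1.0], [1.0, 3.0]], [9.0, 5.0])
```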
Statistics of voltage drop in distribution circuits: a dynamic programming approach
Turitsyn, Konstantin S
2010-01-01
We analyze a power distribution line with high penetration of distributed generation and strong variations of power consumption and generation levels. In the presence of uncertainty the statistical description of the system is required to assess the risks of power outages. In order to find the probability of exceeding the constraints for voltage levels we introduce the probability distribution of maximal voltage drop and propose an algorithm for finding this distribution. The algorithm is based on the assumption of random but statistically independent distribution of loads on buses. Linear complexity in the number of buses is achieved through the dynamic programming technique. We illustrate the performance of the algorithm by analyzing a simple 4-bus system with high variations of load levels.
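The dynamic programming idea, propagating a discrete probability distribution bus by bus with one convolution per bus, can be illustrated on a drastically simplified radial feeder. The model below (unit segment resistances, independent nonnegative discrete loads, drop measured at the end of the line) is an assumption for illustration, not the paper's algorithm:

```python
def drop_distribution(load_pmfs, r=1.0):
    """Probability distribution of the end-of-line voltage drop on a radial
    feeder. load_pmfs[j-1] maps the (independent, nonnegative) discrete load
    at bus j to its probability; the load at bus j flows through j upstream
    segments of resistance r, so the total drop is r * sum_j j * L_j.
    One convolution per bus gives complexity linear in the number of buses."""
    dist = {0.0: 1.0}
    for j, pmf in enumerate(load_pmfs, start=1):
        nxt = {}
        for d, pd in dist.items():
            for load, pl in pmf.items():
                v = d + r * j * load
                nxt[v] = nxt.get(v, 0.0) + pd * pl
        dist = nxt
    return dist

# Two buses, each drawing 0 or 1 unit of load with probability 1/2.
pmf = {0.0: 0.5, 1.0: 0.5}
d = drop_distribution([pmf, pmf])
```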
Energy Science and Technology Software Center
2013-07-29
The OpenEIS Algorithm package seeks to provide a low-risk path for building owners, service providers and managers to explore analytical methods for improving building control and operational efficiency. Users of this software can analyze building data, and learn how commercial implementations would provide long-term value. The code also serves as a reference implementation for developers who wish to adapt the algorithms for use in commercial tools or service offerings.
A robust return-map algorithm for general multisurface plasticity
Adhikary, Deepak P.; Jayasundara, Chandana T.; Podgorney, Robert K.; Wilkins, Andy H.
2016-06-16
Three new contributions to the field of multisurface plasticity are presented for general situations with an arbitrary number of nonlinear yield surfaces with hardening or softening. A method for handling linearly dependent flow directions is described. A residual that can be used in a line search is defined. An algorithm that has been implemented and comprehensively tested is discussed in detail. Examples are presented to illustrate the computational cost of various components of the algorithm. The overall result is that a single Newton-Raphson iteration of the algorithm costs between 1.5 and 2 times that of an elastic calculation. Examples also illustrate the successful convergence of the algorithm in complicated situations. For example, without using the new contributions presented here, the algorithm fails to converge for approximately 50% of the trial stresses for a common geomechanical model of sedimentary rocks, while the current algorithm results in complete success. Since it involves no approximations, the algorithm is used to quantify the accuracy of an efficient, pragmatic, but approximate, algorithm used for sedimentary-rock plasticity in a commercial software package. Furthermore, the main weakness of the algorithm is identified as the difficulty of correctly choosing the set of initially active constraints in the general setting.
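The elastic-predictor/plastic-corrector (return map) structure underlying such algorithms is easiest to see in the textbook one-dimensional, single-surface case with linear isotropic hardening. The sketch below is that standard special case, not the paper's general multisurface algorithm, and all material parameters are invented:

```python
def return_map_1d(strain_increments, E=200.0, sigma_y=1.0, H=10.0):
    """Elastic predictor / plastic corrector (return map) for 1D plasticity
    with linear isotropic hardening; parameters are invented for illustration."""
    eps = eps_p = alpha = 0.0
    stresses = []
    for deps in strain_increments:
        eps += deps
        trial = E * (eps - eps_p)                # elastic trial stress
        f = abs(trial) - (sigma_y + H * alpha)   # yield function at trial state
        if f <= 0.0:
            sigma = trial                        # step is elastic
        else:
            dgamma = f / (E + H)                 # return to the yield surface
            sign = 1.0 if trial > 0.0 else -1.0
            sigma = trial - E * dgamma * sign
            eps_p += dgamma * sign
            alpha += dgamma
        stresses.append(sigma)
    return stresses

# Monotonic loading: elastic slope E up to yield, then tangent E*H/(E+H).
stresses = return_map_1d([0.001] * 20)
```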
Nonlinear Global Optimization Using Curdling Algorithm
Energy Science and Technology Software Center
1996-03-01
An algorithm for performing curdling optimization, a derivative-free, grid-refinement approach to nonlinear optimization, was developed and implemented in software. This approach overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to four dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass. It is an inherently parallel algorithm. Constraints are handled as being initially fuzzy, but become tighter with each iteration.
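The grid-refinement idea can be sketched in one dimension: evaluate the function on a coarse grid, then repeatedly refine around the best cell. This is a much-reduced caricature (the actual method is multi-dimensional, returns extremal regions, and handles fuzzy constraints), with an arbitrary test function:

```python
def grid_refine_minimize(f, lo, hi, levels=10, grid=9):
    """Derivative-free minimization by repeated grid refinement: evaluate f
    on a uniform grid, then shrink the search window around the best point.
    A one-dimensional caricature of the grid-refinement idea."""
    a, b = lo, hi
    x_best = lo
    for _ in range(levels):
        step = (b - a) / (grid - 1)
        xs = [a + i * step for i in range(grid)]
        x_best = min(xs, key=f)
        # Refine: keep one grid cell on either side of the best point.
        a, b = max(lo, x_best - step), min(hi, x_best + step)
    return x_best

x_min = grid_refine_minimize(lambda t: (t - 1.234) ** 2, -10.0, 10.0)
```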
Linear Accelerator | Advanced Photon Source
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Producing brilliant x-ray beams at the APS begins with electrons emitted from a cathode heated to 1100 °C. The electrons are accelerated by high-voltage...
Acoustic emission linear pulse holography
Collins, H.D.; Busse, L.J.; Lemon, D.K.
1983-10-25
This device relates to the concept of and means for performing Acoustic Emission Linear Pulse Holography, which combines the advantages of linear holographic imaging and Acoustic Emission into a single non-destructive inspection system. This unique system produces a chronological, linear holographic image of a flaw by utilizing the acoustic energy emitted during crack growth. The innovation is the concept of utilizing the crack-generated acoustic emission energy to generate a chronological series of images of a growing crack by applying linear, pulse holographic processing to the acoustic emission data. The process is implemented by placing on a structure an array of piezoelectric sensors (typically 16 or 32 of them) near the defect location. A reference sensor is placed between the defect and the array.
Automating linear accelerator quality assurance
Eckhause, Tobias; Thorwarth, Ryan; Moran, Jean M.; Al-Hallaq, Hania; Farrey, Karl; Ritter, Timothy; DeMarco, John; Pawlicki, Todd; Kim, Gwe-Ya; Popple, Richard; Sharma, Vijeshwar; Park, SungYong; Perez, Mario; Booth, Jeremy T.
2015-10-15
Purpose: The purpose of this study was twofold. One purpose was to develop an automated, streamlined quality assurance (QA) program for use by multiple centers. The second purpose was to evaluate machine performance over time for multiple centers using linear accelerator (Linac) log files and electronic portal images. The authors sought to evaluate variations in Linac performance to establish a reference for other centers. Methods: The authors developed analytical software tools for a QA program using both log files and electronic portal imaging device (EPID) measurements. The first tool is a general analysis tool which can read and visually represent data in the log file. This tool, which can be used to automatically analyze patient treatment or QA log files, examines the files for Linac deviations which exceed thresholds. The second set of tools consists of a test suite of QA fields, a standard phantom, and software to collect information from the log files on deviations from the expected values. The test suite was designed to focus on the mechanical tests of the Linac, including jaw, MLC, and collimator positions during static, IMRT, and volumetric modulated arc therapy delivery. A consortium of eight institutions delivered the test suite at monthly or weekly intervals on each Linac using a standard phantom. The behavior of various components was analyzed for eight TrueBeam Linacs. Results: For the EPID and trajectory log file analysis, all observed deviations which exceeded established thresholds for Linac behavior resulted in a beam hold-off. In the absence of an interlock-triggering event, the maximum observed log file deviations between the expected and actual component positions (such as MLC leaves) varied from less than 1% to 26% of published tolerance thresholds. The maximum and standard deviations of the variations due to gantry sag, collimator angle, jaw position, and MLC positions are presented. Gantry sag among Linacs was 0.336 ± 0.072 mm. The
Energy Science and Technology Software Center
1998-07-01
GenOpt is a generic optimization program for nonlinear, constrained optimization. For evaluating the objective function, any simulation program that communicates over text files can be coupled to GenOpt without code modification. No analytic properties of the objective function are used by GenOpt. Optimization algorithms and numerical methods can be implemented in a library and shared among users. GenOpt offers an interface between the optimization algorithm and its kernel to make the implementation of new algorithms fast and easy. Different algorithms of constrained and unconstrained minimization can be added to a library. Algorithms for approximating derivatives and performing line search will be implemented. The objective function is evaluated as a black-box function by an external simulation program. The kernel of GenOpt deals with data I/O, result storage and reporting, the interface to the external simulation program, and error handling. An abstract optimization class offers methods to interface the GenOpt kernel and the optimization algorithm library.
MineSeis - A MATLAB GUI Program to
Office of Scientific and Technical Information (OSTI)
MineSeis - A MATLAB GUI Program to Calculate Synthetic Seismograms from a Linear, ... The program was written with the MATLAB Graphical User Interface (GUI) technique ...
Energy Science and Technology Software Center
2005-03-30
The Robotic Follow Algorithm enables any robotic vehicle to follow a moving target while reactively choosing a route around nearby obstacles. The robotic follow behavior can be used with different camera systems and can be used with thermal or visual tracking, as well as other tracking methods such as radio frequency tags.
Integrated Network Decompositions and Dynamic Programming for Graph Optimization (INDDGO)
Energy Science and Technology Software Center
2012-05-31
The INDDGO software package offers a set of tools for finding exact solutions to graph optimization problems via tree decompositions and dynamic programming algorithms. Currently the framework offers serial and parallel (distributed memory) algorithms for finding tree decompositions and solving the maximum weighted independent set problem. The parallel dynamic programming algorithm is implemented on top of the MADNESS task-based runtime.
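For the maximum weighted independent set problem, the special case where the graph is itself a tree admits a two-state dynamic program that conveys the flavor of DP over tree decompositions; this sketch is not the INDDGO implementation:

```python
import sys

def mwis_tree(adj, weight, root=0):
    """Maximum-weight independent set on a tree by dynamic programming:
    each node carries (best weight with node excluded, best with it included)."""
    sys.setrecursionlimit(10000)
    def dp(u, parent):
        excl, incl = 0.0, weight[u]
        for v in adj[u]:
            if v == parent:
                continue
            e, i = dp(v, u)
            excl += max(e, i)   # child subtree is unconstrained
            incl += e           # child must be excluded if u is taken
        return excl, incl
    return max(dp(root, -1))

# Path 0-1-2 with weights 1, 3, 1: the best set is just the middle node.
best = mwis_tree({0: [1], 1: [0, 2], 2: [1]}, {0: 1.0, 1: 3.0, 2: 1.0})
```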
2d PDE Linear Asymmetric Matrix Solver
Energy Science and Technology Software Center
1983-10-01
ILUCG2 (Incomplete LU factorized Conjugate Gradient algorithm for 2d problems) was developed to solve a linear asymmetric matrix system arising from a 9-point discretization of two-dimensional elliptic and parabolic partial differential equations found in plasma physics applications, such as plasma diffusion, equilibria, and phase space transport (Fokker-Planck equation) problems. These equations share the common feature of being stiff and requiring implicit solution techniques. When these parabolic or elliptic PDEs are discretized with finite-difference or finite-element methods, the resulting matrix system is frequently of block-tridiagonal form. To use ILUCG2, the discretization of the two-dimensional partial differential equation and its boundary conditions must result in a block-tridiagonal supermatrix composed of elementary tridiagonal matrices. A generalization of the incomplete Cholesky conjugate gradient algorithm is used to solve the matrix equation. Loops are arranged to vectorize on the Cray-1 with the CFT compiler, wherever possible. Recursive loops, which cannot be vectorized, are written for optimum scalar speed. For problems having a symmetric matrix, ICCG2 should be used since it runs up to four times faster and uses approximately 30% less storage. Similar methods in three dimensions are available in ICCG3 and ILUCG3. A general source, containing extensions and macros, which must be processed by a pre-compiler to obtain the standard FORTRAN source, is provided along with the standard FORTRAN source because it is believed to be more readable. The pre-compiler is not included, but pre-compilation may be performed by a text editor as described in the UCRL-88746 Preprint.
Henry, J.J.
1961-09-01
A linear count-rate meter is designed to provide a highly linear output while receiving counting rates from one cycle per second to 100,000 cycles per second. Input pulses enter a linear discriminator and then are fed to a trigger circuit which produces positive pulses of uniform width and amplitude. The trigger circuit is connected to a one-shot multivibrator. The multivibrator output pulses have a selected width. Feedback means are provided for preventing transistor saturation in the multivibrator which improves the rise and decay times of the output pulses. The multivibrator is connected to a diode-switched, constant current metering circuit. A selected constant current is switched to an averaging circuit for each pulse received, and for a time determined by the received pulse width. The average output meter current is proportional to the product of the counting rate, the constant current, and the multivibrator output pulse width.
General Purpose Unfolding Program with Linear and Nonlinear Regularizations.
Energy Science and Technology Software Center
1987-05-07
Version 00 The interpretation of several physical measurements requires the unfolding or deconvolution of the solution of Fredholm integral equations of the first kind. Examples include neutron spectroscopy with activation detectors, moderating spheres, or proton recoil measurements. LOUHI82 is designed to be applicable to a large number of physical problems and to be extended to incorporate other unfolding methods.
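A generic linear regularization of a discretized Fredholm equation of the first kind can be sketched as follows. This is plain Tikhonov regularization on synthetic data, an illustration of the problem class only, not LOUHI82's actual method set.

```python
import numpy as np

def tikhonov_unfold(K, y, lam):
    """Regularized solution of the discretized Fredholm equation y = K x:
    minimizes ||K x - y||^2 + lam * ||x||^2 (standard linear Tikhonov
    regularization)."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

# Illustrative smoothing kernel (a stand-in for a detector response matrix)
# and a smooth "true" spectrum to recover from its blurred measurement.
n = 40
i = np.arange(n)
K = np.exp(-0.5 * ((i[:, None] - i[None, :]) / 3.0) ** 2)
x_true = np.exp(-0.5 * ((i - 20) / 5.0) ** 2)
y = K @ x_true
x_hat = tikhonov_unfold(K, y, lam=1e-6)
```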
Linear Corrugating - Final Technical Report
Lloyd Chapman
2000-05-23
Linear Corrugating is a process for the manufacture of corrugated containers in which the flutes of the corrugated medium are oriented in the Machine Direction (MD) of the several layers of paper used. Conversely, in the conventional corrugating process the flutes are oriented at right angles to the MD, in the Cross Machine Direction (CD). Paper is stronger in MD than in CD. Therefore, boxes made using the Linear Corrugating process are significantly stronger, in the prime strength criterion, the Box Compression Test (BCT), than boxes made conventionally. This means that, using Linear Corrugating, boxes can be manufactured with BCT equaling that of conventional boxes while containing 30% less fiber. The corrugated container industry is a large part of the U.S. economy, producing over 40 million tons annually. For such a large industry, the potential savings of Linear Corrugating are enormous. The grant for this project covered three phases in the development of the Linear Corrugating process: (1) Production and evaluation of corrugated boxes on commercial equipment to verify that boxes so manufactured would have enhanced BCT as proposed in the application; (2) Production and evaluation of corrugated boxes made on laboratory equipment using combined board from (1) above but having dual manufactures joints (glue joints). This box manufacturing method (Dual Joint) is proposed to overcome box perimeter limitations of the Linear Corrugating process; (3) Design, construction, operation, and evaluation of an engineering prototype machine to form flutes in corrugating medium in the MD of the paper. This operation is the central requirement of the Linear Corrugating process. Phases (1) and (2) were successfully completed, showing predicted BCT increases from the Linear Corrugated boxes and significant strength improvement in the Dual Joint boxes. The Former was constructed and operated successfully using kraft linerboard as the forming medium. It was found that tensile strength and stretch
Linear Fresnel Power Plant Illustration
With this concentrating solar power (CSP) graphic, flat or slightly curved mirrors mounted on trackers on the ground are configured to reflect sunlight onto a receiver tube fixed in space above these mirrors. A small parabolic mirror is sometimes added atop the receiver to further focus the sunlight. Linear CSP collectors capture the sun's energy with large mirrors that reflect and focus the sunlight onto a linear receiver tube. The receiver contains a fluid that is heated by the sunlight and then used to create superheated steam that spins a turbine that drives a generator to produce electricity.
Linear electric field mass spectrometry
McComas, D.J.; Nordholt, J.E.
1992-12-01
A mass spectrometer and methods for mass spectrometry are described. The apparatus is compact and of low weight and has a low power requirement, making it suitable for use on a space satellite and as a portable detector for the presence of substances. High mass resolution measurements are made by timing ions moving through a gridless cylindrically symmetric linear electric field. 8 figs.
Linear electric field mass spectrometry
McComas, David J.; Nordholt, Jane E.
1992-01-01
A mass spectrometer and methods for mass spectrometry. The apparatus is compact and of low weight and has a low power requirement, making it suitable for use on a space satellite and as a portable detector for the presence of substances. High mass resolution measurements are made by timing ions moving through a gridless cylindrically symmetric linear electric field.
Large scale tracking algorithms.
Hansen, Ross L.; Love, Joshua Alan; Melgaard, David Kennett; Karelitz, David B.; Pitts, Todd Alan; Zollweg, Joshua David; Anderson, Dylan Z.; Nandy, Prabal; Whitlow, Gary L.; Bender, Daniel A.; Byrne, Raymond Harry
2015-01-01
Low signal-to-noise data processing algorithms for improved detection, tracking, discrimination and situational threat assessment are a key research challenge. As sensor technologies progress, the number of pixels will increase significantly. This will result in increased resolution, which could improve object discrimination, but unfortunately, will also result in a significant increase in the number of potential targets to track. Many tracking techniques, like multi-hypothesis trackers, suffer from a combinatorial explosion as the number of potential targets increases. As the resolution increases, the phenomenology applied towards detection algorithms also changes. For low resolution sensors, "blob" tracking is the norm. For higher resolution data, additional information may be employed in the detection and classification steps. The most challenging scenarios are those where the targets cannot be fully resolved, yet must be tracked and distinguished from neighboring closely spaced objects. Tracking vehicles in an urban environment is an example of such a challenging scenario. This report evaluates several potential tracking algorithms for large-scale tracking in an urban environment.
International Linear Collider Technical Design Report - Volume 2: Physics
Office of Scientific and Technical Information (OSTI)
A new augmentation based algorithm for extracting maximal chordal subgraphs
Bhowmick, Sanjukta; Chen, Tzu-Yi; Halappanavar, Mahantesh
2014-10-18
A graph is chordal if every cycle of length greater than three contains an edge between non-adjacent vertices. Chordal graphs are of interest both theoretically, since they admit polynomial time solutions to a range of NP-hard graph problems, and practically, since they arise in many applications including sparse linear algebra, computer vision, and computational biology. A maximal chordal subgraph is a chordal subgraph that is not a proper subgraph of any other chordal subgraph. Existing algorithms for computing maximal chordal subgraphs depend on dynamically ordering the vertices, which is an inherently sequential process and therefore limits the algorithms' parallelizability. In our paper we explore techniques to develop a scalable parallel algorithm for extracting a maximal chordal subgraph. We demonstrate that an earlier attempt at developing a parallel algorithm may induce a non-optimal vertex ordering and is therefore not guaranteed to terminate with a maximal chordal subgraph. We then give a new algorithm that first computes and then repeatedly augments a spanning chordal subgraph. After proving that the algorithm terminates with a maximal chordal subgraph, we then demonstrate that this algorithm is more amenable to parallelization and that the parallel version also terminates with a maximal chordal subgraph. That said, the complexity of the new algorithm is higher than that of the previous parallel algorithm, although the earlier algorithm computes a chordal subgraph which is not guaranteed to be maximal. Finally, we experimented with our augmentation-based algorithm on both synthetic and real-world graphs. We provide scalability results and also explore the effect of different choices for the initial spanning chordal subgraph on both the running time and on the number of edges in the maximal chordal subgraph.
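The augmentation scheme can be sketched serially: start from a spanning forest (trivially chordal) and keep sweeping the remaining edges, adding any edge whose insertion preserves chordality. The chordality test below uses maximum cardinality search. This is a naive serial sketch under those assumptions, not the paper's parallel algorithm.

```python
def is_chordal(adj, nodes):
    """Chordality test via maximum cardinality search (MCS): the reverse of
    an MCS order is a perfect elimination ordering iff the graph is chordal
    (Tarjan & Yannakakis)."""
    weight = {v: 0 for v in nodes}
    order, seen = [], set()
    while len(order) < len(nodes):
        v = max((u for u in nodes if u not in seen), key=lambda u: weight[u])
        order.append(v)
        seen.add(v)
        for u in adj[v]:
            if u not in seen:
                weight[u] += 1
    pos = {v: i for i, v in enumerate(order)}
    for v in order:
        earlier = [u for u in adj[v] if pos[u] < pos[v]]
        if earlier:
            m = max(earlier, key=lambda u: pos[u])
            if any(u != m and u not in adj[m] for u in earlier):
                return False
    return True

def maximal_chordal_subgraph(nodes, edges):
    """Start from a spanning forest (always chordal), then repeatedly sweep
    the remaining edges, keeping each one whose addition preserves
    chordality, until no further edge can be added; the result is maximal."""
    adj = {v: set() for v in nodes}
    parent = {v: v for v in nodes}
    def find(v):  # union-find root, with path halving
        while parent[v] != v:
            parent[v] = parent[parent[v]]
            v = parent[v]
        return v
    rest = []
    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[ra] = rb
            adj[a].add(b); adj[b].add(a)
        else:
            rest.append((a, b))
    changed = True
    while changed:
        changed = False
        kept_back = []
        for a, b in rest:
            adj[a].add(b); adj[b].add(a)
            if is_chordal(adj, nodes):
                changed = True
            else:
                adj[a].discard(b); adj[b].discard(a)
                kept_back.append((a, b))
        rest = kept_back
    return adj

# A 4-cycle has no chord, so a maximal chordal subgraph must drop one edge.
g = maximal_chordal_subgraph([1, 2, 3, 4], [(1, 2), (2, 3), (3, 4), (4, 1)])
```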
A Numerical Algorithm for the Solution of a Phase-Field Model of Polycrystalline Materials
Dorr, M R; Fattebert, J; Wickett, M E; Belak, J F; Turchi, P A
2008-12-04
We describe an algorithm for the numerical solution of a phase-field model (PFM) of microstructure evolution in polycrystalline materials. The PFM system of equations includes a local order parameter, a quaternion representation of local orientation and a species composition parameter. The algorithm is based on the implicit integration of a semidiscretization of the PFM system using a backward difference formula (BDF) temporal discretization combined with a Newton-Krylov algorithm to solve the nonlinear system at each time step. The BDF algorithm is combined with a coordinate projection method to maintain quaternion unit length, which is related to an important solution invariant. A key element of the Newton-Krylov algorithm is the selection of a preconditioner to accelerate the convergence of the Generalized Minimum Residual algorithm used to solve the Jacobian linear system in each Newton step. Results are presented for the application of the algorithm to 2D and 3D examples.
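The coordinate projection that maintains quaternion unit length can be illustrated with a single implicit (BDF1, i.e. backward Euler) step. Fixed-point iteration stands in for the paper's Newton-Krylov solve, and the right-hand side is an illustrative skew-symmetric drift, not the PFM equations.

```python
import numpy as np

def bdf1_step_with_projection(f, q, dt, iters=50):
    """One backward-Euler (BDF1) step q_new = q + dt * f(q_new), solved by
    simple fixed-point iteration (a stand-in for a Newton-Krylov solve),
    followed by coordinate projection back onto the unit sphere so the
    quaternion keeps unit length, as in the algorithm described above."""
    q_new = q.copy()
    for _ in range(iters):
        q_new = q + dt * f(q_new)
    return q_new / np.linalg.norm(q_new)

# Illustrative right-hand side: a skew-symmetric (rotation-like) drift,
# for which the exact flow would preserve the quaternion norm.
W = np.array([[0, -1, 0, 0], [1, 0, 0, 0], [0, 0, 0, -1], [0, 0, 1, 0]], float)
q0 = np.array([1.0, 0.0, 0.0, 0.0])
q1 = bdf1_step_with_projection(lambda q: W @ q, q0, dt=0.1)
```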
Cast dielectric composite linear accelerator
Sanders, David M.; Sampayan, Stephen; Slenes, Kirk; Stoller, H. M.
2009-11-10
A linear accelerator having cast dielectric composite layers integrally formed with conductor electrodes in a solventless fabrication process, with the cast dielectric composite preferably having a nanoparticle filler in an organic polymer such as a thermosetting resin. By incorporating this cast dielectric composite, the dielectric constant of critical insulating layers of the transmission lines of the accelerator is increased while high dielectric strength is simultaneously maintained.
Segmented rail linear induction motor
Cowan, M. Jr.; Marder, B.M.
1996-09-03
A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces. 6 figs.
Segmented rail linear induction motor
Cowan, Jr., Maynard; Marder, Barry M.
1996-01-01
A segmented rail linear induction motor has a segmented rail consisting of a plurality of nonferrous electrically conductive segments aligned along a guideway. The motor further includes a carriage including at least one pair of opposed coils fastened to the carriage for moving the carriage. A power source applies an electric current to the coils to induce currents in the conductive surfaces to repel the coils from adjacent edges of the conductive surfaces.
Precision linear ramp function generator
Jatko, W. Bruce (Knoxville, TN); McNeilly, David R. (Maryville, TN); Thacker, Louis H. (Knoxville, TN)
1986-01-01
A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
Precision linear ramp function generator
Jatko, W.B.; McNeilly, D.R.; Thacker, L.H.
1984-08-01
A ramp function generator is provided which produces a precise linear ramp function which is repeatable and highly stable. A derivative feedback loop is used to stabilize the output of an integrator in the forward loop and control the ramp rate. The ramp may be started from a selected baseline voltage level and the desired ramp rate is selected by applying an appropriate constant voltage to the input of the integrator.
Improved multiprocessor garbage collection algorithms
Newman, I.A.; Stallard, R.P.; Woodward, M.C.
1983-01-01
Outlines the results of an investigation of existing multiprocessor garbage collection algorithms and introduces two new algorithms which significantly improve some aspects of the performance of their predecessors. The two algorithms arise from different starting assumptions. One considers the case where the algorithm will terminate successfully whatever list structure is being processed and assumes that the extra data space should be minimised. The other seeks a very fast garbage collection time for list structures that do not contain loops. Results of both theoretical and experimental investigations are given to demonstrate the efficacy of the algorithms. 7 references.
Linearity Testing of Photovoltaic Cells
Emery, K.; Winter, S.; Pinegar, S.; Nalley, D.
2006-01-01
Photovoltaic devices are rated in terms of their peak power with respect to a specific spectrum, total irradiance, and temperature. To rate photovoltaic devices, a reference detector is required whose response is linear with total irradiance. This paper describes a procedure to determine the linearity of the short-circuit current (I{sub sc}) versus the total irradiance (E{sub tot}) by illuminating a reference cell with two lamps. A device is linear if the current measured with both lamps illuminating the cell is the same as the sum of the currents with each lamp illuminating the cell. The two-lamp method is insensitive to the light spectra or spatial nonuniformity changing with irradiance. The two-lamp method is rapid, easy to implement, and does not require operator intervention to change the irradiances. The presence of room light only limits the lowest irradiance that can be evaluated. Unlike other methods, the two-lamp method does not allow the current to be corrected for nonlinear effects.
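The two-lamp criterion is a one-line check; the tolerance and currents below are illustrative values, not numbers from the paper.

```python
def is_linear(i_a, i_b, i_both, rel_tol=0.01):
    """Two-lamp linearity criterion: the device is linear at this irradiance
    if the current under both lamps, I(A+B), matches the sum of the
    single-lamp currents I(A) + I(B) within a tolerance (1% here is an
    illustrative choice)."""
    return abs(i_both - (i_a + i_b)) <= rel_tol * abs(i_both)

# Illustrative short-circuit currents in amperes.
linear_cell = is_linear(0.120, 0.080, 0.200)       # additive: linear
superlinear_cell = is_linear(0.120, 0.080, 0.210)  # exceeds the sum: nonlinear
```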
Solar Power Ramp Events Detection Using an Optimized Swinging Door Algorithm
Cui, Mingjian; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang
2015-08-05
Solar power ramp events (SPREs) significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetrations in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) to enhance the state of the art in SPRE detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring in clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA can significantly improve the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.
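The basic swinging door segmentation (the SDA building block only, without OpSDA's dynamic-programming merge of adjacent segments) can be sketched as:

```python
def swinging_door(y, eps):
    """Swinging door segmentation sketch: a segment grows while some single
    line from its start point stays within +/- eps of every sample; when the
    upper and lower "doors" cross, the segment is closed and a new one opens.
    Returns the start index of each piecewise-linear segment."""
    starts = [0]
    s = 0
    lo, hi = float("-inf"), float("inf")
    for i in range(1, len(y)):
        dt = i - s
        lo = max(lo, (y[i] - eps - y[s]) / dt)  # lower bound on feasible slope
        hi = min(hi, (y[i] + eps - y[s]) / dt)  # upper bound on feasible slope
        if lo > hi:                             # doors crossed: close segment
            s = i - 1
            starts.append(s)
            lo = y[i] - eps - y[s]              # re-open doors (dt = 1)
            hi = y[i] + eps - y[s]
    return starts

# A ramp followed by a plateau splits into two piecewise-linear segments.
segments = swinging_door([0, 1, 2, 3, 4, 4, 4, 4], eps=0.1)
```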
Solar Power Ramp Events Detection Using an Optimized Swinging Door Algorithm: Preprint
Cui, Mingjian; Zhang, Jie; Florita, Anthony; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang
2015-08-07
Solar power ramp events (SPREs) significantly influence the integration of solar power on non-clear days and threaten the reliable and economic operation of power systems. Accurately extracting solar power ramps becomes more important with increasing levels of solar power penetrations in power systems. In this paper, we develop an optimized swinging door algorithm (OpSDA) to enhance the state of the art in SPRE detection. First, the swinging door algorithm (SDA) is utilized to segregate measured solar power generation into consecutive segments in a piecewise linear fashion. Then we use a dynamic programming approach to combine adjacent segments into significant ramps when the decision thresholds are met. In addition, the expected SPREs occurring in clear-sky solar power conditions are removed. Measured solar power data from Tucson Electric Power is used to assess the performance of the proposed methodology. OpSDA is compared to two other ramp detection methods: the SDA and the L1-Ramp Detect with Sliding Window (L1-SW) method. The statistical results show the validity and effectiveness of the proposed method. OpSDA can significantly improve the performance of the SDA, and it can perform as well as or better than L1-SW with substantially less computation time.
AN ALGORITHM FOR PARALLEL SN SWEEPS ON UNSTRUCTURED MESHES
S. D. PAUTZ
2000-12-01
We develop a new algorithm for performing parallel S{sub n} sweeps on unstructured meshes. The algorithm uses a low-complexity list ordering heuristic to determine a sweep ordering on any partitioned mesh. For typical problems and with "normal" mesh partitionings we have observed nearly linear speedups on up to 126 processors. This is an important and desirable result, since although analyses of structured meshes indicate that parallel sweeps will not scale with normal partitioning approaches, we do not observe any severe asymptotic degradation in the parallel efficiency with modest ({le}100) levels of parallelism. This work is a fundamental step in the development of parallel S{sub n} methods.
St Aubin, J. Keyvanloo, A.; Fallone, B. G.; Vassiliev, O.
2015-02-15
Purpose: Accurate radiotherapy dose calculation algorithms are essential to any successful radiotherapy program, considering the high level of dose conformity and modulation in many of today's treatment plans. As technology continues to progress, such as is the case with novel MRI-guided radiotherapy systems, the necessity for dose calculation algorithms to accurately predict delivered dose in increasingly challenging scenarios is vital. To this end, a novel deterministic solution has been developed to the first order linear Boltzmann transport equation which accurately calculates x-ray based radiotherapy doses in the presence of magnetic fields. Methods: The deterministic formalism discussed here with the inclusion of magnetic fields is outlined mathematically using a discrete ordinates angular discretization in an attempt to leverage existing deterministic codes. It is compared against the EGSnrc Monte Carlo code, utilizing the emf-macros addition which calculates the effects of electromagnetic fields. This comparison is performed in an inhomogeneous phantom that was designed to present a challenging calculation for deterministic calculations in 0, 0.6, and 3 T magnetic fields oriented parallel and perpendicular to the radiation beam. The accuracy of the formalism discussed here against Monte Carlo was evaluated with a gamma comparison using a standard 2%/2 mm and a more stringent 1%/1 mm criterion for a standard reference 10 x 10 cm{sup 2} field as well as a smaller 2 x 2 cm{sup 2} field. Results: Greater than 99.8% (94.8%) of all points analyzed passed a 2%/2 mm (1%/1 mm) gamma criterion for all magnetic field strengths and orientations investigated. All dosimetric changes resulting from the inclusion of magnetic fields were accurately calculated using the deterministic formalism. However, despite the algorithm's high degree of accuracy, it is noticed that this formalism was not unconditionally stable using a discrete ordinates angular discretization.
Linear Thermite Charge
Battelle Memorial Institute
The Linear Thermite Charge (LTC) is designed to rapidly cut through concrete and steel structural components by using extremely high temperature thermite reactions jetted through a linear nozzle.
CALiPER Snapshot Linear Lamps (TLEDs)
Linear fluorescent lamps, and the "troffers" in which they are often used, are a staple of ambient lighting in offices, classrooms, and other types of commercial spaces. They are energy-efficient, long-lived, and relatively inexpensive. Linear LED lamps, often called TLEDs, are an alternative to linear fluorescent lamps and are mainly used in retrofit situations. Typically drawing about 60% of the power of linear fluorescent
Energy Science and Technology Software Center
2013-07-24
Version 00 Calculation of the decay heat is of great importance for the design of the shielding of discharged fuel, the design and transport of fuel-storage flasks, and the management of the resulting radioactive waste. These are relevant to safety and have large economic and legislative consequences. In the HEATKAU code, a new approach has been proposed to evaluate the decay heat power after a fission burst of a fissile nuclide for short cooling times. This method is based on the numerical solution of coupled linear differential equations that describe the decays and buildups of the minor fission products (MFPs) nuclides. HEATKAU is written entirely in the MATLAB programming environment. The MATLAB data can be stored in a standard, fast and easy-access, platform-independent binary format which is easy to visualize.
Nonferromagnetic linear variable differential transformer
Ellis, James F.; Walstrom, Peter L.
1977-06-14
A nonferromagnetic linear variable differential transformer for accurately measuring mechanical displacements in the presence of high magnetic fields is provided. The device utilizes a movable primary coil inside a fixed secondary coil that consists of two series-opposed windings. Operation is such that the secondary output voltage is maintained in phase (depending on polarity) with the primary voltage. The transducer is well-suited to long cable runs and is useful for measuring small displacements in the presence of high or alternating magnetic fields.
Acoustic emission linear pulse holography
Collins, H. Dale; Busse, Lawrence J.; Lemon, Douglas K.
1985-01-01
Defects in a structure are imaged as they propagate, using their emitted acoustic energy as a monitored source. Short bursts of acoustic energy propagate through the structure to a discrete element receiver array. A reference timing transducer located between the array and the inspection zone initiates a series of time-of-flight measurements. A resulting series of time-of-flight measurements are then treated as aperture data and are transferred to a computer for reconstruction of a synthetic linear holographic image. The images can be displayed and stored as a record of defect growth.
DOE Publishes CALiPER Report on Linear (T8) LED Lamps in Recessed Troffers
The U.S. Department of Energy's CALiPER program has released Report 21.2, which is part of a series of investigations on linear LED lamps. Report 21.2 focuses on the performance of three linear (T8...
Algorithmic crystal chemistry: A cellular automata approach
Krivovichev, S. V.
2012-01-15
Atomic-molecular mechanisms of crystal growth can be modeled based on crystallochemical information using cellular automata (a particular case of finite deterministic automata). In particular, the formation of heteropolyhedral layered complexes in uranyl selenates can be modeled applying a one-dimensional three-colored cellular automaton. The use of the theory of calculations (in particular, the theory of automata) in crystallography allows one to interpret crystal growth as a computational process (the realization of an algorithm or program with a finite number of steps).
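A one-dimensional three-state ("three-colored") cellular automaton is easy to sketch; the additive rule below is illustrative, not the uranyl-selenate growth model.

```python
def step(row, rule):
    """One synchronous update of a one-dimensional three-state cellular
    automaton with nearest-neighbor interaction and periodic boundaries;
    `rule` maps each (left, self, right) triple to a new state."""
    n = len(row)
    return tuple(rule[(row[(i - 1) % n], row[i], row[(i + 1) % n])]
                 for i in range(n))

# A simple additive rule over states {0, 1, 2}: new state = (left + right) mod 3.
rule = {(l, c, r): (l + r) % 3 for l in range(3) for c in range(3) for r in range(3)}
row = (0, 0, 1, 0, 0)
history = [row]
for _ in range(3):
    row = step(row, rule)
    history.append(row)
```

Because the automaton is deterministic, each generation is fully fixed by the previous one, which is what lets crystal growth be read as the execution of a finite program.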
International linear collider reference design report
Aarons, G.
2007-06-22
The International Linear Collider will give physicists a new cosmic doorway to explore energy regimes beyond the reach of today's accelerators. A proposed electron-positron collider, the ILC will complement the Large Hadron Collider, a proton-proton collider at the European Center for Nuclear Research (CERN) in Geneva, Switzerland, together unlocking some of the deepest mysteries in the universe. With LHC discoveries pointing the way, the ILC -- a true precision machine -- will provide the missing pieces of the puzzle. Consisting of two linear accelerators that face each other, the ILC will hurl some 10 billion electrons and their anti-particles, positrons, toward each other at nearly the speed of light. Superconducting accelerator cavities operating at temperatures near absolute zero give the particles more and more energy until they smash in a blazing crossfire at the centre of the machine. Stretching approximately 35 kilometres in length, the beams collide 14,000 times every second at extremely high energies -- 500 billion-electron-volts (GeV). Each spectacular collision creates an array of new particles that could answer some of the most fundamental questions of all time. The current baseline design allows for an upgrade to a 50-kilometre, 1 trillion-electron-volt (TeV) machine during the second stage of the project. This reference design provides the first detailed technical snapshot of the proposed future electron-positron collider, defining in detail the technical parameters and components that make up each section of the 31-kilometer long accelerator. The report will guide the development of the worldwide R&D program, motivate international industrial studies and serve as the basis for the final engineering design needed to make an official project proposal later this decade.
Design and performance of the Stanford Linear Collider Control System
Melen, R.E.
1984-10-01
The success of the Stanford Linear Collider (SLC) will be dependent upon the implementation of a very large advanced computer-based instrumentation and control system. This paper describes the architectural design of this system as well as a critique of its performance. This critique is based on experience obtained from its use in the control and monitoring of 1/3 of the SLAC linac and in support of an extensive machine physics experimental program. 11 references, 3 figures.
Reticle stage based linear dosimeter
Berger, Kurt W.
2007-03-27
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for photolithography systems that include: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.
Reticle stage based linear dosimeter
Berger, Kurt W.
2005-06-14
A detector to measure EUV intensity employs a linear array of photodiodes. The detector is particularly suited for photolithography systems that include: (i) a ringfield camera; (ii) a source of radiation; (iii) a condenser for processing radiation from the source of radiation to produce a ringfield illumination field for illuminating a mask; (iv) a reticle that is positioned at the ringfield camera's object plane and from which a reticle image in the form of an intensity profile is reflected into the entrance pupil of the ringfield camera, wherein the reticle moves in a direction that is transverse to the length of the ringfield illumination field that illuminates the reticle; (v) a detector for measuring the entire intensity along the length of the ringfield illumination field that is projected onto the reticle; and (vi) a wafer onto which the reticle image is projected from the ringfield camera.
Modeling patterns in data using linear and related models
Engelhardt, M.E.
1996-06-01
This report considers the use of linear models for analyzing data related to reliability and safety issues of the type usually associated with nuclear power plants. The report discusses some of the general results of linear regression analysis, such as the model assumptions and properties of the estimators of the parameters. The results are motivated with examples of operational data. Results about the important case of a linear regression model with one covariate are covered in detail. This case includes analysis of time trends. The analysis is applied with two different sets of time trend data. Diagnostic procedures and tests for the adequacy of the model are discussed. Some related methods such as weighted regression and nonlinear models are also considered. A discussion of the general linear model is also included. Appendix A gives some basic SAS programs and outputs for some of the analyses discussed in the body of the report. Appendix B is a review of some of the matrix theoretic results which are useful in the development of linear models.
High Performance Preconditioners and Linear Solvers
Energy Science and Technology Software Center
2006-07-27
Hypre is a software library focused on the solution of large, sparse linear systems of equations on massively parallel computers.
Cubit Adaptive Meshing Algorithm Library
Energy Science and Technology Software Center
2004-09-01
CAMAL (Cubit adaptive meshing algorithm library) is a software component library for mesh generation. CAMAL 2.0 includes components for triangle, quad, and tetrahedral meshing. A simple Application Programmers Interface (API) takes a discrete boundary definition and CAMAL computes a quality interior unstructured grid. The triangle and quad algorithms may also import a geometric definition of a surface on which to define the grid. CAMAL's triangle meshing uses a 3D space advancing front method, the quad meshing algorithm is based upon Sandia's patented paving algorithm, and the tetrahedral meshing algorithm employs the GHS3D-Tetmesh component developed by INRIA, France.
Wang, C. L.
2016-05-17
On the basis of the FluoroBancroft linear-algebraic method [S.B. Andersson, Opt. Exp. 16, 18714 (2008)], three highly resolved positioning methods were proposed for wavelength-shifting fiber (WLSF) neutron detectors. Using a Gaussian or exponential-decay light-response function (LRF), the nonlinear relation of photon-number profiles vs. x-pixels was linearized and neutron positions were determined. The proposed algorithms give an average 0.03-0.08 pixel position error, much smaller than that (0.29 pixel) from a traditional maximum photon algorithm (MPA). The new algorithms result in better detector uniformity, less position misassignment (ghosting), better spatial resolution, and an equivalent or better instrument resolution in powder diffraction than the MPA. Moreover, these characteristics will facilitate broader applications of WLSF detectors at time-of-flight neutron powder diffraction beamlines, including single-crystal diffraction and texture analysis.
The Computational Physics Program of the national MFE Computer Center
Mirin, A.A.
1989-01-01
Since June 1974, the MFE Computer Center has been engaged in a significant computational physics effort. The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The Computational Physics Group has been involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Another major area is the investigation of resistive magnetohydrodynamics in three dimensions, with applications to tokamaks and compact toroids. A third area is the investigation of kinetic instabilities using a 3-D particle code; this work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence have been under examination, with the hope of being able to explain anomalous transport. Also, we are collaborating in an international effort to evaluate fully three-dimensional linear stability of toroidal devices. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers. A summary of these programs is included in this paper. 6 tabs.
Belief network algorithms: A study of performance
Jitnah, N.
1996-12-31
This abstract gives an overview of the work. We present a survey of Belief Network algorithms and propose a domain characterization system to be used as a basis for algorithm comparison and for predicting algorithm performance.
Phase and amplitude control system for Stanford Linear Accelerator
Yoo, S.J.
1983-09-26
The computer controlled phase and amplitude detection system measures the instantaneous phase and amplitude of a 1 micro-second 2856 MHz rf pulse at a 180 Hz rate. This will be used for phase feedback control, and also for phase and amplitude jitter measurement. The program, which was originally written by John Fox and Keith Jobe, has been modified to improve the function of the system. The software algorithms used in the measurement are described, as is the performance of the prototype phase and amplitude detector system.
Optimized Algorithms Boost Combustion Research
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Methane Flame Simulations Run 6x Faster on NERSC's Hopper Supercomputer. November 25, 2014. Contact: Kathy Kincade, +1 510 495 2124, kkincade@lbl.gov. Turbulent combustion simulations, which provide input to the design of more fuel-efficient combustion systems, have gotten their own efficiency boost, thanks to researchers from the Computational Research Division (CRD) at Lawrence Berkeley National Laboratory.
A Linac Simulation Code for Macro-Particles Tracking and Steering Algorithm Implementation
Sun, Yipeng
2012-05-03
In this paper, a linac simulation code written in Fortran90 is presented and several simulation examples are given. The code is optimized to implement linac alignment and steering algorithms and to evaluate accelerator errors such as RF phase and acceleration-gradient errors, and quadrupole and BPM misalignments. It can track a single particle or a bunch of particles through normal linear-accelerator elements such as quadrupoles, RF cavities, dipole correctors, and drift spaces. A one-to-one steering algorithm and a global alignment (steering) algorithm are implemented in this code.
Berkeley Algorithms Help Researchers Understand Dark Energy
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Berkeley Algorithms Help Researchers Understand Dark Energy. November 24, 2014. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov
International Workshop on Linear Colliders 2010
None
2011-10-06
IWLC2010, International Workshop on Linear Colliders 2010. ECFA-CLIC-ILC joint meeting: Monday 18 October - Friday 22 October 2010. Venue: CERN and CICG (International Conference Centre Geneva, Switzerland). This year, the International Workshop on Linear Colliders organized by the European Committee for Future Accelerators (ECFA) will study the physics, detectors and accelerator complex of a linear collider covering both CLIC and ILC options. Contact: Workshop Secretariat. IWLC2010 is hosted by CERN.
2011 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Vehicle Technologies Plenary
Frolov, Vladimir; Backhaus, Scott; Chertkov, Misha
2014-10-24
We explore optimization methods for planning the placement, sizing and operations of Flexible Alternating Current Transmission System (FACTS) devices installed to relieve transmission grid congestion. We limit our selection of FACTS devices to Series Compensation (SC) devices that can be represented by modification of the inductance of transmission lines. Our master optimization problem minimizes the l{sub 1} norm of the inductance modification subject to the usual line thermal-limit constraints. We develop heuristics that reduce this non-convex optimization to a succession of Linear Programs (LP) which are accelerated further using cutting plane methods. The algorithm solves an instance of the MatPower Polish Grid model (3299 lines and 2746 nodes) in 40 seconds per iteration on a standard laptop, a speed-up that allows the sizing and placement of a family of SC devices to correct a large set of anticipated congestions. We observe that our algorithm finds feasible solutions that are always sparse, i.e., SC devices are placed on only a few lines. In a companion manuscript, we demonstrate our approach on realistically-sized networks that suffer congestion from a range of causes including generator retirement. In this manuscript, we focus on the development of our approach, investigate its structure on a small test system subject to congestion from uniform load growth, and demonstrate computational efficiency on a realistically-sized network.
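The l1-norm minimization described above can be recast as a linear program by the standard variable-splitting trick (minimize the sum of bound variables u with -u <= x <= u). The sketch below uses a tiny illustrative constraint matrix, not actual grid data or the authors' formulation:

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 2.0, 0.0],
              [0.0, 1.0, 3.0]])   # hypothetical sensitivities (stand-ins)
b = np.array([4.0, 6.0])          # hypothetical required relief, A x >= b

n = A.shape[1]
# minimize sum(u) subject to A x >= b and -u <= x <= u
c = np.concatenate([np.zeros(n), np.ones(n)])   # objective acts only on u
A_ub = np.block([[-A, np.zeros_like(A)],        # -A x        <= -b
                 [np.eye(n), -np.eye(n)],       #  x - u      <= 0
                 [-np.eye(n), -np.eye(n)]])     # -x - u      <= 0
b_ub = np.concatenate([-b, np.zeros(2 * n)])
res = linprog(c, A_ub=A_ub, b_ub=b_ub,
              bounds=[(None, None)] * n + [(0, None)] * n)
x = res.x[:n]  # l1 minimization tends to return sparse modification vectors
```

The sparsity the authors observe is the expected behavior of l1 objectives: the LP optimum sits at a vertex where many components of x are exactly zero.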
Ultra-high vacuum photoelectron linear accelerator
Yu, David U.L.; Luo, Yan
2013-07-16
An rf linear accelerator for producing an electron beam. The outer wall of the rf cavity of said linear accelerator being perforated to allow gas inside said rf cavity to flow to a pressure chamber surrounding said rf cavity and having means of ultra high vacuum pumping of the cathode of said rf linear accelerator. Said rf linear accelerator is used to accelerate polarized or unpolarized electrons produced by a photocathode, or to accelerate thermally heated electrons produced by a thermionic cathode, or to accelerate rf heated field emission electrons produced by a field emission cathode.
The Linear Engine Pathway of Transformation
This poster highlights the major milestones in the history of the linear engine in terms of technological advances, novel designs, and economic/social impact.
LED Replacements for Linear Fluorescent Lamps Webcast
In this June 20, 2011 webcast on LED products marketed as replacements for linear fluorescent lamps, Jason Tuenge of the Pacific Northwest National Laboratory (PNNL) discussed current Lighting...
Shang, Yu; Lin, Yu; Yu, Guoqiang; Li, Ting; Chen, Lei; Toborek, Michal
2014-05-12
The conventional semi-infinite solution for extracting the blood flow index (BFI) from diffuse correlation spectroscopy (DCS) measurements may cause errors in the estimation of BFI (αD{sub B}) in tissues with small volume and large curvature. We proposed an algorithm integrating an Nth-order linear model of the autocorrelation function with Monte Carlo simulation of photon migrations in tissue for the extraction of αD{sub B}. The volume and geometry of the measured tissue were incorporated in the Monte Carlo simulation, which overcomes the semi-infinite restrictions. The algorithm was tested using computer simulations on four tissue models with varied volumes/geometries and applied on an in vivo stroke model of mouse. Computer simulations show that the high-order (N ≥ 5) linear algorithm was more accurate in extracting αD{sub B} (errors < ±2%) from the noise-free DCS data than the semi-infinite solution (errors: −5.3% to −18.0%) for different tissue models. Although adding random noises to DCS data resulted in αD{sub B} variations, the mean values of errors in extracting αD{sub B} were similar to those reconstructed from the noise-free DCS data. In addition, the errors in extracting the relative changes of αD{sub B} using both the linear algorithm and the semi-infinite solution were fairly small (errors < ±2.0%) and did not rely on the tissue volume/geometry. The experimental results from the in vivo stroke mice agreed with those in simulations, demonstrating the robustness of the linear algorithm. DCS with the high-order linear algorithm shows potential for inter-subject comparison and longitudinal monitoring of absolute BFI in a variety of tissues/organs with different volumes/geometries.
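An Nth-order model that is linear in its coefficients can be fit by ordinary least squares, which is the general mechanism behind an "Nth-order linear model" of a correlation curve. The decay curve below is synthetic and the normalization step is our own device for numerical conditioning; this is not the authors' DCS code:

```python
import numpy as np

# Synthetic autocorrelation-like decay curve (illustrative, not DCS data).
tau = np.linspace(0.0, 1e-3, 200)             # correlation lag times (s)
t = tau / tau.max()                           # normalized lags (conditioning)
g = 1.0 + 0.5 * np.exp(-3.0 * t)              # synthetic decay curve

N = 5                                         # model order (N >= 5 per the abstract)
X = np.vander(t, N + 1, increasing=True)      # design matrix: t^0 .. t^N
coef, *_ = np.linalg.lstsq(X, g, rcond=None)  # linear least-squares fit
rms_err = float(np.sqrt(np.mean((X @ coef - g) ** 2)))
```

Because the model is linear in `coef`, the fit is a single matrix solve with no iterative nonlinear optimization, which is the practical appeal of linearized extraction schemes.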
Finite element analyses of a linear-accelerator electron gun
Iqbal, M. E-mail: muniqbal@ihep.ac.cn; Wasy, A.; Islam, G. U.; Zhou, Z.
2014-02-15
Thermo-structural analyses of the Beijing Electron-Positron Collider (BEPCII) linear-accelerator electron gun were performed for the gun operating with the cathode at 1000 °C. The gun was modeled in a computer-aided three-dimensional interactive application for finite element analyses through the ANSYS workbench. This was followed by simulations using the SLAC electron beam trajectory program EGUN for beam optics analyses. The simulations were compared with experimental results of the assembly to verify its beam parameters under the same boundary conditions. Simulation and test results were found to be in good agreement and hence confirmed the design parameters under the defined operating temperature. The gun has been operating continuously since commissioning, without any thermally induced failures, in the BEPCII linear accelerator.
Program Evaluation: Program Life Cycle
In general, different types of evaluation are carried out over different parts of a program's life cycle (e.g., Creating a program, Program is underway, or Closing out or end of program)....
Visual Empirical Region of Influence (VERI) Pattern Recognition Algorithms
2002-05-01
best pattern recognition results. With a small number of features in a data set an exact solution can be determined. However, the number of possible combinations increases exponentially with the number of features and an alternate means of finding a solution must be found. We developed and implemented a technique for finding solutions in data sets with both small and large numbers of features. The VERI interface tools were written using the Tcl/Tk GUI programming language, version 8.1. Although the Tcl/Tk packages are designed to run on multiple computer platforms, we have concentrated our efforts to develop a user interface for the ubiquitous DOS environment. The VERI algorithms are compiled, executable programs. The interfaces run the VERI algorithms in Leave-One-Out mode using the Euclidean metric.
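Leave-one-out evaluation with the Euclidean metric, the mode the VERI interfaces are described as running, can be sketched with a nearest-neighbor classifier. The tiny dataset is illustrative, and this is the generic evaluation protocol, not the VERI algorithm itself:

```python
import numpy as np

# Tiny illustrative dataset: two well-separated classes.
X = np.array([[0.0, 0.0], [0.1, 0.2], [3.0, 3.1], [2.9, 3.0]])
y = np.array([0, 0, 1, 1])

correct = 0
for i in range(len(X)):
    d = np.linalg.norm(X - X[i], axis=1)   # Euclidean distances to sample i
    d[i] = np.inf                          # leave sample i out of its own vote
    nearest = int(np.argmin(d))
    correct += int(y[nearest] == y[i])
accuracy = correct / len(X)
```

Leave-one-out gives a nearly unbiased accuracy estimate on small datasets, which is why it is a common evaluation mode for pattern recognition tools.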
GPU Accelerated Event Detection Algorithm
Energy Science and Technology Software Center
2011-05-25
Smart grid applications require new algorithmic approaches as well as parallel formulations. One of the critical components is the prediction of changes and detection of anomalies within the power grid. The state-of-the-art algorithms are not suited to handle the demands of streaming data analysis: (i) event detection algorithms must scale with the size of the data; (ii) algorithms must not only handle the multi-dimensional nature of the data, but also model both spatial and temporal dependencies in the data, which, for the most part, are highly nonlinear; (iii) algorithms must operate in an online fashion with streaming data. The GAEDA code is a new online anomaly detection technique that takes into account the spatial, temporal, and multi-dimensional aspects of the data set. The basic idea behind the approach is (a) to convert a multi-dimensional sequence into a univariate time series that captures the changes between successive windows extracted from the original sequence using singular value decomposition (SVD), and then (b) to apply known anomaly detection techniques for univariate time series. A key challenge is to make the algorithm scalable to huge datasets by adopting techniques from perturbation theory and incremental SVD analysis. Recent advances in tensor decomposition techniques are used to reduce the computational complexity of monitoring the change between successive windows and detecting anomalies in the same manner as described above. Because these algorithms involve many numerical operations and are highly data-parallelizable, parallel solutions are developed for many-core systems such as GPUs.
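Step (a) above, reducing a multi-dimensional stream to a univariate change series via windowed SVD, can be sketched as follows. The synthetic stream and the simple singular-value change score are our own illustrative assumptions, not the GAEDA implementation:

```python
import numpy as np

# Synthetic multi-dimensional stream with one injected anomaly.
rng = np.random.default_rng(0)
stream = rng.normal(size=(300, 8))      # 300 time steps, 8 "sensors"
stream[200:210] += 5.0                  # anomalous shift across all channels

win = 20
sigmas = []
for start in range(0, stream.shape[0] - win + 1, win):
    window = stream[start:start + win]
    s = np.linalg.svd(window, compute_uv=False)
    sigmas.append(s[0])                 # leading singular value per window

# Univariate change series between successive windows; spikes mark anomalies.
scores = np.abs(np.diff(sigmas))
flagged = int(np.argmax(scores))        # window pair with the largest change
```

Any standard univariate detector (threshold, CUSUM, etc.) can then be run on `scores`, which is the point of the reduction.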
Nonlinear Global Optimization Using Curdling Algorithm in Mathematica Environment
Energy Science and Technology Software Center
1997-08-05
OPTIMIZE implements a derivative-free, grid-refinement approach to nonlinear optimization that overcomes a number of deficiencies in existing approaches. Most notably, it finds extremal regions rather than only single extremal points. The program is interactive and collects information on control parameters and constraints using menus. For up to two (and potentially three) dimensions, function convergence is displayed graphically. Because the algorithm does not compute derivatives, gradients, or vectors, it is numerically stable. It can find all the roots of a polynomial in one pass, and it is an inherently parallel algorithm. OPTIMIZE-M is a modification of OPTIMIZE designed for use within the Mathematica environment created by Wolfram Research.
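The derivative-free grid-refinement idea can be sketched in one dimension: evaluate the objective on a coarse grid, keep the best cell, and refine inside it. This illustrates only the general approach, under our own bracketing rule; it is not the OPTIMIZE "curdling" code and does not reproduce its extremal-region output:

```python
import numpy as np

def grid_refine(f, lo, hi, levels=8, pts=11):
    """Derivative-free 1-D minimization by repeated grid refinement.

    f must accept a NumPy array of evaluation points.
    """
    for _ in range(levels):
        xs = np.linspace(lo, hi, pts)
        best = int(np.argmin(f(xs)))
        # shrink the bracket to the cells adjacent to the best grid point
        lo = xs[max(best - 1, 0)]
        hi = xs[min(best + 1, pts - 1)]
    return 0.5 * (lo + hi)

xmin = grid_refine(lambda x: (x - 1.7) ** 2 + 0.5, -10.0, 10.0)
```

Each level shrinks the bracket by a constant factor, so no derivatives or gradients are ever needed, which is the numerical-stability point made in the abstract.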
Voltage regulation in linear induction accelerators
Parsons, William M.
1992-01-01
Improvement in voltage regulation in a Linear Induction Accelerator wherein a varistor, such as a metal oxide varistor, is placed in parallel with the beam accelerating cavity and the magnetic core. The non-linear properties of the varistor result in a more stable voltage across the beam accelerating cavity than with a conventional compensating resistance.
Voltage regulation in linear induction accelerators
Parsons, W.M.
1992-12-29
Improvement in voltage regulation in a linear induction accelerator wherein a varistor, such as a metal oxide varistor, is placed in parallel with the beam accelerating cavity and the magnetic core is disclosed. The non-linear properties of the varistor result in a more stable voltage across the beam accelerating cavity than with a conventional compensating resistance. 4 figs.
Automated DNA Base Pair Calling Algorithm
Energy Science and Technology Software Center
1999-07-07
The procedure solves the problem of calling the DNA base pair sequence from two-channel electropherogram separations in an automated fashion. The core of the program involves a peak-picking algorithm based upon first, second, and third derivative spectra for each electropherogram channel, signal levels as a function of time, peak spacing, base pair signal-to-noise sequence patterns, frequency vs. ratio of the two-channel histograms, and confidence levels generated during the run. The ratios of the two channels at peak centers can be used to accurately and reproducibly determine the base pair sequence. A further enhancement is a novel Gaussian deconvolution used to determine the peak heights used in generating the ratio.
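Derivative-based peak picking, the core idea named above, can be sketched on a synthetic trace: a peak center is where the first derivative crosses from positive to negative while the second derivative is negative. Real electropherogram data would need smoothing first, and this sketch is not the program's algorithm:

```python
import numpy as np

# Synthetic single-channel trace with two Gaussian peaks (illustrative).
x = np.linspace(0, 10, 1000)
trace = np.exp(-(x - 3) ** 2 / 0.05) + 0.6 * np.exp(-(x - 7) ** 2 / 0.05)

d1 = np.gradient(trace, x)   # first derivative
d2 = np.gradient(d1, x)      # second derivative
# first-derivative sign change (+ to -) with negative curvature marks a peak
crossing = (d1[:-1] > 0) & (d1[1:] <= 0) & (d2[:-1] < 0)
peak_centers = x[:-1][crossing]
```

In a two-channel setting, the ratio of the channels sampled at these centers is what carries the base-call information, as the abstract describes.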
Practical application of equivalent linearization approaches to nonlinear piping systems
Park, Y.J.; Hofmayer, C.H.
1995-05-01
The use of mechanical energy absorbers as an alternative to conventional hydraulic and mechanical snubbers for piping supports has attracted wide interest among researchers and practitioners in the nuclear industry. The basic design concept of energy absorbers (EAs) is to dissipate the vibration energy of piping systems through nonlinear hysteretic actions of EAs under design seismic loads. Therefore, some type of nonlinear analysis needs to be performed in the seismic design of piping systems with EA supports. The equivalent linearization approach (ELA) can be a practical analysis tool for this purpose, particularly when the response spectrum approach (RSA) is also incorporated in the analysis formulations. In this paper, the following ELA/RSA methods are presented and compared to each other regarding their practicality and numerical accuracy: a response approach using the square root of sum of squares (SRSS) approximation (denoted RS in this paper); classical ELA based on modal combinations and linear random vibration theory (denoted CELA in this paper); and stochastic ELA based on direct solution of the response covariance matrix (denoted SELA in this paper). New algorithms to convert response spectra to equivalent power spectral density (PSD) functions are presented for both the CELA and SELA methods. The numerical accuracy of the three ELA methods is studied through a parametric error analysis. Finally, the practicality of the presented analysis is demonstrated in two application examples for piping systems with EA supports.
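The SRSS approximation named above combines peak modal responses as the square root of the sum of their squares, under the standard assumption of well-separated modes. The modal values below are illustrative numbers, not from the paper:

```python
import math

# Illustrative peak modal responses (arbitrary units), assumed to come
# from well-separated modes so the SRSS combination rule applies.
modal_peaks = [1.2, 0.7, 0.3]
srss = math.sqrt(sum(r * r for r in modal_peaks))
```

By construction the SRSS estimate always lies between the largest single modal peak and the absolute sum of all peaks, which is why it is less conservative than direct summation.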
Optimized Algorithm for Collision Probability Calculations in Cubic Geometry
Garcia, R.D.M.
2004-06-15
An optimized algorithm for implementing a recently developed method of computing collision probabilities (CPs) in three dimensions is reported in this work for the case of a homogeneous cube. Use is made of the geometrical regularity of the domain to rewrite, in a very compact way, the approximate formulas for calculating CPs in general three-dimensional geometry that were derived in a previous work by the author. The ensuing gain in computation time is found to be substantial: While the computation time associated with the general formulas increases as K{sup 2}, where K is the number of elements used in the calculation, that of the specific formulas increases only linearly with K. Accurate numerical results are given for several test cases, and an extension of the algorithm for computing the self-collision probability for a hexahedron is reported at the end of the work.
Developing and Implementing the Data Mining Algorithms in RAVEN
Sen, Ramazan Sonat; Maljovec, Daniel Patrick; Alfonsi, Andrea; Rabiti, Cristian
2015-09-01
The RAVEN code is becoming a comprehensive tool to perform probabilistic risk assessment, uncertainty quantification, and verification and validation. The RAVEN code is being developed to support many programs and to provide a set of methodologies and algorithms for advanced analysis. Scientific computer codes can generate enormous amounts of data. To post-process and analyze such data might, in some cases, take longer than the initial software runtime. Data mining algorithms/methods help in recognizing and understanding patterns in the data, and thus discover knowledge in databases. The methodologies used in dynamic probabilistic risk assessment or in uncertainty and error quantification analysis couple system/physics codes with simulation controller codes, such as RAVEN. RAVEN introduces both deterministic and stochastic elements into the simulation while the system/physics codes model the dynamics deterministically. A typical analysis is performed by sampling values of a set of parameters. A major challenge in using dynamic probabilistic risk assessment or uncertainty and error quantification analysis for a complex system is to analyze the large number of scenarios generated. Data mining techniques are typically used to better organize and understand data, i.e., recognizing patterns in the data. This report focuses on the development and implementation of Application Programming Interfaces (APIs) for different data mining algorithms, and the application of these algorithms to different databases.
Parallel algorithms for unconstrained optimizations by multisplitting
He, Qing
1994-12-31
In this paper a new parallel iterative algorithm for unconstrained optimization using the idea of multisplitting is proposed. This algorithm uses existing sequential algorithms without any internal parallelization. Some convergence and numerical results for this algorithm are presented. The experiments were performed on an Intel iPSC/860 hypercube with 64 nodes. Interestingly, the sequential implementation on one node shows that if the problem is split properly, the algorithm converges much faster than one without splitting.
Mesh Algorithms for PDE with Sieve I: Mesh Distribution
Knepley, Matthew G.; Karpeev, Dmitry A.
2009-01-01
We have developed a new programming framework, called Sieve, to support parallel numerical partial differential equation (PDE) algorithms operating over distributed meshes. We have also developed a reference implementation of Sieve in C++ as a library of generic algorithms operating on distributed containers conforming to the Sieve interface. Sieve makes instances of the incidence relation, or arrows, the conceptual first-class objects represented in the containers. Further, generic algorithms acting on this arrow container are systematically used to provide natural geometric operations on the topology and also, through duality, on the data. Finally, coverings and duality are used to encode not only individual meshes, but all types of hierarchies underlying PDE data structures, including multigrid and mesh partitions. In order to demonstrate the usefulness of the framework, we show how the mesh partition data can be represented and manipulated using the same fundamental mechanisms used to represent meshes. We present the complete description of an algorithm to encode a mesh partition and then distribute a mesh, which is independent of the mesh dimension, element shape, or embedding. Moreover, data associated with the mesh can be similarly distributed with exactly the same algorithm. The use of a high level of abstraction within Sieve leads to several benefits in terms of code reuse, simplicity, and extensibility. We discuss these benefits and compare our approach to other existing mesh libraries.
Parallel Algorithms for Graph Optimization using Tree Decompositions
Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S
2012-06-01
Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and the excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to the maximum weighted independent set problem can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
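On a graph that is itself a tree (tree-width 1), the dynamic program behind the tree-decomposition approach reduces to two table entries per node: the best weight with the node included versus excluded. The small tree and weights below are illustrative, and this sequential sketch omits the paper's parallel tabulation:

```python
# Children lists of a small rooted tree, with a weight for each node.
children = {0: [1, 2], 1: [3, 4], 2: [], 3: [], 4: []}
weight = {0: 3, 1: 5, 2: 2, 3: 4, 4: 1}

def mwis(node):
    """Return (best weight with node included, best with node excluded)."""
    inc, exc = weight[node], 0
    for child in children[node]:
        c_inc, c_exc = mwis(child)
        inc += c_exc                 # node taken: children must be excluded
        exc += max(c_inc, c_exc)     # node skipped: children choose freely
    return inc, exc

best = max(mwis(0))                  # maximum weighted independent set value
```

For general bounded tree-width graphs the same include/exclude reasoning is carried out over bags of vertices in the tree decomposition, which is where the dynamic programming tables become memory-hungry.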
Swiler, Laura Painton; Eldred, Michael Scott
2009-09-01
This report documents the results of an FY09 ASC V&V Methods level 2 milestone demonstrating new algorithmic capabilities for mixed aleatory-epistemic uncertainty quantification. Through the combination of stochastic expansions for computing aleatory statistics and interval optimization for computing epistemic bounds, mixed uncertainty analysis studies are shown to be more accurate and efficient than previously achievable. Part I of the report describes the algorithms and presents benchmark performance results. Part II applies these new algorithms to UQ analysis of radiation effects in electronic devices and circuits for the QASPR program.
DOE Publishes CALiPER Report on Cost-Effectiveness of Linear (T8) LED Lamps
May 30, 2014 - 4:58pm. The U.S. Department of Energy's CALiPER program has released Report 21.3, which is part of a series of investigations on linear LED lamps. Report 21.3 details a set of life-cycle cost simulations that compared a two-lamp troffer using LED lamps (38W total power draw) or fluorescent lamps (51W total power draw).
Optically isolated signal coupler with linear response
Kronberg, James W.
1994-01-01
An optocoupler for isolating electrical signals that translates an electrical input signal linearly to an electrical output signal. The optocoupler comprises a light emitter, a light receiver, and a light transmitting medium. The light emitter, preferably a blue, silicon carbide LED, is of the type that provides linear, electro-optical conversion of electrical signals within a narrow wavelength range. Correspondingly, the light receiver, which converts light signals to electrical signals and is preferably a cadmium sulfide photoconductor, is linearly responsive to light signals within substantially the same wavelength range as the blue LED.
LINEAR COLLIDER PHYSICS RESOURCE BOOK FOR SNOWMASS 2001.
ABE,T.; DAWSON,S.; HEINEMEYER,S.; MARCIANO,W.; PAIGE,F.; TURCOT,A.S.; ET AL
2001-05-03
The American particle physics community can look forward to a well-conceived and vital program of experimentation for the next ten years, using both colliders and fixed target beams to study a wide variety of pressing questions. Beyond 2010, these programs will be reaching the end of their expected lives. The CERN LHC will provide an experimental program of the first importance. But beyond the LHC, the American community needs a coherent plan. The Snowmass 2001 Workshop and the deliberations of the HEPAP subpanel offer a rare opportunity to engage the full community in planning our future for the next decade or more. A major accelerator project requires a decade from the beginning of an engineering design to the receipt of the first data. So it is now time to decide whether to begin a new accelerator project that will operate in the years soon after 2010. We believe that the world high-energy physics community needs such a project. With the great promise of discovery in physics at the next energy scale, and with the opportunity for the uncovering of profound insights, we cannot allow our field to contract to a single experimental program at a single laboratory in the world. We believe that an e{sup +}e{sup {minus}} linear collider is an excellent choice for the next major project in high-energy physics. Applying experimental techniques very different from those used at hadron colliders, an e{sup +}e{sup {minus}} linear collider will allow us to build on the discoveries made at the Tevatron and the LHC, and to add a level of precision and clarity that will be necessary to understand the physics of the next energy scale. It is not necessary to anticipate specific results from the hadron collider programs to argue for constructing an e{sup +}e{sup {minus}} linear collider; in any scenario that is now discussed, physics will benefit from the new information that e{sup +}e{sup {minus}} experiments can provide.
On Parallel Push-Relabel based Algorithms for Bipartite Maximum Matching
Langguth, Johannes; Azad, Md Ariful; Halappanavar, Mahantesh; Manne, Fredrik
2014-07-01
We study multithreaded push-relabel based algorithms for computing maximum cardinality matching in bipartite graphs. Matching is a fundamental combinatorial (graph) problem with applications in a wide variety of problems in science and engineering. We are motivated by its use in the context of sparse linear solvers for computing the maximum transversal of a matrix. We implement and test our algorithms on several multi-socket multicore systems and compare their performance to state-of-the-art augmenting path-based serial and parallel algorithms using a test set comprising a wide range of real-world instances. Building on several heuristics for enhancing performance, we demonstrate good scaling for the parallel push-relabel algorithm. We show that it is comparable to the best augmenting path-based algorithms for bipartite matching. To the best of our knowledge, this is the first extensive study of multithreaded push-relabel based algorithms. In addition to a direct impact on the applications using matching, the proposed algorithmic techniques can be extended to preflow-push based algorithms for computing maximum flow in graphs.
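The push-relabel implementation studied in the paper is specialized and multithreaded; as a point of reference, the serial augmenting path-based baseline it is compared against can be sketched in a few lines. This is Kuhn's algorithm, a hypothetical minimal version, not the authors' code:

```python
def max_bipartite_matching(adj, n_right):
    """Maximum cardinality matching via augmenting paths (Kuhn's algorithm).
    adj[u] lists the right-side vertices adjacent to left vertex u."""
    match_r = [-1] * n_right  # match_r[v] = left vertex matched to v, or -1

    def try_augment(u, seen):
        for v in adj[u]:
            if not seen[v]:
                seen[v] = True
                # v is free, or its current partner can be re-matched elsewhere
                if match_r[v] == -1 or try_augment(match_r[v], seen):
                    match_r[v] = u
                    return True
        return False

    size = 0
    for u in range(len(adj)):
        if try_augment(u, [False] * n_right):
            size += 1
    return size, match_r
```

Each call to `try_augment` searches for an augmenting path from one free left vertex, giving the O(V·E) bound that push-relabel variants improve on in practice for hard instances.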
Adaptive protection algorithm and system
Hedrick, Paul (Pittsburgh, PA); Toms, Helen L. (Irwin, PA); Miller, Roger M. (Mars, PA)
2009-04-28
An adaptive protection algorithm and system for protecting electrical distribution systems traces the flow of power through a distribution system, assigns a value (or rank) to each circuit breaker in the system and then determines the appropriate trip set points based on the assigned rank.
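A minimal sketch of the ranking idea, assuming a radial (tree-shaped) distribution system and a hypothetical coordination policy in which more-downstream breakers get lower pickup currents so faults clear nearest to their source; the actual trip-set-point rules of the patented system are not described in the abstract:

```python
def rank_breakers(parent):
    """parent[b] names the upstream breaker feeding b (None for the source
    breaker); rank = distance from the source along the power-flow path."""
    rank = {}

    def depth(b):
        if b not in rank:
            rank[b] = 0 if parent[b] is None else depth(parent[b]) + 1
        return rank[b]

    for b in parent:
        depth(b)
    return rank

def trip_setpoints(rank, base_current, margin=1.25):
    """Deeper (more downstream) breakers get lower pickup currents so a fault
    is cleared by the breaker nearest to it (hypothetical policy and margin)."""
    max_rank = max(rank.values())
    return {b: base_current * margin ** (max_rank - r) for b, r in rank.items()}
```

For a feeder tree `{'main': None, 'f1': 'main', 'f2': 'main', 'b1': 'f1'}`, the branch breaker `b1` gets the lowest pickup and `main` the highest, so overlapping protection zones trip in the intended order.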
Directives, Delegations, and Other Requirements [Office of Management (MA)]
1997-08-21
This volume describes program administration that establishes and maintains effective organizational management and control of the emergency management program. Canceled by DOE G 151.1-3.
Status of the SLC (Stanford Linear Collider)
Coupal, D.P.
1989-07-01
This report presents a brief review of the status of the Stanford Linear Collider. Topics covered are: Beam luminosity, Detectors and backgrounds; and Future prospects. 3 refs., 8 figs., 1 tab. (LSP)
Randomized Linear Algebra for BioImaging
Mass-Spectrometry Imaging: Mass spectrometry measures ions derived from the molecules present in a biological sample. Spectra of the ions are acquired at each location (pixel) of a sample, allowing for the collection of spatially resolved mass spectra. This mode of analysis is known as mass spectrometry imaging (MSI). The addition of ion-mobility separation (IMS) to MSI adds another dimension, drift time. The
Kok, J.
1988-01-01
For the human programmer, the ease of coding distributed computations depends strongly on the suitability of the programming language employed. With a particular language, it is also important whether the capabilities of one or more parallel architectures can be addressed efficiently by the available language constructs. This paper discusses the possibilities of the high-level language Ada, and in particular of its tasking concept, as a descriptive tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.
Block quasi-minimal residual iterations for non-Hermitian linear systems
Freund, R.W.
1994-12-31
Many applications require the solution of multiple linear systems that have the same coefficient matrix, but differ only in their right-hand sides. Instead of applying an iterative method to each of these systems individually, it is usually more efficient to employ a block version of the method that generates blocks of iterates for all the systems simultaneously. An example of such an iteration is the block conjugate gradient algorithm, which was first studied by Underwood and O'Leary. On parallel architectures, block versions of conjugate gradient-type methods are attractive even for the solution of single linear systems, since they have fewer synchronization points than the standard versions of these algorithms. In this talk, the author presents a block version of Freund and Nachtigal's quasi-minimal residual (QMR) method for the iterative solution of non-Hermitian linear systems. He describes two different implementations of the block-QMR method, one based on a block version of the three-term Lanczos algorithm and one based on coupled two-term block recurrences. In both cases, the underlying block-Lanczos process still allows arbitrary normalizations of the vectors within each block, and the author discusses different normalization strategies. To maintain linear independence within each block, it is usually necessary to reduce the block size in the course of the iteration, and the author describes a deflation technique for performing this reduction. He also presents some convergence results and reports results of numerical experiments with the block-QMR method. Finally, the author discusses possible block versions of transpose-free Lanczos-based iterations such as the TFQMR method.
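The deflation step can be illustrated in isolation: given a block of vectors, drop the (near-)linearly-dependent columns. The sketch below uses modified Gram-Schmidt with a drop tolerance as a stand-in; in block-QMR itself the reduction happens inside the block-Lanczos recurrences:

```python
def deflate_block(block, tol=1e-10):
    """Orthonormalize the columns in `block` (each a list of floats) by
    modified Gram-Schmidt, dropping any column whose component outside the
    span of the columns already kept falls below `tol` (deflation)."""
    kept = []
    for col in block:
        v = list(col)
        for q in kept:
            c = sum(qi * vi for qi, vi in zip(q, v))
            v = [vi - c * qi for qi, vi in zip(q, v)]
        norm = sum(vi * vi for vi in v) ** 0.5
        if norm > tol:
            kept.append([vi / norm for vi in v])
    return kept
```

Feeding in three columns where the third is the sum of the first two returns only two orthonormal vectors, i.e. the block size has been reduced by one.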
Brau, James E
2013-04-22
The U.S. Linear Collider Detector R&D program, supported by the DOE and NSF umbrella grants to the University of Oregon, made significant advances on many critical aspects of the ILC detector program. Progress advanced on vertex detector sensor development, silicon and TPC tracking, calorimetry on candidate technologies, and muon detection, as well as on beamline measurements of luminosity, energy, and polarization.
Computer programs for multilocus haplotyping of general pedigrees
Weeks, D.E.; O`Connell, J.R.; Sobel, E.
1995-06-01
We have recently developed and implemented three different computer algorithms for accurate haplotyping with large numbers of codominant markers. Each of these algorithms employs likelihood criteria that correctly incorporate all intermarker recombination fractions. The three programs, HAPLO, SIMCROSS, and SIMWALK, are now available for haplotyping general pedigrees. The HAPLO program will be distributed as part of the Programs for Pedigree Analysis package by Kenneth Lange. The SIMCROSS and SIMWALK programs are available by anonymous ftp from watson.hgen.pitt.edu. Each program is written in FORTRAN 77 and is distributed as source code. 15 refs.
Composite-step product methods for solving nonsymmetric linear systems
Chan, T.F.; Szeto, T.
1994-12-31
The Biconjugate Gradient (BCG) algorithm is the "natural" generalization of the classical Conjugate Gradient method to nonsymmetric linear systems. It is an attractive method because of its simplicity and its good convergence properties. Unfortunately, BCG suffers from two kinds of breakdowns (divisions by 0): one due to the non-existence of the residual polynomial, and the other due to a breakdown in the recurrence relationship used. There are many look-ahead techniques in existence which are designed to handle these breakdowns. Although the step size needed to overcome an exact breakdown can be computed in principle, these methods can unfortunately be quite complicated for handling near breakdowns since the sizes of the look-ahead steps are variable (indeed, the breakdowns can be incurable). Recently, Bank and Chan introduced the Composite Step Biconjugate Gradient (CSBCG) algorithm, an alternative which cures only the first of the two breakdowns mentioned by skipping over steps for which the BCG iterate is not defined. This is done with a simple modification of BCG which needs only a maximum look-ahead step size of 2 to eliminate the (near) breakdown and to smooth the sometimes erratic convergence of BCG. Thus, instead of a more complicated (but less prone to breakdown) version, CSBCG cures only one kind of breakdown, but does so with a minimal modification to the usual implementation of BCG in the hope that its empirically observed stability will be inherited. The authors note, then, that the Composite Step idea can be incorporated anywhere the BCG polynomial is used; in particular, in product methods such as CGS, Bi-CGSTAB, and TFQMR. Doing this not only cures the breakdown mentioned above, but also takes on the advantages of these product methods, namely, no multiplications by the transpose matrix and a faster convergence rate than BCG.
Generation of multi-million element meshes for solid model-based geometries: The Dicer algorithm
Melander, D.J.; Benzley, S.E.; Tautges, T.J.
1997-06-01
The Dicer algorithm generates a fine mesh by refining each element in a coarse all-hexahedral mesh generated by any existing all-hexahedral mesh generation algorithm. The fine mesh is geometry-conforming. Using existing all-hexahedral meshing algorithms to define the initial coarse mesh simplifies the overall meshing process and allows dicing to take advantage of improvements in other meshing algorithms immediately. The Dicer algorithm will be used to generate large meshes in support of the ASCI program. The authors also plan to use dicing as the basis for parallel mesh generation. Dicing strikes a careful balance between the interactive mesh generation and multi-million element mesh generation processes for complex 3D geometries, providing an efficient means for producing meshes of varying refinement once the coarse mesh is obtained.
Daylighting simulation: methods, algorithms, and resources
Carroll, William L.
1999-12-01
This document presents work conducted as part of Subtask C, ''Daylighting Design Tools'', Subgroup C2, ''New Daylight Algorithms'', of the IEA SHC Task 21 and the ECBCS Program Annex 29 ''Daylight in Buildings''. The search for and collection of daylighting analysis methods and algorithms led to two important observations. First, there is a wide range of needs for different types of methods to produce a complete analysis tool. These include: geometry; light modeling; characterization of the natural illumination resource; materials and components properties and representations; and usability issues (interfaces, interoperability, representation of analysis results, etc.). Second, very advantageously, there have been rapid advances in many basic methods in these areas, due to other forces. They are in part driven by: the commercial computer graphics community (commerce, entertainment); the lighting industry; architectural rendering and visualization for projects; and academia (course materials, research). This has led to a very rich set of information resources that have direct applicability to the small daylighting analysis community. Furthermore, much of this information is in fact available online. Because much of the information about methods and algorithms is now online, an innovative reporting strategy was used: the core formats are electronic, and used to produce a printed form only secondarily. The electronic forms include both online WWW pages and a downloadable .PDF file with the same appearance and content. Both electronic forms include live primary and indirect links to actual information sources on the WWW. In most cases, little additional commentary is provided regarding the information links or citations that are provided. This in turn allows the report to be very concise. The links are expected to speak for themselves. The report consists of only about 10+ pages, with about 100+ primary links, but with potentially thousands of indirect links. For purposes of
Time Variant Floating Mean Counting Algorithm
Energy Science and Technology Software Center
1999-06-03
This software was written to test a time variant floating mean counting algorithm. The algorithm was developed by Westinghouse Savannah River Company, and a provisional patent has been filed on it. The test software was developed to work with the Val Tech model IVB prototype version II count rate meter hardware, and was used to verify that the algorithm developed by WSRC could be correctly implemented with the vendor's hardware.
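The patented WSRC algorithm itself is not described in this record; the sketch below is a generic time-variant floating-mean estimator illustrating the underlying idea of an averaging time constant that adapts to rate changes (the switching rule and all parameters are hypothetical):

```python
def floating_mean(counts, dt, tau_fast=1.0, tau_slow=10.0, k=3.0):
    """Adaptive (time-variant) floating mean: track with a slow time constant
    while the signal is steady, but switch to a fast time constant when a new
    sample deviates from the running mean by more than k sigma."""
    mean = counts[0]
    var = mean if mean > 0 else 1.0  # crude initial variance (Poisson-like)
    out = []
    for c in counts:
        sigma = var ** 0.5
        tau = tau_fast if abs(c - mean) > k * sigma else tau_slow
        alpha = dt / (tau + dt)          # first-order filter coefficient
        mean += alpha * (c - mean)
        var += alpha * ((c - mean) ** 2 - var)
        out.append(mean)
    return out
```

On a steady input the long time constant gives a smooth, low-variance rate estimate; after a step change the deviation test trips the short time constant so the estimate converges quickly instead of lagging for many slow-filter time constants.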
Dual-range linearized transimpedance amplifier system
Wessendorf, Kurt O.
2010-11-02
A transimpedance amplifier system is disclosed which simultaneously generates a low-gain output signal and a high-gain output signal from an input current signal using a single transimpedance amplifier having two different feedback loops with different amplification factors to generate two different output voltage signals. One of the feedback loops includes a resistor, and the other feedback loop includes another resistor in series with one or more diodes. The transimpedance amplifier system includes a signal linearizer to linearize one or both of the low- and high-gain output signals by scaling and adding the two output voltage signals from the transimpedance amplifier. The signal linearizer can be formed either as an analog device using one or two summing amplifiers, or alternately can be formed as a digital device using two analog-to-digital converters and a digital signal processor (e.g. a microprocessor or a computer).
Linear transformer driver for pulse generation
Kim, Alexander A; Mazarakis, Michael G; Sinebryukhov, Vadim A; Volkov, Sergey N; Kondratiev, Sergey S; Alexeenko, Vitaly M; Bayol, Frederic; Demol, Gauthier; Stygar, William A
2015-04-07
A linear transformer driver includes at least one ferrite ring positioned to accept a load. The linear transformer driver also includes a first power delivery module that includes a first charge storage device and a first switch. The first power delivery module sends a first energy in the form of a first pulse to the load. The linear transformer driver also includes a second power delivery module including a second charge storage device and a second switch. The second power delivery module sends a second energy in the form of a second pulse to the load. The second pulse has a frequency that is approximately three times the frequency of the first pulse. The at least one ferrite ring is positioned to force the first pulse and the second pulse to the load by temporarily isolating the first pulse and the second pulse from an electrical ground.
Constant-complexity stochastic simulation algorithm with optimal binning
Sanft, Kevin R.; Othmer, Hans G.
2015-08-21
At the molecular level, biochemical processes are governed by random interactions between reactant molecules, and the dynamics of such systems are inherently stochastic. When the copy numbers of reactants are large, a deterministic description is adequate, but when they are small, such systems are often modeled as continuous-time Markov jump processes that can be described by the chemical master equation. Gillespie’s Stochastic Simulation Algorithm (SSA) generates exact trajectories of these systems, but the amount of computational work required for each step of the original SSA is proportional to the number of reaction channels, leading to computational complexity that scales linearly with the problem size. The original SSA is therefore inefficient for large problems, which has prompted the development of several alternative formulations with improved scaling properties. We describe an exact SSA that uses a table data structure with event time binning to achieve constant computational complexity with respect to the number of reaction channels for weakly coupled reaction networks. We present a novel adaptive binning strategy and discuss optimal algorithm parameters. We compare the computational efficiency of the algorithm to existing methods and demonstrate excellent scaling for large problems. This method is well suited for generating exact trajectories of large weakly coupled models, including those that can be described by the reaction-diffusion master equation that arises from spatially discretized reaction-diffusion processes.
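For contrast with the constant-complexity method, Gillespie's original direct method can be sketched as below; the linear scan over reaction channels is exactly the O(M) per-step cost that the table/binning scheme reduces to O(1). The reaction network and rate constants in the example are hypothetical:

```python
import random

def ssa_direct(x, propensities, stoich, t_end, rng):
    """Gillespie's direct method. The linear search over channels below is
    the O(M) step that event-time binning replaces with an O(1) lookup."""
    t = 0.0
    while True:
        a = [f(x) for f in propensities]
        a0 = sum(a)
        if a0 == 0.0:
            return x                      # no reaction can fire
        t += rng.expovariate(a0)          # exponential time to next reaction
        if t > t_end:
            return x
        # select channel j with probability a[j]/a0 (linear search)
        r, acc = rng.random() * a0, 0.0
        for j, aj in enumerate(a):
            acc += aj
            if r < acc:
                break
        x = [xi + s for xi, s in zip(x, stoich[j])]

# reversible isomerization A <-> B, starting from 100 copies of A
rng = random.Random(1)
x_final = ssa_direct([100, 0],
                     [lambda x: 1.0 * x[0], lambda x: 0.5 * x[1]],
                     [(-1, 1), (1, -1)], 5.0, rng)
```

Because each firing of either channel moves one molecule between A and B, every exact trajectory conserves the total copy number, which makes a convenient sanity check.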
A fast contour descriptor algorithm for supernova image classification
Aragon, Cecilia R.; Aragon, David Bradburn
2006-07-16
We describe a fast contour descriptor algorithm and its application to a distributed supernova detection system (the Nearby Supernova Factory) that processes 600,000 candidate objects in 80 GB of image data per night. Our shape-detection algorithm reduced the number of false positives generated by the supernova search pipeline by 41% while producing no measurable impact on running time. Fourier descriptors are an established method of numerically describing the shapes of object contours, but transform-based techniques are ordinarily avoided in this type of application due to their computational cost. We devised a fast contour descriptor implementation for supernova candidates that meets the tight processing budget of the application. Using the lowest-order descriptors (F{sub 1} and F{sub -1}) and the total variance in the contour, we obtain one feature representing the eccentricity of the object and another denoting its irregularity. Because the number of Fourier terms to be calculated is fixed and small, the algorithm runs in linear time, rather than the O(n log n) time of an FFT. Constraints on object size allow further optimizations so that the total cost of producing the required contour descriptors is about 4n addition/subtraction operations, where n is the length of the contour.
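A sketch of the direct O(n) computation of the two lowest-order Fourier coefficients of a contour; the index convention and normalization here are assumptions, and the paper's exact feature definitions may differ:

```python
import cmath
import math

def low_order_descriptors(contour):
    """Compute the F_1 and F_-1 Fourier coefficients of a closed contour
    (list of (x, y) points) directly in O(n), with no FFT: only two fixed
    frequencies are needed, so the full transform is unnecessary."""
    n = len(contour)
    f_pos = 0j
    f_neg = 0j
    for k, (x, y) in enumerate(contour):
        z = complex(x, y)
        w = cmath.exp(-2j * math.pi * k / n)
        f_pos += z * w               # F_1 term
        f_neg += z * w.conjugate()   # F_-1 term
    return f_pos / n, f_neg / n

# sanity check on a circle of radius 2: F_1 recovers the radius, F_-1 vanishes
circle = [(2 * math.cos(2 * math.pi * k / 64), 2 * math.sin(2 * math.pi * k / 64))
          for k in range(64)]
f1, fm1 = low_order_descriptors(circle)
```

A ratio such as |F_-1| / |F_1| is then near zero for circular contours and grows with elongation, which is the kind of eccentricity feature the abstract describes.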
An efficient algorithm for incompressible N-phase flows
Dong, S.
2014-11-01
We present an efficient algorithm within the phase field framework for simulating the motion of a mixture of N (N ≥ 2) immiscible incompressible fluids, with possibly very different physical properties such as densities, viscosities, and pairwise surface tensions. The algorithm employs a physical formulation for the N-phase system that honors the conservations of mass and momentum and the second law of thermodynamics. We present a method for uniquely determining the mixing energy density coefficients involved in the N-phase model based on the pairwise surface tensions among the N fluids. Our numerical algorithm has several attractive properties that make it computationally very efficient: (i) it has completely de-coupled the computations for different flow variables, and has also completely de-coupled the computations for the (N-1) phase field functions; (ii) the algorithm only requires the solution of linear algebraic systems after discretization, and no nonlinear algebraic solve is needed; (iii) for each flow variable the linear algebraic system involves only constant and time-independent coefficient matrices, which can be pre-computed during pre-processing, despite the variable density and variable viscosity of the N-phase mixture; (iv) within a time step the semi-discretized system involves only individual de-coupled Helmholtz-type (including Poisson) equations, despite the strongly-coupled phase field system of fourth spatial order at the continuum level; (v) the algorithm is suitable for large density contrasts and large viscosity contrasts among the N fluids. Extensive numerical experiments have been presented for several problems involving multiple fluid phases, large density contrasts and large viscosity contrasts. In particular, we compare our simulations with the de Gennes theory, and demonstrate that our method produces physically accurate results for multiple fluid phases. We also demonstrate the significant and sometimes dramatic effects of the gravity
Linear Transformation Method for Multinuclide Decay Calculation
Ding Yuan
2010-12-29
A linear transformation method for generic multinuclide decay calculations is presented together with its properties and implications. The method takes advantage of the linear form of the decay solution N(t) = F(t)N{sub 0}, where N(t) is a column vector that represents the numbers of atoms of the radioactive nuclides in the decay chain, N{sub 0} is the initial value vector of N(t), and F(t) is a lower triangular matrix whose time-dependent elements are independent of the initial values of the system.
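For a simple linear decay chain with distinct decay constants, the elements of the lower-triangular F(t) are given by the classical Bateman solution. A minimal sketch (pure Python, with hypothetical decay constants; the confluent case of repeated decay constants needs a generalized formula):

```python
import math

def decay_matrix(lams, t):
    """Lower-triangular F(t) for a linear decay chain lams[0] -> lams[1] -> ...
    built from the Bateman solution, so that N(t) = F(t) N0.  All decay
    constants are assumed distinct; a stable end nuclide has lambda = 0."""
    n = len(lams)
    F = [[0.0] * n for _ in range(n)]
    for j in range(n):                    # column j: response to N0[j]
        for i in range(j, n):             # row i: population of nuclide i
            rates = 1.0
            for k in range(j, i):         # production rates along the chain
                rates *= lams[k]
            s = 0.0
            for k in range(j, i + 1):
                denom = 1.0
                for l in range(j, i + 1):
                    if l != k:
                        denom *= lams[l] - lams[k]
                s += math.exp(-lams[k] * t) / denom
            F[i][j] = rates * s
    return F

# chain A -> B -> C with C stable; initial inventory: 100 atoms of A
F = decay_matrix([1.0, 0.5, 0.0], 2.0)
N0 = [100.0, 0.0, 0.0]
N = [sum(F[i][j] * N0[j] for j in range(3)) for i in range(3)]
```

Because the chain ends in a stable nuclide, the total number of atoms is conserved, and the diagonal entries of F(t) are simply exp(-lambda_i t), matching the structure stated in the abstract.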
Linear Concentrator Solar Power Plant Illustration
This graphic illustrates linear concentrating solar power (CSP) collectors that capture the sun's energy with large mirrors that reflect and focus the sunlight onto a linear receiver tube. The receiver contains a fluid that is heated by the sunlight and then used to create superheated steam that spins a turbine that drives a generator to produce electricity. Alternatively, steam can be generated directly in the solar field, eliminating the need for costly heat exchangers. In a parabolic trough system, the receiver tube is positioned along the focal line of each parabola-shaped reflector.
Linear and angular retroreflecting interferometric alignment target
Maxey, L. Curtis
2001-01-01
The present invention provides a method and apparatus for measuring both the linear displacement and angular displacement of an object using a linear interferometer system and an optical target comprising a lens, a reflective surface and a retroreflector. The lens, reflecting surface and retroreflector are specifically aligned and fixed in optical connection with one another, creating a single optical target which moves as a unit that provides multi-axis displacement information for the object with which it is associated. This displacement information is useful in many applications including machine tool control systems and laser tracker systems, among others.
Beamstrahlung spectra in next generation linear colliders
Barklow, T.; Chen, P.; Kozanecki, W.
1992-04-01
For the next generation of linear colliders, the energy loss due to beamstrahlung during the collision of the e{sup +}e{sup {minus}} beams is expected to substantially influence the effective center-of-mass energy distribution of the colliding particles. In this paper, we first derive analytical formulae for the electron and photon energy spectra under multiple beamstrahlung processes, and for the e{sup +}e{sup {minus}} and {gamma}{gamma} differential luminosities. We then apply our formulation to various classes of 500 GeV e{sup +}e{sup {minus}} linear collider designs currently under study.
An implementation analysis of the linear discontinuous finite element method
Becker, T. L.
2013-07-01
This paper provides an implementation analysis of the linear discontinuous finite element method (LD-FEM) that spans the space of (l, x, y, z). A practical implementation of LD includes 1) selecting a computationally efficient algorithm to solve the 4 x 4 matrix system Ax = b that describes the angular flux in a mesh element, and 2) choosing how to store the data used to construct the matrix A and the vector b to either reduce memory consumption or increase computational speed. To analyze the first of these, three algorithms were selected to solve the 4 x 4 matrix equation: Cramer's rule, a streamlined implementation of Gaussian elimination, and LAPACK's Gaussian elimination subroutine dgesv. The results indicate that Cramer's rule and the streamlined Gaussian elimination algorithm perform nearly equivalently and outperform LAPACK's implementation of Gaussian elimination by a factor of 2. To analyze the second implementation detail, three formulations of the discretized LD-FEM equations were provided for implementation in a transport solver: 1) a low-memory formulation, which relies heavily on 'on-the-fly' calculations and less on the storage of pre-computed data, 2) a high-memory formulation, which pre-computes much of the data used to construct A and b, and 3) a reduced-memory formulation, which lies between the low- and high-memory formulations. These three formulations were assessed in the Jaguar transport solver based on relative memory footprint and computational speed for increasing mesh size and quadrature order. The results indicated that the memory savings of the low-memory formulation were not sufficient to warrant its implementation. The high-memory formulation resulted in a significant speed advantage over the reduced-memory option (10-50%), but also resulted in a proportional increase in memory consumption (5-45%) for increasing quadrature order and mesh count; therefore, the practitioner should weigh the system memory constraints against any
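For illustration, Cramer's rule for a small dense system can be sketched as below. This is pure Python with cofactor-expansion determinants, practical only for fixed tiny systems like the 4 x 4 ones discussed; the paper's compiled implementations are of course far faster:

```python
def det(M):
    """Determinant by cofactor expansion along the first row. Exponential in
    general, but perfectly adequate for a fixed 4 x 4 system."""
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1.0) ** j * M[0][j] * det(minor)
    return total

def cramer_solve(A, b):
    """Solve Ax = b by Cramer's rule: x_j = det(A_j) / det(A), where A_j is
    A with column j replaced by b."""
    d = det(A)
    x = []
    for j in range(len(b)):
        Aj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        x.append(det(Aj) / d)
    return x
```

For a fixed 4 x 4 system the determinants can be fully unrolled with no loops, branches, or pivot searches, which is one plausible reason Cramer's rule competes with streamlined Gaussian elimination at this size.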
Linear corrugating: Forest products project fact sheet
NREL
1999-12-14
This is a fact sheet written for the Inventions and Innovation Program about a new process for creating corrugated cardboard products.
Hager, Robert; Yoon, E. S.; Ku, S.; D'Azevedo, E. F.; Worley, P. H.; Chang, C. S.
2016-04-04
Fusion edge plasmas can be far from thermal equilibrium and require the use of a non-linear collision operator for accurate numerical simulations. The non-linear single-species Fokker–Planck–Landau collision operator developed by Yoon and Chang (2014) [9] is generalized to include multiple particle species. Moreover, the finite volume discretization used in this work naturally yields exact conservation of mass, momentum, and energy. The implementation of this new non-linear Fokker–Planck–Landau operator in the gyrokinetic particle-in-cell codes XGC1 and XGCa is described and results of a verification study are discussed. Finally, the numerical techniques that make our non-linear collision operator viable on high-performance computing systems are described, including specialized load balancing algorithms and nested OpenMP parallelization. As a result, the collision operator's good weak and strong scaling behavior are shown.
Jing, Yaqi; Meng, Qinghao; Qi, Peifeng; Zeng, Ming; Li, Wei; Ma, Shugen
2014-05-15
An electronic nose (e-nose) was designed to classify Chinese liquors of the same aroma style. A new feature-reduction method combining feature selection with feature extraction was proposed. The feature-selection stage used 8 algorithms based on information theory and reduced the dimension of the feature space to 41. Kernel entropy component analysis was introduced into the e-nose system as a feature extraction method, further reducing the dimension of the feature space to 12. Classification of Chinese liquors was performed using a back-propagation artificial neural network (BP-ANN), linear discriminant analysis (LDA), and a multi-linear classifier. The classification rate of the multi-linear classifier was 97.22%, higher than that of LDA and BP-ANN. Finally, classification of Chinese liquors according to their raw materials and geographical origins was performed using the proposed multi-linear classifier; the classification rates were 98.75% and 100%, respectively.
Linear functional minimization for inverse modeling
Barajas-Solano, David A.; Wohlberg, Brendt Egon; Vesselinov, Velimir Valentinov; Tartakovsky, Daniel M.
2015-06-01
In this paper, we present a novel inverse modeling strategy to estimate spatially distributed parameters of nonlinear models. The maximum a posteriori (MAP) estimators of these parameters are based on a likelihood functional, which contains spatially discrete measurements of the system parameters and spatiotemporally discrete measurements of the transient system states. The piecewise continuity prior for the parameters is expressed via Total Variation (TV) regularization. The MAP estimator is computed by minimizing a nonquadratic objective equipped with the TV operator. We apply this inversion algorithm to estimate hydraulic conductivity of a synthetic confined aquifer from measurements of conductivity and hydraulic head. The synthetic conductivity field is composed of a low-conductivity heterogeneous intrusion into a high-conductivity heterogeneous medium. Our algorithm accurately reconstructs the location, orientation, and extent of the intrusion from the steady-state data only. Finally, addition of transient measurements of hydraulic head improves the parameter estimation, accurately reconstructing the conductivity field in the vicinity of observation locations.
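As a hedged illustration of the TV-regularized MAP idea (not the authors' algorithm, whose objective also includes state measurements and a nonlinear forward model), the 1-D sketch below minimizes a smoothed TV objective by gradient descent to recover a piecewise-constant profile from noisy point data:

```python
import numpy as np

def tv_denoise(d, lam=1.0, eps=0.1, step=0.02, iters=3000):
    # Minimize 0.5*||u-d||^2 + lam*sum_i sqrt((u[i+1]-u[i])^2 + eps^2)
    # by plain gradient descent (a smoothed 1-D stand-in for TV).
    u = d.astype(float).copy()
    for _ in range(iters):
        du = np.diff(u)
        w = du / np.sqrt(du**2 + eps**2)   # derivative of the smoothed |du|
        grad_tv = np.concatenate(([-w[0]], w[:-1] - w[1:], [w[-1]]))
        u -= step * ((u - d) + lam * grad_tv)
    return u

rng = np.random.default_rng(1)
true = np.concatenate([np.zeros(50), np.ones(50)])   # sharp "intrusion" edge
noisy = true + 0.3 * rng.normal(size=100)
rec = tv_denoise(noisy)
```

The TV term penalizes total variation rather than squared gradients, which is what preserves the sharp edge of the intrusion instead of smearing it.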
KLU2 Direct Linear Solver Package
Energy Science and Technology Software Center
2012-01-04
KLU2 is a direct sparse solver for solving unsymmetric linear systems. It is related to the existing KLU solver (in the Amesos package, and also available as a stand-alone package from the University of Florida), but provides template support for scalar and ordinal types. It uses a left-looking LU factorization method.
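A minimal dense sketch of the left-looking idea (KLU2 itself works on sparse matrices with pivoting and block-triangular preordering, none of which is shown here): column j of the factors is computed using only the previously finished columns 0..j-1.

```python
import numpy as np

def left_looking_lu(A):
    # Dense, no-pivoting left-looking LU: A = L @ U with unit-diagonal L.
    n = A.shape[0]
    L = np.eye(n)
    U = np.zeros_like(A, dtype=float)
    for j in range(n):
        # Solve the unit-lower-triangular system L[:j,:j] y = A[:j,j].
        y = np.linalg.solve(L[:j, :j], A[:j, j]) if j else np.empty(0)
        U[:j, j] = y
        U[j, j] = A[j, j] - L[j, :j] @ y
        # Fill the subdiagonal of column j of L.
        L[j+1:, j] = (A[j+1:, j] - L[j+1:, :j] @ y) / U[j, j]
    return L, U

A = np.array([[4., 1., 0.], [1., 4., 1.], [0., 1., 4.]])
L, U = left_looking_lu(A)
```

In the sparse setting this column-at-a-time access pattern is what lets the factorization exploit the sparsity of each column independently.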
Finite Element Interface to Linear Solvers
Energy Science and Technology Software Center
2005-03-18
Sparse systems of linear equations arise in many engineering applications, including finite elements, finite volumes, and others. The solution of linear systems is often the most computationally intensive portion of the application. Depending on the complexity of problems addressed by the application, there may be no single solver capable of solving all of the linear systems that arise. This motivates the desire to switch an application from one solver library to another, depending on the problem being solved. The interfaces provided by solver libraries differ greatly, making it difficult to switch an application code from one library to another. The amount of library-specific code in an application can be greatly reduced by having an abstraction layer between solver libraries and the application, putting a common "face" on various solver libraries. One such abstraction layer is the Finite Element Interface to Linear Solvers (FEI), which has seen significant use by finite element applications at Sandia National Laboratories and Lawrence Livermore National Laboratory.
Physics Case for the International Linear Collider
Fujii, Keisuke; Grojean, Christophe; Peskin, Michael E.; Barklow, Tim; Gao, Yuanning; Kanemura, Shinya; Kim, Hyungdo; List, Jenny; Nojiri, Mihoko; Perelstein, Maxim; Poeschl, Roman; Reuter, Juergen; Simon, Frank; Tanabe, Tomohiko; Yu, Jaehoon; Wells, James D.; Murayama, Hitoshi; Yamamoto, Hitoshi; /Tohoku U.
2015-06-23
We summarize the physics case for the International Linear Collider (ILC). We review the key motivations for the ILC presented in the literature, updating the projected measurement uncertainties for the ILC experiments in accord with the expected schedule of operation of the accelerator and the results of the most recent simulation studies.
Notes on beam dynamics in linear accelerators
Gluckstern, R.L.
1980-09-01
A collection of notes on various aspects of beam dynamics in linear accelerators, produced by the author during five years (1975 to 1980) of consultation for the LASL Accelerator Technology (AT) Division and Medium-Energy Physics (MP) Division, is presented.
A microcomputer-controlled linear heater
Schuck, V.; Rahimi, S.
1991-10-01
In this note the circuits and principles of operation of a relatively simple and inexpensive linear temperature ramp generator are described. The upper-temperature limit and the heating rate are controlled by an Apple II microcomputer. The temperature versus time is displayed on the screen and may be plotted by an x-y plotter.
Linear and non-linear forced response of a conical, ducted, laminar premixed flame
Karimi, Nader; Brear, Michael J.; Jin, Seong-Ho; Monty, Jason P. [Department of Mechanical Engineering, University of Melbourne, Parkville, 3010 Vic. (Australia)
2009-11-15
This paper presents an experimental study on the dynamics of a ducted, conical, laminar premixed flame subjected to acoustic excitation of varying amplitudes. The flame transfer function is measured over a range of forcing frequencies and equivalence ratios. In keeping with previous works, the measured flame transfer function is in good agreement with that predicted by linear kinematic theory at low amplitudes of acoustic velocity excitation. However, a systematic departure from linear behaviour is observed as the amplitude of the velocity forcing upstream of the flame increases. This non-linearity is mostly in the phase of the transfer function and manifests itself as a roughly constant phase at high forcing amplitude. Nonetheless, as predicted by non-linear kinematic arguments, the response always remains close to linear at low forcing frequencies, regardless of the forcing amplitude. The origin of this phase behaviour is then sought through optical data post-processing. (author)
Linac Alignment Algorithm: Analysis on 1-to-1 Steering
Sun, Yipeng; Adolphsen, Chris; /SLAC
2011-08-19
In a linear accelerator, it is important to achieve a good alignment between all of its components (such as quadrupoles, RF cavities, beam position monitors, etc.), in order to better preserve the beam quality during acceleration. After the survey of the main linac components, there are several beam-based alignment (BBA) techniques to be applied, to further optimize the beam trajectory and calculate the corresponding steering magnet strengths. Among these techniques the simplest and most straightforward one is the one-to-one (1-to-1) steering technique, which steers the beam from quad center to center, and removes the betatron oscillation from quad focusing. For a future linear collider such as the International Linear Collider (ILC), the initial beam emittance is very small in the vertical plane (flat beam with {gamma}{epsilon}{sub y} = 20-40 nm), which means the alignment requirement is very tight. In this note, we evaluate the emittance growth with the one-to-one correction algorithm employed, both analytically and numerically. Then the ILC main linac accelerator is taken as an example to compare the vertical emittance growth after 1-to-1 steering, both from analytical formulae and multi-particle tracking simulation. It is demonstrated that the estimated emittance growth from the derived formulae agrees well with the results from numerical simulation, with and without acceleration, respectively.
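As a hedged toy model of the correction step (the response matrix and initial orbit below are random stand-ins, not an ILC lattice), 1-to-1 steering amounts to solving for corrector kicks that zero the BPM readings:

```python
import numpy as np

# BPM readings before correction: x = x_misaligned; after applying
# corrector kicks theta they become x + R @ theta, where R is the
# orbit response matrix (here an invented square, full-rank example).
rng = np.random.default_rng(2)
n_bpm, n_corr = 6, 6
R = rng.normal(size=(n_bpm, n_corr))       # made-up response matrix
x_misaligned = rng.normal(size=n_bpm)      # made-up uncorrected orbit
theta, *_ = np.linalg.lstsq(R, -x_misaligned, rcond=None)
x_corrected = x_misaligned + R @ theta
```

With equal numbers of BPMs and correctors the least-squares solve zeroes the readings exactly; the residual emittance growth analyzed in the note comes from BPM-to-quad offsets that this idealization omits.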
Implementing wide baseline matching algorithms on a graphics processing unit.
Rothganger, Fredrick H.; Larson, Kurt W.; Gonzales, Antonio Ignacio; Myers, Daniel S.
2007-10-01
Wide baseline matching is the state of the art for object recognition and image registration problems in computer vision. Though effective, the computational expense of these algorithms limits their application to many real-world problems. The performance of wide baseline matching algorithms may be improved by using a graphics processing unit as a fast multithreaded co-processor. In this paper, we present an implementation of the difference of Gaussian feature extractor, based on the CUDA system of GPU programming developed by NVIDIA, and implemented on their hardware. For a 2000x2000 pixel image, the GPU-based method executes nearly thirteen times faster than a comparable CPU-based method, with no significant loss of accuracy.
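A CPU reference sketch of the difference-of-Gaussians filter at the core of the extractor (the paper's contribution is the CUDA port, which is not reproduced here; the two blur scales below are illustrative choices):

```python
import numpy as np

def gaussian_kernel(sigma, radius):
    x = np.arange(-radius, radius + 1, dtype=float)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def difference_of_gaussians(img, s1=1.0, s2=1.6, radius=8):
    # Separable blur at two scales; DoG is the difference of the results.
    def blur(im, sigma):
        k = gaussian_kernel(sigma, radius)
        im = np.apply_along_axis(lambda r: np.convolve(r, k, mode='same'), 1, im)
        return np.apply_along_axis(lambda c: np.convolve(c, k, mode='same'), 0, im)
    return blur(img, s1) - blur(img, s2)

img = np.zeros((32, 32))
img[16, 16] = 1.0                     # impulse input shows the DoG shape
resp = difference_of_gaussians(img)
```

The separable row-then-column convolutions are exactly the kind of data-parallel work that maps well onto a GPU, which is why this stage benefits so much from the port.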
Phase and Radial Motion in Ion Linear Accelerators
Energy Science and Technology Software Center
2007-03-29
Parmila is an ion-linac particle-dynamics code. The name comes from the phrase, "Phase and Radial Motion in Ion Linear Accelerators." The code generates DTL, CCDTL, and CCL accelerating cells and, using a "drift-kick" method, transforms the beam, represented by a collection of particles, through the linac. The code includes 2-D and 3-D space-charge calculations. Parmila uses data generated by the Poisson Superfish postprocessor SEC. This version of Parmila was written by Harunori Takeda and was supported through Feb. 2006 by James H. Billen. Setup installs executable programs Parmila.EXE, Lingraf.EXE, and ReadPMI.EXE in the LANL directory. The directory LANL\Examples\Parmila contains several subdirectories with sample files for Parmila.
Slattery, Stuart R.
2015-12-02
In this study we analyze and extend mesh-free algorithms for three-dimensional data transfer problems in partitioned multiphysics simulations. We first provide a direct comparison between a mesh-based weighted residual method using the common-refinement scheme and two mesh-free algorithms leveraging compactly supported radial basis functions: one using a spline interpolation and one using a moving least square reconstruction. Through the comparison we assess both the conservation and accuracy of the data transfer obtained from each of the methods. We do so for a varying set of geometries with and without curvature and sharp features and for functions with and without smoothness and with varying gradients. Our results show that the mesh-based and mesh-free algorithms are complementary with cases where each was demonstrated to perform better than the other. We then focus on the mesh-free methods by developing a set of algorithms to parallelize them based on sparse linear algebra techniques. This includes a discussion of fast parallel radius searching in point clouds and restructuring the interpolation algorithms to leverage data structures and linear algebra services designed for large distributed computing environments. The scalability of our new algorithms is demonstrated on a leadership class computing facility using a set of basic scaling studies. Finally, these scaling studies show that for problems with reasonable load balance, our new algorithms for both spline interpolation and moving least square reconstruction demonstrate both strong and weak scalability using more than 100,000 MPI processes with billions of degrees of freedom in the data transfer operation.
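A minimal sketch of the spline-interpolation variant using a compactly supported Wendland C2 kernel (the point sets, field, and support radius below are made up; the parallel radius search and distributed solve discussed above are omitted):

```python
import numpy as np

def wendland_c2(r, support=1.0):
    # Wendland C2 RBF: (1-r)^4 (4r+1) for r < 1, zero outside its support.
    r = r / support
    return np.where(r < 1.0, (1 - r)**4 * (4 * r + 1), 0.0)

rng = np.random.default_rng(3)
src = rng.uniform(size=(40, 2))                 # source-side cloud points
vals = np.sin(3 * src[:, 0]) * src[:, 1]        # field to be transferred
dist = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)
K = wendland_c2(dist, support=0.5)
coef = np.linalg.solve(K, vals)                 # interpolation weights

def transfer(targets):
    # Evaluate the interpolant at the target-side cloud points.
    d = np.linalg.norm(targets[:, None, :] - src[None, :, :], axis=-1)
    return wendland_c2(d, support=0.5) @ coef
```

The compact support is what makes the kernel matrix sparse in practice, which is the property the sparse-linear-algebra parallelization in the study exploits.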
SuperLU{_}DIST: A scalable distributed-memory sparse direct solver for unsymmetric linear systems
Li, Xiaoye S.; Demmel, James W.
2002-03-27
In this paper, we present the main algorithmic features in the software package SuperLU{_}DIST, a distributed-memory sparse direct solver for large sets of linear equations. We give in detail our parallelization strategies, with focus on scalability issues, and demonstrate the parallel performance and scalability on current machines. The solver is based on sparse Gaussian elimination, with an innovative static pivoting strategy proposed earlier by the authors. The main advantage of static pivoting over classical partial pivoting is that it permits a priori determination of data structures and communication pattern for sparse Gaussian elimination, which makes it more scalable on distributed memory machines. Based on this a priori knowledge, we designed highly parallel and scalable algorithms for both LU decomposition and triangular solve and we show that they are suitable for large-scale distributed memory machines.
Close, E.; Fong, C; Lee, E.
1991-10-30
Although this report is called a program document, it is not simply a user's guide to running HILDA nor is it a programmer's guide to maintaining and updating HILDA. It is a guide to HILDA as a program and as a model for designing and costing a heavy ion fusion (HIF) driver. HILDA represents the work and ideas of many people, as does the model upon which it is based. The project was initiated by Denis Keefe, the leader of the LBL HIFAR project. He suggested the name HILDA, which is an acronym for Heavy Ion Linac Driver Analysis. The conventions and style of development of the HILDA program are based on the original goals. It was desired to have a computer program that could estimate the cost and find an optimal design for heavy ion fusion induction linac drivers. This program should model near-term machines as well as full-scale drivers. The code objectives were: (1) a relatively detailed, but easily understood model; (2) modular, structured code to facilitate making changes in the model, the analysis reports, and the user interface; (3) documentation that defines and explains the system model, cost algorithm, program structure, and generated reports. With this tool a knowledgeable user would be able to examine an ensemble of drivers and find the driver that is minimum in cost, subject to stated constraints. This document contains a report section that describes how to use HILDA, some simple illustrative examples, and descriptions of the models used for the beam dynamics and component design. Associated with this document, as files on floppy disks, are the complete HILDA source code, much information that is needed to maintain and update HILDA, and some complete examples. These examples illustrate that the present version of HILDA can generate much useful information about the design of a HIF driver. They also serve as guides to what features would be useful to include in future updates. The HPD represents the current state of development of this project.
Non-Linear Seismic Soil Structure Interaction (SSI) Method for Developing Non-Linear Seismic SSI Analysis Techniques
Coleman, Justin, P.E.
2011-10-25
Enhanced dielectric-wall linear accelerator
Sampayan, S.E.; Caporaso, G.J.; Kirbie, H.C.
1998-09-22
A dielectric-wall linear accelerator is enhanced by a high-voltage, fast e-time switch that includes a pair of electrodes between which are laminated alternating layers of isolated conductors and insulators. A high voltage is placed between the electrodes sufficient to stress the voltage breakdown of the insulator on command. A light trigger, such as a laser, is focused along at least one line along the edge surface of the laminated alternating layers of isolated conductors and insulators extending between the electrodes. The laser is energized to initiate a surface breakdown by a fluence of photons, thus causing the electrical switch to close very promptly. Such insulators and lasers are incorporated in a dielectric wall linear accelerator with Blumlein modules, and phasing is controlled by adjusting the length of fiber optic cables that carry the laser light to the insulator surface. 6 figs.
Noise in phase-preserving linear amplifiers
Pandey, Shashank; Jiang, Zhang; Combes, Joshua; Caves, Carlton M.
2014-12-04
The purpose of a phase-preserving linear amplifier is to make a small signal larger, so that it can be perceived by instruments incapable of resolving the original signal, while sacrificing as little as possible in signal-to-noise. Quantum mechanics limits how well this can be done: the noise added by the amplifier, referred to the input, must be at least half a quantum at the operating frequency. This well-known quantum limit only constrains the second moments of the added noise. Here we provide the quantum constraints on the entire distribution of added noise: any phase-preserving linear amplifier is equivalent to a parametric amplifier with a physical state σ for the ancillary mode; σ determines the properties of the added noise.
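The half-quantum limit quoted above follows from commutator preservation; in standard notation (a sketch of the well-known second-moment argument, not the paper's full distributional result):

```latex
\[
\hat a_{\mathrm{out}} = g\,\hat a_{\mathrm{in}} + \hat{\mathcal{F}},
\qquad
[\hat a_{\mathrm{out}},\hat a_{\mathrm{out}}^\dagger]=1
\;\Longrightarrow\;
[\hat{\mathcal{F}},\hat{\mathcal{F}}^\dagger] = 1-|g|^{2},
\]
\[
A \;\equiv\;
\frac{\langle \hat{\mathcal{F}}^\dagger\hat{\mathcal{F}}
      + \hat{\mathcal{F}}\hat{\mathcal{F}}^\dagger\rangle}{2|g|^{2}}
\;\ge\; \frac{\bigl|1-|g|^{-2}\bigr|}{2}
\;\xrightarrow{\;|g|\to\infty\;}\; \frac{1}{2}.
\]
```

Here \(\hat{\mathcal{F}}\) is the added-noise operator and \(A\) is the symmetrized added noise referred to the input, approaching half a quantum at large gain.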
Tilted panel linear echelon solar collector
Appeldorn, R.H.; Vanderwerf, D.F.
1989-01-31
A solar concentrator is described for directing incident solar radiation to a linear focus, comprising: a planar base surface positioned at an angle φ (greater than 0° but less than 90°) with respect to a direction normal to the incident solar radiation; a plurality of planar reflective elements set along the planar base surface, each positioned at an angle α′ with respect to the planar base surface, which varies for each of the planar reflective elements so as to reflect the incident solar radiation to the linear focus; the plurality of planar reflective elements being separated from each other by substantially planar riser elements, the riser elements being substantially normal to the planar base surface, and each of the planar reflective elements making an angle α with respect to the direction normal to the incident solar radiation.
Enhanced dielectric-wall linear accelerator
Sampayan, Stephen E.; Caporaso, George J.; Kirbie, Hugh C.
1998-01-01
A dielectric-wall linear accelerator is enhanced by a high-voltage, fast e-time switch that includes a pair of electrodes between which are laminated alternating layers of isolated conductors and insulators. A high voltage is placed between the electrodes sufficient to stress the voltage breakdown of the insulator on command. A light trigger, such as a laser, is focused along at least one line along the edge surface of the laminated alternating layers of isolated conductors and insulators extending between the electrodes. The laser is energized to initiate a surface breakdown by a fluence of photons, thus causing the electrical switch to close very promptly. Such insulators and lasers are incorporated in a dielectric wall linear accelerator with Blumlein modules, and phasing is controlled by adjusting the length of fiber optic cables that carry the laser light to the insulator surface.
Radio frequency quadrupole resonator for linear accelerator
Moretti, Alfred
1985-01-01
An RFQ resonator for a linear accelerator that has a reduced level of interfering modes and produces a quadrupole mode for focusing, bunching, and accelerating beams of heavy charged particles. The construction is characterized by four elongated resonating rods within a cylinder, the rods being alternately shorted and open electrically to the shell at common ends of the rods to provide an LC parallel resonant circuit when activated by a magnetic field transverse to the longitudinal axis.
High gradient accelerators for linear light sources
Barletta, W.A.
1988-09-26
Ultra-high gradient radio frequency linacs powered by relativistic klystrons appear to be able to provide compact sources of radiation at XUV and soft x-ray wavelengths with a duration of 1 picosecond or less. This paper provides a tutorial review of the physics applicable to scaling the present experience of the accelerator community to the regime applicable to compact linear light sources. 22 refs., 11 figs., 21 tabs.
The Next Linear Collider: NLC2001
D. Burke et al.
2002-01-14
Recent studies in elementary particle physics have made the need for an e{sup +}e{sup -} linear collider able to reach energies of 500 GeV and above with high luminosity more compelling than ever [1]. Observations and measurements completed in the last five years at the SLC (SLAC), LEP (CERN), and the Tevatron (FNAL) can be explained only by the existence of at least one particle or interaction that has not yet been directly observed in experiment. The Higgs boson of the Standard Model could be that particle. The data point strongly to a mass for the Higgs boson that is just beyond the reach of existing colliders. This brings great urgency and excitement to the potential for discovery at the upgraded Tevatron early in this decade, and almost assures that later experiments at the LHC will find new physics. But the next generation of experiments to be mounted by the world-wide particle physics community must not only find this new physics, they must find out what it is. These experiments must also define the next important threshold in energy. The need is to understand physics at the TeV energy scale as well as the physics at the 100-GeV energy scale is now understood. This will require both the LHC and a companion linear electron-positron collider. A first Zeroth-Order Design Report (ZDR) [2] for a second-generation electron-positron linear collider, the Next Linear Collider (NLC), was published five years ago. The NLC design is based on a high-frequency room-temperature rf accelerator. Its goal is exploration of elementary particle physics at the TeV center-of-mass energy, while learning how to design and build colliders at still higher energies. Many advances in accelerator technologies and improvements in the design of the NLC have been made since 1996. This Report is a brief update of the ZDR.
Communications circuit including a linear quadratic estimator
Ferguson, Dennis D.
2015-07-07
A circuit includes a linear quadratic estimator (LQE) configured to receive a plurality of measurements of a signal. The LQE is configured to weight the measurements based on their respective uncertainties to produce weighted averages. The circuit further includes a controller coupled to the LQE and configured to selectively adjust at least one data link parameter associated with a communication channel in response to receiving the weighted averages.
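A hedged sketch of the uncertainty weighting the claim describes: inverse-variance weighting of repeated measurements (the values and variances below are made up; a full LQE/Kalman filter would also propagate state dynamics between measurements):

```python
def weighted_average(measurements, variances):
    # Inverse-variance weights: more certain measurements count more.
    weights = [1.0 / v for v in variances]
    return sum(w * m for w, m in zip(weights, measurements)) / sum(weights)

# A precise reading (variance 0.1) should dominate a noisy one (variance 10).
est = weighted_average([1.0, 5.0], [0.1, 10.0])
```

The resulting estimate sits close to the low-variance measurement, which is the behavior that makes the weighted averages a useful input for the link-parameter controller.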
Linearity Testing of Photovoltaic Cells (Poster)
Emery, K.; Winter, S.; Pinegar, S.; Nalley, D.
2006-05-01
International PV standards require that the short-circuit current or response of the reference device be linear with total irradiance. Accredited calibration laboratories cannot assume that their reference device is linear unless another accredited laboratory has performed the measurement. The NREL PV performance laboratory is ISO 17025 accredited for primary reference cell, secondary reference cell, and secondary module calibrations. Limited labor resources necessitated the development of a technique to determine linearity without requiring significant labor or technical skill. The two-lamp method is insensitive to the spectrum of the light or spatial nonuniformity changing as the irradiance is varied. It does assume that the temperature does not change with irradiance and that the light-source spectrum resembles the solar spectrum. This requirement arises only because nonlinear mechanisms in the photo-current are wavelength dependent. A laser, for example, may show the same device as linear or very nonlinear with irradiance, depending on the wavelength. The two-lamp method assumes that the lamp intensities when individually irradiating the sample are the same as when both lamps irradiate the sample. The presence of room light only limits the lowest irradiance that can be evaluated. Unlike other methods, the two-lamp method does not allow the current to be corrected for nonlinear effects. The most appealing aspect of the two-lamp method, when compared with other methods for a high-volume calibration laboratory, is that it is fast, does not require operator intervention to change the irradiances, and is difficult for the operator to make mistakes that would affect the outcome.
Linear optics measurements and corrections using an AC dipole in RHIC
Wang, G.; Bai, M.; Yang, L.
2010-05-23
We report recent experimental results on linear optics measurements and corrections using an AC dipole. In the RHIC 2009 run, the SVD correction algorithm was tested at injection energy, both for identifying artificial gradient errors and for correcting them using the trim quadrupoles. The measured phase beatings were reduced by 30% and 40%, respectively, in two dedicated experiments. In the RHIC 2010 run, the AC dipole was used to measure {beta}* and the chromatic {beta} function. For the 0.65 m {beta}* lattice, we observed a factor of 3 discrepancy between the model and the measured chromatic {beta} function in the yellow ring.
Linearity Testing of Photovoltaic Cells: Preprint
Emery, K.; Winter, S.; Pinegar, S.; Nalley, D.
2006-05-01
Photovoltaic devices are rated in terms of their peak power with respect to a specific spectrum, total irradiance, and temperature. To rate photovoltaic devices, a reference detector is required whose response is linear with total irradiance. This paper describes a procedure to determine the linearity of the short-circuit current (Isc) versus the total irradiance (Etot) by illuminating a reference cell with two lamps. A device is linear if the current measured with both lamps illuminating the cell is the same as the sum of the currents with each lamp illuminating the cell. The two-lamp method is insensitive to the light spectra or spatial nonuniformity changing with irradiance. The two-lamp method is rapid, easy to implement, and does not require operator intervention to change the irradiances. The presence of room light only limits the lowest irradiance that can be evaluated. Unlike other methods, the two-lamp method does not allow the current to be corrected for nonlinear effects.
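The acceptance criterion described above can be stated in a few lines; the 1% tolerance below is an assumed value for illustration, not one taken from the paper:

```python
def two_lamp_linearity(isc_lamp1, isc_lamp2, isc_both, tol=0.01):
    # A device passes if the current under both lamps equals the sum of
    # the single-lamp currents, within a relative tolerance.
    expected = isc_lamp1 + isc_lamp2
    return abs(isc_both - expected) <= tol * expected

# Linear cell: currents add. Saturating cell: combined reading falls short.
linear_ok = two_lamp_linearity(0.50, 0.50, 1.00)
saturating_ok = two_lamp_linearity(0.50, 0.50, 0.90)
```

Repeating the check at several lamp intensities maps out the irradiance range over which the reference device can be treated as linear.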
Towards a Future Linear Collider and The Linear Collider Studies at CERN
None
2016-07-12
During the week 18-22 October, more than 400 physicists will meet at CERN and in the CICG (International Conference Centre Geneva) to review the global progress towards a future linear collider. The 2010 International Workshop on Linear Colliders will study the physics, detectors and accelerator complex of a linear collider covering both the CLIC and ILC options. Among the topics presented and discussed will be the progress towards the CLIC Conceptual Design Report in 2011, the ILC Technical Design Report in 2012, physics and detector studies linked to these reports, and an increasing number of common working group activities. The seminar will give an overview of these topics and also CERN's linear collider studies, focusing on current activities and initial plans for the period 2011-16. n.b: The Council Chamber is also reserved for this colloquium with a live transmission from the Main Auditorium.
Directives, Delegations, and Other Requirements [Office of Management (MA)]
1992-09-04
To establish the policies, procedures, and specific responsibilities for the Department of Energy (DOE) Counterintelligence (CI) Program. This directive does not cancel any other directive.
Directives, Delegations, and Other Requirements [Office of Management (MA)]
1997-05-21
This chapter addresses plans for the acquisition and installation of operating environment hardware and software and design of a training program.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
their potential and pursue opportunities in science, technology, engineering and mathematics. Through Expanding Your Horizon (EYH) Network programs, we provide STEM role models...
Directives, Delegations, and Other Requirements [Office of Management (MA)]
2004-12-10
The Order establishes Counterintelligence Program requirements and responsibilities for the Department of Energy, including the National Nuclear Security Administration. Supersedes DOE 5670.3.
Solar Position Algorithm (SPA) - Energy Innovation Portal
National Renewable Energy Laboratory. This algorithm calculates the solar zenith and azimuth angles in the period from the year -2000 to 6000, with uncertainties of +/- 0.0003 degrees, based on the date, time, and location on Earth. (Reference: Reda, I.; Andreas, A., Solar Position Algorithm for Solar Radiation
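For orientation only, the solar geometry the SPA computes can be sketched with textbook formulas (Cooper's declination approximation and a mean-sun hour angle). This is a rough illustration with errors of a degree or more, not the NREL SPA itself, which uses a far more detailed ephemeris to reach its quoted +/- 0.0003 degree accuracy; the function name and conventions below are our own.

```python
import math

def solar_zenith_azimuth(day_of_year, hour_utc, lat_deg, lon_deg):
    """Approximate solar zenith and azimuth in degrees (rough sketch)."""
    # Solar declination (Cooper's approximation), in degrees
    decl = 23.45 * math.sin(math.radians(360.0 * (284 + day_of_year) / 365.0))
    # Hour angle: 15 degrees per hour from local solar noon (mean sun)
    solar_time = hour_utc + lon_deg / 15.0
    hour_angle = 15.0 * (solar_time - 12.0)
    phi, d, h = map(math.radians, (lat_deg, decl, hour_angle))
    cos_zen = (math.sin(phi) * math.sin(d)
               + math.cos(phi) * math.cos(d) * math.cos(h))
    zenith = math.degrees(math.acos(max(-1.0, min(1.0, cos_zen))))
    # Azimuth measured clockwise from north (conventions vary between texts)
    az = math.degrees(math.atan2(
        math.sin(h),
        math.cos(h) * math.sin(phi) - math.tan(d) * math.cos(phi)))
    azimuth = (az + 180.0) % 360.0
    return zenith, azimuth
```

Near an equinox at the equator and local solar noon, the zenith angle should be close to zero, which makes a quick sanity check possible.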
Student's algorithm solves real-world problem
Supercomputing Challenge: students learn how to use powerful computers to analyze, model, and solve real-world problems. April 3, 2012. Jordon Medlock of Albuquerque's Manzano High School won the 2012 Lab-sponsored Supercomputing Challenge by creating a computer algorithm that automates the process of
Generation of attributes for learning algorithms
Hu, Yuh-Jyh; Kibler, D.
1996-12-31
Inductive algorithms rely strongly on their representational biases. Constructive induction can mitigate representational inadequacies. This paper introduces the notion of a relative gain measure and describes a new constructive induction algorithm (GALA) which is independent of the learning algorithm. Unlike most previous research on constructive induction, our methods are designed as a preprocessing step before standard machine learning algorithms are applied. We present results that demonstrate the effectiveness of GALA on artificial and real domains for several learners: C4.5, CN2, perceptron and backpropagation.
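The core idea of constructive induction, scoring a constructed feature against the class labels so that useful feature combinations can be handed to a downstream learner, can be illustrated with plain information gain. Note this is the standard gain measure, not GALA's relative gain measure, and the toy data below is hypothetical:

```python
import math
from collections import Counter

def entropy(labels):
    """Shannon entropy of a label sequence, in bits."""
    n = len(labels)
    return -sum((c / n) * math.log2(c / n) for c in Counter(labels).values())

def information_gain(feature, labels):
    """Information gain of a discrete feature with respect to the labels."""
    n = len(labels)
    split = {}
    for f, y in zip(feature, labels):
        split.setdefault(f, []).append(y)
    remainder = sum(len(ys) / n * entropy(ys) for ys in split.values())
    return entropy(labels) - remainder

# A constructed feature (x1 AND x2) can score higher than either raw input,
# which is why a preprocessing step that builds such features can help.
x1 = [0, 0, 1, 1]
x2 = [0, 1, 0, 1]
y = [0, 0, 0, 1]                       # target concept: y = x1 AND x2
both = [a & b for a, b in zip(x1, x2)]  # the constructed feature
```

Here `information_gain(both, y)` exceeds `information_gain(x1, y)`, mirroring how a constructed feature can repair a representational inadequacy of the raw attributes.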
Java implementation of Class Association Rule algorithms
Energy Science and Technology Software Center
2007-08-30
Java implementation of three Class Association Rule mining algorithms: NETCAR, CARapriori, and clustering-based rule mining. NETCAR is a novel algorithm developed by Makio Tamura; it is discussed in paper UCRL-JRNL-232466-DRAFT and is to be published in a peer-reviewed scientific journal. The software is used to extract combinations of genes relevant to a phenotype from a phylogenetic profile and a phenotype profile. The phylogenetic profile is represented by a binary matrix and a phenotype profile is represented by a binary vector. The present application of this software is in genome analysis; however, it could be applied more generally.
A High-Order Finite-Volume Algorithm for Fokker-Planck Collisions in Magnetized Plasmas
Xiong, Z; Cohen, R H; Rognlien, T D; Xu, X Q
2007-04-18
A high-order finite volume algorithm is developed for the Fokker-Planck Operator (FPO) describing Coulomb collisions in strongly magnetized plasmas. The algorithm is based on a general fourth-order reconstruction scheme for an unstructured grid in the velocity space spanned by parallel velocity and magnetic moment. The method provides density conservation and high-order-accurate evaluation of the FPO independent of the choice of the velocity coordinates. As an example, a linearized FPO in constant-of-motion coordinates, i.e. the total energy and the magnetic moment, is developed using the present algorithm combined with a cut-cell merging procedure. Numerical tests include the Spitzer thermalization problem and the return to isotropy for distributions initialized with velocity space loss cones. Utilization of the method for a nonlinear FPO is straightforward but requires evaluation of the Rosenbluth potentials.
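The flavor of a fourth-order finite-volume reconstruction can be shown on the simplest possible setting: a uniform 1-D grid, where interior face values are recovered from cell averages with the standard central stencil. This is a generic textbook stencil, not the paper's fourth-order scheme for an unstructured 2-D velocity-space grid:

```python
import numpy as np

def face_values_4th(ubar):
    """Fourth-order face reconstruction from cell averages on a uniform
    1-D grid (interior faces only):
        f_{i+1/2} = (-ubar_{i-1} + 7 ubar_i + 7 ubar_{i+1} - ubar_{i+2}) / 12
    Exact for polynomials up to cubic.
    """
    u = np.asarray(ubar, dtype=float)
    return (-u[:-3] + 7.0 * u[1:-2] + 7.0 * u[2:-1] - u[3:]) / 12.0
```

A quick check: feeding in the exact cell averages of u(x) = x^2 on unit cells [i, i+1] (which are i^2 + i + 1/3) reproduces the exact face values x^2 at the interior faces, confirming the cubic exactness that underlies the method's fourth-order accuracy.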
Performance Models for the Spike Banded Linear System Solver
Manguoglu, Murat; Saied, Faisal; Sameh, Ahmed; Grama, Ananth
2011-01-01
With the availability of large-scale parallel platforms comprised of tens of thousands of processors and beyond, there is significant impetus for the development of scalable parallel sparse linear system solvers and preconditioners. An integral part of this design process is the development of performance models capable of predicting performance and providing accurate cost models for the solvers and preconditioners. There has been some work in the past on characterizing the performance of the iterative solvers themselves. In this paper, we investigate the problem of characterizing the performance and scalability of banded preconditioners. Recent work has demonstrated the superior convergence properties and robustness of banded preconditioners compared to state-of-the-art ILU-family preconditioners as well as algebraic multigrid preconditioners. Furthermore, when used in conjunction with efficient banded solvers, banded preconditioners are capable of significantly faster time-to-solution. Our banded solver, the Truncated Spike algorithm, is specifically designed for parallel performance and tolerance to deep memory hierarchies. Its regular structure is also highly amenable to accurate performance characterization. Using these characteristics, we derive the following results in this paper: (i) we develop parallel formulations of the Truncated Spike solver, (ii) we develop a highly accurate pseudo-analytical parallel performance model for our solver, and (iii) we show the excellent prediction capabilities of our model, based on which we argue the high scalability of our solver. Our pseudo-analytical performance model is based on analytical performance characterization of each phase of our solver. These analytical models are then parameterized using actual runtime information on target platforms. An important consequence of our performance models is that they reveal underlying performance bottlenecks in both serial and parallel formulations. All of our results are validated.
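As a concrete instance of the problem class, a banded system can be solved sequentially with SciPy's LAPACK-backed banded solver. This illustrates only the banded-storage layout and the solve itself, not the Truncated Spike algorithm, whose partitioned parallel formulation is the subject of the paper:

```python
import numpy as np
from scipy.linalg import solve_banded

# Tridiagonal system in LAPACK-style banded storage:
# row 0 holds the superdiagonal, row 1 the diagonal, row 2 the subdiagonal.
n = 6
main = 4.0 * np.ones(n)
off = np.ones(n - 1)
ab = np.zeros((3, n))
ab[0, 1:] = off        # superdiagonal
ab[1, :] = main        # main diagonal
ab[2, :-1] = off       # subdiagonal
b = np.arange(1.0, n + 1)

# (l, u) = (1, 1): one sub- and one superdiagonal
x = solve_banded((1, 1), ab, b)

# Verify against the equivalent dense system
A = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)
assert np.allclose(A @ x, b)
```

Banded storage keeps only the nonzero diagonals, which is what makes both the solve and its performance modeling so regular compared to general sparse factorizations.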
Student Internship Programs Program Description
The objective of the Laboratory's student internship programs is to provide students with opportunities for meaningful hands-on experience supporting educational progress in their selected scientific or professional fields. The most significant impact of these internship experiences is observed in the intellectual growth experienced by the participants. Student interns are able to appreciate the practical value of their education efforts in their
An overview of SuperLU: Algorithms, implementation, and user interface
Li, Xiaoye S.
2003-09-30
We give an overview of the algorithms, design philosophy, and implementation techniques in the software SuperLU, for solving sparse unsymmetric linear systems. In particular, we highlight the differences between the sequential SuperLU (including its multithreaded extension) and parallel SuperLU_DIST. These include the numerical pivoting strategy, the ordering strategy for preserving sparsity, the order in which the updating tasks are performed, the numerical kernel, and the parallelization strategy. Because of the scalability concern, the parallel code is drastically different from the sequential one. We describe the user interfaces of the libraries, and illustrate how to use the libraries most efficiently depending on some matrix characteristics. Finally, we give some examples of how the solver has been used in large-scale scientific applications, and the performance.
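Sequential SuperLU is the engine behind SciPy's sparse LU factorization, so its typical usage pattern (factor once, then solve for one or more right-hand sides) can be sketched directly; the pivoting and ordering defaults here are SciPy's, not necessarily the choices discussed above:

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# Build a sparse unsymmetric tridiagonal matrix in CSC format
# (splu requires CSC for efficient column-oriented factorization).
n = 100
A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")

lu = splu(A)          # sparse LU factorization via SuperLU
b = np.ones(n)
x = lu.solve(b)       # reuse the factorization for each right-hand side

assert np.allclose(A @ x, b)
```

Separating the factorization from the solve is the key API design point: the expensive symbolic and numeric factorization is amortized over many solves.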
Microfabricated linear Paul-Straubel ion trap
Mangan, Michael A.; Blain, Matthew G.; Tigges, Chris P.; Linker, Kevin L.
2011-04-19
An array of microfabricated linear Paul-Straubel ion traps can be used for mass spectrometric applications. Each ion trap comprises two parallel inner RF electrodes and two parallel outer DC control electrodes symmetric about a central trap axis and suspended over an opening in a substrate. Neighboring ion traps in the array can share a common outer DC control electrode. The ions are confined transversely by an RF quadrupole electric field potential well on the ion trap axis. The array can trap a wide variety of ions.
Rf power sources for linear colliders
Allen, M.A.; Callin, R.S.; Caryotakis, G.; Deruyter, H.; Eppley, K.R.; Fant, K.S.; Farkas, Z.D.; Fowkes, W.R.; Hoag, H.A.; Feinstein, J.; Ko, K.; Koontz, R.F.; Kroll, N.M.; Lavine, T.L.; Lee, T.G.; Loew, G.A.; Miller, R.H.; Nelson, E.M.; Ruth, R.D.; Vlieks, A.E.; Wang, J.W.; Wilson, P.B. ); Boyd, J.K.; Houk, T.; Ryne, R.D.; Westenskow, G.A.; Yu, S.S. (Lawrence Live
1990-06-01
The next generation of linear colliders requires peak power sources of over 200 MW per meter at frequencies above 10 GHz at pulse widths of less than 100 nsec. Several power sources are under active development, including a conventional klystron with rf pulse compression, a relativistic klystron (RK) and a crossed-field amplifier. Power from one of these has energized a 0.5 meter two-section High Gradient Accelerator (HGA) and accelerated a beam at over 80 MeV/meter. Results of tests with these experimental devices are presented here.
Micromechanism linear actuator with capillary force sealing
Sniegowski, Jeffry J.
1997-01-01
A class of micromachine linear actuators whose function is based on gas-driven pistons, in which capillary forces are used to seal the gas behind the piston. The capillary forces also increase the amount of force transmitted from the gas pressure to the piston. In a major subclass of such devices, the gas bubble is produced by thermal vaporization of a working fluid. Because of their dependence on capillary forces for sealing, such devices are only practical on the sub-mm size scale, but in that regime they produce very large force-times-distance (total work) values.
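The scale dependence noted above follows from the Young-Laplace relation. As a back-of-envelope sketch (generic capillary physics, not figures taken from the patent), the pressure that capillarity can hold across a cylindrical channel of radius $r$ is

```latex
\Delta P = \frac{2\gamma\cos\theta}{r}
```

where $\gamma$ is the surface tension of the working fluid and $\theta$ its contact angle. For water ($\gamma \approx 0.072$ N/m, $\cos\theta \approx 1$) a 10 µm channel sustains $\Delta P \approx 14$ kPa, while a 1 mm channel sustains only about 0.14 kPa; the $1/r$ scaling is why capillary sealing is effective only at sub-mm dimensions.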
Jimenez, Edward Steven,
2013-09-01
The goal of this work is to develop a fast computed tomography (CT) reconstruction algorithm based on graphics processing units (GPU) that achieves significant improvement over traditional central processing unit (CPU) based implementations. The main challenge in developing a CT algorithm capable of handling very large datasets is parallelizing the algorithm in such a way that data transfer does not hinder performance of the reconstruction algorithm. General Purpose Graphics Processing Unit (GPGPU) computing is a new technology that the Science and Technology (S&T) community is starting to adopt in many fields where CPU-based computing is the norm. GPGPU programming requires a new approach to algorithm development that utilizes massively multi-threaded environments. Multi-threaded algorithms in general are difficult to optimize, since performance bottlenecks occur that are non-existent in single-threaded algorithms, such as memory latencies. If an efficient GPU-based CT reconstruction algorithm can be developed, computational times could be improved by a factor of 20. Additionally, cost benefits will be realized as commodity graphics hardware could potentially replace expensive supercomputers and high-end workstations. This project will take advantage of the CUDA programming environment and attempt to parallelize the task in such a way that multiple slices of the reconstruction volume are computed simultaneously. This work will also take advantage of the GPU memory by utilizing asynchronous memory transfers, GPU texture memory, and (when possible) pinned host memory so that the memory transfer bottleneck inherent to GPGPU computing is amortized. Additionally, this work will take advantage of GPU-specific hardware (i.e., fast texture memory, pixel pipelines, hardware interpolators, and varying memory hierarchy) that will allow for additional performance improvements.
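The per-pixel independence that makes CT reconstruction so amenable to GPU parallelization is visible even in a minimal CPU sketch of unfiltered parallel-beam backprojection, written here with NumPy as a hypothetical reference implementation (not the project's CUDA code). A GPU version would assign one thread per output pixel and stream the per-angle sinogram rows through asynchronous transfers:

```python
import numpy as np

def backproject(sinogram, thetas_deg, size):
    """Unfiltered parallel-beam backprojection (nearest-neighbor sampling).
    sinogram: array of shape (n_angles, n_detectors).
    Returns a (size, size) reconstruction image."""
    n_det = sinogram.shape[1]
    xs = np.arange(size) - size / 2.0
    X, Y = np.meshgrid(xs, xs)          # pixel coordinates, origin at center
    recon = np.zeros((size, size))
    for sino_row, theta in zip(sinogram, np.deg2rad(thetas_deg)):
        # Detector coordinate of each pixel under this projection angle;
        # every pixel is updated independently -- the GPU-friendly part.
        t = X * np.cos(theta) + Y * np.sin(theta) + n_det / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += sino_row[idx]
    return recon / len(thetas_deg)
```

A sinogram that is a single detector spike at every angle backprojects to an image peaked at the center, which gives a quick correctness check.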
Initial borehole acoustic televiewer data processing algorithms
Moore, T.K.
1988-06-01
With the development of a new digital televiewer, several algorithms have been developed in support of off-line data processing. This report describes the initial set of utilities developed to support data handling as well as data display. Functional descriptions, implementation details, and instructions for use of the seven algorithms are provided. 5 refs., 33 figs., 1 tab.
Limits on linearity of missile allocation optimization
Canavan, G.H.
1997-12-01
Optimizations of missile allocation based on linearized exchange equations produce accurate allocations, but the limits of validity of the linearization are not known. These limits are explored in the context of the upload of weapons by one side to initially small, equal forces of vulnerable and survivable weapons. The analysis compares analytic and numerical optimizations and stability indices based on aggregated interactions of the two missile forces, the first and second strikes they could deliver, and the resulting costs. This note discusses the costs and stability indices induced by unilateral uploading of weapons to an initially symmetrical low force configuration. These limits are quantified for forces with a few hundred missiles by comparing analytic and numerical optimizations of first strike costs. For forces of 100 vulnerable and 100 survivable missiles on each side, the analytic optimization agrees closely with the numerical solution. For 200 vulnerable and 200 survivable missiles on each side, the analytic optimization agrees with the indices to within about 10%, but disagrees with the allocation of the side with more weapons by about 50%. The disagreement comes from the interaction of the possession of more weapons with the shift of allocation from missiles to value that they induce.
Linear dimensions and volumes of human lungs
Hickman, David P.
2012-03-30
TOTAL LUNG Capacity is defined as “the inspiratory capacity plus the functional residual capacity; the volume of air contained in the lungs at the end of a maximal inspiration; also equals vital capacity plus residual volume” (from MediLexicon.com). Within the Results and Discussion section of their April 2012 Health Physics paper, Kramer et al. briefly noted that the lungs of their experimental subjects were “not fully inflated.” By definition, and by failure to obtain maximal inspiration, Kramer et al. did not measure Total Lung Capacity (TLC). The TLC equation generated from this work will tend to underestimate TLC and does not improve or update total lung capacity data provided by ICRP and others. Likewise, the five linear measurements performed by Kramer et al. are only representative of the conditions of the measurement (i.e., not at-rest volume, but not fully inflated either). While there was significant work performed and the data are interesting, the data do not represent a maximal situation, a minimal situation, or an at-rest situation. Moreover, while interesting, the linear data generated by this study are limited by the conditions of the experiment and may not be fully comparable with other lung or inspiratory parameters, measures, or physical dimensions.
Terahertz-driven linear electron acceleration
Nanni, Emilio A.; Huang, Wenqian R.; Hong, Kyung-Han; Ravi, Koustuban; Fallahi, Arya; Moriena, Gustavo; Dwayne Miller, R. J.; Kärtner, Franz X.
2015-10-06
The cost, size and availability of electron accelerators are dominated by the achievable accelerating gradient. Conventional high-brightness radio-frequency accelerating structures operate with 30–50 MeV m^{-1} gradients. Electron accelerators driven with optical or infrared sources have demonstrated accelerating gradients orders of magnitude above that achievable with conventional radio-frequency structures. However, laser-driven wakefield accelerators require intense femtosecond sources, and direct laser-driven accelerators suffer from low bunch charge, sub-micron tolerances and sub-femtosecond timing requirements due to the short wavelength of operation. Here we demonstrate linear acceleration of electrons with keV energy gain using optically generated terahertz pulses. Terahertz-driven accelerating structures enable high-gradient electron/proton accelerators with simple accelerating structures, high repetition rates and significant charge per bunch. As a result, these ultra-compact terahertz accelerators with extremely short electron bunches hold great potential to have a transformative impact for free electron lasers, linear colliders, ultrafast electron diffraction, X-ray science and medical therapy with X-rays and electron beams.
Petascale algorithms for reactor hydrodynamics.
Fischer, P.; Lottes, J.; Pointer, W. D.; Siegel, A.
2008-01-01
We describe recent algorithmic developments that have enabled large eddy simulations of reactor flows on up to P = 65,000 processors on the IBM BG/P at the Argonne Leadership Computing Facility. Petascale computing is expected to play a pivotal role in the design and analysis of next-generation nuclear reactors. Argonne's SHARP project is focused on advanced reactor simulation, with a current emphasis on modeling coupled neutronics and thermal-hydraulics (TH). The TH modeling comprises a hierarchy of computational fluid dynamics approaches ranging from detailed turbulence computations, using DNS (direct numerical simulation) and LES (large eddy simulation), to full core analysis based on RANS (Reynolds-averaged Navier-Stokes) and subchannel models. Our initial study is focused on LES of sodium-cooled fast reactor cores. The aim is to leverage petascale platforms at DOE's Leadership Computing Facilities (LCFs) to provide detailed information about heat transfer within the core and to provide baseline data for less expensive RANS and subchannel models.
TRACC: Algorithm for Predicting and Tracking Barges on Inland Waterways
Energy Science and Technology Software Center
2010-04-23
The algorithm developed in this work predicts the location and estimates the traveling speed of a barge moving in an inland waterway network. Measurements obtained from GPS or other systems are corrupted with measurement noise and reported at large, irregular time intervals, creating uncertainty about the current location of the barge and reducing the effectiveness of emergency response activities in case of an accident or act of terrorism. Developing a prediction algorithm is a non-trivial problem because estimating the speed is challenging, owing to the complex interactions among the multiple systems involved in the process. This software uses a systems approach to model the motion dynamics of the barge and estimates its location and speed at the next, user-defined, time interval. First, to estimate the speed, a non-linear stochastic modeling technique was developed that accounts for the local variations and interactions existing in the system. The output speed is then used as an observation in a statistically optimal filtering technique, a Kalman filter, formulated in state space to minimize the numerous errors observed in the system. The combined system synergistically fuses the available local information with the measurements obtained to accurately predict the location and traveling speed of the barge.
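The state-space filtering step described above can be sketched with a toy 1-D constant-velocity Kalman filter tracking noisy position fixes. This is a minimal textbook filter under assumed noise parameters, not the TRACC software, which additionally feeds a nonlinear stochastic speed model into the observation stream:

```python
import numpy as np

def kalman_track(zs, dt, q=1e-3, r=4.0):
    """1-D constant-velocity Kalman filter.
    zs: noisy position measurements at spacing dt.
    Returns an array of filtered [position, velocity] states."""
    F = np.array([[1.0, dt], [0.0, 1.0]])      # state transition
    H = np.array([[1.0, 0.0]])                 # we observe position only
    Q = q * np.array([[dt**3 / 3, dt**2 / 2],  # process noise (white accel.)
                      [dt**2 / 2, dt]])
    R = np.array([[r]])                        # measurement noise variance
    x = np.array([[zs[0]], [0.0]])             # initial state: [pos, vel]
    P = np.eye(2) * 10.0                       # initial uncertainty
    out = []
    for z in zs:
        # Predict
        x = F @ x
        P = F @ P @ F.T + Q
        # Update with the new measurement
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)
        x = x + K @ (np.array([[z]]) - H @ x)
        P = (np.eye(2) - K @ H) @ P
        out.append(x.ravel().copy())
    return np.array(out)
```

Fed noisy fixes from a barge moving at constant speed, the filter's velocity state converges to the true speed even though only positions are observed, which is exactly the fusion of noisy measurements with a motion model that the abstract describes.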
Energy Science and Technology Software Center
1999-02-18
The program is suitable for many applications in applied mathematics, experimental physics, signal-analysis systems, and engineering, e.g., spectrum deconvolution, signal analysis, and system property analysis.
Densmore, J.D.; Park, H.; Wollaber, A.B.; Rauenzahn, R.M.; Knoll, D.A.
2015-03-01
We present a moment-based acceleration algorithm applied to Monte Carlo simulation of thermal radiative-transfer problems. Our acceleration algorithm employs a continuum system of moments to accelerate convergence of stiff absorption–emission physics. The combination of energy-conserving tallies and the use of an asymptotic approximation in optically thick regions remedy the difficulties of local energy conservation and mitigation of statistical noise in such regions. We demonstrate the efficiency and accuracy of the developed method. We also compare directly to the standard linearization-based method of Fleck and Cummings [1]. A factor of 40 reduction in total computational time is achieved with the new algorithm for an equivalent (or more accurate) solution as compared with the Fleck–Cummings algorithm.
The computational physics program of the National MFE Computer Center
Mirin, A.A.
1988-01-01
The principal objective of the Computational Physics Group is to develop advanced numerical models for the investigation of plasma phenomena and the simulation of present and future magnetic confinement devices. Another major objective of the group is to develop efficient algorithms and programming techniques for current and future generations of supercomputers. The group is involved in several areas of fusion research. One main area is the application of Fokker-Planck/quasilinear codes to tokamaks. Other major areas are the investigation of resistive magnetohydrodynamics in three dimensions, with applications to compact toroids, and the investigation of kinetic instabilities using a 3-D particle code. This work is often coupled with the task of numerically generating equilibria which model experimental devices. Ways to apply statistical closure approximations to study tokamak-edge plasma turbulence are being examined. In addition to these computational physics studies, the group has developed a number of linear systems solvers for general classes of physics problems and has been making a major effort at ascertaining how to efficiently utilize multiprocessor computers.
A flexible uncertainty quantification method for linearly coupled multi-physics systems
Chen, Xiao; Ng, Brenda; Sun, Yunwei; Tong, Charles
2013-09-01
Highlights: (i) We propose a modularly hybrid UQ methodology suitable for independent development of module-based multi-physics simulation. (ii) Our algorithmic framework allows each module to have its own UQ method, either intrusive or non-intrusive. (iii) Information from each module is combined systematically to propagate global uncertainty. (iv) Our approach allows easy swapping of new methods for any module without the need to address incompatibilities. (v) We demonstrate the proposed framework on a practical application involving a multi-species reactive transport model. Abstract: This paper presents a novel approach to building an integrated uncertainty quantification (UQ) methodology suitable for the modern-day component-based approach to multi-physics simulation development. Our hybrid UQ methodology supports independent development of the most suitable UQ method, intrusive or non-intrusive, for each physics module by providing an algorithmic framework to couple these stochastic modules for propagating global uncertainties. We address algorithmic and computational issues associated with the construction of this hybrid framework. We demonstrate the utility of such a framework on a practical application involving a linearly coupled multi-species reactive transport model.
Shortcuts to adiabaticity from linear response theory
Acconcia, Thiago V.; Bonança, Marcus V. S.; Deffner, Sebastian
2015-10-23
A shortcut to adiabaticity is a finite-time process that produces the same final state as would result from infinitely slow driving. We show that such shortcuts can be found for weak perturbations from linear response theory. Moreover, with the help of phenomenological response functions, a simple expression for the excess work is found—quantifying the nonequilibrium excitations. For two specific examples, i.e., the quantum parametric oscillator and the spin 1/2 in a time-dependent magnetic field, we show that finite-time zeros of the excess work indicate the existence of shortcuts. We finally propose a degenerate family of protocols, which facilitates shortcuts to adiabaticity for specific and very short driving times.
Radio frequency focused interdigital linear accelerator
Swenson, Donald A.; Starling, W. Joel
2006-08-29
An interdigital (Wideroe) linear accelerator employing drift tubes, and associated support stems that couple to both the longitudinal and support-stem electromagnetic fields of the linac, creating rf quadrupole fields along the axis of the linac to provide transverse focusing for the particle beam. Each drift tube comprises two separate electrodes operating at different electrical potentials as determined by cavity rf fields. Each electrode supports two fingers, pointing towards the opposite end of the drift tube, forming a four-finger geometry that produces an rf quadrupole field distribution along its axis. The fundamental periodicity of the structure is equal to one half of the particle wavelength βλ, where β is the particle velocity in units of the velocity of light and λ is the free-space wavelength of the rf. Particles are accelerated in the gaps between drift tubes. The particle beam is focused in regions inside the drift tubes.
TLD linearity vs. beam energy and modality
Troncalli, Andrew J.; Chapman, Jane
2002-12-31
Thermoluminescent dosimetry (TLD) is considered to be a valuable dosimetric tool in determining patient dose. Lithium fluoride doped with magnesium and titanium (TLD-100) is widely used, as it does not display widely divergent energy dependence. For many years, we have known that TLD-100 shows supralinearity to dose. In a radiotherapy clinic, there are multiple energies and modality beams. This work investigates whether individual linearity corrections must be used for each beam or whether a single correction can be applied to all beams. The response of TLD as a function of dose was measured from 25 cGy to 1000 cGy on both electrons and photons from 6 to 18 MeV. This work shows that, within our measurement uncertainty, TLD-100 exhibits supralinearity at all megavoltage energies and modalities.
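A supralinearity correction of the kind investigated above can be sketched numerically: fit the dose response with a quadratic model R(D) = aD + bD{sup 2} and take the ratio of the linear term to the full response as the per-beam correction factor. All numbers below are illustrative placeholders, not measurements from this study:

```python
import numpy as np

# Hypothetical TLD-100 readings (arbitrary units) vs delivered dose (cGy);
# illustrative values only, chosen to show a mildly supralinear response.
dose = np.array([25, 50, 100, 200, 400, 700, 1000], dtype=float)
reading = np.array([25.1, 50.4, 101.5, 206.0, 420.0, 756.0, 1105.0])

# Supralinear model R(D) = a*D + b*D**2, fitted by linear least squares.
A = np.vstack([dose, dose**2]).T
(a, b), *_ = np.linalg.lstsq(A, reading, rcond=None)

def linearity_correction(D):
    """Factor that maps a supralinear reading back onto a linear scale."""
    return (a * D) / (a * D + b * D**2)

print(linearity_correction(1000.0))  # < 1 for a supralinear response
```

If, as the abstract concludes, the supralinearity is the same for all megavoltage beams, a single fitted (a, b) pair serves every energy and modality.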
Faraday rotation assisted by linearly polarized light
Choi, Jai Min; Kim, Jang Myun; Cho, D.
2007-11-15
We demonstrate a type of chiral effect of an atomic medium. Polarization rotation of a probe beam is observed only when both a magnetic field and a linearly polarized coupling beam are present. We compare it with other chiral effects like optical activity, the Faraday effect, and the optically induced Faraday effect from the viewpoint of spatial inversion and time reversal transformations. As a theoretical model we consider a five-level configuration involving the cesium D2 transition. We use spin-polarized cold cesium atoms trapped in a magneto-optical trap to measure the polarization rotation versus probe detuning. The result shows reasonable agreement with a calculation from the master equation of the five-level configuration.
Sfermion precision measurements at a linear collider
A. Freitas et al.
2003-09-25
At future e{sup +}e{sup -} linear colliders, the event rates and clean signals of scalar fermion production--in particular for the scalar leptons--allow very precise measurements of their masses and couplings and the determination of their quantum numbers. Various methods are proposed for extracting these parameters from the data at the sfermion thresholds and in the continuum. At the same time, NLO radiative corrections and non-zero width effects have been calculated in order to match the experimental accuracy. The substantial mixing expected for the third generation sfermions opens up additional opportunities. Techniques are presented for determining potential CP-violating phases and for extracting tan {beta} from the stau sector, in particular at high values. The consequences of possible large mass differences in the stop and sbottom system are explored in dedicated analyses.
Linear nozzle with tailored gas plumes
Leon, David D.; Kozarek, Robert L.; Mansour, Adel; Chigier, Norman
2001-01-01
There is claimed a method for depositing fluid material from a linear nozzle in a substantially uniform manner across and along a surface. The method includes directing gaseous medium through said nozzle to provide a gaseous stream at the nozzle exit that entrains fluid material supplied to the nozzle, said gaseous stream being provided with a velocity profile across the nozzle width that compensates for the gaseous medium's tendency to assume an axisymmetric configuration after leaving the nozzle and before reaching the surface. There is also claimed a nozzle divided into respective side-by-side zones, or preferably chambers, through which a gaseous stream can be delivered in various velocity profiles across the width of said nozzle to compensate for the tendency of this gaseous medium to assume an axisymmetric configuration.
Linear nozzle with tailored gas plumes
Kozarek, Robert L.; Straub, William D.; Fischer, Joern E.; Leon, David D.
2003-01-01
There is claimed a method for depositing fluid material from a linear nozzle in a substantially uniform manner across and along a surface. The method includes directing gaseous medium through said nozzle to provide a gaseous stream at the nozzle exit that entrains fluid material supplied to the nozzle, said gaseous stream being provided with a velocity profile across the nozzle width that compensates for the gaseous medium's tendency to assume an axisymmetric configuration after leaving the nozzle and before reaching the surface. There is also claimed a nozzle divided into respective side-by-side zones, or preferably chambers, through which a gaseous stream can be delivered in various velocity profiles across the width of said nozzle to compensate for the tendency of this gaseous medium to assume an axisymmetric configuration.
Linear-array systems for aerospace NDE
Smith, Robert A.; Willsher, Stephen J.; Bending, Jamie M.
1999-12-02
Rapid large-area inspection of composite structures for impact damage and multi-layered aluminum skins for corrosion has been a recognized priority for several years in both military and civil aerospace applications. Approaches to this requirement have followed two clearly different routes: the development of novel large-area inspection systems, and the enhancement of current ultrasonic or eddy-current methods to reduce inspection times. Ultrasonic inspection is possible with standard flaw detection equipment but the addition of a linear ultrasonic array could reduce inspection times considerably. In order to investigate their potential, 9-element and 17-element linear ultrasonic arrays for composites, and 64-element arrays for aluminum skins, have been developed to DERA specifications for use with the ANDSCAN area scanning system. A 5 m{sup 2} composite wing surface has been scanned with a scan resolution of approximately 3 mm in 6 hours. With subsequent software and hardware improvements all four composite wing surfaces (top/bottom, left/right) of a military fighter aircraft can potentially be inspected in less than a day. Array technology has been very widely used in the medical ultrasound field although rarely above 10 MHz, whereas lap-joint inspection requires a pulse center-frequency of 12 to 20 MHz in order to resolve the separate interfaces in the lap joint. A 128 mm-long multi-element array of 5 mm x 2 mm ultrasonic elements for use with the ANDSCAN scanning software was produced to a DERA specification by an NDT manufacturer with experience in the medical imaging field. This paper analyses the performance of the transducers that have been produced and evaluates their use in scanning systems of different configurations.
Application Program Interface for Engineering and Scientific Applications
Energy Science and Technology Software Center
2001-10-18
An Application Program Interface (API) for engineering and scientific applications. This system allows application developers to write to a single uniform interface, obtaining access to all solvers in the Trilinos framework. This includes linear solvers, eigensolvers, non-linear solvers, and time-dependent solvers.
The design of a parallel adaptive paving all-quadrilateral meshing algorithm
Tautges, T.J.; Lober, R.R.; Vaughan, C.
1995-08-01
Adaptive finite element analysis demands a great deal of computational resources, and as such is most appropriately solved in a massively parallel computer environment. This analysis will require other parallel algorithms before it can fully utilize MP computers, one of which is parallel adaptive meshing. A version of the paving algorithm is being designed which operates in parallel but which also retains the robustness and other desirable features present in the serial algorithm. Adaptive paving in a production mode is demonstrated using a Babuska-Rheinboldt error estimator on a classic linearly elastic plate problem. The design of the parallel paving algorithm is described, and is based on the decomposition of a surface into "virtual" surfaces. The topology of the virtual surface boundaries is defined using mesh entities (mesh nodes and edges) so as to allow movement of these boundaries with smoothing and other operations. This arrangement allows the use of the standard paving algorithm on subdomain interiors, after the negotiation of the boundary mesh.
CALiPER Application Summary Report 19. LED Linear Pendants
none,
2012-10-01
Report 19 reviews the independently tested performance of nine LED linear pendants and also evaluates a collection of 11 linear pendant products available in both an LED and fluorescent version.
Scenario Decomposition for 0-1 Stochastic Programs: Improvements and Asynchronous Implementation
Ryan, Kevin; Rajan, Deepak; Ahmed, Shabbir
2016-05-01
The recently proposed scenario decomposition algorithm for stochastic 0-1 programs finds an optimal solution by evaluating and removing individual solutions that are discovered by solving scenario subproblems. In this work, we develop an asynchronous, distributed implementation of the algorithm which has computational advantages over existing synchronous implementations. Improvements to both the synchronous and asynchronous algorithms are proposed. We test the algorithms on well-known stochastic 0-1 programs from the SIPLIB test library and are able to solve one previously unsolved instance from the test set.
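The evaluate-and-remove loop can be sketched on a toy instance. Everything below (the knapsack data, helper names, and brute-force subproblem solver) is illustrative only; the paper's implementation solves MIP subproblems and runs distributed:

```python
from itertools import product

# Toy 0-1 stochastic program: maximize expected profit E[c_s . x] over
# binary x subject to a common knapsack constraint w . x <= cap.
# Scenario decomposition sketch: solve each scenario separately for an
# upper bound and candidate solutions, evaluate candidates against all
# scenarios for a lower bound, then remove the evaluated solutions and
# repeat until the bounds meet.
w = [3, 4, 2]                                      # item weights (toy data)
cap = 6
scenarios = [([6, 5, 4], 0.5), ([2, 7, 3], 0.5)]   # (profits, probability)

def feasible(x):
    return sum(wi * xi for wi, xi in zip(w, x)) <= cap

def scenario_opt(c, excluded):
    """Best non-excluded feasible solution for one scenario (brute force)."""
    best, bx = float("-inf"), None
    for x in product([0, 1], repeat=len(w)):
        if feasible(x) and x not in excluded:
            v = sum(ci * xi for ci, xi in zip(c, x))
            if v > best:
                best, bx = v, x
    return best, bx

def expected_value(x):
    return sum(p * sum(ci * xi for ci, xi in zip(c, x)) for c, p in scenarios)

excluded, lower = set(), float("-inf")
while True:
    results = [scenario_opt(c, excluded) for c, _ in scenarios]
    upper = sum(p * v for (v, _), (_, p) in zip(results, scenarios))
    candidates = {x for _, x in results}
    lower = max([lower] + [expected_value(x) for x in candidates])
    if upper <= lower + 1e-9:
        break
    excluded |= candidates            # remove evaluated solutions

print(lower)  # → 9.5, the expected optimum of this toy instance
```

The asynchronous variant studied in the paper relaxes the barrier implicit in the `results` list: workers evaluate and exclude candidates as subproblem solutions arrive rather than once per round.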
Close, E.; Fong, C; Lee, E.
1991-10-30
Although this report is called a program document, it is not simply a user's guide to running HILDA nor is it a programmer's guide to maintaining and updating HILDA. It is a guide to HILDA as a program and as a model for designing and costing a heavy ion fusion (HIF) driver. HILDA represents the work and ideas of many people, as does the model upon which it is based. The project was initiated by Denis Keefe, the leader of the LBL HIFAR project. He suggested the name HILDA, which is an acronym for Heavy Ion Linac Driver Analysis. The conventions and style of development of the HILDA program are based on the original goals. It was desired to have a computer program that could estimate the cost and find an optimal design for heavy ion fusion induction linac drivers. This program should model near-term machines as well as full-scale drivers. The code objectives were: (1) a relatively detailed, but easily understood model; (2) modular, structured code to facilitate making changes in the model, the analysis reports, and the user interface; (3) documentation that defines and explains the system model, cost algorithm, program structure, and generated reports. With this tool a knowledgeable user would be able to examine an ensemble of drivers and find the driver that is minimum in cost, subject to stated constraints. This document contains a report section that describes how to use HILDA, some simple illustrative examples, and descriptions of the models used for the beam dynamics and component design. Associated with this document, as files on floppy disks, are the complete HILDA source code, much information that is needed to maintain and update HILDA, and some complete examples. These examples illustrate that the present version of HILDA can generate much useful information about the design of a HIF driver. They also serve as guides to what features would be useful to include in future updates. The HPD represents the current state of development of this project.
Accelerating k-NN Algorithm with Hybrid MPI and OpenSHMEM
Lin, Jian; Hamidouche, Khaled; Zheng, Jie; Lu, Xiaoyi; Vishnu, Abhinav; Panda, Dhabaleswar
2015-08-05
Machine Learning algorithms are benefiting from the continuous improvement of programming models, including MPI, MapReduce and PGAS. k-Nearest Neighbors (k-NN) algorithm is a widely used machine learning algorithm, applied to supervised learning tasks such as classification. Several parallel implementations of k-NN have been proposed in the literature and practice. However, on high-performance computing systems with high-speed interconnects, it is important to further accelerate existing designs of the k-NN algorithm through taking advantage of scalable programming models. To improve the performance of k-NN on large-scale environment with InfiniBand network, this paper proposes several alternative hybrid MPI+OpenSHMEM designs and performs a systemic evaluation and analysis on typical workloads. The hybrid designs leverage the one-sided memory access to better overlap communication with computation than the existing pure MPI design, and propose better schemes for efficient buffer management. The implementation based on k-NN program from MaTEx with MVAPICH2-X (Unified MPI+PGAS Communication Runtime over InfiniBand) shows up to 9.0% time reduction for training KDD Cup 2010 workload over 512 cores, and 27.6% time reduction for small workload with balanced communication and computation. Experiments of running with varied number of cores show that our design can maintain good scalability.
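The serial core that the hybrid MPI+OpenSHMEM designs above parallelize is ordinary k-NN classification. A minimal sketch follows (function name and toy data are my own; in the paper's designs each rank scores a partition of the training set and the partial k-best lists are merged via one-sided communication):

```python
import numpy as np
from collections import Counter

def knn_predict(train_x, train_y, query, k=3):
    """Classify `query` by majority vote among its k nearest training points."""
    d = np.linalg.norm(train_x - query, axis=1)   # distances to all points
    nearest = np.argsort(d)[:k]                    # indices of the k closest
    return Counter(train_y[nearest]).most_common(1)[0][0]

train_x = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
train_y = np.array([0, 0, 1, 1])
print(knn_predict(train_x, train_y, np.array([0.95, 1.0])))  # → 1
```

The distance computation is embarrassingly parallel over training rows, which is why overlapping it with communication (the point of the one-sided designs) pays off at scale.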
A Tensor Product Formulation of Strassen's Matrix Multiplication Algorithm with Memory Reduction
Kumar, B.; Huang, C.-H.; Sadayappan, P.; Johnson, R. W.
1995-01-01
In this article, we present a program generation strategy of Strassen's matrix multiplication algorithm using a programming methodology based on tensor product formulas. In this methodology, block recursive programs such as the fast Fourier transform and Strassen's matrix multiplication algorithm are expressed as algebraic formulas involving tensor products and other matrix operations. Such formulas can be systematically translated to high-performance parallel/vector codes for various architectures. In this article, we present a nonrecursive implementation of Strassen's algorithm for shared memory vector processors such as the Cray Y-MP. A previous implementation of Strassen's algorithm synthesized from tensor product formulas required working storage of size O(7{sup n}) for multiplying 2{sup n} × 2{sup n} matrices. We present a modified formulation in which the working storage requirement is reduced to O(4{sup n}). The modified formulation exhibits sufficient parallelism for efficient implementation on a shared memory multiprocessor. Performance results on a Cray Y-MP8/64 are presented.
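For reference, the classical recursive Strassen scheme that the tensor product formulas encode is sketched below for 2{sup n} × 2{sup n} matrices. This is the textbook recursion, not the paper's nonrecursive vectorized synthesis:

```python
import numpy as np

def strassen(A, B):
    """Strassen multiplication of 2**n x 2**n matrices (7 recursive products)."""
    n = A.shape[0]
    if n == 1:
        return A * B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22)
    M2 = strassen(A21 + A22, B11)
    M3 = strassen(A11, B12 - B22)
    M4 = strassen(A22, B21 - B11)
    M5 = strassen(A11 + A12, B22)
    M6 = strassen(A21 - A11, B11 + B12)
    M7 = strassen(A12 - A22, B21 + B22)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7   # reassemble the four quadrants
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

A = np.arange(16.0).reshape(4, 4)
B = np.eye(4)
print(np.allclose(strassen(A, B), A @ B))  # → True
```

The naive recursion above keeps all seven M products of each level live at once, which is exactly the O(7{sup n}) working-storage behavior the paper's modified formulation reduces to O(4{sup n}).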
Advanced CHP Control Algorithms: Scope Specification
Katipamula, Srinivas; Brambley, Michael R.
2006-04-28
The primary objective of this multiyear project is to develop algorithms for combined heat and power systems to ensure optimal performance, increase reliability, and lead to the goal of clean, efficient, reliable and affordable next generation energy systems.
Advanced Imaging Algorithms for Radiation Imaging Systems
Marleau, Peter
2015-10-01
The intent of the proposed work, in collaboration with University of Michigan, is to develop the algorithms that will bring the analysis from qualitative images to quantitative attributes of objects containing SNM. The first step to achieving this is to develop an indepth understanding of the intrinsic errors associated with the deconvolution and MLEM algorithms. A significant new effort will be undertaken to relate the image data to a posited three-dimensional model of geometric primitives that can be adjusted to get the best fit. In this way, parameters of the model such as sizes, shapes, and masses can be extracted for both radioactive and non-radioactive materials. This model-based algorithm will need the integrated response of a hypothesized configuration of material to be calculated many times. As such, both the MLEM and the model-based algorithm require significant increases in calculation speed in order to converge to solutions in practical amounts of time.
Genetic algorithms at UC Davis/LLNL
Vemuri, V.R.
1993-12-31
A tutorial introduction to genetic algorithms is given. This brief tutorial should serve the purpose of introducing the subject to the novice. The tutorial is followed by a brief commentary on the term project reports that follow.
Governance of the International Linear Collider Project
Foster, B.; Barish, B.; Delahaye, J.P.; Dosselli, U.; Elsen, E.; Harrison, M.; Mnich, J.; Paterson, J.M.; Richard, F.; Stapnes, S.; Suzuki, A.; Wormser, G.; Yamada, S. (KEK, Tsukuba)
2012-05-31
Governance models for the International Linear Collider Project are examined in the light of experience from similar international projects around the world. Recommendations for one path which could be followed to realize the ILC successfully are outlined. The International Linear Collider (ILC) is a unique endeavour in particle physics; fully international from the outset, it has no 'host laboratory' to provide infrastructure and support. The realization of this project therefore presents unique challenges, in scientific, technical and political arenas. This document outlines the main questions that need to be answered if the ILC is to become a reality. It describes the methodology used to harness the wisdom displayed and lessons learned from current and previous large international projects. From this basis, it suggests both general principles and outlines a specific model to realize the ILC. It recognizes that there is no unique model for such a laboratory and that there are often several solutions to a particular problem. Nevertheless it proposes concrete solutions that the authors believe are currently the best choices in order to stimulate discussion and catalyze proposals as to how to bring the ILC project to fruition. The ILC Laboratory would be set up by international treaty and be governed by a strong Council to whom a Director General and an associated Directorate would report. Council would empower the Director General to give strong management to the project. It would take its decisions in a timely manner, giving appropriate weight to the financial contributions of the member states. The ILC Laboratory would be set up for a fixed term, capable of extension by agreement of all the partners. The construction of the machine would be based on a Work Breakdown Structure and value engineering and would have a common cash fund sufficiently large to allow the management flexibility to optimize the project's construction. Appropriate contingency, clearly
Precision envelope detector and linear rectifier circuitry
Davis, Thomas J.
1980-01-01
Disclosed is a method and apparatus for the precise linear rectification and envelope detection of oscillatory signals. The signal is applied to a voltage-to-current converter which supplies current to a constant current sink. The connection between the converter and the sink is also applied through a diode and an output load resistor to a ground connection. The connection is also connected to ground through a second diode of opposite polarity from the diode in series with the load resistor. Very small amplitude voltage signals applied to the converter will cause a small change in the output current of the converter, and the difference between the output current and the constant current sink will be applied either directly to ground through the single diode, or across the output load resistor, dependent upon the polarity. Disclosed also is a full-wave rectifier utilizing constant current sinks and voltage-to-current converters. Additionally, disclosed is a combination of the voltage-to-current converters with differential integrated circuit preamplifiers to boost the initial signal amplitude, and with low pass filtering applied so as to obtain a video or signal envelope output.
Linear air-fuel sensor development
Garzon, F.; Miller, C.
1996-12-14
The electrochemical zirconia solid electrolyte oxygen sensor is extensively used for monitoring oxygen concentrations in various fields. Such sensors are currently utilized in automobiles to monitor the exhaust gas composition and control the air-to-fuel ratio, thus reducing harmful emission components and improving fuel economy. Zirconia oxygen sensors are divided into two classes of devices: (1) potentiometric or logarithmic air/fuel sensors; and (2) amperometric or linear air/fuel sensors. The potentiometric sensors are ideally suited to monitoring the air-to-fuel ratio close to the complete combustion stoichiometry, a value of about 14.8 to 1 parts by volume. This is because the oxygen concentration changes by many orders of magnitude as the air/fuel ratio is varied through the stoichiometric value. However, the potentiometric sensor is not very sensitive to changes in oxygen partial pressure away from the stoichiometric point due to the logarithmic dependence of the output voltage signal on the oxygen partial pressure. It is often advantageous to operate gasoline-powered piston engines with excess combustion air; this improves fuel economy and reduces hydrocarbon emissions. To maintain stable combustion away from stoichiometry, and to enable engines to operate in the excess-oxygen (lean burn) region, several limiting-current amperometric sensors have been reported. These sensors are based on electrochemical oxygen ion pumping through a zirconia electrolyte. They typically show reproducible limiting current plateaus with applied voltage, caused by the gas diffusion overpotential at the cathode.
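The logarithmic behavior described above follows directly from the Nernst equation for a zirconia cell: the output voltage depends on the log of the oxygen partial-pressure ratio, so it swings strongly through stoichiometry but barely moves in the lean-burn region. A quick numerical sketch (temperature and pressures are illustrative, not values from this report):

```python
import math

R, F = 8.314, 96485.0    # gas constant (J/mol/K), Faraday constant (C/mol)
T = 1000.0               # assumed sensor temperature (K)
p_ref = 0.21             # reference (air) side oxygen partial pressure (atm)

def nernst_voltage(p_o2):
    """Nernst EMF of a zirconia cell: E = (RT/4F) ln(p_ref / p_o2)."""
    return (R * T) / (4 * F) * math.log(p_ref / p_o2)

# Rich exhaust (pO2 ~ 1e-20 atm) vs lean exhaust (pO2 ~ 1e-2 atm):
print(nernst_voltage(1e-20))   # large signal, ~1 V
print(nernst_voltage(1e-2))    # small signal, tens of mV
```

Eighteen orders of magnitude in pO2 change the voltage by under a volt, and a factor-of-ten change on the lean side moves it by only ~RT ln(10)/4F, which is why the amperometric (linear) limiting-current sensors are needed for lean-burn control.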
Liquid cooled, linear focus solar cell receiver
Kirpich, Aaron S.
1985-01-01
Separate structures for electrical insulation and thermal conduction are established within a liquid cooled, linear focus solar cell receiver for use with parabolic or Fresnel optical concentrators. The receiver includes a V-shaped aluminum extrusion having a pair of outer faces each formed with a channel receiving a string of solar cells in thermal contact with the extrusion. Each cell string is attached to a continuous glass cover secured within the channel with spring clips to isolate the string from the external environment. Repair or replacement of solar cells is effected simply by detaching the spring clips to remove the cover/cell assembly without interrupting circulation of coolant fluid through the receiver. The lower surface of the channel in thermal contact with the cells of the string is anodized to establish a suitable standoff voltage capability between the cells and the extrusion. Primary electrical insulation is provided by a dielectric tape disposed between the coolant tube and extrusion. Adjacent solar cells are soldered to interconnect members designed to accommodate thermal expansion and mismatches. The coolant tube is clamped into the extrusion channel with a releasably attachable clamping strip to facilitate easy removal of the receiver from the coolant circuit.
Liquid cooled, linear focus solar cell receiver
Kirpich, A.S.
1983-12-08
Separate structures for electrical insulation and thermal conduction are established within a liquid cooled, linear focus solar cell receiver for use with parabolic or Fresnel optical concentrators. The receiver includes a V-shaped aluminum extrusion having a pair of outer faces each formed with a channel receiving a string of solar cells in thermal contact with the extrusion. Each cell string is attached to a continuous glass cover secured within the channel with spring clips to isolate the string from the external environment. Repair or replacement of solar cells is effected simply by detaching the spring clips to remove the cover/cell assembly without interrupting circulation of coolant fluid through the receiver. The lower surface of the channel in thermal contact with the cells of the string is anodized to establish a suitable standoff voltage capability between the cells and the extrusion. Primary electrical insulation is provided by a dielectric tape disposed between the coolant tube and extrusion. Adjacent solar cells are soldered to interconnect members designed to accommodate thermal expansion and mismatches. The coolant tube is clamped into the extrusion channel with a releasably attachable clamping strip to facilitate easy removal of the receiver from the coolant circuit.
VINETA II: A linear magnetic reconnection experiment
Bohlin, H.; Von Stechow, A.; Rahbarnia, K.; Grulke, O.; Klinger, T. (Ernst-Moritz-Arndt University, Domstr. 11, 17489 Greifswald)
2014-02-15
A linear experiment dedicated to the study of driven magnetic reconnection is presented. The new device (VINETA II) is suitable for investigating both collisional and near collisionless reconnection. Reconnection is achieved by externally driving magnetic field lines towards an X-point, inducing a current in the background plasma which consequently modifies the magnetic field topology. Owing to the open field line configuration of the experiment, the current is limited by the axial sheath boundary conditions. A plasma gun is used as an additional electron source in order to counterbalance the charge separation effects and supply the required current. Two drive methods are used in the device. First, an oscillating current through two parallel conductors drive the reconnection. Second, a stationary X-point topology is formed by the parallel conductors, and the drive is achieved by an oscillating current through a third conductor. In the first setup, the magnetic field of the axial plasma current dominates the field topology near the X-point throughout most of the drive. The second setup allows for the amplitude of the plasma current as well as the motion of the flux to be set independently of the X-point topology of the parallel conductors.
Drainage Algorithm for Geospatial Knowledge
Energy Science and Technology Software Center
2006-08-15
The Pacific Northwest National Laboratory (PNNL) has developed a prototype stream extraction algorithm that semi-automatically extracts and characterizes streams using a variety of multisensor imagery and digital terrain elevation data (DTED). The system is currently optimized for three types of single-band imagery: radar, visible, and thermal. Method of solution: DRAGON (1) classifies pixels into clumps of water objects based on the classification of water pixels by spectral signatures and neighborhood relationships, (2) uses morphology operations (erosion and dilation) to separate out large lakes (or embayments), isolated lakes, ponds, wide rivers, and narrow rivers, and (3) translates the river objects into vector objects. In detail, the process can be broken down into the following steps. A. Water pixels are initially identified using the extent range and slope values (if an optional DEM file is available). B. Erode to the distance that defines a large water body and then dilate back. The resulting mask can be used to identify large lake and embayment objects, which are then removed from the image. Since this operation can be time consuming, it is only performed if a simple test (i.e., a large box containing only water pixels can be found somewhere in the image) indicates a large water body is present. C. All water pixels are "clumped" (in Imagine terminology, clumping is when pixels of a common classification that touch are connected) and clumps which do not contain pure water pixels (e.g., dark cloud shadows) are removed. D. The resulting true water pixels are clumped, and water objects which are too small (e.g., ponds) or isolated lakes (i.e., isolated objects with a small compactness ratio) are removed. Note that at this point lakes have been identified as a byproduct of the filtering process and can be output as vector layers if needed. E. At this point only river pixels
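Step B above is a standard morphological opening. A generic sketch with a synthetic water mask (the structuring element size and the toy mask are my own illustration, not DRAGON's parameters):

```python
import numpy as np
from scipy import ndimage

# Erode the water mask by the radius that defines a "large" water body,
# dilate back, and subtract: wide features (lakes) survive the opening,
# narrow features (rivers) do not.
mask = np.zeros((20, 20), dtype=bool)
mask[2:8, 2:8] = True      # 6x6 block: a large water body
mask[12, :] = True         # 1-pixel-wide line: a narrow river

struct = ndimage.generate_binary_structure(2, 2)   # 3x3 neighborhood
eroded = ndimage.binary_erosion(mask, struct, iterations=2)
lakes = ndimage.binary_dilation(eroded, struct, iterations=2)
rivers = mask & ~lakes

print(rivers[12].any(), lakes[12].any())  # → True False
```

The erosion distance sets the lake/river cutoff: anything thinner than twice the erosion radius vanishes during erosion and is therefore classified as river rather than lake.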
General Transient Fluid Flow Algorithm
Energy Science and Technology Software Center
1992-03-12
SALE2D calculates two-dimensional fluid flows at all speeds, from the incompressible limit to highly supersonic. An implicit treatment of the pressure calculation similar to that in the Implicit Continuous-fluid Eulerian (ICE) technique provides this flow speed flexibility. In addition, the computing mesh may move with the fluid in a typical Lagrangian fashion, be held fixed in an Eulerian manner, or move in some arbitrarily specified way to provide a continuous rezoning capability. This latitude results from use of an Arbitrary Lagrangian-Eulerian (ALE) treatment of the mesh. The partial differential equations solved are the Navier-Stokes equations and the mass and internal energy equations. The fluid pressure is determined from an equation of state and supplemented with an artificial viscous pressure for the computation of shock waves. The computing mesh consists of a two-dimensional network of quadrilateral cells for either cylindrical or Cartesian coordinates, and a variety of user-selectable boundary conditions are provided in the program.
McHugh, P.R.
1995-10-01
Fully coupled, Newton-Krylov algorithms are investigated for solving strongly coupled, nonlinear systems of partial differential equations arising in the field of computational fluid dynamics. Primitive variable forms of the steady incompressible and compressible Navier-Stokes and energy equations that describe the flow of a laminar Newtonian fluid in two dimensions are specifically considered. Numerical solutions are obtained by first integrating over discrete finite volumes that compose the computational mesh. The resulting system of nonlinear algebraic equations is linearized using Newton`s method. Preconditioned Krylov subspace based iterative algorithms then solve these linear systems on each Newton iteration. Selected Krylov algorithms include the Arnoldi-based Generalized Minimal RESidual (GMRES) algorithm, and the Lanczos-based Conjugate Gradients Squared (CGS), Bi-CGSTAB, and Transpose-Free Quasi-Minimal Residual (TFQMR) algorithms. Both Incomplete Lower-Upper (ILU) factorization and domain-based additive and multiplicative Schwarz preconditioning strategies are studied. Numerical techniques such as mesh sequencing, adaptive damping, pseudo-transient relaxation, and parameter continuation are used to improve the solution efficiency, while algorithm implementation is simplified using a numerical Jacobian evaluation. The capabilities of standard Newton-Krylov algorithms are demonstrated via solutions to both incompressible and compressible flow problems. Incompressible flow problems include natural convection in an enclosed cavity, and mixed/forced convection past a backward-facing step.
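The outer loop described here, Newton linearization with a numerical (finite-difference) Jacobian, can be sketched compactly. This toy uses a dense direct solve where the thesis uses preconditioned Krylov iterations (GMRES, CGS, Bi-CGSTAB, TFQMR); the system, step size, and tolerances are illustrative:

```python
import numpy as np

def newton_numjac(F, x0, tol=1e-10, max_iter=50, h=1e-7):
    """Newton's method with a forward-difference (numerical) Jacobian.
    A dense direct solve stands in for the Krylov inner iteration,
    which is adequate at toy scale."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        Fx = F(x)
        if np.linalg.norm(Fx) < tol:
            break
        J = np.empty((x.size, x.size))
        for j in range(x.size):          # one forward difference per column
            xp = x.copy()
            xp[j] += h
            J[:, j] = (F(xp) - Fx) / h
        x = x + np.linalg.solve(J, -Fx)  # Newton update: J dx = -F(x)
    return x

# Toy nonlinear system: x^2 + y^2 = 4 and x*y = 1.
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0] * v[1] - 1.0])
root = newton_numjac(F, [2.0, 0.5])
```

The finite-difference Jacobian costs one extra residual evaluation per unknown, which is the simplification the abstract refers to; matrix-free Newton-Krylov methods avoid even forming J by differencing Jacobian-vector products inside the Krylov solver.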
Linear Fixed-Field Multi-Pass Arcs for Recirculating Linear Accelerators
V.S. Morozov, S.A. Bogacz, Y.R. Roblin, K.B. Beard
2012-06-01
Recirculating Linear Accelerators (RLAs) provide a compact and efficient way of accelerating particle beams to medium and high energies by reusing the same linac for multiple passes. In the conventional scheme, after each pass, the different energy beams coming out of the linac are separated and directed into appropriate arcs for recirculation, with each pass requiring a separate fixed-energy arc. In this paper we present a concept of an RLA return arc based on linear combined-function magnets, in which two and potentially more consecutive passes with very different energies are transported through the same string of magnets. By adjusting the dipole and quadrupole components of the constituting linear combined-function magnets, the arc is designed to be achromatic and to have zero initial and final reference orbit offsets for all transported beam energies. We demonstrate the concept by developing a design for a droplet-shaped return arc for a dog-bone RLA capable of transporting two beam passes with momenta different by a factor of two. We present the results of tracking simulations of the two passes and lay out the path to end-to-end design and simulation of a complete dog-bone RLA.
Modeling and Algorithmic Approaches to Constitutively-Complex, Micro-structured Fluids
Forest, Mark Gregory
2014-05-06
The team for this Project made significant progress on modeling and algorithmic approaches to hydrodynamics of fluids with complex microstructure. Our advances are broken down into modeling and algorithmic approaches. In experiments a driven magnetic bead in a complex fluid accelerates out of the Stokes regime and settles into another apparent linear response regime. The modeling explains the take-off as a deformation of entanglements, and the longtime behavior is a nonlinear, far-from-equilibrium property. Furthermore, the model has predictive value, as we can tune microstructural properties relative to the magnetic force applied to the bead to exhibit all possible behaviors. Wave-theoretic probes of complex fluids have been extended in two significant directions, to small volumes and the nonlinear regime. Heterogeneous stress and strain features that lie beyond experimental capability were studied. It was shown that nonlinear penetration of boundary stress in confined viscoelastic fluids is not monotone, indicating the possibility of interlacing layers of linear and nonlinear behavior, and thus layers of variable viscosity. Models, algorithms, and codes were developed and simulations performed leading to phase diagrams of nanorod dispersion hydrodynamics in parallel shear cells and confined cavities representative of film and membrane processing conditions. Hydrodynamic codes for polymeric fluids are extended to include coupling between microscopic and macroscopic models, and to the strongly nonlinear regime.
High-gradient compact linear accelerator
Carder, B.M.
1998-05-26
A high-gradient linear accelerator comprises a solid-state stack in a vacuum of five sets of disc-shaped Blumlein modules each having a center hole through which particles are sequentially accelerated. Each Blumlein module is a sandwich of two outer conductive plates that bracket an inner conductive plate positioned between two dielectric plates with different thicknesses and dielectric constants. A third dielectric core in the shape of a hollow cylinder forms a casing down the series of center holes, and it has a dielectric constant different from that of the two dielectric plates that sandwich the inner conductive plate. In operation, all the inner conductive plates are charged to the same DC potential relative to the outer conductive plates. Next, all the inner conductive plates are simultaneously shorted to the outer conductive plates at the outer diameters. The signal short will propagate to the inner diameters at two different rates in each Blumlein module. A faster wave propagates quicker to the third dielectric core across the dielectric plates with the closer spacing and lower dielectric constant. When the faster wave reaches the inner extents of the outer and inner conductive plates, it reflects back outward and reverses the field in that segment of the dielectric core. All the field segments in the dielectric core are then in unipolar agreement until the slower wave finally propagates to the third dielectric core across the dielectric plates with the wider spacing and higher dielectric constant. During such unipolar agreement, particles in the core are accelerated with gradients that exceed twenty megavolts per meter. 10 figs.
Support Vector Machine algorithm for regression and classification
Energy Science and Technology Software Center
2001-08-01
The software is an implementation of the Support Vector Machine (SVM) algorithm that was invented and developed by Vladimir Vapnik and his co-workers at AT&T Bell Laboratories. The specific implementation reported here is an Active Set method for solving a quadratic optimization problem that forms the major part of any SVM program. The implementation is tuned to specific constraints generated in the SVM learning. Thus, it is more efficient than general-purpose quadratic optimization programs. A decomposition method has been implemented in the software that enables processing large data sets. The size of the learning data is virtually unlimited by the capacity of the computer physical memory. The software is flexible and extensible. Two upper bounds are implemented to regulate the SVM learning for classification, which allow users to adjust the false positive and false negative rates. The software can be used either as a standalone, general-purpose SVM regression or classification program, or be embedded into a larger software system.
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Program Description The Los Alamos STEM Challenge gives your students a unique opportunity to envision the future years of discovery at LANL. Students will develop 21st century skills as they collaborate in teams to research LANL projects and propose innovative future projects. They apply creativity and critical thinking skills as they visualize their own ideas through posters, videos, apps or essays describing potential future projects at LANL. Students are encouraged to learn about the
U.S. Department of Energy (DOE) - all webpages (Extended Search)
ALTD (Automatic Library Tracking Database): To track and monitor library usage and better serve your software needs, we have enabled the Automatic Library Tracking Database (ALTD) on our production systems, Hopper and Edison. ALTD is also available on Carver. ALTD, originally developed by the National Institute for Computational Sciences and further developed at NERSC, automatically and transparently tracks all libraries linked into an application at
Atencio, Julian J.
2014-05-01
This presentation covers how to go about developing a human reliability program. In particular, it touches on conceptual thinking, raising awareness in an organization, the actions that go into developing a plan. It emphasizes evaluating all positions, eliminating positions from the pool due to mitigating factors, and keeping the process transparent. It lists components of the process and objectives in process development. It also touches on the role of leadership and the necessity for audit.
Maryland Efficiency Program Options
Office of Energy Efficiency and Renewable Energy (EERE)
Maryland Efficiency Program Options, from the Tool Kit Framework: Small Town University Energy Program (STEP).
STEP Program Benchmark Report, from the Tool Kit Framework: Small Town University Energy Program (STEP).
Hart, W.E.; Istrail, S. [Sandia National Labs., Albuquerque, NM (United States). Algorithms and Discrete Mathematics Dept.
1996-08-09
This paper considers the protein structure prediction problem for lattice and off-lattice protein folding models that explicitly represent side chains. Lattice models of proteins have proven extremely useful tools for reasoning about protein folding in unrestricted continuous space through analogy. This paper provides the first illustration of how rigorous algorithmic analyses of lattice models can lead to rigorous algorithmic analyses of off-lattice models. The authors consider two side chain models: a lattice model that generalizes the HP model (Dill 85) to explicitly represent side chains on the cubic lattice, and a new off-lattice model, the HP Tangent Spheres Side Chain model (HP-TSSC), that generalizes this model further by representing the backbone and side chains of proteins with tangent spheres. They describe algorithms for both of these models with mathematically guaranteed error bounds. In particular, the authors describe a linear time performance guaranteed approximation algorithm for the HP side chain model that constructs conformations whose energy is better than 86% of optimal in a face centered cubic lattice, and they demonstrate how this provides a 70% performance guarantee for the HP-TSSC model. This is the first algorithm in the literature for off-lattice protein structure prediction that has a rigorous performance guarantee. The analysis of the HP-TSSC model builds on the work of Dancik and Hannenhalli, who developed a 16/30 approximation algorithm for the HP model on the hexagonal close packed lattice. Further, the analysis provides a mathematical methodology for transferring performance guarantees on lattices to off-lattice models. These results partially answer the open question of Karplus et al. concerning the complexity of protein folding models that include side chains.
Resistive Network Optimal Power Flow: Uniqueness and Algorithms
Tan, CW; Cai, DWH; Lou, X
2015-01-01
The optimal power flow (OPF) problem minimizes the power loss in an electrical network by optimizing the voltage and power delivered at the network buses, and is a nonconvex problem that is generally hard to solve. By leveraging a recent development on the zero duality gap of OPF, we propose a second-order cone programming convex relaxation of the resistive network OPF, and study the uniqueness of the optimal solution using differential topology, especially the Poincare-Hopf Index Theorem. We characterize the global uniqueness for different network topologies, e.g., line, radial, and mesh networks. This serves as a starting point to design distributed local algorithms with global behaviors that have low complexity, are computationally fast, and can run under synchronous and asynchronous settings in practical power grids.
Program Evaluation: Program Logic | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Program Evaluation: Program Logic. Step four will help you develop a logic model for your program (learn more about the other steps in general program evaluations): What is a Logic Model? Benefits of Using Logic Modeling; Pitfalls and How to Avoid Them; Steps to Developing a Logic Model. Logic modeling is a thought process that program evaluators have found useful for at least forty years and that has become increasingly popular with program managers during the
Non-linear system identification in flow-induced vibration
Spanos, P.D.; Zeldin, B.A.; Lu, R.
1996-12-31
The paper introduces a method of identification of non-linear systems encountered in marine engineering applications. The non-linearity is accounted for by a combination of linear subsystems and known zero-memory non-linear transformations; an equivalent linear multi-input-single-output (MISO) system is developed for the identification problem. The unknown transfer functions of the MISO system are identified by assembling a system of linear equations in the frequency domain. This system is solved by performing the Cholesky decomposition of a related matrix. It is shown that the proposed identification method can be interpreted as a "Gram-Schmidt" type of orthogonal decomposition of the input-output quantities of the equivalent MISO system. A numerical example involving the identification of unknown parameters of flow (ocean wave) induced forces on offshore structures elucidates the applicability of the proposed method.
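The central linear-algebra step, solving the frequency-domain equations by Cholesky decomposition, can be illustrated at a single frequency. The matrix names and toy cross-spectra below are assumptions for illustration, not the paper's data:

```python
import numpy as np

def miso_transfer(Sxx, Sxy):
    """Solve Sxx H = Sxy for the MISO transfer functions at one frequency
    via Cholesky factorization. Sxx is the Hermitian positive-definite
    input cross-spectral matrix (n x n); Sxy holds the input-output
    cross-spectra (n,). Names are illustrative."""
    L = np.linalg.cholesky(Sxx)           # Sxx = L L^H
    y = np.linalg.solve(L, Sxy)           # forward substitution
    return np.linalg.solve(L.conj().T, y) # back substitution

# Toy check: pick transfer functions, build consistent spectra, recover them.
rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
Sxx = A @ A.conj().T + 3 * np.eye(3)      # Hermitian positive definite
H_true = np.array([1.0 + 0.5j, -0.3j, 2.0])
Sxy = Sxx @ H_true
H = miso_transfer(Sxx, Sxy)
```

In the paper's setting this solve is repeated at each frequency bin, and the triangular factor is what yields the Gram-Schmidt-style orthogonal decomposition interpretation.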
Electronic Non-Contacting Linear Position Measuring System
Post, Richard F.
2005-06-14
A non-contacting linear position location system employs a special transmission line to encode and transmit magnetic signals to a receiver on the object whose position is to be measured. The invention is useful as a non-contact linear locator of moving objects, e.g., to determine the location of a magnetic-levitation train for the operation of the linear-synchronous motor drive system.
Bootstrap performance profiles in stochastic algorithms assessment
Costa, Lino; Espírito Santo, Isabel A.C.P.; Oliveira, Pedro
2015-03-10
Optimization with stochastic algorithms has become a relevant research field. Due to its stochastic nature, its assessment is not straightforward and involves integrating accuracy and precision. Performance profiles for the mean do not show the trade-off between accuracy and precision, and parametric stochastic profiles require strong distributional assumptions and are limited to the mean performance for a large number of runs. In this work, bootstrap performance profiles are used to compare stochastic algorithms for different statistics. This technique allows the estimation of the sampling distribution of almost any statistic even with small samples. Multiple comparison profiles are presented for more than two algorithms. The advantages and drawbacks of each assessment methodology are discussed.
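The bootstrap machinery the paper builds on can be sketched in a few lines; the run data and choice of statistic below are made up for illustration:

```python
import numpy as np

def bootstrap_stat(samples, stat, n_boot=2000, seed=0):
    """Estimate the sampling distribution of `stat` (median, a percentile,
    ...) from a small set of runs by resampling with replacement."""
    rng = np.random.default_rng(seed)
    samples = np.asarray(samples)
    return np.array([
        stat(rng.choice(samples, size=samples.size, replace=True))
        for _ in range(n_boot)
    ])

# Final objective values from 15 runs of a stochastic optimizer (made up).
runs = np.array([1.02, 0.98, 1.10, 0.95, 1.30, 1.01, 0.99,
                 1.05, 0.97, 1.20, 1.00, 0.96, 1.08, 1.02, 0.94])
boots = bootstrap_stat(runs, np.median)
lo, hi = np.percentile(boots, [2.5, 97.5])  # 95% percentile interval
```

Because the resampling works for almost any statistic, the same machinery yields profiles for the median, tail percentiles, or best-run performance, which is the trade-off between accuracy and precision the abstract highlights.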
Factorization using the quadratic sieve algorithm
Davis, J.A.; Holdridge, D.B.
1983-01-01
Since the cryptosecurity of the RSA two key cryptoalgorithm is no greater than the difficulty of factoring the modulus (product of two secret primes), a code that implements the Quadratic Sieve factorization algorithm on the CRAY I computer has been developed at the Sandia National Laboratories to determine as sharply as possible the current state-of-the-art in factoring. Because all viable attacks on RSA thus far proposed are equivalent to factorization of the modulus, sharper bounds on the computational difficulty of factoring permit improved estimates for the size of RSA parameters needed for given levels of cryptosecurity. Analysis of the Quadratic Sieve indicates that it may be faster than any previously published general purpose algorithm for factoring large integers. The high speed of the CRAY I coupled with the capability of the CRAY to pipeline certain vectorized operations make this algorithm (and code) the front runner in current factoring techniques.
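The relation-collection and GF(2) elimination core of the Quadratic Sieve can be shown on a toy modulus. Trial division stands in here for the logarithmic sieving that gives the algorithm its name and its CRAY-1 speed; the modulus and bound are illustrative:

```python
from math import gcd, isqrt

def quadratic_sieve(n, B=30, extra=1):
    """Toy Quadratic Sieve: find x with x^2 - n smooth over a factor base,
    then combine relations over GF(2) into a congruence of squares."""
    # Factor base: 2 plus odd primes p <= B with n a quadratic residue mod p.
    primes = [p for p in range(2, B + 1)
              if all(p % d for d in range(2, isqrt(p) + 1))]
    base = [p for p in primes if p == 2 or pow(n, (p - 1) // 2, p) == 1]

    # Collect B-smooth relations x^2 - n by trial division.
    rels, x = [], isqrt(n) + 1
    while len(rels) < len(base) + extra:
        q, fac = x * x - n, {}
        for p in base:
            while q % p == 0:
                q //= p
                fac[p] = fac.get(p, 0) + 1
        if q == 1:                       # fully factored over the base
            mask = sum(1 << i for i, p in enumerate(base)
                       if fac.get(p, 0) % 2)
            rels.append((x, mask, fac))
        x += 1

    # Gaussian elimination over GF(2); combo tracks which relations mixed.
    pivots = {}
    for j, (_, mask, _) in enumerate(rels):
        combo = 1 << j
        while mask:
            bit = mask & -mask           # lowest set bit
            if bit not in pivots:
                pivots[bit] = (mask, combo)
                break
            pm, pc = pivots[bit]
            mask, combo = mask ^ pm, combo ^ pc
        if mask == 0:                    # dependency: the product is a square
            a, exps = 1, {}
            for k in range(len(rels)):
                if combo >> k & 1:
                    xv, _, fac = rels[k]
                    a = a * xv % n
                    for p, e in fac.items():
                        exps[p] = exps.get(p, 0) + e
            b = 1
            for p, e in exps.items():
                b = b * pow(p, e // 2, n) % n
            g = gcd(a - b, n)
            if 1 < g < n:
                return g, n // g
    return None
```

On the classic textbook example n = 1649, the relations 41^2 ≡ 2^5 and 43^2 ≡ 2^3·5^2 combine into a square, and gcd(41·43 − 2^4·5, 1649) = 17 splits the modulus.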
Intergovernmental Programs | Department of Energy
Office of Energy Efficiency and Renewable Energy (EERE) (indexed site)
Intergovernmental Programs The Office of Environmental Management supports, by means of grants and cooperative agreements, a number of
Drift problems in the automatic analysis of gamma-ray spectra using associative memory algorithms
Olmos, P.; Diaz, J.C.; Perez, J.M.; Aguayo, P.; Gomez, P.; Rodellar, V.
1994-06-01
Perturbations affecting nuclear radiation spectrometers during their operation frequently spoil the accuracy of automatic analysis methods. One of the problems usually found in practice refers to fluctuations in the spectrum gain and zero, produced by drifts in the detector and nuclear electronics. The pattern acquired under these conditions may be significantly different from that expected with stable instrumentation, thus complicating the identification and quantification of the radionuclides present in it. In this work, the performance of Associative Memory algorithms when dealing with spectra affected by drifts is explored, assuming a linear energy-calibration function. The formulation of the extended algorithm, constructed to quantify the possible presence of drifts in the spectrometer, is deduced, and the results obtained from its application to several practical cases are discussed.
Knot Undulator to Generate Linearly Polarized Photons with Low...
Office of Scientific and Technical Information (OSTI)
Heat load on beamline optics is a serious problem to generate pure linearly polarized ... Language: English Subject: 43 PARTICLE ACCELERATORS; OPTICS; PERMANENT MAGNETS; PHOTONS; ...
Vibronic coupling simulations for linear and nonlinear optical...
Office of Scientific and Technical Information (OSTI)
optical processes: Theory Citation Details In-Document Search Title: Vibronic coupling simulations for linear and nonlinear optical processes: Theory A comprehensive vibronic ...
Entropy-based separation of linear chain molecules by exploiting...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Entropy-based separation of linear chain molecules by exploiting differences in the saturation capacities in cage-type zeolites Previous Next List Rajamani Krishna, Jasper M. van...
Top Quark Anomalous Couplings at the International Linear Collider...
Office of Scientific and Technical Information (OSTI)
to a precision of approximately 1% for each of two choices of beam polarization. ... INTERMEDIATE BOSONS; LINEAR COLLIDERS; POLARIZATION; PROBES; QUARKS; SILICON; SIMULATION; ...
A posteriori error analysis of parameterized linear systems using...
Office of Scientific and Technical Information (OSTI)
Journal Article: A posteriori error analysis of parameterized linear systems using spectral methods. Citation Details In-Document Search Title: A posteriori error analysis of ...
Simultaneous linear optics and coupling correction for storage...
Office of Scientific and Technical Information (OSTI)
Journal Article: Simultaneous linear optics and coupling correction for storage rings with turn-by-turn beam position monitor data Citation Details In-Document Search Title:...
Unexpected Angular Dependence of X-Ray Magnetic Linear Dichroism
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Unexpected Angular Dependence of X-Ray Magnetic Linear Dichroism Using spectroscopic ... The effect is unique in that it allows us to distinguish which atomic species magnetism ...
Linear electric field time-of-flight ion mass spectrometer
Funsten, Herbert O.; Feldman, William C.
2008-06-10
A linear electric field ion mass spectrometer having an evacuated enclosure with means for generating a linear electric field located in the evacuated enclosure and means for injecting a sample material into the linear electric field. A source of pulsed ionizing radiation injects ionizing radiation into the linear electric field to ionize atoms or molecules of the sample material, and timing means determine the time elapsed between ionization of atoms or molecules and arrival of an ion out of the ionized atoms or molecules at a predetermined position.
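The key property of a linear restoring field, that the flight time depends only on m/q and not on where or with what energy the ion is created, follows from simple harmonic motion. A minimal sketch under that idealized model (the field gradient k is illustrative, not a device value):

```python
import math

def flight_time(m, q, k):
    """Time for an ion born at rest in an idealized linear restoring field
    E(z) = -k*z to reach the field null: a quarter period of simple
    harmonic motion, (pi/2)*sqrt(m/(q*k)), independent of the starting
    position and hence of where in the field the ion is created."""
    return (math.pi / 2) * math.sqrt(m / (q * k))

amu = 1.66053906660e-27   # kg
e = 1.602176634e-19       # C
k = 1.0e4                 # V/m^2; illustrative gradient

t_h = flight_time(1 * amu, e, k)    # H+ ion
t_o = flight_time(16 * amu, e, k)   # O+ ion: 16x the mass, 4x the time
```

This sqrt(m/q) scaling is what lets the timing electronics map elapsed time directly to mass-per-charge without energy analysis of the ion.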
The Klynac: An Integrated Klystron and Linear Accelerator
Potter, J. M., Schwellenbach, D., Meidinger, A.
2012-08-07
The Klynac concept integrates an electron gun, a radio frequency (RF) power source, and a coupled-cavity linear accelerator into a single resonant system.
Knot Undulator to Generate Linearly Polarized Photons with Low...
Office of Scientific and Technical Information (OSTI)
pure linearly polarized photons in the third generation synchrotron radiation facilities. ... Sponsoring Org: USDOE Country of Publication: United States Language: English Subject: 43 ...
DOE - Office of Legacy Management -- Stanford Linear Accelerator...
Office of Legacy Management (LM)
The Stanford Linear Accelerator Center was established in 1962 as a research facility for high energy particle physics. The Environmental Management mission at this site is to ...
Optimizing minimum free-energy crossing points in solution: Linear...
Office of Scientific and Technical Information (OSTI)
Optimizing minimum free-energy crossing points in solution: Linear-response free energyspin-flip density functional theory approach Citation Details In-Document Search Title:...
A Linear Theory of Microwave Instability in Electron Storage...
Office of Scientific and Technical Information (OSTI)
Title: A Linear Theory of Microwave Instability in Electron Storage Rings The well-known Haissinski distribution provides a stable equilibrium of longitudinal beam distribution in ...
Self-Sustained Micromechanical Oscillator with Linear Feedback...
Office of Scientific and Technical Information (OSTI)
Publisher's Accepted Manuscript: Self-Sustained Micromechanical Oscillator with Linear Feedback This content will become publicly available on July 1, 2017 Title: ...
Linearly Polarized Thermal Emitter for More Efficient Thermophotovolta...
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Linearly Polarized Thermal Emitter for More Efficient Thermophotovoltaic Devices Ames ... than can be used to create more efficient thermophotovoltaic devices for power generation. ...
Berkeley Algorithms Help Researchers Understand Dark Energy
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Berkeley Algorithms Help Researchers Understand Dark Energy. November 24, 2014. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov. Scientists believe that dark energy, the mysterious force that is accelerating cosmic expansion, makes up about 70 percent of the mass and energy of the universe. But because they don't know what it is, they cannot observe it directly. To unlock the mystery of dark energy and its influence on the universe, researchers
Speckle imaging algorithms for planetary imaging
Johansson, E.
1994-11-15
I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.
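The phase-retrieval recursion at the heart of bispectral speckle imaging can be shown on a 1-D toy signal. This noiseless sketch takes the Fourier magnitude as given and pins the unrecoverable linear-phase term with the true value of the first phase; real processing instead averages power spectra and bispectra over many speckle frames:

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=32)          # stand-in "object"
F = np.fft.fft(x)
N = x.size

# One slice of the bispectrum, B(u, 1) = F(u) F(1) F*(u+1), is all the
# recursion below needs.
B = np.array([F[u] * F[1] * np.conj(F[(u + 1) % N]) for u in range(N)])

# Recursive phase retrieval: angle B(u,1) = phi(u) + phi(1) - phi(u+1).
phase = np.zeros(N)
phase[0] = np.angle(F[0])
phase[1] = np.angle(F[1])        # pins the (unrecoverable) linear-phase term
for u in range(1, N - 1):
    phase[u + 1] = phase[u] + phase[1] - np.angle(B[u])

# Combine the retrieved phase with the Fourier magnitude and invert.
recon = np.fft.ifft(np.abs(F) * np.exp(1j * phase)).real
```

Choosing a different value for phase[1] would only shift the reconstruction, which is why the bispectrum's insensitivity to translation is harmless for imaging.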
Graph algorithms in the titan toolkit.
McLendon, William Clarence, III; Wylie, Brian Neil
2009-10-01
Graph algorithms are a key component in a wide variety of intelligence analysis activities. The Graph-Based Informatics for Non-Proliferation and Counter-Terrorism project addresses the critical need of making these graph algorithms accessible to Sandia analysts in a manner that is both intuitive and effective. Specifically we describe the design and implementation of an open source toolkit for doing graph analysis, informatics, and visualization that provides Sandia with novel analysis capability for non-proliferation and counter-terrorism.
Lin, Youzuo; O'Malley, Daniel; Vesselinov, Velimir V.
2016-08-19
Inverse modeling seeks model parameters given a set of observations. However, for practical problems, because the number of measurements is often large and the model parameters are also numerous, conventional methods for inverse modeling can be computationally expensive. We have developed a new, computationally efficient parallel Levenberg-Marquardt method for solving inverse modeling problems with a highly parameterized model space. Levenberg-Marquardt methods require the solution of a linear system of equations which can be prohibitively expensive to compute for moderate to large-scale problems. Our novel method projects the original linear problem down to a Krylov subspace, such that the dimensionality of the problem can be significantly reduced. Furthermore, we store the Krylov subspace computed when using the first damping parameter and recycle the subspace for the subsequent damping parameters. The efficiency of our new inverse modeling algorithm is significantly improved using these computational techniques. We apply this new inverse modeling method to invert for random transmissivity fields in 2D and a random hydraulic conductivity field in 3D. Our algorithm is fast enough to solve for the distributed model parameters (transmissivity) in the model domain. The algorithm is coded in Julia and implemented in the MADS computational framework (http://mads.lanl.gov). By comparing with Levenberg-Marquardt methods using standard linear inversion techniques such as QR or SVD methods, our Levenberg-Marquardt method yields a speed-up ratio on the order of ~10^1 to ~10^2 in a multi-core computational environment. Furthermore, our new inverse modeling method is a powerful tool for characterizing subsurface heterogeneity for moderate- to large-scale problems.
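A baseline Levenberg-Marquardt iteration, the method this work accelerates, can be sketched as follows. The dense solve in the middle is exactly the step the authors replace with a recycled Krylov-subspace projection; the fitting problem and damping schedule here are illustrative:

```python
import numpy as np

def levenberg_marquardt(residual, jac, x0, lam=1e-2, max_iter=100, tol=1e-12):
    """Basic Levenberg-Marquardt: each iteration solves
    (J^T J + lam I) dx = -J^T r and grows or shrinks the damping
    parameter lam depending on whether the step reduced the cost."""
    x = np.asarray(x0, dtype=float)
    r = residual(x)
    cost = r @ r
    for _ in range(max_iter):
        J = jac(x)
        g = J.T @ r
        if np.linalg.norm(g) < tol:
            break
        dx = np.linalg.solve(J.T @ J + lam * np.eye(x.size), -g)
        r_new = residual(x + dx)
        cost_new = r_new @ r_new
        if cost_new < cost:              # accept: trust the model more
            x, r, cost = x + dx, r_new, cost_new
            lam *= 0.5
        else:                            # reject: damp harder
            lam *= 2.0
    return x

# Toy nonlinear least squares: fit y = a * exp(b t) to noiseless data.
t = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * t)
residual = lambda p: p[0] * np.exp(p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(p[1] * t),
                                 p[0] * t * np.exp(p[1] * t)])
p = levenberg_marquardt(residual, jac, [1.0, 0.0])
```

Note that every change of lam alters only the diagonal shift of the system matrix, which is what makes recycling one Krylov subspace across damping parameters effective.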
Resource-Efficient Generation of Linear Cluster States by Linear Optics with Postselection
Uskov, Dmitry B; Alsing, Paul; Fanto, Michael; Kaplan, Lev; Kim, R; Szep, Atilla; Smith IV, Amos M
2015-01-01
We report on theoretical research in photonic cluster-state computing. Finding optimal schemes of generating non-classical photonic states is of critical importance for this field as physically implementable photon-photon entangling operations are currently limited to measurement-assisted stochastic transformations. A critical parameter for assessing the efficiency of such transformations is the success probability of a desired measurement outcome. At present there are several experimental groups that are capable of generating multi-photon cluster states carrying more than eight qubits. Separate photonic qubits or small clusters can be fused into a single cluster state by a probabilistic optical CZ gate conditioned on simultaneous detection of all photons with 1/9 success probability for each gate. This design mechanically follows the original theoretical scheme of cluster state generation proposed more than a decade ago by Raussendorf, Browne, and Briegel. The optimality of the destructive CZ gate in application to linear optical cluster state generation has not been analyzed previously. Our results reveal that this method is far from the optimal one. Employing numerical optimization we have identified that the maximal success probability of fusing n unentangled dual-rail optical qubits into a linear cluster state is equal to 1/2^{n-1}; an m-tuple of photonic Bell pair states, commonly generated via spontaneous parametric down-conversion, can be fused into a single cluster with the maximal success probability of 1/4^{m-1}.
Resource-efficient generation of linear cluster states by linear optics with postselection
Uskov, D. B.; Alsing, P. M.; Fanto, M. L.; Kaplan, L.; Kim, R.; Szep, A.; Smith, A. M.
2015-01-30
Here we report on theoretical research in photonic cluster-state computing. Finding optimal schemes of generating non-classical photonic states is of critical importance for this field as physically implementable photon-photon entangling operations are currently limited to measurement-assisted stochastic transformations. A critical parameter for assessing the efficiency of such transformations is the success probability of a desired measurement outcome. At present there are several experimental groups that are capable of generating multi-photon cluster states carrying more than eight qubits. Separate photonic qubits or small clusters can be fused into a single cluster state by a probabilistic optical CZ gate conditioned on simultaneous detection of all photons with 1/9 success probability for each gate. This design mechanically follows the original theoretical scheme of cluster state generation proposed more than a decade ago by Raussendorf, Browne, and Briegel. The optimality of the destructive CZ gate in application to linear optical cluster state generation has not been analyzed previously. Our results reveal that this method is far from the optimal one. Employing numerical optimization we have identified that the maximal success probability of fusing n unentangled dual-rail optical qubits into a linear cluster state is equal to 1/2^{n-1}; an m-tuple of photonic Bell pair states, commonly generated via spontaneous parametric down-conversion, can be fused into a single cluster with the maximal success probability of 1/4^{m-1}.
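The closed-form success probabilities quoted above are easy to evaluate numerically. A tiny illustrative sketch (function names are our own) comparing them with the 1/9-per-gate CZ fusion baseline, which needs n-1 gates to fuse n qubits:

```python
# Success probabilities from the abstract, evaluated as written.
def fuse_qubits(n):
    """Maximal success probability of fusing n unentangled dual-rail
    qubits into a linear cluster state: 1/2^(n-1)."""
    return 1 / 2 ** (n - 1)

def fuse_bell_pairs(m):
    """Maximal success probability of fusing m Bell pairs: 1/4^(m-1)."""
    return 1 / 4 ** (m - 1)

def cz_baseline(n):
    """Baseline: n-1 destructive CZ gates, each succeeding with 1/9."""
    return (1 / 9) ** (n - 1)

for n in range(2, 6):
    print(n, fuse_qubits(n), cz_baseline(n))
```

Even for small n the optimal scheme's 1/2^(n-1) scaling dominates the (1/9)^(n-1) baseline by orders of magnitude.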
Solar receiver heliostat reflector having a linear drive and position information system
Horton, Richard H.
1980-01-01
A heliostat for a solar receiver system comprises an improved drive and control system for the heliostat reflector assembly. The heliostat reflector assembly is controllably driven in a predetermined way by a light-weight drive system so as to be angularly adjustable in both elevation and azimuth to track the sun and efficiently continuously reflect the sun's rays to a focal zone, i.e., heat receiver, which forms part of a solar energy utilization system, such as a solar energy fueled electrical power generation system. The improved drive system includes linear stepping motors which comprise low weight, low cost, electronic pulse driven components. One embodiment comprises linear stepping motors controlled by a programmed, electronic microprocessor. Another embodiment comprises a tape driven system controlled by a position control magnetic tape.
Berkolaiko, G.; Kuipers, J.
2013-12-15
Electronic transport through chaotic quantum dots exhibits universal behaviour which can be understood through the semiclassical approximation. Within the approximation, calculation of transport moments reduces to codifying classical correlations between scattering trajectories. These can be represented as ribbon graphs and we develop an algorithmic combinatorial method to generate all such graphs with a given genus. This provides an expansion of the linear transport moments for systems both with and without time reversal symmetry. The computational implementation is then able to progress several orders further than previous semiclassical formulae as well as those derived from an asymptotic expansion of random matrix results. The patterns observed also suggest a general form for the higher orders.
Calculation of excitation energies from the CC2 linear response theory using Cholesky decomposition
Baudin, Pablo; qLEAP Center for Theoretical Chemistry, Department of Chemistry, Aarhus University, Langelandsgade 140, DK-8000 Aarhus C; Marín, José Sánchez; Cuesta, Inmaculada García; Sánchez de Merás, Alfredo M. J.
2014-03-14
A new implementation of the approximate coupled cluster singles and doubles CC2 linear response model is reported. It employs a Cholesky decomposition of the two-electron integrals that significantly reduces the computational cost and the storage requirements of the method compared to standard implementations. Our algorithm also exploits a partitioning form of the CC2 equations which reduces the dimension of the problem and avoids the storage of doubles amplitudes. We present calculations of excitation energies of benzene using a hierarchy of basis sets and compare the results with conventional CC2 calculations. The reduction of the scaling is evaluated as well as the effect of the Cholesky decomposition parameter on the quality of the results. The new algorithm is used to perform a complete-basis-set extrapolation investigation of the spectroscopically interesting benzylallene conformers. A set of calculations on medium-sized molecules is carried out to check the dependence of the accuracy of the results on the decomposition thresholds. Moreover, CC2 singlet excitation energies of the free base porphin are also presented.
Weglein, Arthur B.; Stolt, Bob H.
2012-03-01
Extracting information from seismic data requires knowledge of seismic wave propagation and reflection. The commonly used method involves solving linearly for a reflectivity at every point within the Earth, but this book follows an alternative approach which invokes inverse scattering theory. By developing the theory of seismic imaging from basic principles, the authors relate the different models of seismic propagation, reflection and imaging - thus providing links to reflectivity-based imaging on the one hand and to nonlinear seismic inversion on the other. The comprehensive and physically complete linear imaging foundation developed presents new results at the leading edge of seismic processing for target location and identification. This book serves as a fundamental guide to seismic imaging principles and algorithms and their foundation in inverse scattering theory and is a valuable resource for working geoscientists, scientific programmers and theoretical physicists.
Beryllium Program - Hanford Site
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Geothermal Technologies Program Overview - Peer Review Program
Milliken, JoAnn
2011-06-06
This Geothermal Technologies Program presentation was delivered on June 6, 2011 at a Program Peer Review meeting. It contains annual budget, Recovery Act, funding opportunities, upcoming program activities, and more.
THE LEVENBERG-MARQUARDT ALGORITHM: IMPLEMENTATION AND THEORY
Office of Scientific and Technical Information (OSTI)
... Since F is usually a nonlinear function of x, we linearize F(x+p) and obtain the linear least squares problem min_p ||F(x) + F'(x)p||. Of course, this linearization is not valid ...
On the convergence of inexact Uzawa algorithms
Welfert, B.D.
1994-12-31
The author considers the solution of symmetric indefinite systems which can be cast in matrix block form, where the diagonal blocks A and C are symmetric positive definite and semi-definite, respectively. Systems of this type arise frequently in quadratic minimization problems, as well as in mixed finite element discretizations of fluid flow equations. The author uses the Uzawa algorithm to precondition the matrix equations.
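The Uzawa iteration for such block systems can be sketched as follows. This is a generic, textbook illustration with randomly generated blocks and an exact inner solve, not Welfert's inexact variant; replacing solve(A, .) by a few steps of an inner iteration gives the inexact form the paper analyzes.

```python
import numpy as np

# Uzawa iteration for the symmetric indefinite block system
#   [ A    B ] [x]   [f]
#   [ B^T -C ] [y] = [g],
# with A symmetric positive definite and C symmetric positive semi-definite.
rng = np.random.default_rng(0)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)             # SPD block
B = rng.standard_normal((n, m))
C = np.zeros((m, m))                    # semi-definite block (zero here)

x_true = rng.standard_normal(n)
y_true = rng.standard_normal(m)
f = A @ x_true + B @ y_true
g = B.T @ x_true - C @ y_true

# Step size from the Schur complement S = B^T A^{-1} B + C:
# the outer iteration converges for 0 < tau < 2 / lambda_max(S).
S = B.T @ np.linalg.solve(A, B) + C
tau = 1.0 / np.linalg.eigvalsh(S).max()

y = np.zeros(m)
for _ in range(5000):
    x = np.linalg.solve(A, f - B @ y)    # inner solve with the A block
    y = y + tau * (B.T @ x - C @ y - g)  # dual (Lagrange multiplier) update

print(np.linalg.norm(x - x_true), np.linalg.norm(y - y_true))
```

The error in y contracts by the factor (I - tau S) per sweep, which is why the step size must be matched to the Schur complement spectrum.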
Gamma-ray spectral analysis algorithm library
Energy Science and Technology Software Center
2013-05-06
The routines of the Gauss Algorithms library are used to implement special purpose products that need to analyze gamma-ray spectra from Ge semiconductor detectors as a part of their function. These routines provide the ability to calibrate energy, calibrate peakwidth, search for peaks, search for regions, and fit the spectral data in a given region to locate gamma rays.
Gamma-ray Spectral Analysis Algorithm Library
Energy Science and Technology Software Center
1997-09-25
The routines of the Gauss Algorithm library are used to implement special purpose products that need to analyze gamma-ray spectra from Ge semiconductor detectors as a part of their function. These routines provide the ability to calibrate energy, calibrate peakwidth, search for peaks, search for regions, and fit the spectral data in a given region to locate gamma rays.
PDES. FIPS Standard Data Encryption Algorithm
Nessett, D.N.
1992-03-03
PDES performs the National Bureau of Standards FIPS Pub. 46 data encryption/decryption algorithm used for the cryptographic protection of computer data. The DES algorithm is designed to encipher and decipher blocks of data consisting of 64 bits under control of a 64-bit key. The key is generated in such a way that each of the 56 bits used directly by the algorithm is random and the remaining 8 error-detecting bits are set to make the parity of each 8-bit byte of the key odd, i.e., there is an odd number of 1 bits in each 8-bit byte. Each member of a group of authorized users of encrypted computer data must have the key that was used to encipher the data in order to use it. Data can be recovered from cipher only by using exactly the same key used to encipher it, but with the schedule of addressing the key bits altered so that the deciphering process is the reverse of the enciphering process. A block of data to be enciphered is subjected to an initial permutation, then to a complex key-dependent computation, and finally to a permutation which is the inverse of the initial permutation. Two PDES routines are included; both perform the same calculation. One, identified as FDES.MAR, is designed to achieve speed in execution, while the other, identified as PDES.MAR, presents a clearer view of how the algorithm is executed.
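The odd-parity convention for DES key bytes described above can be sketched as follows. This is an illustrative check of the parity rule only, not the PDES code itself.

```python
# Each 8-bit byte of a 64-bit DES key carries 7 key bits plus one
# parity bit, chosen so the byte has an odd number of 1 bits.
def set_odd_parity(key_bytes):
    """Force the low bit of each byte so its popcount is odd."""
    out = bytearray()
    for b in key_bytes:
        if bin(b).count("1") % 2 == 0:
            b ^= 0x01          # flip the parity bit to make the count odd
        out.append(b)
    return bytes(out)

def has_odd_parity(key_bytes):
    """True if every byte contains an odd number of 1 bits."""
    return all(bin(b).count("1") % 2 == 1 for b in key_bytes)

key = set_odd_parity(bytes(range(8)))
print(has_odd_parity(key))  # True
```

The parity bits detect single-bit corruption of a stored or transmitted key; only the remaining 56 bits enter the cipher's key schedule.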
Control algorithms for autonomous robot navigation
Jorgensen, C.C.
1985-09-20
This paper examines control algorithm requirements for autonomous robot navigation outside laboratory environments. Three aspects of navigation are considered: navigation control in explored terrain, environment interactions with robot sensors, and navigation control in unanticipated situations. Major navigation methods are presented and relevance of traditional human learning theory is discussed. A new navigation technique linking graph theory and incidental learning is introduced.
Variable-energy drift-tube linear accelerator
Swenson, Donald A.; Boyd, Jr., Thomas J.; Potter, James M.; Stovall, James E.
1984-01-01
A linear accelerator system includes a plurality of post-coupled drift-tubes wherein each post coupler is bistably positionable to either of two positions which result in different field distributions. With binary control over a plurality of post couplers, a significant cumulative effect in the resulting field distribution is achieved, yielding a variable-energy drift-tube linear accelerator.
Linear Concentrator System Basics for Concentrating Solar Power
Office of Energy Efficiency and Renewable Energy (EERE)
Linear concentrating solar power (CSP) collectors capture the sun's energy with large mirrors that reflect and focus the sunlight onto a linear receiver tube. The receiver contains a fluid that is heated by the sunlight and then used to heat a traditional power cycle that spins a turbine that drives a generator to produce electricity.
Variable-energy drift-tube linear accelerator
Swenson, D.A.; Boyd, T.J. Jr.; Potter, J.M.; Stovall, J.E.
A linear accelerator system includes a plurality of post-coupled drift-tubes wherein each post coupler is bistably positionable to either of two positions which result in different field distributions. With binary control over a plurality of post couplers, a significant cumulative effect in the resulting field distribution is achieved, yielding a variable-energy drift-tube linear accelerator.
Differentially pumped dual linear quadrupole ion trap mass spectrometer
Owen, Benjamin C.; Kenttamaa, Hilkka I.
2015-10-20
The present disclosure provides a new tandem mass spectrometer and methods of using the same for analyzing charged particles. The differentially pumped dual linear quadrupole ion trap mass spectrometer of the present disclosure includes a combination of two linear quadrupole ion trap (LQIT) mass spectrometers with differentially pumped vacuum chambers.
Drift tube suspension for high intensity linear accelerators
Liska, Donald J.; Schamaun, Roger G.; Clark, Donald C.; Potter, R. Christopher; Frank, Joseph A.
1982-01-01
The disclosure relates to a drift tube suspension for high intensity linear accelerators. The system comprises a series of box-section girders independently adjustably mounted on a linear accelerator. A plurality of drift tube holding stems are individually adjustably mounted on each girder.
Drift tube suspension for high intensity linear accelerators
Liska, D.J.; Schamaun, R.G.; Clark, D.C.; Potter, R.C.; Frank, J.A.
1980-03-11
The disclosure relates to a drift tube suspension for high intensity linear accelerators. The system comprises a series of box-section girders independently adjustably mounted on a linear accelerator. A plurality of drift tube holding stems are individually adjustably mounted on each girder.
Machinist Pipeline/Apprentice Program Program Description
U.S. Department of Energy (DOE) - all webpages (Extended Search)
Program Year 2008 State Energy Program Formula
Energy.gov [DOE] (indexed site)
Existing Facilities Rebate Program
The NYSERDA Existing Facilities program merges the former Peak Load Reduction and Enhanced Commercial and Industrial Performance programs. The new program offers a broad array of different...
Nonlinear vs. linear biasing in Trp-cage folding simulations
Spiwok, Vojtěch; Oborský, Pavel; Králová, Blanka; Pazúriková, Jana
2015-03-21
Biased simulations have great potential for the study of slow processes, including protein folding. Atomic motions in molecules are nonlinear, which suggests that simulations with enhanced sampling of collective motions traced by nonlinear dimensionality reduction methods may perform better than linear ones. In this study, we compare an unbiased folding simulation of the Trp-cage miniprotein with metadynamics simulations using both linear (principal component analysis) and nonlinear (Isomap) low dimensional embeddings as collective variables. Folding of the miniprotein was successfully simulated in 200 ns simulations with both linear and nonlinear motion biasing. The folded state was correctly predicted as the free energy minimum in both simulations. We found that the advantage of linear motion biasing is that it can sample a larger conformational space, whereas the advantage of nonlinear motion biasing lies in slightly better resolution of the resulting free energy surface. In terms of sampling efficiency, both methods are comparable.
Flexible Language Constructs for Large Parallel Programs
Rosing, Matt; Schnabel, Robert
1994-01-01
The goal of the research described in this article is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. In this article, we give an overview of the language and discuss some of the critical implementation details.
Semi-Implicit Reversible Algorithms for Rigid Body Rotational Dynamics
Nukala, Phani K; Shelton Jr, William Allison
2006-09-01
This paper presents two semi-implicit algorithms based on splitting methodology for rigid body rotational dynamics. The first algorithm is a variation of partitioned Runge-Kutta (PRK) methodology that can be formulated as a splitting method. The second algorithm is akin to a multiple time stepping scheme and is based on modified Crouch-Grossman (MCG) methodology, which can also be expressed as a splitting algorithm. These algorithms are second-order accurate and time-reversible; however, they are not Poisson integrators, i.e., non-symplectic. These algorithms conserve some of the first integrals of motion, but some others are not conserved; however, the fluctuations in these invariants are bounded over exponentially long time intervals. These algorithms exhibit excellent long-term behavior because of their reversibility property and their (approximate) Poisson structure preserving property. The numerical results indicate that the proposed algorithms exhibit superior performance compared to some of the currently well known algorithms such as the Simo-Wong algorithm, Newmark algorithm, discrete Moser-Veselov algorithm, Lewis-Simo algorithm, and the LIEMID[EA] algorithm.
Linear-scaling implementation of the direct random-phase approximation
Kállay, Mihály
2015-05-28
We report the linear-scaling implementation of the direct random-phase approximation (dRPA) for closed-shell molecular systems. As a bonus, linear-scaling algorithms are also presented for the second-order screened exchange extension of dRPA as well as for the second-order Møller-Plesset (MP2) method and its spin-scaled variants. Our approach is based on an incremental scheme which is an extension of our previous local correlation method [Rolik et al., J. Chem. Phys. 139, 094105 (2013)]. The approach extensively uses local natural orbitals to reduce the size of the molecular orbital basis of local correlation domains. In addition, we also demonstrate that using natural auxiliary functions [M. Kállay, J. Chem. Phys. 141, 244113 (2014)], the size of the auxiliary basis of the domains and thus that of the three-center Coulomb integral lists can be reduced by an order of magnitude, which results in significant savings in computation time. The new approach is validated by extensive test calculations for energies and energy differences. Our benchmark calculations also demonstrate that the new method enables dRPA calculations for molecules with more than 1000 atoms and 10,000 basis functions on a single processor.
Expanded studies of linear collider final focus systems at the Final Focus Test Beam
Tenenbaum, P.G.
1995-12-01
In order to meet their luminosity goals, linear colliders operating in the center-of-mass energy range from 350 to 1,500 GeV will need to deliver beams which are as small as a few nanometers tall, with x:y aspect ratios as large as 100. The Final Focus Test Beam (FFTB) is a prototype for the final focus demanded by these colliders: its purpose is to provide demagnifications equivalent to those in the future linear collider, which corresponds to a focused spot size in the FFTB of 1.7 microns (horizontal) by 60 nanometers (vertical). In order to achieve the desired spot sizes, the FFTB beam optics must be tuned to eliminate aberrations and other errors, and to ensure that the optics conform to the desired final conditions and the measured initial conditions of the beam. Using a combination of incoming-beam diagnostics, beam-based local diagnostics, and global tuning algorithms, the FFTB beam size has been reduced to a stable final size of 1.7 microns by 70 nanometers. In addition, the chromatic properties of the FFTB have been studied using two techniques and found to be acceptable. Descriptions of the hardware and techniques used in these studies are presented, along with results and suggestions for future research.
Better Buildings Neighborhood Program Business Models Guide: Program Administrator Description
Better Buildings Neighborhood Program Business Models Guide: Program Administrator Business Models, Program Administrator Description.
Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report
Saad, Yousef
2014-01-16
The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in the specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation; these are often difficult linear systems to solve by iterative methods. 4. We also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite; the strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available; it was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, pARMS (version 3). As part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the
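As a concrete illustration of the kind of preconditioned Krylov subspace solver studied in this project, here is a minimal Jacobi-preconditioned conjugate gradient sketch. This is the textbook PCG iteration on a small symmetric positive definite test problem, not the pARMS or GPU library code.

```python
import numpy as np

def pcg(A, b, M_inv_diag, tol=1e-10, maxiter=500):
    """Preconditioned conjugate gradient with a diagonal (Jacobi)
    preconditioner, applied as elementwise multiplication."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_inv_diag * r               # apply the preconditioner
    p = z.copy()
    rz = r @ z
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = M_inv_diag * r
        rz_new = r @ z
        p = z + (rz_new / rz) * p    # conjugate search direction update
        rz = rz_new
    return x

# SPD test matrix: 1D discrete Laplacian.
n = 50
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x = pcg(A, b, 1.0 / np.diag(A))
print(np.linalg.norm(A @ x - b))
```

More robust preconditioners such as the ILU variants discussed above replace the diagonal scaling with an approximate triangular factorization, but plug into the same iteration.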
INDDGO: Integrated Network Decomposition & Dynamic programming for Graph Optimization
Groer, Christopher S; Sullivan, Blair D; Weerapurage, Dinesh P
2012-10-01
It is well-known that dynamic programming algorithms can utilize tree decompositions to provide a way to solve some NP-hard problems on graphs where the complexity is polynomial in the number of nodes and edges in the graph, but exponential in the width of the underlying tree decomposition. However, there has been relatively little computational work done to determine the practical utility of such dynamic programming algorithms. We have developed software to construct tree decompositions using various heuristics and have created a fast, memory-efficient dynamic programming implementation for solving maximum weighted independent set. We describe our software and the algorithms we have implemented, focusing on memory saving techniques for the dynamic programming. We compare the running time and memory usage of our implementation with other techniques for solving maximum weighted independent set, including a commercial integer programming solver and a semi-definite programming solver. Our results indicate that it is possible to solve some instances where the underlying decomposition has width much larger than suggested by the literature. For certain types of problems, our dynamic programming code runs several times faster than these other methods.
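The dynamic programming idea can be illustrated in its simplest setting: when the graph is itself a tree (effectively a width-1 tree decomposition), maximum weighted independent set is solved by a two-state recursion per node. This sketch is illustrative only, not the INDDGO implementation, which handles general tree decompositions.

```python
import sys

def mwis_tree(adj, weight, root=0):
    """Maximum weighted independent set on a tree.
    adj: adjacency lists of a tree; weight: node -> weight."""
    sys.setrecursionlimit(10000)

    def dfs(v, parent):
        take = weight[v]   # best value if v is in the independent set
        skip = 0           # best value if v is excluded
        for u in adj[v]:
            if u == parent:
                continue
            t, s = dfs(u, v)
            take += s             # children of a taken node must be skipped
            skip += max(t, s)     # otherwise each child chooses freely
        return take, skip

    return max(dfs(root, -1))

# Path 0-1-2-3 with weights 3, 2, 1, 4: optimal set is {0, 3}, weight 7.
adj = {0: [1], 1: [0, 2], 2: [1, 3], 3: [2]}
print(mwis_tree(adj, {0: 3, 1: 2, 2: 1, 3: 4}))  # 7
```

On a width-w decomposition the same recursion keeps a table over the 2^(w+1) subsets of each bag rather than two states per node, which is the source of the exponential dependence on width mentioned above.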
A garbage collection algorithm for shared memory parallel processors
Crammond, J.
1988-12-01
This paper describes a technique for adapting the Morris sliding garbage collection algorithm to execute on parallel machines with shared memory. The algorithm is described within the framework of an implementation of the parallel logic language Parlog. However, the algorithm is a general one and can easily be adapted to parallel Prolog systems and to other languages. The performance of the algorithm executing a few simple Parlog benchmarks is analyzed. Finally, it is shown how the technique for parallelizing the sequential algorithm can be adapted for a semi-space copying algorithm.
Krumel, L.J.
1996-12-31
The Atmospheric Radiation Measurement Program is a multi-laboratory, interagency program as part of DOE's principal entry into the US Global Change Research Program. Two issues addressed are the radiation budget and its spectral dependence, and radiative and other properties of clouds. Measures of solar flux divergence and energy exchanges between clouds, the earth, its oceans, and the atmosphere through various altitudes are sought. Additionally, the program seeks to provide measurements to calibrate satellite radiance products and validate their associated flux retrieval algorithms. Unmanned Aerospace Vehicles fly long, extended missions. MPIR is one of the primary instruments on the ARM-UAV campaigns. A shutter mechanism has been developed and flown as part of an airborne imaging radiometer having application to spacecraft or other applications requiring low vibration, high reliability, and long life. The device could be employed in other cases where a reciprocating platform is needed. Typical shutters and choppers utilize a spinning disc, or in very small instruments, a vibrating vane to continually interrupt incident light or radiation that enters the system. A spinning disk requires some sort of bearings that usually have limited life, and at a minimum introduce issues of reliability. Friction, lubrication and contamination always remain critical areas of concern, as well as the need for power to operate. Dual vibrating vanes may be dynamically well balanced as a set and are frictionless. However, these are limited by size in a practical sense. In addition, multiples of these devices are difficult to synchronize.
Human Reliability Program Overview
Bodin, Michael
2012-09-25
This presentation covers the high points of the Human Reliability Program, including certification/decertification, critical positions, due process, organizational structure, program components, personnel security, an overview of the US DOE reliability program, retirees and academia, and security program integration.
Vehicle Technologies Program Overview
none,
2006-09-05
Overview of the Vehicle Technologies Program including external assessment and market view; internal assessment, program history and progress; program justification and federal role; program vision, mission, approach, strategic goals, outputs, and outcomes; and performance goals.
Utility Partnerships Program Overview
2014-10-03
Document describes the Utility Partnerships Program within the U.S. Department of Energy's Federal Energy Management Program.
Wisconsin Clean Transportation Program
2011 DOE Hydrogen and Fuel Cells Program, and Vehicle Technologies Program Annual Merit Review and Peer Evaluation
This document is the HQ Mediation Program's brochure. It generally discusses the services the program offers.
Non-linear stochastic growth rates and redshift space distortions
Jennings, Elise; Jennings, David
2015-04-09
The linear growth rate is commonly defined through a simple deterministic relation between the velocity divergence and the matter overdensity in the linear regime. We introduce a formalism that extends this to a non-linear, stochastic relation between θ = ∇ ∙ v(x,t)/aH and δ. This provides a new phenomenological approach that examines the conditional mean <θ|δ>, together with the fluctuations of θ around this mean. We measure these stochastic components using N-body simulations and find they are non-negative and increase with decreasing scale, from ~10 per cent at k < 0.2 h Mpc{sup -1} to 25 per cent at k ~ 0.45 h Mpc{sup -1} at z = 0. Both the stochasticity and the non-linearity are more pronounced for haloes, M ≤ 5 × 10{sup 12} M⊙ h{sup -1}, compared to the dark matter at z = 0 and 1. Non-linear growth effects manifest themselves as a rotation of the mean <θ|δ> away from the linear theory prediction -f{sub LT}δ, where f{sub LT} is the linear growth rate. This rotation increases with wavenumber, k, and we show that it can be well described by second-order Lagrangian perturbation theory (2LPT) for k < 0.1 h Mpc{sup -1}. The stochasticity in the θ-δ relation, however, is not so simply described by 2LPT, and we discuss its impact on measurements of f{sub LT} from two-point statistics in redshift space. Given that the relationship between δ and θ is stochastic and non-linear, this will have implications for the interpretation and precision of f{sub LT} extracted using models which assume a linear, deterministic expression.
Structure/Function Studies of Proteins Using Linear Scaling Quantum Mechanical Methodologies
Merz, K. M.
2004-07-19
We developed a linear-scaling semiempirical quantum mechanical (QM) program (DivCon). Using DivCon we can now routinely carry out calculations at the fully QM level on systems containing up to about 15,000 atoms. We also implemented a Poisson-Boltzmann (PB) method in DivCon in order to compute solvation free energies and electrostatic properties of macromolecules in solution. This new suite of programs has allowed us to bring the power of quantum mechanics to bear on important biological problems associated with protein folding, drug design, and enzyme catalysis. Hence, we have garnered insights into biological systems that have heretofore been impossible to obtain using classical simulation techniques.
Fast computation algorithms for speckle pattern simulation
Nascov, Victor; Samoilă, Cornel; Ursuţiu, Doru
2013-11-13
We present our development of a series of efficient computation algorithms, generally usable for calculating light diffraction and particularly for speckle pattern simulation. We rely mainly on scalar diffraction theory in the form of the Rayleigh-Sommerfeld diffraction formula and its Fresnel approximation. Our algorithms are based on a special form of the convolution theorem and the Fast Fourier Transform. They evaluate the diffraction formula much faster than direct computation, and we have circumvented the restrictions on the relative sizes of the input and output domains encountered in commonly used procedures. Moreover, the input and output planes can be tilted with respect to each other, and the output domain can be shifted off-axis.
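The core trick the abstract alludes to, evaluating a diffraction integral as a convolution via the FFT, can be sketched as follows. This is a minimal illustration of Fresnel propagation with a transfer function, not the authors' code; the function name and sampling parameters are assumptions.

```python
import numpy as np

def fresnel_propagate(u0, wavelength, z, dx):
    """Propagate a sampled complex field u0 (N x N, sample spacing dx) a
    distance z using the Fresnel transfer function, applied via the
    convolution theorem: U(z) = IFFT( FFT(U(0)) * H )."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, d=dx)                 # spatial frequencies (1/m)
    fxx, fyy = np.meshgrid(fx, fx)
    h = np.exp(-1j * np.pi * wavelength * z * (fxx**2 + fyy**2))
    return np.fft.ifft2(np.fft.fft2(u0) * h)
```

Because |H| = 1, the propagated field conserves energy, and z = 0 returns the input unchanged; both make quick sanity checks.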
Explicit 2-D Hydrodynamic FEM Program
Energy Science and Technology Software Center
1996-08-07
DYNA2D* is a vectorized, explicit, two-dimensional, axisymmetric and plane strain finite element program for analyzing the large deformation dynamic and hydrodynamic response of inelastic solids. DYNA2D* contains 13 material models and 9 equations of state (EOS) to cover a wide range of material behavior. The material models implemented in all machine versions are: elastic, orthotropic elastic, kinematic/isotropic elastic plasticity, thermoelastoplastic, soil and crushable foam, linear viscoelastic, rubber, high explosive burn, isotropic elastic-plastic, temperature-dependent elastic-plastic. The isotropic and temperature-dependent elastic-plastic models determine only the deviatoric stresses. Pressure is determined by one of 9 equations of state including linear polynomial, JWL high explosive, Sack Tuesday high explosive, Gruneisen, ratio of polynomials, linear polynomial with energy deposition, ignition and growth of reaction in HE, tabulated compaction, and tabulated.
Algorithmic Techniques for Massive Data Sets
Moses Charikar
2006-04-03
This report describes the progress made during the Early Career Principal Investigator (ECPI) project on Algorithmic Techniques for Large Data Sets. Research was carried out in the areas of dimension reduction, clustering and finding structure in data, aggregating information from different sources and designing efficient methods for similarity search for high dimensional data. A total of nine different research results were obtained and published in leading conferences and journals.
Automated Algorithm for MFRSR Data Analysis
M. D. Alexandrov and B. Cairns (Columbia University and NASA Goddard Institute for Space Studies, New York, New York); A. A. Lacis and B. E. Carlson (NASA Goddard Institute for Space Studies); A. Marshak (NASA Goddard Space Flight Center, Greenbelt, Maryland). We present a substantial upgrade of our previously developed
Effective Yukawa couplings and flavor-changing Higgs boson decays at linear colliders
Gabrielli, E.; Mele, B.
2011-04-01
We analyze the advantages of a linear-collider program for testing a recent theoretical proposal where the Higgs boson Yukawa couplings are radiatively generated, keeping unchanged the standard-model mechanism for electroweak-gauge-symmetry breaking. Fermion masses arise at a large energy scale through an unknown mechanism, and the standard model at the electroweak scale is regarded as an effective field theory. In this scenario, Higgs boson decays into photons and electroweak gauge-boson pairs are considerably enhanced for a light Higgs boson, which makes a signal observation at the LHC straightforward. On the other hand, the clean environment of a linear collider is required to directly probe the radiative fermionic sector of the Higgs boson couplings. Also, we show that the flavor-changing Higgs boson decays are dramatically enhanced with respect to the standard model. In particular, we find a measurable branching ratio in the range (10{sup -4}-10{sup -3}) for the decay H{yields}bs for a Higgs boson lighter than 140 GeV, depending on the high-energy scale where Yukawa couplings vanish. We present a detailed analysis of the Higgs boson production cross sections at linear colliders for interesting decay signatures, as well as branching-ratio correlations for different flavor-conserving/nonconserving fermionic decays.
Linear beam-beam tune shift calculations for the Tevatron Collider
Johnson, D.
1989-01-12
A realistic estimate of the linear beam-beam tune shift is necessary for the selection of an optimum working point in the tune diagram. Estimates of the beam-beam tune shift using the ''Round Beam Approximation'' (RBA) have overestimated the tune shift for the Tevatron. For a hadron machine with unequal lattice functions and beam sizes, an explicit calculation using the beam size at the crossings is required. Calculations for various Tevatron lattices used in Collider operation are presented, along with comparisons between the RBA and the explicit calculation for elliptical beams. This paper discusses the calculation of the linear tune shift using the program SYNCH and the selection of a working point. The magnitude of the tune shift is influenced by the choice of crossing points in the lattice as determined by the pbar ''cogging'' effects. Current cogging procedures are also discussed, and results of tune-shift calculations at various crossing points in the lattice are presented. Finally, a comparison of early pbar tune measurements with the present linear tune shift calculations is presented. 17 refs., 13 figs., 3 tabs.
Evaluating cloud retrieval algorithms with the ARM BBHRP framework
Mlawer, E.; Dunn, M.; Shippert, T.; Troyan, D.; Johnson, K. L.; Miller, M. A.; Delamere, J.; Turner, D. D.; Jensen, M. P.; Flynn, C.; Shupe, M.; Comstock, J.; Long, C. N.; Clough, S. T.; Sivaraman, C.; Khaiyer, M.; Xie, S.; Rutan, D.; Minnis, P.
2008-03-10
Climate and weather prediction models require accurate calculations of vertical profiles of radiative heating. Although heating rate calculations cannot be directly validated due to the lack of corresponding observations, surface and top-of-atmosphere measurements can indirectly establish the quality of computed heating rates through validation of the calculated irradiances at the atmospheric boundaries. The ARM Broadband Heating Rate Profile (BBHRP) project, a collaboration of all the working groups in the program, was designed with these heating rate validations as a key objective. Given the large dependence of radiative heating rates on cloud properties, a critical component of BBHRP radiative closure analyses has been the evaluation of cloud microphysical retrieval algorithms. This evaluation is an important step in establishing the necessary confidence in the continuous profiles of computed radiative heating rates produced by BBHRP at the ARM Climate Research Facility (ACRF) sites that are needed for modeling studies. This poster details the continued effort to evaluate cloud property retrieval algorithms within the BBHRP framework, a key focus of the project this year. A requirement for the computation of accurate heating rate profiles is a robust cloud microphysical product that captures the occurrence, height, and phase of clouds above each ACRF site. Various approaches to retrieve the microphysical properties of liquid, ice, and mixed-phase clouds have been processed in BBHRP for the ACRF Southern Great Plains (SGP) and the North Slope of Alaska (NSA) sites. These retrieval methods span a range of assumptions concerning the parameterization of cloud location, particle density, size, shape, and involve different measurement sources. We will present the radiative closure results from several different retrieval approaches for the SGP site, including those from Microbase, the current 'reference' retrieval approach in BBHRP. At the NSA, mixed-phase clouds and
Cable Damage Detection System and Algorithms Using Time Domain Reflectometry
Clark, G A; Robbins, C L; Wade, K A; Souza, P R
2009-03-24
This report describes the hardware system and the set of algorithms we have developed for detecting damage in cables for the Advanced Development and Process Technologies (ADAPT) Program, part of the W80 Life Extension Program (LEP). The system could be generalized for application to other systems in the future. Critical cables can undergo various types of damage (e.g., short circuits, open circuits, punctures, compression) that manifest as changes in the dielectric/impedance properties of the cables. For our specific problem, only one end of the cable is accessible, and no exemplars of actual damage are available. This work addresses the detection of dielectric/impedance anomalies in transient time domain reflectometry (TDR) measurements on the cables. The approach is to interrogate the cable using TDR techniques, in which a known pulse is inserted into the cable and reflections from the cable are measured. The key operating principle is that any important cable damage will manifest itself as an electrical impedance discontinuity that can be measured in the TDR response signal. Machine learning classification algorithms were effectively eliminated from consideration because only a small number of cables was available for testing, so a sufficient sample size was not attainable. Nonetheless, a key requirement is to achieve very high probability of detection and very low probability of false alarm. The approach is to compare TDR signals from possibly damaged cables to signals, or to an empirical model, derived from reference cables that are known to be undamaged. This requires that the TDR signals be reasonably repeatable from test to test on the same cable, and from cable to cable. Empirical studies show that repeatability is the 'long pole in the tent' for damage detection, because it has been difficult to achieve; this one factor dominated the project. The two-step model-based approach is
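The comparison step described above, scoring a TDR trace against reference traces from known-undamaged cables, might look like the following toy sketch. This is a simple z-score detector of our own, not the report's two-step model-based approach; the function name is hypothetical.

```python
import numpy as np

def tdr_anomaly_score(measured, references):
    """Score a TDR trace against an ensemble of traces from known-good cables.

    Returns the largest point-wise deviation from the reference mean, in
    units of the point-wise reference standard deviation. Large scores
    suggest an impedance discontinuity (possible damage)."""
    refs = np.asarray(references, dtype=float)
    trace = np.asarray(measured, dtype=float)
    mu = refs.mean(axis=0)
    sigma = refs.std(axis=0) + 1e-12       # guard against zero variance
    return float(np.max(np.abs(trace - mu) / sigma))
```

As the report stresses, this only works if the reference traces are repeatable: large reference scatter inflates sigma and masks real discontinuities.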
Method and apparatus of highly linear optical modulation
DeRose, Christopher; Watts, Michael R.
2016-05-03
In a new optical intensity modulator, a nonlinear change in refractive index is used to balance the nonlinearities in the optical transfer function in a way that leads to highly linear optical intensity modulation.
Linear Scaling Electronic Structure Methods with Periodic Boundary Conditions
Gustavo E. Scuseria
2008-02-08
This project concerned the methodological development and computational implementation of linear scaling quantum chemistry methods for the accurate calculation of the electronic structure and properties of periodic systems (solids, surfaces, and polymers), and their application to chemical problems of DOE relevance.
Scaling Up Coordinate Descent Algorithms for Large ℓ1 Regularization Problems
Scherrer, Chad; Halappanavar, Mahantesh; Tewari, Ambuj; Haglin, David J.
2012-07-03
We present a generic framework for parallel coordinate descent (CD) algorithms that has as special cases the original sequential algorithms of Cyclic CD and Stochastic CD, as well as the recent parallel Shotgun algorithm of Bradley et al. We introduce two novel parallel algorithms that are also special cases---Thread-Greedy CD and Coloring-Based CD---and give performance measurements for an OpenMP implementation of these.
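For reference, the sequential Cyclic CD special case mentioned above can be sketched in a few lines for an ℓ1-regularized least-squares objective. This is our own illustration of the standard coordinate update, not the paper's parallel OpenMP code.

```python
import numpy as np

def soft_threshold(z, g):
    """Proximal operator of g*|.|: shrink z toward zero by g."""
    return np.sign(z) * max(abs(z) - g, 0.0)

def lasso_cd(x_mat, y, lam, n_iter=100):
    """Cyclic coordinate descent for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    n, p = x_mat.shape
    w = np.zeros(p)
    r = y.astype(float).copy()                 # running residual y - Xw
    col_sq = (x_mat ** 2).sum(axis=0)          # per-coordinate curvature
    for _ in range(n_iter):
        for j in range(p):
            if col_sq[j] == 0.0:
                continue
            rho = x_mat[:, j] @ r + col_sq[j] * w[j]
            w_new = soft_threshold(rho, lam) / col_sq[j]
            r += x_mat[:, j] * (w[j] - w_new)  # keep residual consistent
            w[j] = w_new
    return w
```

Shotgun-style parallelism updates several coordinates at once from a stale residual; the sketch above is the sequential baseline those algorithms specialize.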
Free piston variable-stroke linear-alternator generator
Haaland, Carsten M.
1998-01-01
A free-piston variable stroke linear-alternator AC power generator for a combustion engine. An alternator mechanism and oscillator system generates AC current. The oscillation system includes two oscillation devices each having a combustion cylinder and a flying turnbuckle. The flying turnbuckle moves in accordance with the oscillation device. The alternator system is a linear alternator coupled between the two oscillation devices by a slotted connecting rod.
Free piston variable-stroke linear-alternator generator
Haaland, C.M.
1998-12-15
A free-piston variable stroke linear-alternator AC power generator for a combustion engine is described. An alternator mechanism and oscillator system generates AC current. The oscillation system includes two oscillation devices each having a combustion cylinder and a flying turnbuckle. The flying turnbuckle moves in accordance with the oscillation device. The alternator system is a linear alternator coupled between the two oscillation devices by a slotted connecting rod. 8 figs.
Producing Linear Alpha Olefins From Biomass - Energy Innovation Portal
Great Lakes Bioenergy Research Center. Linear alpha olefins (LAOs) are valuable commodity chemicals traditionally derived from petroleum. They are versatile building blocks for making a range of chemical products like polyethylene, synthetic oils, plasticizers, detergents and oilfield fluids. Relying on fossil fuel to manufacture LAOs is problematic. Not only are the standard methods
International Linear Collider Technical Design Report - Volume 2: Physics
Office of Scientific and Technical Information (OSTI)
Baer, Howard; Barklow, Tim; Fujii, Keisuke; Gao, Yuanning; Hoang, Andre; Kanemura, Shinya; List, Jenny; Logan, Heather E.; Nomerotski, Andrei; Perelstein, Maxim; Peskin, Michael E.; Poschl, Roman; Reuter, Jurgen; Riemann, Sabine; Savoy-Navarro,
JLab Supports International Linear Collider Cavity Development Work |
NEWPORT NEWS, Va., Feb. 12, 2008 - It's not often that major-league baseball and nuclear physics get to share the limelight, but that's what's happening at the Department of Energy's Jefferson Lab. The baseball connection involves a nine-cell niobium cavity developed by KEK accelerator scientists in Japan as one of several designs being tested for
Direct Probes of Linearly Polarized Gluons inside Unpolarized Hadrons
Boer, Daniel; Brodsky, Stanley J.; Mulders, Piet J.; Pisano, Cristian; /Cagliari U. /INFN, Cagliari
2011-02-07
We show that the unmeasured distribution of linearly polarized gluons inside unpolarized hadrons can be directly probed in jet or heavy quark pair production both in electron-hadron and hadron-hadron collisions. We present expressions for the simplest cos 2{phi} asymmetries and estimate their maximal value in the particular case of electron-hadron collisions. Measurements of the linearly polarized gluon distribution in the proton should be feasible in future EIC or LHeC experiments.
Accessing the Distribution of Linearly Polarized Gluons in Unpolarized Hadrons
Boer, Daniel; Brodsky, Stanley J.; Mulders, Piet J.; Pisano, Cristian; /Cagliari U. /INFN, Cagliari
2011-08-19
Gluons inside unpolarized hadrons can be linearly polarized provided they have a nonzero transverse momentum. The simplest and theoretically safest way to probe this distribution of linearly polarized gluons is through cos2{phi} asymmetries in heavy quark pair or dijet production in electron-hadron collisions. Future Electron-Ion Collider (EIC) or Large Hadron electron Collider (LHeC) experiments are ideally suited for this purpose. Here we estimate the maximum asymmetries for EIC kinematics.
Proceedings of the Oak Ridge Electron Linear Accelerator (ORELA) Workshop
Dunn, M.E.
2006-02-27
The Oak Ridge National Laboratory (ORNL) organized a workshop at ORNL July 14-15, 2005, to highlight the unique measurement capabilities of the Oak Ridge Electron Linear Accelerator (ORELA) facility and to emphasize the important role of ORELA for performing differential cross-section measurements in the low-energy resonance region that is important for nuclear applications such as nuclear criticality safety, nuclear reactor and fuel cycle analysis, stockpile stewardship, weapons research, medical diagnosis, and nuclear astrophysics. The ORELA workshop (hereafter referred to as the Workshop) provided the opportunity to exchange ideas and information pertaining to nuclear cross-section measurements and their importance for nuclear applications from a variety of perspectives throughout the U.S. Department of Energy (DOE). Approximately 50 people, representing DOE, universities, and seven U.S. national laboratories, attended the Workshop. The objective of the Workshop was to emphasize the technical community endorsement for ORELA in meeting nuclear data challenges in the years to come. The Workshop further emphasized the need for a better understanding of the gaps in basic differential nuclear measurements and identified the efforts needed to return ORELA to a reliable functional measurement facility. To accomplish the Workshop objective, nuclear data experts from national laboratories and universities were invited to provide talks emphasizing the unique and vital role of the ORELA facility for addressing nuclear data needs. ORELA is operated on a full cost-recovery basis with no single sponsor providing complete base funding for the facility. Consequently, different programmatic sponsors benefit by receiving accurate cross-section data measurements at a reduced cost to their respective programs; however, leveraging support for a complex facility such as ORELA has a distinct disadvantage in that the programmatic funds are only used to support program
New algorithms for the symmetric tridiagonal eigenvalue computation
Pan, V.
1994-12-31
The author presents new algorithms that accelerate the bisection method for the symmetric eigenvalue problem. The algorithms rely on some new techniques, including acceleration of Newton's iteration, which can also be applied to accelerate other iterative processes, in particular iterative algorithms for approximating polynomial zeros.
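For context, the plain bisection method being accelerated counts eigenvalues below a shift using a Sturm-sequence recurrence and then bisects on that count. A minimal sketch of that baseline (our own, with hypothetical function names, not the author's accelerated variant):

```python
def count_eigs_below(d, e, x):
    """Number of eigenvalues of the symmetric tridiagonal matrix with
    diagonal d and off-diagonal e that are less than x, via the
    Sturm-sequence (LDL^T sign count) recurrence."""
    count, q = 0, 1.0
    for i in range(len(d)):
        off = e[i - 1] ** 2 if i > 0 else 0.0
        q = d[i] - x - (off / q if q != 0.0 else off / 1e-300)
        if q < 0.0:
            count += 1
    return count

def kth_eigenvalue(d, e, k, lo, hi, tol=1e-10):
    """Bisection for the k-th smallest eigenvalue (k = 1, 2, ...),
    assumed to lie in the bracket [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if count_eigs_below(d, e, mid) >= k:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)
```

Each bisection step halves the bracket, so the cost is O(n log((hi-lo)/tol)) per eigenvalue; Newton-type acceleration of the kind the abstract describes replaces the fixed halving with faster-converging updates.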
Low-level RF signal processing for the Next Linear Collider Test Accelerator
Holmes, S.; Ziomek, C.; Adolphsen, C.
1997-05-12
In the X-band accelerator system for the Next Linear Collider Test Accelerator (NLCTA), the Low Level RF (LLRF) drive system must be very phase stable but, concurrently, very phase agile. Phase agility is needed to make the SLED (SLAC Energy Doubler) power multiplier systems work and to shape the RF waveforms to compensate beam loading in the accelerator sections. Similarly, precision fast phase and amplitude monitors are required to view, track, and feed back on RF signals at various locations throughout the system. The LLRF is composed of several subsystems: the RF Reference System generates and distributes a reference 11.424 GHz signal to all of the RF stations, the Signal Processing Chassis creates the RF waveforms with the appropriate phase modulation, and the Phase Detector Assembly measures the amplitude and phase of monitored RF signals. The LLRF is run via VXI instrumentation, controlled using HP VEE graphical programming software. Programs have been developed to shape the RF waveform, calibrate the phase modulators and demodulators, and display the measured waveforms. This paper describes these and other components of the LLRF system.
Evaluation of machine learning algorithms for prediction of regions of high RANS uncertainty
Ling, Julia; Templeton, Jeremy Alan
2015-08-04
Reynolds Averaged Navier Stokes (RANS) models are widely used in industry to predict fluid flows, despite their acknowledged deficiencies. Not only do RANS models often produce inaccurate flow predictions, but there are very limited diagnostics available to assess RANS accuracy for a given flow configuration. If experimental or higher fidelity simulation results are not available for RANS validation, there is no reliable method to evaluate RANS accuracy. This paper explores the potential of utilizing machine learning algorithms to identify regions of high RANS uncertainty. Three different machine learning algorithms were evaluated: support vector machines, Adaboost decision trees, and random forests. The algorithms were trained on a database of canonical flow configurations for which validated direct numerical simulation or large eddy simulation results were available, and were used to classify RANS results on a point-by-point basis as having either high or low uncertainty, based on the breakdown of specific RANS modeling assumptions. Classifiers were developed for three different basic RANS eddy viscosity model assumptions: the isotropy of the eddy viscosity, the linearity of the Boussinesq hypothesis, and the non-negativity of the eddy viscosity. It is shown that these classifiers are able to generalize to flows substantially different from those on which they were trained. As a result, feature selection techniques, model evaluation, and extrapolation detection are discussed in the context of turbulence modeling applications.
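The labeling step described above, flagging points where a specific eddy-viscosity assumption breaks down, can be illustrated for the non-negativity assumption. This is a hedged sketch of our own (the paper's actual markers and training pipeline are more involved, and the function name is hypothetical):

```python
import numpy as np

def negative_viscosity_label(a, s):
    """Given the anisotropic Reynolds stress tensor a_ij and the mean strain
    rate tensor s_ij at one point, fit the scalar eddy viscosity in the
    Boussinesq relation a_ij ~ -2 nu_t s_ij by least squares, and flag the
    point as high-uncertainty when the fitted nu_t is negative."""
    denom = 2.0 * np.sum(s * s)
    nu_t = -np.sum(a * s) / denom if denom > 0.0 else 0.0
    return nu_t, nu_t < 0.0
```

A classifier such as a random forest would then be trained on mean-flow features to predict this label at points where no high-fidelity data exist.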
Probabilistic Swinging Door Algorithm as Applied to Photovoltaic Power Ramping Event Detection
Florita, Anthony; Zhang, Jie; Brancucci Martinez-Anido, Carlo; Hodge, Bri-Mathias; Cui, Mingjian
2015-10-02
Photovoltaic (PV) power generation experiences power ramping events due to cloud interference. Depending on the extent of PV aggregation and local grid features, such power variability can be constructive or destructive to measures of uncertainty regarding renewable power generation; however, it directly influences contingency planning, production costs, and the overall reliable operation of power systems. For enhanced power system flexibility, and to help mitigate the negative impacts of power ramping, it is desirable to analyze events in a probabilistic fashion so degrees of beliefs concerning system states and forecastability are better captured and uncertainty is explicitly quantified. A probabilistic swinging door algorithm is developed and presented in this paper. It is then applied to a solar data set of PV power generation. The probabilistic swinging door algorithm builds on results from the original swinging door algorithm, first used for data compression in trend logging, and it is described by two uncertain parameters: (i) e, the threshold sensitivity to a given ramp, and (ii) s, the residual of the piecewise linear ramps. These two parameters determine the distribution of ramps and capture the uncertainty in PV power generation.
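The original swinging door algorithm that this work builds on is a simple piecewise-linear compression scheme. A minimal sketch, assuming scalar (time, power) samples and a single deviation threshold e (the probabilistic extension with the residual parameter s is not reproduced here):

```python
def swinging_door(points, e):
    """Classic swinging door compression sketch.
    points: list of (t, y) samples with strictly increasing t; e: deviation threshold.
    Returns the indices of the retained pivot points."""
    kept = [0]
    t0, y0 = points[0]
    up = float("inf")    # tightest upper-door slope seen so far
    lo = float("-inf")   # tightest lower-door slope seen so far
    for i in range(1, len(points)):
        t, y = points[i]
        up = min(up, (y + e - y0) / (t - t0))
        lo = max(lo, (y - e - y0) / (t - t0))
        if lo > up:                     # doors crossed: close the segment
            kept.append(i - 1)          # previous point becomes the new pivot
            t0, y0 = points[i - 1]
            up = (y + e - y0) / (t - t0)
            lo = (y - e - y0) / (t - t0)
    kept.append(len(points) - 1)
    return kept
```

On a constant-slope ramp the doors never close, so only the endpoints are kept; a change in ramp rate forces a new pivot, which is what makes the algorithm useful as a basis for ramp event detection.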
Zuehlsdorff, T. J.; Payne, M. C.; Hine, N. D. M.; Haynes, P. D.
2015-11-28
We present a solution of the full time-dependent density-functional theory (TDDFT) eigenvalue equation in the linear response formalism exhibiting a linear-scaling computational complexity with system size, without relying on the simplifying Tamm-Dancoff approximation (TDA). The implementation relies on representing the occupied and unoccupied subspaces with two different sets of in situ optimised localised functions, yielding a very compact and efficient representation of the transition density matrix of the excitation with the accuracy associated with a systematic basis set. The TDDFT eigenvalue equation is solved using a preconditioned conjugate gradient algorithm that is very memory-efficient. The algorithm is validated on a small test molecule and a good agreement with results obtained from standard quantum chemistry packages is found, with the preconditioner yielding a significant improvement in convergence rates. The method developed in this work is then used to reproduce experimental results of the absorption spectrum of bacteriochlorophyll in an organic solvent, where it is demonstrated that the TDA fails to reproduce the main features of the low energy spectrum, while the full TDDFT equation yields results in good qualitative agreement with experimental data. Furthermore, the need for explicitly including parts of the solvent into the TDDFT calculations is highlighted, making the treatment of large system sizes necessary that are well within reach of the capabilities of the algorithm introduced here. Finally, the linear-scaling properties of the algorithm are demonstrated by computing the lowest excitation energy of bacteriochlorophyll in solution. The largest systems considered in this work are of the same order of magnitude as a variety of widely studied pigment-protein complexes, opening up the possibility of studying their properties without having to resort to any semiclassical approximations to parts of the protein environment.
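The preconditioned conjugate gradient pattern that the solver relies on can be illustrated on a generic symmetric positive-definite system; this textbook sketch (pure Python, dense row lists) is not the specialised TDDFT eigenvalue solver, and the names below are assumptions for illustration:

```python
def pcg(A, b, M_inv, tol=1e-10, max_iter=200):
    """Textbook preconditioned conjugate gradient for A x = b, with A symmetric
    positive definite and M_inv applying the preconditioner to a residual."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = [bi - Axi for bi, Axi in zip(b, matvec(x))]   # initial residual
    z = M_inv(r)                                       # preconditioned residual
    p = list(z)
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rz / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:      # converged
            break
        z = M_inv(r)
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

A good preconditioner (here, e.g., a simple Jacobi scaling) reduces the iteration count, which is the improvement in convergence rate the abstract reports for its specialised preconditioner.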
A Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-06-24
In this paper, we propose a new unified differential evolution (uDE) algorithm for single objective global optimization. Instead of selecting among multiple mutation strategies as in the conventional differential evolution algorithm, this algorithm employs a single equation as the mutation strategy. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of different mutation strategies. Numerical tests using twelve basic unimodal and multimodal functions show promising performance of the proposed algorithm in comparison to conventional differential evolution algorithms.
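For context, one of the classical strategies that a unified mutation equation subsumes, DE/rand/1/bin, can be sketched as follows; the function and control-parameter choices here are illustrative, not the paper's uDE expression:

```python
import random

def differential_evolution(f, bounds, np_=20, F=0.5, CR=0.9, gens=200, seed=1):
    """Minimal DE/rand/1/bin sketch for minimising f over box bounds."""
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(np_)]
    fit = [f(x) for x in pop]
    for _ in range(gens):
        for i in range(np_):
            # mutation: v = x_r1 + F * (x_r2 - x_r3), with distinct indices
            r1, r2, r3 = rng.sample([j for j in range(np_) if j != i], 3)
            jrand = rng.randrange(d)  # guarantees at least one mutated gene
            trial = [pop[r1][k] + F * (pop[r2][k] - pop[r3][k])
                     if (rng.random() < CR or k == jrand) else pop[i][k]
                     for k in range(d)]
            ft = f(trial)
            if ft <= fit[i]:          # greedy one-to-one selection
                pop[i], fit[i] = trial, ft
    best = min(range(np_), key=lambda i: fit[i])
    return pop[best], fit[best]
```

The unified formulation replaces the choice among such strategy variants with a single parameterised expression.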
State Energy Program Competitive Financial Assistance Program
State Energy Program (SEP) dedicates a portion of its funding each year to provide competitively awarded financial assistance to U.S. states and territories to advance policies, programs, and market strategies.
Easy and hard testbeds for real-time search algorithms
Koenig, S.; Simmons, R.G.
1996-12-31
Although researchers have studied which factors influence the behavior of traditional search algorithms, currently not much is known about how domain properties influence the performance of real-time search algorithms. In this paper we demonstrate, both theoretically and experimentally, that Eulerian state spaces (a super set of undirected state spaces) are very easy for some existing real-time search algorithms to solve: even real-time search algorithms that can be intractable, in general, are efficient for Eulerian state spaces. Because traditional real-time search testbeds (such as the eight puzzle and gridworlds) are Eulerian, they cannot be used to distinguish between efficient and inefficient real-time search algorithms. It follows that one has to use non-Eulerian domains to demonstrate the general superiority of a given algorithm. To this end, we present two classes of hard-to-search state spaces and demonstrate the performance of various real-time search algorithms on them.
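As a concrete example of the class of algorithms discussed, a minimal LRTA*-style real-time search agent can be sketched as follows; the graph encoding and zero-initialised heuristic are assumptions for illustration:

```python
def lrta_star(graph, start, goal, max_steps=1000):
    """Minimal LRTA*-style agent: at each step move to the neighbour with the
    lowest cost-plus-heuristic, raising the current state's h-value in place.
    graph: dict mapping state -> list of (neighbour, edge_cost)."""
    h = {}                     # learned heuristic values, default 0
    s, steps = start, 0
    path = [start]
    while s != goal and steps < max_steps:
        best_n, best_f = None, float("inf")
        for n, c in graph[s]:
            f = c + h.get(n, 0)
            if f < best_f:
                best_n, best_f = n, f
        h[s] = max(h.get(s, 0), best_f)   # learning step: update h before moving
        s = best_n
        path.append(s)
        steps += 1
    return path
```

In Eulerian state spaces (such as undirected gridworlds) such agents avoid the pathological revisiting that makes some real-time search algorithms intractable elsewhere, which is the distinction the paper's hard testbeds are designed to expose.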
An Adaptive Unified Differential Evolution Algorithm for Global Optimization
Qiang, Ji; Mitchell, Chad
2014-11-03
In this paper, we propose a new adaptive unified differential evolution algorithm for single-objective global optimization. Instead of the multiple mutation strategies proposed in conventional differential evolution algorithms, this algorithm employs a single equation unifying multiple strategies into one expression. It has the virtue of mathematical simplicity and also provides users the flexibility for broader exploration of the space of mutation operators. By making all control parameters in the proposed algorithm self-adaptively evolve during the process of optimization, it frees the application users from the burden of choosing appropriate control parameters and also improves the performance of the algorithm. In numerical tests using thirteen basic unimodal and multimodal functions, the proposed adaptive unified algorithm shows promising performance in comparison to several conventional differential evolution algorithms.
Algorithms for Contact in a Multiphysics Environment
Energy Science and Technology Software Center
2001-12-19
Many codes require either a contact capability or a need to determine geometric proximity of non-connected topological entities (which is a subset of what contact requires). ACME is a library to provide services to determine contact forces and/or geometric proximity interactions. This includes generic capabilities such as determining points in Cartesian volumes, finding faces in Cartesian volumes, etc. ACME can be run in single or multi-processor mode (the basic algorithms have been tested up to 4500 processors).
A Monte Carlo algorithm for degenerate plasmas
Turrell, A.E.; Sherlock, M.; Rose, S.J.
2013-09-15
A procedure for performing Monte Carlo calculations of plasmas with an arbitrary level of degeneracy is outlined. It has possible applications in inertial confinement fusion and astrophysics. Degenerate particles are initialised according to the Fermi-Dirac distribution function, and scattering is via a Pauli-blocked binary collision approximation. The algorithm is tested against degenerate electron-ion equilibration, and the degenerate resistivity transport coefficient from unmagnetised first order transport theory. The code is applied to the cold fuel shell and alpha particle equilibration problem of inertial confinement fusion.
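The degenerate initialisation step can be illustrated with a simple rejection sampler over momentum magnitude; the units (taking E = p^2), the envelope estimate, and the function names are assumptions for illustration, not the paper's actual procedure:

```python
import math
import random

def sample_fermi_dirac(mu, kT, n, seed=0):
    """Rejection-sample n momentum magnitudes from a Fermi-Dirac distribution,
    density proportional to p^2 / (exp((p^2 - mu)/kT) + 1), in arbitrary units."""
    rng = random.Random(seed)
    pmax = math.sqrt(mu + 10 * kT)          # occupation is negligible beyond this
    def w(p):
        return p * p / (math.exp((p * p - mu) / kT) + 1.0)
    # crude grid estimate of the envelope, with a small safety margin
    wmax = 1.05 * max(w(pmax * i / 1000) for i in range(1, 1001))
    out = []
    while len(out) < n:
        p = rng.uniform(0.0, pmax)
        if rng.uniform(0.0, wmax) < w(p):   # accept with probability w(p)/wmax
            out.append(p)
    return out
```

In the strongly degenerate limit (kT much smaller than mu) the accepted momenta pile up below the Fermi momentum, with mean close to three quarters of it, which is the regime relevant to cold-fuel-shell equilibration.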
Higher-degree linear approximations of nonlinear systems
Karahan, S.
1989-01-01
In this dissertation, the author develops a new method for obtaining higher degree linear approximations of nonlinear control systems. The standard approach in the analysis and synthesis of nonlinear systems is a first order approximation by a linear model. This is usually performed by obtaining a series expansion of the system at some nominal operating point and retaining only the first degree terms in the series. The accuracy of this approximation depends on how far the system moves away from the normal point, and on the relative magnitudes of the higher degree terms in the series expansion. The approximation is achieved by finding an appropriate nonlinear coordinate transformation-feedback pair to perform the higher degree linearization. With the proposed method, one can improve the accuracy of the approximation up to arbitrarily higher degrees, provided certain solvability conditions are satisfied. The Hunt-Su linearizability theorem makes these conditions precise. This approach is similar to Poincare's Normal Form Theorem in formulation, but different in its solution method. After some mathematical background the author derives a set of equations (called the Homological Equations). A solution to this system of linear equations is equivalent to the solution to the problem of approximate linearization. However, it is generally not possible to solve the system of equations exactly. He outlines a method for systematically finding approximate solutions to these equations using singular value decomposition, while minimizing an error with respect to some defined norm.
Soewono, C. N.; Takaki, N.
2012-07-01
In this work, a genetic algorithm was proposed to solve the fuel loading pattern optimization problem in a thorium-fueled heavy water reactor. The objectives of the optimization were to maximize the conversion ratio and minimize the power peaking factor. These objectives were simultaneously optimized using a non-dominated Pareto-based population ranking method. Members of the non-dominated population were assigned selection probabilities based on their rankings, in a manner similar to Baker's single-criterion ranking selection procedure. A selected non-dominated member was bred through simple mutation or a one-point crossover process to produce a new member. The genetic algorithm program was developed in FORTRAN 90, while the neutronic calculation and analysis were done with the COREBN code, a module of core burn-up calculation for SRAC. (authors)
Optimized Swinging Door Algorithm for Wind Power Ramp Event Detection: Preprint
Cui, Mingjian; Zhang, Jie; Florita, Anthony R.; Hodge, Bri-Mathias; Ke, Deping; Sun, Yuanzhang
2015-08-06
Significant wind power ramp events (WPREs) are those that influence the integration of wind power, and they are a concern to the continued reliable operation of the power grid. As wind power penetration has increased in recent years, so has the importance of wind power ramps. In this paper, an optimized swinging door algorithm (SDA) is developed to improve ramp detection performance. Wind power time series data are segmented by the original SDA, and then all significant ramps are detected and merged through a dynamic programming algorithm. An application of the optimized SDA is provided to ascertain the optimal parameter of the original SDA. Measured wind power data from the Electric Reliability Council of Texas (ERCOT) are used to evaluate the proposed optimized SDA.
Vehicle Technologies Program Implementation
none,
2009-06-19
The Vehicle Technologies Program takes a systematic approach to Program implementation. Elements of this approach include the evaluation of new technologies, competitive selection of projects and partners, review of Program and project improvement, project tracking, and portfolio management and adjustment.
DOE Laboratory Accreditation Program
Administered by the Office of Worker Safety and Health Policy, the DOE Laboratory Accreditation Program (DOELAP) is responsible for implementing performance standards for DOE contractor external dosimetry and radiobioassay programs through periodic performance testing and on-site program assessments.
Instrument design and optimization using genetic algorithms
Hoelzel, Robert; Bentley, Phillip M.; Fouquet, Peter
2006-10-15
This article describes the design of highly complex physical instruments by using a canonical genetic algorithm (GA). The procedure can be applied to all instrument designs where performance goals can be quantified. It is particularly suited to the optimization of instrument design where local optima in the performance figure of merit are prevalent. Here, a GA is used to evolve the design of the neutron spin-echo spectrometer WASP which is presently being constructed at the Institut Laue-Langevin, Grenoble, France. A comparison is made between this artificial intelligence approach and the traditional manual design methods. We demonstrate that the search of parameter space is more efficient when applying the genetic algorithm, and the GA produces a significantly better instrument design. Furthermore, it is found that the GA increases flexibility, by facilitating the reoptimization of the design after changes in boundary conditions during the design phase. The GA also allows the exploration of 'nonstandard' magnet coil geometries. We conclude that this technique constitutes a powerful complementary tool for the design and optimization of complex scientific apparatus, without replacing the careful thought processes employed in traditional design methods.
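As a sketch of the canonical GA loop described above (a quantified figure of merit, selection, crossover, and mutation over design parameters), the following illustrative routine maximises a user-supplied merit function; the operator choices and parameters are assumptions, not the WASP design study's actual encoding:

```python
import random

def genetic_optimize(merit, bounds, pop_size=40, gens=100, pmut=0.1, seed=2):
    """Canonical GA sketch: elitism, tournament selection, one-point crossover,
    clamped Gaussian mutation over real-valued design parameters."""
    rng = random.Random(seed)
    d = len(bounds)
    pop = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(pop_size)]
    for _ in range(gens):
        scored = sorted(pop, key=merit, reverse=True)
        newpop = scored[:2]                              # elitism: keep best two
        while len(newpop) < pop_size:
            p1 = max(rng.sample(pop, 3), key=merit)      # tournament selection
            p2 = max(rng.sample(pop, 3), key=merit)
            cut = rng.randrange(1, d) if d > 1 else 0
            child = p1[:cut] + p2[cut:]                  # one-point crossover
            for k in range(d):                           # mutation, clamped to bounds
                if rng.random() < pmut:
                    lo, hi = bounds[k]
                    child[k] = min(hi, max(lo, child[k] + rng.gauss(0, 0.1 * (hi - lo))))
            newpop.append(child)
        pop = newpop
    return max(pop, key=merit)
```

Because selection only compares merit values, the same loop applies whenever instrument performance can be quantified, which is the article's stated prerequisite.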
Greenbook Algorithms and Hardware Needs Analysis
De Jong, Wibe A.; Oehmen, Chris S.; Baxter, Douglas J.
2007-01-09
This document describes the algorithms and hardware balance requirements needed to enable the solution of real scientific problems in the DOE core mission areas of environmental and subsurface chemistry, computational and systems biology, and climate science. The MSCF scientific drivers have been outlined in the Greenbook, which is available online at http://mscf.emsl.pnl.gov/docs/greenbook_for_web.pdf . Historically, the primary science driver has been the chemical and the molecular dynamics of the biological science area, whereas the remaining applications in the biological and environmental systems science areas have been occupying a smaller segment of the available hardware resources. To go from science drivers to hardware balance requirements, the major applications were identified. Major applications on the MSCF resources are low- to high-accuracy electronic structure methods, molecular dynamics, regional climate modeling, subsurface transport, and computational biology. The algorithms of these applications were analyzed to identify the computational kernels in both sequential and parallel execution. This analysis shows that a balanced architecture is needed with respect to processor speed, peak flop rate, peak integer operation rate, memory hierarchy, interprocessor communication, and disk access and storage. A single architecture can satisfy the needs of all of the science areas, although some areas may take greater advantage of certain aspects of the architecture.
Parallel preconditioning for the solution of nonsymmetric banded linear systems
Amodio, P.; Mazzia, F.
1994-12-31
Many computational techniques require the solution of banded linear systems. Common examples derive from the solution of partial differential equations and of boundary value problems. In particular, the authors are interested in the parallel solution of block Hessenberg linear systems Gx = f, arising from the solution of ordinary differential equations by means of boundary value methods (BVMs), even if the considered preconditioning may be applied to any block banded linear system. BVMs have been extensively investigated in the last few years and their stability properties give promising results. A new class of BVMs called Reverse Adams, which are BV-A-stable for orders up to 6, and BV-A₀-stable for orders up to 9, have been studied.
New non-linear photovoltaic effect in uniform bipolar semiconductor
Volovichev, I.
2014-11-21
A linear theory of the new non-linear photovoltaic effect in a closed circuit consisting of a non-uniformly illuminated uniform bipolar semiconductor with neutral impurities is developed. The non-uniform photo-excitation of impurities results in a position-dependent current carrier mobility that breaks the semiconductor homogeneity and induces a photo-electromotive force (emf). As both the electron (or hole) mobility gradient and the current carrier generation rate depend on the light intensity, the photo-emf and the short-circuit current prove to be non-linear functions of the incident light intensity at an arbitrarily low illumination. The influence of the sample size on the photovoltaic effect magnitude is studied. Physical relations and distinctions between the considered effect and the Dember and bulk photovoltaic effects are also discussed.
2009 DOE Hydrogen Program and Vehicle Technologies Program Annual Merit Review and Peer Evaluation Meeting, May 18-22, 2009 -- Washington D.C.
Publication and Product Library
This 2-page fact sheet provides a brief introduction to the DOE Hydrogen Program. It describes the program mission and answers the question: Why Hydrogen?
Hydropower Program Technology Overview
Not Available
2001-10-01
New fact sheets for the DOE Office of Power Technologies (OPT) that provide technology overviews, description of DOE programs, and market potential for each OPT program area.
The Department of Energy (DOE) Fire Protection Program provides published fire safety directives (orders, standards, and guidance documents), a range of oversight activities, and an annual fire protection program summary.
Some problems in sequencing and scheduling utilizing branch and bound algorithms
Gim, B.
1988-01-01
This dissertation deals with branch and bound algorithms which are applied to the two-machine flow-shop problem with sparse precedence constraints and the optimal sequencing and scheduling of multiple feedstocks in a batch-type digester problem. The problem studied here is to find a schedule which minimizes the maximum flow time with the requirement that the schedule does not violate a set of sparse precedence constraints. This research provides a branch and bound algorithm which employs a lower bounding rule and is based on an adjustment of the sequence obtained by applying Johnson's algorithm. It is demonstrated that this lower bounding procedure in conjunction with Kurisu's branching rule is effective for the sparse precedence constraints problem case. Biomass to methane production systems have the potential of supplying 25% of the national gas demand. The optimal operation of a batch digester system requires the sequencing and scheduling of all batches from multiple feedstocks during a fixed time horizon. A significant characteristic of these systems is that the feedstock decays in storage before use in the digester system. The operational problem is to determine the time to allocate to each batch of several feedstocks and then sequence the individual batches so as to maximize biogas production for a single batch type digester over a fixed planning horizon. This research provides a branch and bound algorithm for sequencing and a two-step hierarchical dynamic programming procedure for time allocation scheduling. An efficient heuristic algorithm is developed for large problems and demonstrated to yield excellent results.
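Johnson's algorithm, whose sequences the branch-and-bound lower bound adjusts, can be stated compactly for the unconstrained two-machine flow shop; this sketch ignores the precedence constraints that motivate the dissertation's bounding rule:

```python
def johnsons_rule(jobs):
    """Johnson's algorithm for the two-machine flow shop without precedence
    constraints. jobs: list of (a, b) processing times on machines 1 and 2.
    Returns a makespan-minimising order of job indices."""
    front, back = [], []
    # examine jobs in increasing order of their smaller processing time
    for i, (a, b) in sorted(enumerate(jobs), key=lambda x: min(x[1])):
        if a <= b:
            front.append(i)   # short machine-1 jobs go as early as possible
        else:
            back.append(i)    # short machine-2 jobs go as late as possible
    back.reverse()
    return front + back
```

The resulting sequence minimises the maximum flow time; the dissertation's contribution is handling the case where sparse precedence constraints make this order infeasible, using the Johnson sequence to derive lower bounds.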
Starke, G.
1994-12-31
For nonselfadjoint elliptic boundary value problems which are preconditioned by a substructuring method, i.e., nonoverlapping domain decomposition, the author introduces and studies the concept of subspace orthogonalization. In subspace orthogonalization variants of Krylov methods the computation of inner products and vector updates, and the storage of basis elements is restricted to a (presumably small) subspace, in this case the edge and vertex unknowns with respect to the partitioning into subdomains. The author investigates subspace orthogonalization for two specific iterative algorithms, GMRES and the full orthogonalization method (FOM). This is intended to eliminate certain drawbacks of the Arnoldi-based Krylov subspace methods mentioned above. Above all, the length of the Arnoldi recurrences grows linearly with the iteration index which is therefore restricted to the number of basis elements that can be held in memory. Restarts become necessary and this often results in much slower convergence. The subspace orthogonalization methods, in contrast, require the storage of only the edge and vertex unknowns of each basis element which means that one can iterate much longer before restarts become necessary. Moreover, the computation of inner products is also restricted to the edge and vertex points which avoids the disturbance of the computational flow associated with the solution of subdomain problems. The author views subspace orthogonalization as an alternative to restarting or truncating Krylov subspace methods for nonsymmetric linear systems of equations. Instead of shortening the recurrences, one restricts them to a subset of the unknowns which has to be carefully chosen in order to be able to extend this partial solution to the entire space. The author discusses the convergence properties of these iteration schemes and its advantages compared to restarted or truncated versions of Krylov methods applied to the full preconditioned system.
Magnetic levitation configuration incorporating levitation, guidance and linear synchronous motor
Coffey, Howard T.
1993-01-01
A propulsion and suspension system for an inductive-repulsion-type magnetically levitated vehicle. The vehicle is propelled and suspended by a system that includes propulsion windings forming a linear synchronous motor and conductive guideways adjacent to the propulsion windings, which together partially encircle the vehicle-borne superconducting magnets. A three-phase power source is used with the linear synchronous motor to produce a traveling magnetic wave which, in conjunction with the magnets, propels the vehicle. The conductive guideway combines with the superconducting magnets to provide vehicle levitation.
Magnetic levitation configuration incorporating levitation, guidance and linear synchronous motor
Coffey, H.T.
1993-10-19
A propulsion and suspension system for an inductive-repulsion-type magnetically levitated vehicle. The vehicle is propelled and suspended by a system that includes propulsion windings forming a linear synchronous motor and conductive guideways adjacent to the propulsion windings, which together partially encircle the vehicle-borne superconducting magnets. A three-phase power source is used with the linear synchronous motor to produce a traveling magnetic wave which, in conjunction with the magnets, propels the vehicle. The conductive guideway combines with the superconducting magnets to provide vehicle levitation. 3 figures.
Search for Linear Polarization of the Cosmic Background Radiation
Lubin, P. M.; Smoot, G. F.
1978-10-01
We present preliminary measurements of the linear polarization of the cosmic microwave background (3 deg K blackbody) radiation. These ground-based measurements are made at 9 mm wavelength. We find no evidence for linear polarization, and set an upper limit for a polarized component of 0.8 mdeg K with a 95% confidence level. This implies that the present rate of expansion of the Universe is isotropic to one part in 10⁶, assuming no re-ionization of the primordial plasma after recombination.
LDRD final report on a unified linear reference system
Espinoza, J. Jr.; Mackoy, R.D.; Fletcher, D.R.
1997-06-01
The purpose of the project was to describe existing deficiencies in Geographic Information Systems for transportation (GIS-T) applications and prescribe solutions that would benefit the transportation community in general. After an in-depth literature search and much consultation with noted transportation experts, the need for a common linear reference system that integrated and supported the planning and operational needs of the transportation community became very apparent. The focus of the project was set on a unified linear reference system and how to go about its requirements definition, design, implementation, and promulgation to the transportation community.
Klystron switching power supplies for the International Linear Collider
Fraioli, Andrea (Cassino U.; INFN, Pisa)
2009-12-01
The International Linear Collider is a majestic High Energy Physics particle accelerator that will give physicists a new cosmic doorway to explore energy regimes beyond the reach of today's accelerators. ILC will complement the Large Hadron Collider (LHC), a proton-proton collider at the European Center for Nuclear Research (CERN) in Geneva, Switzerland, by producing electron-positron collisions at a center of mass energy of about 500 GeV. In particular, the subject of this dissertation is the R&D for a solid state Marx modulator and the associated switching power supply for the International Linear Collider Main LINAC radio frequency stations.
LDRD final report : autotuning for scalable linear algebra.
Heroux, Michael Allen; Marker, Bryan
2011-09-01
This report summarizes the progress made as part of a one year lab-directed research and development (LDRD) project to fund the research efforts of Bryan Marker at the University of Texas at Austin. The goal of the project was to develop new techniques for automatically tuning the performance of dense linear algebra kernels. These kernels often represent the majority of computational time in an application. The primary outcome from this work is a demonstration of the value of model driven engineering as an approach to accurately predict and study performance trade-offs for dense linear algebra computations.
Linear-array ultrasonic waveguide transducer for under sodium viewing.
Sheen, S. H.; Chien, H. T.; Wang, K.; Lawrence, W. P.; Engel, D.; Nuclear Engineering Division
2010-09-01
In this report, we first present the basic design of a low-noise waveguide and its performance followed by a review of the array transducer technology. The report then presents the concept and basic designs of arrayed waveguide transducers that can apply to under-sodium viewing for in-service inspection of fast reactors. Depending on applications, the basic waveguide arrays consist of designs for sideway and downward viewing. For each viewing application, two array geometries, linear and circular, are included in design analysis. Methods to scan a 2-D target using a linear array waveguide transducer are discussed. Future plan to develop a laboratory array waveguide prototype is also presented.