National Library of Energy BETA

Sample records for massively parallel microcell-based

  1. Discontinuous Methods for Accurate, Massively Parallel Quantum...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Investigator for Discontinuous Methods for Accurate, Massively Parallel Quantum Molecular Dynamics.

  2. Impact analysis on a massively parallel computer

    SciTech Connect (OSTI)

    Zacharia, T.; Aramayo, G.A.

    1994-06-01

    Advanced mathematical techniques and computer simulation play a major role in evaluating and enhancing the design of beverage cans and of industrial and transportation containers for improved performance. Numerical models are used to evaluate the impact requirements of containers used by the Department of Energy (DOE) for transporting radioactive materials. Many of these models are highly compute-intensive. An analysis may require several hours of computational time on current supercomputers despite the simplicity of the models being studied. As computer simulations and materials databases grow in complexity, massively parallel computers have become important tools. Massively parallel computational research at the Oak Ridge National Laboratory (ORNL) and its application to the impact analysis of shipping containers is briefly described in this paper.

  3. Template based parallel checkpointing in a massively parallel computer system

    DOE Patents [OSTI]

    Archer, Charles Jens; Inglett, Todd Alan

    2009-01-13

    A method and apparatus for a template based parallel checkpoint save for a massively parallel supercomputer system using a parallel variation of the rsync protocol and network broadcast. In preferred embodiments, the checkpoint data for each node is compared to a previously produced template checkpoint file that resides in storage. Embodiments herein greatly decrease the amount of data that must be transmitted and stored, yielding faster checkpointing and increased efficiency of the computer system. Embodiments are directed to a parallel computer system with nodes arranged in a cluster with a high-speed interconnect that can perform broadcast communication. The checkpoint contains a set of actual small data blocks with their corresponding checksums from all nodes in the system. The data blocks may be compressed using conventional non-lossy data compression algorithms to further reduce the overall checkpoint size.
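
    The delta scheme sketched in this abstract is compact enough to illustrate directly. Below is a minimal Python sketch assuming rsync-style fixed-size blocks; the BLOCK_SIZE constant, function names, and the choice of MD5/zlib are illustrative assumptions, not details from the patent.

        import hashlib
        import zlib

        BLOCK_SIZE = 4096  # hypothetical block granularity

        def block_checksums(data: bytes) -> list[str]:
            """Checksum each fixed-size block, rsync-style."""
            return [hashlib.md5(data[i:i + BLOCK_SIZE]).hexdigest()
                    for i in range(0, len(data), BLOCK_SIZE)]

        def save_checkpoint(node_data: bytes, template_sums: list[str]) -> dict:
            """Keep only the blocks that differ from the broadcast template."""
            delta = {}
            for idx, digest in enumerate(block_checksums(node_data)):
                if idx >= len(template_sums) or digest != template_sums[idx]:
                    block = node_data[idx * BLOCK_SIZE:(idx + 1) * BLOCK_SIZE]
                    delta[idx] = zlib.compress(block)  # non-lossy compression
            return delta  # this small delta is all that is transmitted and stored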

  4. A Massively Parallel Solver for the Mechanical Harmonic Analysis...

    Office of Scientific and Technical Information (OSTI)

    A Massively Parallel Solver for the Mechanical Harmonic Analysis of Accelerator Cavities. ACE3P is a 3D massively parallel simulation suite that...

  5. MASSIVE HYBRID PARALLELISM FOR FULLY IMPLICIT MULTIPHYSICS

    SciTech Connect (OSTI)

    Cody J. Permann; David Andrs; John W. Peterson; Derek R. Gaston

    2013-05-01

    As hardware advances continue to modify the supercomputing landscape, traditional scientific software development practices become increasingly outdated, ineffective, and inefficient. The process of rewriting/retooling existing software for new architectures is a Sisyphean task that consumes substantial development time, effort, and money. Software libraries which provide an abstraction of the resources provided by such architectures are therefore essential if the computational engineering and science communities are to continue to flourish in this modern computing environment. The Multiphysics Object Oriented Simulation Environment (MOOSE) framework enables complex multiphysics analysis tools to be built rapidly by scientists, engineers, and domain specialists, while also allowing them both to take advantage of current HPC architectures and to prepare efficiently for future supercomputer designs. MOOSE employs a hybrid shared-memory and distributed-memory parallel model and provides a complete and consistent interface for creating multiphysics analysis tools. In this paper, a brief discussion of the mathematical algorithms underlying the framework and the internal object-oriented hybrid parallel design is given. Representative massively parallel results from several application areas are presented, and a brief discussion of future areas of research for the framework is provided.

  6. PFLOTRAN User Manual: A Massively Parallel Reactive Flow and...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: PFLOTRAN User Manual: A Massively Parallel Reactive Flow and Transport Model for Describing Surface and Subsurface Processes.

  7. PFLOTRAN User Manual: A Massively Parallel Reactive Flow and...

    Office of Scientific and Technical Information (OSTI)

    PFLOTRAN User Manual: A Massively Parallel Reactive Flow and Transport Model for Describing Surface and Subsurface Processes. Lichtner, Peter (OFM Research); Karra, Satish (Los...

  8. Massively Parallel LES of Azimuthal Thermo-Acoustic Instabilities...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Massively Parallel LES of Azimuthal Thermo-Acoustic Instabilities in Annular Gas Turbines Authors: Wolf, P., Staffelbach, G., Roux, A., Gicquel, L., Poinsot, T., Moureau, V. ...

  9. Massively Parallel Models of the Human Circulatory System (Conference...

    Office of Scientific and Technical Information (OSTI)

    Massively Parallel Models of the Human Circulatory System. Sponsoring Org: USDOE. Country of Publication: United States. Language: English.

  10. Massively parallel mesh generation for physics codes

    SciTech Connect (OSTI)

    Hardin, D.D.

    1996-06-01

    Massively parallel processors (MPPs) will soon enable realistic 3-D physical modeling of complex objects and systems. Work is planned or presently underway to port many of LLNL's physical modeling codes to MPPs. LLNL's DSI3D electromagnetics code already can solve 40+ million zone problems on the 256 processor Meiko. However, the author lacks the software necessary to generate and manipulate the large meshes needed to model many complicated 3-D geometries. State-of-the-art commercial mesh generators run on workstations and have a practical limit of several hundred thousand elements. In the foreseeable future MPPs will solve problems with a billion mesh elements. The objective of the Parallel Mesh Generation (PMESH) Project is to develop a unique mesh generation system that can construct large 3-D meshes (up to a billion elements) on MPPs. Such a capability will remove a critical roadblock to unleashing the power of MPPs for physical analysis and will put LLNL at the forefront of mesh generation technology. PMESH will "front-end" a variety of LLNL 3-D physics codes, including those in the areas of electromagnetics, structural mechanics, thermal analysis, and hydrodynamics. The DSI3D and DYNA3D codes are already running on MPPs. The primary goal of the PMESH project is to provide the robust generation of large meshes for complicated 3-D geometries through the appropriate distribution of the generation task between the user's workstation and the MPP. Secondary goals are to support the unique features of LLNL physics codes (e.g., unusual elements) and to minimize the user effort required to generate different meshes for the same geometry. PMESH's capabilities are essential because mesh generation is presently a major limiting factor in simulating larger and more complex 3-D geometries. PMESH will significantly enhance LLNL's capabilities in physical simulation by advancing the state-of-the-art in large mesh generation by 2 to 3 orders of magnitude.

  11. Efficient parallel global garbage collection on massively parallel computers

    SciTech Connect (OSTI)

    Kamada, Tomio; Matsuoka, Satoshi; Yonezawa, Akinori

    1994-12-31

    On distributed-memory high-performance MPPs where processors are interconnected by an asynchronous network, efficient Garbage Collection (GC) becomes difficult due to inter-node references and references within pending, unprocessed messages. The parallel global GC algorithm (1) takes advantage of reference locality, (2) efficiently traverses references over nodes, (3) keeps the pause time of ongoing computations to a minimum, and (4) has been shown to scale up to 1024-node MPPs. The algorithm employs a global weight counting scheme to substantially reduce message traffic. Two methods are used to confirm the arrival of pending messages: one counts the number of messages, and the other uses network 'bulldozing.' Performance evaluation of actual implementations on a multicomputer with 32-1024 nodes, the Fujitsu AP1000, reveals various favorable properties of the algorithm.
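
    The message-counting confirmation mentioned above is easy to sketch. The following Python fragment is a sketch assuming an MPI-like environment via mpi4py (not the authors' actual implementation): no message is still pending exactly when the global totals of sent and received messages agree.

        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        sent = 0      # messages this node has sent during the GC phase
        received = 0  # messages this node has received so far

        def all_messages_arrived() -> bool:
            """Globally sum (sent - received); zero means nothing is in flight."""
            in_flight = comm.allreduce(sent - received, op=MPI.SUM)
            return in_flight == 0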

  12. Massively Parallel LES of Azimuthal Thermo-Acoustic Instabilities in Annular Gas Turbines

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Argonne Leadership Computing Facility. Authors: Wolf, P., Staffelbach, G., Roux, A., Gicquel, L., Poinsot, T., Moureau, V. Increasingly stringent regulations and the need to tackle rising fuel prices have placed great emphasis on the design of aeronautical gas turbines, which are unfortunately more and more prone to combustion instabilities. In the particular field of annular

  13. Requirements for supercomputing in energy research: The transition to massively parallel computing

    SciTech Connect (OSTI)

    Not Available

    1993-02-01

    This report discusses: the emergence of a practical path to TeraFlop computing and beyond; requirements of energy research programs at DOE; implementation of a supercomputer production computing environment on massively parallel computers; and implementation of the user transition to massively parallel computing.

  14. BlueGene/L Applications: Parallelism on a Massive Scale (Journal...

    Office of Scientific and Technical Information (OSTI)

    BlueGene/L Applications: Parallelism on a Massive Scale.

  15. Routing performance analysis and optimization within a massively parallel computer

    DOE Patents [OSTI]

    Archer, Charles Jens; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen

    2013-04-16

    An apparatus, program product and method optimize the operation of a massively parallel computer system by, in part, receiving actual performance data concerning an application executed by the plurality of interconnected nodes, and analyzing the actual performance data to identify an actual performance pattern. A desired performance pattern may be determined for the application, and an algorithm may be selected from among a plurality of algorithms stored within a memory, the algorithm being configured to achieve the desired performance pattern based on the actual performance data.

  16. A massively parallel fractional step solver for incompressible flows

    SciTech Connect (OSTI)

    Houzeaux, G.; Vazquez, M.; Aubry, R.; Cela, J.M.

    2009-09-20

    This paper presents a parallel implementation of fractional solvers for the incompressible Navier-Stokes equations using an algebraic approach. Under this framework, predictor-corrector and incremental projection schemes are seen as sub-classes of the same class, making their differences and similarities apparent. An additional advantage of this approach is that it sets a common basis for a parallelization strategy, which can be extended to other split techniques or to compressible flows. The predictor-corrector scheme consists of solving the momentum equation and a modified 'continuity' equation (namely a simple iteration for the pressure Schur complement) consecutively in order to converge to the monolithic solution, thus avoiding fractional errors. The incremental projection scheme, on the other hand, solves only one iteration of the predictor-corrector per time step and adds a correction equation to fulfill mass conservation. As shown in the paper, these two schemes are very well suited to massively parallel implementation. In fact, when compared with monolithic schemes, simpler solvers and preconditioners can be used to solve the non-symmetric momentum equations (GMRES, Bi-CGSTAB) and the symmetric continuity equation (CG, Deflated CG). This gives the algorithm good speedup properties. The implementation of the mesh partitioning technique is presented, as well as the parallel performance and speedups for thousands of processors.

  17. Comparing current cluster, massively parallel, and accelerated systems

    SciTech Connect (OSTI)

    Barker, Kevin J; Davis, Kei; Hoisie, Adolfy; Kerbyson, Darren J; Pakin, Scott; Lang, Mike; Sancho Pitarch, Jose C

    2010-01-01

    Currently there is large architectural diversity in high performance computing systems. They include 'commodity' cluster systems that optimize per-node performance for small jobs, massively parallel processors (MPPs) that optimize aggregate performance for large jobs, and accelerated systems that optimize both per-node and aggregate performance but only for applications custom-designed to take advantage of such systems. Because of these dissimilarities, meaningful comparisons of achievable performance are not straightforward. In this work we utilize a methodology that combines both empirical analysis and performance modeling to compare clusters (represented by a 4,352-core IB cluster), MPPs (represented by a 147,456-core BG/P), and accelerated systems (represented by the 129,600-core Roadrunner) across a workload of four applications. Strengths of our approach include the ability to compare architectures (as opposed to specific implementations of an architecture), to attribute each application's performance bottlenecks to characteristics unique to each system, and to explore performance scenarios in advance of their availability for measurement. Our analysis illustrates that application performance is essentially unrelated to relative peak performance, but that application performance can be both predicted and explained using modeling.

  18. Massively parallel processor networks with optical express channels

    DOE Patents [OSTI]

    Deri, R.J.; Brooks, E.D. III; Haigh, R.E.; DeGroot, A.J.

    1999-08-24

    An optical method for separating and routing local and express channel data comprises interconnecting the nodes in a network with fiber optic cables. A single fiber optic cable carries both express channel traffic and local channel traffic, e.g., in a massively parallel processor (MPP) network. Express channel traffic is placed on, or filtered from, the fiber optic cable at a light frequency or a color different from that of the local channel traffic. The express channel traffic is thus placed on a light carrier that skips over the local intermediate nodes one-by-one by reflecting off of selective mirrors placed at each local node. The local-channel-traffic light carriers pass through the selective mirrors and are not reflected. A single fiber optic cable can thus be threaded throughout a three-dimensional matrix of nodes with the x,y,z directions of propagation encoded by the color of the respective light carriers for both local and express channel traffic. Thus frequency division multiple access is used to hierarchically separate the local and express channels to eliminate the bucket brigade latencies that would otherwise result if the express traffic had to hop between every local node to reach its ultimate destination. 3 figs.

  19. Massively parallel processor networks with optical express channels

    DOE Patents [OSTI]

    Deri, Robert J.; Brooks, III, Eugene D.; Haigh, Ronald E.; DeGroot, Anthony J.

    1999-01-01

    An optical method for separating and routing local and express channel data comprises interconnecting the nodes in a network with fiber optic cables. A single fiber optic cable carries both express channel traffic and local channel traffic, e.g., in a massively parallel processor (MPP) network. Express channel traffic is placed on, or filtered from, the fiber optic cable at a light frequency or a color different from that of the local channel traffic. The express channel traffic is thus placed on a light carrier that skips over the local intermediate nodes one-by-one by reflecting off of selective mirrors placed at each local node. The local-channel-traffic light carriers pass through the selective mirrors and are not reflected. A single fiber optic cable can thus be threaded throughout a three-dimensional matrix of nodes with the x,y,z directions of propagation encoded by the color of the respective light carriers for both local and express channel traffic. Thus frequency division multiple access is used to hierarchically separate the local and express channels to eliminate the bucket brigade latencies that would otherwise result if the express traffic had to hop between every local node to reach its ultimate destination.

  20. SWAMP+: multiple subsequence alignment using associative massive parallelism

    SciTech Connect (OSTI)

    Steinfadt, Shannon Irene [Los Alamos National Laboratory]; Baker, Johnnie W. [Kent State Univ.]

    2010-10-18

    A new parallel algorithm SWAMP+ incorporates the Smith-Waterman sequence alignment on an associative parallel model known as ASC. It is a highly sensitive parallel approach that expands traditional pairwise sequence alignment. This is the first parallel algorithm to provide multiple non-overlapping, non-intersecting subsequence alignments with the accuracy of Smith-Waterman. The efficient algorithm provides multiple alignments similar to BLAST while creating a better workflow for the end users. The parallel portions of the code run in O(m+n) time using m processors. When m = n, the algorithmic analysis becomes O(n) with a coefficient of two, yielding a linear speedup. Implementation of the algorithm on the SIMD ClearSpeed CSX620 confirms this theoretical linear speedup with real timings.
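
    The O(m+n) parallel time follows from the anti-diagonal structure of the Smith-Waterman recurrence: every cell on one anti-diagonal depends only on earlier diagonals, so all cells of a diagonal can be scored simultaneously. Below is a serial Python sketch of this wavefront order; the inner loop is what SWAMP+ executes in parallel on associative SIMD hardware, and the scoring parameters are illustrative.

        def smith_waterman(s1: str, s2: str, match=2, mismatch=-1, gap=-1) -> int:
            m, n = len(s1), len(s2)
            H = [[0] * (n + 1) for _ in range(m + 1)]
            best = 0
            for d in range(2, m + n + 1):                          # anti-diagonals, in order
                for i in range(max(1, d - n), min(m, d - 1) + 1):  # one parallel step
                    j = d - i
                    score = match if s1[i - 1] == s2[j - 1] else mismatch
                    H[i][j] = max(0, H[i - 1][j - 1] + score,
                                  H[i - 1][j] + gap, H[i][j - 1] + gap)
                    best = max(best, H[i][j])
            return best                                            # best local alignment score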

  1. Factorization of large integers on a massively parallel computer

    SciTech Connect (OSTI)

    Davis, J.A.; Holdridge, D.B.

    1988-01-01

    Our interest in integer factorization at Sandia National Laboratories is motivated by cryptographic applications and in particular the security of the RSA encryption-decryption algorithm. We have implemented our version of the quadratic sieve procedure on the NCUBE computer with 1024 processors (nodes). The new code is significantly different in all important aspects from the program used to factor numbers of order 10^70 on a single-processor CRAY computer. The capabilities of parallel processing and the limitation of small local memory necessitated this entirely new implementation. This effort involved several restarts as realizations of program structures that seemed appealing bogged down due to inter-processor communications. We are presently working with integers of magnitude about 10^70 in tuning this code to the novel hardware. 6 refs., 3 figs.
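
    The relation-collection phase is the step that parallelizes naturally across the 1024 nodes: each node can scan a disjoint range of x looking for smooth values of x^2 - n. A toy Python sketch, using trial division in place of true sieving and an illustrative factor base:

        from math import isqrt

        def smooth_exponents(v: int, factor_base: list[int]):
            """Exponent vector of v over the factor base, or None if not smooth."""
            exps = []
            for p in factor_base:
                e = 0
                while v % p == 0:
                    v //= p
                    e += 1
                exps.append(e)
            return exps if v == 1 else None

        def collect_relations(n: int, factor_base: list[int], count: int):
            relations, x = [], isqrt(n) + 1
            while len(relations) < count:        # each node scans its own x-range
                exps = smooth_exponents(x * x - n, factor_base)
                if exps is not None:
                    relations.append((x, exps))
                x += 1
            return relations

        # e.g. collect_relations(87463, [2, 3, 5, 7, 11, 13, 17, 19, 23], 8)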

  2. High performance computing in chemistry and massively parallel computers: A simple transition?

    SciTech Connect (OSTI)

    Kendall, R.A.

    1993-03-01

    A review of the various problems facing any software developer targeting massively parallel processing (MPP) systems is presented. Issues specific to computational chemistry application software will also be outlined. Computational chemistry software ported to and designed for the Intel Touchstone Delta Supercomputer will be discussed. Recommendations for future directions will also be made.

  3. Large-eddy simulation of the Rayleigh-Taylor instability on a massively parallel computer

    SciTech Connect (OSTI)

    Amala, P.A.K.

    1995-03-01

    A computational model for the solution of the three-dimensional Navier-Stokes equations is developed. This model includes a turbulence model: a modified Smagorinsky eddy-viscosity with a stochastic backscatter extension. The resultant equations are solved using finite difference techniques: second-order explicit Lax-Wendroff schemes. This computational model is implemented on a massively parallel computer. Programming models on massively parallel computers are studied next, in order to determine the best programming model for the developed computational model. To this end, three different codes are tested on a current massively parallel computer: the CM-5 at Los Alamos. Each code uses a different programming model: one is a data parallel code; the other two are message passing codes. Timing studies are done to determine which method is the fastest. The data parallel approach turns out to be the fastest method on the CM-5 by at least an order of magnitude. The resultant code is then used to study a current problem of interest to the computational fluid dynamics community: the Rayleigh-Taylor instability. Because the Lax-Wendroff methods handle shocks and sharp interfaces poorly, the Rayleigh-Taylor linear analysis is modified to include a smoothed interface. The linear growth rate problem is then investigated. Finally, the problem of the randomly perturbed interface is examined. Stochastic backscatter breaks the symmetry of the stationary unstable interface and generates a mixing layer growing at the experimentally observed rate. 115 refs., 51 figs., 19 tabs.
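
    For reference, the Lax-Wendroff update is simple to state on a model problem. Here is a self-contained Python sketch for 1-D linear advection with periodic boundaries; the thesis applies the scheme to the full 3-D Navier-Stokes equations, and all numbers below are illustrative.

        import numpy as np

        def lax_wendroff_step(u, a, dt, dx):
            c = a * dt / dx                          # Courant number, |c| <= 1 for stability
            up, um = np.roll(u, -1), np.roll(u, 1)   # periodic neighbors u[i+1], u[i-1]
            return u - 0.5 * c * (up - um) + 0.5 * c**2 * (up - 2 * u + um)

        x = np.linspace(0.0, 1.0, 200, endpoint=False)
        u = np.exp(-200 * (x - 0.5) ** 2)            # smooth initial profile
        for _ in range(100):
            u = lax_wendroff_step(u, a=1.0, dt=0.004, dx=x[1] - x[0])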

  4. Analysis of gallium arsenide deposition in a horizontal chemical vapor deposition reactor using massively parallel computations

    SciTech Connect (OSTI)

    Salinger, A.G.; Shadid, J.N.; Hutchinson, S.A.

    1998-01-01

    A numerical analysis of the deposition of gallium from trimethylgallium (TMG) and arsine in a horizontal CVD reactor with tilted susceptor and a three inch diameter rotating substrate is performed. The three-dimensional model includes complete coupling between fluid mechanics, heat transfer, and species transport, and is solved using an unstructured finite element discretization on a massively parallel computer. The effects of three operating parameters (the disk rotation rate, inlet TMG fraction, and inlet velocity) and two design parameters (the tilt angle of the reactor base and the reactor width) on the growth rate and uniformity are presented. The nonlinear dependence of the growth rate uniformity on the key operating parameters is discussed in detail. Efficient and robust algorithms for massively parallel reacting flow simulations, as incorporated into our analysis code MPSalsa, make detailed analysis of this complicated system feasible.

  5. Massively parallel Monte Carlo for many-particle simulations on GPUs

    SciTech Connect (OSTI)

    Anderson, Joshua A.; Jankowski, Eric; Grubb, Thomas L.; Engel, Michael; Glotzer, Sharon C.

    2013-12-01

    Current trends in parallel processors call for the design of efficient massively parallel algorithms for scientific computing. Parallel algorithms for Monte Carlo simulations of thermodynamic ensembles of particles have received little attention because of the inherent serial nature of the statistical sampling. In this paper, we present a massively parallel method that obeys detailed balance and implement it for a system of hard disks on the GPU. We reproduce results of serial high-precision Monte Carlo runs to verify the method. This is a good test case because the hard disk equation of state over the range where the liquid transforms into the solid is particularly sensitive to small deviations away from the balance conditions. On a Tesla K20, our GPU implementation executes over one billion trial moves per second, which is 148 times faster than on a single Intel Xeon E5540 CPU core, enables 27 times better performance per dollar, and cuts energy usage by a factor of 13. With this improved performance we are able to calculate the equation of state for systems of up to one million hard disks. These large system sizes are required in order to probe the nature of the melting transition, which has been debated for the last forty years. In this paper we present the details of our computational method, and discuss the thermodynamics of hard disks separately in a companion paper.
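
    The key idea enabling parallel Monte Carlo here is a checkerboard-style domain decomposition: trial moves are confined to cells, and cells far enough apart are updated simultaneously without conflicting. A toy Python sketch under assumed parameters; serial loops stand in for GPU threads, and the cell-order shuffling needed for strict detailed balance is omitted.

        import numpy as np

        rng = np.random.default_rng(0)
        L, cell, sigma = 12.0, 3.0, 1.0      # box size, cell width, disk diameter
        ncell = int(L / cell)

        def sweep(pos: np.ndarray, step=0.2) -> None:
            for px, py in ((0, 0), (0, 1), (1, 0), (1, 1)):   # four cell colors
                cells = np.floor(pos / cell).astype(int)
                for cx in range(px, ncell, 2):                # active cells: parallel on a GPU
                    for cy in range(py, ncell, 2):
                        for i in np.where((cells[:, 0] == cx) & (cells[:, 1] == cy))[0]:
                            trial = pos[i] + rng.uniform(-step, step, 2)
                            if np.floor(trial / cell).astype(int).tolist() != [cx, cy]:
                                continue                      # moves may not leave the cell
                            d = np.linalg.norm((pos - trial + L/2) % L - L/2, axis=1)
                            d[i] = np.inf                     # ignore self-distance
                            if d.min() >= sigma:              # accept only overlap-free moves
                                pos[i] = trial

    Starting from any non-overlapping configuration pos of shape (N, 2), repeated sweeps sample hard-disk configurations; active cells of one color are at least one cell width apart, so their updates cannot interact.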

  6. Molecular Dynamics Simulations from SNL's Large-scale Atomic/Molecular Massively Parallel Simulator (LAMMPS)

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Plimpton, Steve; Thompson, Aidan; Crozier, Paul

    LAMMPS (http://lammps.sandia.gov/index.html) stands for Large-scale Atomic/Molecular Massively Parallel Simulator. It is a code that can be used to model atoms or, as the LAMMPS website says, to act as a parallel particle simulator at the atomic, meso, or continuum scale. This Sandia-based website provides a long list of animations from large simulations. These were created using different visualization packages to read LAMMPS output, and each one provides the name of the PI and a brief description of the work done or the visualization package used. See also the static images produced from simulations at http://lammps.sandia.gov/pictures.html. The foundation paper for LAMMPS is S. Plimpton, Fast Parallel Algorithms for Short-Range Molecular Dynamics, J Comp Phys, 117, 1-19 (1995), but the website also lists other papers describing contributions to LAMMPS over the years.

  7. A massively parallel adaptive finite element method with dynamic load balancing

    SciTech Connect (OSTI)

    Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.

    1993-05-01

    We construct massively parallel, adaptive finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. We also present results using adaptive p-refinement to reduce the computational cost of the method. We describe tiling, a dynamic, element-based data migration system. Tiling dynamically maintains global load balance in the adaptive method by overlapping neighborhoods of processors, where each neighborhood performs local load balancing. We demonstrate the effectiveness of the dynamic load balancing with adaptive p-refinement examples.

  8. A massively parallel adaptive finite element method with dynamic load balancing

    SciTech Connect (OSTI)

    Devine, K.D.; Flaherty, J.E.; Wheat, S.R.; Maccabe, A.B.

    1993-12-31

    The authors construct massively parallel adaptive finite element methods for the solution of hyperbolic conservation laws. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. The resulting method is of high order and may be parallelized efficiently on MIMD computers. They demonstrate parallel efficiency through computations on a 1024-processor nCUBE/2 hypercube. They present results using adaptive p-refinement to reduce the computational cost of the method, and tiling, a dynamic, element-based data migration system that maintains global load balance of the adaptive method by overlapping neighborhoods of processors that each perform local balancing.

  9. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  10. Implementation, capabilities, and benchmarking of Shift, a massively parallel Monte Carlo radiation transport code

    SciTech Connect (OSTI)

    Pandya, Tara M.; Johnson, Seth R.; Evans, Thomas M.; Davidson, Gregory G.; Hamilton, Steven P.; Godfrey, Andrew T.

    2015-12-21

    This paper discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  11. A Massively Parallel Solver for the Mechanical Harmonic Analysis of Accelerator Cavities

    SciTech Connect (OSTI)

    O. Kononenko

    2015-02-17

    ACE3P is a 3D massively parallel simulation suite developed at SLAC National Accelerator Laboratory that can perform coupled electromagnetic, thermal, and mechanical studies. Effectively utilizing supercomputer resources, ACE3P has become a key simulation tool for particle accelerator R&D. A new frequency domain solver to perform mechanical harmonic response analysis of accelerator components has been developed within the existing parallel framework. This solver is designed to determine the frequency response of the mechanical system to external harmonic excitations for time-efficient, accurate analysis of large-scale problems. Coupled with the ACE3P electromagnetic modules, this capability complements a set of multi-physics tools for a comprehensive study of microphonics in superconducting accelerating cavities in order to understand the RF response and feedback requirements for the operational reliability of a particle accelerator. (auth)

  12. Capabilities, Implementation, and Benchmarking of Shift, a Massively Parallel Monte Carlo Radiation Transport Code

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Pandya, Tara M; Johnson, Seth R; Evans, Thomas M; Davidson, Gregory G; Hamilton, Steven P; Godfrey, Andrew T

    2016-01-01

    This work discusses the implementation, capabilities, and validation of Shift, a massively parallel Monte Carlo radiation transport package developed and maintained at Oak Ridge National Laboratory. It has been developed to scale well from laptop to small computing clusters to advanced supercomputers. Special features of Shift include hybrid capabilities for variance reduction such as CADIS and FW-CADIS, and advanced parallel decomposition and tally methods optimized for scalability on supercomputing architectures. Shift has been validated and verified against various reactor physics benchmarks and compares well to other state-of-the-art Monte Carlo radiation transport codes such as MCNP5, CE KENO-VI, and OpenMC. Some specific benchmarks used for verification and validation include the CASL VERA criticality test suite and several Westinghouse AP1000® problems. These benchmark and scaling studies show promising results.

  13. DGDFT: A massively parallel method for large scale density functional theory calculations

    SciTech Connect (OSTI)

    Hu, Wei; Yang, Chao; Lin, Lin

    2015-09-28

    We describe a massively parallel implementation of the recently developed discontinuous Galerkin density functional theory (DGDFT) method, for efficient large-scale Kohn-Sham DFT based electronic structure calculations. The DGDFT method uses adaptive local basis (ALB) functions generated on-the-fly during the self-consistent field iteration to represent the solution to the Kohn-Sham equations. The use of the ALB set provides a systematic way to improve the accuracy of the approximation. By using the pole expansion and selected inversion technique to compute electron density, energy, and atomic forces, we can make the computational complexity of DGDFT scale at most quadratically with respect to the number of electrons for both insulating and metallic systems. We show that for the two-dimensional (2D) phosphorene systems studied here, using 37 basis functions per atom allows us to reach an accuracy level of 1.3 × 10^-4 Hartree/atom in terms of the error of energy and 6.2 × 10^-4 Hartree/bohr in terms of the error of atomic force, respectively. DGDFT can achieve 80% parallel efficiency on 128,000 high performance computing cores when it is used to study the electronic structure of 2D phosphorene systems with 3500-14,000 atoms. This high parallel efficiency results from a two-level parallelization scheme that we will describe in detail.

  14. The Fortran-P Translator: Towards Automatic Translation of Fortran 77 Programs for Massively Parallel Processors

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    O'Keefe, Matthew; Parr, Terence; Edgar, B. Kevin; Anderson, Steve; Woodward, Paul; Dietz, Hank

    1995-01-01

    Massively parallel processors (MPPs) hold the promise of extremely high performance that, if realized, could be used to study problems of unprecedented size and complexity. One of the primary stumbling blocks to this promise has been the lack of tools to translate application codes to MPP form. In this article we show how applications codes written in a subset of Fortran 77, called Fortran-P, can be translated to achieve good performance on several massively parallel machines. This subset can express codes that are self-similar, where the algorithm applied to the global data domain is also applied to each subdomain. We have found many codes that match the Fortran-P programming style and have converted them using our tools. We believe a self-similar coding style will accomplish what a vectorizable style has accomplished for vector machines by allowing the construction of robust, user-friendly, automatic translation systems that increase programmer productivity and generate fast, efficient code for MPPs.

  15. A Faster Parallel Algorithm and Efficient Multithreaded Implementations for Evaluating Betweenness Centrality on Massive Datasets

    SciTech Connect (OSTI)

    Madduri, Kamesh; Ediger, David; Jiang, Karl; Bader, David A.; Chavarria-Miranda, Daniel

    2009-02-15

    We present a new lock-free parallel algorithm for computing betweenness centrality of massive small-world networks. With minor changes to the data structures, our algorithm also achieves better spatial cache locality compared to previous approaches. Betweenness centrality is a key algorithm kernel in HPCS SSCA#2, a benchmark extensively used to evaluate the performance of emerging high-performance computing architectures for graph-theoretic computations. We design optimized implementations of betweenness centrality and the SSCA#2 benchmark for two hardware multithreaded systems: a Cray XMT system with the Threadstorm processor, and a single-socket Sun multicore server with the UltraSPARC T2 processor. For a small-world network of 134 million vertices and 1.073 billion edges, the 16-processor XMT system and the 8-core Sun Fire T5120 server achieve TEPS scores (an algorithmic performance count for the SSCA#2 benchmark) of 160 million and 90 million respectively, which corresponds to more than a 2X performance improvement over the previous parallel implementations. To better characterize the performance of these multithreaded systems, we correlate the SSCA#2 performance results with data from the memory-intensive STREAM and RandomAccess benchmarks. Finally, we demonstrate the applicability of our implementation to analyze massive real-world datasets by computing approximate betweenness centrality for a large-scale IMDb movie-actor network.
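
    For context, the kernel being parallelized is Brandes' algorithm: one BFS per source to count shortest paths, followed by a reverse-order accumulation of dependencies. Below is a compact serial Python reference; the paper's contribution is running the per-source phases concurrently with lock-free updates.

        from collections import deque

        def betweenness(adj: dict) -> dict:
            bc = {v: 0.0 for v in adj}
            for s in adj:
                dist = {v: -1 for v in adj}; sigma = {v: 0 for v in adj}
                preds = {v: [] for v in adj}
                dist[s], sigma[s] = 0, 1
                order, q = [], deque([s])
                while q:                                   # BFS phase: path counts
                    v = q.popleft(); order.append(v)
                    for w in adj[v]:
                        if dist[w] < 0:
                            dist[w] = dist[v] + 1; q.append(w)
                        if dist[w] == dist[v] + 1:
                            sigma[w] += sigma[v]; preds[w].append(v)
                delta = {v: 0.0 for v in adj}
                for w in reversed(order):                  # dependency accumulation
                    for v in preds[w]:
                        delta[v] += sigma[v] / sigma[w] * (1 + delta[w])
                    if w != s:
                        bc[w] += delta[w]
            return bc

        # e.g. betweenness({0: [1], 1: [0, 2], 2: [1]}) scores node 1 highest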

  16. User's Guide for TOUGH2-MP - A Massively Parallel Version of the TOUGH2 Code

    SciTech Connect (OSTI)

    Earth Sciences Division; Zhang, Keni; Wu, Yu-Shu; Pruess, Karsten

    2008-05-27

    TOUGH2-MP is a massively parallel (MP) version of the TOUGH2 code, designed for computationally efficient parallel simulation of isothermal and nonisothermal flows of multicomponent, multiphase fluids in one-, two-, and three-dimensional porous and fractured media. In recent years, computational requirements have become increasingly intensive in large or highly nonlinear problems for applications in areas such as radioactive waste disposal, CO2 geological sequestration, environmental assessment and remediation, reservoir engineering, and groundwater hydrology. The primary objective of developing the parallel-simulation capability is to significantly improve the computational performance of the TOUGH2 family of codes. The particular goal for the parallel simulator is to achieve orders-of-magnitude improvement in computational time for models with ever-increasing complexity. TOUGH2-MP is designed to perform parallel simulation on multi-CPU computational platforms. An earlier version of TOUGH2-MP (V1.0) was based on TOUGH2 Version 1.4 with the EOS3, EOS9, and T2R3D modules, software previously qualified for applications in the Yucca Mountain project, and was designed for execution on CRAY T3E and IBM SP supercomputers. The current version of TOUGH2-MP (V2.0) includes all fluid property modules of the standard version TOUGH2 V2.0. It provides computationally efficient capabilities using supercomputers, Linux clusters, or multi-core PCs, and also offers many user-friendly features. The parallel simulator inherits all process capabilities from V2.0 together with additional capabilities for handling fractured media from V1.4. This report provides a quick-start guide on how to set up and run the TOUGH2-MP program for users with a basic knowledge of running the (standard) version TOUGH2 code. The report also gives a brief technical description of the code, including a discussion of the parallel methodology, the code structure, and the mathematical and numerical methods used.

  17. Compact Graph Representations and Parallel Connectivity Algorithms for Massive Dynamic Network Analysis

    SciTech Connect (OSTI)

    Madduri, Kamesh; Bader, David A.

    2009-02-15

    Graph-theoretic abstractions are extensively used to analyze massive data sets. Temporal data streams from socioeconomic interactions, social networking web sites, communication traffic, and scientific computing can be intuitively modeled as graphs. We present the first study of novel high-performance combinatorial techniques for analyzing large-scale information networks, encapsulating dynamic interaction data in the order of billions of entities. We present new data structures to represent dynamic interaction networks, and discuss algorithms for processing parallel insertions and deletions of edges in small-world networks. With these new approaches, we achieve an average performance rate of 25 million structural updates per second and a parallel speedup of nearly 28 on a 64-way Sun UltraSPARC T2 multicore processor, for insertions and deletions to a small-world network of 33.5 million vertices and 268 million edges. We also design parallel implementations of fundamental dynamic graph kernels related to connectivity and centrality queries. Our implementations are freely distributed as part of the open-source SNAP (Small-world Network Analysis and Partitioning) complex network analysis framework.
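
    A minimal Python sketch of the kind of dynamic graph interface involved (per-vertex adjacency sets with a connectivity query); SNAP's actual structures are more compact and support concurrent batched updates, so this toy is only an assumed simplification:

        from collections import defaultdict

        class DynamicGraph:
            """Adjacency sets give O(1) expected edge insertion and deletion."""
            def __init__(self):
                self.adj = defaultdict(set)

            def insert_edge(self, u, v):
                self.adj[u].add(v); self.adj[v].add(u)

            def delete_edge(self, u, v):
                self.adj[u].discard(v); self.adj[v].discard(u)

            def connected(self, s, t) -> bool:
                """Graph-traversal connectivity query of the kind the kernels answer."""
                seen, stack = {s}, [s]
                while stack:
                    u = stack.pop()
                    if u == t:
                        return True
                    for w in self.adj[u]:
                        if w not in seen:
                            seen.add(w); stack.append(w)
                return False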

  18. 3-D readout-electronics packaging for high-bandwidth massively paralleled imager

    DOE Patents [OSTI]

    Kwiatkowski, Kris; Lyke, James

    2007-12-18

    Dense, massively parallel signal processing electronics are co-packaged behind associated sensor pixels. Microchips containing a linear or bilinear arrangement of photo-sensors, together with associated complex electronics, are integrated into a simple 3-D structure (a "mirror cube"). An array of photo-sensitive cells is disposed on a stacked CMOS chip's surface at a 45.degree. angle from light-reflecting mirror surfaces formed on a neighboring CMOS chip surface. Image processing electronics are held within the stacked CMOS chip layers. Electrical connections couple each of said stacked CMOS chip layers and a distribution grid, the connections distributing power and signals to components associated with each stacked CMOS chip layer.

  19. A Massively Parallel Sparse Eigensolver for Structural Dynamics Finite Element Analysis

    SciTech Connect (OSTI)

    Day, David M.; Reese, G.M.

    1999-05-01

    Eigenanalysis is a critical component of structural dynamics which is essential for determining the vibrational response of systems. This effort addresses the development of numerical algorithms associated with scalable eigensolver techniques suitable for use on massively parallel, distributed memory computers that are capable of solving large scale structural dynamics problems. An iterative Lanczos method was determined to be the best choice for the application. Scalability of the eigenproblem depends on scalability of the underlying linear solver. A multi-level solver (FETI) was selected as most promising for this component. Issues relating to heterogeneous materials, mechanisms and multipoint constraints have been examined, and the linear solver algorithm has been developed to incorporate features that result in a scalable, robust algorithm for practical structural dynamics applications. The resulting tools have been demonstrated on large problems representative of a weapons system.
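
    The Lanczos iteration at the core of such a solver reduces the symmetric eigenproblem to a small tridiagonal one. Here is a dense-matrix Python sketch; the production solver instead applies the operator through the parallel FETI linear solver, and this toy omits the reorthogonalization a robust code needs.

        import numpy as np

        def lanczos_eigvals(A: np.ndarray, k: int, seed=0):
            """Approximate extremal eigenvalues of symmetric A from k Lanczos steps."""
            rng = np.random.default_rng(seed)
            n = A.shape[0]
            q = rng.standard_normal(n); q /= np.linalg.norm(q)
            Q, alpha, beta = [q], [], []
            for j in range(k):
                w = A @ Q[j]
                if j > 0:
                    w -= beta[j - 1] * Q[j - 1]      # three-term recurrence
                a = Q[j] @ w; w -= a * Q[j]
                alpha.append(a)
                b = np.linalg.norm(w)
                if b < 1e-12:                        # invariant subspace found
                    break
                beta.append(b); Q.append(w / b)
            off = beta[:len(alpha) - 1]
            T = np.diag(alpha) + np.diag(off, 1) + np.diag(off, -1)
            return np.linalg.eigvalsh(T)             # Ritz values approximate eigenvalues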

  20. GPAW - massively parallel electronic structure calculations with Python-based software.

    SciTech Connect (OSTI)

    Enkovaara, J.; Romero, N.; Shende, S.; Mortensen, J.

    2011-01-01

    Electronic structure calculations are a widely used tool in materials science and a large consumer of supercomputing resources. Traditionally, the software packages for these kinds of simulations have been implemented in compiled languages, where Fortran in its different versions has been the most popular choice. While dynamic, interpreted languages, such as Python, can increase programmer efficiency, they cannot compete directly with the raw performance of compiled languages. However, by using an interpreted language together with a compiled language, it is possible to have most of the productivity-enhancing features together with good numerical performance. We have used this approach in implementing the electronic structure simulation software GPAW using a combination of the Python and C programming languages. While the chosen approach works well in standard workstations and Unix environments, massively parallel supercomputing systems can present challenges in porting, debugging and profiling the software. In this paper we describe some details of the implementation and discuss the advantages and challenges of the combined Python/C approach. We show that despite the challenges it is possible to obtain good numerical performance and good parallel scalability with Python-based software.

  1. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.
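
    A toy Python sketch of the reordering idea: genomes are permutations assigning MPI ranks to allocated nodes, and fitness is the total hop distance over the application's communication graph. The mesh coordinates, ring communication pattern, and GA parameters below are all illustrative assumptions, not values from the paper.

        import random

        random.seed(0)
        nodes = [(i, j) for i in range(4) for j in range(4)]   # 16 allocated nodes
        comm_pairs = [(r, (r + 1) % 16) for r in range(16)]    # ring exchange pattern

        def cost(perm):
            """Total Manhattan hop distance of all communicating rank pairs."""
            return sum(abs(nodes[perm[a]][0] - nodes[perm[b]][0]) +
                       abs(nodes[perm[a]][1] - nodes[perm[b]][1])
                       for a, b in comm_pairs)

        def mutate(perm):
            child = perm[:]
            i, j = random.sample(range(len(child)), 2)
            child[i], child[j] = child[j], child[i]            # swap two placements
            return child

        population = [random.sample(range(16), 16) for _ in range(50)]
        for _ in range(200):                                   # evolve task orderings
            population.sort(key=cost)
            population = population[:10] + [mutate(random.choice(population[:10]))
                                            for _ in range(40)]
        best = min(population, key=cost)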

  2. Genetic algorithm based task reordering to improve the performance of batch scheduled massively parallel scientific applications

    SciTech Connect (OSTI)

    Sankaran, Ramanan; Angel, Jordan; Brown, W. Michael

    2015-04-08

    The growth in size of networked high performance computers along with novel accelerator-based node architectures has further emphasized the importance of communication efficiency in high performance computing. The world's largest high performance computers are usually operated as shared user facilities due to the costs of acquisition and operation. Applications are scheduled for execution in a shared environment and are placed on nodes that are not necessarily contiguous on the interconnect. Furthermore, the placement of tasks on the nodes allocated by the scheduler is sub-optimal, leading to performance loss and variability. Here, we investigate the impact of task placement on the performance of two massively parallel application codes on the Titan supercomputer, a turbulent combustion flow solver (S3D) and a molecular dynamics code (LAMMPS). Benchmark studies show a significant deviation from ideal weak scaling and variability in performance. The inter-task communication distance was determined to be one of the significant contributors to the performance degradation and variability. A genetic algorithm-based parallel optimization technique was used to optimize the task ordering. This technique provides an improved placement of the tasks on the nodes, taking into account the application's communication topology and the system interconnect topology. As a result, application benchmarks after task reordering through genetic algorithm show a significant improvement in performance and reduction in variability, therefore enabling the applications to achieve better time to solution and scalability on Titan during production.

  3. Tracking the roots of cellulase hyperproduction by the fungus Trichoderma reesei using massively parallel DNA sequencing

    SciTech Connect (OSTI)

    Le Crom, Stéphane; Schackwitz, Wendy; Pennacchio, Len; Magnuson, Jon K.; Culley, David E.; Collett, James R.; Martin, Joel X.; Druzhinina, Irina S.; Mathis, Hugues; Monot, Frédéric; Seiboth, Bernhard; Cherry, Barbara; Rey, Michael; Berka, Randy; Kubicek, Christian P.; Baker, Scott E.; Margeot, Antoine

    2009-09-22

    Trichoderma reesei (teleomorph Hypocrea jecorina) is the main industrial source of cellulases and hemicellulases harnessed for the hydrolysis of biomass to simple sugars, which can then be converted to biofuels, such as ethanol, and other chemicals. The highly productive strains in use today were generated by classical mutagenesis. To learn how cellulase production was improved by these techniques, we performed massively parallel sequencing to identify mutations in the genomes of two hyperproducing strains (NG14 and its direct improved descendant, RUT C30). We detected a surprisingly high number of mutagenic events: 223 single-nucleotide variants, 15 small deletions or insertions, and 18 larger deletions leading to the loss of more than 100 kb of genomic DNA. From these events we report previously undocumented non-synonymous mutations in 43 genes that are mainly involved in nuclear transport, mRNA stability, transcription, secretion/vacuolar targeting, and metabolism. This homogeneity of functional categories suggests that multiple changes are necessary to improve cellulase production, not simply a few clear-cut mutagenic events. Phenotype microarrays show that some of these mutations result in strong changes in the carbon assimilation pattern of the two mutants with respect to the wild-type strain QM6a. Our analysis provides the first genome-wide insights into the changes induced by classical mutagenesis in a filamentous fungus, and suggests new areas for the generation of enhanced T. reesei strains for industrial applications such as biofuel production.

  4. Massively parallel computation of 3D flow and reactions in chemical vapor deposition reactors

    SciTech Connect (OSTI)

    Salinger, A.G.; Shadid, J.N.; Hutchinson, S.A.; Hennigan, G.L.; Devine, K.D.; Moffat, H.K.

    1997-12-01

    Computer modeling of Chemical Vapor Deposition (CVD) reactors can greatly aid in the understanding, design, and optimization of these complex systems. Modeling is particularly attractive in these systems since the costs of experimentally evaluating many design alternatives can be prohibitively expensive, time consuming, and even dangerous, when working with toxic chemicals like arsine (AsH₃). Until now, predictive modeling has not been possible for most systems since the behavior is three-dimensional and governed by complex reaction mechanisms. In addition, CVD reactors often exhibit large thermal gradients, large changes in physical properties over regions of the domain, and significant thermal diffusion for gas mixtures with widely varying molecular weights. As a result, significant simplifications have been made in the models, which erode the accuracy of their predictions. In this paper, the authors demonstrate how the vast computational resources of massively parallel computers can be exploited to make possible the analysis of models that include coupled fluid flow and detailed chemistry in three-dimensional domains. For the most part, models have either simplified the reaction mechanisms and concentrated on the fluid flow, or have simplified the fluid flow and concentrated on rigorous reactions. An important CVD research thrust has been detailed modeling of fluid flow and heat transfer in the reactor vessel, treating transport and reaction of chemical species either very simply or as a totally decoupled problem. Using the analogy between heat transfer and mass transfer, and the fact that deposition is often diffusion limited, much can be learned from these calculations; however, the effects of thermal diffusion, the change in physical properties with composition, and the incorporation of surface reaction mechanisms are not included in this model, nor can transitions to three-dimensional flows be detected.

  5. System and method for representing and manipulating three-dimensional objects on massively parallel architectures

    DOE Patents [OSTI]

    Karasick, Michael S.; Strip, David R.

    1996-01-01

    A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modelling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modelling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modelling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication.

  6. System and method for representing and manipulating three-dimensional objects on massively parallel architectures

    DOE Patents [OSTI]

    Karasick, M.S.; Strip, D.R.

    1996-01-30

    A parallel computing system is described that comprises a plurality of uniquely labeled, parallel processors, each processor capable of modeling a three-dimensional object that includes a plurality of vertices, faces and edges. The system comprises a front-end processor for issuing a modeling command to the parallel processors, relating to a three-dimensional object. Each parallel processor, in response to the command and through the use of its own unique label, creates a directed-edge (d-edge) data structure that uniquely relates an edge of the three-dimensional object to one face of the object. Each d-edge data structure at least includes vertex descriptions of the edge and a description of the one face. As a result, each processor, in response to the modeling command, operates upon a small component of the model and generates results, in parallel with all other processors, without the need for processor-to-processor intercommunication. 8 figs.
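
    A minimal Python sketch of the directed-edge record the patent describes, binding one edge to exactly one incident face so that each processor can act on its piece of the model without inter-processor traffic. The field and method names here are assumptions for illustration, not the patent's.

        from dataclasses import dataclass

        Vertex = tuple[float, float, float]

        @dataclass
        class DEdge:
            tail: Vertex   # vertex description of the edge's start
            head: Vertex   # vertex description of the edge's end
            face: int      # the single face this directed edge bounds

            def translate(self, dx, dy, dz):
                """One modelling command, applied independently per processor."""
                move = lambda v: (v[0] + dx, v[1] + dy, v[2] + dz)
                return DEdge(move(self.tail), move(self.head), self.face)

        # A triangular face (id 0) yields three d-edges, one per owning processor:
        tri = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
        d_edges = [DEdge(tri[i], tri[(i + 1) % 3], face=0) for i in range(3)]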

  7. Method and apparatus for obtaining stack traceback data for multiple computing nodes of a massively parallel computer system

    DOE Patents [OSTI]

    Gooding, Thomas Michael; McCarthy, Patrick Joseph

    2010-03-02

    A data collector for a massively parallel computer system obtains call-return stack traceback data for multiple nodes by retrieving partial call-return stack traceback data from each node, grouping the nodes in subsets according to the partial traceback data, and obtaining further call-return stack traceback data from a representative node or nodes of each subset. Preferably, the partial data is a respective instruction address from each node, nodes having identical instruction address being grouped together in the same subset. Preferably, a single node of each subset is chosen and full stack traceback data is retrieved from the call-return stack within the chosen node.
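
    The grouping step is straightforward to sketch in Python: bin nodes by the instruction address in their partial traceback, then fetch a full call-return stack from one representative per bin. The data layout below is illustrative.

        from collections import defaultdict

        def group_by_address(partial: dict[int, int]) -> dict[int, list[int]]:
            """Map each distinct instruction address to the nodes reporting it."""
            groups = defaultdict(list)
            for node, address in partial.items():
                groups[address].append(node)
            return groups

        partial = {0: 0x4005F0, 1: 0x4005F0, 2: 0x400810, 3: 0x4005F0}
        for address, members in group_by_address(partial).items():
            representative = members[0]   # retrieve the full stack from this node only
            print(hex(address), "->", len(members), "nodes; query node", representative)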

  8. Massively parallel processing on the Intel Paragon system: One tool in achieving the goals of the Human Genome Project

    SciTech Connect (OSTI)

    Ecklund, D.J.

    1993-12-31

    A massively parallel computing system is one tool that has been adopted by researchers in the Human Genome Project. This tool is one of many in a toolbox of theories, algorithms, and systems that are used to attack the many questions posed by the project. A good tool functions well when applied alone to the problem for which it was devised. A superior tool achieves its solitary goal, and supports and interacts with other tools to achieve goals beyond the scope of any individual tool. The author believes that Intel's massively parallel Paragon™ XP/S system is a superior tool. This paper presents specific requirements for a superior computing tool for the Human Genome Project (HGP) and shows how the Paragon system addresses these requirements. Computing requirements for HGP are based on three factors: (1) computing requirements of algorithms currently used in sequence homology, protein folding, and database insertion/retrieval; (2) estimates of the computing requirements of new applications arising from evolving biological theories; and (3) the requirements for facilities that support collaboration among scientists in a project of this magnitude. The Paragon system provides many hardware and software features that effectively address these requirements.

  9. A Novel Algorithm for Solving the Multidimensional Neutron Transport Equation on Massively Parallel Architectures

    SciTech Connect (OSTI)

    Azmy, Yousry

    2014-06-10

    We employ the Integral Transport Matrix Method (ITMM) as the kernel of new parallel solution methods for the discrete ordinates approximation of the within-group neutron transport equation. The ITMM abandons the repetitive mesh sweeps of the traditional source iterations (SI) scheme in favor of constructing stored operators that account for the direct coupling factors among all the cells' fluxes and between the cells' and boundary surfaces' fluxes. The main goals of this work are to develop the algorithms that construct these operators and employ them in the solution process, determine the most suitable way to parallelize the entire procedure, and evaluate the behavior and parallel performance of the developed methods with increasing number of processes, P. The fastest observed parallel solution method, Parallel Gauss-Seidel (PGS), was used in a weak scaling comparison with the PARTISN transport code, which uses the source iteration (SI) scheme parallelized with the Koch-Baker-Alcouffe (KBA) method. Compared to the state-of-the-art SI-KBA with diffusion synthetic acceleration (DSA), this new method, even without acceleration or preconditioning, is competitive for optically thick problems as P is increased to the tens of thousands range. For the most optically thick cells tested, PGS reduced execution time by an approximate factor of three for problems with more than 130 million computational cells on P = 32,768. Moreover, the SI-DSA execution-time trend generally rises more steeply with increasing P than the PGS trend. Furthermore, the PGS method outperforms SI for the periodic heterogeneous layers (PHL) configuration problems. The PGS method outperforms SI and SI-DSA on as few as P = 16 for PHL problems and reduces execution time by a factor of ten or more for all problems considered with more than 2 million computational cells on P = 4,096.

  10. Analysis and selection of optimal function implementations in massively parallel computer

    DOE Patents [OSTI]

    Archer, Charles Jens; Peters, Amanda; Ratterman, Joseph D.

    2011-05-31

    An apparatus, program product and method optimize the operation of a parallel computer system by, in part, collecting performance data for a set of implementations of a function capable of being executed on the parallel computer system based upon the execution of the set of implementations under varying input parameters in a plurality of input dimensions. The collected performance data may be used to generate selection program code that is configured to call selected implementations of the function in response to a call to the function under varying input parameters. The collected performance data may be used to perform more detailed analysis to ascertain the comparative performance of the set of implementations of the function under the varying input parameters.
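
    One plausible reading of this in code is sketched below (Python; all names are hypothetical, and a production system would persist the measurements and generate dispatch code rather than return a closure):

        import itertools, timeit

        def profile(impls, grid):
            """Time every implementation at every point of the input-parameter
            grid and record the fastest implementation per point."""
            best = {}
            for values in itertools.product(*grid.values()):
                point = dict(zip(grid, values))
                times = {name: timeit.timeit(lambda f=f: f(**point), number=3)
                         for name, f in impls.items()}
                best[values] = min(times, key=times.get)
            return best

        def make_selector(impls, best, grid):
            """Dispatch to the implementation that was fastest at the nearest
            profiled parameter point (crude nearest-neighbor selection)."""
            names = list(grid)
            def select(**point):
                key = min(best, key=lambda k: sum(abs(v - point[n])
                                                  for v, n in zip(k, names)))
                return impls[best[key]](**point)
            return select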

  11. LiNbO₃: A photovoltaic substrate for massive parallel manipulation and patterning of nano-objects

    SciTech Connect (OSTI)

    Carrascosa, M.; García-Cabañes, A.; Jubera, M.; Ramiro, J. B.; Agulló-López, F.

    2015-12-15

    The application of evanescent photovoltaic (PV) fields, generated by visible illumination of Fe:LiNbO₃ substrates, for parallel massive trapping and manipulation of micro- and nano-objects is critically reviewed. The technique has often been referred to as photovoltaic or photorefractive tweezers. The main advantage of the new method is that the involved electrophoretic and/or dielectrophoretic forces do not require any electrodes, and large-scale manipulation of nano-objects can be easily achieved using the patterning capabilities of light. The paper describes the experimental techniques for particle trapping and the main reported experimental results obtained with a variety of micro- and nano-particles (dielectric and conductive) and different illumination configurations (single beam, holographic geometry, and spatial light modulator projection). The report also pays attention to the physical basis of the method, namely, the coupling of the evanescent photorefractive fields to the dielectric response of the nano-particles. The role of a number of physical parameters such as the contrast and spatial periodicities of the illumination pattern or the particle deposition method is discussed. Moreover, the main properties of the obtained particle patterns in relation to potential applications are summarized, and the first demonstrations are reviewed. Finally, the PV method is discussed in comparison to other patterning strategies, such as those based on the pyroelectric response and the electric fields associated with domain poling of ferroelectric materials.

  12. Massively-parallel electron dynamics calculations in real-time and real-space: Toward applications to nanostructures of more than ten-nanometers in size

    SciTech Connect (OSTI)

    Noda, Masashi; Ishimura, Kazuya; Nobusada, Katsuyuki; Yabana, Kazuhiro; Boku, Taisuke

    2014-05-15

    A highly efficient program of massively parallel calculations for electron dynamics has been developed in an effort to apply the method to the optical response of nanostructures of more than ten nanometers in size. The approach is based on time-dependent density functional theory calculations in real time and real space. The computational code is implemented using simple algorithms, with a finite-difference method for the spatial derivatives and a Taylor expansion for the time propagation. Since the computational program is free from the algorithms of eigenvalue problems and fast Fourier transformation, which are usually implemented in conventional quantum chemistry or band structure calculations, it is highly suitable for massively parallel calculations. Benchmark calculations using the K computer at RIKEN demonstrate that the parallel efficiency of the program is very high on more than 60 000 CPU cores. The method is applied to the optical response of ordered arrays of C₆₀ nanostructures of more than 10 nm in size. The computed absorption spectrum is in good agreement with the experimental observation.
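
    The time step itself, a truncated Taylor expansion of exp(-iHΔt) with the Hamiltonian applied matrix-free, is simple enough to sketch (Python/NumPy; a toy 1D finite-difference Hamiltonian stands in for the real-space TDDFT operator):

        import numpy as np

        def taylor_step(apply_h, psi, dt, order=4):
            """psi(t+dt) ~= sum_{n=0}^{order} (-i*H*dt)^n / n! psi(t).
            Each term reuses the previous one, so only `order` applications of
            H (a local stencil, hence communication-light) are needed."""
            term, out = psi.copy(), psi.copy()
            for n in range(1, order + 1):
                term = (-1j * dt / n) * apply_h(term)
                out += term
            return out

        def apply_h(psi, dx=0.1):
            # 3-point finite-difference kinetic energy on a periodic 1D grid
            return -0.5 * (np.roll(psi, 1) - 2 * psi + np.roll(psi, -1)) / dx**2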

  13. Method and apparatus for analyzing error conditions in a massively parallel computer system by identifying anomalous nodes within a communicator set

    DOE Patents [OSTI]

    Gooding, Thomas Michael

    2011-04-19

    An analytical mechanism for a massively parallel computer system automatically analyzes data retrieved from the system, and identifies nodes which exhibit anomalous behavior in comparison to their immediate neighbors. Preferably, anomalous behavior is determined by comparing call-return stack tracebacks for each node, grouping like nodes together, and identifying neighboring nodes which do not themselves belong to the group. A node, not itself in the group, having a large number of neighbors in the group, is a likely locality of error. The analyzer preferably presents this information to the user by sorting the neighbors according to number of adjoining members of the group.
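
    A compact sketch of the neighbor-scoring idea (Python; `group` is the set of nodes sharing a traceback and `neighbors` the lattice adjacency, both hypothetical inputs):

        def likely_error_localities(group, neighbors):
            """Rank nodes NOT in the group by how many of their immediate
            neighbors ARE in the group; a non-member surrounded by members
            is a likely locality of error."""
            scores = {node: sum(1 for n in nbrs if n in group)
                      for node, nbrs in neighbors.items() if node not in group}
            return sorted((n for n in scores if scores[n]),
                          key=scores.get, reverse=True)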

  14. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by dynamically adjusting local routing strategies

    DOE Patents [OSTI]

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-03-16

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. Each node implements a respective routing strategy for routing data through the network, the routing strategies not necessarily being the same in every node. The routing strategies implemented in the nodes are dynamically adjusted during application execution to shift network workload as required. Preferably, adjustment of routing policies in selective nodes is performed at synchronization points. The network may be dynamically monitored, and routing strategies adjusted according to detected network conditions.

  15. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by employing bandwidth shells at areas of overutilization

    DOE Patents [OSTI]

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-04-27

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. An automated routing strategy routes packets through one or more intermediate nodes of the network to reach a final destination. The default routing strategy is altered responsive to detection of overutilization of a particular path of one or more links, and at least some traffic is re-routed by distributing the traffic among multiple paths (which may include the default path). An alternative path may require a greater number of link traversals to reach the destination node.

  16. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by routing through transporter nodes

    DOE Patents [OSTI]

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-11-16

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. An automated routing strategy routes packets through one or more intermediate nodes of the network to reach a destination. Some packets are constrained to be routed through respective designated transporter nodes, the automated routing strategy determining a path from a respective source node to a respective transporter node, and from a respective transporter node to a respective destination node. Preferably, the source node chooses a routing policy from among multiple possible choices, and that policy is followed by all intermediate nodes. The use of transporter nodes allows greater flexibility in routing.

  17. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by dynamic global mapping of contended links

    DOE Patents [OSTI]

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2011-10-04

    A massively parallel nodal computer system periodically collects and broadcasts usage data for an internal communications network. A node sending data over the network makes a global routing determination using the network usage data. Preferably, network usage data comprises an N-bit usage value for each output buffer associated with a network link. An optimum routing is determined by summing the N-bit values associated with each link through which a data packet must pass, and comparing the sums associated with different possible routes.
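
    The cost model is easy to illustrate (Python; the usage values and the four-node topology are invented for the example):

        def best_route(routes, usage):
            """Pick the route minimizing the sum of broadcast N-bit usage
            values over its links; `usage` maps a directed link (u, v) to
            the usage value of its output buffer."""
            return min(routes, key=lambda r: sum(usage[link]
                                                 for link in zip(r, r[1:])))

        usage = {(0, 1): 7, (1, 3): 2, (0, 2): 1, (2, 3): 3}
        print(best_route([[0, 1, 3], [0, 2, 3]], usage))  # [0, 2, 3], cost 4 < 9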

  18. User's guide of TOUGH2-EGS-MP: A Massively Parallel Simulator with Coupled Geomechanics for Fluid and Heat Flow in Enhanced Geothermal Systems VERSION 1.0

    SciTech Connect (OSTI)

    Xiong, Yi; Fakcharoenphol, Perapon; Wang, Shihao; Winterfeld, Philip H.; Zhang, Keni; Wu, Yu-Shu

    2013-12-01

    TOUGH2-EGS-MP is a parallel numerical simulation program coupling geomechanics with fluid and heat flow in fractured and porous media, and is applicable for simulation of enhanced geothermal systems (EGS). TOUGH2-EGS-MP is based on the TOUGH2-MP code, the massively parallel version of TOUGH2. In TOUGH2-EGS-MP, the fully-coupled flow-geomechanics model is developed from linear elastic theory for thermo-poro-elastic systems and is formulated in terms of mean normal stress as well as pore pressure and temperature. Reservoir rock properties such as porosity and permeability depend on rock deformation, and the relationships between these two, obtained from poro-elasticity theories and empirical correlations, are incorporated into the simulation. This report provides the user with detailed information on the TOUGH2-EGS-MP mathematical model and instructions for using it for Thermal-Hydrological-Mechanical (THM) simulations. The mathematical model includes the fluid and heat flow equations, geomechanical equation, and discretization of those equations. In addition, the parallel aspects of the code, such as domain partitioning and communication between processors, are also included. Although TOUGH2-EGS-MP has the capability for simulating fluid and heat flows coupled with geomechanical effects, it is up to the user to select the specific coupling process, such as THM or only TH, in a simulation. There are several example problems illustrating applications of this program. These example problems are described in detail and their input data are presented. Their results demonstrate that this program can be used for field-scale geothermal reservoir simulation in porous and fractured media with fluid and heat flow coupled with geomechanical effects.

  19. CX-001635: Categorical Exclusion Determination

    Broader source: Energy.gov [DOE]

    Solar American Institute Incubator - Semprius - Massively Parallel Microcell-Based Module Array. CX(s) Applied: B3.6. Date: 04/08/2010. Location(s): Durham, North Carolina. Office(s): Energy Efficiency and Renewable Energy, Golden Field Office.

  20. Method and apparatus for routing data in an inter-nodal communications lattice of a massively parallel computer system by semi-randomly varying routing policies for different packets

    DOE Patents [OSTI]

    Archer, Charles Jens; Musselman, Roy Glenn; Peters, Amanda; Pinnow, Kurt Walter; Swartz, Brent Allen; Wallenfelt, Brian Paul

    2010-11-23

    A massively parallel computer system contains an inter-nodal communications network of node-to-node links. Nodes vary a choice of routing policy for routing data in the network in a semi-random manner, so that similarly situated packets are not always routed along the same path. Semi-random variation of the routing policy tends to avoid certain local hot spots of network activity, which might otherwise arise using more consistent routing determinations. Preferably, the originating node chooses a routing policy for a packet, and all intermediate nodes in the path route the packet according to that policy. Policies may be rotated on a round-robin basis, selected by generating a random number, or otherwise varied.
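
    The policy choice at the originating node might look like the following sketch (Python; the policy names are placeholders, not taken from the patent):

        import random

        POLICIES = ["order_xyz", "order_zyx", "minimal_adaptive"]

        def choose_policy(packet_seq, mode="round_robin"):
            """Vary the routing policy per packet; intermediate nodes then
            honor whatever policy the source stamped into the header."""
            if mode == "round_robin":
                return POLICIES[packet_seq % len(POLICIES)]
            return random.choice(POLICIES)   # semi-random variation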

  1. Parallel computing works

    SciTech Connect (OSTI)

    Not Available

    1991-10-23

    An account of the Caltech Concurrent Computation Program (C³P), a five-year project that focused on answering the question: "Can parallel computers be used to do large-scale scientific computations?" As the title indicates, the question is answered in the affirmative, by implementing numerous scientific applications on real parallel computers and doing computations that produced new scientific results. In the process of doing so, C³P helped design and build several new computers, designed and implemented basic system software, developed algorithms for frequently used mathematical computations on massively parallel machines, devised performance models and measured the performance of many computers, and created a high performance computing facility based exclusively on parallel computers. While the initial focus of C³P was the hypercube architecture developed by C. Seitz, many of the methods developed and lessons learned have been applied successfully on other massively parallel architectures.

  2. High-speed massively parallel scanning

    DOE Patents [OSTI]

    Decker, Derek E.

    2010-07-06

    A new technique for recording a series of images of a high-speed event (such as, but not limited to: ballistics, explosives, laser induced changes in materials, etc.) is presented. Such technique(s) makes use of a lenslet array to take image picture elements (pixels) and concentrate light from each pixel into a spot that is much smaller than the pixel. This array of spots illuminates a detector region (e.g., film, as one embodiment) which is scanned transverse to the light, creating tracks of exposed regions. Each track is a time history of the light intensity for a single pixel. By appropriately configuring the array of concentrated spots with respect to the scanning direction of the detection material, different tracks fit between pixels and sufficient lengths are possible which can be of interest in several high-speed imaging applications.

  3. Parallel Dislocation Simulator

    Energy Science and Technology Software Center (OSTI)

    2006-10-30

    ParaDiS is software capable of simulating the motion, evolution, and interaction of dislocation networks in single crystals using massively parallel computer architectures. The software is capable of outputting the stress-strain response of a single crystal whose plastic deformation is controlled by the dislocation processes.

  4. Differences Between Distributed and Parallel Systems

    SciTech Connect (OSTI)

    Brightwell, R.; Maccabe, A.B.; Riesen, R.

    1998-10-01

    Distributed systems have been studied for twenty years and are now coming into wider use as fast networks and powerful workstations become more readily available. In many respects a massively parallel computer resembles a network of workstations and it is tempting to port a distributed operating system to such a machine. However, there are significant differences between these two environments and a parallel operating system is needed to get the best performance out of a massively parallel system. This report characterizes the differences between distributed systems, networks of workstations, and massively parallel systems and analyzes the impact of these differences on operating system design. In the second part of the report, we introduce Puma, an operating system specifically developed for massively parallel systems. We describe Puma portals, the basic building blocks for message passing paradigms implemented on top of Puma, and show how the differences observed in the first part of the report have influenced the design and implementation of Puma.

  5. Ultrascalable petaflop parallel supercomputer

    DOE Patents [OSTI]

    Blumrich, Matthias A.; Chen, Dong; Chiu, George; Cipolla, Thomas M.; Coteus, Paul W.; Gara, Alan G.; Giampapa, Mark E.; Hall, Shawn; Haring, Rudolf A.; Heidelberger, Philip; Kopcsay, Gerard V.; Ohmacht, Martin; Salapura, Valentina; Sugavanam, Krishnan; Takken, Todd

    2010-07-20

    A massively parallel supercomputer of petaOPS-scale includes node architectures based upon System-On-a-Chip technology, where each processing node comprises a single Application Specific Integrated Circuit (ASIC) having up to four processing elements. The ASIC nodes are interconnected by multiple independent networks that optimally maximize the throughput of packet communications between nodes with minimal latency. The multiple networks may include three high-speed networks for parallel algorithm message passing including a Torus, collective network, and a Global Asynchronous network that provides global barrier and notification functions. These multiple independent networks may be collaboratively or independently utilized according to the needs or phases of an algorithm for optimizing algorithm processing performance. The use of a DMA engine is provided to facilitate message passing among the nodes without the expenditure of processing resources at the node.

  6. TRANSIMS Parallelization

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Background: TRANSIMS was originally developed by Los Alamos National Laboratory to run exclusively on a Linux cluster environment. In this initial version, the only parallelized component was the microsimulator. It worked

  7. PARALLEL HOP: A SCALABLE HALO FINDER FOR MASSIVE COSMOLOGICAL...

    Office of Scientific and Technical Information (OSTI)

    Center for Astrophysics and Space Sciences, University of California, San Diego, CA 92093 ... Subject: 79 ASTROPHYSICS, COSMOLOGY AND ASTRONOMY; DATA ANALYSIS; DISTRIBUTION; GALAXIES; ...

  8. Massively Parallel Models of the Human Circulatory System (Conference...

    Office of Scientific and Technical Information (OSTI)

    Language: English Subject: 59 BASIC BIOLOGICAL SCIENCES; 97 MATHEMATICS, COMPUTING, AND INFORMATION SCIENCE

  9. Parallel Batch Scripts

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel Environments on Genepool: You can run parallel jobs that use MPI or OpenMP on Genepool as long as you make the appropriate changes to your submission script. To investigate the parallel environments that are available on Genepool, you can use the following commands: qconf -sp <pename> (show the configuration for the specified parallel environment) and qconf -spl (show a list of all currently configured parallel environments).

  10. Supertwistors and massive particles

    SciTech Connect (OSTI)

    Mezincescu, Luca; Routh, Alasdair J.; Townsend, Paul K.

    2014-07-15

    In the (super)twistor formulation of massless (super)particle mechanics, the mass-shell constraint is replaced by a spin-shell constraint from which the spin content can be read off. We extend this formalism to massive (super)particles (with N-extended spacetime supersymmetry) in three and four spacetime dimensions, explaining how the spin-shell constraints are related to spin, and we use it to prove equivalence of the massive N=1 and BPS-saturated N=2 superparticle actions. We also find the supertwistor form of the action for spinning particles with N-extended worldline supersymmetry, massless in four dimensions and massive in three dimensions, and we show how this simplifies special features of the N=2 case. -- Highlights: Spin-shell constraints are related to Poincaré Casimirs. Twistor form of 4D spinning particle for spin N/2. Twistor proof of scalar/antisymmetric tensor equivalence for 4D spin 0. Twistor form of 3D particle with arbitrary spin. Proof of equivalence of N=1 and N=2 BPS massive 4D superparticles.

  11. Special parallel processing workshop

    SciTech Connect (OSTI)

    1994-12-01

    This report contains viewgraphs from the Special Parallel Processing Workshop. These viewgraphs deal with topics such as parallel processing performance, message passing, queue structure, and other basic concepts dealing with parallel processing.

  12. Parallel Python GDB

    Energy Science and Technology Software Center (OSTI)

    2012-08-05

    PGDB is a lightweight parallel debugger software product. It utilizes the open source gdb debugger inside of a parallel Python framework.

  13. A Comprehensive Look at High Performance Parallel I/O

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Comprehensive Look at High Performance Parallel I/O. Book Signing @ SC14! Nov. 18, 5 p.m. in Booth 1939. November 10, 2014. Contact: Linda Vu, +1 510 495 2402, lvu@lbl.gov. In the 1990s, high performance computing (HPC) made a dramatic transition to massively parallel processors. As this model solidified over the next 20 years, supercomputing performance increased from gigaflops (billions of calculations per second) to

  14. Parallel flow diffusion battery

    DOE Patents [OSTI]

    Yeh, H.C.; Cheng, Y.S.

    1984-01-01

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  15. Parallel flow diffusion battery

    DOE Patents [OSTI]

    Yeh, Hsu-Chi; Cheng, Yung-Sung

    1984-08-07

    A parallel flow diffusion battery for determining the mass distribution of an aerosol has a plurality of diffusion cells mounted in parallel to an aerosol stream, each diffusion cell including a stack of mesh wire screens of different density.

  16. Applications of Parallel Computers

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Applications of Parallel Computers UCB CS267 Spring 2015 Tuesday & Thursday, 9:30-11:00 Pacific Time Applications of Parallel Computers, CS267, is a graduate-level course...

  17. Parallel Atomistic Simulations

    SciTech Connect (OSTI)

    HEFFELFINGER,GRANT S.

    2000-01-18

    Algorithms developed to enable the use of atomistic molecular simulation methods with parallel computers are reviewed. Methods appropriate for bonded as well as non-bonded (and charged) interactions are included. While strategies for obtaining parallel molecular simulations have been developed for the full variety of atomistic simulation methods, molecular dynamics and Monte Carlo have received the most attention. Three main types of parallel molecular dynamics simulations have been developed: the replicated-data decomposition, the spatial decomposition, and the force decomposition. For Monte Carlo simulations, parallel algorithms have been developed which can be divided into two categories: those which require a modified Markov chain and those which do not. Parallel algorithms developed for other simulation methods such as Gibbs ensemble Monte Carlo, grand canonical molecular dynamics, and Monte Carlo methods for protein structure determination are also reviewed, and issues such as how to measure parallel efficiency, especially in the case of parallel Monte Carlo algorithms with modified Markov chains, are discussed.

  18. Parallel integrated thermal management

    DOE Patents [OSTI]

    Bennion, Kevin; Thornton, Matthew

    2014-08-19

    Embodiments discussed herein are directed to managing the heat content of two vehicle subsystems through a single coolant loop having parallel branches for each subsystem.

  19. Optimize Parallel Pumping Systems

    Broader source: Energy.gov [DOE]

    This tip sheet describes how to optimize the performance of multiple pumps operating continuously as part of a parallel pumping system.

  20. Thought Leaders during Crises in Massive Social Networks

    SciTech Connect (OSTI)

    Corley, Courtney D.; Farber, Robert M.; Reynolds, William

    2012-05-24

    The vast amount of social media data that can be gathered from the internet, coupled with workflows that utilize both commodity systems and massively parallel supercomputers, such as the Cray XMT, opens new vistas for research to support health, defense, and national security. Computer technology now enables the analysis of graph structures containing more than 4 billion vertices joined by 34 billion edges, along with metrics and massively parallel algorithms that exhibit near-linear scalability according to the number of processors. The challenge lies in making this massive data and analysis comprehensible to analysts and end-users that require actionable knowledge to carry out their duties. Simply stated, we have developed language- and content-agnostic techniques to reduce large graphs built from vast media corpora into forms people can understand. Specifically, our tools and metrics act as a survey tool to identify 'thought leaders': those members that lead or reflect the thoughts and opinions of an online community, independent of the source language.

  1. Eclipse Parallel Tools Platform

    Energy Science and Technology Software Center (OSTI)

    2005-02-18

    Designing and developing parallel programs is an inherently complex task. Developers must choose from the many parallel architectures and programming paradigms that are available, and face a plethora of tools that are required to execute, debug, and analyze parallel programs in these environments. Few, if any, of these tools provide any degree of integration, or indeed any commonality in their user interfaces at all. This further complicates the parallel developer's task, hampering software engineering practices, and ultimately reducing productivity. One consequence of this complexity is that best practice in parallel application development has not advanced to the same degree as more traditional programming methodologies. The result is that there is currently no open-source, industry-strength platform that provides a highly integrated environment specifically designed for parallel application development. Eclipse is a universal tool-hosting platform that is designed to provide a robust, full-featured, commercial-quality, industry platform for the development of highly integrated tools. It provides a wide range of core services for tool integration that allow tool producers to concentrate on their tool technology rather than on platform-specific issues. The Eclipse Integrated Development Environment is an open-source project that is supported by over 70 organizations, including IBM, Intel and HP. The Eclipse Parallel Tools Platform (PTP) plug-in extends the Eclipse framework by providing support for a rich set of parallel programming languages and paradigms, and a core infrastructure for the integration of a wide variety of parallel tools. The first version of the PTP is a prototype that only provides minimal functionality for parallel tool integration, support for a small number of parallel architectures

  2. Sinus histiocytosis with massive lymphadenopathy

    SciTech Connect (OSTI)

    Pastakia, B.; Weiss, S.H.

    1987-11-01

    Gallium uptake corresponding to the extent of the disease in a patient with histologically proven sinus histiocytosis with massive lymphadenopathy (SHML) is reported. Computerized tomography confirmed the presence of bilateral retrobulbar masses, involvement of both lateral recti, erosion of the bony orbital floor with encroachment of tumor into the right maxillary antrum, and retropharyngeal involvement.

  3. Parallel programming with PCN

    SciTech Connect (OSTI)

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  4. UPC (Unified Parallel C)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    UPC (Unified Parallel C) Description Unified Parallel C is a partitioned global address space (PGAS) language and an extension of the C programming language. Availability UPC is available on Edison and Hopper via both the Cray compilers, as well as through Berkeley UPC, a portable high-performance UPC compiler and runtime implementation. Using UPC To compile a UPC source file using the Cray compilers, you must first swap the Cray compiler with the default compiler. On Hopper: % module swap

  5. A Massive Stellar Burst Before the Supernova

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Massive Stellar Burst Before the Supernova. February 6, 2013. Contact: Linda Vu, lvu@lbl.gov, +1 510 495 2402. An automated supernova ...

  6. Strangeness production with "massive" gluons

    SciTech Connect (OSTI)

    Biro, T.S. (Institut fuer Theoretische Physik, Justus-Liebig-Universitaet, Giessen); Levai, P.; Mueller, B.

    1990-11-01

    We present a perturbative calculation of strange-quark production by the processes g → s + s̄, g + g → s + s̄, and q + q̄ → s + s̄ in a quark-gluon plasma containing gluons that are effectively "massive" due to medium effects. We consider only transverse polarizations of the gluons. We find that for a gluon mass beyond 300 MeV the one-gluon decay dominates, and that there is an enhancement of ss̄ production from massive gluons compared with the massless case.

  7. An efficient parallel algorithm for matrix-vector multiplication

    SciTech Connect (OSTI)

    Hendrickson, B.; Leland, R.; Plimpton, S.

    1993-03-01

    The multiplication of a vector by a matrix is the kernel computation of many algorithms in scientific computation. A fast parallel algorithm for this calculation is therefore necessary if one is to make full use of the new generation of parallel supercomputers. This paper presents a high performance, parallel matrix-vector multiplication algorithm that is particularly well suited to hypercube multiprocessors. For an n x n matrix on p processors, the communication cost of this algorithm is O(n/√p + log(p)), independent of the matrix sparsity pattern. The performance of the algorithm is demonstrated by employing it as the kernel in the well-known NAS conjugate gradient benchmark, where a run time of 6.09 seconds was observed. This is the best published performance on this benchmark achieved to date using a massively parallel supercomputer.
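
    A generic row-block decomposition conveys the flavor of a parallel matrix-vector product, though not the hypercube-tuned communication pattern of the paper (Python with mpi4py; the sizes are arbitrary and p is assumed to divide n):

        from mpi4py import MPI
        import numpy as np

        comm = MPI.COMM_WORLD
        rank, p = comm.Get_rank(), comm.Get_size()
        n = 1024
        rows = n // p                        # this rank's block of rows

        A_local = np.random.rand(rows, n)    # local slice of the matrix
        x_local = np.random.rand(rows)       # local slice of the input vector

        x = np.empty(n)
        comm.Allgather(x_local, x)           # assemble the full input vector
        y_local = A_local @ x                # purely local multiply: y slice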

  8. Parallel Tensor Compression for Large-Scale Scientific Data.

    SciTech Connect (OSTI)

    Kolda, Tamara G.; Ballard, Grey; Austin, Woody Nathan

    2015-10-01

    As parallel computing trends towards the exascale, scientific data produced by high-fidelity simulations are growing increasingly massive. For instance, a simulation on a three-dimensional spatial grid with 512 points per dimension that tracks 64 variables per grid point for 128 time steps yields 8 TB of data. By viewing the data as a dense five way tensor, we can compute a Tucker decomposition to find inherent low-dimensional multilinear structure, achieving compression ratios of up to 10000 on real-world data sets with negligible loss in accuracy. So that we can operate on such massive data, we present the first-ever distributed memory parallel implementation for the Tucker decomposition, whose key computations correspond to parallel linear algebra operations, albeit with nonstandard data layouts. Our approach specifies a data distribution for tensors that avoids any tensor data redistribution, either locally or in parallel. We provide accompanying analysis of the computation and communication costs of the algorithms. To demonstrate the compression and accuracy of the method, we apply our approach to real-world data sets from combustion science simulations. We also provide detailed performance results, including parallel performance in both weak and strong scaling experiments.
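
    The 8 TB figure checks out: 512³ grid points × 64 variables × 128 time steps × 8 bytes ≈ 8.8 × 10¹² bytes. A sequential HOSVD-style factorization conveys the linear algebra behind the Tucker decomposition; the paper's actual contribution, the distributed-memory data layout, is not reproduced in this sketch (Python/NumPy):

        import numpy as np

        def hosvd(tensor, ranks):
            """Truncated higher-order SVD: factor U_k comes from the SVD of
            the mode-k unfolding; the core is the tensor contracted with
            every U_k transpose."""
            factors = []
            for mode, r in enumerate(ranks):
                unfold = np.moveaxis(tensor, mode, 0).reshape(tensor.shape[mode], -1)
                U, _, _ = np.linalg.svd(unfold, full_matrices=False)
                factors.append(U[:, :r])
            core = tensor
            for mode, U in enumerate(factors):
                core = np.moveaxis(
                    np.tensordot(U.T, np.moveaxis(core, mode, 0), axes=1), 0, mode)
            return core, factors

        # A rank-(8,...,8) Tucker model of a 64^5 tensor stores 8^5 + 5*64*8
        # values instead of 64^5: a compression ratio of roughly 30,000.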

  9. Parallel time integration software

    Energy Science and Technology Software Center (OSTI)

    2014-07-01

    This package implements an optimal-scaling multigrid solver for the (non)linear systems that arise from the discretization of problems with evolutionary behavior. Typically, solution algorithms for evolution equations are based on a time-marching approach, solving sequentially for one time step after the other. Parallelism in these traditional time-integration techniques is limited to spatial parallelism. However, current trends in computer architectures are leading towards systems with more, but not faster, processors. Therefore, faster compute speeds must come from greater parallelism. One approach to achieve parallelism in time is with multigrid, but extending classical multigrid methods for elliptic operators to this setting is a significant achievement. In this software, we implement a non-intrusive, optimal-scaling time-parallel method based on multigrid reduction techniques. The examples in the package demonstrate optimality of our multigrid-reduction-in-time algorithm (MGRIT) for solving a variety of parabolic equations in two and three spatial dimensions. These examples can also be used to show that MGRIT can achieve significant speedup in comparison to sequential time marching on modern architectures.

  10. Parallel community climate model: Description and user's guide

    SciTech Connect (OSTI)

    Drake, J.B.; Flanery, R.E.; Semeraro, B.D.; Worley, P.H.

    1996-07-15

    This report gives an overview of a parallel version of the NCAR Community Climate Model, CCM2, implemented for MIMD massively parallel computers using a message-passing programming paradigm. The parallel implementation was developed on an Intel iPSC/860 with 128 processors and on the Intel Delta with 512 processors, and the initial target platform for the production version of the code is the Intel Paragon with 2048 processors. Because the implementation uses standard, portable message-passing libraries, the code has been easily ported to other multiprocessors supporting a message-passing programming paradigm. The parallelization strategy used is to decompose the problem domain into geographical patches and assign each processor the computation associated with a distinct subset of the patches. With this decomposition, the physics calculations involve only grid points and data local to a processor and are performed in parallel. Using parallel algorithms developed for the semi-Lagrangian transport, the fast Fourier transform and the Legendre transform, both physics and dynamics are computed in parallel with minimal data movement and modest change to the original CCM2 source code. Sequential or parallel history tapes are written and input files (in history tape format) are read sequentially by the parallel code to promote compatibility with production use of the model on other computer systems. A validation exercise has been performed with the parallel code and is detailed along with some performance numbers on the Intel Paragon and the IBM SP2. A discussion of reproducibility of results is included. A user's guide for the PCCM2 version 2.1 on the various parallel machines completes the report. Procedures for compilation, setup and execution are given. A discussion of code internals is included for those who may wish to modify and use the program in their own research.

  11. Parallel optical sampler

    DOE Patents [OSTI]

    Tauke-Pedretti, Anna; Skogen, Erik J; Vawter, Gregory A

    2014-05-20

    An optical sampler includes first and second 1×n optical beam splitters splitting an input optical sampling signal and an optical analog input signal into n parallel channels, respectively, a plurality of optical delay elements providing n parallel delayed input optical sampling signals, n photodiodes converting the n parallel optical analog input signals into n respective electrical output signals, and n optical modulators modulating the input optical sampling signal or the optical analog input signal by the respective electrical output signals, and providing n successive optical samples of the optical analog input signal. A plurality of output photodiodes and eADCs convert the n successive optical samples to n successive digital samples. The optical modulator may be a photodiode-interconnected Mach-Zehnder Modulator. A method of sampling the optical analog input signal is disclosed.

  12. Parallel programming with Ada

    SciTech Connect (OSTI)

    Kok, J.

    1988-01-01

    To the human programmer, the ease of coding distributed computing is highly dependent on the suitability of the employed programming language. With a particular language, it is also important whether the capabilities of one or more parallel architectures can be addressed efficiently by the available language constructs. In this paper the possibilities of the high-level language Ada, and in particular of its tasking concept, are discussed as a descriptive tool for the design and implementation of numerical and other algorithms that allow execution of parts in parallel. Language tools are explained and their use for common applications is shown. Conclusions are drawn about the usefulness of several Ada concepts.

  13. Parallel programming with PCN

    SciTech Connect (OSTI)

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  14. Parallel Multigrid Equation Solver

    Energy Science and Technology Software Center (OSTI)

    2001-09-07

    Prometheus is a fully parallel multigrid equation solver for matrices that arise in unstructured grid finite element applications. It includes a geometric and an algebraic multigrid method and has solved problems of up to 76 million degrees of freedom, problems in linear elasticity on the ASCI Blue Pacific and ASCI Red machines.

  15. Parallel Total Energy

    Energy Science and Technology Software Center (OSTI)

    2004-10-21

    This is a total energy electronic structure code using the Local Density Approximation (LDA) of density functional theory. It uses plane waves as the wave function basis set. It can use both norm-conserving pseudopotentials and ultrasoft pseudopotentials. It can relax the atomic positions according to the total energy. It is a parallel code using MPI.

  16. A Massive Stellar Burst Before the Supernova

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    A Massive Stellar Burst Before the Supernova. February 6, 2013. Contact: Linda Vu, lvu@lbl.gov, +1 510 495 2402. An automated supernova hunt is shedding new light on the death sequence of massive stars, specifically the kind that self-destruct in Type IIn supernova explosions. Digging through the Palomar Transient Factory (PTF) data archive housed at the Department of Energy's National Energy Research Scientific Computing Center (NERSC) at Lawrence

  17. Parallel grid population

    DOE Patents [OSTI]

    Wald, Ingo; Ize, Santiago

    2015-07-28

    Parallel population of a grid with a plurality of objects using a plurality of processors. One example embodiment is a method for parallel population of a grid with a plurality of objects using a plurality of processors. The method includes a first act of dividing a grid into n distinct grid portions, where n is the number of processors available for populating the grid. The method also includes acts of dividing a plurality of objects into n distinct sets of objects, assigning a distinct set of objects to each processor such that each processor determines by which distinct grid portion(s) each object in its distinct set of objects is at least partially bounded, and assigning a distinct grid portion to each processor such that each processor populates its distinct grid portion with any objects that were previously determined to be at least partially bounded by its distinct grid portion.
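
    The two-phase scheme can be mimicked in a few lines (Python; `portion_of` and the helper names are illustrative, and a real implementation would exchange the phase-1 results between processors rather than share a list):

        from concurrent.futures import ProcessPoolExecutor

        def phase1(args):
            objects, portion_of = args            # one processor's object subset
            return [(portion_of(obj), obj) for obj in objects]

        def phase2(args):
            portion_id, pairs = args              # one processor's grid portion
            return portion_id, [o for pid, o in pairs if pid == portion_id]

        def populate(portions, objects, portion_of, n):
            # portion_of must be a module-level (picklable) function
            subsets = [(objects[i::n], portion_of) for i in range(n)]
            with ProcessPoolExecutor(max_workers=n) as pool:
                pairs = [p for chunk in pool.map(phase1, subsets) for p in chunk]
                return dict(pool.map(phase2, [(pid, pairs) for pid in portions]))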

  18. Exploiting Network Parallelism

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Exploiting Network Parallelism for Improving Data Transfer Performance Dan Gunter ∗ , Raj Kettimuthu † , Ezra Kissel ‡ , Martin Swany ‡ , Jun Yi § , Jason Zurawski ¶ ∗ Advanced Computing for Science Department, Lawrence Berkeley National Laboratory, Berkeley, CA † Mathematics and Computer Science Division, Argonne National Laboratory Argonne, IL ‡ School of Informatics and Computing, Indiana University, Bloomington, IN § Computation Institute, University of Chicago/Argonne

  19. Xyce parallel electronic simulator.

    SciTech Connect (OSTI)

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Rankin, Eric Lamont; Schiek, Richard Louis; Thornquist, Heidi K.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is (to the extent possible) exhaustively list device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  20. Detecting torsion from massive electrodynamics

    SciTech Connect (OSTI)

    Garcia de Andrade, L.C.; Lopes, M. )

    1993-11-01

    A new method of detecting torsion in the case of massive electrodynamics is proposed. The method is based on the study of spectral lines of hydrogen-like atoms placed in a torsion field, where the interaction energy between the torsion vector field Q and an electric dipole is given by ε ∼ p · Q. All the methods designed so far have been based on spinning test particles interacting with magnetic fields, in which the energy splitting is given by ε ∼ S · B in a Stern-Gerlach type experiment. The authors arrive at an energy splitting of order ε ∼ 10⁻²¹ erg ∼ 10⁻⁹ eV, which is within the frequency band of radio waves. 15 refs.

  1. Hybrid Parallel Programming with MPI and Unified Parallel C | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Hybrid Parallel Programming with MPI and Unified Parallel C. Authors: Dinan, J., Balaji, P., Lusk, E., Sadayappan, P., Thakur, R. The Message Passing Interface (MPI) is one of the most widely used programming models for parallel computing. However, the amount of memory available to an MPI process is limited by the amount of local memory within a compute node. Partitioned Global Address Space (PGAS) models such as Unified Parallel C (UPC) are growing in popularity

  2. Allinea DDT as a Parallel Debugging Alternative to Totalview

    SciTech Connect (OSTI)

    Antypas, K.B.

    2007-03-05

    Totalview, from the Etnus Corporation, is a sophisticated and feature-rich software debugger for parallel applications. As Totalview has gained in popularity and market share, its price has increased to the point where it is often prohibitively expensive for massively parallel supercomputers. Additionally, many of Totalview's advanced features are not used by members of the scientific computing community. For these reasons, supercomputing centers have begun to search for a basic parallel debugging tool which can be used as an alternative to Totalview. DDT (Distributed Debugging Tool) from Allinea Software is a relatively new parallel debugging tool which aims to provide much of the same functionality as Totalview. This review outlines the basic features and limitations of DDT to determine if it can be a reasonable substitute for Totalview. DDT was tested on the NERSC platforms Bassi, Seaborg, Jacquard and Davinci with Fortran90, C, and C++ codes using MPI and OpenMP for parallelism.

  3. Applied Parallel Metadata Indexing

    SciTech Connect (OSTI)

    Jacobi, Michael R

    2012-08-01

    The GPFS Archive is a parallel archive used by hundreds of users in the Turquoise collaboration network. It houses 4+ petabytes of data in more than 170 million files. Currently, users must navigate the file system to retrieve their data, requiring them to remember file paths and names. A better solution might allow users to tag data with meaningful labels and search the archive using standard and user-defined metadata, while maintaining security. Last summer, I developed the backend to a tool that adheres to these design goals. The backend works by importing GPFS metadata into a MongoDB cluster, which is then indexed on each attribute. This summer, the author implemented security and developed the user interface for the search tool. To meet security requirements, each database table is associated with a single user, which only stores records that the user may read, and requires a set of credentials to access. The interface to the search tool is implemented using FUSE (Filesystem in USErspace). FUSE is an intermediate layer that intercepts file system calls and allows the developer to redefine how those calls behave. In the case of this tool, FUSE interfaces with MongoDB to issue queries and populate output. A FUSE implementation is desirable because it allows users to interact with the search tool using commands they are already familiar with. These security and interface additions are essential for a usable product.

  4. Parallel ptychographic reconstruction

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-12-19

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It is able to be used to image extended objects at a resolution limited by scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.

  5. Parallel ptychographic reconstruction

    SciTech Connect (OSTI)

    Nashed, Youssef S. G.; Vine, David J.; Peterka, Tom; Deng, Junjing; Ross, Rob; Jacobsen, Chris

    2014-12-19

    Ptychography is an imaging method whereby a coherent beam is scanned across an object, and an image is obtained by iterative phasing of the set of diffraction patterns. It is able to be used to image extended objects at a resolution limited by scattering strength of the object and detector geometry, rather than at an optics-imposed limit. As technical advances allow larger fields to be imaged, computational challenges arise for reconstructing the correspondingly larger data volumes, yet at the same time there is also a need to deliver reconstructed images immediately so that one can evaluate the next steps to take in an experiment. Here we present a parallel method for real-time ptychographic phase retrieval. It uses a hybrid parallel strategy to divide the computation between multiple graphics processing units (GPUs) and then employs novel techniques to merge sub-datasets into a single complex phase and amplitude image. Results are shown on a simulated specimen and a real dataset from an X-ray experiment conducted at a synchrotron light source.

  6. Unified Parallel Software

    Energy Science and Technology Software Center (OSTI)

    2003-12-01

    UPS (Unified Parallel Software) is a collection of software tools (libraries, scripts, executables) that assist in parallel programming. This consists of: o libups.a C/Fortran callable routines for message passing (utilities written on top of MPI) and file IO (utilities written on top of HDF). o libuserd-HDF.so EnSight user-defined reader for visualizing data files written with UPS File IO. o ups_libuserd_query, ups_libuserd_prep.pl, ups_libuserd_script.pl Executables/scripts to get information from data files and to simplify the use of EnSight on those data files. o ups_io_rm/ups_io_cp Manipulate data files written with UPS File IO. These tools are portable to a wide variety of Unix platforms.

  7. CASL-U-2015-0170-000 SHIFT: A Massively Parallel Monte Carlo

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    on Mathematics and Computation (M&C), Supercomputing in Nuclear Applications ... results from the Babcock and Wilcox 1810 (BW1810) criticality experiments 18 are presented. ...

  8. BlueGene/L Applications: Parallelism on a Massive Scale (Journal...

    Office of Scientific and Technical Information (OSTI)

    131,072 processors and absolute performance with a peak rate of 367 TFlops. BGL has led the Top500 list the last four times with a Linpack rate of 280.6 TFlops for the full ...

  9. Massive Hanford Test Reactor Removed - Plutonium Recycle Test...

    Office of Environmental Management (EM)

    Massive Hanford Test Reactor Removed - Plutonium Recycle Test Reactor removed from Hanford's 300 Area

  10. A new quasidilaton theory of massive gravity (Journal Article...

    Office of Scientific and Technical Information (OSTI)

    A new quasidilaton theory of massive gravity. We present a new quasidilaton theory of...

  11. SDSS-III: Massive Spectroscopic Surveys of the Distant Universe...

    Office of Scientific and Technical Information (OSTI)

    SDSS-III: Massive Spectroscopic Surveys of the Distant Universe, the Milky Way Galaxy, and Extra-Solar Planetary Systems

  12. A cosmological study in massive gravity theory

    SciTech Connect (OSTI)

    Pan, Supriya Chakraborty, Subenoy

    2015-09-15

    A detailed study of the various cosmological aspects of massive gravity theory is presented in this work. For the homogeneous and isotropic FLRW model, the deceleration parameter has been evaluated, and it has been examined whether there is any transition from deceleration to acceleration in the recent past. With a proper choice of the free parameters, it has been shown that massive gravity theory is equivalent to Einstein gravity with a modified Newtonian gravitational constant together with a negative cosmological constant. Also, in this context, it has been examined whether an emergent scenario is possible in massive gravity theory. Finally, we have performed a cosmographic analysis in massive gravity theory.

  13. Parallel Computing Summer Research Internship

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Recommended Reading & Resources Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead ...

  14. Dynamic Star Formation in the Massive DR21 Filament

    SciTech Connect (OSTI)

    Schneider, N.; Csengeri, T.; Bontemps, S.; Motte, F.; Simon, R.; Hennebelle, P.; Federrath, C.; Klessen, R.; /ZAH, Heidelberg /KIPAC, Menlo Park

    2010-08-25

    The formation of massive stars is a highly complex process in which it is unclear whether the star-forming gas is in global gravitational collapse or in an equilibrium state supported by turbulence and/or magnetic fields. By studying one of the most massive and dense star-forming regions in the Galaxy at a distance of less than 3 kpc, i.e. the filament containing the well-known sources DR21 and DR21(OH), we attempt to obtain observational evidence to help us discriminate between these two views. We use molecular line data from our ¹³CO (1→0), CS (2→1), and N₂H⁺ (1→0) survey of the Cygnus X region obtained with the FCRAO, and CO, CS, HCO⁺, N₂H⁺, and H₂CO data obtained with the IRAM 30m telescope. We observe a complex velocity field and velocity dispersion in the DR21 filament in which regions of the highest column density, i.e., dense cores, have a lower velocity dispersion than the surrounding gas and velocity gradients that are not (only) due to rotation. Infall signatures in optically thick line profiles of HCO⁺ and ¹²CO are observed along and across the whole DR21 filament. By modelling the observed spectra, we obtain a typical infall speed of ~0.6 km s⁻¹ and mass accretion rates of the order of a few 10⁻³ M⊙ yr⁻¹ for the two main clumps constituting the filament. These massive clumps (4900 and 3300 M⊙ at densities of around 10⁵ cm⁻³ within 1 pc diameter) are both gravitationally contracting. The more massive of the clumps, DR21(OH), is connected to a sub-filament, apparently 'falling' onto the clump. This filament runs parallel to the magnetic field. All observed kinematic features in the DR21 filament (velocity field, velocity dispersion, and infall), its filamentary morphology, and the existence of (a) sub-filament(s) can be explained if the DR21 filament was formed by the convergence of flows on large scales.

  15. A two-level parallel direct search implementation for arbitrarily sized objective functions

    SciTech Connect (OSTI)

    Hutchinson, S.A.; Shadid, N.; Moffat, H.K.

    1994-12-31

    In the past, many optimization schemes for massively parallel computers have attempted to achieve parallel efficiency using one of two methods. In the case of large and expensive objective function calculations, the optimization itself may be run in serial and the objective function calculations parallelized. In contrast, if the objective function calculations are relatively inexpensive and can be performed on a single processor, then the optimization routine itself may be parallelized. In this paper, a scheme based upon the Parallel Direct Search (PDS) technique is presented which allows the objective function calculations to be done on an arbitrarily large number (p₂) of processors. If p, the number of processors available, is greater than or equal to 2p₂, then the optimization may be parallelized as well. This allows for efficient use of computational resources, since the objective function calculations can be performed on the number of processors that allows peak parallel efficiency, and further speedup may then be achieved by parallelizing the optimization. Results are presented for an optimization problem which involves the solution of a PDE using a finite-element algorithm as part of the objective function calculation. The optimum number of processors for the finite-element calculations is less than p/2; thus, the PDS method is also parallelized. Performance comparisons are given for an nCUBE 2 implementation.
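
    A small sketch (not from the paper) of the resource split the abstract describes, using only the quantities p and p₂ named there; split_processors is an illustrative helper name.

        def split_processors(p, p2):
            """p processors total, p2 per objective function evaluation."""
            if p < p2:
                raise ValueError("not enough processors for one evaluation")
            n_groups = p // p2               # concurrent objective evaluations
            parallel_search = n_groups >= 2  # holds exactly when p >= 2*p2
            return n_groups, parallel_search

        # e.g. 64 processors with 24 per finite-element solve: two evaluation
        # groups, so the direct search itself can also run in parallel.
        print(split_processors(64, 24))      # (2, True)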

  16. A two-level parallel direct search implementation for arbitrarily sized objective functions

    SciTech Connect (OSTI)

    Hutchinson, S.A.; Shadid, J.N.; Moffat, H.K.; Ng, K.T.

    1994-02-21

    In the past, many optimization schemes for massively parallel computers have attempted to achieve parallel efficiency using one of two methods. In the case of large and expensive objective function calculations, the optimization itself may be run in serial and the objective function calculations parallelized. In contrast, if the objective function calculations are relatively inexpensive and can be performed on a single processor, then the optimization routine itself may be parallelized. In this paper, a scheme based upon the Parallel Direct Search (PDS) technique is presented which allows the objective function calculations to be done on an arbitrarily large number (p₂) of processors. If p, the number of processors available, is greater than or equal to 2p₂, then the optimization may be parallelized as well. This allows for efficient use of computational resources, since the objective function calculations can be performed on the number of processors that allows peak parallel efficiency, with further speedup achieved by parallelizing the optimization. Results are presented for an optimization problem which involves the solution of a PDE using a finite-element algorithm as part of the objective function calculation. The optimum number of processors for the finite-element calculations is less than p/2; thus, the PDS method is also parallelized. Performance comparisons are given for an nCUBE 2 implementation.

  17. Wakefield Simulation of CLIC PETS Structure Using Parallel 3D Finite Element Time-Domain Solver T3P

    SciTech Connect (OSTI)

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Ng, C.; Schussman, G.; Ko, K.; Syratchev, I.; /CERN

    2009-06-19

    In recent years, SLAC's Advanced Computations Department (ACD) has developed the parallel 3D Finite Element electromagnetic time-domain code T3P. Higher-order Finite Element methods on conformal unstructured meshes and massively parallel processing allow unprecedented simulation accuracy for wakefield computations and simulations of transient effects in realistic accelerator structures. Applications include simulation of wakefield damping in the Compact Linear Collider (CLIC) power extraction and transfer structure (PETS).

  18. Parallel processing for control applications

    SciTech Connect (OSTI)

    Telford, J. W.

    2001-01-01

    Parallel processing has been a topic of discussion in computer science circles for decades. Using more than one computer to control a process has many advantages that compensate for the additional cost. Initially, multiple computers were used to attain higher speeds: a single CPU could not perform all of the operations necessary for real-time operation. As technology progressed and CPUs became faster, the speed issue became less significant, although the additional processing capability continues to make high speed an attractive element of parallel processing. Another reason for multiple processors is reliability. For the purpose of this discussion, reliability and robustness will be the focal point. Most contemporary conceptions of parallel processing involve hundreds of single computers networked to provide 'computing power'. Indeed, our own teraflop machines are built from large numbers of computers configured in a network (and thus limited by the network). There are many approaches to parallel configurations, and this presentation offers something slightly different from the contemporary networked model. In the world of embedded computers, which is a pervasive force in contemporary computer controls, there are many single chip computers available. If one backs away from the PC-based parallel computing model and considers the possibilities of a parallel control device based on multiple single chip computers, a new area of possibilities becomes apparent. This study looks at the use of multiple single chip computers in a parallel configuration with emphasis placed on maximum reliability.

  19. Primordial Li abundance and massive particles

    SciTech Connect (OSTI)

    Ðapo, H.

    2012-10-20

    The problem of the observed lithium abundance coming from Big Bang Nucleosynthesis is as yet unsolved. One of the proposed solutions is to include relic massive particles in the Big Bang Nucleosynthesis. We investigated the effects of such particles on ⁴HeX⁻ + ²H → ⁶Li + X⁻, where X⁻ is the negatively charged massive particle. We demonstrate the dominance of the long-range part of the potential in the cross-section.

  20. Growth histories in bimetric massive gravity

    SciTech Connect (OSTI)

    Berg, Marcus; Buchberger, Igor; Enander, Jonas; Mörtsell, Edvard; Sjörs, Stefan E-mail: igor.buchberger@kau.se E-mail: edvard@fysik.su.se

    2012-12-01

    We perform cosmological perturbation theory in Hassan-Rosen bimetric gravity for general homogeneous and isotropic backgrounds. In the de Sitter approximation, we obtain decoupled sets of massless and massive scalar gravitational fluctuations. Matter perturbations then evolve as in Einstein gravity. We perturb the future de Sitter regime by the ratio of matter to dark energy, producing quasi-de Sitter space. In this more general setting, the massive and massless fluctuations mix. We argue that in the quasi-de Sitter regime, the growth of structure in bimetric gravity differs from that of Einstein gravity.

  1. NON-AQUEOUS DISSOLUTION OF MASSIVE PLUTONIUM

    DOE Patents [OSTI]

    Reavis, J.G.; Leary, J.A.; Walsh, K.A.

    1959-05-12

    A method is presented for obtaining non-aqueous solutions of plutonium from massive forms of the metal. In the present invention, massive plutonium is added to a salt melt consisting of 10 to 40 weight percent sodium chloride and the balance zinc chloride. The plutonium reacts at about 800 °C with the zinc chloride to form a salt bath of plutonium trichloride, sodium chloride, and metallic zinc. The zinc is separated from the salt melt by forcing the molten mixture through a Pyrex filter.

  2. Parallel Computing Summer Research Internship

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Recommended Reading & Resources Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email Recommended Reading & References The Parallel Computing Summer Research Internship covers a broad range of topics that you may not have

  3. Parallel Computing Summer Research Internship

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    National Security Education Center » Information Science and Technology Institute (ISTI) » Summer School Programs » Parallel Computing Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff

  4. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-08-12

    Endpoint-based parallel data processing in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.
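
    A hedged, plain-Python analogue of the division of work the claim describes (this is not the PAMI API): one task's contribution to a collective operation is sliced across several endpoints.

        def divide_among_endpoints(buffer, n_endpoints):
            """Slice one task's send buffer across its endpoints."""
            chunk = (len(buffer) + n_endpoints - 1) // n_endpoints
            return [buffer[i * chunk:(i + 1) * chunk] for i in range(n_endpoints)]

        print(divide_among_endpoints(list(range(10)), 4))
        # [[0, 1, 2], [3, 4, 5], [6, 7, 8], [9]]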

  5. Endpoint-based parallel data processing in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael E; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Endpoint-based parallel data processing in a parallel active messaging interface ('PAMI') of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI, including establishing a data communications geometry, the geometry specifying, for tasks representing processes of execution of the parallel application, a set of endpoints that are used in collective operations of the PAMI including a plurality of endpoints for one of the tasks; receiving in endpoints of the geometry an instruction for a collective operation; and executing the instruction for a collective operation through the endpoints in dependence upon the geometry, including dividing data communications operations among the plurality of endpoints for one of the tasks.

  6. Parallel Programming and Optimization for Intel Architecture

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel Programming and Optimization for Intel Architecture Parallel Programming and Optimization for Intel Architecture August 14, 2015 by Richard Gerber Intel is sponsoring a ...

  7. Computing contingency statistics in parallel.

    SciTech Connect (OSTI)

    Bennett, Janine Camille; Thompson, David; Pebay, Philippe Pierre

    2010-09-01

    Statistical analysis is typically used to reduce the dimensionality of and infer meaning from data. A key challenge of any statistical analysis package aimed at large-scale, distributed data is to address the orthogonal issues of parallel scalability and numerical stability. Many statistical techniques, e.g., descriptive statistics or principal component analysis, are based on moments and co-moments and, using robust online update formulas, can be computed in an embarrassingly parallel manner, amenable to a map-reduce style implementation. In this paper we focus on contingency tables, through which numerous derived statistics such as joint and marginal probability, point-wise mutual information, information entropy, and χ² independence statistics can be directly obtained. However, contingency tables can become large as data size increases, requiring a correspondingly large amount of communication between processors. This potential increase in communication prevents optimal parallel speedup and is the main difference with moment-based statistics, where the amount of inter-processor communication is independent of data size. Here we present the design trade-offs which we made to implement the computation of contingency tables in parallel. We also study the parallel speedup and scalability properties of our open source implementation. In particular, we observe optimal speed-up and scalability when the contingency statistics are used in their appropriate context, namely, when the data input is not quasi-diffuse.
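
    A toy map-reduce sketch of the pattern contrasted above, using only Python's standard library: per-process contingency counts merge by addition, but, unlike a fixed-size set of moments, the merged table grows with the number of distinct categories, which is what drives the communication cost.

        from collections import Counter
        from functools import reduce

        def local_table(pairs):
            return Counter(pairs)        # (x, y) -> count on one process

        def merge(t1, t2):
            t1.update(t2)                # reduction step: add counts
            return t1

        chunks = [[("a", 1), ("b", 2)], [("a", 1), ("a", 2)]]
        table = reduce(merge, (local_table(c) for c in chunks))
        print(table[("a", 1)])           # 2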

  8. Scientists say climate change could cause a 'massive' tree die...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Climate change could cause a 'massive' tree die-off in the U.S. Southwest Scientists say climate change could cause a 'massive' tree die-off in the U.S. Southwest In a troubling ...

  9. Materials Project Releases Massive Trove of Battery and Molecule...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Materials Project Releases Massive Trove of Battery and Molecule Data Materials Project Releases Massive Trove of Battery and Molecule Data June 8, 2016 Julie Chao, JHChao@lbl.gov, ...

  10. Monumental effort: How a dedicated team completed a massive beam...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    An overhead crane lifts the massive box into the NSTX-U test cell (Photo by Mike Viola) ...

  11. Search for massive resonances in dijet systems containing jets...

    Office of Scientific and Technical Information (OSTI)

    massive resonances in dijet systems containing jets tagged as W or Z boson decays in pp collisions at √s = 8 TeV ...

  12. Parallel partitioning strategies for the adaptive solution of conservation laws

    SciTech Connect (OSTI)

    Devine, K.D.; Flaherty, J.E.; Loy, R.M.

    1995-12-31

    We describe and examine the performance of adaptive methods for solving hyperbolic systems of conservation laws on massively parallel computers. The differential system is approximated by a discontinuous Galerkin finite element method with a hierarchical Legendre piecewise polynomial basis for the spatial discretization. Fluxes at element boundaries are computed by solving an approximate Riemann problem; a projection limiter is applied to keep the average solution monotone; time discretization is performed by Runge-Kutta integration; and a p-refinement-based error estimate is used as an enrichment indicator. Adaptive order (p-) and mesh (h-) refinement algorithms are presented and demonstrated. Using an element-based dynamic load balancing algorithm called tiling and adaptive p-refinement, parallel efficiencies of over 60% are achieved on a 1024-processor nCUBE/2 hypercube. We also demonstrate a fast, tree-based parallel partitioning strategy for three-dimensional octree-structured meshes. This method produces partition quality comparable to recursive spectral bisection at a greatly reduced cost.

  13. Cosmology in general massive gravity theories

    SciTech Connect (OSTI)

    Comelli, D.; Nesti, F.; Pilo, L. E-mail: fabrizio.nesti@aquila.infn.it

    2014-05-01

    We study the flat FRW cosmological solutions generated in general massive gravity theories. Such models are obtained by adding to the Einstein General Relativity action peculiar non-derivative potentials, functions of the metric components, that induce the propagation of five gravitational degrees of freedom. This large class of theories includes both the case with a residual Lorentz invariance and the case with rotational invariance only. It turns out that the Lorentz-breaking case is selected as the only possibility. Moreover, perturbations around strict Minkowski or dS space turn out to be strongly coupled. The upshot is that even though dark energy can be simply accounted for by massive gravity modifications, its equation of state w_eff has to deviate from -1. Indeed, there is an explicit relation between the strong coupling scale of perturbations and the deviation of w_eff from -1. Taking into account current limits on w_eff and submillimeter tests of Newton's law as a limit on the possible strong coupling scale, we find that it is still possible to have a weakly coupled theory in a quasi-dS background. Future experimental improvements on short-distance tests of Newton's law may be used to tighten the deviation of w_eff from -1 in a weakly coupled massive gravity theory.

  14. Designing a parallel simula machine

    SciTech Connect (OSTI)

    Papazoglou, M.P.; Georgiadis, P.I.; Maritsas, D.G.

    1983-10-01

    The parallel Simula machine (PSM) architecture is based upon a master/slave topology incorporating a master microprocessor. Interconnection circuitry between the master and slave processor modules uses a time-shared system bus and various programmable interrupt control units. Common and private memory modules reside in the PSM, and direct memory access transfers ease the master processor's workload. 5 references.

  15. Parallel, Distributed Scripting with Python

    SciTech Connect (OSTI)

    Miller, P J

    2002-05-24

    Parallel computers used to be, for the most part, one-of-a-kind systems which were extremely difficult to program portably. With SMP architectures, the advent of the POSIX thread API and OpenMP gave developers ways to portably exploit on-the-box shared memory parallelism. Since these architectures didn't scale cost-effectively, distributed memory clusters were developed. The associated MPI message passing libraries gave these systems a portable paradigm too. Having programmers effectively use this paradigm is a somewhat different question. Distributed data has to be explicitly transported via the messaging system in order for it to be useful. In high level languages, the MPI library gives access to data distribution routines in C, C++, and FORTRAN. But we need more than that. Many reasonable and common tasks are best done in (or as extensions to) scripting languages. Consider sysadmin tools such as password crackers, file purgers, etc. These are simple to write in a scripting language such as Python (an open source, portable, and freely available interpreter), but these tasks beg to be done in parallel. Consider a password checker that checks an encrypted password against a 25,000 word dictionary. This can take around 10 seconds in Python (6 seconds in C). It is trivial to parallelize if you can distribute the information and coordinate the work.
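
    A minimal sketch of the password-checker example using Python's standard multiprocessing module; hashlib stands in for the encrypted-password comparison, and the sample dictionary and four-way split are assumptions.

        import hashlib
        from multiprocessing import Pool

        TARGET = hashlib.sha256(b"hunter2").hexdigest()

        def check_chunk(words):
            # Each worker scans its share of the dictionary independently.
            return [w for w in words
                    if hashlib.sha256(w.encode()).hexdigest() == TARGET]

        if __name__ == "__main__":
            dictionary = ["password", "letmein", "hunter2", "qwerty"] * 6250
            chunks = [dictionary[i::4] for i in range(4)]  # distribute the information
            with Pool(4) as pool:                          # coordinate the work
                hits = sum(pool.map(check_chunk, chunks), [])
            print(hits[0])                                 # hunter2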

  16. Parallelization

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ... that many intermediate calculations are done in place rather than saving values in memory. ... many computer cores to leverage more computing power. ...

  17. (The use of parallel computers and multiple scattering Green function methods in condensed matter physics)

    SciTech Connect (OSTI)

    Stocks, G.M.

    1990-11-30

    The traveler presented invited lectures "Parallelizing the Multiple Scattering KKR and KKR-CPA Codes" at a workshop on "Parallel Codes and Algorithms for Electronic Structure of Solids," held at the Science and Engineering Research Council (SERC) Daresbury Laboratory, and "SCF-KKR-CPA Calculations" at a meeting on KKR and related scattering theory, held at the University of Bristol. The Daresbury meeting reviewed the use of massively parallel computers in condensed matter physics, an area in which ORNL is playing a leading role. The Bristol meeting highlighted the great progress that has been made in recent years in the first-principles theory and calculation of the properties of materials based on multiple-scattering Green function methods. This is an area in which, historically, ORNL has had a strong presence. The traveler collaborated with scientists at SERC Daresbury Laboratory on the use of the massively parallel Intel i860 supercomputer in the calculation of the electronic and ground state properties of alloys and high-Tc superconductors. At the Universities of Warwick and Bristol, the traveler collaborated with Dr. J. B. Staunton and Prof. B. L. Gyorffy on spin, charge, and pairing fluctuations in the Hubbard model.

  18. Parallel multiplex laser feedback interferometry

    SciTech Connect (OSTI)

    Zhang, Song; Tan, Yidong; Zhang, Shulian

    2013-12-15

    We present a parallel multiplex laser feedback interferometer based on spatial multiplexing, which avoids the signal crosstalk of earlier feedback interferometers. The interferometer outputs two close parallel laser beams whose frequencies are simultaneously shifted by 2Ω by two acousto-optic modulators. A static reference mirror is inserted into one of the optical paths as the reference optical path. The other beam impinges on the target as the measurement optical path. Phase variations of the two feedback laser beams are measured simultaneously through heterodyne demodulation with two different detectors. Their subtraction accurately reflects the target displacement. Under typical room conditions, experimental results show a resolution of 1.6 nm and an accuracy of 7.8 nm within a range of 100 μm.
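
    A hedged numeric aside: in heterodyne laser interferometry the measured phase difference maps to displacement through the standard double-pass relation d = λΔφ/(4π). The wavelength and phase values below are illustrative, not taken from the paper.

        import math

        wavelength = 632.8e-9          # assumed He-Ne wavelength, metres
        dphi = 0.032                   # measured phase difference, radians
        d = wavelength * dphi / (4 * math.pi)
        print(f"{d * 1e9:.2f} nm")     # ~1.61 nm, of order the reported resolution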

  19. Parallel Computing Summer Research Internship

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Mentors Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email 2016: Mentors Bob Robey Bob Robey XCP-2: EULERIAN CODES Bob Robey is a Research Scientist in the Eulerian Applications group at Los Alamos National Laboratory. He is the

  20. Parallel Computing Summer Research Internship

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Students Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email 2016: Students Peter Ahrens Peter Ahrens Electrical Engineering & Computer Science BS UC Berkeley Jenniffer Estrada Jenniffer Estrada Computer Science MS Youngstown

  1. Parallel Computing Summer Research Internship

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Guide to Los Alamos Parallel Computing Summer Research Internship Creates next-generation leaders in HPC research and applications development Contacts Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nickole Aguilar Garcia (505) 665-3048 Email Guide to Los Alamos During your 10-week internship, we hope you have the opportunity to explore and enjoy Los Alamos and the surrounding area. Here are some

  2. Parallel Power Grid Simulation Toolkit

    Energy Science and Technology Software Center (OSTI)

    2015-09-14

    ParGrid is a 'wrapper' that integrates a coupled power grid simulation toolkit, consisting of a library to manage the synchronization and communication of independent simulations. The included library code in ParGrid, named FSKIT, is intended to support coupling multiple continuous and discrete-event parallel simulations. The code is designed using modern object-oriented C++ methods, utilizing C++11 and current Boost libraries to ensure compatibility with multiple operating systems and environments.

  3. A new quasidilaton theory of massive gravity

    SciTech Connect (OSTI)

    Mukohyama, Shinji

    2014-12-01

    We present a new quasidilaton theory of Poincare invariant massive gravity, based on the recently proposed framework of matter coupling that makes it possible for the kinetic energy of the quasidilaton scalar to couple to both physical and fiducial metrics simultaneously. We find a scaling-type exact solution that expresses a self-accelerating de Sitter universe, and then analyze linear perturbations around it. It is shown that in a range of parameters all physical degrees of freedom have non-vanishing quadratic kinetic terms and are stable in the subhorizon limit, while the effective Newton's constant for the background is kept positive.

  4. Parallelization and checkpointing of GPU applications through program transformation

    SciTech Connect (OSTI)

    Solano-Quinde, Lizandro Damián

    2012-11-15

    GPUs have emerged as a powerful tool for accelerating general-purpose applications. The availability of programming languages that make writing general-purpose applications for GPUs tractable has consolidated GPUs as an alternative for accelerating general-purpose applications. Among the areas that have benefited from GPU acceleration are: signal and image processing, computational fluid dynamics, quantum chemistry, and, in general, the High Performance Computing (HPC) industry. In order to continue to exploit higher levels of parallelism with GPUs, multi-GPU systems are gaining popularity. In this context, single-GPU applications are parallelized for running in multi-GPU systems. Furthermore, multi-GPU systems help to solve the GPU memory limitation for applications with a large application memory footprint. Parallelizing single-GPU applications has been approached with libraries that distribute the workload at runtime; however, they impose execution overhead and are not portable. On the other hand, on traditional CPU systems, parallelization has been approached through application transformation at pre-compile time, which enhances the application to distribute the workload at application level and does not have the issues of library-based approaches. Hence, a parallelization scheme for GPU systems based on application transformation is needed. Like any computing engine of today, reliability is also a concern in GPUs. GPUs are vulnerable to transient and permanent failures. Current checkpoint/restart techniques are not suitable for systems with GPUs. Checkpointing for GPU systems presents new and interesting challenges, primarily due to the natural differences imposed by the hardware design, the memory subsystem architecture, the massive number of threads, and the limited amount of synchronization among threads. Therefore, a checkpoint/restart technique suitable for GPU systems is needed. The goal of this work is to exploit higher levels of parallelism

  5. Parallel continuation-based global optimization for molecular conformation and protein folding

    SciTech Connect (OSTI)

    Coleman, T.F.; Wu, Z. [Cornell Univ., Ithaca, NY (United States)]

    1994-12-31

    This paper presents the authors' recent work on developing parallel algorithms and software for solving the global minimization problem for molecular conformation, especially protein folding. Global minimization problems are difficult to solve when the objective functions have many local minimizers, such as the energy functions for protein folding. In their approach, to avoid directly minimizing a 'difficult' function, a special integral transformation is introduced to transform the function into a class of gradually deformed, but 'smoother' or 'easier' functions. An optimization procedure is then applied to the new functions successively, to trace their solutions back to the original function. The method can be applied to a large class of nonlinear partially separable functions, including energy functions for molecular conformation and protein folding. Mathematical theory for the method, as a special continuation approach to global optimization, is established. Algorithms with different solution-tracing strategies are developed. Different levels of parallelism are exploited for the implementation of the algorithms on massively parallel architectures.
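
    A one-dimensional sketch of the continuation idea under stated assumptions: Monte Carlo Gaussian smoothing stands in for the paper's integral transformation, and scipy's Nelder-Mead for its optimization procedure.

        import numpy as np
        from scipy.optimize import minimize

        def f(x):
            return 0.1 * x**2 + np.sin(3 * x)      # many local minima

        z = np.random.default_rng(0).standard_normal(4000)

        def smoothed(x, sigma):
            # Gaussian-smoothed surrogate, estimated by Monte Carlo.
            return float(np.mean(f(x + sigma * z)))

        x = 4.0                                    # deliberately poor start
        for sigma in (2.0, 1.0, 0.5):              # trace solutions back
            x = float(minimize(lambda v, s=sigma: smoothed(v[0], s),
                               [x], method="Nelder-Mead").x[0])
        x = float(minimize(lambda v: f(v[0]), [x], method="Nelder-Mead").x[0])
        print(round(x, 2))                         # near the global minimum, about -0.52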

  6. METHYL CYANIDE OBSERVATIONS TOWARD MASSIVE PROTOSTARS

    SciTech Connect (OSTI)

    Rosero, V.; Hofner, P.; Kurtz, S.; Bieging, J.; Araya, E. D.

    2013-07-01

    We report the results of a survey in the CH₃CN J = 12 → 11 transition toward a sample of massive protostellar candidates. The observations were carried out with the 10 m Submillimeter Telescope on Mount Graham, AZ. We detected this molecular line in 9 out of 21 observed sources. In six cases this is the first detection of this transition. We also obtained fully beam-sampled cross-scans for five sources which show that the lower K-components can be extended on the arcminute angular scale. The higher K-components, however, are always found to be compact with respect to our 36″ beam. A Boltzmann population diagram analysis of the central spectra indicates CH₃CN column densities of about 10¹⁴ cm⁻² and rotational temperatures above 50 K, which confirms these sources as hot molecular cores. Independent fits to line velocity and width for the individual K-components resulted in the detection of an increasing blueshift with increasing line excitation for four sources. Comparison with mid-infrared (mid-IR) images from the Spitzer GLIMPSE/IRAC archive for six sources shows that the CH₃CN emission is generally coincident with a bright mid-IR source. Our data clearly show that the CH₃CN J = 12 → 11 transition is a good probe of the hot molecular gas near massive protostars, and provide the basis for future interferometric studies.

  7. Knowledge Discovery from Massive Healthcare Claims Data

    SciTech Connect (OSTI)

    Chandola, Varun; Sukumar, Sreenivas R; Schryver, Jack C

    2013-01-01

    The role of big data in addressing the needs of the present healthcare system in the US and the rest of the world has been echoed by government, private, and academic sectors. There has been a growing emphasis on exploring the promise of big data analytics in tapping the potential of the massive healthcare data emanating from private and government health insurance providers. While the domain implications of such collaboration are well known, this type of data has been explored to a limited extent in the data mining community. The objective of this paper is twofold: first, we introduce the emerging domain of big healthcare claims data to the KDD community, and second, we describe the successes and challenges that we encountered in analyzing this data using state-of-the-art analytics for massive data. Specifically, we translate the problem of analyzing healthcare data into some of the most well-known analysis problems in the data mining community (social network analysis, text mining, temporal analysis, and higher-order feature construction), and describe how advances within each of these areas can be leveraged to understand the domain of healthcare. Each case study illustrates a unique intersection of data mining and healthcare with a common objective of improving the cost-care ratio by mining for opportunities to improve healthcare operations and to reduce what appears to fall under fraud, waste, and abuse.

  8. Dipolar dark matter with massive bigravity

    SciTech Connect (OSTI)

    Blanchet, Luc; Heisenberg, Lavinia

    2015-12-14

    Massive gravity theories have been developed as viable IR modifications of gravity motivated by dark energy and the problem of the cosmological constant. On the other hand, modified gravity and modified dark matter theories were developed with the aim of solving the problems of standard cold dark matter at galactic scales. Here we propose to adapt the framework of ghost-free massive bigravity theories to reformulate the problem of dark matter at galactic scales. We investigate a promising alternative to dark matter called dipolar dark matter (DDM) in which two different species of dark matter are separately coupled to the two metrics of bigravity and are linked together by an internal vector field. We show that this model successfully reproduces the phenomenology of dark matter at galactic scales (i.e. MOND) as a result of a mechanism of gravitational polarisation. The model is safe in the gravitational sector, but because of the particular couplings of the matter fields and vector field to the metrics, a ghost in the decoupling limit is present in the dark matter sector. However, it might be possible to push the mass of the ghost beyond the strong coupling scale by an appropriate choice of the parameters of the model. Crucial questions to address in future work are the exact mass of the ghost, and the cosmological implications of the model.

  9. Performance analysis of high quality parallel preconditioners applied to 3D finite element structural analysis

    SciTech Connect (OSTI)

    Kolotilina, L.; Nikishin, A.; Yeremin, A.

    1994-12-31

    The solution of large systems of linear equations is a crucial bottleneck when performing 3D finite element analysis of structures. Also, in many cases the reliability and robustness of iterative solution strategies, and their efficiency when exploiting hardware resources, fully determine the scope of industrial applications which can be solved on a particular computer platform. This is especially true for modern vector/parallel supercomputers with large vector length and for modern massively parallel supercomputers. Preconditioned iterative methods have been successfully applied to industrial class finite element analysis of structures. The construction and application of high quality preconditioners constitutes a high percentage of the total solution time. Parallel implementation of high quality preconditioners on such architectures is a formidable challenge. Two common types of existing preconditioners are the implicit preconditioners and the explicit preconditioners. The implicit preconditioners (e.g. incomplete factorizations of several types) are generally high quality but require solution of lower and upper triangular systems of equations per iteration which are difficult to parallelize without deteriorating the convergence rate. The explicit type of preconditionings (e.g. polynomial preconditioners or Jacobi-like preconditioners) require sparse matrix-vector multiplications and can be parallelized but their preconditioning qualities are less than desirable. The authors present results of numerical experiments with Factorized Sparse Approximate Inverses (FSAI) for symmetric positive definite linear systems. These are high quality preconditioners that possess a large resource of parallelism by construction without increasing the serial complexity.

  10. Scalable parallel distance field construction for large-scale applications

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Yu, Hongfeng; Xie, Jinrong; Ma, Kwan -Liu; Kolla, Hemanth; Chen, Jacqueline H.

    2015-10-01

    Computing distance fields is fundamental to many scientific and engineering applications. Distance fields can be used to direct analysis and reduce data. In this paper, we present a highly scalable method for computing 3D distance fields on massively parallel distributed-memory machines. A new distributed spatial data structure, named the parallel distance tree, is introduced to manage the level sets of data and facilitate surface tracking over time, resulting in significantly reduced computation and communication costs for calculating the distance to the surface of interest from any spatial location. Our method supports several data types and distance metrics from real-world applications. We demonstrate its efficiency and scalability on state-of-the-art supercomputers using both large-scale volume datasets and surface models. We also demonstrate in-situ distance field computation on dynamic turbulent flame surfaces for a petascale combustion simulation. In conclusion, our work greatly extends the usability of distance fields for demanding applications.
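
    A single-node analogue, assuming scipy is available: distance_transform_edt computes a distance field to a marked surface. The paper's contribution is doing this scalably across distributed memory, which this sketch does not attempt.

        import numpy as np
        from scipy.ndimage import distance_transform_edt

        vol = np.ones((64, 64, 64), dtype=bool)
        vol[30:34, 30:34, 30:34] = False      # voxels on the surface of interest
        dist = distance_transform_edt(vol)    # distance to the nearest surface voxel
        print(float(dist.max()))              # farthest point from the surface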

  11. Scalable Parallel Algebraic Multigrid Solvers

    SciTech Connect (OSTI)

    Bank, R; Lu, S; Tong, C; Vassilevski, P

    2005-03-23

    The authors propose a parallel algebraic multilevel algorithm (AMG), which has the novel feature that the subproblem residing in each processor is defined over the entire partition domain, although the vast majority of unknowns for each subproblem are associated with the partition owned by the corresponding processor. This feature ensures that a global coarse description of the problem is contained within each of the subproblems. The advantages of this approach are that interprocessor communication is minimized in the solution process while an optimal order of convergence rate is preserved; and the speed of local subproblem solvers can be maximized using the best existing sequential algebraic solvers.

  12. A Novel Application of Parallel Betweenness Centrality to Power Grid Contingency Analysis

    SciTech Connect (OSTI)

    Jin, Shuangshuang; Huang, Zhenyu; Chen, Yousu; Chavarría-Miranda, Daniel; Feo, John T.; Wong, Pak C.

    2010-04-19

    In Energy Management Systems, contingency analysis is commonly performed to identify and mitigate potentially harmful power grid component failures. The exponentially increasing combinatorial number of failure modes imposes a significant computational burden for massive contingency analysis. It is critical to select a limited set of high-impact contingency cases within the constraints of computing power and time to make real-time power system vulnerability assessment possible. In this paper, we present a novel application of parallel betweenness centrality to power grid contingency selection. We cross-validate the proposed method using the model and data of the western US power grid, and implement it on a Cray XMT system, a massively multithreaded architecture, leveraging its advantages for parallel execution of irregular algorithms such as graph analysis. We achieve a speedup of 55 times (on 64 processors) compared against the single-processor version of the same code running on the Cray XMT. We also compare an OpenMP-based version of the same code running on an HP Superdome shared-memory machine. The performance of the Cray XMT code shows better scalability and resource utilization, and shorter execution time for large-scale power grids. This proposed approach has been evaluated in PNNL's Electricity Infrastructure Operations Center (EIOC). It is expected to provide a quick and efficient solution to massive contingency selection problems, helping power grid operators identify and mitigate potential widespread cascading power grid failures in real time.
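
    A hedged illustration with networkx rather than the paper's Cray XMT code: edge betweenness centrality on a toy grid graph ranks candidate lines for contingency selection.

        import networkx as nx

        g = nx.grid_2d_graph(5, 5)                  # stand-in for a power grid topology
        scores = nx.edge_betweenness_centrality(g)
        worst = sorted(scores, key=scores.get, reverse=True)[:3]
        print(worst)                                # most central lines: top contingency candidates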

  13. Redshift-space distortions in massive neutrino and evolving dark...

    Office of Scientific and Technical Information (OSTI)

    Redshift-space distortions in massive neutrino and evolving dark energy cosmologies ... Title: Redshift-space ...

  14. Search for massive WH resonances decaying into the ℓ...

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Khachatryan, Vardan

    2016-04-28

    In this study, a search for a massive resonance W′ decaying into a W and a Higgs boson in the ℓ...

  15. On the massive transformation from ferrite to austenite in laser...

    Office of Scientific and Technical Information (OSTI)

    austenite in laser welded Mo-bearing stainless steels. Citation Details In-Document Search Title: On the massive transformation from ferrite to austenite in laser welded ...

  16. Xyce parallel electronic simulator design.

    SciTech Connect (OSTI)

    Thornquist, Heidi K.; Rankin, Eric Lamont; Mei, Ting; Schiek, Richard Louis; Keiter, Eric Richard; Russo, Thomas V.

    2010-09-01

    This document is the Xyce Circuit Simulator developer guide. Xyce has been designed from the 'ground up' to be a SPICE-compatible, distributed memory parallel circuit simulator. While it is in many respects a research code, Xyce is intended to be a production simulator, so having software quality engineering (SQE) procedures in place to ensure a high level of code quality and robustness is essential. Version control, issue tracking, customer support, C++ style guidelines, and the Xyce release process are all described. The Xyce Parallel Electronic Simulator has been under development at Sandia since 1999. Historically, Xyce has mostly been funded by ASC, so the original focus of Xyce development has primarily been related to circuits for nuclear weapons. However, this has not been the only focus, and it is expected that the project will diversify. Like many ASC projects, Xyce is a group development effort involving a number of researchers, engineers, scientists, mathematicians, and computer scientists. In addition to diversity of background, it is to be expected on long-term projects that there will be a certain amount of staff turnover as people move on to different projects. As a result, it is very important that the project maintain high software quality standards. The point of this document is to formally document a number of the software quality practices followed by the Xyce team in one place. Also, it is hoped that this document will be a good source of information for new developers.

  17. Information hiding in parallel programs

    SciTech Connect (OSTI)

    Foster, I.

    1992-01-30

    A fundamental principle in program design is to isolate difficult or changeable design decisions. Application of this principle to parallel programs requires identification of decisions that are difficult or subject to change, and the development of techniques for hiding these decisions. We experiment with three complex applications, and identify mapping, communication, and scheduling as areas in which decisions are particularly problematic. We develop computational abstractions that hide such decisions, and show that these abstractions can be used to develop elegant solutions to programming problems. In particular, they allow us to encode common structures, such as transforms, reductions, and meshes, as software cells and templates that can be reused in different applications. An important characteristic of these structures is that they do not incorporate mapping, communication, or scheduling decisions: these aspects of the design are specified separately, when composing existing structures to form applications. This separation of concerns allows the same cells and templates to be reused in different contexts.

  18. Hybrid Optimization Parallel Search PACKage

    Energy Science and Technology Software Center (OSTI)

    2009-11-10

    HOPSPACK is open source software for solving optimization problems without derivatives. Application problems may have a fully nonlinear objective function, bound constraints, and linear and nonlinear constraints. Problem variables may be continuous, integer-valued, or a mixture of both. The software provides a framework that supports any derivative-free type of solver algorithm. Through the framework, solvers request parallel function evaluation, which may use MPI (multiple machines) or multithreading (multiple processors/cores on one machine). The framework provides a Cache and Pending Cache of saved evaluations that reduces execution time and facilitates restarts. Solvers can dynamically create other algorithms to solve subproblems, a useful technique for handling multiple start points and integer-valued variables. HOPSPACK ships with the Generating Set Search (GSS) algorithm, developed at Sandia as part of the APPSPACK open source software project.
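
    A minimal generating-set-search step in the spirit of the GSS solver named above (a sketch, not the HOPSPACK implementation); the poll over coordinate directions is exactly the set of evaluations such a framework can farm out in parallel.

        import numpy as np

        def gss_minimize(f, x, step=1.0, tol=1e-6):
            n = len(x)
            dirs = np.vstack([np.eye(n), -np.eye(n)])  # generating set
            while step > tol:
                trials = [x + step * d for d in dirs]  # independently evaluable poll
                vals = [f(t) for t in trials]
                best = int(np.argmin(vals))
                if vals[best] < f(x):
                    x = trials[best]                   # successful poll: move
                else:
                    step *= 0.5                        # failed poll: contract
            return x

        print(gss_minimize(lambda v: (v[0] - 1)**2 + (v[1] + 2)**2,
                           np.array([0.0, 0.0])).round(3))   # ~[ 1. -2.]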

  19. Device for balancing parallel strings

    DOE Patents [OSTI]

    Mashikian, Matthew S.

    1985-01-01

    A battery plant is described which features magnetic circuit means in association with each of the battery strings in the battery plant for balancing the electrical current flow through the battery strings by equalizing the voltage across each of the battery strings. Each of the magnetic circuit means generally comprises means for sensing the electrical current flow through one of the battery strings, and a saturable reactor having a main winding connected electrically in series with the battery string, a bias winding connected to a source of alternating current and a control winding connected to a variable source of direct current controlled by the sensing means. Each of the battery strings is formed by a plurality of batteries connected electrically in series, and these battery strings are connected electrically in parallel across common bus conductors.

  20. Parallel computing in enterprise modeling.

    SciTech Connect (OSTI)

    Goldsby, Michael E.; Armstrong, Robert C.; Shneider, Max S.; Vanderveen, Keith; Ray, Jaideep; Heath, Zach; Allan, Benjamin A.

    2008-08-01

    This report presents the results of our efforts to apply high-performance computing to entity-based simulations with a multi-use plugin for parallel computing. We use the term 'entity-based simulation' to describe a class of simulation which includes both discrete event simulation and agent based simulation. What simulations of this class share, and what differs from more traditional models, is that the result sought is emergent from a large number of contributing entities. Logistics, economic, and social simulations are members of this class, where things or people are organized or self-organize to produce a solution. Entity-based problems never have an a priori ergodic principle that will greatly simplify calculations. Because the results of entity-based simulations can only be realized at scale, scalable computing is de rigueur for large problems. Having said that, the absence of a spatial organizing principle makes the decomposition of the problem onto processors problematic. In addition, practitioners in this domain commonly use the Java programming language, which presents its own problems in a high-performance setting. The plugin we have developed, called the Parallel Particle Data Model, overcomes both of these obstacles and is now being used by two Sandia frameworks: the Decision Analysis Center, and the Seldon social simulation facility. While the ability to engage U.S.-sized problems is now available to the Decision Analysis Center, this plugin is central to the success of Seldon. Because Seldon relies on computationally intensive cognitive sub-models, this work is necessary to achieve the scale necessary for realistic results. With the recent upheavals in the financial markets, and the inscrutability of terrorist activity, this simulation domain will likely need a capability with ever greater fidelity. High-performance computing will play an important part in enabling that greater fidelity.

  1. WCH Removes Massive Test Reactor | Department of Energy

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    WCH Removes Massive Test Reactor WCH Removes Massive Test Reactor Addthis Description Hanford's River Corridor contractor, Washington Closure Hanford, has met a significant cleanup challenge on the U.S. Department of Energy's (DOE) Hanford Site by removing a 1,082-ton nuclear test reactor from the 300 Area.

  2. The design of a parallel adaptive paving all-quadrilateral meshing algorithm

    SciTech Connect (OSTI)

    Tautges, T.J.; Lober, R.R.; Vaughan, C.

    1995-08-01

    Adaptive finite element analysis demands a great deal of computational resources, and as such is most appropriately solved in a massively parallel computer environment. This analysis will require other parallel algorithms before it can fully utilize MP computers, one of which is parallel adaptive meshing. A version of the paving algorithm is being designed which operates in parallel but which also retains the robustness and other desirable features present in the serial algorithm. Adaptive paving in a production mode is demonstrated using a Babuska-Rheinboldt error estimator on a classic linearly elastic plate problem. The design of the parallel paving algorithm is described, and is based on the decomposition of a surface into "virtual" surfaces. The topology of the virtual surface boundaries is defined using mesh entities (mesh nodes and edges) so as to allow movement of these boundaries with smoothing and other operations. This arrangement allows the use of the standard paving algorithm on subdomain interiors, after the negotiation of the boundary mesh.

  3. MACHO (MAssive Compact Halo Objects) Data

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    The primary aim of the MACHO Project is to test the hypothesis that a significant fraction of the dark matter in the halo of the Milky Way is made up of objects like brown dwarfs or planets: these objects have come to be known as MACHOs, for MAssive Compact Halo Objects. The signature of these objects is the occasional amplification of the light from extragalactic stars by the gravitational lens effect. The amplification can be large, but events are extremely rare: it is necessary to monitor photometrically several million stars for a period of years in order to obtain a useful detection rate. For this purpose MACHO has a two channel system that employs eight CCDs, mounted on the 50 inch telescope at Mt. Stromlo. The high data rate (several GBytes per night) is accommodated by custom electronics and on-line data reduction. The Project has taken more than 27,000 images with this system since June 1992. Analysis of a subset of these data has yielded databases containing light curves in two colors for 8 million stars in the LMC and 10 million in the bulge of the Milky Way. A search for microlensing has turned up four candidates toward the Large Magellanic Cloud and 45 toward the Galactic Bulge. The web page for data provides links to MACHO Project data portals and various specialized interfaces for viewing or searching the data. (Specialized Interface)
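
    A worked formula behind such searches: the standard point-lens magnification A(u) = (u² + 2)/(u√(u² + 4)), with u the source-lens separation in Einstein radii. This is textbook microlensing, not MACHO pipeline code.

        import math

        def magnification(u):
            return (u * u + 2) / (u * math.sqrt(u * u + 4))

        print(round(magnification(1.0), 3))   # 1.342, the classic u = 1 detection threshold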

  4. Dark aspects of massive spinor electrodynamics

    SciTech Connect (OSTI)

    Kim, Edward J.; Kouwn, Seyen; Oh, Phillial; Park, Chan-Gyung E-mail: seyen@ewha.ac.kr E-mail: parkc@jbnu.ac.kr

    2014-07-01

    We investigate the cosmology of massive spinor electrodynamics when torsion is non-vanishing. A non-minimal interaction is introduced between the torsion and the vector field, and the coupling constant between them plays an important role in the subsequent cosmology. It is shown that the mass of the vector field and the torsion conspire to generate dark energy and pressureless dark matter, and for generic values of the coupling constant the theory effectively provides an interacting model between them with an additional energy density scaling as 1/a⁶. The evolution equations mimic ΛCDM behavior up to the 1/a³ term, and the additional term represents a deviation from ΛCDM. We show that the deviation is compatible with the observational data if it is very small. We find that the non-minimal interaction is responsible for generating an effective cosmological constant which is directly proportional to the mass squared of the vector field, and that the mass of the photon, within its current observational limit, could be the source of the dark energy.

  5. Parallel Programming and Optimization for Intel Architecture

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel Programming and Optimization for Intel Architecture Parallel Programming and Optimization for Intel Architecture August 14, 2015 by Richard Gerber Intel is sponsoring a series of webinars entitled "Parallel Programming and Optimization for Intel Architecture." Here's the schedule for August (Registration link is: https://attendee.gotowebinar.com/register/6325131222429932289) Mon, August 17 - "Hello world from Intel Xeon Phi coprocessors". Overview of architecture,

  6. CASL - The Michigan Parallel Characteristics Transport Code

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    The Michigan Parallel Characteristics Transport Code Verification of MPACT: The Michigan Parallel Characteristics Transport Code Benjamin Collins, Brendan Kochunas, Daniel Jabbay, Thomas Downar, William Martin Department of Nuclear Engineering and Radiological Sciences University of Michigan Andrew Godfrey Oak Ridge National Laboratory MPACT (Michigan PArallel Characteristics Transport Code) is a new reactor analysis tool being developed at the University of Michigan as an advanced pin-resolved

  7. Parallel auto-correlative statistics with VTK.

    SciTech Connect (OSTI)

    Pebay, Philippe Pierre; Bennett, Janine Camille

    2013-08-01

    This report summarizes existing statistical engines in VTK and presents both the serial and parallel auto-correlative statistics engines. It is a sequel to [PT08, BPRT09b, PT09, BPT09, PT10], which studied the parallel descriptive, correlative, multi-correlative, principal component analysis, contingency, k-means, and order statistics engines. The ease of use of the new parallel auto-correlative statistics engine is illustrated by means of C++ code snippets, and algorithm verification is provided. This report justifies the design of the statistics engines with parallel scalability in mind, and provides scalability and speed-up analysis results for the auto-correlative statistics engine.
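
    As an illustration of what an auto-correlative statistics engine computes, here is a generic sketch in Python (not the VTK API; the function name is ours):

        import numpy as np

        def autocorrelation(x, max_lag):
            """Sample autocorrelation r(k) of a 1D series for lags 0..max_lag."""
            x = np.asarray(x, dtype=float)
            x = x - x.mean()
            var = np.dot(x, x) / len(x)
            return np.array([np.dot(x[:len(x) - k], x[k:]) / (len(x) * var)
                             for k in range(max_lag + 1)])

        t = np.linspace(0.0, 20.0, 500)
        series = np.sin(t) + 0.3 * np.random.default_rng(0).normal(size=t.size)
        print(autocorrelation(series, 5))   # r(0) is 1 by construction

    A parallel engine distributes the series across processes, accumulates per-process partial sums, and reduces them, which is what makes the scalability analysis in the report meaningful.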

  8. Large N phase transitions in massive N = 2 gauge theories

    SciTech Connect (OSTI)

    Russo, J. G.

    2014-07-23

    Using exact results obtained from localization on S{sup 4}, we explore the large N limit of N = 2 super Yang-Mills theories with massive matter multiplets. In this talk we discuss two cases: N = 2* theory, describing a massive hypermultiplet in the adjoint representation, and super QCD with massive quarks. When the radius of the four-sphere is sent to infinity these theories are described by solvable matrix models, which exhibit a number of interesting phenomena including quantum phase transitions at finite 't Hooft coupling.

  9. SEGUE 2: THE LEAST MASSIVE GALAXY

    SciTech Connect (OSTI)

    Kirby, Evan N.; Boylan-Kolchin, Michael; Bullock, James S.; Kaplinghat, Manoj; Cohen, Judith G.; Geha, Marla

    2013-06-10

    Segue 2, discovered by Belokurov et al., is a galaxy with a luminosity of only 900 L{sub Sun}. We present Keck/DEIMOS spectroscopy of 25 members of Segue 2 - a threefold increase in spectroscopic sample size. The velocity dispersion is too small to be measured with our data. The upper limit with 90% (95%) confidence is {sigma}{sub v} < 2.2 (2.6) km s{sup -1}, the most stringent limit for any galaxy. The corresponding limit on the mass within the three-dimensional half-light radius (46 pc) is M{sub 1/2} < 1.5 (2.1) × 10{sup 5} M{sub Sun}. Segue 2 is the least massive galaxy known. We identify Segue 2 as a galaxy rather than a star cluster based on the wide dispersion in [Fe/H] (from -2.85 to -1.33) among the member stars. The stars' [{alpha}/Fe] ratios decline with increasing [Fe/H], indicating that Segue 2 retained Type Ia supernova ejecta despite its presently small mass and that star formation lasted for at least 100 Myr. The mean metallicity, ⟨[Fe/H]⟩ = -2.22 {+-} 0.13 (about the same as the Ursa Minor galaxy, 330 times more luminous than Segue 2), is higher than expected from the luminosity-metallicity relation defined by more luminous dwarf galaxy satellites of the Milky Way. Segue 2 may be the barest remnant of a tidally stripped, Ursa Minor-sized galaxy. If so, it is the best example of an ultra-faint dwarf galaxy that came to be ultra-faint through tidal stripping. Alternatively, Segue 2 could have been born in a very low mass dark matter subhalo (v{sub max} < 10 km s{sup -1}), below the atomic hydrogen cooling limit.
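
    For orientation, the quoted mass limit is consistent with the standard dispersion-based estimator (a textbook relation, not taken from the record):

        M_{1/2} \simeq \frac{3\,\sigma_v^2\,r_{1/2}}{G}

    which for {sigma}{sub v} < 2.2 km s{sup -1} and r{sub 1/2} = 46 pc gives M{sub 1/2} ≲ 1.5 × 10{sup 5} M{sub Sun}, matching the number above.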

  10. Parallel 3D Finite Element Particle-in-Cell Simulations with Pic3P

    SciTech Connect (OSTI)

    Candel, A.; Kabel, A.; Lee, L.; Li, Z.; Ng, C.; Schussman, G.; Ko, K.; Ben-Zvi, I.; Kewisch, J.; /Brookhaven

    2009-06-19

    SLAC's Advanced Computations Department (ACD) has developed the parallel 3D Finite Element electromagnetic Particle-In-Cell code Pic3P. Designed for simulations of beam-cavity interactions dominated by space charge effects, Pic3P solves the complete set of Maxwell-Lorentz equations self-consistently and includes space-charge, retardation and boundary effects from first principles. Higher-order Finite Element methods with adaptive refinement on conformal unstructured meshes lead to highly efficient use of computational resources. Massively parallel processing with dynamic load balancing enables large-scale modeling of photoinjectors with unprecedented accuracy, aiding the design and operation of next-generation accelerator facilities. Applications include the LCLS RF gun and the BNL polarized SRF gun.

  11. Broadcasting a message in a parallel computer

    DOE Patents [OSTI]

    Berg, Jeremy E.; Faraj, Ahmad A.

    2011-08-02

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer. The parallel computer includes a plurality of compute nodes connected together using a data communications network. The data communications network is optimized for point-to-point data communications and is characterized by at least two dimensions. The compute nodes are organized into at least one operational group of compute nodes for collective parallel operations of the parallel computer. One compute node of the operational group is assigned to be a logical root. Broadcasting a message in a parallel computer includes: establishing a Hamiltonian path along all of the compute nodes in at least one plane of the data communications network and in the operational group; and broadcasting, by the logical root to the remaining compute nodes, the logical root's message along the established Hamiltonian path.
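
    A toy sketch of the idea: a boustrophedon ("snake") ordering is one Hamiltonian path through a 2D mesh, and the broadcast relays the message hop by hop along it. This is illustrative Python only, not the patented implementation:

        def snake_path(rows, cols):
            """One Hamiltonian path through a rows x cols mesh: row by row,
            reversing direction on alternate rows so consecutive nodes are
            mesh neighbors."""
            path = []
            for r in range(rows):
                cs = range(cols) if r % 2 == 0 else range(cols - 1, -1, -1)
                path.extend((r, c) for c in cs)
            return path

        def broadcast_along_path(message, rows, cols):
            """Simulate the relay from the logical root (head of the path)."""
            received = {}
            for hop, node in enumerate(snake_path(rows, cols)):
                received[node] = (message, hop)   # hop = distance from the root
            return received

        path = snake_path(3, 4)
        assert all(abs(a[0] - b[0]) + abs(a[1] - b[1]) == 1
                   for a, b in zip(path, path[1:]))   # single-hop links only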

  12. Secretary Chu Announces New Institute to Help Scientists Improve Massive Data Set Research on DOE Supercomputers

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    March 29, 2012 - Washington, D.C. - Energy Secretary Steven Chu today announced $5 million to establish the Scalable Data Management, Analysis and Visualization (SDAV) Institute as part of the Obama Administration's

  13. Massive Hanford Test Reactor Removed - Plutonium Recycle Test Reactor removed from Hanford's 300 Area

    Office of Environmental Management (EM)

    January 22, 2014. Media Contacts: Cameron Hardy, DOE, 509-376-5365, Cameron.Hardy@re.doe.gov; Mark McKenna, Washington Closure, 509-372-9032, media@wch-rcc.com. RICHLAND, WA - Hanford's River Corridor contractor, Washington

  14. Protecting Recovery Act Cleanup Site During Massive Wildfire

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Effective safety procedures in place at Los Alamos National Laboratory would have provided protections in the event that the raging Las Conchas fire had spread to the site of an American Recovery and Reinvestment Act project. "Our procedures not only placed the waste excavation site, Materials Disposal Area B (MDA-B), into a safe posture so it was well

  15. Materials Project Releases Massive Trove of Battery and Molecule Data

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    June 8, 2016 - Julie Chao, JHChao@lbl.gov, (510) 486-6491. Screen shot from the Materials Project website. The Materials Project, a Google-like database of material properties aimed at accelerating innovation, has released an enormous trove of data to the public, giving scientists working on fuel cells, photovoltaics, thermoelectrics and a

  16. New approach for the solution of optimal control problems on parallel machines. Doctoral thesis

    SciTech Connect (OSTI)

    Stech, D.J.

    1990-01-01

    This thesis develops a highly parallel solution method for nonlinear optimal control problems. Balakrishnan's epsilon method is used in conjunction with the Rayleigh-Ritz method to convert the dynamic optimization of the optimal control problem into a static optimization problem. Walsh functions and orthogonal polynomials are used as basis functions to implement the Rayleigh-Ritz method. The resulting static optimization problem is solved using matrix operations which have well-defined massively parallel solution methods. To demonstrate the method, a variety of nonlinear optimal control problems are solved. The nonlinear Rayleigh problem with quadratic cost and the nonlinear van der Pol problem with quadratic cost and terminal constraints on the states are solved in both serial and parallel on an eight-processor Intel Hypercube. The solutions using both Walsh functions and Legendre polynomials as basis functions are given. In addition to these problems which are solved in parallel, a more complex nonlinear minimum time optimal control problem and a nonlinear optimal control problem with an inequality constraint on the control are solved. Results show the method converges quickly, even from relatively poor initial guesses for the nominal trajectories.
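
    A minimal sketch of the epsilon-method/Rayleigh-Ritz reduction on a toy linear-quadratic problem (sizes and tolerances invented; this is not the thesis code):

        import numpy as np
        from numpy.polynomial import legendre as leg
        from scipy.optimize import minimize

        # Toy problem: minimize J = int_0^1 (x^2 + u^2) dt with x' = u, x(0) = 1.
        # Epsilon method: penalize the dynamics residual rather than enforce it.
        # Rayleigh-Ritz: expand x(t) and u(t) in Legendre series, leaving a
        # static optimization over the 2*N coefficients.
        N, eps = 8, 1e-3
        nodes, weights = leg.leggauss(32)
        t, w = 0.5 * (nodes + 1.0), 0.5 * weights   # quadrature on [0, 1]

        def series(c, deriv=0):
            """Evaluate a Legendre series (or its derivative) on [0, 1]."""
            cc = leg.legder(c, deriv) if deriv else c
            return leg.legval(2.0 * t - 1.0, cc) * 2.0 ** deriv

        def objective(z):
            cx, cu = z[:N], z[N:]
            x, u, xdot = series(cx), series(cu), series(cx, 1)
            cost = np.sum(w * (x**2 + u**2))
            residual = np.sum(w * (xdot - u)**2) / eps     # epsilon penalty
            ic = (leg.legval(-1.0, cx) - 1.0)**2 / eps     # x(0) = 1
            return cost + residual + ic

        res = minimize(objective, np.zeros(2 * N), method="BFGS")
        print(res.fun)   # analytic optimum for this LQ problem is tanh(1) ~ 0.762

    The penalized objective is a smooth static function of the coefficients, which is exactly the structure that maps onto the matrix operations and parallel solution methods described above.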

  17. Massively Multi-core Acceleration of a Document-Similarity Classifier to Detect Web Attacks

    SciTech Connect (OSTI)

    Ulmer, C; Gokhale, M; Top, P; Gallagher, B; Eliassi-Rad, T

    2010-01-14

    This paper describes our approach to adapting a text document similarity classifier based on the Term Frequency Inverse Document Frequency (TFIDF) metric to two massively multi-core hardware platforms. The TFIDF classifier is used to detect web attacks in HTTP data. In our parallel hardware approaches, we design streaming, real time classifiers by simplifying the sequential algorithm and manipulating the classifier's model to allow decision information to be represented compactly. Parallel implementations on the Tilera 64-core System on Chip and the Xilinx Virtex 5-LX FPGA are presented. For the Tilera, we employ a reduced state machine to recognize dictionary terms without requiring explicit tokenization, and achieve throughput of 37MB/s at slightly reduced accuracy. For the FPGA, we have developed a set of software tools to help automate the process of converting training data to synthesizable hardware and to provide a means of trading off between accuracy and resource utilization. The Xilinx Virtex 5-LX implementation requires 0.2% of the memory used by the original algorithm. At 166MB/s (80X the software) the hardware implementation is able to achieve Gigabit network throughput at the same accuracy as the original algorithm.
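
    For readers unfamiliar with the metric, a tiny TF-IDF similarity sketch in Python (the corpus and labels here are invented; the paper's streaming and hardware implementations are far more elaborate):

        import math
        from collections import Counter

        def tfidf_vectors(docs):
            """TF-IDF weight vector for each document (a list of tokens)."""
            df = Counter(term for doc in docs for term in set(doc))
            n = len(docs)
            return [{t: tf * math.log(n / df[t]) for t, tf in Counter(doc).items()}
                    for doc in docs]

        def cosine(a, b):
            dot = sum(w * b.get(t, 0.0) for t, w in a.items())
            na = math.sqrt(sum(w * w for w in a.values()))
            nb = math.sqrt(sum(w * w for w in b.values()))
            return dot / (na * nb) if na and nb else 0.0

        train = [["select", "from", "users"], ["get", "index", "html"]]
        labels = ["attack", "benign"]
        vecs = tfidf_vectors(train)
        query = {"select": 1.0, "users": 1.0}
        print(labels[max(range(len(vecs)), key=lambda i: cosine(query, vecs[i]))])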

  18. System and method for a parallel immunoassay system

    DOE Patents [OSTI]

    Stevens, Fred J.

    2002-01-01

    A method and system for detecting a target antigen using massively parallel immunoassay technology. In this system, high affinity antibodies of the antigen are covalently linked to small beads or particles. The beads are exposed to a solution containing DNA-oligomer-mimics of the antigen. The mimics which are reactive with the covalently attached antibody or antibodies will bind to the appropriate antibody molecule on the bead. The particles or beads are then washed to remove any unbound DNA-oligomer-mimics and are then immobilized or trapped. The bead-antibody complexes are then exposed to a test solution which may contain the targeted antigens. If the antigen is present it will replace the mimic since it has a greater affinity for the respective antibody. The particles are then removed from the solution leaving a residual solution. This residual solution is applied to a DNA chip containing many samples of complementary DNA. If the DNA tag from a mimic binds with its complementary DNA, it indicates the presence of the target antigen. A fluorescent tag can be used to more easily identify the bound DNA tag.

  19. Xyce parallel electronic simulator : users' guide.

    SciTech Connect (OSTI)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers; (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique

  20. Adding Data Management Services to Parallel File Systems

    SciTech Connect (OSTI)

    Brandt, Scott

    2015-03-04

    The objective of this project, called DAMASC for “Data Management in Scientific Computing”, is to coalesce data management with parallel file system management to present a declarative interface to scientists for managing, querying, and analyzing extremely large data sets efficiently and predictably. Managing extremely large data sets is a key challenge of exascale computing. The overhead, energy, and cost of moving massive volumes of data demand designs where computation is close to storage. In current architectures, compute/analysis clusters access data in a physically separate parallel file system and largely leave it to the scientist to reduce data movement. Over the past decades the high-end computing community has adopted middleware with multiple layers of abstractions and specialized file formats such as NetCDF-4 and HDF5. These abstractions provide a limited set of high-level data processing functions, but have inherent functionality and performance limitations: middleware that provides access to the highly structured contents of scientific data files stored in the (unstructured) file systems can only optimize to the extent that file system interfaces permit; the highly structured formats of these files often impede native file system performance optimizations. We are developing Damasc, an enhanced high-performance file system with native rich data management services. Damasc will enable efficient queries and updates over files stored in their native byte-stream format while retaining the inherent performance of file system data storage via declarative queries and updates over views of underlying files. Damasc has four key benefits for the development of data-intensive scientific code: (1) applications can use important data-management services, such as declarative queries, views, and provenance tracking, that are currently available only within database systems; (2) the use of these services becomes easier, as they are provided within a familiar file

  1. Parallel Climate Analysis Toolkit (ParCAT)

    SciTech Connect (OSTI)

    Smith, Brian Edward

    2013-06-30

    The parallel analysis toolkit (ParCAT) provides parallel statistical processing of large climate model simulation datasets. ParCAT provides parallel point-wise average calculations, frequency distributions, sums/differences of two datasets, and difference-of-average and average-of-difference for two datasets for arbitrary subsets of simulation time. ParCAT is a command-line utility that can be easily integrated in scripts or embedded in other applications. ParCAT supports CMIP5 post-processed datasets as well as non-CMIP5 post-processed datasets. ParCAT reads and writes standard netCDF files.
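
    A sketch of the kind of point-wise operations the toolkit provides, shown on in-memory arrays rather than netCDF files (illustrative only; ParCAT's actual interface is the command line):

        import numpy as np

        def pointwise_average(fields):
            """Point-wise time average of a (time, lat, lon) stack."""
            return np.mean(fields, axis=0)

        def difference_of_averages(a, b):
            return pointwise_average(a) - pointwise_average(b)

        def average_of_differences(a, b):
            return pointwise_average(a - b)

        rng = np.random.default_rng(1)
        a = rng.normal(size=(120, 4, 8))   # e.g. ten years of monthly fields
        b = rng.normal(size=(120, 4, 8))
        # For aligned, equal-length series the two reductions agree exactly:
        assert np.allclose(difference_of_averages(a, b),
                           average_of_differences(a, b))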

  2. Distributed parallel messaging for multiprocessor systems

    DOE Patents [OSTI]

    Chen, Dong; Heidelberger, Philip; Salapura, Valentina; Senger, Robert M; Steinmacher-Burrow, Burhard; Sugawara, Yutaka

    2013-06-04

    A method and apparatus for distributed parallel messaging in a parallel computing system. The apparatus includes, at each node of a multiprocessor network, multiple injection messaging engine units and reception messaging engine units, each implementing a DMA engine and each supporting both multiple packet injection into and multiple reception from a network, in parallel. The reception side of the messaging unit (MU) includes a switch interface enabling writing of data of a packet received from the network to the memory system. The transmission side of the messaging unit includes a switch interface for reading from the memory system when injecting packets into the network.

  3. Paradyn a parallel nonlinear, explicit, three-dimensional finite-element code for solid and structural mechanics user manual

    SciTech Connect (OSTI)

    Hoover, C G; DeGroot, A J; Sherwood, R J

    2000-06-01

    ParaDyn is a parallel version of the DYNA3D computer program, a three-dimensional explicit finite-element program for analyzing the dynamic response of solids and structures. The ParaDyn program has been used as a production tool for over three years for analyzing problems which range in size from a few tens of thousands of elements to between one-million and ten-million elements. ParaDyn runs on parallel computers provided by the Department of Energy Accelerated Strategic Computing Initiative (ASCI) and the Department of Defense High Performance Computing and Modernization Program. Preprocessing and post-processing software utilities and tools are designed to facilitate the generation of partitioned domains for processors on a massively parallel computer and the visualization of both resultant data and boundary data generated in a parallel simulation. This manual provides a brief overview of the parallel implementation; describes techniques for running the ParaDyn program, tools and utilities; and provides examples of parallel simulations.

  4. Parallel I/O in Practice

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    art. This tutorial sheds light on the state-of-the-art in parallel IO and provides the knowledge necessary for attendees to best leverage IO resources available to them. We...

  5. Asynchronous parallel pattern search for nonlinear optimization

    SciTech Connect (OSTI)

    P. D. Hough; T. G. Kolda; V. J. Torczon

    2000-01-01

    Parallel pattern search (PPS) can be quite useful for engineering optimization problems characterized by a small number of variables (say 10-50) and by expensive objective function evaluations such as complex simulations that take from minutes to hours to run. However, PPS, which was originally designed for execution on homogeneous and tightly-coupled parallel machines, is not well suited to the more heterogeneous, loosely-coupled, and even fault-prone parallel systems available today. Specifically, PPS is hindered by synchronization penalties and cannot recover in the event of a failure. The authors introduce a new asynchronous and fault-tolerant parallel pattern search (APPS) method and demonstrate its effectiveness on both simple test problems and some engineering optimization problems.
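
    A serial sketch of the underlying pattern search in Python (the asynchrony and fault tolerance that APPS adds are deliberately omitted; parameter choices are ours):

        import numpy as np

        def pattern_search(f, x0, step=1.0, tol=1e-6, max_iter=5000):
            """Compass search: poll +/- each coordinate direction and contract
            the step when no poll point improves. In (A)PPS the poll
            evaluations are farmed out to separate processors."""
            x = np.asarray(x0, dtype=float)
            fx = f(x)
            n = len(x)
            for _ in range(max_iter):
                improved = False
                for d in np.vstack([np.eye(n), -np.eye(n)]) * step:
                    ft = f(x + d)
                    if ft < fx:
                        x, fx, improved = x + d, ft, True
                        break
                if not improved:
                    step *= 0.5
                    if step < tol:
                        break
            return x, fx

        rosen = lambda z: (1 - z[0])**2 + 100 * (z[1] - z[0]**2)**2
        print(pattern_search(rosen, [-1.2, 1.0]))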

  6. Parallel programming with PCN. Revision 1

    SciTech Connect (OSTI)

    Foster, I.; Tuecke, S.

    1991-12-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous FTP from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A).

  7. PISTON (Portable Data Parallel Visualization and Analysis)

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    in a data-parallel way. By using nVidia's freely downloadable Thrust library and our own tools, we can generate executable codes for different acceleration hardware architectures...

  8. Feature Clustering for Accelerating Parallel Coordinate Descent

    SciTech Connect (OSTI)

    Scherrer, Chad; Tewari, Ambuj; Halappanavar, Mahantesh; Haglin, David J.

    2012-12-06

    We demonstrate an approach for accelerating calculation of the regularization path for L1 sparse logistic regression problems. We show the benefit of feature clustering as a preconditioning step for parallel block-greedy coordinate descent algorithms.

  9. Optimize Parallel Pumping Systems: Industrial Technologies Program...

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    ... to operate the number of pumps needed to meet variable flow rate requirements efficiently. ... Parallel pumps provide balanced or equal flow rates when the same models are used and their ...

  10. HOPSPACK: Hybrid Optimization Parallel Search Package.

    SciTech Connect (OSTI)

    Gray, Genetha A.; Kolda, Tamara G.; Griffin, Joshua; Taddy, Matt; Martinez-Canales, Monica

    2008-12-01

    In this paper, we describe the technical details of HOPSPACK (Hybrid Optimization Parallel Search Package), a new software platform which facilitates combining multiple optimization routines into a single, tightly-coupled, hybrid algorithm that supports parallel function evaluations. The framework is designed such that existing optimization source code can be easily incorporated with minimal code modification. By maintaining the integrity of each individual solver, the strengths and code sophistication of the original optimization package are retained and exploited.

  11. Spontaneous Lorentz and diffeomorphism violation, massive modes, and gravity

    SciTech Connect (OSTI)

    Bluhm, Robert; Fung Shuhong; Kostelecky, V. Alan

    2008-03-15

    Theories with spontaneous local Lorentz and diffeomorphism violation contain massless Nambu-Goldstone modes, which arise as field excitations in the minimum of the symmetry-breaking potential. If the shape of the potential also allows excitations above the minimum, then an alternative gravitational Higgs mechanism can occur in which massive modes involving the metric appear. The origin and basic properties of the massive modes are addressed in the general context involving an arbitrary tensor vacuum value. Special attention is given to the case of bumblebee models, which are gravitationally coupled vector theories with spontaneous local Lorentz and diffeomorphism violation. Mode expansions are presented in both local and spacetime frames, revealing the Nambu-Goldstone and massive modes via decomposition of the metric and bumblebee fields, and the associated symmetry properties and gauge fixing are discussed. The class of bumblebee models with kinetic terms of the Maxwell form is used as a focus for more detailed study. The nature of the associated conservation laws and the interpretation as a candidate alternative to Einstein-Maxwell theory are investigated. Explicit examples involving smooth and Lagrange-multiplier potentials are studied to illustrate features of the massive modes, including their origin, nature, dispersion laws, and effects on gravitational interactions. In the weak static limit, the massive mode and Lagrange-multiplier fields are found to modify the Newton and Coulomb potentials. The nature and implications of these modifications are examined.
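
    For concreteness, a commonly used bumblebee action of the Maxwell-kinetic-term class discussed above has the schematic form (a standard template from the literature; conventions and signs vary between papers):

        S = \int d^4x\,\sqrt{-g}\left[\frac{1}{2\kappa}\left(R + \xi\,B^\mu B^\nu R_{\mu\nu}\right)
            - \frac{1}{4}B_{\mu\nu}B^{\mu\nu} - V\!\left(B_\mu B^\mu \pm b^2\right)\right],
        \qquad B_{\mu\nu} = \partial_\mu B_\nu - \partial_\nu B_\mu

    The potential V drives B_\mu to a nonzero vacuum value; excitations along the degenerate minimum are the Nambu-Goldstone modes, while excitations up the potential walls are the massive modes studied in the paper.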

  12. Cosmological stability bound in massive gravity and bigravity

    SciTech Connect (OSTI)

    Fasiello, Matteo; Tolley, Andrew J. E-mail: andrew.j.tolley@case.edu

    2013-12-01

    We give a simple derivation of a cosmological bound on the graviton mass for spatially flat FRW solutions in massive gravity with an FRW reference metric and for bigravity theories. This bound comes from the requirement that the kinetic term of the helicity zero mode of the graviton is positive definite. The bound is dependent only on the parameters in the massive gravity potential and the Hubble expansion rate for the two metrics. We derive the decoupling limit of bigravity and FRW massive gravity, and use this to give an independent derivation of the cosmological bound. We recover our previous results that the tension between satisfying the Friedmann equation and the cosmological bound is sufficient to rule out all observationally relevant FRW solutions for massive gravity with an FRW reference metric. In contrast, in bigravity this tension is resolved due to the different nature of the Vainshtein mechanism. We find that in bigravity theories there exists an FRW solution with late-time self-acceleration for which the kinetic terms for the helicity-2, helicity-1 and helicity-0 are generically nonzero and positive, making this a compelling candidate for a model of cosmic acceleration. We confirm that the generalized bound is saturated for the candidate partially massless (bi)gravity theories but the existence of helicity-1/helicity-0 interactions implies the absence of the conjectured partially massless symmetry for both massive gravity and bigravity.
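
    The flavor of such a bound is the classic de Sitter (Higuchi) result for Fierz-Pauli massive gravity, quoted here from the standard literature rather than from the record:

        m^2 \geq 2H^2

    Below this value the kinetic term of the helicity-zero mode flips sign and the mode becomes a ghost; the paper generalizes this requirement to FRW backgrounds in massive gravity and bigravity.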

  13. Parallel phase model : a programming model for high-end parallel machines with manycores.

    SciTech Connect (OSTI)

    Wu, Junfeng; Wen, Zhaofang; Heroux, Michael Allen; Brightwell, Ronald Brian

    2009-04-01

    This paper presents a parallel programming model, Parallel Phase Model (PPM), for next-generation high-end parallel machines based on a distributed memory architecture consisting of a networked cluster of nodes with a large number of cores on each node. PPM has a unified high-level programming abstraction that facilitates the design and implementation of parallel algorithms to exploit both the parallelism of the many cores and the parallelism at the cluster level. The programming abstraction will be suitable for expressing both fine-grained and coarse-grained parallelism. It includes a few high-level parallel programming language constructs that can be added as an extension to an existing (sequential or parallel) programming language such as C; and the implementation of PPM also includes a light-weight runtime library that runs on top of an existing network communication software layer (e.g. MPI). Design philosophy of PPM and details of the programming abstraction are also presented. Several unstructured applications that inherently require high-volume random fine-grained data accesses have been implemented in PPM with very promising results.

  14. Translation invariant time-dependent solutions to massive gravity

    SciTech Connect (OSTI)

    Mourad, J.; Steer, D.A. E-mail: steer@apc.univ-paris7.fr

    2013-12-01

    Homogeneous time-dependent solutions of massive gravity generalise the plane wave solutions of the linearised Fierz-Pauli equations for a massive spin-two particle, as well as the Kasner solutions of General Relativity. We show that they also allow a clear counting of the degrees of freedom and represent a simplified framework to work out the constraints, the equations of motion and the initial value formulation. We work in the vielbein formulation of massive gravity, find the phase space resulting from the constraints and show that several disconnected sectors of solutions exist some of which are unstable. The initial values determine the sector to which a solution belongs. Classically, the theory is not pathological but quantum mechanically the theory may suffer from instabilities. The latter are not due to an extra ghost-like degree of freedom.

  15. Massive gravitational waves in Chern-Simons modified gravity

    SciTech Connect (OSTI)

    Myung, Yun Soo; Moon, Taeyoon E-mail: tymoon@inje.ac.kr

    2014-10-01

    We consider the nondynamical Chern-Simons (nCS) modified gravity, which is regarded as a parity-odd theory of massive gravity in four dimensions. We first find polarization modes of gravitational waves for θ=x/μ in nCS modified gravity by using the Newman-Penrose formalism where the null complex tetrad is necessary to specify gravitational waves. We show that in the Newman–Penrose formalism, the number of polarization modes is one in addition to an unspecified Ψ{sub 4}, implying three degrees of freedom for θ=x/μ. This compares with two for a canonical embedding of θ=t/μ. Also, if one introduces the Ricci tensor formalism to describe a massive graviton arising from the nCS modified gravity, one finds one massive mode after making second-order wave equations, which is compared to five found from the parity-even Einstein–Weyl gravity.

  16. Massive Stars in Colliding Wind Systems: the GLAST Perspective

    SciTech Connect (OSTI)

    Reimer, Anita; Reimer, Olaf; /Stanford U., HEPL /KIPAC, Menlo Park

    2011-11-29

    Colliding winds of massive stars in binary systems are considered as candidate sites of high-energy non-thermal photon emission. They are already among the suggested counterparts for a few individual unidentified EGRET sources, but may constitute a detectable source population for the GLAST observatory. The present work carries out such a population study of massive colliding wind systems at high-energy gamma-rays. Based on the recent detailed model (Reimer et al. 2006) for non-thermal photon production in prime candidate systems, we unveil the expected characteristics of this source class in the observables accessible at LAT energies. Combining the broadband emission model with the presently cataloged distribution of such systems and their individual parameters allows us to estimate the expected maximum number of LAT detections among massive stars in colliding wind binary systems.

  17. Parallel Algorithms and Patterns (Technical Report) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Title: Parallel Algorithms and Patterns. Authors: Robey, Robert W., Los Alamos ...

  18. Parallel Integral Curves (Book) | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    Book: Parallel Integral Curves. Authors: Pugmire, Dave; Peterka, Tom; ...

  19. Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications...

    Office of Scientific and Technical Information (OSTI)

    Title: Linux Kernel Co-Scheduling For Bulk Synchronous Parallel Applications ...

  20. Hanford Waste Treatment Plant Sets Massive Protective Shield Door in Pretreatment Facility

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    January 12, 2011 - The carbon steel doors come together to form an upside-down L-shape. The 102-ton door was set on top of the 85-ton door that was installed at the end of December.

  1. Java Parallel Secure Stream for Grid Computing

    SciTech Connect (OSTI)

    Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

    2001-09-01

    The emergence of high speed wide area networks makes grid computing a reality. However grid applications that need reliable data transfer still have difficulties achieving optimal TCP performance, due to network tuning of the TCP window size to improve the bandwidth and to reduce latency on a high speed wide area network. This paper presents a pure Java package called JPARSS (Java Parallel Secure Stream) that divides data into partitions that are sent over several parallel Java streams simultaneously and allows Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning the TCP window size. Several experimental results are provided to show that using parallel streams is more effective than tuning the TCP window size. In addition an X.509 certificate based single sign-on mechanism and SSL based connection establishment are integrated into this package. Finally a few applications using this package will be discussed.
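
    The core idea, stripped of the Java networking and X.509 machinery, is just partitioning a buffer across streams and reassembling by offset; a minimal Python sketch (function names are ours):

        def split_round_robin(data: bytes, nstreams: int, chunk: int = 4096):
            """Deal fixed-size chunks to nstreams partitions, round-robin,
            remembering each chunk's offset for reassembly."""
            parts = [[] for _ in range(nstreams)]
            for i in range(0, len(data), chunk):
                parts[(i // chunk) % nstreams].append((i, data[i:i + chunk]))
            return parts

        def reassemble(parts):
            """Merge the per-stream (offset, chunk) lists back into one buffer."""
            return b"".join(c for _, c in sorted(x for p in parts for x in p))

        payload = bytes(range(256)) * 100
        assert reassemble(split_round_robin(payload, 4)) == payload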

  2. Parallel Harness for Informatic Stream Hashing

    Energy Science and Technology Software Center (OSTI)

    2012-09-11

    PHISH is a lightweight framework which a set of independent processes can use to exchange data as they run on the same desktop machine, on processors of a parallel machine, or on different machines across a network. This enables them to work in a coordinated parallel fashion to perform computations on either streaming, archived, or self-generated data. The PHISH distribution includes a simple, portable library for performing data exchanges in useful patterns either via MPI message-passing or ZMQ sockets. PHISH input scripts are used to describe a data-processing algorithm, and additional tools provided in the PHISH distribution convert the script into a form that can be launched as a parallel job.

  3. Xyce parallel electronic simulator release notes.

    SciTech Connect (OSTI)

    Keiter, Eric Richard; Hoekstra, Robert John; Mei, Ting; Russo, Thomas V.; Schiek, Richard Louis; Thornquist, Heidi K.; Rankin, Eric Lamont; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Santarelli, Keith R.

    2010-05-01

    The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. Specific requirements include, among others, the ability to solve extremely large circuit problems by supporting large-scale parallel computing platforms, improved numerical performance and object-oriented code design and implementation. The Xyce release notes describe: hardware and software requirements; new features and enhancements; any defects fixed since the last release; and current known defects and defect workarounds. For up-to-date information not available at the time these notes were produced, please visit the Xyce web page at http://www.cs.sandia.gov/xyce.

  4. Parallel Implementation of Power System Dynamic Simulation

    SciTech Connect (OSTI)

    Jin, Shuangshuang; Huang, Zhenyu; Diao, Ruisheng; Wu, Di; Chen, Yousu

    2013-07-21

    Dynamic simulation of power system transient stability is important for planning, monitoring, operation, and control of electrical power systems. However, modeling the system dynamics and network involves the computationally intensive time-domain solution of numerous differential and algebraic equations (DAE). This results in a transient stability implementation that may not maintain the real-time constraints of an online security assessment. This paper presents a parallel implementation of the dynamic simulation on a high-performance computing (HPC) platform using parallel simulation algorithms and computation architectures. It enables the simulation to run even faster than real time, enabling the look-ahead capability of upcoming stability problems in the power grid.
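
    A toy transient-stability step for a single machine against an infinite bus (parameter values invented; production simulators integrate large DAE systems with implicit methods):

        import numpy as np

        M, D = 0.1, 0.05          # inertia and damping (per unit)
        Pm, Pmax = 0.8, 1.2       # mechanical input and peak electrical power

        def swing_rhs(state):
            """Classical swing equation: M*delta'' = Pm - Pmax*sin(delta) - D*delta'."""
            delta, omega = state
            return np.array([omega, (Pm - Pmax * np.sin(delta) - D * omega) / M])

        def simulate(delta0, t_end=5.0, dt=1e-3):
            state = np.array([delta0, 0.0])
            for _ in range(int(t_end / dt)):      # forward Euler for brevity
                state = state + dt * swing_rhs(state)
            return state

        eq = np.arcsin(Pm / Pmax)                 # steady-state rotor angle
        print(simulate(eq + 0.5))                 # damped swing back toward eq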

  5. Berkeley Unified Parallel C (UPC) Compiler

    Energy Science and Technology Software Center (OSTI)

    2003-04-06

    This program is a portable, open-source compiler for the UPC language, which is based on the Open64 framework, and has extensive support for optimizations. This compiler operates by translating UPC into ANSI/ISO C for compilation by a native compiler and linking with a UPC runtime library. This design eases portability to both shared and distributed memory parallel architectures. For proper operation the "Berkeley Unified Parallel C (UPC) Runtime Library" and its dependencies are required. Compatible replacements which implement "The Berkeley UPC Runtime Specification" are possible.

  6. Data communications in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-11-12

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer composed of compute nodes that execute a parallel application, each compute node including application processors that execute the parallel application and at least one management processor dedicated to gathering information regarding data communications. The PAMI is composed of data communications endpoints, each endpoint composed of a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources. Embodiments function by gathering call site statistics describing data communications resulting from execution of data communications instructions and identifying in dependence upon the call site statistics a data communications algorithm for use in executing a data communications instruction at a call site in the parallel application.

  7. Searches for massive neutrinos in nuclear beta decay

    SciTech Connect (OSTI)

    Jaros, J.A.

    1992-10-01

    The status of searches for massive neutrinos in nuclear beta decay is reviewed. The claim by an ITEP group that the electron antineutrino mass > 17 eV has been disputed by all the subsequent experiments. Current measurements of the tritium beta spectrum limit m{sub {bar {nu}}e} < 10 eV.

  8. Perturbation Theory of Massive Yang-Mills Fields

    DOE R&D Accomplishments [OSTI]

    Veltman, M.

    1968-08-01

    Perturbation theory of massive Yang-Mills fields is investigated with the help of the Bell-Treiman transformation. Diagrams containing one closed loop are shown to be convergent if there are more than four external vector boson lines. The investigation presented does not exclude the possibility that the theory is renormalizable.

  9. Parallel programming with PCN. Revision 2

    SciTech Connect (OSTI)

    Foster, I.; Tuecke, S.

    1993-01-01

    PCN is a system for developing and executing parallel programs. It comprises a high-level programming language, tools for developing and debugging programs in this language, and interfaces to Fortran and C that allow the reuse of existing code in multilingual parallel programs. Programs developed using PCN are portable across many different workstations, networks, and parallel computers. This document provides all the information required to develop parallel programs with the PCN programming system. It includes both tutorial and reference material. It also presents the basic concepts that underlie PCN, particularly where these are likely to be unfamiliar to the reader, and provides pointers to other documentation on the PCN language, programming techniques, and tools. PCN is in the public domain. The latest version of both the software and this manual can be obtained by anonymous ftp from Argonne National Laboratory in the directory pub/pcn at info.mcs.anl.gov (cf. Appendix A). This version of this document describes PCN version 2.0, a major revision of the PCN programming system. It supersedes earlier versions of this report.

  10. Linked-View Parallel Coordinate Plot Renderer

    Energy Science and Technology Software Center (OSTI)

    2011-06-28

    This software allows multiple linked views for interactive querying via map-based data selection, bar chart analytic overlays, and high dynamic range (HDR) line renderings. The major component of the visualization package is a parallel coordinate renderer with binning, curved layouts, shader-based rendering, and other techniques to allow interactive visualization of multidimensional data.

  11. The parallel virtual file system for portals.

    SciTech Connect (OSTI)

    Schutt, James Alan

    2004-04-01

    This report presents the result of an effort to re-implement the Parallel Virtual File System (PVFS) using Portals as the transport. This report provides short overviews of PVFS and Portals, and describes the design and implementation of PVFS over Portals. Finally, the results of performance testing of both stock PVFS and PVFS over Portals are presented.

  12. Message passing with parallel queue traversal

    DOE Patents [OSTI]

    Underwood, Keith D.; Brightwell, Ronald B.; Hemmert, K. Scott

    2012-05-01

    In message passing implementations, associative matching structures are used to permit list entries to be searched in parallel fashion, thereby avoiding the delay of linear list traversal. List management capabilities are provided to support list entry turnover semantics and priority ordering semantics.

  13. Parallel Performance of a Combustion Chemistry Simulation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Skinner, Gregg; Eigenmann, Rudolf

    1995-01-01

    We used a description of a combustion simulation's mathematical and computational methods to develop a version for parallel execution. The result was a reasonable performance improvement on small numbers of processors. We applied several important programming techniques, which we describe, in optimizing the application. This work has implications for programming languages, compiler design, and software engineering.

  14. Parallel stitching of 2D materials

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Ling, Xi; Wu, Lijun; Lin, Yuxuan; Ma, Qiong; Wang, Ziqiang; Song, Yi; Yu, Lili; Huang, Shengxi; Fang, Wenjing; Zhang, Xu; et al

    2016-01-27

    Diverse parallel stitched 2D heterostructures, including metal–semiconductor, semiconductor–semiconductor, and insulator–semiconductor, are synthesized directly through selective “sowing” of aromatic molecules as the seeds in the chemical vapor deposition (CVD) method. Lastly, the methodology enables the large-scale fabrication of lateral heterostructures, which offers tremendous potential for its application in integrated circuits.

  15. Parallel Algebraic Multigrids for Structural mechanics

    SciTech Connect (OSTI)

    Brezina, M; Tong, C; Becker, R

    2004-05-11

    This paper presents the results of a comparison of three parallel algebraic multigrid (AMG) preconditioners for structural mechanics applications. In particular, the authors are interested in investigating both the scalability and robustness of the preconditioners. Numerical results are given for a range of structural mechanics problems with various degrees of difficulty.

  16. Communication Graph Generator for Parallel Programs

    Energy Science and Technology Software Center (OSTI)

    2014-04-08

    Graphator is a collection of relatively simple sequential programs that generate communication graphs/matrices for commonly occurring patterns in parallel programs. Currently, there is support for five communication patterns: two-dimensional 4-point stencil, four-dimensional 8-point stencil, all-to-alls over sub-communicators, random near-neighbor communication, and near-neighbor communication.
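
    A sketch of how one such matrix can be generated, for the 2D 4-point stencil case (Python; Graphator's own file formats and options are not reproduced here):

        import numpy as np

        def stencil_2d_4pt(px, py, periodic=True):
            """Communication matrix for a px x py process grid: entry (i, j)
            is 1 when rank i exchanges halo data with rank j."""
            n = px * py
            mat = np.zeros((n, n), dtype=int)
            for r in range(n):
                x, y = r % px, r // px
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nx, ny = x + dx, y + dy
                    if periodic:
                        nx, ny = nx % px, ny % py
                    elif not (0 <= nx < px and 0 <= ny < py):
                        continue
                    mat[r, ny * px + nx] = 1
            return mat

        print(stencil_2d_4pt(4, 4).sum(axis=1))   # every rank has 4 neighbors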

  17. Collectively loading an application in a parallel computer

    DOE Patents [OSTI]

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Miller, Samuel J.; Mundy, Michael B.

    2016-01-05

    Collectively loading an application in a parallel computer, the parallel computer comprising a plurality of compute nodes, including: identifying, by a parallel computer control system, a subset of compute nodes in the parallel computer to execute a job; selecting, by the parallel computer control system, one of the subset of compute nodes in the parallel computer as a job leader compute node; retrieving, by the job leader compute node from computer memory, an application for executing the job; and broadcasting, by the job leader to the subset of compute nodes in the parallel computer, the application for executing the job.

  18. Multitasking TORT under UNICOS: Parallel performance models and measurements

    SciTech Connect (OSTI)

    Barnett, A.; Azmy, Y.Y.

    1999-09-27

    The existing parallel algorithms in the TORT discrete ordinates code were updated to function in a UNICOS environment. A performance model for the parallel overhead was derived for the existing algorithms. The largest contributors to the parallel overhead were identified and a new algorithm was developed. A parallel overhead model was also derived for the new algorithm. The predictions of the parallel performance models were compared to measurements from applications of the code to two TORT standard test problems and a large production problem. The parallel performance models agree well with the measured parallel overhead.

  19. FETI Prime Domain Decomposition base Parallel Iterative Solver Library Ver.1.0

    Energy Science and Technology Software Center (OSTI)

    2003-09-15

    FETI Prime is a library for the iterative solution of linear equations in solid and structural mechanics. The algorithm employs preconditioned conjugate gradients, with a domain decomposition-based preconditioner. The software is written in C++ and is designed for use with massively parallel computers, using MPI. The algorithm is based on the FETI-DP method, with additional capabilities for handling constraint equations, as well as interfacing with the Salinas structural dynamics code and the Finite Element Interface (FEI) library. Practical Application: FETI Prime is designed for use with finite element-based simulation codes for solid and structural mechanics. The solver uses element matrices, connectivity information, nodal information, and force vectors computed by the host code and provides back the solution to the linear system of equations, to the user-specified level of accuracy. The library is compiled with the host code and becomes an integral part of the host code executable.

  20. Final Report, DE-FG01-06ER25718 Domain Decomposition and Parallel Computing

    SciTech Connect (OSTI)

    Widlund, Olof B.

    2015-06-09

    The goal of this project is to develop and improve domain decomposition algorithms for a variety of partial differential equations such as those of linear elasticity and electromagnetics. These iterative methods are designed for massively parallel computing systems and allow the fast solution of the very large systems of algebraic equations that arise in large scale and complicated simulations. A special emphasis is placed on problems arising from Maxwell's equations. The approximate solvers, the preconditioners, are combined with the conjugate gradient method and must always include a solver of a coarse model in order to have a performance which is independent of the number of processors used in the computer simulation. A recent development allows for an adaptive construction of this coarse component of the preconditioner.

  1. A Pervasive Parallel Processing Framework For Data Visualization And Analysis At Extreme Scale Final Scientific and Technical Report

    SciTech Connect (OSTI)

    Geveci, Berk

    2014-10-31

    The evolution of the computing world from teraflop to petaflop has been relatively effortless, with several of the existing programming models scaling effectively to the petascale. The migration to exascale, however, poses considerable challenges. All industry trends suggest that the exascale machine will be built using processors containing hundreds to thousands of cores per chip. It can be inferred that efficient concurrency on exascale machines requires a massive amount of concurrent threads, each performing many operations on a localized piece of data. Currently, visualization libraries and applications are based on what is known as the visualization pipeline. In the pipeline model, algorithms are encapsulated as filters with inputs and outputs. These filters are connected by setting the output of one component to the input of another. Parallelism in the visualization pipeline is achieved by replicating the pipeline for each processing thread. This works well for today’s distributed memory parallel computers but cannot be sustained when operating on processors with thousands of cores. Our project investigates a new visualization framework designed to exhibit the pervasive parallelism necessary for extreme scale machines. Our framework achieves this by defining algorithms in terms of worklets, which are localized stateless operations. Worklets are atomic operations that execute when invoked, unlike filters, which execute when a pipeline request occurs. The worklet design allows execution on a massive amount of lightweight threads with minimal overhead. Only with such fine-grained parallelism can we hope to fill the billions of threads we expect will be necessary for efficient computation on an exascale machine.

  2. Data communications in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Data communications in a parallel active messaging interface ('PAMI') of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution of a compute node, including specification of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications, including receiving in an origin endpoint a data communications instruction, the instruction characterized by instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint, and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.

  3. Data communications in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2013-10-29

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the parallel computer including a plurality of compute nodes that execute a parallel application, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes and the endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a data communications instruction, the instruction characterized by an instruction type, the instruction specifying a transmission of transfer data from the origin endpoint to a target endpoint and transmitting, in accordance with the instruction type, the transfer data from the origin endpoint to the target endpoint.

  4. Translation invariant time-dependent solutions to massive gravity II

    SciTech Connect (OSTI)

    Mourad, J.; Steer, D.A. E-mail: steer@apc.univ-paris7.fr

    2014-06-01

    This paper is a sequel to JCAP 12 (2013) 004 and is also devoted to translation-invariant solutions of ghost-free massive gravity in its moving frame formulation. Here we consider a mass term which is linear in the vielbein (corresponding to a {beta}{sub 3} term in the 4D metric formulation) in addition to the cosmological constant. We determine explicitly the constraints, and from the initial value formulation show that the time-dependent solutions can have singularities at a finite time. Although the constraints give, as in the {beta}{sub 1} case, the correct number of degrees of freedom for a massive spin two field, we show that the lapse function can change sign at a finite time causing a singular time evolution. This is very different to the {beta}{sub 1} case where time evolution is always well defined. We conclude that the {beta}{sub 3} mass term can be pathological and should be treated with care.

  5. Searches for massive neutrinos in nuclear beta decay

    SciTech Connect (OSTI)

    Jaros, J.A.

    1992-10-01

    The status of searches for massive neutrinos in nuclear beta decay is reviewed. The claim by an ITEP group that the electron antineutrino mass > 17 eV has been disputed by all the subsequent experiments. Current measurements of the tritium beta spectrum limit m{sub {bar {nu}}e} < 10 eV. The status of the 17 keV neutrino is reviewed. The strong null results from INS Tokyo and Argonne, and deficiencies in the experiments which reported positive effects, make it unreasonable to ascribe the spectral distortions seen by Simpson, Hime, and others to a 17 keV neutrino. Several new ideas on how to search for massive neutrinos in nuclear beta decay are discussed.
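
    All of these experiments probe the same standard spectral shape; near the endpoint E{sub 0} the allowed beta spectrum with a neutrino of mass m reads (textbook form, not quoted from the record):

        \frac{dN}{dE} \propto p\,E\,(E_0 - E)\,\sqrt{(E_0 - E)^2 - m_\nu^2}\;
        \Theta(E_0 - E - m_\nu)

    and a heavy-neutrino admixture with mixing |U|^2 adds a kink to the spectrum:

        \frac{dN}{dE} = (1 - |U|^2)\left.\frac{dN}{dE}\right|_{m=0}
            + |U|^2\left.\frac{dN}{dE}\right|_{m = 17\,\mathrm{keV}}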

  6. INTERNAL GRAVITY WAVES IN MASSIVE STARS: ANGULAR MOMENTUM TRANSPORT

    SciTech Connect (OSTI)

    Rogers, T. M.; Lin, D. N. C.; McElwaine, J. N.; Lau, H. H. B. E-mail: lin@ucolick.org E-mail: hblau@astro.uni-bonn.de

    2013-07-20

    We present numerical simulations of internal gravity waves (IGW) in a star with a convective core and extended radiative envelope. We report on amplitudes, spectra, dissipation, and consequent angular momentum transport by such waves. We find that these waves are generated efficiently and transport angular momentum on short timescales over large distances. We show that, as in Earth's atmosphere, IGW drive equatorial flows which change magnitude and direction on short timescales. These results have profound consequences for the observational inferences of massive stars, as well as their long term angular momentum evolution. We suggest IGW angular momentum transport may explain many observational mysteries, such as: the misalignment of hot Jupiters around hot stars, the Be class of stars, Ni enrichment anomalies in massive stars, and the non-synchronous orbits of interacting binaries.

  7. The halo model in a massive neutrino cosmology

    SciTech Connect (OSTI)

    Massara, Elena; Villaescusa-Navarro, Francisco; Viel, Matteo E-mail: villaescusa@oats.inaf.it

    2014-12-01

    We provide a quantitative analysis of the halo model in the context of massive neutrino cosmologies. We discuss all the ingredients necessary to model the non-linear matter and cold dark matter power spectra and compare with the results of N-body simulations that incorporate massive neutrinos. Our neutrino halo model is able to capture the non-linear behavior of matter clustering with a ~20% accuracy up to very non-linear scales of k=10 h/Mpc (which would be affected by baryon physics). The largest discrepancies arise in the range k=0.5-1 h/Mpc where the 1-halo and 2-halo terms are comparable and are present also in a massless neutrino cosmology. However, at scales k<0.2 h/Mpc our neutrino halo model agrees with the results of N-body simulations at the level of 8% for total neutrino masses of <0.3 eV. We also model the neutrino non-linear density field as a sum of a linear and clustered component and predict the neutrino power spectrum and the cold dark matter-neutrino cross-power spectrum up to k=1 h/Mpc with ~30% accuracy. For masses below 0.15 eV the neutrino halo model captures the neutrino induced suppression, cast in terms of matter power ratios between massive and massless scenarios, with a 2% agreement with the results of N-body/neutrino simulations. Finally, we provide a simple application of the halo model: the computation of the clustering of galaxies, in massless and massive neutrino cosmologies, using a simple Halo Occupation Distribution scheme and our halo model extension.
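
    For reference, the standard (massless-neutrino) halo-model decomposition that the paper extends is (textbook form):

        P(k) = P_{1h}(k) + P_{2h}(k),
        P_{1h}(k) = \int dM\, n(M)\left(\frac{M}{\bar\rho}\right)^2 |u(k|M)|^2,
        P_{2h}(k) = \left[\int dM\, n(M)\,\frac{M}{\bar\rho}\,b(M)\,u(k|M)\right]^2 P_{\rm lin}(k)

    where n(M) is the halo mass function, u(k|M) the normalized Fourier transform of the halo density profile, and b(M) the linear halo bias; the neutrino extension adds the linear-plus-clustered neutrino component described above.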

  8. A symmetric approach to the massive nonlinear sigma model

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Ferrari, Ruggero

    2011-09-28

    In the present study we extend to the massive case the procedure of divergences subtraction, previously introduced for the massless nonlinear sigma model (D = 4). Perturbative expansion in the number of loops is successfully constructed. The resulting theory depends on the Spontaneous Symmetry Breaking parameter v, on the mass m and on the radiative correction parameter Λ. Fermions are not considered in the present work. SU(2) ⊗ SU(2) is the group used.

  9. THE ROLE OF THE MAGNETOROTATIONAL INSTABILITY IN MASSIVE STARS

    SciTech Connect (OSTI)

    Wheeler, J. Craig; Kagan, Daniel; Chatzopoulos, Emmanouil

    2015-01-20

    The magnetorotational instability (MRI) is key to physics in accretion disks and is widely considered to play some role in massive star core collapse. Models of rotating massive stars naturally develop very strong shear at composition boundaries, a necessary condition for MRI instability, and the MRI is subject to triply diffusive destabilizing effects in radiative regions. We have used the MESA stellar evolution code to compute magnetic effects due to the Spruit-Tayler (ST) mechanism and the MRI, separately and together, in a sample of massive star models. We find that the MRI can be active in the later stages of massive star evolution, leading to mixing effects that are not captured in models that neglect the MRI. The MRI and related magnetorotational effects can move models of given zero-age main sequence mass across ''boundaries'' from degenerate CO cores to degenerate O/Ne/Mg cores and from degenerate O/Ne/Mg cores to iron cores, thus affecting the final evolution and the physics of core collapse. The MRI acting alone can slow the rotation of the inner core in general agreement with the observed ''initial'' rotation rates of pulsars. The MRI analysis suggests that localized fields ~10{sup 12} G may exist at the boundary of the iron core. With both the ST and MRI mechanisms active in the 20 M{sub ⊙} model, we find that the helium shell mixes entirely out into the envelope. Enhanced mixing could yield a population of yellow or even blue supergiant supernova progenitors that would not be standard SN IIP.

  10. Parallel paving: An algorithm for generating distributed, adaptive, all-quadrilateral meshes on parallel computers

    SciTech Connect (OSTI)

    Lober, R.R.; Tautges, T.J.; Vaughan, C.T.

    1997-03-01

    Paving is an automated mesh generation algorithm which produces all-quadrilateral elements. It can additionally generate these elements in varying sizes such that the resulting mesh adapts to a function distribution, such as an error function. While powerful, conventional paving is a very serial algorithm in its operation. Parallel paving is the extension of serial paving into parallel environments to perform the same meshing functions as conventional paving, only on distributed, discretized models. This extension allows large, adaptive, parallel finite element simulations to take advantage of paving's meshing capabilities for h-remap remeshing. A significantly modified version of the CUBIT mesh generation code has been developed to host the parallel paving algorithm and demonstrate its capabilities on both two-dimensional and three-dimensional surface geometries, and to compare the resulting parallel-produced meshes to conventionally paved meshes for mesh quality and algorithm performance. Sandia's ''tiling'' dynamic load balancing code has also been extended to work with the paving algorithm to retain parallel efficiency as subdomains undergo iterative mesh refinement.

  11. Locating hardware faults in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Megerian, Mark G.; Ratterman, Joseph D.; Smith, Brian E.

    2010-04-13

    Locating hardware faults in a parallel computer, including defining within a tree network of the parallel computer two or more sets of non-overlapping test levels of compute nodes of the network that together include all the data communications links of the network, each non-overlapping test level comprising two or more adjacent tiers of the tree; defining test cells within each non-overlapping test level, each test cell comprising a subtree of the tree including a subtree root compute node and all descendant compute nodes of the subtree root compute node within a non-overlapping test level; performing, separately on each set of non-overlapping test levels, an uplink test on all test cells in a set of non-overlapping test levels; and performing, separately from the uplink tests and separately on each set of non-overlapping test levels, a downlink test on all test cells in a set of non-overlapping test levels.

  12. Parallel machine architecture for production rule systems

    DOE Patents [OSTI]

    Allen, Jr., John D.; Butler, Philip L.

    1989-01-01

    A parallel processing system for production rule programs utilizes a host processor for storing production rule right-hand sides (RHS) and a plurality of rule processors for storing left-hand sides (LHS). The rule processors operate in parallel in the recognize phase of the system's Recognize-Act cycle to match their respective LHSs against a stored list of working memory elements (WMEs) in order to find a self-consistent set of WMEs. The list of WMEs is dynamically varied during the Act phase of the system, in which the host executes or fires rule RHSs for those rules for which a self-consistent set has been found by the rule processors. The host transmits instructions for creating or deleting working memory elements as dictated by the rule firings until the rule processors are unable to find any further self-consistent working memory element sets, at which time the production rule system is halted.
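
    To make the cycle concrete, here is a minimal, hypothetical Python sketch of the Recognize-Act loop described above (the rule and WME representations are invented for illustration; this is not the patented parallel architecture):

      # Minimal Recognize-Act cycle: match LHS patterns against working
      # memory, fire a matching rule's RHS, repeat until nothing matches.

      def recognize_act(rules, working_memory):
          """Run the Recognize-Act cycle until no rule's LHS matches."""
          while True:
              # Recognize: rules whose LHS is satisfied by current WMEs.
              conflict_set = [r for r in rules if r["lhs"](working_memory)]
              if not conflict_set:
                  break                  # halt: no self-consistent match
              # Act: fire the first match; its RHS mutates the WME list.
              conflict_set[0]["rhs"](working_memory)
          return working_memory

      # Example rule: rewrite 'raw' elements to 'cooked' ones.
      rules = [{
          "lhs": lambda wm: "raw" in wm,
          "rhs": lambda wm: (wm.remove("raw"), wm.append("cooked")),
      }]
      print(recognize_act(rules, ["raw", "raw"]))  # ['cooked', 'cooked']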

  13. Three-dimensional Casimir piston for massive scalar fields

    SciTech Connect (OSTI)

    Lim, S.C.; Teo, L.P.

    2009-08-15

    We consider the Casimir force acting on a three-dimensional rectangular piston due to a massive scalar field subject to periodic, Dirichlet, and Neumann boundary conditions. The exponential cut-off method is used to derive the Casimir energy. It is shown that the divergent terms do not contribute to the Casimir force acting on the piston, thus rendering a finite, well-defined Casimir force acting on the piston. Explicit expressions for the total Casimir force acting on the piston are derived, which show that the Casimir force is always attractive for all the different boundary conditions considered. As a function of a, the distance from the piston to the opposite wall, the magnitude of the Casimir force behaves like 1/a^4 as a → 0^+ and decays exponentially as a → ∞. Moreover, the magnitude of the Casimir force is always a decreasing function of a. On the other hand, passing from massless to massive, we find that the effect of the mass is insignificant when a is small, but the magnitude of the force is decreased for large a in the massive case.

  14. The evolutionary tracks of young massive star clusters

    SciTech Connect (OSTI)

    Pfalzner, S.; Steinhausen, M.; Vincke, K.; Menten, K.; Parmentier, G.

    2014-10-20

    Stars mostly form in groups consisting of a few dozen to several tens of thousands of members. For 30 years, theoretical models have provided a basic concept of how such star clusters form and develop: they originate from the gas and dust of collapsing molecular clouds. The conversion from gas to stars being incomplete, the leftover gas is expelled, leading to cluster expansion and stars becoming unbound. Observationally, a direct confirmation of this process has proved elusive, which is attributed to the diversity of the properties of forming clusters. Here we take into account that the true cluster masses and sizes are masked, initially by the surface density of the background and later by the still-present unbound stars. Based on the recent observational finding that in a given star-forming region the star formation efficiency depends on the local density of the gas, we use an analytical approach combined with N-body simulations to reveal evolutionary tracks for young massive clusters covering the first 10 Myr. Just as the Hertzsprung-Russell diagram traces the evolution of stars, these tracks provide equivalent information for clusters. Like stars, massive clusters form and develop faster than their lower-mass counterparts, explaining why so few massive cluster progenitors are found.

  15. FORTRAN Extensions for Modular Parallel Processing

    Energy Science and Technology Software Center (OSTI)

    1996-01-12

    FORTRAN M is a small set of extensions to FORTRAN that supports a modular approach to the construction of sequential and parallel programs. FORTRAN M programs use channels to plug together processes which may be written in FORTRAN M or FORTRAN 77. Processes communicate by sending and receiving messages on channels. Channels and processes can be created dynamically, but programs remain deterministic unless specialized nondeterministic constructs are used.
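
    FORTRAN M itself is not excerpted here; the following Python sketch (using multiprocessing.Pipe as a stand-in for a FORTRAN M channel) illustrates the programming model the summary describes: processes plugged together by a channel and communicating only through messages:

      # Channel-style message passing between two processes; the Pipe
      # plays the role of a FORTRAN M channel created dynamically.
      from multiprocessing import Process, Pipe

      def producer(chan):
          for i in range(3):
              chan.send(i * i)      # send a message on the channel
          chan.send(None)           # end-of-stream marker

      def consumer(chan):
          while (msg := chan.recv()) is not None:
              print("received", msg)

      if __name__ == "__main__":
          tx, rx = Pipe()                       # create a channel
          p = Process(target=producer, args=(tx,))
          c = Process(target=consumer, args=(rx,))
          p.start(); c.start(); p.join(); c.join()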

  16. Parallel Integrated Thermal Management - Energy Innovation Portal

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel Integrated Thermal Management (National Renewable Energy Laboratory). Technology Marketing Summary: Many current cooling systems for hybrid electric vehicles (HEVs) with a high-power electric drive system utilize a low-temperature liquid cooling loop for cooling the power electronics system and electric machines associated with the electric

  17. Parallel Heuristics for Scalable Community Detection

    SciTech Connect (OSTI)

    Lu, Howard; Kalyanaraman, Anantharaman; Halappanavar, Mahantesh; Choudhury, Sutanay

    2014-05-17

    Community detection has become a fundamental operation in numerous graph-theoretic applications. It is used to reveal natural divisions that exist within real world networks without imposing prior size or cardinality constraints on the set of communities. Despite its potential for application, there is only limited support for community detection on large-scale parallel computers, largely owing to the irregular and inherently sequential nature of the underlying heuristics. In this paper, we present parallelization heuristics for fast community detection using the Louvain method as the serial template. The Louvain method is an iterative heuristic for modularity optimization. Originally developed by Blondel et al. in 2008, the method has become increasingly popular owing to its ability to detect high modularity community partitions in a fast and memory-efficient manner. However, the method is also inherently sequential, thereby limiting its scalability to problems that can be solved on desktops. Here, we observe certain key properties of this method that present challenges for its parallelization, and consequently propose multiple heuristics that are designed to break the sequential barrier. Our heuristics are agnostic to the underlying parallel architecture. For evaluation purposes, we implemented our heuristics on shared memory (OpenMP) and distributed memory (MapReduce-MPI) machines, and tested them over real world graphs derived from multiple application domains (internet, biological, natural language processing). Experimental results demonstrate the ability of our heuristics to converge to high modularity solutions comparable to those output by the serial algorithm in nearly the same number of iterations, while also drastically reducing time to solution.
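
    As a reference point for the serial template being parallelized, a minimal sketch of Louvain-style local moving is given below (modularity is recomputed from scratch for clarity; the actual method uses incremental gain formulas, and the paper's parallel heuristics are not reproduced):

      # Greedy local moving on an undirected, unweighted graph given as
      # an adjacency dict; each node tries its neighbors' communities.
      import itertools

      def modularity(adj, comm):
          m = sum(len(nbrs) for nbrs in adj.values()) / 2   # edge count
          q = 0.0
          for i, j in itertools.product(adj, adj):
              a = 1.0 if j in adj[i] else 0.0
              if comm[i] == comm[j]:
                  q += a - len(adj[i]) * len(adj[j]) / (2 * m)
          return q / (2 * m)

      def local_moving(adj):
          comm = {v: v for v in adj}                 # singleton start
          improved = True
          while improved:
              improved = False
              for v in adj:
                  best, best_q = comm[v], modularity(adj, comm)
                  for c in {comm[u] for u in adj[v]}:  # neighbor comms
                      comm[v] = c
                      if (q := modularity(adj, comm)) > best_q + 1e-12:
                          best, best_q, improved = c, q, True
                  comm[v] = best
          return comm

      # Two triangles joined by one edge -> two communities.
      adj = {0: {1, 2}, 1: {0, 2}, 2: {0, 1, 3},
             3: {2, 4, 5}, 4: {3, 5}, 5: {3, 4}}
      print(local_moving(adj))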

  18. Runtime System Library for Parallel Weather Modules

    Energy Science and Technology Software Center (OSTI)

    1997-07-22

    RSL is a Fortran-callable runtime library for use in implementing regular-grid weather forecast models, with nesting, on scalable distributed memory parallel computers. It provides high-level routines for finite-difference stencil communications and inter-domain exchange of data for nested forcing and feedback. RSL supports a unique point-wise domain-decomposition strategy to facilitate load-balancing.
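
    RSL's actual API is not shown here; this small numpy sketch illustrates the stencil-communication idea such a library automates, using one-cell halo regions exchanged between two subdomains of a decomposed grid:

      # Halo exchange between two subdomains of a 1-D decomposed grid.
      import numpy as np

      def exchange_halos(left, right):
          """Copy edge columns into the neighbor's halo columns."""
          left[:, -1] = right[:, 1]    # right neighbor's first interior column
          right[:, 0] = left[:, -2]    # left neighbor's last interior column

      left = np.arange(12.0).reshape(3, 4)          # cols 0..2 interior, col 3 halo
      right = np.arange(12.0, 24.0).reshape(3, 4)   # col 0 halo, cols 1..3 interior
      exchange_halos(left, right)
      print(left[:, -1], right[:, 0])
      # After the exchange, a 3-point stencil can be applied to the
      # interior cells of each subdomain without further communication.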

  19. Parallel Molecular Dynamics Program for Molecules

    Energy Science and Technology Software Center (OSTI)

    1995-03-07

    ParBond is a parallel classical molecular dynamics code that models bonded molecular systems, typically of an organic nature. It uses classical force fields for both non-bonded Coulombic and Van der Waals interactions and for 2-, 3-, and 4-body bonded (bond, angle, dihedral, and improper) interactions. It integrates Newton's equation of motion for the molecular system and evaluates various thermodynamical properties of the system as it progresses.
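
    ParBond's source is not reproduced here; the following self-contained sketch shows the core integration idea in Python: velocity-Verlet time stepping of Newton's equation of motion for a single harmonic bond (the force constant, rest length, and time step are made-up illustrative values):

      # Velocity-Verlet integration of one harmonic bond between two atoms.
      import numpy as np

      k, r0, dt, mass = 450.0, 1.0, 1e-3, 1.0

      def bond_forces(x):
          d = x[1] - x[0]
          r = np.linalg.norm(d)
          f = -k * (r - r0) * d / r     # force on atom 1; atom 0 gets -f
          return np.array([-f, f])

      x = np.array([[0.0, 0.0, 0.0], [1.2, 0.0, 0.0]])  # slightly stretched
      v = np.zeros_like(x)
      f = bond_forces(x)
      for _ in range(1000):             # velocity-Verlet loop
          v += 0.5 * dt * f / mass
          x += dt * v
          f = bond_forces(x)
          v += 0.5 * dt * f / mass
      print("bond length:", np.linalg.norm(x[1] - x[0]))  # oscillates about r0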

  20. Xyce(™) Parallel Electronic Simulator

    Energy Science and Technology Software Center (OSTI)

    2013-10-03

    The Xyce Parallel Electronic Simulator simulates electronic circuit behavior in DC, AC, HB, MPDE, and transient modes using standard analog (DAE) and/or device (PDE) device models, including several age- and radiation-aware devices. It supports a variety of computing platforms, both serial and parallel, and it uses modern solution algorithms, dynamic parallel load balancing, and iterative solvers. Xyce is primarily used to simulate the voltage and current behavior of a circuit network (a network of electronic devices connected via a conductive network). As a tool, it is mainly used for the design and analysis of electronic circuits. Kirchhoff's conservation laws are enforced over a network using modified nodal analysis. This results in a set of differential algebraic equations (DAEs). The resulting nonlinear problem is solved iteratively using a fully coupled Newton method, which in turn results in a linear system that is solved by either a standard sparse-direct solver or iteratively using Trilinos linear solver packages, also developed at Sandia National Laboratories.
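
    As a toy illustration of the modified nodal analysis formulation mentioned above (a linear, hand-sized example; Xyce's actual formulation handles general nonlinear DAEs), consider a 10 V source driving two series resistors:

      # Modified nodal analysis: unknowns are node voltages plus the
      # branch current of the voltage source.
      import numpy as np

      G1, G2 = 1 / 100.0, 1 / 200.0    # conductances of R1, R2 (siemens)
      # node 1: G1*(v1 - v2) + i_s = 0
      # node 2: G1*(v2 - v1) + G2*v2 = 0
      # source: v1 = 10
      A = np.array([[ G1, -G1,      1.0],
                    [-G1,  G1 + G2, 0.0],
                    [1.0,  0.0,     0.0]])
      b = np.array([0.0, 0.0, 10.0])
      v1, v2, i_s = np.linalg.solve(A, b)
      print(v1, v2, i_s)   # 10.0, 6.667, -0.0333 (source delivers 33.3 mA)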

  1. Xyce parallel electronic simulator : reference guide.

    SciTech Connect (OSTI)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Warrender, Christina E.; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2011-05-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide. The Xyce Parallel Electronic Simulator has been written to support, in a rigorous manner, the simulation needs of the Sandia National Laboratories electrical designers. It is targeted specifically to run on large-scale parallel computing platforms but also runs well on a variety of architectures, including single-processor workstations. It also aims to support a variety of devices and models specific to Sandia needs. This document is intended to complement the Xyce Users Guide. It contains comprehensive, detailed information about a number of topics pertinent to the usage of Xyce. Included in this document is a netlist reference for the input-file commands and elements supported within Xyce; a command line reference, which describes the available command line arguments for Xyce; and quick references for users of other circuit codes, such as OrCAD's PSpice and Sandia's ChileSPICE.

  2. Characterizing the parallelism in rule-based expert systems

    SciTech Connect (OSTI)

    Douglass, R.J.

    1984-01-01

    A brief review of two classes of rule-based expert systems is presented, followed by a detailed analysis of potential sources of parallelism at the production or rule level, the subrule level (including match, select, and act parallelism), and at the search level (including AND, OR, and stream parallelism). The potential amount of parallelism from each source is discussed and characterized in terms of its granularity, inherent serial constraints, efficiency, speedup, dynamic behavior, and communication volume, frequency, and topology. Subrule parallelism will yield, at best, two- to tenfold speedup, and rule level parallelism will yield a modest speedup on the order of 5 to 10 times. Rule level can be combined with OR, AND, and stream parallelism in many instances to yield further parallel speedups.
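
    The modest speedups quoted above follow the familiar Amdahl bound; as a reminder (standard formula, not taken from this paper), a parallelizable fraction p on N processors yields

      S(N) = \frac{1}{(1 - p) + p/N} \le \frac{1}{1 - p},

    so even p = 0.9 caps the achievable speedup at 10, consistent with the five- to tenfold figures above.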

  3. SimFS: A Large Scale Parallel File System Simulator

    Energy Science and Technology Software Center (OSTI)

    2011-08-30

    The software provides both framework and tools to simulate a large-scale parallel file system such as Lustre.

  4. Parallelizing AT with MatlabMPI

    SciTech Connect (OSTI)

    Li, Evan Y.; /Brown U. /SLAC

    2011-06-22

    The Accelerator Toolbox (AT) is a high-level collection of tools and scripts specifically oriented toward solving problems dealing with computational accelerator physics. It is integrated into the MATLAB environment, which provides an accessible, intuitive interface for accelerator physicists, allowing researchers to focus the majority of their efforts on simulations and calculations, rather than programming and debugging difficulties. Efforts toward parallelization of AT have been put in place to upgrade its performance to modern standards of computing. We utilized the packages MatlabMPI and pMatlab, developed by MIT Lincoln Laboratory, to set up a message-passing environment that could be called within MATLAB, establishing the necessary prerequisites for multithread processing capabilities. On local quad-core CPUs, we were able to demonstrate processor efficiencies of roughly 95% and speed increases of nearly 380%. By exploiting the efficacy of modern-day parallel computing, we were able to demonstrate highly efficient per-processor speed increments in AT's beam-tracking functions. Extrapolating from these predictions, we can expect to reduce week-long computation runtimes to less than 15 minutes. This is a huge performance improvement and has enormous implications for the future computing power of the accelerator physics group at SSRL. However, one of the downfalls of parringpass is its current lack of transparency; the pMatlab and MatlabMPI packages must first be well understood by the user before the system can be configured to run the scripts. In addition, the instantiation of argument parameters requires internal modification of the source code. Thus, parringpass cannot be directly run from the MATLAB command line, which detracts from its flexibility and user-friendliness. Future work in AT's parallelization will focus on development of external functions and scripts that can be called from within MATLAB and configured on multiple nodes, while
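
    The quoted quad-core figures are mutually consistent if the "nearly 380%" speed increase is read as a speedup of S ≈ 3.8: the parallel efficiency is then

      E = \frac{S}{N} = \frac{3.8}{4} \approx 0.95,

    i.e., roughly the 95% processor efficiency reported.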

  5. Parallel State Estimation Assessment with Practical Data

    SciTech Connect (OSTI)

    Chen, Yousu; Jin, Shuangshuang; Rice, Mark J.; Huang, Zhenyu

    2014-10-31

    This paper presents a full-cycle parallel state estimation (PSE) implementation using a preconditioned conjugate gradient algorithm. The developed code is able to solve large-size power system state estimation within 5 seconds using real-world data, comparable to the Supervisory Control And Data Acquisition (SCADA) rate. This achievement allows the operators to know the system status much faster to help improve grid reliability. Case study results of the Bonneville Power Administration (BPA) system with real measurements are presented. The benefits of fast state estimation are also discussed.
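
    The PSE code itself is not shown; the sketch below is a generic Jacobi-preconditioned conjugate gradient solver in Python, illustrating the kind of PCG kernel the paper builds on, applied to a small symmetric positive-definite test system:

      # Preconditioned conjugate gradient with a Jacobi (diagonal)
      # preconditioner for a symmetric positive-definite system A x = b.
      import numpy as np

      def pcg(A, b, tol=1e-10, max_iter=200):
          x = np.zeros_like(b)
          r = b - A @ x
          Minv = 1.0 / np.diag(A)       # Jacobi preconditioner
          z = Minv * r
          p = z.copy()
          rz = r @ z
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rz / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              if np.linalg.norm(r) < tol:
                  break
              z = Minv * r
              rz_new = r @ z
              p = z + (rz_new / rz) * p
              rz = rz_new
          return x

      A = np.array([[4.0, 1.0], [1.0, 3.0]])
      b = np.array([1.0, 2.0])
      print(pcg(A, b))                  # close to np.linalg.solve(A, b)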

  6. SPRNG Scalable Parallel Random Number Generator LIbrary

    Energy Science and Technology Software Center (OSTI)

    2010-03-16

    This revision corrects some errors in SPRNG 1. Users of newer SPRNG versions can obtain the corrected files and build their version with it. This version also improves the scalability of some of the application-based tests in the SPRNG test suite. It also includes an interface to a parallel Mersenne Twister, so that if users install the Mersenne Twister, then they can test this generator with the SPRNG test suite and also use some SPRNG features with that generator.

  7. A brief parallel I/O tutorial.

    SciTech Connect (OSTI)

    Ward, H. Lee

    2010-03-01

    This document provides common best practices for the efficient utilization of parallel file systems for analysts and application developers. A multi-program, parallel supercomputer is able to provide effective compute power by aggregating a host of lower-power processors using a network. The idea, in general, is that one either constructs the application to distribute parts to the different nodes and processors available and then collects the result (a parallel application), or one launches a large number of small jobs, each doing similar work on different subsets (a campaign). The I/O system on these machines is usually implemented as a tightly-coupled, parallel application itself. It is providing the concept of a 'file' to the host applications. The 'file' is an addressable store of bytes and that address space is global in nature. In essence, it is providing a global address space. Beyond the simple reality that the I/O system is normally composed of a small, less capable, collection of hardware, that concept of a global address space will cause problems if not very carefully utilized. How much of a problem and the ways in which those problems manifest will be different, but that it is problem prone has been well established. Worse, the file system is a shared resource on the machine - a system service. What an application does when it uses the file system impacts all users. It is not the case that some portion of the available resource is reserved. Instead, the I/O system responds to requests by scheduling and queuing based on instantaneous demand. Using the system well contributes to the overall throughput on the machine. From a solely self-centered perspective, using it well reduces the time that the application or campaign is subject to impact by others. The developer's goal should be to accomplish I/O in a way that minimizes interaction with the I/O system, maximizes the amount of data moved per call, and provides the I/O system the most information about
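
    The document's central advice (minimize interactions with the I/O system, maximize bytes moved per call) can be illustrated with a small, hypothetical Python example that aggregates many logical records into a single large write:

      # Aggregate many small logical records into one large transfer
      # instead of issuing one tiny write per record.
      import io, os

      records = [f"record {i}\n".encode() for i in range(100_000)]

      # Poor pattern (shown only as a comment): one unbuffered write per
      # record generates heavy traffic against the shared file system.
      # with open("out.dat", "wb", buffering=0) as f:
      #     for rec in records:
      #         f.write(rec)

      # Better pattern: aggregate in memory, then issue one large write.
      buf = io.BytesIO()
      for rec in records:
          buf.write(rec)                # cheap in-memory append
      with open("out.dat", "wb") as f:
          f.write(buf.getvalue())       # single large transfer
      os.remove("out.dat")              # tidy up after the demo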

  8. Carbothermic reduction with parallel heat sources

    DOE Patents [OSTI]

    Troup, Robert L.; Stevenson, David T.

    1984-12-04

    Disclosed are apparatus and method of carbothermic direct reduction for producing an aluminum alloy from a raw material mix including aluminum oxide, silicon oxide, and carbon wherein parallel heat sources are provided by a combustion heat source and by an electrical heat source at essentially the same position in the reactor, e.g., such as at the same horizontal level in the path of a gravity-fed moving bed in a vertical reactor. The present invention includes providing at least 79% of the heat energy required in the process by the electrical heat source.

  9. Requirements for Parallel I/O,

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Requirements for Parallel I/O, Visualization and Analysis. Prabhat (LBL/NERSC) and Quincey Koziol (The HDF Group). NERSC ASCR Requirements for 2017, January 15, 2014, LBNL. Project description: m636 repo. LBL Vis Base Program (Bethel PI) [PM: Nowell]: conduct fundamental and applied vis/analytics R&D to address exascale challenges. ExaHDF5 Project (Prabhat, Quincey PIs) [PM: Nowell]: scale parallel I/O and data management technologies for current

  10. Parallel heater system for subsurface formations

    DOE Patents [OSTI]

    Harris, Christopher Kelvin (Houston, TX); Karanikas, John Michael (Houston, TX); Nguyen, Scott Vinh (Houston, TX)

    2011-10-25

    A heating system for a subsurface formation is disclosed. The system includes a plurality of substantially horizontally oriented or inclined heater sections located in a hydrocarbon containing layer in the formation. At least a portion of two of the heater sections are substantially parallel to each other. The ends of at least two of the heater sections in the layer are electrically coupled to a substantially horizontal, or inclined, electrical conductor oriented substantially perpendicular to the ends of the at least two heater sections.

  11. Processing data communications events by awakening threads in parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2016-03-15

    Processing data communications events in a parallel active messaging interface (`PAMI`) of a parallel computer that includes compute nodes that execute a parallel application, with the PAMI including data communications endpoints, and the endpoints are coupled for data communications through the PAMI and through other data communications resources, including determining by an advance function that there are no actionable data communications events pending for its context, placing by the advance function its thread of execution into a wait state, waiting for a subsequent data communications event for the context; responsive to occurrence of a subsequent data communications event for the context, awakening by the thread from the wait state; and processing by the advance function the subsequent data communications event now pending for the context.
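
    The patented PAMI machinery is not reproduced here; this hypothetical Python sketch shows only the scheduling idea the abstract describes: an advance routine that puts its thread into a wait state when no events are pending and is awakened when a new event arrives:

      # Advance thread sleeps on a condition variable until an event
      # is posted, then wakes and processes it.
      import threading, queue, time

      events = queue.Queue()
      cond = threading.Condition()

      def advance():
          while True:
              with cond:
                  while events.empty():
                      cond.wait()       # wait state: no pending events
              ev = events.get()
              if ev == "shutdown":
                  return
              print("processed", ev)

      t = threading.Thread(target=advance)
      t.start()
      for ev in ("put", "get", "shutdown"):
          time.sleep(0.1)
          with cond:
              events.put(ev)
              cond.notify()             # awaken the advance thread
      t.join()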

  12. Cosmic expansion histories in massive bigravity with symmetric matter coupling

    SciTech Connect (OSTI)

    Enander, Jonas; Mörtsell, Edvard [Oskar Klein Center, Stockholm University, Albanova University Center, 106 91 Stockholm (Sweden); Solomon, Adam R. [DAMTP, Centre for Mathematical Sciences, University of Cambridge, Wilberforce Rd., Cambridge CB3 0WA (United Kingdom); Akrami, Yashar, E-mail: enander@fysik.su.se, E-mail: a.r.solomon@damtp.cam.ac.uk, E-mail: yashar.akrami@astro.uio.no, E-mail: edvard@fysik.su.se [Institute of Theoretical Astrophysics, University of Oslo, P.O. Box 1029 Blindern, N-0315 Oslo (Norway)

    2015-01-01

    We study the cosmic expansion history of massive bigravity with a viable matter coupling which treats both metrics on equal footing. We derive the Friedmann equation for the effective metric through which matter couples to the two metrics, and study its solutions. For certain parameter choices, the background cosmology is identical to that of ΛCDM. More general parameters yield dynamical dark energy, which can still be in agreement with observations of the expansion history. We study specific parameter choices of interest, including minimal models, maximally-symmetric models, and a candidate partially-massless theory.

  13. Translation invariant time-dependent massive gravity: Hamiltonian analysis

    SciTech Connect (OSTI)

    Mourad, Jihad; Steer, Danièle A.; Noui, Karim E-mail: karim.noui@lmpt.univ-tours.fr

    2014-09-01

    The canonical structure of massive gravity in the first-order moving frame formalism is studied. We work in the simplified context of translation invariant fields, with mass terms given by general non-derivative interactions, invariant under the diagonal Lorentz group, depending on the moving frame as well as a fixed reference frame. We prove that the only mass terms which give 5 propagating degrees of freedom are the dRGT mass terms, namely those which are linear in the lapse. We also complete the Hamiltonian analysis with the dynamical evolution of the system.

  14. Closed-form decomposition of one-loop massive amplitudes

    SciTech Connect (OSTI)

    Britto, Ruth; Feng Bo; Mastrolia, Pierpaolo

    2008-07-15

    We present formulas for the coefficients of 2-, 3-, 4-, and 5-point master integrals for one-loop massive amplitudes. The coefficients are derived from unitarity cuts in D dimensions. The input parameters can be read off from any unitarity-cut integrand, as assembled from tree-level expressions, after simple algebraic manipulations. The formulas presented here are suitable for analytical as well as numerical evaluation. Their validity is confirmed in two known cases of helicity amplitudes contributing to gg → gg and gg → gH, where the masses of the Higgs and the fermion circulating in the loop are kept as free parameters.

  15. Massive dark photons in a Higgs portal model

    SciTech Connect (OSTI)

    Hadjimichef, Dimiter

    2015-12-17

    An extension of the Standard Model with a hidden sector which consists of gauge singlets (a Dirac fermion χ and a scalar S) plus a vector boson V{sub μ} (dark massive photon) is studied. The singlet scalar interacts with the Standard Model sector through the triple and quartic scalar interactions, while the singlet fermion and vector boson field interact with the Standard Model only via the singlet scalar. The scalar field generates the vector boson's mass. Perspectives for future e{sup −}e{sup +} colliders are considered.

  16. Hybrid and Parallel Domain-Decomposition Methods Development to Enable Monte Carlo for Reactor Analyses

    SciTech Connect (OSTI)

    Wagner, John C; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Turner, John A

    2011-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform real commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the gold standard for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method.

  17. Hybrid and Parallel Domain-Decomposition Methods Development to Enable Monte Carlo for Reactor Analyses

    SciTech Connect (OSTI)

    Wagner, John C; Mosher, Scott W; Evans, Thomas M; Peplow, Douglas E.; Turner, John A

    2010-01-01

    This paper describes code and methods development at the Oak Ridge National Laboratory focused on enabling high-fidelity, large-scale reactor analyses with Monte Carlo (MC). Current state-of-the-art tools and methods used to perform ''real'' commercial reactor analyses have several undesirable features, the most significant of which is the non-rigorous spatial decomposition scheme. Monte Carlo methods, which allow detailed and accurate modeling of the full geometry and are considered the ''gold standard'' for radiation transport solutions, are playing an ever-increasing role in correcting and/or verifying the deterministic, multi-level spatial decomposition methodology in current practice. However, the prohibitive computational requirements associated with obtaining fully converged, system-wide solutions restrict the role of MC to benchmarking deterministic results at a limited number of state-points for a limited number of relevant quantities. The goal of this research is to change this paradigm by enabling direct use of MC for full-core reactor analyses. The most significant of the many technical challenges that must be overcome are the slow, non-uniform convergence of system-wide MC estimates and the memory requirements associated with detailed solutions throughout a reactor (problems involving hundreds of millions of different material and tally regions due to fuel irradiation, temperature distributions, and the needs associated with multi-physics code coupling). To address these challenges, our research has focused on the development and implementation of (1) a novel hybrid deterministic/MC method for determining high-precision fluxes throughout the problem space in k-eigenvalue problems and (2) an efficient MC domain-decomposition (DD) algorithm that partitions the problem phase space onto multiple processors for massively parallel systems, with statistical uncertainty estimation. The hybrid method development is based on an extension of the FW-CADIS method

  18. Parallel tetrahedral mesh refinement with MOAB.

    SciTech Connect (OSTI)

    Thompson, David C.; Pebay, Philippe Pierre

    2008-12-01

    In this report, we present the novel functionality of parallel tetrahedral mesh refinement which we have implemented in MOAB. This report details work done to implement parallel, edge-based, tetrahedral refinement into MOAB. The theoretical basis for this work is contained in [PT04, PT05, TP06] while information on design, performance, and operation specific to MOAB are contained herein. As MOAB is intended mainly for use in pre-processing and simulation (as opposed to the post-processing bent of previous papers), the primary use case is different: rather than refining elements with non-linear basis functions, the goal is to increase the number of degrees of freedom in some region in order to more accurately represent the solution to some system of equations that cannot be solved analytically. Also, MOAB has a unique mesh representation which impacts the algorithm. This introduction contains a brief review of streaming edge-based tetrahedral refinement. The remainder of the report is broken into three sections: design and implementation, performance, and conclusions. Appendix A contains instructions for end users (simulation authors) on how to employ the refiner.

  19. Switch for serial or parallel communication networks

    DOE Patents [OSTI]

    Crosette, D.B.

    1994-07-19

    A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high-density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination. 9 figs.

  20. Switch for serial or parallel communication networks

    DOE Patents [OSTI]

    Crosette, Dario B.

    1994-01-01

    A communication switch apparatus and a method for use in a geographically extensive serial, parallel or hybrid communication network linking a multi-processor or parallel processing system has a very low software processing overhead in order to accommodate random bursts of high-density data. Associated with each processor is a communication switch. A data source and a data destination, a sensor suite or robot for example, may also be associated with a switch. The configuration of the switches in the network is coordinated through a master processor node and depends on the operational phase of the multi-processor network: data acquisition, data processing, and data exchange. The master processor node passes information on the state to be assumed by each switch to the processor node associated with the switch. The processor node then operates a series of multi-state switches internal to each communication switch. The communication switch does not parse and interpret communication protocol and message routing information. During a data acquisition phase, the communication switch couples sensors producing data to the processor node associated with the switch, to a downlink destination on the communications network, or to both. It also may couple an uplink data source to its processor node. During the data exchange phase, the switch couples its processor node or an uplink data source to a downlink destination (which may include a processor node or a robot), or couples an uplink source to its processor node and its processor node to a downlink destination.

  1. Massive binaries in the vicinity of Sgr A*

    SciTech Connect (OSTI)

    Pfuhl, O.; Gillessen, S.; Genzel, R.; Eisenhauer, F.; Fritz, T. K.; Ott, T.; Alexander, T.; Martins, F.

    2014-02-20

    A long-term spectroscopic and photometric survey of the most luminous and massive stars in the vicinity of the supermassive black hole Sgr A* revealed two new binaries: a long-period Ofpe/WN9 binary, IRS 16NE, with a modest eccentricity of 0.3 and a period of 224 days, and an eclipsing Wolf-Rayet binary with a period of 2.3 days. Together with the already identified binary IRS 16SW, there are now three confirmed OB/WR binaries in the inner 0.2 pc of the Galactic center. Using radial velocity change upper limits, we were able to constrain the spectroscopic binary fraction in the Galactic center to F{sub SB}=0.30{sub −0.21}{sup +0.34} at a confidence level of 95%, a massive binary fraction close to that observed in dense clusters. The fraction of eclipsing binaries with photometric amplitudes Δm > 0.4 is F{sub EB}{sup GC}=3%±2%, which is consistent with local OB star clusters (F {sub EB} = 1%). Overall, the Galactic center binary fraction seems to be similar to the binary fraction in comparable young clusters.

  2. Sub-Second Parallel State Estimation

    SciTech Connect (OSTI)

    Chen, Yousu; Rice, Mark J.; Glaesemann, Kurt R.; Wang, Shaobu; Huang, Zhenyu

    2014-10-31

    This report describes the performance of the Pacific Northwest National Laboratory (PNNL) sub-second parallel state estimation (PSE) tool using utility data from the Bonneville Power Administration (BPA) and discusses the benefits of the fast computational speed for power system applications. The test data were provided by BPA. They are two days' worth of hourly snapshots that include power system data and measurement sets in a commercial tool format. These data are extracted from the commercial tool and fed into the PSE tool. With the help of advanced solvers, the PSE tool is able to solve each BPA hourly state estimation problem within one second, which is more than 10 times faster than today's commercial tools. This improved computational performance can help increase the reliability value of state estimation in many aspects: (1) the shorter the time required for execution of state estimation, the more time remains for operators to take appropriate actions and/or to apply automatic or manual corrective control actions, increasing the chances of arresting or mitigating the impact of cascading failures; (2) the SE can be executed multiple times within the time allowance, so the robustness of SE can be enhanced by repeating its execution with adaptive adjustments, including removing bad data and/or adjusting different initial conditions, to compute a better estimate within the same time as a traditional state estimator's single estimate. There are other benefits as well: PSE results can potentially be used in local and/or wide-area automatic corrective control actions that are currently dependent on raw measurements, minimizing the impact of bad measurements, and PSE provides opportunities to enhance power grid reliability and efficiency. PSE also can enable other advanced tools that rely on SE outputs and could be used to further improve operators' actions and automated controls to mitigate effects

  3. Optimized data communications in a parallel computer

    DOE Patents [OSTI]

    Faraj, Daniel A

    2014-10-21

    A parallel computer includes nodes that include a network adapter that couples the node in a point-to-point network and supports communications in opposite directions of each dimension. Optimized communications include: receiving, by a network adapter of a receiving compute node, a packet--from a source direction--that specifies a destination node and deposit hints. Each hint is associated with a direction within which the packet is to be deposited. If a hint indicates the packet to be deposited in the opposite direction: the adapter delivers the packet to an application on the receiving node; forwards the packet to a next node in the opposite direction if the receiving node is not the destination; and forwards the packet to a node in a direction of a subsequent dimension if the hints indicate that the packet is to be deposited in the direction of the subsequent dimension.

  4. Clock Agreement Among Parallel Supercomputer Nodes

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Jones, Terry R.; Koenig, Gregory A.

    This dataset presents measurements that quantify the clock synchronization time-agreement characteristics among several high performance computers including the current world's most powerful machine for open science, the U.S. Department of Energy's Titan machine sited at Oak Ridge National Laboratory. These ultra-fast machines derive much of their computational capability from extreme node counts (over 18000 nodes in the case of the Titan machine). Time-agreement is commonly utilized by parallel programming applications and tools, distributed programming application and tools, and system software. Our time-agreement measurements detail the degree of time variance between nodes and how that variance changes over time. The dataset includes empirical measurements and the accompanying spreadsheets.

  5. LAPACK BLAS Parallel BLAS ScaLAPACK

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    [Slide diagram: the ScaLAPACK software hierarchy. ScaLAPACK is built on the PBLAS (Parallel BLAS), which combine local LAPACK/BLAS computation (local addressing) with the BLACS (Basic Linear Algebra Communication Subprograms) message-passing primitives (e.g., MPI, PVM) for global addressing. An accompanying block-cyclic (MB x NB) matrix-distribution illustration is not recoverable. See man intro_scalapack, intro_lapack, intro_blas3, intro_blacs.]

  6. Optimized data communications in a parallel computer

    DOE Patents [OSTI]

    Faraj, Daniel A.

    2014-08-19

    A parallel computer includes nodes that include a network adapter that couples the node in a point-to-point network and supports communications in opposite directions of each dimension. Optimized communications include: receiving, by a network adapter of a receiving compute node, a packet--from a source direction--that specifies a destination node and deposit hints. Each hint is associated with a direction within which the packet is to be deposited. If a hint indicates the packet to be deposited in the opposite direction: the adapter delivers the packet to an application on the receiving node; forwards the packet to a next node in the opposite direction if the receiving node is not the destination; and forwards the packet to a node in a direction of a subsequent dimension if the hints indicate that the packet is to be deposited in the direction of the subsequent dimension.

  7. Clock Agreement Among Parallel Supercomputer Nodes

    DOE Data Explorer [Office of Scientific and Technical Information (OSTI)]

    Jones, Terry R.; Koenig, Gregory A.

    2014-04-30

    This dataset presents measurements that quantify the clock synchronization time-agreement characteristics among several high performance computers including the current world's most powerful machine for open science, the U.S. Department of Energy's Titan machine sited at Oak Ridge National Laboratory. These ultra-fast machines derive much of their computational capability from extreme node counts (over 18000 nodes in the case of the Titan machine). Time-agreement is commonly utilized by parallel programming applications and tools, distributed programming application and tools, and system software. Our time-agreement measurements detail the degree of time variance between nodes and how that variance changes over time. The dataset includes empirical measurements and the accompanying spreadsheets.

  8. Parallel detecting, spectroscopic ellipsometers/polarimeters

    DOE Patents [OSTI]

    Furtak, Thomas E.

    2002-01-01

    The parallel detecting spectroscopic ellipsometer/polarimeter sensor has no moving parts and operates in real-time for in-situ monitoring of the thin film surface properties of a sample within a processing chamber. It includes a multi-spectral source of radiation for producing a collimated beam of radiation directed towards the surface of the sample through a polarizer. The thus polarized collimated beam of radiation impacts and is reflected from the surface of the sample, thereby changing its polarization state due to the intrinsic material properties of the sample. The light reflected from the sample is separated into four separate polarized filtered beams, each having individual spectral intensities. Data about said four individual spectral intensities is collected within the processing chamber, and is transmitted into one or more spectrometers. The data of all four individual spectral intensities is then analyzed using transformation algorithms, in real-time.

  9. Internode data communications in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Blocksome, Michael A.; Miller, Douglas R.; Parker, Jeffrey J.; Ratterman, Joseph D.; Smith, Brian E.

    2013-09-03

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

  10. Broadcasting a message in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Faraj, Ahmad A

    2013-04-16

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer that includes: transmitting, by the logical root to all of the nodes directly connected to the logical root, a message; and for each node except the logical root: receiving the message; if that node is the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received; if that node received the message from a parent node and if that node is not a leaf node, then transmitting the message to all of the child nodes; and if that node received the message from a child node and if that node is not the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received and transmitting the message to the parent node.
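
    The forwarding rule in this claim can be simulated in a few lines; the sketch below is a hypothetical serial Python rendering (not the patented implementation) in which every node forwards the message to all tree neighbors except the one it arrived from:

      # Flooding broadcast on a tree: forward to every neighbor except
      # the sender, so each node receives the message exactly once.
      def broadcast(tree, node, came_from=None, delivered=None):
          delivered = [] if delivered is None else delivered
          delivered.append(node)                  # message received here
          for nbr in tree[node]:
              if nbr != came_from:                # don't echo it back
                  broadcast(tree, nbr, node, delivered)
          return delivered

      # Physical root 0 with two subtrees; logical root 3 starts the broadcast.
      tree = {0: [1, 2], 1: [0, 3, 4], 2: [0], 3: [1], 4: [1]}
      print(broadcast(tree, 3))         # every node appears exactly once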

  11. Intranode data communications in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2013-07-23

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

  12. Intranode data communications in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Ratterman, Joseph D; Smith, Brian E

    2014-01-07

    Intranode data communications in a parallel computer that includes compute nodes configured to execute processes, where the data communications include: allocating, upon initialization of a first process of a compute node, a region of shared memory; establishing, by the first process, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; sending, to a second process on the same compute node, a data communications message without determining whether the second process has been initialized, including storing the data communications message in the message buffer of the second process; and upon initialization of the second process: retrieving, by the second process, a pointer to the second process's message buffer; and retrieving, by the second process from the second process's message buffer in dependence upon the pointer, the data communications message sent by the first process.

  13. Internode data communications in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Miller, Douglas R; Parker, Jeffrey J; Ratterman, Joseph D; Smith, Brian E

    2014-02-11

    Internode data communications in a parallel computer that includes compute nodes that each include main memory and a messaging unit, the messaging unit including computer memory and coupling compute nodes for data communications, in which, for each compute node at compute node boot time: a messaging unit allocates, in the messaging unit's computer memory, a predefined number of message buffers, each message buffer associated with a process to be initialized on the compute node; receives, prior to initialization of a particular process on the compute node, a data communications message intended for the particular process; and stores the data communications message in the message buffer associated with the particular process. Upon initialization of the particular process, the process establishes a messaging buffer in main memory of the compute node and copies the data communications message from the message buffer of the messaging unit into the message buffer of main memory.

  14. Parallelism of the SANDstorm hash algorithm.

    SciTech Connect (OSTI)

    Torgerson, Mark Dolan; Draelos, Timothy John; Schroeppel, Richard Crabtree

    2009-09-01

    Mainstream cryptographic hashing algorithms are not parallelizable. This limits their speed and prevents them from taking advantage of the current trend toward multi-core platforms, and the resulting speed limit constrains their usefulness as an authentication mechanism in secure communications. Sandia researchers have created a new cryptographic hashing algorithm, SANDstorm, which was specifically designed to take advantage of multi-core processing and be parallelizable on a wide range of platforms. This report describes a late-start LDRD effort to verify the parallelizability claims of the SANDstorm designers. We have shown, with operating code and bench testing, that the SANDstorm algorithm may be trivially parallelized on a wide range of hardware platforms. Implementations using OpenMP demonstrate a linear speedup with multiple cores. We have also shown significant performance gains with optimized C code and the use of assembly instructions to exploit particular platform capabilities.
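
    SANDstorm itself is not sketched here; the generic Python example below shows the tree-hashing shape that makes a hash parallelizable (hash independent chunks concurrently, then hash the concatenated digests). It is emphatically not the SANDstorm algorithm:

      # Two-level tree hash: leaf chunks are hashed in parallel threads,
      # then the concatenated leaf digests are hashed once at the root.
      import hashlib
      from concurrent.futures import ThreadPoolExecutor

      def tree_hash(data: bytes, chunk_size: int = 1 << 20) -> str:
          chunks = [data[i:i + chunk_size]
                    for i in range(0, len(data), chunk_size)]
          with ThreadPoolExecutor() as pool:
              digests = list(pool.map(lambda c: hashlib.sha256(c).digest(),
                                      chunks))
          return hashlib.sha256(b"".join(digests)).hexdigest()

      print(tree_hash(b"x" * (4 << 20)))  # 4 MiB hashed as 4 parallel leaves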

  15. Broadcasting a message in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Faraj, Daniel A

    2014-11-18

    Methods, systems, and products are disclosed for broadcasting a message in a parallel computer that includes: transmitting, by the logical root to all of the nodes directly connected to the logical root, a message; and for each node except the logical root: receiving the message; if that node is the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received; if that node received the message from a parent node and if that node is not a leaf node, then transmitting the message to all of the child nodes; and if that node received the message from a child node and if that node is not the physical root, then transmitting the message to all of the child nodes except the child node from which the message was received and transmitting the message to the parent node.

  16. Link failure detection in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Blocksome, Michael A.; Megerian, Mark G.; Smith, Brian E.

    2010-11-09

    Methods, apparatus, and products are disclosed for link failure detection in a parallel computer including compute nodes connected in a rectangular mesh network, each pair of adjacent compute nodes in the rectangular mesh network connected together using a pair of links, that includes: assigning each compute node to either a first group or a second group such that adjacent compute nodes in the rectangular mesh network are assigned to different groups; sending, by each of the compute nodes assigned to the first group, a first test message to each adjacent compute node assigned to the second group; determining, by each of the compute nodes assigned to the second group, whether the first test message was received from each adjacent compute node assigned to the first group; and notifying a user, by each of the compute nodes assigned to the second group, whether the first test message was received.

  17. CS-Studio Scan System Parallelization

    SciTech Connect (OSTI)

    Kasemir, Kay; Pearson, Matthew R

    2015-01-01

    For several years, the Control System Studio (CS-Studio) Scan System has successfully automated the operation of beam lines at the Oak Ridge National Laboratory (ORNL) High Flux Isotope Reactor (HFIR) and Spallation Neutron Source (SNS). As it is applied to additional beam lines, we need to support simultaneous adjustments of temperatures or motor positions. While this can be implemented via virtual motors or similar logic inside the Experimental Physics and Industrial Control System (EPICS) Input/Output Controllers (IOCs), doing so requires a priori knowledge of experimenters' requirements. By adding support for the parallel control of multiple process variables (PVs) to the Scan System, we can better support ad hoc automation of experiments that benefit from such simultaneous PV adjustments.

  18. Implementation of Generalized Coarse-Mesh Rebalance of NEWTRNX for Acceleration of Parallel Block-Jacobi Transport

    SciTech Connect (OSTI)

    Clarno, Kevin T

    2007-01-01

    The NEWTRNX transport module solves the multigroup, discrete-ordinates source-driven or k-eigenvalue transport equation in parallel on a 3-D unstructured tetrahedral mesh using the extended step characteristics (ESC), also known as the slice-balance approach (SBA), spatial discretization. The spatial domains are decomposed using METIS. NEWTRNX is under development for nuclear reactor analysis on computer hardware ranging from clusters to massively parallel machines, like the Cray XT4. Transport methods that rely on full sweeps across the spatial domain have been shown to display poor scaling for thousands of processors. The Parallel Block-Jacobi (PBJ) algorithm allows each spatial partition to sweep over all discrete-ordinate directions and energies independently of all other domains, potentially allowing for much better scaling than possible with full sweeps. The PBJ algorithm has been implemented in NEWTRNX using a Gauss-Seidel iteration in energy and asynchronous communication by energy group, such that each partition utilizes the latest boundary solution available for each group before solving the within-group scattering in a given group. For each energy group, the within-group scattering converges with a generalized minimum residual (GMRES) solver, preconditioned with β transport synthetic acceleration (β-TSA).

  19. BRIGHT Lights, BIG City: Massive Galaxies, Giant Ly-A Nebulae, and Proto-Clusters

    SciTech Connect (OSTI)

    van Breugel, W; Reuland, M; de Vries, W; Stanford, A; Dey, A; Kurk, J; Venemans, B; Rottgering, H; Miley, G; De Breuck, C; Dopita, M; Sutherland, R; Bland-Hawthorn, J

    2002-08-01

    High redshift radio galaxies are great cosmological tools for pinpointing the most massive objects in the early Universe: massive forming galaxies, active super-massive black holes, and proto-clusters. They report on deep narrow-band imaging and spectroscopic observations of several z > 2 radio galaxy fields to investigate the nature of giant Ly-α nebulae centered on the galaxies and to search for over-dense regions around them. They discuss the possible implications for our understanding of the formation and evolution of massive galaxies and galaxy clusters.

  20. Idaho Site D&D Crew Uses Specialized Tools to Cut Apart Massive Tank in Demolition Project

    Office of Energy Efficiency and Renewable Energy (EERE) Indexed Site

    Idaho Site D&D Crew Uses Specialized Tools to Cut Apart Massive Tank in Demolition Project | Department of Energy. November 25, 2015 - 12:05pm. A worker employs a thermal lance to cut apart a massive tank so it can be removed from a building slated for demolition at the Idaho Site's Materials and Fuels Complex.

  1. SEISMOLOGY OF A MASSIVE PULSATING HYDROGEN ATMOSPHERE WHITE DWARF

    SciTech Connect (OSTI)

    Kepler, S. O.; Pelisoli, Ingrid; Pecanha, Viviane; Costa, J. E. S.; Fraga, Luciano; Hermes, J. J.; Winget, D. E.; Castanheira, Barbara; Corsico, A. H.; Romero, A. D.; Althaus, Leandro; Kleinman, S. J.; Nitta, A.; Koester, D.; Kuelebi, Baybars; Kanaan, Antonio

    2012-10-01

    We report our observations of the new pulsating hydrogen atmosphere white dwarf SDSS J132350.28+010304.22. We discovered periodic photometric variations in frequency and amplitude that are commensurate with nonradial g-mode pulsations in ZZ Ceti stars. This, along with estimates for the star's temperature and gravity, establishes it as a massive ZZ Ceti star. We used time-series photometric observations with the 4.1 m SOAR Telescope, complemented by contemporary McDonald Observatory 2.1 m data, to discover the photometric variability. The light curve of SDSS J132350.28+010304.22 shows at least nine detectable frequencies. We used these frequencies to make an asteroseismic determination of the total mass and effective temperature of the star: M_* = 0.88 ± 0.02 M_⊙ and T_eff = 12,100 ± 140 K. These values are consistent with those derived from the optical spectra and photometric colors.

  2. Galileons coupled to massive gravity: general analysis and cosmological solutions

    SciTech Connect (OSTI)

    Goon, Garrett; Trodden, Mark; Gümrükçüoğlu, A. Emir; Hinterbichler, Kurt; Mukohyama, Shinji E-mail: Emir.Gumrukcuoglu@nottingham.ac.uk E-mail: shinji.mukohyama@ipmu.jp

    2014-08-01

    We further develop the framework for coupling galileons and Dirac-Born-Infeld (DBI) scalar fields to a massive graviton while retaining both the non-linear symmetries of the scalars and ghost-freedom of the theory. The general construction is recast in terms of vielbeins which simplifies calculations and allows for compact expressions. Expressions for the general form of the action are derived, with special emphasis on those models which descend from maximally symmetric spaces. We demonstrate the existence of maximally symmetric solutions to the fully non-linear theory and analyze their spectrum of quadratic fluctuations. Finally, we consider self-accelerating cosmological solutions and study their perturbations, showing that the vector and scalar modes have vanishing kinetic terms.

  3. Nanowire growth by an electron beam induced massive phase transformation

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Sood, Shantanu; Kisslinger, Kim; Gouma, Perena

    2014-11-15

    Tungsten trioxide nanowires of a high aspect ratio have been synthesized in-situ in a TEM under an electron beam of current density 14 A/cm² due to a massive polymorphic reaction. Sol-gel processed pseudocubic phase nanocrystals of tungsten trioxide were seen to rapidly transform to one-dimensional monoclinic phase configurations, and this reaction was independent of the substrate on which the material was deposited. The mechanism of the self-catalyzed polymorphic transition and accompanying radical shape change is a typical characteristic of metastable-to-stable phase transformations in nanostructured polymorphic metal oxides. A heuristic model is used to confirm the metastable-to-stable growth mechanism. The findings are important to the controlled electron-beam deposition of nanowires for functional applications starting from colloidal precursors.

  4. Search for Charged Massive Long-Lived Particles

    SciTech Connect (OSTI)

    Abazov V. M.; Abbott B.; Acharya B. S.; Adams M.; Adams T.; Alexeev G. D.; Alimena J.; Alkhazov G.; Alton A.; Alverson G.; Alves G. A.; Aoki M.; Askew A.; Asman B.; Atkins S.; Atramentov O.; Augsten K.; Avila C.; BackusMayes J.; Badaud F.; Bagby L.; Baldin B.; Bandurin D. V.; Banerjee S.; Barberis E.; Baringer P.; Barreto J.; Bartlett J. F.; Bassler U.; Bazterra V.; Bean A.; Begalli M.; Belanger-Champagne C.; Bellantoni L.; Beri S. B.; Bernardi G.; Bernhard R.; Bertram I.; Besancon M.; Beuselinck R.; Bezzubov V. A.; Bhat P. C.; Bhatnagar V.; Blazey G.; Blessing S.; Bloom K.; Boehnlein A.; Boline D.; Boos E. E.; Borissov G.; Bose T.; Brandt A.; Brandt O.; Brock R.; Brooijmans G.; Bross A.; Brown D.; Brown J.; Bu X. B.; Buehler M.; Buescher V.; Bunichev V.; Burdin S.; Burnett T. H.; Buszello C. P.; Calpas B.; Camacho-Perez E.; Carrasco-Lizarraga M. A.; Casey B. C. K.; Castilla-Valdez H.; Chakrabarti S.; Chakraborty D.; Chan K. M.; Chandra A.; Chapon E.; Chen G.; Chevalier-Thery S.; Cho D. K.; Cho S. W.; Choi S.; Choudhary B.; Cihangir S.; Claes D.; Clutter J.; Cooke M.; Cooper W. E.; Corcoran M.; Couderc F.; Cousinou M. -C.; Croc A.; Cutts D.; Das A.; Davies G.; De K.; de Jong S. J.; De la Cruz-Burelo E.; Deliot F.; Demina R.; Denisov D.; Denisov S. P.; Desai S.; Deterre C.; DeVaughan K.; Diehl H. T.; Diesburg M.; Ding P. F.; Dominguez A.; Dorland T.; Dubey A.; Dudko L. V.; Duggan D.; Duperrin A.; Dutt S.; Dyshkant A.; Eads M.; Edmunds D.; Ellison J.; Elvira V. D.; Enari Y.; Evans H.; Evdokimov A.; Evdokimov V. N.; Facini G.; Ferbel T.; Fiedler F.; Filthaut F.; Fisher W.; Fisk H. E.; Fortner M.; Fox H.; Fuess S.; Garcia-Bellido A.; Garcia-Guerra G. A.; Gavrilov V.; Gay P.; Geng W.; Gerbaudo D.; Gerber C. E.; Gershtein Y.; Ginther G.; Golovanov G.; Goussiou A.; Grannis P. D.; Greder S.; Greenlee H.; Greenwood Z. D.; Gregores E. M.; Grenier G.; Gris Ph.; Grivaz J. -F.; Grohsjean A.; Gruenendahl S.; Gruenewald M. W.; Guillemin T.; Gutierrez G.; Gutierrez P.; Haas A.; Hagopian S.; Haley J.; Han L.; Harder K.; Harel A.; Hauptman J. M.; Hays J.; Head T.; Hebbeker T.; Hedin D.; Hegab H.; Heinson A. P.; Heintz U.; Hensel C.; Heredia-De La Cruz I.; Herner K.; Hesketh G.; Hildreth M. D.; Hirosky R.; Hoang T.; Hobbs J. D.; Hoeneisen B.; Hohlfeld M.; Hubacek Z.; Hynek V.; Iashvili I.; Ilchenko Y.; Illingworth R.; Ito A. S.; Jabeen S.; Jaffre M.; Jamin D.; Jayasinghe A.; Jesik R.; Johns K.; Johnson M.; Jonckheere A.; Jonsson P.; Joshi J.; Jung A. W.; Juste A.; Kaadze K.; Kajfasz E.; Karmanov D.; Kasper P. A.; Katsanos I.; Kehoe R.; Kermiche S.; Khalatyan N.; Khanov A.; Kharchilava A.; Kharzheev Y. N.; Kohli J. M.; Kozelov A. V.; Kraus J.; Kulikov S.; Kumar A.; Kupco A.; Kurca T.; Kuzmin V. A.; Kvita J.; Lammers S.; Landsberg G.; Lebrun P.; Lee H. S.; Lee S. W.; Lee W. M.; Lellouch J.; Li L.; Li Q. Z.; Lietti S. M.; Lim J. K.; Lincoln D.; Linnemann J.; Lipaev V. V.; Lipton R.; Liu Y.; Lobodenko A.; Lokajicek M.; de Sa R. Lopes; Lubatti H. J.; Luna-Garcia R.; Lyon A. L.; Maciel A. K. A.; Mackin D.; Madar R.; Magana-Villalba R.; Malik S.; Malyshev V. L.; Maravin Y.; Martinez-Ortega J.; McCarthy R.; McGivern C. L.; Meijer M. M.; et al.

    2012-03-21

    We report on a search for charged massive long-lived particles (CMLLPs), based on 5.2 fb⁻¹ of integrated luminosity collected with the D0 detector at the Fermilab Tevatron pp̄ collider. We search for events in which one or more particles are reconstructed as muons but have speed and ionization energy loss (dE/dx) inconsistent with muons produced in beam collisions. CMLLPs are predicted in several theories of physics beyond the standard model. We exclude pair-produced long-lived gaugino-like charginos below 267 GeV and Higgsino-like charginos below 217 GeV at 95% C.L., as well as long-lived scalar top quarks with mass below 285 GeV.

  5. Nanowire growth by an electron beam induced massive phase transformation

    SciTech Connect (OSTI)

    Sood, Shantanu; Kisslinger, Kim; Gouma, Perena

    2014-11-15

    Tungsten trioxide nanowires of a high aspect ratio have been synthesized in-situ in a TEM under an electron beam of current density 14 A/cm² due to a massive polymorphic reaction. Sol-gel processed pseudocubic phase nanocrystals of tungsten trioxide were seen to rapidly transform to one-dimensional monoclinic phase configurations, and this reaction was independent of the substrate on which the material was deposited. The mechanism of the self-catalyzed polymorphic transition and accompanying radical shape change is a typical characteristic of metastable-to-stable phase transformations in nanostructured polymorphic metal oxides. A heuristic model is used to confirm the metastable-to-stable growth mechanism. The findings are important to the controlled electron-beam deposition of nanowires for functional applications starting from colloidal precursors.

  6. Brownian motion of massive skyrmions in magnetic thin films

    SciTech Connect (OSTI)

    Troncoso, Roberto E.; Núñez, Álvaro S.

    2014-12-15

    We report on the thermal effects on the motion of current-driven massive magnetic skyrmions. The reduced equation for the motion of the skyrmion has the form of a stochastic generalized Thiele equation. We propose an ansatz for the magnetization texture of a non-rigid single skyrmion that depends linearly on the velocity. By using this ansatz it is found that the skyrmion mass tensor is closely related to intrinsic skyrmion parameters, such as the Gilbert damping, the skyrmion charge, and the dissipative force. We have found an exact expression for the average drift velocity as well as the mean-square velocity of the skyrmion. The longitudinal and transverse mobilities of skyrmions for small spin-velocity of electrons are also determined and found to be independent of the skyrmion mass.

  7. Nanowire growth by an electron beam induced massive phase transformation

    SciTech Connect (OSTI)

    Sood, Shantanu; Kisslinger, Kim; Gouma, Perena

    2014-11-15

    Tungsten trioxide nanowires of a high aspect ratio have been synthesized in-situ in a TEM under an electron beam of current density 14 A/cm² due to a massive polymorphic reaction. Sol-gel processed pseudocubic phase nanocrystals of tungsten trioxide were seen to rapidly transform to one-dimensional monoclinic phase configurations, and this reaction was independent of the substrate on which the material was deposited. The mechanism of the self-catalyzed polymorphic transition and accompanying radical shape change is a typical characteristic of metastable-to-stable phase transformations in nanostructured polymorphic metal oxides. A heuristic model is used to confirm the metastable-to-stable growth mechanism. The findings are important to the controlled electron-beam deposition of nanowires for functional applications starting from colloidal precursors.

  8. LIMB-DARKENED RADIATION-DRIVEN WINDS FROM MASSIVE STARS

    SciTech Connect (OSTI)

    Curé, M.; Cidale, L.

    2012-10-01

    We calculated the influence of the limb-darkened finite-disk correction factor in the theory of radiation-driven winds from massive stars. We solved the one-dimensional m-CAK hydrodynamical equation of rotating radiation-driven winds for all three known solutions, i.e., fast, Ω-slow, and δ-slow. We found that for the fast solution, the mass-loss rate is increased by ≈10%, while the terminal velocity is reduced by about 10%, when compared with the solution using a finite-disk correction factor from a uniformly bright star. For the other two slow solutions, the changes are almost negligible. Although we found that the limb darkening has no effects on the wind-momentum-luminosity relationship, it would affect the calculation of synthetic line profiles and the derivation of accurate wind parameters.

  9. Massive graviton on arbitrary background: derivation, syzygies, applications

    SciTech Connect (OSTI)

    Bernard, Laura; Deffayet, Cédric; Strauss, Mikael von

    2015-06-23

    We give the detailed derivation of the fully covariant form of the quadratic action and the derived linear equations of motion for a massive graviton in an arbitrary background metric (which were presented in arXiv:1410.8302 [hep-th]). Our starting point is the de Rham-Gabadadze-Tolley (dRGT) family of ghost-free massive gravities and, using a simple model of this family, we are able to express this action and these equations of motion in terms of a single metric in which the graviton propagates, hence removing in particular the need for a "reference metric" which is present in the non-perturbative formulation. We show further how 5 covariant constraints can be obtained, including one which leads to the tracelessness of the graviton on flat space-time and removes the Boulware-Deser ghost. This last constraint involves powers and combinations of the curvature of the background metric. The 5 constraints are obtained for a background metric which is unconstrained, i.e., which does not have to obey the background field equations. We then apply these results to the case of Einstein space-times, where we show that the 5 constraints become trivial, and Friedmann-Lemaître-Robertson-Walker space-times, for which we correct in particular some results that appeared elsewhere. To reach our results, we derive several non-trivial identities, syzygies, involving the graviton field, its derivatives, and the background metric curvature. These identities have their own interest. We also discover that there exist backgrounds for which the dRGT equations cannot be unambiguously linearized.

  10. X-RAY EMISSION FROM MAGNETIC MASSIVE STARS

    SciTech Connect (OSTI)

    Nazé, Yaël; Petit, Véronique; Rinbrand, Melanie; Owocki, Stan; Cohen, David; Ud-Doula, Asif; Wade, Gregg A.

    2014-11-01

    Magnetically confined winds of early-type stars are expected to be sources of bright and hard X-rays. To clarify the systematics of the observed X-ray properties, we have analyzed a large series of Chandra and XMM-Newton observations, corresponding to all available exposures of known massive magnetic stars (over 100 exposures covering ≈60% of stars compiled in the catalog of Petit et al.). We show that the X-ray luminosity is strongly correlated with the stellar wind mass-loss rate, with a power-law form that is slightly steeper than linear for the majority of the less luminous, lower-Ṁ B stars and flattens for the more luminous, higher-Ṁ O stars. As the winds are radiatively driven, these scalings can be equivalently written as relations with the bolometric luminosity. The observed X-ray luminosities, and their trend with mass-loss rates, are well reproduced by new MHD models, although a few overluminous stars (mostly rapidly rotating objects) exist. No relation is found between other X-ray properties (plasma temperature, absorption) and stellar or magnetic parameters, contrary to expectations (e.g., higher temperature for stronger mass-loss rate). This suggests that the main driver for the plasma properties is different from the main determinant of the X-ray luminosity. Finally, variations of the X-ray hardnesses and luminosities, in phase with the stellar rotation period, are detected for some objects and they suggest that some temperature stratification exists in massive stars' magnetospheres.

  11. Data communications in a parallel active messaging interface of a parallel computer

    SciTech Connect (OSTI)

    Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-09-16

    Eager send data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address.
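
    The enabling trick in this record is address aliasing: the same physical pages are visible twice, once read/write for the owner and once read-only in a shared space, so an eager header can safely carry the read-only address. The sketch below uses POSIX shared memory to stand in for that mapping; it shows only the aliasing idea, not the PAMI implementation (link with -lrt on some systems):

        /* Double-mapping sketch: RW view for the owner, RO alias to share. */
        #include <fcntl.h>
        #include <stdio.h>
        #include <string.h>
        #include <sys/mman.h>
        #include <unistd.h>

        int main(void) {
            const size_t len = 4096;
            int fd = shm_open("/eager_demo", O_CREAT | O_RDWR, 0600);
            ftruncate(fd, len);

            /* Owner's read/write view and the shared read-only alias. */
            char *rw = mmap(NULL, len, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
            char *ro = mmap(NULL, len, PROT_READ, MAP_SHARED, fd, 0);

            strcpy(rw, "eager payload");             /* write via RW view   */
            printf("read-only view sees: %s\n", ro); /* same physical bytes */

            /* An eager header would carry 'ro' (the read-only address),
             * never 'rw', so the receiver cannot corrupt the send buffer. */
            munmap(rw, len); munmap(ro, len);
            shm_unlink("/eager_demo"); close(fd);
            return 0;
        }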

  12. Data communications in a parallel active messaging interface of a parallel computer

    SciTech Connect (OSTI)

    Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.

    2014-09-02

    Eager send data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints that specify a client, a context, and a task, including receiving an eager send data communications instruction with transfer data disposed in a send buffer characterized by a read/write send buffer memory address in a read/write virtual address space of the origin endpoint; determining for the send buffer a read-only send buffer memory address in a read-only virtual address space, the read-only virtual address space shared by both the origin endpoint and the target endpoint, with all frames of physical memory mapped to pages of virtual memory in the read-only virtual address space; and communicating by the origin endpoint to the target endpoint an eager send message header that includes the read-only send buffer memory address.

  13. Data communications for a collective operation in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Faraj, Daniel A

    2013-07-16

    Algorithm selection for data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI, including associating in the PAMI data communications algorithms and bit masks; receiving in an origin endpoint of the PAMI a collective instruction, the instruction specifying transmission of a data communications message from the origin endpoint to a target endpoint; constructing a bit mask for the received collective instruction; selecting, from among the associated algorithms and bit masks, a data communications algorithm in dependence upon the constructed bit mask; and executing the collective instruction, transmitting, according to the selected data communications algorithm from the origin endpoint to the target endpoint, the data communications message.
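
    A minimal sketch of the bit-mask dispatch this record describes: properties of a collective instruction are folded into a mask, and the first registered (mask, algorithm) pair that matches is used. The flag names, table entries, and broadcast variants below are hypothetical, not taken from the patent.

        #include <stddef.h>
        #include <stdint.h>
        #include <stdio.h>

        enum {
            F_CONTIGUOUS = 1u << 0,   /* send buffer is contiguous     */
            F_SMALL      = 1u << 1,   /* message below eager threshold */
        };

        typedef void (*bcast_fn)(size_t len);

        static void bcast_tree(size_t n)      { printf("tree bcast, %zu B\n", n); }
        static void bcast_pipelined(size_t n) { printf("pipelined bcast, %zu B\n", n); }

        /* Registered (mask, algorithm) associations, most specific first. */
        static const struct { uint32_t mask; bcast_fn fn; } table[] = {
            { F_CONTIGUOUS | F_SMALL, bcast_tree },      /* small and simple */
            { F_CONTIGUOUS,           bcast_pipelined }, /* large messages   */
            { 0,                      bcast_pipelined }, /* fallback         */
        };

        static bcast_fn select_algorithm(uint32_t mask) {
            for (size_t i = 0; i < sizeof table / sizeof table[0]; i++)
                if ((mask & table[i].mask) == table[i].mask)
                    return table[i].fn;
            return NULL;
        }

        int main(void) {
            uint32_t mask = F_CONTIGUOUS | F_SMALL;  /* built per instruction */
            select_algorithm(mask)(64);              /* -> tree bcast         */
            select_algorithm(F_CONTIGUOUS)(1 << 20); /* -> pipelined bcast    */
            return 0;
        }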

  14. Data communications for a collective operation in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Faraj, Daniel A.

    2015-11-19

    Algorithm selection for data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI, including associating in the PAMI data communications algorithms and bit masks; receiving in an origin endpoint of the PAMI a collective instruction, the instruction specifying transmission of a data communications message from the origin endpoint to a target endpoint; constructing a bit mask for the received collective instruction; selecting, from among the associated algorithms and bit masks, a data communications algorithm in dependence upon the constructed bit mask; and executing the collective instruction, transmitting, according to the selected data communications algorithm from the origin endpoint to the target endpoint, the data communications message.

  15. Data communications in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2015-02-03

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a SEND instruction, the SEND instruction specifying a transmission of transfer data from the origin endpoint to a first target endpoint; transmitting from the origin endpoint to the first target endpoint a Request-To-Send (`RTS`) message advising the first target endpoint of the location and size of the transfer data; assigning by the first target endpoint to each of a plurality of target endpoints separate portions of the transfer data; and receiving by the plurality of target endpoints the transfer data.

  16. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Blocksome, Michael A.; Mamidala, Amith R.

    2013-09-03

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.
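
    The key property that lets this fence work "with no FENCE accounting" is deterministic (in-order) delivery: if the channel drains in injection order, a fence can be a plain marker pushed into the same queue, because when the marker drains every earlier transfer has drained too. A toy single-process sketch of that ordering argument (queue and operation types are illustrative):

        #include <stdbool.h>
        #include <stdio.h>

        #define QCAP 16

        typedef struct { bool is_fence; int payload; } op_t;

        static op_t queue[QCAP];
        static int head, tail;

        /* Injection order == delivery order on a deterministic channel. */
        static void inject(op_t op) { queue[tail++ % QCAP] = op; }

        /* Drain the channel in order; report when the fence completes. */
        static void drain(void) {
            while (head < tail) {
                op_t op = queue[head++ % QCAP];
                if (op.is_fence)
                    printf("FENCE complete: all prior transfers delivered\n");
                else
                    printf("delivered transfer %d\n", op.payload);
            }
        }

        int main(void) {
            inject((op_t){ false, 1 });
            inject((op_t){ false, 2 });
            inject((op_t){ true, 0 });   /* fence after transfers 1 and 2 */
            inject((op_t){ false, 3 });  /* not covered by the fence      */
            drain();
            return 0;
        }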

  17. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-02

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  18. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-30

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  19. Fencing data transfers in a parallel active messaging interface of a parallel computer

    SciTech Connect (OSTI)

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-08-11

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint comprising a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, the compute nodes coupled for data communications through the PAMI and through data communications resources including a deterministic data communications network, including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  20. Fencing data transfers in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-06-09

    Fencing data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task; the compute nodes coupled for data communications through the PAMI and through data communications resources including at least one segment of shared random access memory; including initiating execution through the PAMI of an ordered sequence of active SEND instructions for SEND data transfers between two endpoints, effecting deterministic SEND data transfers through a segment of shared memory; and executing through the PAMI, with no FENCE accounting for SEND data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all SEND instructions initiated prior to execution of the FENCE instruction for SEND data transfers between the two endpoints.

  1. Data communications in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-11-18

    Data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI and through data communications resources, including receiving in an origin endpoint of the PAMI a SEND instruction, the SEND instruction specifying a transmission of transfer data from the origin endpoint to a first target endpoint; transmitting from the origin endpoint to the first target endpoint a Request-To-Send (`RTS`) message advising the first target endpoint of the location and size of the transfer data; assigning by the first target endpoint to each of a plurality of target endpoints separate portions of the transfer data; and receiving by the plurality of target endpoints the transfer data.

  2. Data communications in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Davis, Kristan D.; Faraj, Daniel A.

    2014-07-22

    Algorithm selection for data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI, including associating in the PAMI data communications algorithms and ranges of message sizes so that each algorithm is associated with a separate range of message sizes; receiving in an origin endpoint of the PAMI a data communications instruction, the instruction specifying transmission of a data communications message from the origin endpoint to a target endpoint, the data communications message characterized by a message size; selecting, from among the associated algorithms and ranges, a data communications algorithm in dependence upon the message size; and transmitting, according to the selected data communications algorithm from the origin endpoint to the target endpoint, the data communications message.
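
    A minimal sketch of size-range dispatch as described in this record: each algorithm owns a disjoint message-size range, and the instruction's message size picks the algorithm. The thresholds and the eager/rendezvous names below are hypothetical stand-ins.

        #include <stddef.h>
        #include <stdio.h>

        typedef void (*send_fn)(size_t len);

        static void eager_send(size_t n)      { printf("eager, %zu B\n", n); }
        static void rendezvous_send(size_t n) { printf("rendezvous, %zu B\n", n); }

        /* Separate, non-overlapping size ranges per algorithm. */
        static const struct { size_t lo, hi; send_fn fn; } ranges[] = {
            { 0,    4096,       eager_send },      /* small messages    */
            { 4097, (size_t)-1, rendezvous_send }, /* everything larger */
        };

        static send_fn select_by_size(size_t len) {
            for (size_t i = 0; i < sizeof ranges / sizeof ranges[0]; i++)
                if (len >= ranges[i].lo && len <= ranges[i].hi)
                    return ranges[i].fn;
            return NULL;
        }

        int main(void) {
            select_by_size(512)(512);         /* -> eager      */
            select_by_size(1 << 20)(1 << 20); /* -> rendezvous */
            return 0;
        }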

  3. Fencing direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Blocksome, Michael A; Mamidala, Amith R

    2014-02-11

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to segments of shared random access memory through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and a segment of shared memory; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  4. Data communications in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Davis, Kristan D; Faraj, Daniel A

    2013-07-09

    Algorithm selection for data communications in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI composed of data communications endpoints, each endpoint including specifications of a client, a context, and a task, endpoints coupled for data communications through the PAMI, including associating in the PAMI data communications algorithms and ranges of message sizes so that each algorithm is associated with a separate range of message sizes; receiving in an origin endpoint of the PAMI a data communications instruction, the instruction specifying transmission of a data communications message from the origin endpoint to a target endpoint, the data communications message characterized by a message size; selecting, from among the associated algorithms and ranges, a data communications algorithm in dependence upon the message size; and transmitting, according to the selected data communications algorithm from the origin endpoint to the target endpoint, the data communications message.

  5. Parallelization and automatic data distribution for nuclear reactor simulations

    SciTech Connect (OSTI)

    Liebrock, L.M.

    1997-07-01

    Detailed attempts at realistic nuclear reactor simulations currently take many times real time to execute on high performance workstations. Even the fastest sequential machine can not run these simulations fast enough to ensure that the best corrective measure is used during a nuclear accident to prevent a minor malfunction from becoming a major catastrophe. Since sequential computers have nearly reached the speed of light barrier, these simulations will have to be run in parallel to make significant improvements in speed. In physical reactor plants, parallelism abounds. Fluids flow, controls change, and reactions occur in parallel with only adjacent components directly affecting each other. These do not occur in the sequentialized manner, with global instantaneous effects, that is often used in simulators. Development of parallel algorithms that more closely approximate the real-world operation of a reactor may, in addition to speeding up the simulations, actually improve the accuracy and reliability of the predictions generated. Three types of parallel architecture (shared memory machines, distributed memory multicomputers, and distributed networks) are briefly reviewed as targets for parallelization of nuclear reactor simulation. Various parallelization models (loop-based model, shared memory model, functional model, data parallel model, and a combined functional and data parallel model) are discussed along with their advantages and disadvantages for nuclear reactor simulation. A variety of tools are introduced for each of the models. Emphasis is placed on the data parallel model as the primary focus for two-phase flow simulation. Tools to support data parallel programming for multiple component applications and special parallelization considerations are also discussed.
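
    The "only adjacent components directly affecting each other" structure mentioned above is exactly what makes the data parallel model attractive: each rank owns a slab of the domain and exchanges only boundary (halo) cells with its neighbors. A minimal 1-D MPI halo-exchange sketch, illustrative and not taken from the report:

        /* Compile: mpicc demo.c ; run: mpirun -np 4 ./a.out */
        #include <mpi.h>
        #include <stdio.h>

        #define NLOC 8   /* cells owned per rank (plus 2 ghost cells) */

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank, size;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);
            MPI_Comm_size(MPI_COMM_WORLD, &size);
            int left  = (rank == 0)        ? MPI_PROC_NULL : rank - 1;
            int right = (rank == size - 1) ? MPI_PROC_NULL : rank + 1;

            double u[NLOC + 2];                 /* u[0], u[NLOC+1] are ghosts */
            for (int i = 1; i <= NLOC; i++) u[i] = rank;

            /* Exchange halos; MPI_Sendrecv pairs sends and receives so
             * neighboring ranks cannot deadlock on each other. */
            MPI_Sendrecv(&u[NLOC], 1, MPI_DOUBLE, right, 0,
                         &u[0],    1, MPI_DOUBLE, left,  0,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);
            MPI_Sendrecv(&u[1],        1, MPI_DOUBLE, left,  1,
                         &u[NLOC + 1], 1, MPI_DOUBLE, right, 1,
                         MPI_COMM_WORLD, MPI_STATUS_IGNORE);

            printf("rank %d ghosts: %g %g\n", rank, u[0], u[NLOC + 1]);
            MPI_Finalize();
            return 0;
        }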

  6. The Swift Parallel Scripting Language for ALCF Systems | Argonne Leadership Computing Facility

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Swift is an implicitly parallel functional language that makes it easier to script higher-level applications or workflows composed from serial or parallel programs. Recently made available across ALCF systems, it has been used to script application workflows in a broad range of diverse disciplines, from protein structure prediction to modeling global ...

  7. Building the Next Generation of Parallel Applications: Co-Design Opportunities and Challenges

    Office of Scientific and Technical Information (OSTI)


  8. Using ARM data to correct plane-parallel satellite retrievals of cloud properties

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Dong, Xiquan (University of North Dakota); Minnis, Patrick (NASA Langley Research Center); Xi, Baike...

  9. CPIC: A Parallel Electrostatic Particle-In-Cell Code (Conference)

    Office of Scientific and Technical Information (OSTI)

    Authors: Meierbachtol, Collin S.; Delzanno, ...

  10. Parallel ptychographic reconstruction (Journal Article) | SciTech Connect

    Office of Scientific and Technical Information (OSTI)

    Journal Article: Parallel ptychographic reconstruction. Grant/Contract Number: FC02-06ER25777. Journal Name: Optics Express.

  11. A set of parallel, implicit methods for a reconstructed discontinuous Galerkin method for compressible flows on 3D hybrid grids

    Office of Scientific and Technical Information (OSTI)


  12. A set of parallel, implicit methods for a reconstructed discontinuous Galerkin method for compressible flows on 3D hybrid grids

    Office of Scientific and Technical Information (OSTI)

    Furthermore, an SPMD (single program, multiple data) programming paradigm based on MPI is proposed to achieve parallelism. The numerical results on complex geometries...

  13. A garbage collection algorithm for shared memory parallel processors

    SciTech Connect (OSTI)

    Crammond, J.

    1988-12-01

    This paper describes a technique for adapting the Morris sliding garbage collection algorithm to execute on parallel machines with shared memory. The algorithm is described within the framework of an implementation of the parallel logic language Parlog. However, the algorithm is a general one and can easily be adapted to parallel Prolog systems and to other languages. The performance of the algorithm executing a few simple Parlog benchmarks is analyzed. Finally, it is shown how the technique for parallelizing the sequential algorithm can be adapted for a semi-space copying algorithm.

  14. Mesoscale Simulations of Particulate Flows with Parallel Distributed Lagrange Multiplier Technique

    Office of Scientific and Technical Information (OSTI)

    Fluid particulate flows are common phenomena in nature and industry. ...

  15. Portable, parallel, reusable Krylov space codes

    SciTech Connect (OSTI)

    Smith, B.; Gropp, W.

    1994-12-31

    Krylov space accelerators are an important component of many algorithms for the iterative solution of linear systems. Each Krylov space method has its own particular advantages and disadvantages, so it is desirable to have a variety of them available, all with an identical, easy-to-use interface. A common complaint application programmers have with available software libraries for the iterative solution of linear systems is that they require the programmer to use the data structures provided by the library; the library is not able to work with the data structures of the application code. Hence, application programmers find themselves constantly recoding the Krylov space algorithms. The Krylov space package (KSP) is a data-structure-neutral implementation of a variety of Krylov space methods, including preconditioned conjugate gradient, GMRES, BiCG-Stab, transpose-free QMR, and CGS. Unlike all other software libraries for linear systems that the authors are aware of, KSP will work with any application code's data structures, in Fortran or C. Due to its data-structure-neutral design, KSP runs unchanged on both sequential and parallel machines. KSP has been tested on workstations, the Intel i860 and Paragon, the Thinking Machines CM-5, and the IBM SP1.
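
    A minimal data-structure-neutral conjugate-gradient sketch in the spirit described above: the solver never touches a matrix representation, only a user-supplied matrix-vector product callback, so it works with whatever data structure the application uses. This is illustrative, not the actual KSP interface (compile with -lm).

        #include <math.h>
        #include <stdio.h>

        typedef void (*matvec_fn)(const double *x, double *y, int n, void *ctx);

        static double dot(const double *a, const double *b, int n) {
            double s = 0;
            for (int i = 0; i < n; i++) s += a[i] * b[i];
            return s;
        }

        /* Unpreconditioned CG on an SPD operator given only as a callback. */
        static void cg(matvec_fn A, void *ctx, const double *b, double *x,
                       int n, int maxit, double tol) {
            double r[64], p[64], Ap[64];            /* demo assumes n <= 64 */
            for (int i = 0; i < n; i++) { x[i] = 0; r[i] = b[i]; p[i] = b[i]; }
            double rr = dot(r, r, n);
            for (int k = 0; k < maxit && sqrt(rr) > tol; k++) {
                A(p, Ap, n, ctx);
                double alpha = rr / dot(p, Ap, n);
                for (int i = 0; i < n; i++) {
                    x[i] += alpha * p[i];
                    r[i] -= alpha * Ap[i];
                }
                double rr_new = dot(r, r, n);
                for (int i = 0; i < n; i++) p[i] = r[i] + (rr_new / rr) * p[i];
                rr = rr_new;
            }
        }

        /* Example operator: 1-D Laplacian stored nowhere, applied on the fly. */
        static void laplace(const double *x, double *y, int n, void *ctx) {
            (void)ctx;
            for (int i = 0; i < n; i++)
                y[i] = 2 * x[i] - (i > 0 ? x[i-1] : 0) - (i < n-1 ? x[i+1] : 0);
        }

        int main(void) {
            double b[8] = {1, 1, 1, 1, 1, 1, 1, 1}, x[8];
            cg(laplace, NULL, b, x, 8, 100, 1e-10);
            printf("x[0] = %.4f, x[3] = %.4f\n", x[0], x[3]);
            return 0;
        }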

  16. Multi-petascale highly efficient parallel supercomputer

    DOE Patents [OSTI]

    Asaad, Sameh; Bellofatto, Ralph E.; Blocksome, Michael A.; Blumrich, Matthias A.; Boyle, Peter; Brunheroto, Jose R.; Chen, Dong; Cher, Chen -Yong; Chiu, George L.; Christ, Norman; Coteus, Paul W.; Davis, Kristan D.; Dozsa, Gabor J.; Eichenberger, Alexandre E.; Eisley, Noel A.; Ellavsky, Matthew R.; Evans, Kahn C.; Fleischer, Bruce M.; Fox, Thomas W.; Gara, Alan; Giampapa, Mark E.; Gooding, Thomas M.; Gschwind, Michael K.; Gunnels, John A.; Hall, Shawn A.; Haring, Rudolf A.; Heidelberger, Philip; Inglett, Todd A.; Knudson, Brant L.; Kopcsay, Gerard V.; Kumar, Sameer; Mamidala, Amith R.; Marcella, James A.; Megerian, Mark G.; Miller, Douglas R.; Miller, Samuel J.; Muff, Adam J.; Mundy, Michael B.; O'Brien, John K.; O'Brien, Kathryn M.; Ohmacht, Martin; Parker, Jeffrey J.; Poole, Ruth J.; Ratterman, Joseph D.; Salapura, Valentina; Satterfield, David L.; Senger, Robert M.; Smith, Brian; Steinmacher-Burow, Burkhard; Stockdell, William M.; Stunkel, Craig B.; Sugavanam, Krishnan; Sugawara, Yutaka; Takken, Todd E.; Trager, Barry M.; Van Oosten, James L.; Wait, Charles D.; Walkup, Robert E.; Watson, Alfred T.; Wisniewski, Robert W.; Wu, Peng

    2015-07-14

    A Multi-Petascale Highly Efficient Parallel Supercomputer of 100 petaOPS-scale computing, at decreased cost, power, and footprint, that allows for a maximum packaging density of processing nodes from an interconnect point of view. The Supercomputer exploits technological advances in VLSI that enable a computing model where many processors can be integrated into a single Application Specific Integrated Circuit (ASIC). Each ASIC computing node comprises a system-on-chip ASIC utilizing four or more processors integrated into one die, each having full access to all system resources, enabling adaptive partitioning of the processors to functions such as compute or messaging I/O on an application-by-application basis and, preferably, adaptive partitioning of functions in accordance with various algorithmic phases within an application; if I/O or other processors are underutilized, they can participate in computation or communication. Nodes are interconnected by a five-dimensional torus network with DMA that optimally maximizes the throughput of packet communications between nodes and minimizes latency.

  17. Berkeley Unified Parallel C (UPC) Runtime Library

    Energy Science and Technology Software Center (OSTI)

    2003-03-31

    This software comprises a portable, open source implementation of a runtime library to support applications written in the Unified Parallel C (UPC) language. This library implements the UPC-specific functionality, including shared memory allocation and locks. The network-dependent functionality is implemented as a thin wrapper around a separate library implementing the GASNet (Global-Address Space Networking) specification. For true shared memory machines, GASNet is bypassed in favor of direct memory operations and local synchronization mechanisms. The Berkeley UPC Runtime Library is currently the only implementation of the "Berkeley UPC Runtime Specification", and thus the only runtime library usable with the Berkeley UPC Compiler. Also, it is the only UPC runtime known to the author to provide two shared pointer representations: one for arbitrary blocksizes and one to optimize for the common cases of phaseless and blocksize=1. For distributed memory environments, a library implementing the GASNet specification is required for communication. While no specialized hardware is required, a high-speed interconnect supported by the GASNet implementation is suggested for performance. If no supported high-speed interconnect is available, GASNet can run over MPI. An external library is required for certain local memory allocation operations. A well-defined interface allows for multiple implementations of this library, but at present the "umalloc" library from LBNL is the only compatible implementation.

  18. Flexible Language Constructs for Large Parallel Programs

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Rosing, Matt; Schnabel, Robert

    1994-01-01

    The goal of the research described in this article is to develop flexible language constructs for writing large data parallel numerical programs for distributed memory (multiple instruction multiple data [MIMD]) multiprocessors. Previously, several models have been developed to support synchronization and communication. Models for global synchronization include single instruction multiple data (SIMD), single program multiple data (SPMD), and sequential programs annotated with data distribution statements. The two primary models for communication include implicit communication based on shared memory and explicit communication based on messages. None of these models by themselves seem sufficient to permit the natural and efficient expression of the variety of algorithms that occur in large scientific computations. In this article, we give an overview of a new language that combines many of these programming models in a clean manner. This is done in a modular fashion such that different models can be combined to support large programs. Within a module, the selection of a model depends on the algorithm and its efficiency requirements. In this article, we give an overview of the language and discuss some of the critical implementation details.

  19. Green's function of a free massive scalar field on the lattice

    SciTech Connect (OSTI)

    Borasoy, B.; Krebs, H.

    2005-09-01

    We propose a method to calculate the Green's function of a free massive scalar field on the lattice numerically to very high precision. For masses m<2 (in lattice units) the massive Green's function can be expressed recursively in terms of the massless Green's function and just two additional mass-independent constants.
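
    The record does not reproduce the recursion itself, but for orientation, the quantity being computed is standard: in momentum space (lattice spacing a = 1, dimension d), the free massive lattice propagator reads

        G(x) = \int_{-\pi}^{\pi} \frac{d^d k}{(2\pi)^d} \,
               \frac{e^{\, i k \cdot x}}{m^2 + 4 \sum_{\mu=1}^{d} \sin^2(k_\mu / 2)},

    with the mass condition m < 2 of the abstract expressed in the same lattice units.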

  20. Current parallel I/O limitations to scalable data analysis.

    SciTech Connect (OSTI)

    Mascarenhas, Ajith Arthur; Pebay, Philippe Pierre

    2011-07-01

    This report describes the limitations to parallel scalability which we have encountered when applying our otherwise optimally scalable parallel statistical analysis tool kit to large data sets distributed across the parallel file system of the current premier DOE computational facility. This report describes our study to evaluate the effect of parallel I/O on the overall scalability of a parallel data analysis pipeline using our scalable parallel statistics tool kit [PTBM11]. To this end, we tested it using the Jaguar-pf DOE/ORNL peta-scale platform on a large combustion simulation data set under a variety of process counts and domain decomposition scenarios. In this report we have recalled the foundations of the parallel statistical analysis tool kit which we have designed and implemented, with the specific double intent of reproducing typical data analysis workflows and achieving optimal design for scalable parallel implementations. We have briefly reviewed those earlier results and publications which allow us to conclude that we have achieved both goals. However, in this report we have further established that, when used in conjunction with a state-of-the-art parallel I/O system, as can be found on the premier DOE peta-scale platform, the scaling properties of the overall analysis pipeline comprising parallel data access routines degrade rapidly. This finding is problematic and must be addressed if peta-scale data analysis is to be made scalable, or even possible. In order to attempt to address these parallel I/O limitations, we will investigate the use of the Adaptable IO System (ADIOS) [LZL+10] to improve I/O performance, while maintaining flexibility for a variety of IO options, such as MPI IO and POSIX IO. This system is developed at ORNL and other collaborating institutions, and is being tested extensively on Jaguar-pf. Simulation code being developed on these systems will also use ADIOS to output the data, thereby making it easier for other systems, such as ours, to

  1. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R; Ratterman, Joseph D; Smith, Brian E

    2014-11-18

    Methods, apparatuses, and computer program products for endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface (`PAMI`) of a parallel computer are provided. Embodiments include establishing by a parallel application a data communications geometry, the geometry specifying a set of endpoints that are used in collective operations of the PAMI, including associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry. Embodiments also include registering in each endpoint in the geometry a dispatch callback function for a collective operation and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.
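
    The record describes non-blocking collectives in PAMI specifically; the same initiate/overlap/complete pattern is available in standard MPI-3 as MPI_Ibcast, shown here purely as an analogous illustration:

        /* Compile: mpicc demo.c ; run: mpirun -np 4 ./a.out */
        #include <mpi.h>
        #include <stdio.h>

        int main(int argc, char **argv) {
            MPI_Init(&argc, &argv);
            int rank;
            MPI_Comm_rank(MPI_COMM_WORLD, &rank);

            int payload = (rank == 0) ? 42 : 0;
            MPI_Request req;
            /* Initiate the collective without blocking. */
            MPI_Ibcast(&payload, 1, MPI_INT, 0, MPI_COMM_WORLD, &req);

            /* ... overlap: do computation that does not touch 'payload' ... */

            MPI_Wait(&req, MPI_STATUS_IGNORE);   /* collective now complete */
            printf("rank %d got %d\n", rank, payload);
            MPI_Finalize();
            return 0;
        }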

  2. Petascale Parallelization of the Gyrokinetic Toroidal Code

    SciTech Connect (OSTI)

    Ethier, Stephane; Adams, Mark; Carter, Jonathan; Oliker, Leonid

    2010-05-01

    The Gyrokinetic Toroidal Code (GTC) is a global, three-dimensional particle-in-cell application developed to study microturbulence in tokamak fusion devices. The global capability of GTC is unique, allowing researchers to systematically analyze important dynamics such as turbulence spreading. In this work we examine a new radial domain decomposition approach to allow scalability onto the latest generation of petascale systems. Extensive performance evaluation is conducted on three high performance computing systems: the IBM BG/P, the Cray XT4, and an Intel Xeon cluster. Overall results show that the radial decomposition approach dramatically increases scalability, while reducing the memory footprint - allowing for fusion device simulations at an unprecedented scale. After a decade where high-end computing (HEC) was dominated by the rapid pace of improvements to processor frequencies, the performance of next-generation supercomputers is increasingly differentiated by varying interconnect designs and levels of integration. Understanding the tradeoffs of these system designs is a key step towards making effective petascale computing a reality. In this work, we examine a new parallelization scheme for the GTC micro-turbulence fusion application. Extensive scalability results and analysis are presented on three HEC systems: the IBM BlueGene/P (BG/P) at Argonne National Laboratory, the Cray XT4 at Lawrence Berkeley National Laboratory, and an Intel Xeon cluster at Lawrence Livermore National Laboratory. Overall results indicate that the new radial decomposition approach successfully attains unprecedented scalability to 131,072 BG/P cores by overcoming the memory limitations of the previous approach. The new version is well suited to utilize emerging petascale resources to access new regimes of physical phenomena.

  3. Electrostatically focused addressable field emission array chips (AFEA's) for high-speed massively parallel maskless digital E-beam direct write lithography and scanning electron microscopy

    DOE Patents [OSTI]

    Thomas, Clarence E.; Baylor, Larry R.; Voelkl, Edgar; Simpson, Michael L.; Paulus, Michael J.; Lowndes, Douglas H.; Whealton, John H.; Whitson, John C.; Wilgen, John B.

    2002-12-24

    Systems and methods are described for addressable field emission array (AFEA) chips. A method of operating an addressable field-emission array includes: generating a plurality of electron beams from a plurality of emitters that compose the addressable field-emission array; and focusing at least one of the plurality of electron beams with an on-chip electrostatic focusing stack. The systems and methods provide advantages including the avoidance of space-charge blow-up.

  4. WAS THE SUN BORN IN A MASSIVE CLUSTER?

    SciTech Connect (OSTI)

    Dukes, Donald; Krumholz, Mark R.

    2012-07-20

    A number of authors have argued that the Sun must have been born in a cluster of no more than several thousand stars, on the basis that, in a larger cluster, close encounters between the Sun and other stars would have truncated the outer solar system or excited the outer planets into eccentric orbits. However, this dynamical limit is in tension with meteoritic evidence that the solar system was exposed to a nearby supernova during or shortly after its formation; a several-thousand-star cluster is much too small to produce a massive star whose lifetime is short enough to have provided the enrichment. In this paper, we revisit the dynamical limit in the light of improved observations of the properties of young clusters. We use a series of scattering simulations to measure the velocity-dependent cross-section for disruption of the outer solar system by stellar encounters, and use this cross-section to compute the probability of a disruptive encounter as a function of birth cluster properties. We find that, contrary to prior work, the probability of disruption is small regardless of the cluster mass, and that it actually decreases rather than increases with cluster mass. Our results differ from prior work for three main reasons: (1) unlike in most previous work, we compute a velocity-dependent cross-section and properly integrate over the cluster mass-dependent velocity distribution of incoming stars; (2) we recognize that ≈90% of clusters have lifetimes of a few crossing times, rather than the 10-100 Myr adopted in many earlier models; and (3) following recent observations, we adopt a mass-independent surface density for embedded clusters, rather than a mass-independent radius as assumed in many earlier papers. Our results remove the tension between the dynamical limit and the meteoritic evidence, and suggest that the Sun was born in a massive cluster. A corollary to this result is that close encounters in the Sun's birth cluster are highly unlikely to truncate

  5. Parallel 3-D method of characteristics in MPACT

    SciTech Connect (OSTI)

    Kochunas, B.; Downar, T. J.; Liu, Z.

    2013-07-01

    A new parallel 3-D MOC kernel has been developed and implemented in MPACT which makes use of the modular ray tracing technique to reduce computational requirements and to facilitate parallel decomposition. The parallel model makes use of both distributed and shared memory parallelism which are implemented with the MPI and OpenMP standards, respectively. The kernel is capable of parallel decomposition of problems in space, angle, and by characteristic rays up to O(10^4) processors. Initial verification of the parallel 3-D MOC kernel was performed using the Takeda 3-D transport benchmark problems. The eigenvalues computed by MPACT are within the statistical uncertainty of the benchmark reference and agree well with the averages of other participants. The MPACT k_eff differs from the benchmark results for rodded and un-rodded cases by 11 and -40 pcm, respectively. The calculations were performed for various numbers of processors and parallel decompositions up to 15,625 processors, all producing the same result at convergence. The parallel efficiency of the worst case was 60%, while very good efficiency (>95%) was observed for cases using 500 processors. The overall run time was 231 seconds for the 500 processor case and 19 seconds for the case with 15,625 processors. Ongoing work is focused on developing theoretical performance models and the implementation of acceleration techniques to minimize the number of iterations to converge. (authors)

  6. Parallel architecture for real-time simulation. Master's thesis

    SciTech Connect (OSTI)

    Cockrell, C.D.

    1989-01-01

    This thesis is concerned with the development of a very fast and highly efficient parallel computer architecture for real-time simulation of continuous systems. Currently, several parallel processing systems exist that may be capable of executing a complex simulation in real time. These systems are examined and the pros and cons of each discussed. The thesis then introduces a custom-designed parallel architecture based upon The University of Alabama's OPERA architecture; each component of this system is discussed and rationale presented for its selection. The problem selected for the test and evaluation of the proposed architecture, real-time simulation of the Space Shuttle Main Engine, is explored, identifying the areas where parallelism can be exploited and parallel processing applied. Results from the test and evaluation phase are presented and compared with the results of the same problem processed on a uniprocessor system.

  7. Broadcasting collective operation contributions throughout a parallel computer

    DOE Patents [OSTI]

    Faraj, Ahmad

    2012-02-21

    Methods, systems, and products are disclosed for broadcasting collective operation contributions throughout a parallel computer. The parallel computer includes a plurality of compute nodes connected together through a data communications network. Each compute node has a plurality of processors for use in collective parallel operations on the parallel computer. Broadcasting collective operation contributions throughout a parallel computer according to embodiments of the present invention includes: transmitting, by each processor on each compute node, that processor's collective operation contribution to the other processors on that compute node using intra-node communications; and transmitting on a designated network link, by each processor on each compute node according to a serial processor transmission sequence, that processor's collective operation contribution to the other processors on the other compute nodes using inter-node communications.
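
    A toy simulation of the two-phase broadcast the claims describe (intra-node exchange, then a serial processor transmission sequence across nodes) may help make the data flow concrete. This plain-Python sketch is illustrative only and is not the patented implementation; the machine shape and variable names are ours:

        NODES, PROCS = 3, 4   # hypothetical machine shape

        # contribution[node][proc] is the value each processor starts with
        contribution = [[(n, p) for p in range(PROCS)] for n in range(NODES)]
        # received[node][proc] collects everything each processor ends up with
        received = [[set() for _ in range(PROCS)] for _ in range(NODES)]

        # Phase 1: intra-node exchange - every processor gives its contribution
        # to the other processors on the same compute node.
        for n in range(NODES):
            for p in range(PROCS):
                for q in range(PROCS):
                    received[n][q].add(contribution[n][p])

        # Phase 2: inter-node broadcast - processors take turns (the "serial
        # processor transmission sequence") sending on the node's network link.
        for p in range(PROCS):            # one processor per node at a time
            for n in range(NODES):
                for m in range(NODES):
                    if m != n:
                        for q in range(PROCS):
                            received[m][q].add(contribution[n][p])

        # Every processor now holds every contribution.
        assert all(len(r) == NODES * PROCS for node in received for r in node)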

  8. Multilingual interfaces for parallel coupling in multiphysics and multiscale systems.

    SciTech Connect (OSTI)

    Ong, E. T.; Larson, J. W.; Norris, B.; Jacob, R. L.; Tobis, M.; Steder, M.; Mathematics and Computer Science; Univ. of Wisconsin; Australian National Univ.; Univ. of Chicago

    2007-01-01

    Multiphysics and multiscale simulation systems are emerging as a new grand challenge in computational science, largely because of increased computing power provided by the distributed-memory parallel programming model on commodity clusters. These systems often present a parallel coupling problem in their intercomponent data exchanges. Another potential problem in these coupled systems is language interoperability between their various constituent codes. In anticipation of combined parallel coupling/language interoperability challenges, we have created a set of interlanguage bindings for a successful parallel coupling library, the Model Coupling Toolkit. We describe the method used for automatically generating the bindings using the Babel language interoperability tool, and illustrate with short examples how MCT can be used from the C++ and Python languages. We report preliminary performance results for the MCT interpolation benchmark. We conclude with a discussion of the significance of this work to the rapid prototyping of large parallel coupled systems.

  9. Characterizing and Mitigating Work Time Inflation in Task Parallel Programs

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Olivier, Stephen L.; de Supinski, Bronis R.; Schulz, Martin; Prins, Jan F.

    2013-01-01

    Task parallelism raises the level of abstraction in shared memory parallel programming to simplify the development of complex applications. However, task parallel applications can exhibit poor performance due to thread idleness, scheduling overheads, and work time inflation – additional time spent by threads in a multithreaded computation beyond the time required to perform the same work in a sequential computation. We identify the contributions of each factor to lost efficiency in various task parallel OpenMP applications and diagnose the causes of work time inflation in those applications. Increased data access latency can cause significant work time inflation in NUMA systems. Our locality framework for task parallel OpenMP programs mitigates this cause of work time inflation. Our extensions to the Qthreads library demonstrate that locality-aware scheduling can improve performance up to 3X compared to the Intel OpenMP task scheduler.

  10. Characterizing the convective velocity fields in massive stars

    SciTech Connect (OSTI)

    Chatzopoulos, Emmanouil; Graziani, Carlo; Couch, Sean M.

    2014-11-01

    We apply the mathematical formalism of vector spherical harmonics decomposition to convective stellar velocity fields from multidimensional hydrodynamics simulations and show that the resulting power spectra furnish a robust and stable statistical description of stellar convective turbulence. Analysis of the power spectra helps identify key physical parameters of the convective process such as the dominant scale of the turbulent motions that influence the structure of massive evolved pre-supernova stars. We introduce the numerical method that can be used to calculate vector spherical harmonics power spectra from two-dimensional (2D) and three-dimensional (3D) convective shell simulation data. Using this method we study the properties of oxygen shell burning and convection for a 15 M☉ star simulated by the hydrodynamics code FLASH in 2D and 3D. We discuss the importance of realistic initial conditions to achieving successful core-collapse supernova explosions in multidimensional simulations. We show that the calculated power spectra can be used to generate realizations of the velocity fields of presupernova convective shells. We find that the slope of the solenoidal mode power spectrum remains mostly constant throughout the evolution of convection in the oxygen shell in both 2D and 3D simulations. We also find that the characteristic radial scales of the convective elements are smaller in 3D than in 2D, while the angular scales are larger in 3D.
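
    The reduction from harmonic coefficients to an angular power spectrum is easy to illustrate for the scalar case; the paper's vector spherical harmonics add radial, solenoidal, and toroidal components, but the final step is analogous. A minimal Python sketch, with random numbers standing in for coefficients that would really be projected from simulation data:

        import numpy as np

        # Toy coefficients a_lm for l <= LMAX; in the paper these would come from
        # projecting the simulated convective velocity field onto (vector)
        # spherical harmonics. Random values stand in for real data here.
        LMAX = 16
        rng = np.random.default_rng(0)
        a = {(l, m): rng.normal() + 1j * rng.normal()
             for l in range(LMAX + 1) for m in range(-l, l + 1)}

        def power_spectrum(a, lmax):
            """Angular power P_l = sum_m |a_lm|^2 / (2l + 1)."""
            return np.array([
                sum(abs(a[(l, m)]) ** 2 for m in range(-l, l + 1)) / (2 * l + 1)
                for l in range(lmax + 1)
            ])

        P = power_spectrum(a, LMAX)
        dominant_l = int(np.argmax(P[1:]) + 1)   # dominant angular scale
        print(f"dominant multipole l = {dominant_l}")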

  11. Pair instability supernovae of very massive population III stars

    SciTech Connect (OSTI)

    Chen, Ke-Jung; Woosley, Stan; Heger, Alexander; Almgren, Ann; Whalen, Daniel J.

    2014-09-01

    Numerical studies of primordial star formation suggest that the first stars in the universe may have been very massive. Stellar models indicate that non-rotating Population III stars with initial masses of 140-260 M☉ die as highly energetic pair-instability supernovae. We present new two-dimensional simulations of primordial pair-instability supernovae done with the CASTRO code. Our simulations begin at earlier times than previous multidimensional models, at the onset of core contraction, to capture any dynamical instabilities that may be seeded by core contraction and explosive burning. Such instabilities could enhance explosive yields by mixing hot ash with fuel, thereby accelerating nuclear burning, and affect the spectra of the supernova by dredging up heavy elements from greater depths in the star at early times. Our grid of models includes both blue supergiants and red supergiants over the range in progenitor mass expected for these events. We find that fluid instabilities driven by oxygen and helium burning arise at the upper and lower boundaries of the oxygen shell ∼20-100 s after core bounce. Instabilities driven by burning freeze out after the SN shock exits the helium core. As the shock later propagates through the hydrogen envelope, a strong reverse shock forms that drives the growth of Rayleigh-Taylor instabilities. In red supergiant progenitors, the amplitudes of these instabilities are sufficient to mix the supernova ejecta.

  12. PROTOSTELLAR OUTFLOWS AND RADIATIVE FEEDBACK FROM MASSIVE STARS

    SciTech Connect (OSTI)

    Kuiper, Rolf; Yorke, Harold W.; Turner, Neal J. E-mail: Harold.W.Yorke@jpl.nasa.gov

    2015-02-20

    We carry out radiation hydrodynamical simulations of the formation of massive stars in the super-Eddington regime including both their radiative feedback and protostellar outflows. The calculations start from a prestellar core of dusty gas and continue until the star stops growing. The accretion ends when the remnants of the core are ejected, mostly by the force of the direct stellar radiation in the polar direction and elsewhere by the reradiated thermal infrared radiation. How long the accretion persists depends on whether the protostellar outflows are present. We set the mass outflow rate to 1% of the stellar sink particle's accretion rate. The outflows open a bipolar cavity extending to the core's outer edge, through which the thermal radiation readily escapes. The radiative flux is funneled into the polar directions while the core's collapse proceeds near the equator. The outflow thus extends the ''flashlight effect'', or anisotropic radiation field, found in previous studies from the few hundred AU scale of the circumstellar disk up to the 0.1 parsec scale of the core. The core's flashlight effect allows core gas to accrete on the disk for longer, in the same way that the disk's flashlight effect allows disk gas to accrete on the star for longer. Thus although the protostellar outflows remove material near the core's poles, causing slower stellar growth over the first few free-fall times, they also enable accretion to go on longer in our calculations. The outflows ultimately lead to stars of somewhat higher mass.

  13. HERSCHEL REVEALS MASSIVE COLD CLUMPS IN NGC 7538

    SciTech Connect (OSTI)

    Fallscheer, C.; Di Francesco, J.; Sadavoy, S.; Reid, M. A.; Martin, P. G.; Nguyen-Luong, Q.; Hill, T.; Hennemann, M.; Motte, F.; Men'shchikov, A.; Andre, Ph.; Konyves, V.; Sauvage, M.; Griffin, M.; Rygl, K. L. J.; Benedettini, M.; Schneider, N.; Anderson, L. D. [Laboratoire d'Astrophysique de Marseille, CNRS, and others]

    2013-08-20

    We present the first overview of the Herschel observations of the nearby high-mass star-forming region NGC 7538, taken as part of the Herschel imaging study of OB young stellar objects (HOBYS) Key Programme. These PACS and SPIRE maps cover an approximate area of one square degree at five submillimeter and far-infrared wavebands. We have identified 780 dense sources and classified 224 of those. With the intention of investigating the existence of cold massive starless or class 0-like clumps that would have the potential to form intermediate- to high-mass stars, we further isolate 13 clumps as the most likely candidates for follow-up studies. These 13 clumps have masses in excess of 40 M☉ and temperatures below 15 K. They range in size from 0.4 pc to 2.5 pc and have densities between 3 × 10³ cm⁻³ and 4 × 10⁴ cm⁻³. Spectral energy distributions are then used to characterize their energetics and evolutionary state through a luminosity-mass diagram. NGC 7538 has a highly filamentary structure, previously unseen in the dust continuum of existing submillimeter surveys. We report the most complete imaging to date of a large, evacuated ring of material in NGC 7538 which is bordered by many cool sources.

  14. A LARGE, MASSIVE, ROTATING DISK AROUND AN ISOLATED YOUNG STELLAR OBJECT

    SciTech Connect (OSTI)

    Quanz, Sascha P.; Beuther, Henrik; Steinacker, Juergen; Linz, Hendrik; Krause, Oliver; Henning, Thomas; Birkmann, Stephan M.

    2010-07-10

    We present multi-wavelength observations and a radiative transfer model of a newly discovered massive circumstellar disk of gas and dust which is one of the largest disks known today. Seen almost edge-on, the disk is resolved in high-resolution near-infrared (NIR) images and appears as a dark lane of high opacity intersecting a bipolar reflection nebula. Based on molecular line observations, we estimate the distance to the object to be 3.5 kpc. This leads to a size for the dark lane of ~10,500 AU but due to shadowing effects the true disk size could be smaller. In Spitzer/IRAC 3.6 μm images, the elongated shape of the bipolar reflection nebula is still preserved and the bulk of the flux seems to come from disk regions that can be detected due to the slight inclination of the disk. At longer IRAC wavelengths, the flux is mainly coming from the central regions penetrating directly through the dust lane. Interferometric observations of the dust continuum emission at millimeter wavelengths with the Submillimeter Array confirm this finding as the peak of the unresolved millimeter-emission coincides perfectly with the peak of the Spitzer/IRAC 5.8 μm flux and the center of the dark lane seen in the NIR images. Simultaneously acquired CO data reveal a molecular outflow along the northern part of the reflection nebula which seems to be the outflow cavity. An elongated gaseous disk component is also detected and shows signs of rotation. The emission is perpendicular to the molecular outflow and thus parallel to but even more extended than the dark lane in the NIR images. Based on the dust continuum and the CO observations, we estimate a disk mass of up to a few solar masses depending on the underlying assumptions. Whether the disk-like structure is an actual accretion disk or rather a larger-scale flattened envelope or pseudodisk is difficult to discriminate with the current data set. The existence of HCO⁺/H¹³CO⁺ emission proves the presence of

  15. Eighth SIAM conference on parallel processing for scientific computing: Final program and abstracts

    SciTech Connect (OSTI)

    1997-12-31

    This SIAM conference is the premier forum for developments in parallel numerical algorithms, a field that has seen very lively and fruitful developments over the past decade, and whose health is still robust. Themes for this conference were: combinatorial optimization; data-parallel languages; large-scale parallel applications; message-passing; molecular modeling; parallel I/O; parallel libraries; parallel software tools; parallel compilers; particle simulations; problem-solving environments; and sparse matrix computations.

  16. An integrated approach to improving the parallel applications development process

    SciTech Connect (OSTI)

    Rasmussen, Craig E; Watson, Gregory R; Tibbitts, Beth R

    2009-01-01

    The development of parallel applications is becoming increasingly important to a broad range of industries. Traditionally, parallel programming was a niche area that was primarily exploited by scientists trying to model extremely complicated physical phenomena. It is becoming increasingly clear, however, that continued hardware performance improvements through clock scaling and feature-size reduction are simply not going to be achievable for much longer. The hardware vendors' approach to addressing this issue is to employ parallelism through multi-processor and multi-core technologies. While there is little doubt that this approach produces scaling improvements, there are still many significant hurdles to be overcome before parallelism can be employed as a general replacement for more traditional programming techniques. The Parallel Tools Platform (PTP) Project was created in 2005 in an attempt to provide developers with new tools aimed at addressing some of the parallel development issues. Since then, the introduction of a new generation of peta-scale and multi-core systems has highlighted the need for such a platform. In this paper, we describe some of the challenges facing parallel application developers, present the current state of PTP, and provide a simple case study that demonstrates how PTP can be used to locate a potential deadlock situation in an MPI code.
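
    PTP itself is an Eclipse-based tool and is not reproduced here, but the class of bug the case study targets is easy to demonstrate. A minimal mpi4py sketch of a send/receive ordering deadlock of the kind such tools aim to locate (assuming mpi4py is available; run with two ranks):

        # Run with: mpiexec -n 2 python deadlock.py
        from mpi4py import MPI

        comm = MPI.COMM_WORLD
        rank = comm.Get_rank()
        peer = 1 - rank

        # DEADLOCK: both ranks block in recv() before either reaches its send(),
        # so neither call can ever complete.
        msg = comm.recv(source=peer)
        comm.send(f"hello from {rank}", dest=peer)

        # A correct ordering staggers the calls, e.g.:
        # if rank == 0:
        #     comm.send("hi", dest=1); msg = comm.recv(source=1)
        # else:
        #     msg = comm.recv(source=0); comm.send("hi", dest=0)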

  17. Xyce Parallel Electronic Simulator : users' guide, version 4.1.

    SciTech Connect (OSTI)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2009-02-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical

  18. Xyce parallel electronic simulator : users' guide. Version 5.1.

    SciTech Connect (OSTI)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2009-11-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: (1) Capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors). Note that this includes support for most popular parallel and serial computers. (2) Improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques. (3) Device models which are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only). (4) Object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase - a message passing parallel implementation - which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an 'in-house' capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical

  19. Scientists say climate change could cause a 'massive' tree die-off in

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    the U.S. Southwest. A troubling new study says a warming climate could trigger a "massive" die-off of coniferous trees in the U.S. Southwest sometime this century. December 24, 2015. Dying conifers, particularly ponderosa pine (Pinus ponderosa) and sugar pine (Pinus lambertiana) in California's Sequoia National Park,

  20. TECA: A Parallel Toolkit for Extreme Climate Analysis

    SciTech Connect (OSTI)

    Prabhat, Mr; Ruebel, Oliver; Byna, Surendra; Wu, Kesheng; Li, Fuyu; Wehner, Michael; Bethel, E. Wes

    2012-03-12

    We present TECA, a parallel toolkit for detecting extreme events in large climate datasets. Modern climate datasets expose parallelism across a number of dimensions: spatial locations, timesteps and ensemble members. We design TECA to exploit these modes of parallelism and demonstrate a prototype implementation for detecting and tracking three classes of extreme events: tropical cyclones, extra-tropical cyclones and atmospheric rivers. We process a modern TB-sized CAM5 simulation dataset with TECA, and demonstrate good runtime performance for the three case studies.
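
    The parallelization across timesteps that the abstract describes can be illustrated generically. The sketch below is not TECA's implementation: the detector, threshold, and synthetic data are placeholders, and Python multiprocessing stands in for the distributed execution TECA uses:

        import numpy as np
        from multiprocessing import Pool

        # Toy stand-in for one timestep of model output: a 2D wind-speed field.
        # In TECA the data would come from CAM5 output and the detector would
        # encode published criteria for cyclones or atmospheric rivers; the
        # threshold here is purely illustrative.
        def detect_events(timestep):
            rng = np.random.default_rng(timestep)       # fake "load" of a timestep
            wind = rng.gamma(2.0, 8.0, size=(64, 128))
            hits = np.argwhere(wind > 60.0)             # hypothetical threshold
            return timestep, [tuple(ij) for ij in hits]

        if __name__ == "__main__":
            with Pool(4) as pool:                       # parallelism across timesteps
                for t, events in pool.map(detect_events, range(16)):
                    if events:
                        print(f"timestep {t}: {len(events)} candidate cells")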

  1. Nemesis I: Parallel Enhancements to ExodusII

    Energy Science and Technology Software Center (OSTI)

    2006-03-28

    NEMESIS I is an enhancement to the EXODUS II finite element database model used to store and retrieve data for unstructured parallel finite element analyses. NEMESIS I adds data structures which facilitate the partitioning of a scalar (standard serial) EXODUS II file onto parallel disk systems found on many parallel computers. Since the NEMESIS I application programming interface (API) can be used to append information to an existing EXODUS II file, programs that read EXODUS II files can be used on files which contain NEMESIS I information. The NEMESIS I information is written and read via C or C++ callable functions which comprise the NEMESIS I API.

  2. pcircle - A Suite of Scalable Parallel File System Tools

    SciTech Connect (OSTI)

    WANG, FEIYI

    2015-10-01

    Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of the ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copying and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.
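
    As an illustration of the parallel checksumming idea (though with shared-memory workers rather than pcircle's MPI work-stealing), one can hash fixed-size chunks independently and combine the chunk digests into a file signature. A sketch, with the chunk size and the combination scheme chosen arbitrarily:

        import hashlib
        from concurrent.futures import ProcessPoolExecutor

        CHUNK = 4 * 1024 * 1024   # 4 MiB chunks, an arbitrary choice

        def hash_chunk(args):
            """Hash one chunk; each worker opens the file independently."""
            path, offset = args
            with open(path, "rb") as f:
                f.seek(offset)
                return offset, hashlib.sha1(f.read(CHUNK)).digest()

        def parallel_checksum(path, size):
            tasks = [(path, off) for off in range(0, size, CHUNK)]
            with ProcessPoolExecutor() as pool:
                digests = sorted(pool.map(hash_chunk, tasks))   # order by offset
            combined = hashlib.sha1()
            for _, d in digests:
                combined.update(d)                # signature over chunk digests
            return combined.hexdigest()

        # usage (under an if __name__ == "__main__" guard on spawn platforms):
        # parallel_checksum("/tmp/bigfile", os.path.getsize("/tmp/bigfile"))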

  3. Iterative Schemes for Time Parallelization with Application to Reservoir Simulation

    SciTech Connect (OSTI)

    Garrido, I; Fladmark, G E; Espedal, M S; Lee, B

    2005-04-18

    Parallel methods are usually not applied to the time domain because of the inherent sequential nature of time evolution. But for many evolutionary problems, computer simulation can benefit substantially from time parallelization methods. In this paper, we present several such algorithms that actually exploit the sequential nature of time evolution through a predictor-corrector procedure. This sequentialness ensures convergence of a parallel predictor-corrector scheme within a fixed number of iterations. The performance of these novel algorithms, which are derived from the classical alternating Schwarz method, is illustrated through several numerical examples using the reservoir simulator Athena.
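
    The predictor-corrector structure described here is in the spirit of the parareal iteration, which is easy to sketch for a model problem. The following is a generic illustration, not the paper's reservoir algorithms: a cheap coarse propagator G predicts, and corrections using an accurate fine propagator F (whose evaluations across time slices are the parallel part) are applied for a fixed number of iterations:

        import numpy as np

        # Parareal-style predictor-corrector for du/dt = lam * u.
        lam, T, N = -1.0, 2.0, 8
        dt = T / N

        def G(u, dt):                      # one backward-Euler step (coarse)
            return u / (1 - lam * dt)

        def F(u, dt, m=20):                # m small steps (fine, parallelizable)
            for _ in range(m):
                u = u / (1 - lam * dt / m)
            return u

        U = np.zeros(N + 1); U[0] = 1.0
        for n in range(N):                 # initial coarse sweep (predictor)
            U[n + 1] = G(U[n], dt)

        for k in range(4):                 # fixed number of corrector iterations
            Fu = [F(U[n], dt) for n in range(N)]      # embarrassingly parallel
            Gu_old = [G(U[n], dt) for n in range(N)]  # from the previous iterate
            for n in range(N):             # cheap sequential coarse correction
                U[n + 1] = G(U[n], dt) + Fu[n] - Gu_old[n]

        print(U[-1], np.exp(lam * T))      # converges toward the exact solution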

  4. A Framework for Parallel Nonlinear Optimization by Partitioning Localized Constraints

    SciTech Connect (OSTI)

    Xu, You; Chen, Yixin

    2008-06-28

    We present a novel parallel framework for solving large-scale continuous nonlinear optimization problems based on constraint partitioning. The framework distributes constraints and variables to parallel processors and uses an existing solver to handle the partitioned subproblems. In contrast to most previous decomposition methods that require either separability or convexity of constraints, our approach is based on a new constraint partitioning theory and can handle nonconvex problems with inseparable global constraints. We also propose a hypergraph partitioning method to recognize the problem structure. Experimental results show that the proposed parallel algorithm can efficiently solve some difficult test cases.

  5. pcircle - A Suite of Scalable Parallel File System Tools

    Energy Science and Technology Software Center (OSTI)

    2015-10-01

    Most software related to file systems is written for conventional local file systems; it is serialized and cannot take advantage of a large-scale parallel file system. The "pcircle" software builds on top of the ubiquitous MPI in cluster computing environments and the "work-stealing" pattern to provide a scalable, high-performance suite of file system tools. In particular, it implements parallel data copying and parallel data checksumming, with advanced features such as asynchronous progress reporting, checkpoint and restart, as well as integrity checking.

  6. A faint galaxy redshift survey behind massive clusters

    SciTech Connect (OSTI)

    Frye, Brenda

    1999-12-01

    This thesis is concerned with the gravitational lensing effect by massive galaxy clusters. We have explored a new technique for measuring galaxy masses and for detecting high-z galaxies by their optical colors. A redshift survey has been obtained at the Keck for a magnitude-limited sample of objects (I<23) behind three clusters, A1689, A2390, and A2218, within a radius of 0.5 Mpc. For each cluster we see both a clear trend of increasing flux and redshift towards the center. This behavior is the result of image magnification, such that at fixed redshift one sees further down the luminosity function. The gradient of this magnification is, unlike measurements of image distortion, sensitive to the mass profile, and found to depart strongly from a pure isothermal halo. We have found that VRI color selection can be used effectively as a discriminant for finding high-z galaxies behind clusters and present five 4.1 < z < 5.1 spectra which are of very high quality due to their high mean magnification of ~20, showing strong, visibly-saturated interstellar metal lines in some cases. We have also investigated the radio ring lens PKS 1830-211, locating the source and multiple images and detecting molecular absorption at mm wavelengths. Broad molecular absorption of width ~40 km/s is found toward the southwest component only, where surprisingly it does not reach the base of the continuum, which implies incomplete coverage of the SW component by molecular gas, despite the small projected size of the source, less than ~8h pc at the absorption redshift.

  7. Verification of runaway migration in a massive disk

    SciTech Connect (OSTI)

    Li, Shengtai

    2009-01-01

    Runaway migration of a proto-planet was first proposed and observed by Masset and Papaloizou (2003). The semi-major axis of the proto-planet varies by 50% over just a few tens of orbits when runaway migration happens. More recent work by D'Angelo et al. (2005) solved the same problem with a locally refined grid and found that the migration rate is sharply reduced and no runaway occurs when the grid cells surrounding the planet are refined enough. To verify these two seemingly contradictory results, we independently perform high-resolution simulations, solving the same problem as Masset and Papaloizou (2003), with and without self-gravity. We find that the migration rate is highly dependent on the softening used in the gravitational force between the disk and planet. When a small softening is used in a 2D massive disk, the mass of the circumplanetary disk (CPD) increases with time given enough resolution in the CPD region: mass is continually accreted onto the CPD, which cannot settle down until after thousands of orbits. If the planet is held on a fixed orbit long enough, the mass of the CPD will become so large that the condition for runaway migration derived in Masset (2008) will not be satisfied, and hence runaway migration will not be triggered. However, when a large softening is used, the mass of the CPD will begin to decrease after the initial increase stage. Our numerical results with and without disk gravity confirm that runaway migration indeed exists when the mass deficit is larger than the total mass of the planet and CPD. Our simulation results also show that the torque from the co-orbital region, in particular the planet's Hill sphere, is the main contributor to runaway migration, and the CPD, which lags behind the planet, becomes so asymmetric that it accelerates the migration.

  8. Endpoint-based parallel data processing with non-blocking collective instructions in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Cernohous, Bob R; Ratterman, Joseph D; Smith, Brian E

    2014-11-11

    Endpoint-based parallel data processing with non-blocking collective instructions in a PAMI of a parallel computer is disclosed. The PAMI is composed of data communications endpoints, each including a specification of data communications parameters for a thread of execution on a compute node, including specifications of a client, a context, and a task. The compute nodes are coupled for data communications through the PAMI. The parallel application establishes a data communications geometry specifying a set of endpoints that are used in collective operations of the PAMI by associating with the geometry a list of collective algorithms valid for use with the endpoints of the geometry; registering in each endpoint in the geometry a dispatch callback function for a collective operation; and executing without blocking, through a single one of the endpoints in the geometry, an instruction for the collective operation.

  9. CPIC: A Parallel Electrostatic Particle-In-Cell Code (Conference...

    Office of Scientific and Technical Information (OSTI)

    Electrostatic Particle-In-Cell Code

  10. Parallel Botulinum Neurotoxin/A Immuno- and Enzyme Activity Assays...

    Office of Scientific and Technical Information (OSTI)

    Title: Parallel Botulinum Neurotoxin/A Immuno- and Enzyme Activity Assays Using the Versatile RapiDx Platform. Abstract not provided. Authors: Sommer, Gregory Jon; Wang, Ying-Chih ...

  11. A Comprehensive Look at High Performance Parallel I/O

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    In this era of "big data," high-performance parallel I/O, the way disk drives efficiently read and write information on HPC systems, is extremely important. Yet the last book to ...

  12. Parallel and Antiparallel Interfacial Coupling in AF-FM Bilayers

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel and Antiparallel Interfacial Coupling in AF-FM Bilayers (30 August 2006). Cooling an antiferromagnetic-ferromagnetic bilayer in a magnetic field typically results in a remanent (zero-field) magnetization in the ferromagnet (FM) that is always in the direction of the field during cooling (positive Mrem). Strikingly, when FeF2 is the antiferromagnet (AF), cooling in a field can lead to a remanent

  13. Mesoscale Simulations of Particulate Flows with Parallel Distributed

    Office of Scientific and Technical Information (OSTI)

    Lagrange Multiplier Technique. Fluid particulate flows are common phenomena in nature and industry. Modeling of such flows at micro and macro levels, as well as establishing relationships between these approaches, are needed to

  14. Mesoscale simulations of particulate flows with parallel distributed

    Office of Scientific and Technical Information (OSTI)

    Lagrange multiplier technique. Authors: Kanarska, Y.; Lomov, I.; Antoun, T. Publication Date: 2010-09-10. OSTI Identifier: 1120915. Report Number(s): LLNL-JRNL-455392. DOE Contract Number: W-7405-ENG-48

  15. Interface for Parallel I/O from Componentized Visualization Algorithms

    Energy Science and Technology Software Center (OSTI)

    2008-09-16

    The software is an interface layer over file I/O with features specifically designed for efficient parallel reads and writes. The interface provides multiple concrete implementations that easily allow the replacement of one implementation with another. This feature allows a reader or writer implementation to work independently of whether parallel file I/O is available or desired. The software also contains extensions to some readers to allow them to use the file I/O functionality.
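
    The swappable-implementation idea is straightforward to sketch. The interface and class names below are hypothetical, not the actual API of this software; the point is that readers and writers code against one interface whether or not a parallel back end is present:

        from abc import ABC, abstractmethod

        class FileIO(ABC):
            """One interface; concrete back ends can be swapped freely."""
            @abstractmethod
            def write(self, offset: int, data: bytes) -> None: ...
            @abstractmethod
            def read(self, offset: int, length: int) -> bytes: ...

        class SerialFileIO(FileIO):
            def __init__(self, path):
                self.f = open(path, "w+b")
            def write(self, offset, data):
                self.f.seek(offset); self.f.write(data)
            def read(self, offset, length):
                self.f.seek(offset); return self.f.read(length)

        class FakeParallelFileIO(SerialFileIO):
            """Stands in for an MPI-IO backed implementation; same interface,
            so a visualization reader works whether or not parallel I/O exists."""

        def dump_block(io: FileIO, rank: int, block: bytes):
            io.write(rank * len(block), block)   # each writer owns a disjoint range

        io = SerialFileIO("/tmp/vis.dat")
        dump_block(io, 0, b"A" * 8)
        dump_block(io, 1, b"B" * 8)
        print(io.read(0, 16))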

  16. Multiscale Molecular Simulations at the Petascale (Parallelization of

    Office of Scientific and Technical Information (OSTI)

    Reactive Force Field Model for Blue Gene/Q): ALCF-2 Early Science Program Technical Report

  17. Sort-First, Distributed Memory Parallel Visualization and Rendering

    SciTech Connect (OSTI)

    Bethel, E. Wes; Humphreys, Greg; Paul, Brian; Brederson, J. Dean

    2003-07-15

    While commodity computing and graphics hardware has increased in capacity and dropped in cost, it is still quite difficult to make effective use of such systems for general-purpose parallel visualization and graphics. We describe the results of a recent project that provides a software infrastructure suitable for general-purpose use by parallel visualization and graphics applications. Our work combines and extends two technologies: Chromium, a stream-oriented framework that implements the OpenGL programming interface; and OpenRM Scene Graph, a pipelined-parallel scene graph interface for graphics data management. Using this combination, we implement a sort-first, distributed memory, parallel volume rendering application. We describe the performance characteristics in terms of bandwidth requirements and highlight key algorithmic considerations needed to implement the sort-first system. We characterize system performance using a distributed memory parallel volume rendering application, and present performance gains realized by using scene specific knowledge to accelerate rendering through reduced network bandwidth. The contribution of this work is an exploration of general-purpose, sort-first architecture performance characteristics as applied to distributed memory, commodity hardware, along with a description of the algorithmic support needed to realize parallel, sort-first implementations.

  18. Massive-scale RDF Processing Using Compressed Bitmap Indexes

    SciTech Connect (OSTI)

    Madduri, Kamesh; Wu, Kesheng

    2011-05-26

    The Resource Description Framework (RDF) is a popular data model for representing linked data sets arising from the web, as well as large scientific data repositories such as UniProt. RDF data intrinsically represents a labeled and directed multi-graph. SPARQL is a query language for RDF that expresses subgraph pattern-finding queries on this implicit multigraph in a SQL-like syntax. SPARQL queries generate complex intermediate join queries; to compute these joins efficiently, we propose a new strategy based on bitmap indexes. We store the RDF data in column-oriented structures as compressed bitmaps along with two dictionaries. This paper makes three new contributions. (i) We present an efficient parallel strategy for parsing the raw RDF data, building dictionaries of unique entities, and creating compressed bitmap indexes of the data. (ii) We utilize the constructed bitmap indexes to efficiently answer SPARQL queries, simplifying the join evaluations. (iii) To quantify the performance impact of using bitmap indexes, we compare our approach to the state-of-the-art triple-store RDF-3X. We find that our bitmap index-based approach to answering queries is up to an order of magnitude faster for a variety of SPARQL queries, on gigascale RDF data sets.
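
    The join-by-bitmap strategy can be shown in miniature. In the toy sketch below, plain Python integers stand in for the compressed bitmaps (real systems use schemes such as word-aligned compression), entities are dictionary-encoded, and a two-pattern query reduces to a single bitwise AND:

        # Toy bitmap-index join: one bitmap per (predicate, object) pair with
        # bit i set if subject i matches that pair.
        triples = [
            ("alice", "knows", "bob"),
            ("bob",   "knows", "carol"),
            ("alice", "works_at", "LBNL"),
            ("carol", "works_at", "LBNL"),
        ]
        subjects = sorted({s for s, _, _ in triples})
        sid = {s: i for i, s in enumerate(subjects)}      # subject dictionary

        index = {}                                        # (pred, obj) -> bitmap
        for s, p, o in triples:
            index[(p, o)] = index.get((p, o), 0) | (1 << sid[s])

        # SPARQL-like query: ?x works_at LBNL AND ?x knows bob.
        # The join is a single bitwise AND over the two bitmaps.
        hits = index[("works_at", "LBNL")] & index[("knows", "bob")]
        print([s for s in subjects if hits >> sid[s] & 1])   # -> ['alice']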

  19. Massive Energy Storage in Superconductors (SMES) | U.S. DOE Office of

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Science (SC). 08.01.13: Massive Energy Storage in Superconductors (SMES)

  20. New Cosmologies on the Horizon. Cosmology and Holography in bigravity and massive gravity

    SciTech Connect (OSTI)

    Tolley, Andrew James

    2013-03-31

    The goal of this research program is to explore the cosmological dynamics, the nature of cosmological and black hole horizons, and the role of holography in a new class of infrared modified theories of gravity. This will capitalize on the considerable recent progress in our understanding of the dynamics of massive spin two fields on curved spacetimes, culminating in the formulation of the first fully consistent theories of massive gravity and bigravity/bimetric theories.

  1. The MASSIVE survey. I. A volume-limited integral-field spectroscopic study of the most massive early-type galaxies within 108 Mpc

    SciTech Connect (OSTI)

    Ma, Chung-Pei [Department of Astronomy, University of California, Berkeley, CA 94720 (United States); Greene, Jenny E.; Murphy, Jeremy D. [Department of Astrophysical Sciences, Princeton University, Princeton, NJ 08544 (United States); McConnell, Nicholas [Institute for Astronomy, University of Hawaii at Manoa, Honolulu, HI 96822 (United States); Janish, Ryan [Department of Physics, University of California, Berkeley, CA 94720 (United States); Blakeslee, John P. [Dominion Astrophysical Observatory, NRC Herzberg Institute of Astrophysics, Victoria, BC V9E 2E7 (Canada); Thomas, Jens, E-mail: cpma@berkeley.edu [Max Planck-Institute for Extraterrestrial Physics, Giessenbachstr. 1, D-85741 Garching (Germany)

    2014-11-10

    Massive early-type galaxies represent the modern-day remnants of the earliest major star formation episodes in the history of the universe. These galaxies are central to our understanding of the evolution of cosmic structure, stellar populations, and supermassive black holes, but the details of their complex formation histories remain uncertain. To address this situation, we have initiated the MASSIVE Survey, a volume-limited, multi-wavelength, integral-field spectroscopic (IFS) and photometric survey of the structure and dynamics of the ~100 most massive early-type galaxies within a distance of 108 Mpc. This survey probes a stellar mass range M* ≳ 10^11.5 M☉ and diverse galaxy environments that have not been systematically studied to date. Our wide-field IFS data cover about two effective radii of individual galaxies, and for a subset of them, we are acquiring additional IFS observations on sub-arcsecond scales with adaptive optics. We are also acquiring deep K-band imaging to trace the extended halos of the galaxies and measure accurate total magnitudes. Dynamical orbit modeling of the combined data will allow us to simultaneously determine the stellar, black hole, and dark matter halo masses. The primary goals of the project are to constrain the black hole scaling relations at high masses, investigate systematically the stellar initial mass function and dark matter distribution in massive galaxies, and probe the late-time assembly of ellipticals through stellar population and kinematical gradients. In this paper, we describe the MASSIVE sample selection, discuss the distinct demographics and structural and environmental properties of the selected galaxies, and provide an overview of our basic observational program, science goals and early survey results.

  2. ug[SCIP,*] Library : A Software Library for General Purpose Parallel...

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    ug[SCIP,*] Library : A Software Library for General Purpose Parallel Branch-and-Bound ... The ug[SCIP,*] library is a software library to parallelize customized SCIP solvers. ...

  3. final report for Center for Programming Models for Scalable Parallel Computing

    SciTech Connect (OSTI)

    Johnson, Ralph E

    2013-04-10

    This is the final report of the work on parallel programming patterns that was part of the Center for Programming Models for Scalable Parallel Computing.

  4. Sub-Second Parallel State Estimation (Technical Report) | SciTech...

    Office of Scientific and Technical Information (OSTI)

    Technical Report: Sub-Second Parallel State Estimation Citation Details In-Document Search Title: Sub-Second Parallel State Estimation This report describes the performance of ...

  5. Parallel Breadth-First Search on Distributed Memory Systems

    SciTech Connect (OSTI)

    Computational Research Division; Buluc, Aydin; Madduri, Kamesh

    2011-04-15

    Data-intensive, graph-based computations are pervasive in several scientific applications, and are known to be quite challenging to implement on distributed memory systems. In this work, we explore the design space of parallel algorithms for Breadth-First Search (BFS), a key subroutine in several graph algorithms. We present two highly-tuned parallel approaches for BFS on large parallel systems: a level-synchronous strategy that relies on a simple vertex-based partitioning of the graph, and a two-dimensional sparse matrix-partitioning-based approach that mitigates parallel communication overhead. For both approaches, we also present hybrid versions with intra-node multithreading. Our novel hybrid two-dimensional algorithm reduces communication times by up to a factor of 3.5, relative to a common vertex-based approach. Our experimental study identifies execution regimes in which these approaches will be competitive, and we demonstrate extremely high performance on leading distributed-memory parallel systems. For instance, for a 40,000-core parallel execution on Hopper, an AMD Magny-Cours based system, we achieve a BFS performance rate of 17.8 billion edge visits per second on an undirected graph of 4.3 billion vertices and 68.7 billion edges with skewed degree distribution.
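
    The level-synchronous strategy is simple to state in serial form; the distributed version partitions the vertices across ranks and turns each frontier expansion into a communication step. A minimal Python sketch of the serial skeleton:

        # Level-synchronous BFS: process the frontier one level at a time.
        # In the distributed version each rank owns a vertex partition and the
        # frontier expansion becomes an all-to-all exchange of discovered edges.
        def bfs_levels(adj, source):
            level = {source: 0}
            frontier = {source}
            depth = 0
            while frontier:
                depth += 1
                nxt = set()
                for u in frontier:              # expand current level
                    for v in adj.get(u, ()):    # in parallel: per-rank partial sets
                        if v not in level:
                            level[v] = depth
                            nxt.add(v)
                frontier = nxt                  # implicit barrier between levels
            return level

        adj = {0: [1, 2], 1: [3], 2: [3, 4], 3: [5], 4: [5]}
        print(bfs_levels(adj, 0))   # {0: 0, 1: 1, 2: 1, 3: 2, 4: 2, 5: 3}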

  6. A Parallel Ghosting Algorithm for The Flexible Distributed Mesh Database

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Mubarak, Misbah; Seol, Seegyoung; Lu, Qiukai; Shephard, Mark S.

    2013-01-01

    Critical to the scalability of parallel adaptive simulations are parallel control functions including load balancing, reduced inter-process communication and optimal data decomposition. In distributed meshes, many mesh-based applications frequently access neighborhood information for computational purposes which must be transmitted efficiently to avoid parallel performance degradation when the neighbors are on different processors. This article presents a parallel algorithm of creating and deleting data copies, referred to as ghost copies, which localize neighborhood data for computation purposes while minimizing inter-process communication. The key characteristics of the algorithm are: (1) It can create ghost copies of any permissible topological order in a 1D, 2D or 3D mesh based on selected adjacencies. (2) It exploits neighborhood communication patterns during the ghost creation process thus eliminating all-to-all communication. (3) For applications that need neighbors of neighbors, the algorithm can create n number of ghost layers up to a point where the whole partitioned mesh can be ghosted. Strong and weak scaling results are presented for the IBM BG/P and Cray XE6 architectures up to a core count of 32,768 processors. The algorithm also leads to scalable results when used in a parallel super-convergent patch recovery error estimator, an application that frequently accesses neighborhood data to carry out computation.

  7. Parallel vacuum arc discharge with microhollow array dielectric and anode

    SciTech Connect (OSTI)

    Feng, Jinghua; Zhou, Lin; Fu, Yuecheng; Zhang, Jianhua; Xu, Rongkun; Chen, Faxin; Li, Linbo; Meng, Shijian

    2014-07-15

    An electrode configuration with microhollow array dielectric and anode was developed to obtain parallel vacuum arc discharge. Compared with the conventional electrodes, more than 10 parallel microhollow discharges were ignited for the new configuration, which increased the discharge area significantly and made the cathode erode more uniformly. The vacuum discharge channel number could be increased effectively by decreasing the distances between holes or increasing the arc current. Experimental results revealed that plasmas ejected from the adjacent hollow and the relatively high arc voltage were two key factors leading to the parallel discharge. The characteristics of plasmas in the microhollow were investigated as well. The spectral line intensity and electron density of plasmas in the microhollow increased obviously with the decrease of the microhollow diameter.

  8. Parallel Scaling Characteristics of Selected NERSC User ProjectCodes

    SciTech Connect (OSTI)

    Skinner, David; Verdier, Francesca; Anand, Harsh; Carter,Jonathan; Durst, Mark; Gerber, Richard

    2005-03-05

    This report documents parallel scaling characteristics of NERSC user project codes between Fiscal Year 2003 and the first half of Fiscal Year 2004 (Oct 2002-March 2004). The codes analyzed cover 60% of all the CPU hours delivered during that time frame on seaborg, a 6080 CPU IBM SP and the largest parallel computer at NERSC. The scale in terms of concurrency and problem size of the workload is analyzed. Drawing on batch queue logs, performance data and feedback from researchers we detail the motivations, benefits, and challenges of implementing highly parallel scientific codes on current NERSC High Performance Computing systems. An evaluation and outlook of the NERSC workload for Allocation Year 2005 is presented.

  9. Parallel, adaptive finite element methods for conservation laws

    SciTech Connect (OSTI)

    Biswas, R.; Devine, K.D.; Flaherty, J.E. (Rensselaer Polytechnic Institute, Troy, NY)

    1994-01-01

    We construct parallel finite element methods for the solution of hyperbolic conservation laws in one and two dimensions. Spatial discretization is performed by a discontinuous Galerkin finite element method using a basis of piecewise Legendre polynomials. Temporal discretization utilizes a Runge-Kutta method. Dissipative fluxes and projection limiting prevent oscillations near solution discontinuities. A posteriori estimates of spatial errors are obtained by a p-refinement technique using superconvergence at Radau points. The resulting method is of high order and may be parallelized efficiently on MIMD computers. We compare results using different limiting schemes and demonstrate parallel efficiency through computations on an NCUBE/2 hypercube. We also present results using adaptive h- and p-refinement to reduce the computational cost of the method.

  10. Parallel garbage collection on a virtual memory system

    SciTech Connect (OSTI)

    Abraham, S.G.; Patel, J.H.

    1987-01-01

    Since most artificial intelligence applications are programmed in list processing languages, it is important to design architectures to support efficient garbage collection. This paper presents an architecture and an associated algorithm for parallel garbage collection on a virtual memory system. All the previously proposed parallel algorithms attempt to collect cells released by the list processor during the garbage collection cycle. We do not attempt to collect such cells. As a consequence, the list processor incurs little overhead in the proposed scheme, since it need not synchronize with the collector. Most parallel algorithms are designed for shared memory machines which have certain implicit synchronization functions on variable access. The proposed algorithm is designed for virtual memory systems where both the list processor and the garbage collector have private memories. The enforcement of coherence between the two private memories can be expensive and is not necessary in our scheme. 15 refs., 3 figs.

  11. Parallel halo finding in N-body cosmology simulations

    SciTech Connect (OSTI)

    Pfitzner, D.W.; Salmon, J.K.

    1996-12-31

    Cosmological N-body simulations on parallel computers produce large datasets - about five hundred Megabytes at a single output time, or tens of Gigabytes over the course of a simulation. These large datasets require further analysis before they can be compared to astronomical observations. We have implemented two methods for performing halo finding, a key part of the knowledge discovery process, on parallel machines. One of these is a parallel implementation of the friends-of-friends (FOF) algorithm, widely used in the field of N-body cosmology. The new isodensity (ID) method has been developed to overcome some of the shortcomings of FOF. Both have been implemented on a variety of computer systems, and successfully used to extract halos from simulations with up to 256³ (or about 16.8 million) particles, which are among the largest N-body cosmology simulations in existence.
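
    The friends-of-friends rule (two particles belong to the same halo whenever they are closer than a linking length) has a compact serial expression using union-find. The sketch below is illustrative only, with an O(N^2) pair loop for clarity where production codes use trees or grids:

        import numpy as np

        def fof_halos(pos, b):
            """Group particles into FOF halos with linking length b."""
            n = len(pos)
            parent = list(range(n))

            def find(i):
                while parent[i] != i:
                    parent[i] = parent[parent[i]]    # path compression
                    i = parent[i]
                return i

            for i in range(n):
                for j in range(i + 1, n):
                    if np.linalg.norm(pos[i] - pos[j]) < b:
                        parent[find(i)] = find(j)    # union the two groups
            groups = {}
            for i in range(n):
                groups.setdefault(find(i), []).append(i)
            return list(groups.values())

        rng = np.random.default_rng(1)
        pos = rng.uniform(0, 10, size=(200, 3))
        halos = fof_halos(pos, b=0.8)
        print(len(halos), "groups; largest has", max(map(len, halos)), "particles")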

  12. Graphical representation of parallel algorithmic processes. Master's thesis

    SciTech Connect (OSTI)

    Williams, E.M.

    1990-12-01

    Algorithm animation is a visualization method used to enhance understanding of the functioning of an algorithm or program. Visualization is used for many purposes, including education, algorithm research, performance analysis, and program debugging. This research applies algorithm animation techniques to programs developed for parallel architectures, with specific emphasis on the Intel iPSC/2 hypercube. While both P-time and NP-time algorithms can potentially benefit from using visualization techniques, the set of NP-complete problems provides fertile ground for developing parallel applications, since the combinatoric nature of the problems makes finding the optimum solution impractical. The primary goals for this visualization system are: data should be displayed as it is generated; the interface to the target program should be transparent, allowing the animation of existing programs; and flexibility - the system should be able to animate any algorithm. The resulting system incorporates and extends two AFIT products: the AFIT Algorithm Animation Research Facility (AAARF) and the Parallel Resource Analysis Software Environment (PRASE). AAARF is an algorithm animation system developed primarily for sequential programs, but is easily adaptable for use with parallel programs. PRASE is an instrumentation package that extracts system performance data from programs on the Intel hypercubes. Since performance data is an essential part of analyzing any parallel program, views of the performance data are provided as an elementary part of the system. Custom software is designed to interface these systems and to display the program data. The program chosen as the example for this study is a member of the NP-complete problem set; it is a parallel implementation of a general.

  13. Small file aggregation in a parallel computing system

    DOE Patents [OSTI]

    Faibish, Sorin; Bent, John M.; Tzelnic, Percy; Grider, Gary; Zhang, Jingwang

    2014-09-02

    Techniques are provided for small file aggregation in a parallel computing system. An exemplary method for storing a plurality of files generated by a plurality of processes in a parallel computing system comprises aggregating the plurality of files into a single aggregated file; and generating metadata for the single aggregated file. The metadata comprises an offset and a length of each of the plurality of files in the single aggregated file. The metadata can be used to unpack one or more of the files from the single aggregated file.
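
    The offset/length metadata scheme in the abstract can be sketched directly. The function names and the JSON metadata encoding below are our own choices for illustration, not the patented implementation:

        import json

        # Pack many small per-process files into one aggregated file, recording
        # (offset, length) metadata so individual files can be unpacked later.
        def aggregate(files):                 # files: {name: bytes}
            blob, meta, offset = b"", {}, 0
            for name, data in files.items():
                meta[name] = {"offset": offset, "length": len(data)}
                blob += data
                offset += len(data)
            return blob, meta

        def unpack(blob, meta, name):
            m = meta[name]
            return blob[m["offset"]:m["offset"] + m["length"]]

        files = {f"rank{r}.out": f"result {r}\n".encode() for r in range(4)}
        blob, meta = aggregate(files)
        print(json.dumps(meta["rank2.out"]), unpack(blob, meta, "rank2.out"))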

  14. Methods for operating parallel computing systems employing sequenced communications

    DOE Patents [OSTI]

    Benner, Robert E.; Gustafson, John L.; Montry, Gary R.

    1999-01-01

    A parallel computing system and method having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system.

  15. Performance analysis of parallel supernodal sparse LU factorization

    SciTech Connect (OSTI)

    Grigori, Laura; Li, Xiaoye S.

    2004-02-05

    We investigate performance characteristics for the LU factorization of large matrices with various sparsity patterns. We consider supernodal right-looking parallel factorization on a bi-dimensional grid of processors, making use of static pivoting. We develop a performance model and validate it using the SuperLU-DIST implementation, matrices from real applications, and the IBM Power3 machine at NERSC. We use this model to obtain performance bounds on parallel computers, to perform scalability analysis and to identify performance bottlenecks. We also discuss the role of load balance and data distribution in this approach.

  16. Parallel Harmony Search Based Distributed Energy Resource Optimization

    SciTech Connect (OSTI)

    Ceylan, Oguzhan; Liu, Guodong; Tomsovic, Kevin

    2015-01-01

    This paper presents a harmony search based parallel optimization algorithm to minimize voltage deviations in three phase unbalanced electrical distribution systems and to maximize active power outputs of distributed energy resources (DR). The main contribution is to reduce the adverse impacts on the voltage profile as photovoltaic (PV) output or electric vehicle (EV) charging changes throughout the day. The IEEE 123-bus distribution test system is modified by adding DRs and EVs under different load profiles. The simulation results show that by using parallel computing techniques, heuristic methods may be used as an alternative optimization tool in electrical power distribution systems operation.

  17. Methods for operating parallel computing systems employing sequenced communications

    DOE Patents [OSTI]

    Benner, R.E.; Gustafson, J.L.; Montry, G.R.

    1999-08-10

    A parallel computing system and method are disclosed having improved performance where a program is concurrently run on a plurality of nodes for reducing total processing time, each node having a processor, a memory, and a predetermined number of communication channels connected to the node and independently connected directly to other nodes. The present invention improves performance of the parallel computing system by providing a system which can provide efficient communication between the processors and between the system and input and output devices. A method is also disclosed which can locate defective nodes within the computing system. 15 figs.

  18. Global synchronization of parallel processors using clock pulse width modulation

    SciTech Connect (OSTI)

    Chen, Dong; Ellavsky, Matthew R.; Franke, Ross L.; Gara, Alan; Gooding, Thomas M.; Haring, Rudolf A.; Jeanson, Mark J.; Kopcsay, Gerard V.; Liebsch, Thomas A.; Littrell, Daniel; Ohmacht, Martin; Reed, Don D.; Schenck, Brandon E.; Swetz, Richard A.

    2013-04-02

    A circuit generates a global clock signal with a pulse width modification to synchronize processors in a parallel computing system. The circuit may include a hardware module and a clock splitter. The hardware module may generate a clock signal and performs a pulse width modification on the clock signal. The pulse width modification changes a pulse width within a clock period in the clock signal. The clock splitter may distribute the pulse width modified clock signal to a plurality of processors in the parallel computing system.

  19. Primordial massive gravitational waves from Einstein-Chern-Simons-Weyl gravity

    SciTech Connect (OSTI)

    Myung, Yun Soo; Moon, Taeyoon E-mail: tymoon@inje.ac.kr

    2014-08-01

    We investigate the evolution of cosmological perturbations during de Sitter inflation in the Einstein-Chern-Simons-Weyl gravity. Primordial massive gravitational waves are composed of one scalar, two vector and four tensor circularly polarized modes. We show that the vector power spectrum decays quickly like a transversely massive vector in the superhorizon limit z → 0. In this limit, the power spectrum coming from massive tensor modes decays quickly, leading to the conventional tensor power spectrum. Also, we find that in the limit of m{sup 2} → 0 (keeping the Weyl-squared term only), the vector and tensor power spectra disappear. It implies that their power spectra are not gravitationally produced because they (vector and tensor) are decoupled from the expanding de Sitter background, as a result of conformal invariance.

  20. Building a Parallel Cloud Storage System using OpenStacks Swift Object Store and Transformative Parallel I/O

    SciTech Connect (OSTI)

    Burns, Andrew J.; Lora, Kaleb D.; Martinez, Esteban; Shorter, Martel L.

    2012-07-30

    Our project consists of bleeding-edge research into replacing traditional storage archives with a parallel, cloud-based storage solution using OpenStack's Swift Object Store cloud software. We benchmarked Swift for write speed and scalability. Our project is unique because Swift is typically used for reads and we are mostly concerned with write speeds. Cloud storage is a viable archive solution because: (1) container management for larger parallel archives might ease the migration workload; (2) many tools that are written for cloud storage could be utilized for local archive; and (3) current large cloud storage practices in industry could be utilized to manage a scalable archive solution.
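
    A write benchmark of the kind described reduces to timing many concurrent puts. The harness below is a generic sketch under stated assumptions: `put_object(name, data)` stands in for whatever storage client call is used (python-swiftclient exposes a comparable method for Swift), and the in-memory backend exists only so the sketch runs without a cluster.

      import time
      from concurrent.futures import ThreadPoolExecutor

      def benchmark_writes(put_object, n_objects=100, size=1 << 20, workers=8):
          """Time concurrent object writes and report aggregate bandwidth."""
          payload = b"x" * size
          start = time.perf_counter()
          with ThreadPoolExecutor(max_workers=workers) as pool:
              futures = [pool.submit(put_object, "obj-%d" % i, payload)
                         for i in range(n_objects)]
              for f in futures:
                  f.result()                        # surface any write errors
          elapsed = time.perf_counter() - start
          return n_objects * size / elapsed / 1e6   # MB/s

      store = {}                                    # stand-in object store
      print("%.1f MB/s" % benchmark_writes(store.__setitem__))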

  1. IRDC G030.88+00.13: A TALE OF TWO MASSIVE CLUMPS

    SciTech Connect (OSTI)

    Zhang Qizhou; Wang Ke, E-mail: qzhang@cfa.harvard.edu [Harvard-Smithsonian Center for Astrophysics, 60 Garden Street, Cambridge, MA 02138 (United States)

    2011-05-20

    Massive stars (M {approx}>10 M{sub sun}) form from collapse of parsec-scale molecular clumps. How molecular clumps fragment to give rise to massive stars in a cluster with a distribution of masses is unclear. We search for cold cores that may lead to future formation of massive stars in a massive (>10{sup 3} M{sub sun}), low luminosity (4.6 x 10{sup 2} L{sub sun}) infrared dark cloud (IRDC) G030.88+00.13. The NH{sub 3} data from the Very Large Array (VLA) and Green Bank Telescope reveal that the extinction feature seen in the infrared consists of two distinctive clumps along the same line of sight. The C1 clump at 97 km s{sup -1} coincides with the extinction seen in the Spitzer 8 and 24 {mu}m bands; therefore, it is responsible for the majority of the IRDC. The C2 clump at 107 km s{sup -1} is more compact and has a peak temperature of 45 K. Compact dust cores and H{sub 2}O masers revealed in the Submillimeter Array and VLA observations are mostly associated with C2, and none are within the IRDC in C1. The luminosity indicates that neither the C1 nor the C2 clump has yet formed massive protostars, but C1 might be at a precluster-forming stage. The simulated observations rule out 0.1 pc cold cores with masses above 8 M{sub sun} within the IRDC. The core masses in C1 and C2 and those in high-mass protostellar objects suggest an evolutionary trend in which the mass of cold cores increases over time. Based on our findings, we propose an empirical picture of massive star formation in which protostellar cores and the embedded protostars undergo simultaneous mass growth during protostellar evolution.

  2. Parallel heat transport in integrable and chaotic magnetic fields

    SciTech Connect (OSTI)

    Del-Castillo-Negrete, Diego B [ORNL; Chacon, Luis [ORNL

    2012-01-01

    The study of transport in magnetized plasmas is a problem of fundamental interest in controlled fusion, space plasmas, and astrophysics research. Three issues make this problem particularly challenging: (i) the extreme anisotropy between the parallel (i.e., along the magnetic field) conductivity, {chi}{sub ∥}, and the perpendicular conductivity, {chi}{sub ⊥} ({chi}{sub ∥}/{chi}{sub ⊥} may exceed 10{sup 10} in fusion plasmas); (ii) magnetic field line chaos, which in general complicates (and may preclude) the construction of magnetic field line coordinates; and (iii) nonlocal parallel transport in the limit of small collisionality. Motivated by these issues, we present a Lagrangian Green's function method to solve the local and non-local parallel transport equation applicable to integrable and chaotic magnetic fields in arbitrary geometry. The method avoids by construction the numerical pollution issues of grid-based algorithms. The potential of the approach is demonstrated with nontrivial applications to integrable (magnetic island chain), weakly chaotic (devil's staircase), and fully chaotic magnetic field configurations. For the latter, numerical solutions of the parallel heat transport equation show that the effective radial transport, with local and non-local closures, is non-diffusive, thus casting doubt on the applicability of quasilinear diffusion descriptions. General conditions for the existence of non-diffusive, multivalued flux-gradient relations in the temperature evolution are derived.

  3. An intercalation-locked parallel-stranded DNA tetraplex

    SciTech Connect (OSTI)

    Tripathi, S.; Zhang, D.; Paukstelis, P. J.

    2015-01-27

    DNA has proved to be an excellent material for nanoscale construction because complementary DNA duplexes are programmable and structurally predictable. However, in the absence of Watson–Crick pairings, DNA can be structurally more diverse. Here, we describe the crystal structures of d(ACTCGGATGAT) and the brominated derivative, d(ACBrUCGGABrUGAT). These oligonucleotides form parallel-stranded duplexes with a crystallographically equivalent strand, resulting in the first examples of DNA crystal structures that contain four different symmetric homo base pairs. Two of the parallel-stranded duplexes are coaxially stacked in opposite directions and locked together to form a tetraplex through intercalation of the 5'-most A–A base pairs between adjacent G–G pairs in the partner duplex. The intercalation region is a new type of DNA tertiary structural motif with similarities to the i-motif. {sup 1}H–{sup 1}H nuclear magnetic resonance and native gel electrophoresis confirmed the formation of a parallel-stranded duplex in solution. Finally, we modified specific nucleotide positions and added d(GAY) motifs to oligonucleotides and were readily able to obtain similar crystals. This suggests that this parallel-stranded DNA structure may be useful in the rational design of DNA crystals and nanostructures.

  4. Hardware packet pacing using a DMA in a parallel computer

    DOE Patents [OSTI]

    Chen, Dong; Heidelberger, Phillip; Vranas, Pavlos

    2013-08-13

    Method and system for hardware packet pacing using a direct memory access controller in a parallel computer which, in one aspect, keeps track of a total number of bytes put on the network as a result of a remote get operation, using a hardware token counter.

  5. An intercalation-locked parallel-stranded DNA tetraplex

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Tripathi, S.; Zhang, D.; Paukstelis, P. J.

    2015-01-27

    DNA has proved to be an excellent material for nanoscale construction because complementary DNA duplexes are programmable and structurally predictable. However, in the absence of Watson–Crick pairings, DNA can be structurally more diverse. Here, we describe the crystal structures of d(ACTCGGATGAT) and the brominated derivative, d(ACBrUCGGABrUGAT). These oligonucleotides form parallel-stranded duplexes with a crystallographically equivalent strand, resulting in the first examples of DNA crystal structures that contain four different symmetric homo base pairs. Two of the parallel-stranded duplexes are coaxially stacked in opposite directions and locked together to form a tetraplex through intercalation of the 5'-most A–A base pairs between adjacent G–G pairs in the partner duplex. The intercalation region is a new type of DNA tertiary structural motif with similarities to the i-motif. {sup 1}H–{sup 1}H nuclear magnetic resonance and native gel electrophoresis confirmed the formation of a parallel-stranded duplex in solution. Finally, we modified specific nucleotide positions and added d(GAY) motifs to oligonucleotides and were readily able to obtain similar crystals. This suggests that this parallel-stranded DNA structure may be useful in the rational design of DNA crystals and nanostructures.

  6. Lamb dip asymmetry in lasers with plane-parallel resonators

    SciTech Connect (OSTI)

    Tache, J.P.; Le Floch, A.; Le Naour, R.

    1986-09-01

    The Lamb dip asymmetry due to linear or nonlinear lenslike effects is investigated in lasers with plane-parallel resonators. The experiments are performed using diffracted light spectroscopy. It is shown that the frequency-dependent diffraction losses are essential in determining Lamb dip asymmetry, regardless of the origin of the lenslike effects.

  7. Xyce Parallel Electronic Simulator Users Guide Version 6.2.

    SciTech Connect (OSTI)

    Keiter, Eric R.; Mei, Ting; Russo, Thomas V.; Schiek, Richard; Sholander, Peter E.; Thornquist, Heidi K.; Verley, Jason; Baur, David Gregory

    2014-09-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase -- a message passing parallel implementation -- which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  8. The Death of a Massive Star Holds Key to Early Universe | U.S. DOE Office

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    of Science (SC) 12.16.09 The Death of a Massive Star Holds Key to Early Universe Scientists found the

  9. Workers Pour 1 Million Gallons of Grout into Massive Tanks | Department of

    Office of Environmental Management (EM)

    Energy Workers Pour 1 Million Gallons of Grout into Massive Tanks May 15, 2012 - 12:00pm Cement trucks transport a specially formulated grout that is pumped into the waste tanks. AIKEN, S.C. - Workers have poured more than 1 million gallons of a cement-like grout into two underground radioactive waste tanks, moving the Savannah River Site (SRS)

  10. Parallel Index and Query for Large Scale Data Analysis

    SciTech Connect (OSTI)

    Chou, Jerry; Wu, Kesheng; Ruebel, Oliver; Howison, Mark; Qiang, Ji; Prabhat,; Austin, Brian; Bethel, E. Wes; Ryne, Rob D.; Shoshani, Arie

    2011-07-18

    Modern scientific datasets present numerous data management and analysis challenges. State-of-the-art index and query technologies are critical for facilitating interactive exploration of large datasets, but numerous challenges remain in terms of designing a system for processing general scientific datasets. The system needs to be able to run on distributed multi-core platforms, efficiently utilize underlying I/O infrastructure, and scale to massive datasets. We present FastQuery, a novel software framework that addresses these challenges. FastQuery utilizes a state-of-the-art index and query technology (FastBit) and is designed to process massive datasets on modern supercomputing platforms. We apply FastQuery to processing of a massive 50TB dataset generated by a large scale accelerator modeling code. We demonstrate the scalability of the tool to 11,520 cores. Motivated by the scientific need to search for interesting particles in this dataset, we use our framework to reduce search time from hours to tens of seconds.
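
    The reason index-and-query technologies such as FastBit make such searches fast is that a bitmap index turns a range query into bitwise operations over precomputed boolean columns. The toy, uncompressed sketch below shows only that core idea; FastBit's compressed encodings and boundary-bin candidate checks are omitted, and all names are illustrative.

      import numpy as np

      class BitmapIndex:
          """Toy equality-encoded bitmap index: one boolean column per bin."""
          def __init__(self, values, bins):
              self.bins = bins
              self.bitmaps = np.stack([(values >= lo) & (values < hi)
                                       for lo, hi in bins])

          def query(self, lo, hi):
              """Select rows whose bin overlaps [lo, hi) by OR-ing bitmaps.
              (A real index refines boundary bins against the raw data.)"""
              mask = np.zeros(self.bitmaps.shape[1], dtype=bool)
              for (blo, bhi), bitmap in zip(self.bins, self.bitmaps):
                  if blo < hi and bhi > lo:
                      mask |= bitmap
              return np.flatnonzero(mask)

      energies = np.array([0.1, 2.5, 7.3, 4.4, 9.9])
      index = BitmapIndex(energies, bins=[(0, 5), (5, 10)])
      print(index.query(5, 10))  # -> [2 4]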

  11. Data consistency conditions for truncated fanbeam and parallel projections

    SciTech Connect (OSTI)

    Clackdoyle, Rolf; Desbat, Laurent

    2015-02-15

    Purpose: In image reconstruction from projections, data consistency conditions (DCCs) are mathematical relationships that express the overlap of information between ideal projections. DCCs have been incorporated in image reconstruction procedures for positron emission tomography, single photon emission computed tomography, and x-ray computed tomography (CT). Building on published fanbeam DCCs for nontruncated projections along a line, the authors recently announced new DCCs that can be applied to truncated parallel projections in classical (two-dimensional) image reconstruction. These DCCs take the form of polynomial expressions for a weighted backprojection of the projections. The purpose of this work was to present the new DCCs for truncated parallel projections, to extend these conditions to truncated fanbeam projections on a circular trajectory, to verify the conditions with numerical examples, and to present a model of how DCCs could be applied with a toy problem in patient motion estimation with truncated projections. Methods: A mathematical derivation of the new parallel DCCs was performed by substituting the underlying imaging equation into the mathematical expression for the weighted backprojection and demonstrating the resulting polynomial form. This DCC result was extended to fanbeam projections by a substitution of parallel to fanbeam variables. Ideal fanbeam projections of a simple mathematical phantom were simulated and the DCCs for these projections were evaluated by fitting polynomials to the weighted backprojection. For the motion estimation problem, a parametrized motion was simulated using a dynamic version of the mathematical phantom, and both noiseless and noisy fanbeam projections were simulated for a full circular trajectory. The fanbeam DCCs were applied to extract the motion parameters, which allowed the motion contamination to be removed from the projections. A reconstruction was performed from the corrected projections. Results: The

  12. Deconfinement phase transition in a finite volume in the presence of massive particles

    SciTech Connect (OSTI)

    Ait El Djoudi, A.; Ghenam, L.

    2012-06-27

    We study the QCD deconfinement phase transition from a hadronic gas to a Quark-Gluon Plasma, in the presence of massive particles. In particular, the influence of parameters such as the finite volume, finite mass, and flavor number N{sub f} on the transition point and on the order of the transition is investigated.

  13. The incidence of stellar mergers and mass gainers among massive stars

    SciTech Connect (OSTI)

    De Mink, S. E.; Sana, H.; Langer, N.; Izzard, R. G.; Schneider, F. R. N.

    2014-02-10

    Because the majority of massive stars are born as members of close binary systems, populations of massive main-sequence stars contain stellar mergers and products of binary mass transfer. We simulate populations of massive stars accounting for all major binary evolution effects based on the most recent binary parameter statistics and extensively evaluate the effect of model uncertainties. Assuming constant star formation, we find that 8{sub −4}{sup +9}% of a sample of early-type stars are the products of a merger resulting from a close binary system. In total we find that 30{sub −15}{sup +10}% of massive main-sequence stars are the products of binary interaction. We show that the commonly adopted approach to minimize the effects of binaries on an observed sample by excluding systems detected as binaries through radial velocity campaigns can be counterproductive. Systems with significant radial velocity variations are mostly pre-interaction systems. Excluding them substantially enhances the relative incidence of mergers and binary products in the non-radial velocity variable sample. This poses a challenge for testing single stellar evolutionary models. It also raises the question of whether certain peculiar classes of stars, such as magnetic O stars, are the result of binary interaction and it emphasizes the need to further study the effect of binarity on the diagnostics that are used to derive the fundamental properties (star-formation history, initial mass function, mass-to-light ratio) of stellar populations nearby and at high redshift.

  14. Massive Stars in Colliding Wind Systems: the High-Energy Gamma-Ray Perspective

    SciTech Connect (OSTI)

    Reimer, Anita; Reimer, Olaf; /Stanford U., HEPL /KIPAC, Menlo Park

    2011-11-23

    Colliding winds of massive stars in binary systems are viable candidates for non-thermal high-energy photon emission. Coincidences between massive star systems/associations and {gamma}-ray sources have long been noted. Now, with the sensitivity of the Fermi Gamma Ray Observatory and current very-high-energy (VHE) Cherenkov instruments, it will be possible to sensibly probe these systems as high-energy emitters. We summarize the characteristics and broadband predictions of generic optically thin emission models in the observables accessible at GeV and TeV energies. The ability to constrain orbital parameters of massive star-star binaries through GeV-to-TeV observations is discussed. As an example we present orbital parameter constraints for the nearby Wolf-Rayet binary system WR 147 based on recently published VHE flux limits. Combining our broadband emission model with the cataloged binary systems and their individual parameters allows us to draw conclusions about the population of massive star-star systems at high-energy {gamma}-rays.

  15. Effective matter cosmologies of massive gravity I: non-physical fluids

    SciTech Connect (OSTI)

    Yılmaz, Nejat Tevfik

    2014-08-01

    For massive gravity, after decoupling from the metric equation, we find a broad class of solutions of the Stückelberg sector by solving the background metric in the presence of a diagonal physical metric. We then construct the dynamics of the corresponding FLRW cosmologies, which inherit an effective matter contribution through the decoupling solution mechanism of the scalar sector.

  16. Recent progress and advances in iterative software (including parallel aspects)

    SciTech Connect (OSTI)

    Carey, G.; Young, D.M.; Kincaid, D.

    1994-12-31

    The purpose of the workshop is to provide a forum for discussion of the current state of iterative software packages. Of particular interest is software for large scale engineering and scientific applications, especially for distributed parallel systems. However, the authors will also review the state of software development for conventional architectures. This workshop will complement the other proposed workshops on iterative BLAS kernels and applications. The format for the workshop is as follows: To provide some structure, there will be brief presentations, each of less than five minutes duration and dealing with specific facets of the subject. These will be designed to focus the discussion and to stimulate an exchange with the participants. Issues to be covered include: The evolution of iterative packages, current state of the art, the parallel computing challenge, applications viewpoint, standards, and future directions and open problems.

  17. Administering truncated receive functions in a parallel messaging interface

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-12-09

    Administering truncated receive functions in a parallel messaging interface (`PMI`) of a parallel computer comprising a plurality of compute nodes coupled for data communications through the PMI and through a data communications network, including: sending, through the PMI on a source compute node, a quantity of data from the source compute node to a destination compute node; specifying, by an application on the destination compute node, a portion of the quantity of data to be received by the application on the destination compute node and a portion of the quantity of data to be discarded; receiving, by the PMI on the destination compute node, all of the quantity of data; providing, by the PMI on the destination compute node to the application on the destination compute node, only the portion of the quantity of data to be received by the application; and discarding, by the PMI on the destination compute node, the portion of the quantity of data to be discarded.
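
    The claimed behavior is simple to state in code: the messaging layer receives the full quantity of data, hands the application only the portion it asked for, and discards the remainder. The sketch below mimics those semantics in plain Python; it is not the PMI API, and the function name is hypothetical.

      def truncated_receive(incoming: bytes, keep: int) -> bytes:
          """Receive all of the data, deliver the first `keep` bytes to the
          application, and discard the rest (the truncated-receive pattern)."""
          received = bytes(incoming)     # the PMI receives *all* of the data
          delivered = received[:keep]    # portion passed to the application
          discarded = received[keep:]    # portion the PMI throws away
          del discarded                  # never reaches application buffers
          return delivered

      assert truncated_receive(b"0123456789", keep=4) == b"0123"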

  18. Laser Safety Method For Duplex Open Loop Parallel Optical Link

    DOE Patents [OSTI]

    Baumgartner, Steven John; Hedin, Daniel Scott; Paschal, Matthew James

    2003-12-02

    A method and apparatus are provided to ensure that laser optical power does not exceed a "safe" level in an open loop parallel optical link in the event that a fiber optic ribbon cable is broken or otherwise severed. A duplex parallel optical link includes a transmitter and receiver pair and a fiber optic ribbon that includes a designated number of channels that cannot be split. The duplex transceiver includes a corresponding transmitter and receiver that are physically attached to each other and cannot be detached therefrom, so as to ensure safe laser optical power in the event that the fiber optic ribbon cable is broken or severed. Safe optical power is ensured by redundant current and voltage safety checks.

  19. JPARSS: A Java Parallel Network Package for Grid Computing

    SciTech Connect (OSTI)

    Chen, Jie; Akers, Walter; Chen, Ying; Watson, William

    2002-03-01

    The emergence of high speed wide area networks makes grid computing a reality. However, grid applications that need reliable data transfer still have difficulty achieving optimal TCP performance, due to the need to tune the TCP window size to improve bandwidth and to reduce latency on a high speed wide area network. This paper presents a Java package called JPARSS (Java Parallel Secure Stream (Socket)) that divides data into partitions that are sent over several parallel Java streams simultaneously, allowing Java or Web applications to achieve optimal TCP performance in a grid environment without the necessity of tuning TCP window size. This package enables single sign-on, certificate delegation and secure or plain-text data transfer using several security components based on X.509 certificates and SSL. Several experiments are presented to show that using parallel Java streams is more effective than tuning TCP window size. In addition a simple architecture using Web services
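
    The core idea, splitting a buffer into partitions and transferring them concurrently, can be sketched compactly. The code below uses Python threads and an in-memory `send_chunk` transport as stand-ins for the package's parallel Java socket streams; all names are assumptions.

      from concurrent.futures import ThreadPoolExecutor

      def parallel_send(data, n_streams, send_chunk):
          """Split `data` into n_streams partitions and transfer them
          concurrently; `send_chunk(offset, chunk)` is a hypothetical
          per-stream transport callable."""
          size = -(-len(data) // n_streams)      # ceiling division
          parts = [(i * size, data[i * size:(i + 1) * size])
                   for i in range(n_streams)]
          with ThreadPoolExecutor(max_workers=n_streams) as pool:
              list(pool.map(lambda p: send_chunk(*p), parts))

      # in-memory receiver that reassembles partitions by offset
      received = bytearray(26)
      def send_chunk(offset, chunk):
          received[offset:offset + len(chunk)] = chunk

      parallel_send(bytes(range(65, 91)), 4, send_chunk)
      assert bytes(received) == bytes(range(65, 91))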

  20. Scalable parallel solution coupling for multi-physics reactor simulation.

    SciTech Connect (OSTI)

    Tautges, T. J.; Caceres, A.; Mathematics and Computer Science

    2009-01-01

    Reactor simulation depends on the coupled solution of various physics types, including neutronics, thermal/hydraulics, and structural mechanics. This paper describes the formulation and implementation of a parallel solution coupling capability being developed for reactor simulation. The coupling process consists of mesh and coupler initialization, point location, field interpolation, and field normalization. We report here our test of this capability on an example problem, namely, a reflector assembly from an advanced burner test reactor. Performance of this coupler in parallel is reasonable for the chosen problem size and range of processor counts. The runtime is dominated by startup costs, which amortize over the entire coupled simulation. Future efforts will include adding more sophisticated interpolation and normalization methods, to accommodate different numerical solvers used in various physics modules and to obtain better conservation properties for certain field types.

  1. Parallel Algorithms for Graph Optimization using Tree Decompositions

    SciTech Connect (OSTI)

    Sullivan, Blair D; Weerapurage, Dinesh P; Groer, Christopher S

    2012-06-01

    Although many NP-hard graph optimization problems can be solved in polynomial time on graphs of bounded tree-width, the adoption of these techniques into mainstream scientific computation has been limited due to the high memory requirements of the necessary dynamic programming tables and excessive runtimes of sequential implementations. This work addresses both challenges by proposing a set of new parallel algorithms for all steps of a tree decomposition-based approach to solve the maximum weighted independent set problem. A hybrid OpenMP/MPI implementation includes a highly scalable parallel dynamic programming algorithm leveraging the MADNESS task-based runtime, and computational results demonstrate scaling. This work enables a significant expansion of the scale of graphs on which exact solutions to maximum weighted independent set can be obtained, and forms a framework for solving additional graph optimization problems with similar techniques.
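
    The dynamic programming tables the abstract refers to are easiest to see in the treewidth-1 special case, where the graph is itself a tree. The sketch below solves maximum weighted independent set on a rooted tree; the tree-decomposition version generalizes the same include/exclude table from single vertices to bags of vertices. The toy tree and all names are illustrative.

      def max_weight_independent_set(children, weights, root=0):
          """Tree DP: for each node keep (best excluding it, best including it)."""
          best = {}

          def solve(v):
              exclude, include = 0, weights[v]
              for c in children.get(v, []):
                  solve(c)
                  exc_c, inc_c = best[c]
                  exclude += max(exc_c, inc_c)   # child free to be in or out
                  include += exc_c               # child must be out if v is in
              best[v] = (exclude, include)

          solve(root)
          return max(best[root])

      #      0 (w=3)
      #     /       \
      #  1 (w=4)   2 (w=5)
      print(max_weight_independent_set({0: [1, 2]}, {0: 3, 1: 4, 2: 5}))  # -> 9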

  2. Ultrafast stimulated Raman parallel adiabatic passage by shaped pulses

    SciTech Connect (OSTI)

    Dridi, G.; Guerin, S.; Hakobyan, V.; Jauslin, H. R.; Eleuch, H.

    2009-10-15

    We present a general and versatile technique of population transfer based on parallel adiabatic passage by femtosecond shaped pulses. Their amplitude and phase are specifically designed to optimize the adiabatic passage corresponding to parallel eigenvalues at all times. We show that this technique allows the robust adiabatic population transfer in a Raman system with the total pulse area as low as 3{pi}, corresponding to a fluence of one order of magnitude below the conventional stimulated Raman adiabatic passage process. This process of short duration, typically picosecond and subpicosecond, is easily implementable with the modern pulse shaper technology and opens the possibility of ultrafast robust population transfer with interesting applications in quantum information processing.

  3. Coupled Serial and Parallel Non-uniform SQUIDs

    SciTech Connect (OSTI)

    Longhini, Patrick; In, Visarath; Berggren, Susan; Palacios, Antonio; Leese de Escobar, Anna

    2011-04-19

    In this work we numerically model series and parallel non-uniform superconducting quantum interference device (SQUID) arrays. Previous work has shown that for a series SQUID array constructed with a random distribution of loop sizes (i.e., different areas for each SQUID loop), there exists a unique 'anti-peak' at zero magnetic field in the voltage versus applied magnetic field (V-B) response. Similar results extend to a parallel SQUID array, where the difference lies in the arrangement of the Josephson junctions. Other system parameters such as bias current, the number of loops, and mutual inductances are varied to demonstrate the change in dynamic range and linearity of the V-B response. Application of the SQUID array as a low noise amplifier (LNA) would increase link margins and affect the entire communication system. For unmanned aerial vehicles (UAVs), where size, weight and power are limited, the SQUID array would allow use of practical 'electrically small' antennas that provide acceptable gain.

  4. Performance evaluation of a parallel sparse lattice Boltzmann solver

    SciTech Connect (OSTI)

    Axner, L.; Bernsdorf, J.; Zeiser, T.; Lammers, P.; Linxweiler, J.; Hoekstra, A.G.

    2008-05-01

    We develop a performance prediction model for a parallelized sparse lattice Boltzmann solver and present performance results for simulations of flow in a variety of complex geometries. A special focus is on partitioning and memory/load balancing strategy for geometries with a high solid fraction and/or complex topology such as porous media, fissured rocks and geometries from medical applications. The topology of the lattice nodes representing the fluid fraction of the computational domain is mapped on a graph. Graph decomposition is performed with both multilevel recursive-bisection and multilevel k-way schemes based on modified Kernighan-Lin and Fiduccia-Mattheyses partitioning algorithms. Performance results and optimization strategies are presented for a variety of platforms, showing a parallel efficiency of almost 80% for the largest problem size. A good agreement between the performance model and experimental results is demonstrated.

  5. Performing a local reduction operation on a parallel computer

    DOE Patents [OSTI]

    Blocksome, Michael A; Faraj, Daniel A

    2013-06-04

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  6. Establishing a group of endpoints in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Blocksome, Michael A.; Ratterman, Joseph D.; Smith, Brian E.; Xue, Hanhong

    2016-02-02

    A parallel computer executes a number of tasks, each task includes a number of endpoints and the endpoints are configured to support collective operations. In such a parallel computer, establishing a group of endpoints includes: receiving a user specification of a set of endpoints included in a global collection of endpoints, where the user specification defines the set in accordance with a predefined virtual representation of the endpoints, the predefined virtual representation is a data structure setting forth an organization of tasks and endpoints included in the global collection of endpoints, and the user specification defines the set of endpoints without a user specification of a particular endpoint; and defining a group of endpoints in dependence upon the predefined virtual representation of the endpoints and the user specification.

  7. Parallel computation with adaptive methods for elliptic and hyperbolic systems

    SciTech Connect (OSTI)

    Benantar, M.; Biswas, R.; Flaherty, J.E.; Shephard, M.S.

    1990-01-01

    We consider the solution of two dimensional vector systems of elliptic and hyperbolic partial differential equations on a shared memory parallel computer. For elliptic problems, the spatial domain is discretized using a finite quadtree mesh generation procedure and the differential system is discretized by a finite element-Galerkin technique with a piecewise linear polynomial basis. Resulting linear algebraic systems are solved using the conjugate gradient technique with element-by-element and symmetric successive over-relaxation preconditioners. Stiffness matrix assembly and linear system solutions are processed in parallel with computations scheduled on noncontiguous quadrants of the tree in order to minimize process synchronization. Determining noncontiguous regions by coloring the regular finite quadtree structure is far simpler than coloring elements of the unstructured mesh that the finite quadtree procedure generates. We describe linear-time complexity coloring procedures that use six and eight colors.

  8. Parallel computation of transverse wakes in linear colliders

    SciTech Connect (OSTI)

    Zhan, Xiaowei; Ko, Kwok

    1996-11-01

    SLAC has proposed the detuned structure (DS) as one possible design to control the emittance growth of long bunch trains due to transverse wakefields in the Next Linear Collider (NLC). The DS consists of 206 cells with tapering from cell to cell of the order of few microns to provide Gaussian detuning of the dipole modes. The decoherence of these modes leads to two orders of magnitude reduction in wakefield experienced by the trailing bunch. To model such a large heterogeneous structure realistically is impractical with finite-difference codes using structured grids. The authors have calculated the wakefield in the DS on a parallel computer with a finite-element code using an unstructured grid. The parallel implementation issues are presented along with simulation results that include contributions from higher dipole bands and wall dissipation.

  9. Beam Dynamics Studies of Parallel-Bar Deflecting Cavities

    SciTech Connect (OSTI)

    S. Ahmed, G. Krafft, K. Detrick, S. Silva, J. Delayen, M. Spata, M. Tiefenback, A. Hofler, K. Beard

    2011-03-01

    We have performed three-dimensional simulations of beam dynamics for parallel-bar transverse electromagnetic mode (TEM) type RF separators: normal- and super-conducting. The compact size of these cavities as compared to conventional TM{sub 110} type structures is particularly attractive at low frequency. Highly concentrated electromagnetic fields between the parallel bars provide strong electrical stability to the beam against any mechanical disturbance. An array of six 2-cell normal conducting cavities or a one- or two-cell superconducting structure is enough to produce the required vertical displacement at the Lambertson magnet. Both the normal and super-conducting structures show very small emittance dilution due to the vertical kick of the beam.

  10. Performing a local reduction operation on a parallel computer

    SciTech Connect (OSTI)

    Blocksome, Michael A.; Faraj, Daniel A.

    2012-12-11

    A parallel computer including compute nodes, each including two reduction processing cores, a network write processing core, and a network read processing core, each processing core assigned an input buffer. Copying, in interleaved chunks by the reduction processing cores, contents of the reduction processing cores' input buffers to an interleaved buffer in shared memory; copying, by one of the reduction processing cores, contents of the network write processing core's input buffer to shared memory; copying, by another of the reduction processing cores, contents of the network read processing core's input buffer to shared memory; and locally reducing in parallel by the reduction processing cores: the contents of the reduction processing core's input buffer; every other interleaved chunk of the interleaved buffer; the copied contents of the network write processing core's input buffer; and the copied contents of the network read processing core's input buffer.

  11. Data-Parallel Mesh Connected Components Labeling and Analysis

    SciTech Connect (OSTI)

    Harrison, Cyrus; Childs, Hank; Gaither, Kelly

    2011-04-10

    We present a data-parallel algorithm for identifying and labeling the connected sub-meshes within a domain-decomposed 3D mesh. The identification task is challenging in a distributed-memory parallel setting because connectivity is transitive and the cells composing each sub-mesh may span many or all processors. Our algorithm employs a multi-stage application of the Union-find algorithm and a spatial partitioning scheme to efficiently merge information across processors and produce a global labeling of connected sub-meshes. Marking each vertex with its corresponding sub-mesh label allows us to isolate mesh features based on topology, enabling new analysis capabilities. We briefly discuss two specific applications of the algorithm and present results from a weak scaling study. We demonstrate the algorithm at concurrency levels up to 2197 cores and analyze meshes containing up to 68 billion cells.
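
    The Union-find core of the labeling step can be sketched serially in a few lines; the paper's contribution, merging label information across processors in multiple stages, is not reproduced here. Function and variable names are illustrative.

      def label_components(n_cells, adjacency):
          """Label connected sub-meshes with union-find; `adjacency` holds
          (cell_a, cell_b) pairs for cells that share a face."""
          parent = list(range(n_cells))

          def find(x):
              while parent[x] != x:
                  parent[x] = parent[parent[x]]   # path compression
                  x = parent[x]
              return x

          for a, b in adjacency:
              parent[find(a)] = find(b)           # merge the two sub-meshes

          return [find(x) for x in range(n_cells)]

      # two sub-meshes: cells {0,1,2} connected, cells {3,4} connected
      print(label_components(5, [(0, 1), (1, 2), (3, 4)]))  # -> [2, 2, 2, 4, 4]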

  12. Parallel resistivity and ohmic heating of laboratory dipole plasmas

    SciTech Connect (OSTI)

    Fox, W.

    2012-08-15

    The parallel resistivity is calculated in the long-mean-free-path regime for the dipole plasma geometry; this is shown to be a neoclassical transport problem in the limit of a small number of circulating electrons. In this regime, the resistivity is substantially higher than the Spitzer resistivity due to the magnetic trapping of a majority of the electrons. This suggests that heating the outer flux surfaces of the plasma with low-frequency parallel electric fields can be substantially more efficient than might be naively estimated. Such a skin-current heating scheme is analyzed by deriving an equation for diffusion of skin currents into the plasma, from which quantities such as the resistive skin-depth, lumped-circuit impedance, and power deposited in the plasma can be estimated. Numerical estimates indicate that this may be a simple and efficient way to couple power into experiments in this geometry.

  13. Final Report: Center for Programming Models for Scalable Parallel Computing

    SciTech Connect (OSTI)

    Mellor-Crummey, John

    2011-09-13

    As part of the Center for Programming Models for Scalable Parallel Computing, Rice University collaborated with project partners in the design, development and deployment of language, compiler, and runtime support for parallel programming models to support application development for the “leadership-class” computer systems at DOE national laboratories. Work over the course of this project has focused on the design, implementation, and evaluation of a second-generation version of Coarray Fortran. Research and development efforts of the project have focused on the CAF 2.0 language, compiler, runtime system, and supporting infrastructure. This has involved working with the teams that provide infrastructure for CAF that we rely on, implementing new language and runtime features, producing an open source compiler that enabled us to evaluate our ideas, and evaluating our design and implementation through the use of benchmarks. The report details the research, development, findings, and conclusions from this work.

  14. Methodology for Augmenting Existing Paths with Additional Parallel Transects

    SciTech Connect (OSTI)

    Wilson, John E.

    2013-09-30

    Visual Sample Plan (VSP) is sample planning software that is used, among other purposes, to plan transect sampling paths to detect areas that were potentially used for munition training. This module was developed for application on a large site where existing roads and trails were to be used as primary sampling paths. Gap areas between these primary paths needed to be found and covered with parallel transect paths. These gap areas represent areas on the site that are more than a specified distance from a primary path. These added parallel paths needed to optionally be connected together into a single path, the shortest path possible. The paths also needed to optionally be attached to existing primary paths, again with the shortest possible path. Finally, the process must be repeatable and predictable so that the same inputs (primary paths, specified distance, and path options) will result in the same set of new paths every time. This methodology was developed to meet those specifications.

  15. Buffered coscheduling for parallel programming and enhanced fault tolerance

    DOE Patents [OSTI]

    Petrini, Fabrizio; Feng, Wu-chun

    2006-01-31

    A computer implemented method schedules processor jobs on a network of parallel machine processors or distributed system processors. Control information communications generated by each process performed by each processor during a defined time interval are accumulated in buffers, where adjacent time intervals are separated by strobe intervals for a global exchange of control information. A global exchange of the control information communications at the end of each defined time interval is performed during an intervening strobe interval so that each processor is informed by all of the other processors of the number of incoming jobs to be received by each processor in a subsequent time interval. The buffered coscheduling method of this invention also enhances the fault tolerance of a network of parallel machine processors or distributed system processors.

  16. NIR SPECTROSCOPIC OBSERVATION OF MASSIVE GALAXIES IN THE PROTOCLUSTER AT z = 3.09

    SciTech Connect (OSTI)

    Kubo, Mariko; Yamada, Toru; Ichikawa, Takashi; Kajisawa, Masaru; Matsuda, Yuichi; Tanaka, Ichi

    2015-01-20

    We present the results of near-infrared spectroscopic observations of the K-band-selected candidate galaxies in the protocluster at z = 3.09 in the SSA22 field. We observed 67 candidates with K {sub AB} < 24 and confirmed redshifts of the 39 galaxies at 2.0 < z {sub spec} < 3.4. Of the 67 candidates, 24 are certainly protocluster members with 3.04 ≤ z {sub spec} ≤ 3.12, which are massive red galaxies that have been unidentified in previous optical observations of the SSA22 protocluster. Many distant red galaxies (J – K {sub AB} > 1.4), hyper extremely red objects (J – K {sub AB} > 2.1), Spitzer MIPS 24 μm sources, active galactic nuclei (AGNs) as well as the counterparts of Lyα blobs and the AzTEC/ASTE 1.1 mm sources in the SSA22 field are also found to be protocluster members. The mass of the SSA22 protocluster is estimated to be ∼2-5 × 10{sup 14} M {sub ☉}, and this system is plausibly a progenitor of the most massive clusters of galaxies in the current universe. The reddest (J – K {sub AB} ≥ 2.4) protocluster galaxies are massive galaxies with M {sub star} ∼ 10{sup 11} M {sub ☉} showing quiescent star formation activities and plausibly dominated by old stellar populations. Most of these massive quiescent galaxies host moderately luminous AGNs detected by X-ray. There are no significant differences in the [O III] λ5007/Hβ emission line ratios and [O III] λ5007 line widths and spatial extents of the protocluster galaxies from those of massive galaxies at z ∼ 2-3 in the general field.

  17. Electronically commutated serial-parallel switching for motor windings

    DOE Patents [OSTI]

    Hsu, John S.

    2012-03-27

    A method and a circuit for controlling an ac machine comprises controlling a full bridge network of commutation switches which are connected between a multiphase voltage source and the phase windings to switch the phase windings between a parallel connection and a series connection while providing commutation discharge paths for electrical current resulting from inductance in the phase windings. This provides extra torque for starting a vehicle from lower battery current.

  18. Xyce parallel electronic simulator reference guide, version 6.0.

    SciTech Connect (OSTI)

    Keiter, Eric Richard; Mei, Ting; Russo, Thomas V.; Schiek, Richard Louis; Thornquist, Heidi K.; Verley, Jason C.; Fixel, Deborah A.; Coffey, Todd Stirling; Pawlowski, Roger Patrick; Warrender, Christina E.; Baur, David G.

    2013-08-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide [1]. The focus of this document is to list, to the extent possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide [1].

  19. Xyce parallel electronic simulator reference guide, version 6.1

    SciTech Connect (OSTI)

    Keiter, Eric R; Mei, Ting; Russo, Thomas V.; Schiek, Richard Louis; Sholander, Peter E.; Thornquist, Heidi K.; Verley, Jason C.; Baur, David Gregory

    2014-03-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide [1]. The focus of this document is to list, to the extent possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide [1].

  20. Parallel and Antiparallel Interfacial Coupling in AF-FM Bilayers

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel and Antiparallel Interfacial Coupling in AF-FM Bilayers Print Cooling an antiferromagnetic-ferromagnetic bilayer in a magnetic field typically results in a remanent (zero-field) magnetization in the ferromagnet (FM) that is always in the direction of the field during cooling (positive Mrem). Strikingly, when FeF2 is the antiferromagnet (AF), cooling in a field can lead to a remanent magnetization opposite to the field (negative Mrem). A collaboration led by researchers from the Stanford

  1. Parallel and Antiparallel Interfacial Coupling in AF-FM Bilayers

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel and Antiparallel Interfacial Coupling in AF-FM Bilayers Print Cooling an antiferromagnetic-ferromagnetic bilayer in a magnetic field typically results in a remanent (zero-field) magnetization in the ferromagnet (FM) that is always in the direction of the field during cooling (positive Mrem). Strikingly, when FeF2 is the antiferromagnet (AF), cooling in a field can lead to a remanent magnetization opposite to the field (negative Mrem). A collaboration led by researchers from the Stanford

  2. Parallel and Antiparallel Interfacial Coupling in AF-FM Bilayers

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel and Antiparallel Interfacial Coupling in AF-FM Bilayers Print Cooling an antiferromagnetic-ferromagnetic bilayer in a magnetic field typically results in a remanent (zero-field) magnetization in the ferromagnet (FM) that is always in the direction of the field during cooling (positive Mrem). Strikingly, when FeF2 is the antiferromagnet (AF), cooling in a field can lead to a remanent magnetization opposite to the field (negative Mrem). A collaboration led by researchers from the Stanford

  3. Open parabosonic string theory between two parallel Dp-branes

    SciTech Connect (OSTI)

    Hamam, D.; Belaloui, N.

    2012-06-27

    We investigate an open parabosonic string theory between two parallel Dp-branes. The spectrum is constructed and the partition function is derived. A correspondence between the development of the partition function and the degeneracy of states at each mass level is obtained. The theory is consistent and tachyon-free. The Virasoro algebra is derived and compared to that of the ordinary case.

  4. Dynamic and Adaptive Parallel Programming for Exascale Research | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Leadership Computing Facility Sample multi-resolution adaptive decomposition of a function and flow of data associated with compression of the representation. The example is in one dimension but practical applications are typically in three, four, five and even six dimensions. Robert Harrison, Stony Brook University Dynamic and Adaptive Parallel Programming for

  5. Dynamic and Adaptive Parallel Programming for Exascale Research | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Leadership Computing Facility calculation A neutron wave-function for a benchmark calculation using a Skyrme functional for the Hartree-Fock-Bogoliubov equation in nuclear physics is solved using an extension of MADNESS, MADNESS-HFB, in coordinate space. Credit: George Fann, Oak Ridge National Laboratory Dynamic and Adaptive Parallel Programming for Exascale Research PI Name: Robert Harrison PI Email: rjharrison@gmail.com Institution: Brookhaven National Laboratory Allocation Program: INCITE

  6. Dynamic and Adaptive Parallel Programming for Exascale Research | Argonne

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Leadership Computing Facility calculation using a Skyrme functional for the Hartree-Fock-Bogoliubov equation in nuclear physics is solved using an extension of MADNESS, MADNESS-HFB, in coordinate space. George Fann, Oak Ridge National Laboratory Dynamic and Adaptive Parallel Programming for Exascale Research PI Name: Robert Harrison PI Email: rjharrison@gmail.com Institution: Stony Brook University Allocation Program: INCITE Allocation Hours at ALCF: 20 Million Year: 2016 Research Domain:

  7. Xyce Parallel Electronic Simulator : reference guide, version 2.0.

    SciTech Connect (OSTI)

    Hoekstra, Robert John; Waters, Lon J.; Rankin, Eric Lamont; Fixel, Deborah A.; Russo, Thomas V.; Keiter, Eric Richard; Hutchinson, Scott Alan; Pawlowski, Roger Patrick; Wix, Steven D.

    2004-06-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users' Guide. The focus of this document is to list, to the extent possible, device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users' Guide.

  8. Scalable System Software for Parallel Programming | Argonne Leadership

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Facility Streamlines from an early time step of the Rayleigh-Taylor instability depend on scalable storage, communication, and data analysis algorithms developed at extreme scale using INCITE resources. Tom Peterka Scalable System Software for Parallel Programming PI Name: Robert Latham PI Email: robl@mcs.anl.gov Institution: Argonne National Laboratory Allocation Program: INCITE Allocation Hours at ALCF: 20 Million Year: 2013 Research Domain: Computer Science The purpose of this

  9. Scalable System Software for Parallel Programming | Argonne Leadership

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Facility depend on scalable storage, communication, and data analysis algorithms developed at extreme scale using INCITE resources. Tom Peterka, Argonne National Laboratory Scalable System Software for Parallel Programming PI Name: Robert Latham PI Email: robl@mcs.anl.gov Institution: Argonne National Laboratory Allocation Program: INCITE Allocation Hours at ALCF: 25 Million Year: 2014 Research Domain: Computer Science As hardware complexity in Leadership Class Facility systems

  10. Scalable System Software for Parallel Programming | Argonne Leadership

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Computing Facility Streamlines from an early time step of the Rayleigh-Taylor instability depend on scalable storage, communication, and data analysis algorithms developed at extreme scale using INCITE resources. Credit: Tom Peterka, Argonne National Laboratory Scalable System Software for Parallel Programming PI Name: Robert Latham PI Email: robl@mcs.anl.gov Institution: Argonne National Laboratory Allocation Program: INCITE Allocation Hours at ALCF: 25 Million Year: 2015 Research Domain:

  11. Apply for the Parallel Computing Summer Research Internship

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    How to Apply: Apply for the Parallel Computing Summer Research Internship. Creating next-generation leaders in HPC research and applications development. Program Co-Lead Robert (Bob) Robey Email Program Co-Lead Gabriel Rockefeller Email Program Co-Lead Hai Ah Nam Email Professional Staff Assistant Nicole Aguilar Garcia (505) 665-3048 Email The current application deadline is February 5, 2016, with notification by early March 2016. Who can apply? Upper-division undergraduate students and early graduate

  12. Multithreaded processor architecture for parallel symbolic computation. Technical report

    SciTech Connect (OSTI)

    Fujita, T.

    1987-09-01

    This paper describes the Multilisp Architecture for Symbolic Applications (MASA), which is a multithreaded processor architecture for parallel symbolic computation with various features intended for effective Multilisp program execution. The principal mechanisms exploited for this processor are multiple contexts, interleaved pipeline execution from separate instruction streams, and synchronization based on a bit in each memory cell. The tagged architecture approach is taken for Lisp program execution, and trap conditions are provided for future object manipulation and garbage collection.

  13. Xyce Parallel Electronic Simulator : reference guide, version 4.1.

    SciTech Connect (OSTI)

    Mei, Ting; Rankin, Eric Lamont; Thornquist, Heidi K.; Santarelli, Keith R.; Fixel, Deborah A.; Coffey, Todd Stirling; Russo, Thomas V.; Schiek, Richard Louis; Keiter, Eric Richard; Pawlowski, Roger Patrick

    2009-02-01

    This document is a reference guide to the Xyce Parallel Electronic Simulator, and is a companion document to the Xyce Users Guide. The focus of this document is to list, as exhaustively as possible, the device parameters, solver options, parser options, and other usage details of Xyce. This document is not intended to be a tutorial. Users who are new to circuit simulation are better served by the Xyce Users Guide.

  14. A Parallel Stochastic Framework for Reservoir Characterization and History Matching

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Thomas, Sunil G.; Klie, Hector M.; Rodriguez, Adolfo A.; Wheeler, Mary F.

    2011-01-01

    The spatial distribution of parameters that characterize the subsurface is never known to any reasonable level of accuracy required to solve the governing PDEs of multiphase flow or species transport through porous media. This paper presents a numerically cheap, yet efficient, accurate and parallel framework to estimate reservoir parameters, for example, medium permeability, using sensor information from measurements of the solution variables such as phase pressures, phase concentrations, fluxes, and seismic and well log data. Numerical results are presented to demonstrate the method.

  15. Parallel Element Agglomeration Algebraic Multigrid and Upscaling Library

    Energy Science and Technology Software Center (OSTI)

    2015-02-19

    ParELAG is a parallel distributed-memory C++ library for numerical upscaling of finite element discretizations. It provides optimal-complexity algorithms to build multilevel hierarchies and solvers that can be used for solving a wide class of partial differential equations (elliptic, hyperbolic, saddle point problems) on general unstructured meshes (under the assumption that the topology of the agglomerated entities is correct). Additionally, a novel multilevel solver for saddle point problems with divergence constraint is implemented.

  16. PArallel Reacting Multiphase FLOw Computational Fluid Dynamic Analysis

    Energy Science and Technology Software Center (OSTI)

    2002-06-01

    PARMFLO is a parallel multiphase reacting flow computational fluid dynamics (CFD) code. It can perform steady or unsteady simulations in three space dimensions. It is intended for use in engineering CFD analysis of industrial flow system components. Its parallel processing capabilities allow it to be applied to problems that use at least an order of magnitude more computational cells than the number that can be used on a typical single-processor workstation (about 10⁶ cells in parallel processing mode versus about 10⁵ cells in serial processing mode). Alternately, by spreading the work of a CFD problem that could be run on a single workstation over a group of computers on a network, it can bring the runtime down by an order of magnitude or more (typically from many days to less than one day). The software was implemented using the industry-standard Message-Passing Interface (MPI) and domain decomposition in one spatial direction. The phases of a flow problem may include an ideal gas mixture with an arbitrary number of chemical species, and dispersed droplet and particle phases. Regions of porous media may also be included within the domain. The porous media may be packed beds, foams, or monolith catalyst supports. With these features, the code is especially suited to analysis of mixing of reactants in the inlet chamber of catalytic reactors coupled to computation of product yields that result from the flow of the mixture through the catalyst-coated support structure.
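
    The one-dimensional domain decomposition PARMFLO uses can be sketched with a minimal halo-exchange step. The sketch below is illustrative only and independent of PARMFLO's actual implementation: the field, sizes, and update rule are hypothetical, and mpi4py stands in for whatever message-passing layer the code actually uses.

```python
# Minimal sketch of 1-D slab domain decomposition with halo exchange,
# in the spirit of an MPI decomposition in one spatial direction
# (illustrative only; field, sizes, and update rule are hypothetical).
import numpy as np
from mpi4py import MPI

comm = MPI.COMM_WORLD
rank, size = comm.Get_rank(), comm.Get_size()

nx_global = 64                 # hypothetical cell count; assume size divides it
nx_local = nx_global // size
u = np.zeros(nx_local + 2)     # +2 ghost cells at the slab faces
u[1:-1] = rank                 # hypothetical initial data

left = rank - 1 if rank > 0 else MPI.PROC_NULL
right = rank + 1 if rank < size - 1 else MPI.PROC_NULL

# Exchange ghost cells with neighbours (PROC_NULL ends are no-ops).
comm.Sendrecv(sendbuf=u[1:2], dest=left, recvbuf=u[-1:], source=right)
comm.Sendrecv(sendbuf=u[-2:-1], dest=right, recvbuf=u[0:1], source=left)

# Hypothetical explicit smoothing update using the fresh ghost values.
u[1:-1] = 0.5 * (u[:-2] + u[2:])
```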

  17. Identifying failure in a tree network of a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J.; Pinnow, Kurt W.; Wallenfelt, Brian P.

    2010-08-24

    Methods, parallel computers, and products are provided for identifying failure in a tree network of a parallel computer. The parallel computer includes one or more processing sets including an I/O node and a plurality of compute nodes. For each processing set embodiments include selecting a set of test compute nodes, the test compute nodes being a subset of the compute nodes of the processing set; measuring the performance of the I/O node of the processing set; measuring the performance of the selected set of test compute nodes; calculating a current test value in dependence upon the measured performance of the I/O node of the processing set, the measured performance of the set of test compute nodes, and a predetermined value for I/O node performance; and comparing the current test value with a predetermined tree performance threshold. If the current test value is below the predetermined tree performance threshold, embodiments include selecting another set of test compute nodes. If the current test value is not below the predetermined tree performance threshold, embodiments include selecting from the test compute nodes one or more potential problem nodes and testing individually potential problem nodes and links to potential problem nodes.
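
    The decision loop the patent describes can be sketched directly. In the toy sketch below the measurement functions and the formula that combines them into a test value are hypothetical stand-ins; the patent leaves those details implementation-defined.

```python
# Toy sketch of the tree-network health test described above, mirroring
# the abstract's control flow. measure_* and the test-value formula are
# hypothetical placeholders.
import random

def measure_io_node():
    return random.uniform(0.8, 1.2)          # hypothetical I/O node throughput

def measure_test_set(nodes):
    return sum(random.uniform(0.8, 1.2) for _ in nodes) / len(nodes)

def identify_failure(compute_nodes, expected_io=1.0, threshold=0.95, k=4):
    remaining = list(compute_nodes)
    while remaining:
        test_set, remaining = remaining[:k], remaining[k:]
        test_value = measure_test_set(test_set) * measure_io_node() / expected_io
        if test_value < threshold:
            continue                 # below threshold: select another test set
        # Not below threshold: single out and individually probe suspects.
        return [n for n in test_set if measure_test_set([n]) < threshold]
    return []

random.seed(2)
print(identify_failure(range(16)))
```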

  18. Improved parallel solution techniques for the integral transport matrix method

    SciTech Connect (OSTI)

    Zerr, Robert J; Azmy, Yousry Y

    2010-11-23

    Alternative solution strategies to the parallel block Jacobi (PBJ) method for the solution of the global problem with the integral transport matrix method operators have been designed and tested. The most straightforward improvement to the Jacobi iterative method is the Gauss-Seidel alternative. The parallel red-black Gauss-Seidel (PGS) algorithm can improve on the number of iterations and reduce work per iteration by applying an alternating red-black color-set to the subdomains and assigning multiple sub-domains per processor. A parallel GMRES(m) method was implemented as an alternative to stationary iterations. Computational results show that the PGS method can improve on the PBJ method execution time by up to approximately 50% when eight sub-domains per processor are used. However, compared to traditional source iterations with diffusion synthetic acceleration, it is still approximately an order of magnitude slower. The best-performing cases are optically thick, because the sub-domains decouple, yielding faster convergence. Further tests revealed that 64 sub-domains per processor was the best-performing level of sub-domain division. An acceleration technique that improves the convergence rate would greatly improve the ITMM. The GMRES(m) method with a diagonal block preconditioner consumes approximately the same time as the PBJ solver, but could be improved by an as yet undeveloped, more efficient preconditioner.
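
    The red-black idea is easy to see on a toy problem: coloring the unknowns lets each color update in parallel while still consuming the other color's freshly updated values, Gauss-Seidel style. The 1-D Poisson problem below is a stand-in for the ITMM sub-domain iteration, not the actual transport operators.

```python
# Toy comparison of Jacobi vs. red-black Gauss-Seidel sweeps on a 1-D
# Poisson problem -u'' = f with zero boundary values (illustrative only).
import numpy as np

n = 64
f = np.ones(n)
h2 = (1.0 / (n + 1)) ** 2

def jacobi_sweep(u):
    new = u.copy()
    new[1:-1] = 0.5 * (u[:-2] + u[2:] + h2 * f[1:-1])
    return new

def red_black_sweep(u):
    u = u.copy()
    # Red points (odd indices) update first, in parallel with each other...
    u[1:-1:2] = 0.5 * (u[0:-2:2] + u[2::2] + h2 * f[1:-1:2])
    # ...then black points (even indices) use the fresh red values.
    u[2:-1:2] = 0.5 * (u[1:-2:2] + u[3::2] + h2 * f[2:-1:2])
    return u

for sweep, name in [(jacobi_sweep, "Jacobi"), (red_black_sweep, "red-black GS")]:
    u = np.zeros(n)
    for it in range(10000):
        nxt = sweep(u)
        if np.max(np.abs(nxt - u)) < 1e-8:
            break
        u = nxt
    print(f"{name}: converged in {it} sweeps")
```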

  19. Parallel Computation of the Regional Ocean Modeling System (ROMS)

    SciTech Connect (OSTI)

    Wang, P; Song, Y T; Chao, Y; Zhang, H

    2005-04-05

    The Regional Ocean Modeling System (ROMS) is a regional ocean general circulation modeling system solving the free surface, hydrostatic, primitive equations over varying topography. It is free software distributed world-wide for studying both complex coastal ocean problems and the basin-to-global scale ocean circulation. The original ROMS code could only be run on shared-memory systems. With the increasing need to simulate larger model domains with finer resolutions and on a variety of computer platforms, there is a need in the ocean-modeling community to have a ROMS code that can be run on any parallel computer ranging from 10 to hundreds of processors. Recently, we have explored parallelization for ROMS using the MPI programming model. In this paper, an efficient parallelization strategy for such a large-scale scientific software package, based on an existing shared-memory computing model, is presented. In addition, scientific applications and data-performance issues on a couple of SGI systems, including Columbia, the world's third-fastest supercomputer, are discussed.

  20. Generating unstructured nuclear reactor core meshes in parallel

    SciTech Connect (OSTI)

    Jain, Rajeev; Tautges, Timothy J.

    2014-10-24

    Recent advances in supercomputers and parallel solver techniques have enabled users to run large simulation problems using millions of processors. Techniques for multiphysics nuclear reactor core simulations are under active development in several countries. Most of these techniques require large unstructured meshes that can be hard to generate on standalone desktop computers because of high memory requirements, limited processing power, and other complexities. We have previously reported on a hierarchical lattice-based approach for generating reactor core meshes. Here, we describe efforts to exploit coarse-grained parallelism during reactor assembly and reactor core mesh generation processes. We highlight several reactor core examples including a very high temperature reactor, a full-core model of the Japanese MONJU reactor, a ¼ pressurized water reactor core, the fast reactor Experimental Breeder Reactor-II core with a XX09 assembly, and an advanced breeder test reactor core. The times required to generate large mesh models, along with speedups obtained from running these problems in parallel, are reported. A graphical user interface to the tools described here has also been developed.

  1. Composing Data Parallel Code for a SPARQL Graph Engine

    SciTech Connect (OSTI)

    Castellana, Vito G.; Tumeo, Antonino; Villa, Oreste; Haglin, David J.; Feo, John

    2013-09-08

    Big data analytics processes large amounts of data to extract knowledge from them. Semantic databases are big data applications that adopt the Resource Description Framework (RDF) to structure metadata through a graph-based representation. The graph-based representation provides several benefits, such as the possibility to perform in-memory processing with large amounts of parallelism. SPARQL is a language used to perform queries on RDF-structured data through graph matching. In this paper we present a tool that automatically translates SPARQL queries to parallel graph crawling and graph matching operations. The tool also supports complex SPARQL constructs, which require more than basic graph matching for their implementation. The tool generates parallel code annotated with OpenMP pragmas for x86 shared-memory multiprocessors (SMPs). With respect to commercial database systems such as Virtuoso, our approach reduces the memory occupation due to join operations and provides higher performance. We show the scaling of the automatically generated graph-matching code on a 48-core SMP.
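
    The translation of a declarative query into graph matching can be illustrated with a tiny triple store. The data, query, and sequential join loop below are invented for illustration; the actual tool emits OpenMP-annotated parallel code rather than Python.

```python
# Toy illustration of turning a SPARQL-like query into graph matching:
# two triple patterns joined on ?city over a hypothetical triple store.
TRIPLES = {
    ("alice", "livesIn", "paris"), ("bob", "livesIn", "oslo"),
    ("paris", "locatedIn", "EU"), ("oslo", "locatedIn", "EEA"),
}

def match(pattern, binding):
    """Yield bindings extending `binding` that satisfy one (s, p, o) pattern."""
    for triple in TRIPLES:
        b = dict(binding)
        for pat, val in zip(pattern, triple):
            if pat.startswith("?"):
                if b.setdefault(pat, val) != val:
                    break            # conflicts with an earlier binding
            elif pat != val:
                break                # constant term does not match
        else:
            yield b

query = [("?person", "livesIn", "?city"), ("?city", "locatedIn", "EU")]
results = [{}]
for pattern in query:                # each join step crawls the graph
    results = [b2 for b in results for b2 in match(pattern, b)]
print(results)                       # [{'?person': 'alice', '?city': 'paris'}]
```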

  2. Design and performance of a scalable, parallel statistics toolkit.

    SciTech Connect (OSTI)

    Thompson, David C.; Bennett, Janine Camille; Pebay, Philippe Pierre

    2010-11-01

    Most statistical software packages implement a broad range of techniques but do so in an ad hoc fashion, leaving users who do not have a broad knowledge of statistics at a disadvantage, since they may not understand all the implications of a given analysis or how to test the validity of results. These packages are also largely serial in nature, or target multicore architectures instead of distributed-memory systems, or provide only a small number of statistics in parallel. This paper surveys a collection of parallel implementations of statistics algorithms developed as part of a common framework over the last three years. The framework strategically groups modeling techniques with associated verification and validation techniques to make the underlying assumptions of the statistics more clear. Furthermore, it employs a design pattern specifically targeted for distributed-memory parallelism, where architectural advances in large-scale high-performance computing have been focused. Moment-based statistics (which include descriptive, correlative, and multicorrelative statistics, principal component analysis (PCA), and k-means statistics) scale nearly linearly with the data set size and number of processes. Entropy-based statistics (which include order and contingency statistics) do not scale well when the data in question is continuous or quasi-diffuse, but do scale well when the data is discrete and compact. We confirm and extend our earlier results by now establishing near-optimal scalability with up to 10,000 processes.
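
    The scaling claim for moment-based statistics rests on the fact that partial moments computed on disjoint data blocks combine exactly via pairwise update formulas. A minimal sketch follows; the block count and data are invented, and this is not the toolkit's API.

```python
# Sketch of why moment-based statistics parallelize: (count, mean, M2)
# aggregates from disjoint blocks combine exactly, so blocks can be
# processed independently and merged in a reduction tree.
from functools import reduce
import numpy as np

def local_moments(x):
    return len(x), x.mean(), ((x - x.mean()) ** 2).sum()   # (n, mean, M2)

def combine(a, b):
    na, ma, m2a = a
    nb, mb, m2b = b
    n = na + nb
    delta = mb - ma
    return n, ma + delta * nb / n, m2a + m2b + delta**2 * na * nb / n

rng = np.random.default_rng(0)
data = rng.normal(size=100_000)
blocks = np.array_split(data, 16)        # stand-ins for 16 processes

n, mean, m2 = reduce(combine, map(local_moments, blocks))
assert np.isclose(mean, data.mean()) and np.isclose(m2 / n, data.var())
```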

  3. Parallel garbage collection without synchronization overhead. Technical report

    SciTech Connect (OSTI)

    Patel, J.H.

    1984-08-01

    Incremental garbage-collection schemes incur substantial overhead that is directly translated as reduced execution efficiency for the user. Parallel garbage-collection schemes implemented via time-slicing on a serial processor also incur this overhead, which might even be aggravated due to context switching. It is useful, therefore, to examine the possibility of implementing a parallel garbage-collection algorithm using a separate processor operating asynchronously with the main-list processor. The overhead in such a scheme arises from the synchronization necessary to manage the two processors, maintaining memory consistency. In this paper, the authors present an architecture and supporting parallel garbage-collection algorithms designed for a virtual memory system with separate processors for list processing and for garbage collection. Each processor has its own primary memory; in addition, there is a small common memory which both processors may access. Individual memories swap off a common secondary memory, but no locking mechanism is required. In particular, a page may reside in both memories simultaneously, and indeed may be accessed and modified freely by each processor. A secondary memory controller ensures consistency without necessitating numerous lockouts on the pages.

  4. Parallel Computing Environments and Methods for Power Distribution System Simulation

    SciTech Connect (OSTI)

    Lu, Ning; Taylor, Zachary T.; Chassin, David P.; Guttromson, Ross T.; Studham, Scott S.

    2005-11-10

    The development of cost-effective high-performance parallel computing on multi-processor supercomputers makes it attractive to port excessively time-consuming simulation software from personal computers (PCs) to supercomputers. The power distribution system simulator (PDSS) takes a bottom-up approach and simulates load at the appliance level, where detailed thermal models for appliances are used. This approach works well for a small power distribution system consisting of a few thousand appliances. When the number of appliances increases, the simulation uses up the PC memory and its run time increases to a point where the approach is no longer feasible for modeling a practical large power distribution system. This paper presents an effort made to port a PC-based power distribution system simulator (PDSS) to a 128-processor shared-memory supercomputer. The paper offers an overview of the parallel computing environment and a description of the modifications made to the PDSS model. The performance of the PDSS running on a standalone PC and on the supercomputer is compared. Future research directions for utilizing parallel computing in power distribution system simulation are also addressed.

  5. Generating unstructured nuclear reactor core meshes in parallel

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Jain, Rajeev; Tautges, Timothy J.

    2014-10-24

    Recent advances in supercomputers and parallel solver techniques have enabled users to run large simulation problems using millions of processors. Techniques for multiphysics nuclear reactor core simulations are under active development in several countries. Most of these techniques require large unstructured meshes that can be hard to generate on standalone desktop computers because of high memory requirements, limited processing power, and other complexities. We have previously reported on a hierarchical lattice-based approach for generating reactor core meshes. Here, we describe efforts to exploit coarse-grained parallelism during reactor assembly and reactor core mesh generation processes. We highlight several reactor core examples including a very high temperature reactor, a full-core model of the Japanese MONJU reactor, a ¼ pressurized water reactor core, the fast reactor Experimental Breeder Reactor-II core with a XX09 assembly, and an advanced breeder test reactor core. The times required to generate large mesh models, along with speedups obtained from running these problems in parallel, are reported. A graphical user interface to the tools described here has also been developed.

  6. Automatic Thread-Level Parallelization in the Chombo AMR Library

    SciTech Connect (OSTI)

    Christen, Matthias; Keen, Noel; Ligocki, Terry; Oliker, Leonid; Shalf, John; Van Straalen, Brian; Williams, Samuel

    2011-05-26

    Increasing on-chip parallelism has substantial implications for HPC applications. Currently, hybrid programming models (typically MPI+OpenMP) are employed for mapping software to the hardware in order to leverage the hardware's architectural features. In this paper, we present an approach that automatically introduces thread-level parallelism into Chombo, a parallel adaptive mesh refinement framework for finite-difference-type PDE solvers. In Chombo, core algorithms are specified in ChomboFortran, a macro language extension to F77 that is part of the Chombo framework. This domain-specific language provides a natural target for automatically migrating the large number of existing algorithms to a hybrid MPI+OpenMP implementation. It also provides access to an auto-tuning methodology that enables tuning certain aspects of an algorithm to hardware characteristics. Performance measurements are presented for a few of the most relevant kernels with respect to a specific application benchmark using this technique, as well as benchmark results for the entire application. The kernel benchmarks show that, using auto-tuning, up to a factor of 11 in performance was gained with 4 threads with respect to the serial reference implementation.

  7. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-07

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  8. Fencing network direct memory access data transfers in a parallel active messaging interface of a parallel computer

    DOE Patents [OSTI]

    Blocksome, Michael A.; Mamidala, Amith R.

    2015-07-14

    Fencing direct memory access (`DMA`) data transfers in a parallel active messaging interface (`PAMI`) of a parallel computer, the PAMI including data communications endpoints, each endpoint including specifications of a client, a context, and a task, the endpoints coupled for data communications through the PAMI and through DMA controllers operatively coupled to a deterministic data communications network through which the DMA controllers deliver data communications deterministically, including initiating execution through the PAMI of an ordered sequence of active DMA instructions for DMA data transfers between two endpoints, effecting deterministic DMA data transfers through a DMA controller and the deterministic data communications network; and executing through the PAMI, with no FENCE accounting for DMA data transfers, an active FENCE instruction, the FENCE instruction completing execution only after completion of all DMA instructions initiated prior to execution of the FENCE instruction for DMA data transfers between the two endpoints.

  9. Xyce Parallel Electronic Simulator Users Guide Version 6.4

    SciTech Connect (OSTI)

    Keiter, Eric R.; Mei, Ting; Russo, Thomas V.; Schiek, Richard; Sholander, Peter E.; Thornquist, Heidi K.; Verley, Jason; Baur, David Gregory

    2015-12-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; a differential-algebraic-equation (DAE) formulation, which better isolates the device model package from solver algorithms and allows one to develop new types of analysis without requiring the implementation of analysis-specific device models; device models that are specifically tailored to meet Sandia's needs, including some radiation-aware devices (for Sandia users only); and object-oriented code design and implementation using modern coding practices. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on a wide range of computing platforms. These include serial, shared-memory and distributed-memory parallel platforms. Attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows.

  10. Xyce Parallel Electronic Simulator - Users' Guide Version 2.1.

    SciTech Connect (OSTI)

    Hutchinson, Scott A; Hoekstra, Robert J.; Russo, Thomas V.; Rankin, Eric; Pawlowski, Roger P.; Fixel, Deborah A; Schiek, Richard; Bogdan, Carolyn W.; Shirley, David N.; Campbell, Phillip M.; Keiter, Eric R.

    2005-06-01

    This manual describes the use of the Xyce Parallel Electronic Simulator. Xyce has been designed as a SPICE-compatible, high-performance analog circuit simulator, and has been written to support the simulation needs of the Sandia National Laboratories electrical designers. This development has focused on improving capability over the current state-of-the-art in the following areas: capability to solve extremely large circuit problems by supporting large-scale parallel computing platforms (up to thousands of processors), including support for most popular parallel and serial computers; improved performance for all numerical kernels (e.g., time integrator, nonlinear and linear solvers) through state-of-the-art algorithms and novel techniques; device models which are specifically tailored to meet Sandia's needs, including many radiation-aware devices; and object-oriented code design and implementation using modern coding practices that ensure that the Xyce Parallel Electronic Simulator will be maintainable and extensible far into the future. Xyce is a parallel code in the most general sense of the phrase, a message-passing parallel implementation, which allows it to run efficiently on the widest possible number of computing platforms. These include serial, shared-memory and distributed-memory parallel as well as heterogeneous platforms. Careful attention has been paid to the specific nature of circuit-simulation problems to ensure that optimal parallel efficiency is achieved as the number of processors grows. The development of Xyce provides a platform for computational research and development aimed specifically at the needs of the Laboratory. With Xyce, Sandia has an "in-house" capability with which both new electrical (e.g., device model development) and algorithmic (e.g., faster time-integration methods, parallel solver algorithms) research and development can be performed. As a result, Xyce is a unique electrical simulation capability, designed to

  11. Optimizing Parallel Access to the BaBar Database System Using...

    Office of Scientific and Technical Information (OSTI)

    Title: Optimizing Parallel Access to the BaBar Database System Using CORBA Servers

  12. cray-hdf5-parallel/1.8.13 garbling integers in intel environment

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    cray-hdf5-parallel/1.8.13 garbling integers in Intel environment. September 11, 2014. This problem was fixed on 11...

  13. Parallel I/O Software Infrastructure for Large-Scale Systems

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Parallel I/O Software Infrastructure for Large-Scale Systems. An illustration of how MPI-IO file domain...

  14. Final Report: Migration Mechanisms for Large-scale Parallel Applications

    SciTech Connect (OSTI)

    Jason Nieh

    2009-10-30

    Process migration is the ability to transfer a process from one machine to another. It is a useful facility in distributed computing environments, especially as computing devices become more pervasive and Internet access becomes more ubiquitous. The potential benefits of process migration, among others, are fault resilience by migrating processes off of faulty hosts, data access locality by migrating processes closer to the data, better system response time by migrating processes closer to users, dynamic load balancing by migrating processes to less loaded hosts, and improved service availability and administration by migrating processes before host maintenance so that applications can continue to run with minimal downtime. Although process migration provides substantial potential benefits and many approaches have been considered, achieving transparent process migration functionality has been difficult in practice. To address this problem, our work has designed, implemented, and evaluated new and powerful transparent process checkpoint-restart and migration mechanisms for desktop, server, and parallel applications that operate across heterogeneous cluster and mobile computing environments. A key aspect of this work has been to introduce lightweight operating system virtualization to provide processes with private, virtual namespaces that decouple and isolate processes from dependencies on the host operating system instance. This decoupling enables processes to be transparently checkpointed and migrated without modifying, recompiling, or relinking applications or the operating system. Building on this lightweight operating system virtualization approach, we have developed novel technologies that enable (1) coordinated, consistent checkpoint-restart and migration of multiple processes, (2) fast checkpointing of process and file system state to enable restart of multiple parallel execution environments and time travel, (3) process migration across heterogeneous

  15. Topologically Massive Yang-Mills field on the Null-Plane: A Hamilton-Jacobi approach

    SciTech Connect (OSTI)

    Bertin, M. C.; Pimentel, B. M.; Valcarcel, C. E.; Zambrano, G. E. R.

    2010-11-12

    Non-abelian gauge theories are super-renormalizable in 2+1 dimensions and suffer from infrared divergences. These divergences can be avoided by adding a Chern-Simons term, i.e., building a Topologically Massive Theory. In this sense, we are interested in the study of the Topologically Massive Yang-Mills theory on the Null-Plane. Since this is a gauge theory, we need to analyze its constraint structure which is done with the Hamilton-Jacobi formalism. We are able to find the complete set of Hamiltonian densities, and build the Generalized Brackets of the theory. With the GB we obtain a set of involutive Hamiltonian densities, generators of the evolution of the system.

  16. Time parallelization of advanced operation scenario simulations of ITER plasma

    SciTech Connect (OSTI)

    Samaddar, D.; Casper, T. A.; Kim, S. H.; Berry, Lee A; Elwasif, Wael R; Batchelor, Donald B; Houlberg, Wayne A

    2013-01-01

    This work demonstrates that simulations of advanced burning plasma operation scenarios can be successfully parallelized in time using the parareal algorithm. CORSICA, an advanced operation scenario code for tokamak plasmas, is used as a test case. This is a unique application, since the parareal algorithm has so far been applied to much simpler systems, with the exception of turbulence. In the present application, a computational gain of an order of magnitude has been achieved, which is extremely promising. A successful implementation of the parareal algorithm in codes like CORSICA ushers in the possibility of time-efficient simulations of ITER plasmas.
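
    A toy version of the parareal iteration conveys the idea: a cheap coarse propagator sweeps serially while the expensive fine propagator can run concurrently across all time slices. The ODE, step counts, and propagators below are invented for illustration and have nothing to do with CORSICA itself.

```python
# Toy parareal iteration for du/dt = -u. The coarse solver G is one
# Euler step per slice; the fine solver F takes many small steps.
import numpy as np

T, N, u0 = 2.0, 10, 1.0
dt = T / N

def G(u):                        # cheap coarse propagator over one slice
    return u * (1.0 - dt)

def F(u, substeps=100):          # expensive fine propagator over one slice
    h = dt / substeps
    for _ in range(substeps):
        u = u * (1.0 - h)
    return u

U = np.empty(N + 1); U[0] = u0
for n in range(N):               # serial coarse pass seeds the iteration
    U[n + 1] = G(U[n])

for k in range(5):               # parareal corrections
    Fu = np.array([F(U[n]) for n in range(N)])   # parallelizable in time
    Unew = np.empty_like(U); Unew[0] = u0
    for n in range(N):
        Unew[n + 1] = G(Unew[n]) + Fu[n] - G(U[n])
    U = Unew

print("parareal:", U[-1], " exact:", np.exp(-T))
```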

  17. Scripts for Scalable Monitoring of Parallel Filesystem Infrastructure

    Energy Science and Technology Software Center (OSTI)

    2014-02-27

    Scripts for scalable monitoring of parallel filesystem infrastructure provide frameworks for monitoring the health of block storage arrays and large InfiniBand fabrics. The block storage framework uses Python multiprocessing so that the number of monitored arrays scales with the number of processors in the system. This enables live monitoring of HPC-scale filesystems with 10-50 storage arrays. For InfiniBand monitoring, scripts are included that monitor the InfiniBand health of each host, along with visualization tools for mapping complex fabric topologies.
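
    The multiprocessing pattern described can be sketched in a few lines; poll_array() and the host names below are hypothetical placeholders, not the actual monitoring scripts.

```python
# Minimal sketch of the described pattern: poll many storage arrays
# concurrently, one worker process per CPU (placeholders throughout).
import multiprocessing as mp

ARRAYS = [f"array{i:02d}.example.net" for i in range(32)]

def poll_array(host):
    # Real scripts would query the array controller here (SSH, SNMP, ...).
    return host, "OK"

if __name__ == "__main__":
    with mp.Pool(processes=mp.cpu_count()) as pool:
        for host, status in pool.imap_unordered(poll_array, ARRAYS):
            print(f"{host}: {status}")
```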

  18. Parallel Paradigm for Ultraparallel Multi-Scale Brain Blood Flow

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Simulations | Argonne Leadership Computing Facility. Parallel Paradigm for Ultraparallel Multi-Scale Brain Blood Flow Simulations. Authors: Grinberg, L., Karniadakis, G.E. In this paper we present one approach to building a scalable solver, NekTarG, for the solution of multi-scale, large-size problems [1]. NekTarG has been designed for multi-scale blood modeling. The macro-vascular scales describing the flow dynamics in large vessels are coupled to the meso-vascular scales unfolding dynamics of

  19. Implementation of a parallel multilevel secure process. Master's thesis

    SciTech Connect (OSTI)

    Pratt, D.R.

    1988-06-01

    This thesis demonstrates an implementation of a parallel multilevel secure process. This is done within the framework of an electronic-mail system. Security is implemented by GEMSOS, the operating system of the Gemini Trusted Computer Base. A brief history of computer secrecy is followed by a discussion of security kernels. Event counts and sequences are used to provide concurrency control and are covered in detail. The specifications for the system are based upon the requirements for a Headquarters of a hypothetical Marine Battalion in garrison.

  20. Computing NLTE Opacities -- Node Level Parallel Calculation

    SciTech Connect (OSTI)

    Holladay, Daniel

    2015-09-11

    Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities inline, with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability and compute opacities; study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Portability to multiple types of hardware, including multicore processors, manycore processors such as KNL, GPUs, etc. Easily coupled to radiation hydrodynamics and thermal radiative transfer codes.

  1. Parallel optics technology assessment for the versatile link project

    SciTech Connect (OSTI)

    Chramowicz, J.; Kwan, S.; Rivera, R.; Prosser, A.; /Fermilab

    2011-01-01

    This poster describes the assessment of commercially available and prototype parallel optics modules for possible use as back end components for the Versatile Link common project. The assessment covers SNAP12 transmitter and receiver modules as well as optical engine technologies in dense packaging options. Tests were performed using vendor evaluation boards (SNAP12) as well as custom evaluation boards (optical engines). The measurements obtained were used to compare the performance of these components with single channel SFP+ components operating at a transmission wavelength of 850 nm over multimode fibers.

  2. Scripts for Scalable Monitoring of Parallel Filesystem Infrastructure

    SciTech Connect (OSTI)

    2014-02-27

    Scripts for scalable monitoring of parallel filesystem infrastructure provide frameworks for monitoring the health of block storage arrays and large InfiniBand fabrics. The block storage framework uses Python multiprocessing so that the number of monitored arrays scales with the number of processors in the system. This enables live monitoring of HPC-scale filesystems with 10-50 storage arrays. For InfiniBand monitoring, scripts are included that monitor the InfiniBand health of each host, along with visualization tools for mapping complex fabric topologies.

  3. Parallel Computation of Persistent Homology using the Blowup Complex

    SciTech Connect (OSTI)

    Lewis, Ryan; Morozov, Dmitriy

    2015-04-27

    We describe a parallel algorithm that computes persistent homology, an algebraic descriptor of a filtered topological space. Our algorithm is distinguished by operating on a spatial decomposition of the domain, as opposed to a decomposition with respect to the filtration. We rely on a classical construction, called the Mayer-Vietoris blowup complex, to glue global topological information about a space from its disjoint subsets. We introduce an efficient algorithm to perform this gluing operation, which may be of independent interest, and describe how to process the domain hierarchically. We report on a set of experiments that help assess the strengths and identify the limitations of our method.

  4. Evaluating parallel relational databases for medical data analysis.

    SciTech Connect (OSTI)

    Rintoul, Mark Daniel; Wilson, Andrew T.

    2012-03-01

    Hospitals have always generated and consumed large amounts of data concerning patients, treatment and outcomes. As computers and networks have permeated the hospital environment it has become feasible to collect and organize all of this data. This raises naturally the question of how to deal with the resulting mountain of information. In this report we detail a proof-of-concept test using two commercially available parallel database systems to analyze a set of real, de-identified medical records. We examine database scalability as data sizes increase as well as responsiveness under load from multiple users.

  5. Local rollback for fault-tolerance in parallel computing systems

    DOE Patents [OSTI]

    Blumrich, Matthias A.; Chen, Dong; Gara, Alan; Giampapa, Mark E.; Heidelberger, Philip; Ohmacht, Martin; Steinmacher-Burow, Burkhard; Sugavanam, Krishnan

    2012-01-24

    A control logic device performs a local rollback in a parallel supercomputing system. The supercomputing system includes at least one cache memory device. The control logic device determines a local rollback interval. The control logic device runs at least one instruction in the local rollback interval. The control logic device evaluates whether an unrecoverable condition occurs while running the at least one instruction during the local rollback interval. The control logic device checks whether an error occurs during the local rollback. The control logic device restarts the local rollback interval if the error occurs and the unrecoverable condition does not occur during the local rollback interval.
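
    The decision logic reads naturally as a retry loop. The sketch below is a toy control-flow model only; run_interval() and its outcomes are hypothetical stand-ins for the hardware behavior the patent describes.

```python
# Control-flow sketch of the local-rollback decision logic described
# above (all names and probabilities are hypothetical placeholders).
import random

def run_interval():
    """Run one rollback interval; report (error, unrecoverable)."""
    return random.random() < 0.3, random.random() < 0.05

def execute_with_local_rollback(max_retries=10):
    for _attempt in range(max_retries):
        checkpoint = "state@interval-start"   # stand-in for cached state
        error, unrecoverable = run_interval()
        if not error:
            return "interval committed"
        if unrecoverable:
            return "fall back to global checkpoint restart"
        # Recoverable error: restore the checkpoint and retry locally.
        _ = checkpoint
    return "retry budget exhausted"

random.seed(1)
print(execute_with_local_rollback())
```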

  6. Performing a global barrier operation in a parallel computer

    DOE Patents [OSTI]

    Archer, Charles J; Blocksome, Michael A; Ratterman, Joseph D; Smith, Brian E

    2014-12-09

    Executing computing tasks on a parallel computer that includes compute nodes coupled for data communications, where each compute node executes tasks, with one task on each compute node designated as a master task, including: for each task on each compute node until all master tasks have joined a global barrier: determining whether the task is a master task; if the task is not a master task, joining a single local barrier; if the task is a master task, joining the global barrier and the single local barrier only after all other tasks on the compute node have joined the single local barrier.
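
    The two-level scheme can be modeled with ordinary thread barriers. In this toy sketch threads stand in for tasks, and the gather/release double wait is one way (an assumption, not the patent's wording) to make the release global.

```python
# Toy model of the hierarchical barrier: every task joins its node's
# local barrier; only the master (tid 0) represents the node globally.
import threading

TASKS_PER_NODE, NODES = 4, 3
global_barrier = threading.Barrier(NODES)
local_barriers = [threading.Barrier(TASKS_PER_NODE) for _ in range(NODES)]

def task(node, tid):
    local_barriers[node].wait()     # gather: all local tasks have arrived
    if tid == 0:                    # master joins the global barrier
        global_barrier.wait()
    local_barriers[node].wait()     # release: everyone waits for the master
    print(f"node {node} task {tid} past global barrier")

threads = [threading.Thread(target=task, args=(n, t))
           for n in range(NODES) for t in range(TASKS_PER_NODE)]
for th in threads:
    th.start()
for th in threads:
    th.join()
```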

  7. SERODS optical data storage with parallel signal transfer

    DOE Patents [OSTI]

    Vo-Dinh, Tuan

    2003-09-02

    Surface-enhanced Raman optical data storage (SERODS) systems having increased reading and writing speeds, that is, increased data transfer rates, are disclosed. In the various SERODS read and write systems, the surface-enhanced Raman scattering (SERS) data is written and read using a two-dimensional process called parallel signal transfer (PST). The various embodiments utilize laser light beam excitation of the SERODS medium, optical filtering, beam imaging, and two-dimensional light detection. Two- and three-dimensional SERODS media are utilized. The SERODS write systems employ either a different laser or a different level of laser power.

  8. SERODS optical data storage with parallel signal transfer

    DOE Patents [OSTI]

    Vo-Dinh, Tuan

    2003-06-24

    Surface-enhanced Raman optical data storage (SERODS) systems having increased reading and writing speeds, that is, increased data transfer rates, are disclosed. In the various SERODS read and write systems, the surface-enhanced Raman scattering (SERS) data is written and read using a two-dimensional process called parallel signal transfer (PST). The various embodiments utilize laser light beam excitation of the SERODS medium, optical filtering, beam imaging, and two-dimensional light detection. Two- and three-dimensional SERODS media are utilized. The SERODS write systems employ either a different laser or a different level of laser power.

  9. Digital intermediate frequency QAM modulator using parallel processing

    DOE Patents [OSTI]

    Pao, Hsueh-Yuan; Tran, Binh-Nien

    2008-05-27

    The digital Intermediate Frequency (IF) modulator applies to various modulation types and offers a simple and low cost method to implement a high-speed digital IF modulator using field programmable gate arrays (FPGAs). The architecture eliminates multipliers and sequential processing by storing the pre-computed modulated cosine and sine carriers in ROM look-up-tables (LUTs). The high-speed input data stream is parallel processed using the corresponding LUTs, which reduces the main processing speed, allowing the use of low cost FPGAs.
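
    The LUT idea is easy to demonstrate offline with NumPy. The sample counts, IF frequency, and 16-QAM constellation below are hypothetical choices; real hardware would fetch the precomputed rows from ROM LUTs in parallel rather than compute them at run time.

```python
# Sketch of LUT-based IF QAM modulation: carrier samples come from
# precomputed tables instead of run-time multipliers (illustrative).
import numpy as np

SAMPLES_PER_SYMBOL = 8
n = np.arange(SAMPLES_PER_SYMBOL)
cos_lut = np.cos(2 * np.pi * n / SAMPLES_PER_SYMBOL)   # one IF cycle/symbol
sin_lut = np.sin(2 * np.pi * n / SAMPLES_PER_SYMBOL)

levels = np.array([-3, -1, 1, 3])                      # 16-QAM amplitudes
rng = np.random.default_rng(1)
I = levels[rng.integers(0, 4, size=16)]                # in-phase stream
Q = levels[rng.integers(0, 4, size=16)]                # quadrature stream

# Each symbol selects a row of precomputed I*cos - Q*sin samples; the
# rows are emitted back to back to form the modulated IF signal.
if_signal = (np.outer(I, cos_lut) - np.outer(Q, sin_lut)).ravel()
print(if_signal[:8])
```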

  10. Preliminary Failure Modes and Effects Analysis of the US Massive Gas Injection Disruption Mitigation System Design

    SciTech Connect (OSTI)

    Lee C. Cadwallader

    2013-10-01

    This report presents the results of a preliminary failure modes and effects analysis (FMEA) of a candidate design for the ITER Disruption Mitigation System. This candidate is the Massive Gas Injection System that provides machine protection in a plasma disruption event. The FMEA was quantified with “generic” component failure rate data as well as some data calculated from operating facilities, and the failure events were ranked for their criticality to system operation.

  11. Massive Energy Storage in Superconductors (SMES) | U.S. DOE Office of

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    Massive Energy Storage in Superconductors (SMES), High Energy Physics (HEP)

  12. The coupling to matter in massive, bi- and multi-gravity

    SciTech Connect (OSTI)

    Noller, Johannes; Melville, Scott E-mail: scott.melville@queens.ox.ac.uk

    2015-01-01

    In this paper we construct a family of ways in which matter can couple to one or more 'metrics'/spin-2 fields in the vielbein formulation. We do so subject to requiring the weak equivalence principle and the absence of ghosts from pure spin-2 interactions generated by the matter action. Results are presented for Massive, Bi- and Multi-Gravity theories and we give explicit expressions for the effective matter metric in all of these cases.

  13. "On the Formation of Massive Galaxies" | Princeton Plasma Physics Lab

    Broader source: All U.S. Department of Energy (DOE) Office Webpages (Extended Search)

    December 19, 2012, 4:15pm Colloquia MBG Auditorium "On the Formation of Massive Galaxies" Professor Jeremiah Ostriker Princeton University Presentation: File WC19DEC2-12_JOstriker.pptx Looking backwards, fossil evidence from nearby galaxies provides a plausible picture of how galaxies have formed over cosmic time. Going forwards, the present, quite definite cosmological model shows how perturbations grew from low-amplitude fluctuations via standard physical processes to the

  14. Blast furnace injection of massive quantities of coal with enriched air or pure oxygen

    SciTech Connect (OSTI)

    Ponghis, N.; Dufresne, P.; Vidal, R.; Poos, A.

    1993-01-01

    An extensive study of the phenomena associated with blast furnace injection of massive quantities of coal is described. Trials with conventional lances or oxy-coal injectors and hot blast at different oxygen contents (up to 40%), or with cold pure oxygen, were carried out at coal-to-oxygen ratios corresponding to a range of 150 to 440 kg. Pilot-scale rigs, empty or filled with coke, as well as industrial blast furnaces were utilized.

  15. Topologically massive Yang-Mills: A Hamilton-Jacobi constraint analysis

    SciTech Connect (OSTI)

    Bertin, M. C.; Pimentel, B. M.; Valcárcel, C. E.; Zambrano, G. E. R.

    2014-04-15

    We analyse the constraint structure of the topologically massive Yang-Mills theory in instant-form and null-plane dynamics via the Hamilton-Jacobi formalism. The complete set of hamiltonians that generates the dynamics of the system is obtained from the Frobenius integrability conditions, as well as its characteristic equations. As generators of canonical transformations, the hamiltonians are naturally linked to the generator of Lagrangian gauge transformations.

  16. The fate of high redshift massive compact galaxies in dense environments

    SciTech Connect (OSTI)

    Kaufmann, Tobias; Mayer, Lucio; Carollo, Marcella; Feldmann, Robert; /Fermilab /Chicago U., KICP

    2012-01-01

    Massive compact galaxies seem to be more common at high redshift than in the local universe, especially in denser environments. To investigate the fate of such massive galaxies identified at z ≈ 2, we analyse the evolution of their properties in three cosmological hydrodynamical simulations that form virialized galaxy groups of mass ≈10¹³ M⊙ hosting a central massive elliptical/S0 galaxy by redshift zero. We find that at redshift ≈2 the population of galaxies with M* > 2 × 10¹⁰ M⊙ is diverse in terms of mass, velocity dispersion, star formation and effective radius, containing both very compact and relatively extended objects. In each simulation all the compact satellite galaxies have merged into the central galaxy by redshift 0 (with the exception of one simulation, where one such satellite galaxy survives). Satellites of similar mass at z = 0 are all less compact than their high-redshift counterparts. They form later than the galaxies in the z = 2 sample and enter the group potential at z < 1, when dynamical friction times are longer than the Hubble time. Also, by z = 0 the central galaxies have increased their characteristic radius substantially via a combination of in situ star formation and mergers. Hence in a group environment descendants of compact galaxies either evolve towards larger sizes or they disappear before the present time as a result of the environment in which they evolve. Since the group-sized halos that we consider are representative of dense environments in the ΛCDM cosmology, we conclude that the majority of high-redshift compact massive galaxies do not survive until today as a result of the environment.

  17. A GIANT RADIO HALO IN THE MASSIVE AND MERGING CLUSTER ABELL 1351

    SciTech Connect (OSTI)

    Giacintucci, S.; Venturi, T.; Cassano, R.; Dallacasa, D.; Brunetti, G.

    2009-10-10

    We report on the detection of diffuse radio emission in the X-ray luminous and massive galaxy cluster A 1351 (z = 0.322) using archival Very Large Array data at 1.4 GHz. Given its central location, morphology, and Mpc-scale extent, we classify the diffuse source as a giant radio halo. X-ray and weak lensing studies show A 1351 to be a system undergoing a major merger. The halo is associated with the most massive substructure. The presence of this source is explained assuming that merger-driven turbulence may re-accelerate high-energy particles in the intracluster medium and generate diffuse radio emission on the cluster scale. The position of A 1351 in the log P(1.4 GHz)-log L(X) plane is consistent with that of all other radio-halo clusters known to date, supporting a causal connection between the unrelaxed dynamical state of massive (>10¹⁵ M⊙) clusters and the presence of giant radio halos.

  18. A MASSIVE PROTOSTAR EMBEDDED IN THE SCUBA CORE JCMT 18354-0649S

    SciTech Connect (OSTI)

    Zhu Ming; Davis, C. J.; Wu, Yufang; Whitney, B. A.; Robitaille, T.; Peng, R.

    2011-09-20

    We report the discovery of an extremely red object embedded in the massive SCUBA core JCMT 18354-0649S. This object is not associated with any known radio or far-IR source, though it appears in Spitzer IRAC data obtained as part of the GLIMPSE survey. At shorter wavelengths, this embedded source exhibits an extreme color, K - L' = 6.7. At an assumed distance of 5.7 kpc, this source has a near-IR luminosity of ≈1000 L⊙. Its spectral energy distribution (SED) rises sharply from 2.1 μm to 8 μm, similar to that of a Class 0 young stellar object. Theoretical modeling of the SED indicates that the central star has a mass of 6-12 M⊙, with an optical extinction of more than 30. As both inflow and outflow motions are present in JCMT 18354-0649S, we suggest that this deeply embedded source is (1) a massive protostar in the early stages of accretion, and (2) the driving source of a massive molecular outflow evident in HCN J = 3-2 profiles observed toward this region.

  19. Massive gravity on de Sitter and unique candidate for partially massless gravity

    SciTech Connect (OSTI)

    Rham, Claudia de; Renaux-Petel, Sébastien E-mail: srenaux@lpthe.jussieu.fr

    2013-01-01

    We derive the decoupling limit of Massive Gravity on de Sitter in an arbitrary number of space-time dimensions d. By embedding d-dimensional de Sitter into (d+1)-dimensional Minkowski, we extract the physical helicity-1 and helicity-0 polarizations of the graviton. The resulting decoupling theory is similar to that obtained around Minkowski. We take great care at exploring the partially massless limit and define the unique fully non-linear candidate theory that is free of the helicity-0 mode in the decoupling limit, and which therefore propagates only four degrees of freedom in four dimensions. In the latter situation, we show that a new Vainshtein mechanism is at work in the limit m² → 2H² which decouples the helicity-0 mode when the parameters are different from that of partially massless gravity. As a result, there is no discontinuity between massive gravity and its partially massless limit, just in the same way as there is no discontinuity in the massless limit of massive gravity. The usual bounds on the graviton mass could therefore equivalently well be interpreted as bounds on m² − 2H². When dealing with the exact partially massless parameters, on the other hand, the symmetry at m² = 2H² imposes a specific constraint on matter. As a result the helicity-0 mode decouples without even the need of any Vainshtein mechanism.

  20. Spherically symmetric analysis on open FLRW solution in non-linear massive gravity

    SciTech Connect (OSTI)

    Chiang, Chien-I; Izumi, Keisuke; Chen, Pisin E-mail: izumi@phys.ntu.edu.tw

    2012-12-01

    We study non-linear massive gravity in the spherically symmetric context. Our main motivation is to investigate the effect of the helicity-0 mode, which remains elusive after analysis of cosmological perturbations around an open Friedmann-Lemaître-Robertson-Walker (FLRW) universe. The non-linear form of the effective energy-momentum tensor stemming from the mass term is derived for the spherically symmetric case. Only in the special case where the area of the two-sphere does not deviate from that of the FLRW universe does the effective energy-momentum tensor become identical to that of a cosmological constant. This opens a window for discriminating the non-linear massive gravity from general relativity (GR). Indeed, by further solving these spherically symmetric gravitational equations of motion in vacuum to linear order, we obtain a solution with an arbitrary time-dependent parameter. In GR, this parameter is a constant and corresponds to the mass of a star. Our result means that Birkhoff's theorem no longer holds in non-linear massive gravity and suggests that energy can probably be emitted superluminously (with infinite speed) on the self-accelerating background by the helicity-0 mode, which could be a potential plague of this theory.

  1. Users manual for the Chameleon parallel programming tools

    SciTech Connect (OSTI)

    Gropp, W.; Smith, B.

    1993-06-01

    Message passing is a common method for writing programs for distributed-memory parallel computers. Unfortunately, the lack of a standard for message passing has hampered the construction of portable and efficient parallel programs. In an attempt to remedy this problem, a number of groups have developed their own message-passing systems, each with its own strengths and weaknesses. Chameleon is a second-generation system of this type. Rather than replacing these existing systems, Chameleon is meant to supplement them by providing a uniform way to access many of these systems. Chameleon's goals are to (a) be very lightweight (low overhead), (b) be highly portable, and (c) help standardize program startup and the use of emerging message-passing operations such as collective operations on subsets of processors. Chameleon also provides a way to port programs written using PICL or Intel NX message passing to other systems, including collections of workstations. Chameleon is tracking the Message-Passing Interface (MPI) draft standard and will provide both an MPI implementation and an MPI transport layer. Chameleon provides support for heterogeneous computing by using p4 and PVM. Chameleon's support for homogeneous computing includes the portable libraries p4, PICL, and PVM and vendor-specific implementations for Intel NX, IBM EUI (SP-1), and Thinking Machines CMMD (CM-5). Support for Ncube and PVM 3.x is also under development.

  2. Long-time dynamics through parallel trajectory splicing

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Perez, Danny; Cubuk, Ekin D.; Waterland, Amos; Kaxiras, Efthimios; Voter, Arthur F.

    2015-11-24

    Simulating the atomistic evolution of materials over long time scales is a longstanding challenge, especially for complex systems where the distribution of barrier heights is very heterogeneous. Such systems are difficult to investigate using conventional long-time-scale techniques, and the fact that they tend to remain trapped in small regions of configuration space for extended periods of time strongly limits the physical insights gained from short simulations. We introduce a novel simulation technique, Parallel Trajectory Splicing (ParSplice), that aims at addressing this problem through the timewise parallelization of long trajectories. The computational efficiency of ParSplice stems from a speculation strategy whereby predictions of the future evolution of the system are leveraged to increase the amount of work that can be concurrently performed at any one time, hence improving the scalability of the method. ParSplice is also able to accurately account for, and potentially reuse, a substantial fraction of the computational work invested in the simulation. We validate the method on a simple Ag surface system and demonstrate substantial increases in efficiency compared to previous methods. We then demonstrate the power of ParSplice through the study of topology changes in Ag42Cu13 core-shell nanoparticles.
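
    The splicing idea can be illustrated with a toy discrete-state system: short segments generated independently (here speculatively, for every state) are spliced wherever the end state of one matches the start state of the next. This is purely illustrative; it is not ParSplice's actual algorithm, dynamics, or scale.

```python
# Toy illustration of trajectory splicing over a 3-state random walk.
import random

STATES = [0, 1, 2]

def segment(start, length=5):
    """Generate a short trajectory segment starting in `start`."""
    traj = [start]
    for _ in range(length):
        traj.append(random.choice(STATES))
    return traj

random.seed(0)
# Speculation: pre-compute a bank of segments for every possible state;
# this is the work that could proceed in parallel.
bank = {s: [segment(s) for _ in range(10)] for s in STATES}

spliced, state = [0], 0
while len(spliced) < 30 and bank[state]:
    seg = bank[state].pop()      # consume a segment that begins in `state`
    spliced.extend(seg[1:])      # splice: end state of one = start of next
    state = spliced[-1]
print(spliced)
```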

  3. A Programming Model Performance Study Using the NAS Parallel Benchmarks

    DOE Public Access Gateway for Energy & Science Beta (PAGES Beta)

    Shan, Hongzhang; Blagojević, Filip; Min, Seung-Jai; Hargrove, Paul; Jin, Haoqiang; Fuerlinger, Karl; Koniges, Alice; Wright, Nicholas J.

    2010-01-01

    Harnessing the power of multicore platforms is challenging due to the additional levels of parallelism present. In this paper we use the NAS Parallel Benchmarks to study three programming models, MPI, OpenMP and PGAS, to understand their performance and memory usage characteristics on current multicore architectures. To understand these characteristics we use the Integrated Performance Monitoring tool and other ways to measure communication versus computation time, as well as the fraction of the run time spent in OpenMP. The benchmarks are run on two different Cray XT5 systems and an Infiniband cluster. Our results show that in general the three programming models exhibit very similar performance characteristics. In a few cases, OpenMP is significantly faster because it explicitly avoids communication. For these particular cases, we were able to re-write the UPC versions and achieve equal performance to OpenMP. Using OpenMP was also the most advantageous in terms of memory usage. We also compare performance differences between the two Cray systems, which have quad-core and hex-core processors. We show that at scale the performance is almost always slower on the hex-core system because of increased contention for network resources.

  4. Perm Web: remote parallel and distributed volume visualization

    SciTech Connect (OSTI)

    Wittenbrink, C.M.; Kim, K.; Story, J.; Pang, A.; Hollerbach, K.; Max, N.

    1997-01-01

    In this paper we present a system for visualizing volume data from remote supercomputers (PermWeb). We have developed both parallel volume rendering algorithms and the World Wide Web software for accessing the data at the remote sites. The implementation uses Hypertext Markup Language (HTML), Java, and Common Gateway Interface (CGI) scripts to connect World Wide Web (WWW) servers/clients to our volume renderers. The front ends are interactive Java classes for specification of view, shading, and classification inputs. We present performance results and implementation details for connections to our computing resources at the University of California Santa Cruz, including a MasPar MP-2, SGI Reality Engine-RE2, and SGI Challenge machines. We apply the system to the task of visualizing trabecular bone from finite element simulations. Fast volume rendering on remote compute servers through a web interface allows us to increase the accessibility of the results to more users. User interface issues, overviews of parallel algorithm developments, and overall system interfaces and protocols are presented. Access is available through Uniform Resource Locator (URL) http://www.cse.ucsc.edu/research/slvg/. 26 refs., 7 figs.

  5. Efficient VLSI networks for parallel processing based on orthogonal trees

    SciTech Connect (OSTI)

    Nath, D.; Maheshwari, S.N.; Bhatt, P.C.P.

    1983-06-01

    Two interconnection networks for parallel processing, namely the orthogonal trees network and the orthogonal tree cycles (OTN and OTC), are discussed. Both networks are suitable for VLSI implementation and have been analysed using Thompson's model of VLSI. While the OTN and OTC have time performances similar to fast networks such as the perfect shuffle network (PSN), the cube-connected cycles (CCC), etc., they have substantially better area*time^2 performances for a number of matrix and graph problems. For instance, the connected components and a minimal spanning tree of an undirected n-vertex graph can be found in O(log^4 n) time on the OTC, with area*time^2 performances of O(n^2 log^8 n) and O(n^2 log^9 n), respectively. This is asymptotically much better than the performances of the CCC, PSN, and MESH. The OTC and OTN can be looked upon as general-purpose parallel processors, since a number of other problems such as sorting and DFT can be solved on them with an area*time^2 performance matching that of other networks. Finally, programming the OTN and OTC is simple, and they are also amenable to pipelining a series of problems. 33 references.

  6. Nonlinear parallel momentum transport in strong electrostatic turbulence

    SciTech Connect (OSTI)

    Wang, Lu; Wen, Tiliang; Diamond, P. H.

    2015-05-15

    Most existing theoretical studies of momentum transport focus on calculating the Reynolds stress based on quasilinear theory, without considering the nonlinear momentum flux ⟨ṽ_r ñ ũ_∥⟩. However, a recent experiment on TORPEX found that the nonlinear toroidal momentum flux induced by blobs makes a significant contribution as compared to the Reynolds stress [Labit et al., Phys. Plasmas 18, 032308 (2011)]. In this work, the nonlinear parallel momentum flux in strong electrostatic turbulence is calculated by using a three-dimensional Hasegawa-Mima equation, which is relevant for tokamak edge turbulence. It is shown that the nonlinear diffusivity is smaller than the quasilinear diffusivity from the Reynolds stress. However, the leading-order nonlinear residual stress can be comparable to the quasilinear residual stress, and so may be important to intrinsic rotation in tokamak edge plasmas. A key difference from the quasilinear residual stress is that parallel fluctuation spectrum asymmetry is not required for the nonlinear residual stress.

  7. Scalable Library for the Parallel Solution of Sparse Linear Systems

    Energy Science and Technology Software Center (OSTI)

    1993-07-14

    BlockSolve is a scalable parallel software library for the solution of large sparse, symmetric systems of linear equations. It runs on a variety of parallel architectures and can easily be ported to others. BlockSolve is primarily intended for the solution of sparse linear systems that arise from physical problems having multiple degrees of freedom at each node point. For example, when the finite element method is used to solve practical problems in structural engineering, each node will typically have anywhere from 3-6 degrees of freedom associated with it. BlockSolve is written to take advantage of problems of this nature; however, it is still reasonably efficient for problems that have only one degree of freedom associated with each node, such as the three-dimensional Poisson problem. It does not require that the matrices have any particular structure other than being sparse and symmetric. BlockSolve is intended to be used within real application codes. It is designed to work best in the situation our experience indicates is most common: application codes that solve the same linear systems with several different right-hand sides and/or solve linear systems with the same structure, but different matrix values, multiple times.

  8. High-performance parallel interface to synchronous optical network gateway

    DOE Patents [OSTI]

    St. John, Wallace B.; DuBois, David H.

    1996-01-01

    A system of sending and receiving gateways interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links, so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway.
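    The credit-based flow control described in the last two sentences can be sketched abstractly: the receiving gateway advertises its free buffer slots as credits, and the sender transmits a frame only while it holds a credit. The classes and counts below are illustrative, not the patented hardware design.

```python
from collections import deque

class ReceivingGateway:
    def __init__(self, buffer_slots):
        self.buffer = deque()
        self.free = buffer_slots
    def accept(self, frame):
        assert self.free > 0, "sender violated flow control"
        self.buffer.append(frame)
        self.free -= 1
    def drain(self):
        """Consume one buffered frame; one credit becomes returnable."""
        if not self.buffer:
            return 0
        self.buffer.popleft()
        self.free += 1
        return 1

class SendingGateway:
    def __init__(self, initial_credits):
        self.credits = initial_credits
    def try_send(self, rx, frame):
        if self.credits == 0:
            return False          # stall until credits are returned
        rx.accept(frame)
        self.credits -= 1
        return True

rx = ReceivingGateway(buffer_slots=2)
tx = SendingGateway(initial_credits=2)
print([tx.try_send(rx, f"frame{i}") for i in range(4)])  # [True, True, False, False]
tx.credits += rx.drain()          # receiver frees a buffer, returns a credit
print(tx.try_send(rx, "frame4"))  # True
```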

  9. Long-time dynamics through parallel trajectory splicing

    SciTech Connect (OSTI)

    Perez, Danny; Cubuk, Ekin D.; Waterland, Amos; Kaxiras, Efthimios; Voter, Arthur F.

    2015-11-24

    Simulating the atomistic evolution of materials over long time scales is a longstanding challenge, especially for complex systems where the distribution of barrier heights is very heterogeneous. Such systems are difficult to investigate using conventional long-time scale techniques, and the fact that they tend to remain trapped in small regions of configuration space for extended periods of time strongly limits the physical insights gained from short simulations. We introduce a novel simulation technique, Parallel Trajectory Splicing (ParSplice), that aims at addressing this problem through the timewise parallelization of long trajectories. The computational efficiency of ParSplice stems from a speculation strategy whereby predictions of the future evolution of the system are leveraged to increase the amount of work that can be concurrently performed at any one time, hence improving the scalability of the method. ParSplice is also able to accurately account for, and potentially reuse, a substantial fraction of the computational work invested in the simulation. We validate the method on a simple Ag surface system and demonstrate substantial increases in efficiency compared to previous methods. We then demonstrate the power of ParSplice through the study of topology changes in Ag42Cu13 core–shell nanoparticles.

  10. High-performance parallel interface to synchronous optical network gateway

    DOE Patents [OSTI]

    St. John, W.B.; DuBois, D.H.

    1996-12-03

    Disclosed is a system of sending and receiving gateways that interconnects high speed data interfaces, e.g., HIPPI interfaces, through fiber optic links, e.g., a SONET network. An electronic stripe distributor distributes bytes of data from a first interface at the sending gateway onto parallel fiber optics of the fiber optic link to form transmitted data. An electronic stripe collector receives the transmitted data on the parallel fiber optics and reforms the data into a format effective for input to a second interface at the receiving gateway. Preferably, an error correcting syndrome is constructed at the sending gateway and sent with a data frame so that transmission errors can be detected and corrected on a real-time basis. Since the high speed data interface operates faster than any of the fiber optic links, the transmission rate must be adapted to match the available number of fiber optic links, so the sending and receiving gateways monitor the availability of fiber links and adjust the data throughput accordingly. In another aspect, the receiving gateway must have sufficient available buffer capacity to accept an incoming data frame. A credit-based flow control system provides for continuously updating the sending gateway on the available buffer capacity at the receiving gateway. 7 figs.

  11. Energy Proportionality and Performance in Data Parallel Computing Clusters

    SciTech Connect (OSTI)

    Kim, Jinoh; Chou, Jerry; Rotem, Doron

    2011-02-14

    Energy consumption in datacenters has recently become a major concern due to rising operational costs and scalability issues. Recent solutions to this problem propose the principle of energy proportionality, i.e., the amount of energy consumed by the server nodes must be proportional to the amount of work performed. For data parallelism and fault tolerance purposes, most common file systems used in MapReduce-type clusters maintain a set of replicas for each data block. A covering set is a group of nodes that together contain at least one replica of the data blocks needed for performing computing tasks. In this work, we develop and analyze algorithms to maintain energy proportionality by discovering a covering set that minimizes energy consumption while placing the remaining nodes in low-power standby mode. Our algorithms can also discover covering sets in heterogeneous computing environments. In order to allow more data parallelism, we generalize our algorithms so that they can discover k-covering sets, i.e., sets of nodes that contain at least k replicas of the data blocks. Our experimental results show that we can achieve substantial energy savings without significant performance loss in diverse cluster configurations and working environments.
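    As an illustration of the covering-set idea (a simple greedy heuristic, not necessarily the authors' algorithm): repeatedly activate the node holding the most still-uncovered blocks until every block has an active replica, then put the remaining nodes into low-power standby. The node and block labels below are made up.

```python
replicas = {                      # node -> data blocks it stores (toy data)
    "n1": {"b1", "b2", "b3"},
    "n2": {"b3", "b4"},
    "n3": {"b1", "b4", "b5"},
    "n4": {"b2", "b5"},
}

def greedy_covering_set(replicas):
    """Pick nodes until every block has at least one active replica."""
    uncovered = set().union(*replicas.values())
    cover = []
    while uncovered:
        best = max(replicas, key=lambda n: len(replicas[n] & uncovered))
        cover.append(best)
        uncovered -= replicas[best]
    return cover

active = greedy_covering_set(replicas)
standby = sorted(set(replicas) - set(active))
print("active:", active, "| standby:", standby)  # active: ['n1', 'n3'] ...
```

    A k-covering set generalizes this by requiring k active replicas per block, for example by tracking a per-block counter instead of a simple covered/uncovered set.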

  12. SOUTHERN MASSIVE STARS AT HIGH ANGULAR RESOLUTION: OBSERVATIONAL CAMPAIGN AND COMPANION DETECTION

    SciTech Connect (OSTI)

    Sana, H.; Lacour, S.; Gauchet, L.; Pickel, D.; Berger, J.-P.; Norris, B.; Olofsson, J.; Absil, O.; De Koter, A.; Kratter, K.; Schnurr, O.; Zinnecker, H.

    2014-11-01

    Multiplicity is one of the most fundamental observable properties of massive O-type stars and offers a promising way to discriminate between massive star formation theories. Nevertheless, companions at separations between 1 and 100 milliarcsec (mas) remain mostly unknown due to intrinsic observational limitations. At a typical distance of 2 kpc, this corresponds to projected physical separations of 2-200 AU. The Southern MAssive Stars at High angular resolution survey (SMaSH+) was designed to fill this gap by providing the first systematic interferometric survey of Galactic massive stars. We observed 117 O-type stars with VLTI/PIONIER and 162 O-type stars with NACO/Sparse Aperture Masking (SAM), probing the separation ranges 1-45 and 30-250 mas and brightness contrasts of ΔH < 4 and ΔH < 5, respectively. Taking advantage of NACO's field of view, we further uniformly searched for visual companions in an 8'' radius down to ΔH = 8. This paper describes observations and data analysis, reports the discovery of almost 200 new companions in the separation range from 1 mas to 8'' and presents a catalog of detections, including the first resolved measurements of over a dozen known long-period spectroscopic binaries. Excluding known runaway stars for which no companions are detected, 96 objects in our main sample (δ < 0°; H < 7.5) were observed both with PIONIER and NACO/SAM. The fraction of these stars with at least one resolved companion within 200 mas is 0.53. Accounting for known but unresolved spectroscopic or eclipsing companions, the multiplicity fraction at separation ρ < 8'' increases to f_m = 0.91 ± 0.03. The fraction of luminosity class V stars that have a bound companion reaches 100% at 30 mas while their average number of physically connected companions within 8'' is f_c = 2.2 ± 0.3. This demonstrates that massive stars form nearly exclusively in multiple systems. The nine non-thermal radio emitters observed by SMaSH+ are all resolved

  13. Aggregating job exit statuses of a plurality of compute nodes executing a parallel application

    DOE Patents [OSTI]

    Aho, Michael E.; Attinella, John E.; Gooding, Thomas M.; Mundy, Michael B.

    2015-07-21

    Aggregating job exit statuses of a plurality of compute nodes executing a parallel application, including: identifying a subset of compute nodes in the parallel computer to execute the parallel application; selecting one compute node in the subset of compute nodes in the parallel computer as a job leader compute node; initiating execution of the parallel application on the subset of compute nodes; receiving an exit status from each compute node in the subset of compute nodes, where the exit status for each compute node includes information describing execution of some portion of the parallel application by the compute node; aggregating each exit status from each compute node in the subset of compute nodes; and sending an aggregated exit status for the subset of compute nodes in the parallel computer.
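    The aggregation step can be pictured as a simple reduction performed by the job leader: collect one (rank, exit code) report per node and fold them into a single job-level status. The record layout and field names below are hypothetical, not the patent's actual format.

```python
from dataclasses import dataclass

@dataclass
class ExitStatus:
    rank: int
    code: int                     # 0 = success, nonzero = failure
    detail: str = ""

def aggregate(statuses):
    """Job leader: reduce per-node exit statuses to one job status."""
    worst = max(statuses, key=lambda s: s.code)
    return {
        "nodes": len(statuses),
        "failures": sum(1 for s in statuses if s.code != 0),
        "worst_code": worst.code,
        "worst_rank": worst.rank,
    }

reports = [ExitStatus(0, 0), ExitStatus(1, 0), ExitStatus(2, 1, "segfault")]
print(aggregate(reports))
# {'nodes': 3, 'failures': 1, 'worst_code': 1, 'worst_rank': 2}
```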

  14. The parallel I/O architecture of the High Performance Storage System (HPSS)

    SciTech Connect (OSTI)

    Watson, R.W.; Coyne, R.A.

    1995-02-01

    Rapid improvements in computational science, processing capability, main memory sizes, data collection devices, multimedia capabilities, and integration of enterprise data are producing very large datasets (10s-100s of gigabytes to terabytes). This rapid growth of data has resulted in a serious imbalance in I/O and storage system performance and functionality. One promising approach to restoring balanced I/O and storage system performance is the use of parallel data transfer techniques for client access to storage, device-to-device transfers, and remote file transfers. This paper describes the parallel I/O architecture and mechanisms, Parallel Transport Protocol, parallel FTP, and parallel client Application Programming Interface (API) used by the High Performance Storage System (HPSS). Parallel storage integration issues with a local parallel file system are also discussed.
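    The striping that underlies such parallel transfers can be illustrated with a toy round-robin splitter and reassembler. This is only a sketch of the data layout; HPSS's Parallel Transport Protocol additionally handles framing, flow control, and error recovery.

```python
def stripe(data, width, stripe_size):
    """Split a byte stream round-robin into 'width' parallel streams."""
    streams = [[] for _ in range(width)]
    for i in range(0, len(data), stripe_size):
        streams[(i // stripe_size) % width].append(data[i:i + stripe_size])
    return streams

def reassemble(streams):
    """Inverse of stripe(): interleave the chunks back in order."""
    out, i = [], 0
    while any(streams):
        s = streams[i % len(streams)]
        if s:
            out.append(s.pop(0))
        i += 1
    return b"".join(out)

payload = bytes(range(256)) * 4               # 1024 bytes of test data
streams = stripe(payload, width=4, stripe_size=64)
assert reassemble(streams) == payload
print("reassembled", len(payload), "bytes from 4 parallel streams")
```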

  15. Cooperative storage of shared files in a parallel computing system with dynamic block size

    DOE Patents [OSTI]

    Bent, John M.; Faibish, Sorin; Grider, Gary

    2015-11-10

    Improved techniques are provided for parallel writing of data to a shared object in a parallel computing system. A method is provided for storing data generated by a plurality of parallel processes to a shared object in a parallel computing system. The method is performed by at least one of the processes and comprises: dynamically determining a block size for storing the data; exchanging a determined amount of the data with at least one additional process to achieve a block of the data having the dynamically determined block size; and writing the block of the data having the dynamically determined block size to a file system. The determined block size comprises, e.g., a total amount of the data to be stored divided by the number of parallel processes. The file system comprises, for example, a log structured virtual parallel file system, such as a Parallel Log-Structured File System (PLFS).
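    The quoted block-size rule (total data to be stored divided by the number of parallel processes) and the resulting exchange amounts are easy to sketch; the function names and byte counts below are illustrative, not PLFS internals.

```python
def dynamic_block_size(per_proc_bytes):
    """Block size = total amount of data / number of parallel processes."""
    return sum(per_proc_bytes) // len(per_proc_bytes)

def surplus(per_proc_bytes, block):
    """Bytes each process ships out (+) or takes in (-) to hold one full block."""
    return [b - block for b in per_proc_bytes]

produced = [700, 300, 500, 500]         # uneven bytes generated per process
block = dynamic_block_size(produced)    # 2000 // 4 = 500
print(block, surplus(produced, block))  # 500 [200, -200, 0, 0]
```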

  16. Time-Dependent, Parallel Neutral Particle Transport Code System.

    Energy Science and Technology Software Center (OSTI)

    2009-09-10

    Version 00 PARTISN (PARallel, TIme-Dependent SN) is the evolutionary successor to CCC-547/DANTSYS. The PARTISN code package is a modular computer program package designed to solve the time-independent or time-dependent multigroup discrete ordinates form of the Boltzmann transport equation in several different geometries. The modular construction of the package separates the input processing, the transport equation solving, and the post processing (or edit) functions into distinct code modules: the Input Module, the Solver Module, and the Edit Module, respectively. The Input and Edit Modules in PARTISN are very similar to those in DANTSYS. However, unlike DANTSYS, the Solver Module in PARTISN contains one-, two-, and three-dimensional solvers in a single module. In addition to the diamond-differencing method, the Solver Module also has Adaptive Weighted Diamond-Differencing (AWDD), Linear Discontinuous (LD), and Exponential Discontinuous (ED) spatial differencing methods. The spatial mesh may consist of either a standard orthogonal mesh or a block adaptive orthogonal mesh. The Solver Module may be run in parallel for two- and three-dimensional problems. One can now run 1-D problems in parallel using Energy Domain Decomposition (triggered by Block 5 input keyword npeg>0). EDD can also be used in 2-D/3-D with or without our standard Spatial Domain Decomposition. Both the static (fixed source or eigenvalue) and time-dependent forms of the transport equation are solved in forward or adjoint mode. In addition, PARTISN now has a probabilistic mode for Probability of Initiation (static) and Probability of Survival (dynamic) calculations. Vacuum, reflective, periodic, white, or inhomogeneous boundary conditions are solved. General anisotropic scattering and inhomogeneous sources are permitted. PARTISN solves the transport equation on orthogonal (single level or block-structured AMR) grids in 1-D

  17. P-SPARSLIB: A parallel sparse iterative solution package

    SciTech Connect (OSTI)

    Saad, Y.

    1994-12-31

    Iterative methods are gaining popularity in engineering and the sciences at a time when the computational environment is changing rapidly. P-SPARSLIB is a project to build a software library for sparse matrix computations on parallel computers. The emphasis is on iterative methods and the use of distributed sparse matrices, an extension of the domain decomposition approach to general sparse matrices. One of the goals of this project is to develop a software package geared towards specific applications. For example, the author will test the performance and usefulness of P-SPARSLIB modules on linear systems arising from CFD applications. Equally important is the goal of portability. In the long run, the author wishes to ensure that this package is portable across a variety of platforms, including SIMD environments and shared memory environments.

  18. Synchronizing compute node time bases in a parallel computer

    DOE Patents [OSTI]

    Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

    2015-01-27

    Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.
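    The claimed sequence reduces to a small runnable sketch if threads stand in for compute nodes: each node computes its transmission latency from the root, enters a barrier, and, when the root's pulse arrives, sets its time base to that latency. The latency values below are made-up constants; a real machine would measure them over the tree network.

```python
import threading

latency_from_root = {0: 0.0, 1: 1.5, 2: 2.5, 3: 3.0}  # toy latencies
barrier = threading.Barrier(len(latency_from_root))    # global barrier
pulse = threading.Event()                              # broadcast pulse
time_base = {}

def node(rank):
    lat = latency_from_root[rank]  # 1. latency from root to this node
    barrier.wait()                 # 2. all nodes enter the barrier
    if rank == 0:
        pulse.set()                # 3. root broadcasts the pulse signal
    pulse.wait()                   # 4. pulse waiter wakes on the signal
    time_base[rank] = lat          # 5. time base := latency from root

threads = [threading.Thread(target=node, args=(r,)) for r in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(time_base)                   # {0: 0.0, 1: 1.5, 2: 2.5, 3: 3.0}
```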

  19. Synchronizing compute node time bases in a parallel computer

    DOE Patents [OSTI]

    Chen, Dong; Faraj, Daniel A; Gooding, Thomas M; Heidelberger, Philip

    2014-12-30

    Synchronizing time bases in a parallel computer that includes compute nodes organized for data communications in a tree network, where one compute node is designated as a root, and, for each compute node: calculating data transmission latency from the root to the compute node; configuring a thread as a pulse waiter; initializing a wakeup unit; and performing a local barrier operation; upon each node completing the local barrier operation, entering, by all compute nodes, a global barrier operation; upon all nodes entering the global barrier operation, sending, to all the compute nodes, a pulse signal; and for each compute node upon receiving the pulse signal: waking, by the wakeup unit, the pulse waiter; setting a time base for the compute node equal to the data transmission latency between the root node and the compute node; and exiting the global barrier operation.

  20. Microchannel cross load array with dense parallel input

    DOE Patents [OSTI]

    Swierkowski, Stefan P.

    2004-04-06

    An architecture or layout for microchannel arrays using T or Cross (+) loading for electrophoresis or other injection and separation chemistries that are performed in microfluidic configurations. This architecture enables a very dense layout of arrays of functionally identical shaped channels, and it also solves the problem of simultaneously enabling efficient parallel shapes and biasing of the input wells, waste wells, and bias wells at the input end of the separation columns. One T-load architecture uses circular holes with common rows, but not columns, which allows the flow paths for each channel to be identical in shape, using multiple mirror-image pieces. Another T-load architecture enables the access hole array to be formed on a biaxial, collinear grid suitable for EDM micromachining (square holes), with common rows and columns.