OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Exploring Distributed Memory Parallel CPLEX

Publication Date: August 18, 2014
Research Org.: Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
DOI: 10.2172/1165747
Resource Type: Technical Report
Country of Publication: United States

Citation Formats

Cong, G., Magerlein, J., Rajan, D., and Meyers, C. Exploring Distributed Memory Parallel CPLEX. United States: N. p., 2014. Web. doi:10.2172/1165747.
Cong, G., Magerlein, J., Rajan, D., & Meyers, C. (2014). Exploring Distributed Memory Parallel CPLEX. United States. doi:10.2172/1165747.
Cong, G., Magerlein, J., Rajan, D., and Meyers, C. 2014. "Exploring Distributed Memory Parallel CPLEX". United States. doi:10.2172/1165747.
@techreport{osti_1165747,
  title = {Exploring Distributed Memory Parallel CPLEX},
  author = {Cong, G. and Magerlein, J. and Rajan, D. and Meyers, C.},
  doi = {10.2172/1165747},
  place = {United States},
  year = {2014},
  month = {8}
}

Similar Records:
  • DIME (Distributed Irregular Mesh Environment) is a user environment written in C for manipulating an unstructured triangular mesh in two dimensions. The mesh is distributed among the separate memories of the processors, and communication between processors is handled by DIME; thus the user writes C code referring to the elements and nodes of the mesh and need not be unduly concerned with the parallelism. A tool is provided for the user to make an initial coarse triangulation of a region, which may then be adaptively refined and load-balanced. DIME provides many graphics facilities for examining the mesh, including contouring and a PostScript hard-copy interface. DIME also runs on sequential machines. 8 refs., 18 figs.
  • The problem of exploiting the parallelism available in a program to employ the resources of the target machine efficiently is addressed, in the context of building a mapping compiler for a distributed-memory parallel machine. The paper describes using execution models to drive the process of mapping a program onto a particular machine in the most efficient way. Through analysis of the execution models for several mapping techniques for one class of programs, we show that selecting the best technique for a particular program instance can make a significant difference in performance. On the other hand, benchmark results from an implementation of a mapping compiler show that our execution models are accurate enough to select the best mapping technique for a given program.
  • The storage and retrieval of patterns in a Hopfield-like parallel distributed memory is investigated experimentally with a view toward increasing its storage capacity. The first two chapters give an overview of distributed memories and, in particular, the Hopfield distributed memory. The dissertation then experimentally investigates new and untested methods for increasing the storage capabilities of a Hopfield-like neural net. Increasing the storage capacity by using the continuous-valued Hopfield memory is explored in Chapter 3, and the impact of data representation on capacity is experimentally investigated in Chapter 4. New ways of storing data (changing the interconnect strengths) are then discussed, including, in Chapter 7, a new method called Modifying the Energy Contour (MEC); that chapter also outlines how to increase error tolerance through the use of noisy patterns. The Hopfield distributed memory is then contrasted with another intelligent memory subsystem based on more traditional computer technology: Chapter 8 shows that traditional data-parallel techniques achieve greater storage efficiency than is possible with current Hopfield-like distributed memories.
  • Several parallel algorithms are presented for solving triangular systems of linear equations on distributed-memory multiprocessors. New wavefront algorithms are developed for both row-oriented and column-oriented matrix storage. Performance of the new algorithms and of several previously proposed algorithms is analyzed theoretically and illustrated empirically using implementations on commercially available hypercube multiprocessors.
  • An efficient three-dimensional unstructured Euler solver is parallelized on a Cray Y-MP C90 shared-memory computer and on an Intel Touchstone Delta distributed-memory computer. This paper relates the experiences gained and describes the software tools and hardware used in this study. Performance comparisons between the two differing architectures are made. Keywords: unstructured, parallel, shared memory, Cray.
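
The Hebbian storage rule and sign-threshold recall that the Hopfield-memory abstract above builds on can be sketched in a few lines. This is a minimal illustration of the standard Hopfield scheme, not code from the dissertation; the function names, network size, and pattern values are assumptions for the example:

```python
import numpy as np

def store(patterns):
    """Hebbian storage: W is the sum of outer products of the +/-1
    patterns, scaled by pattern length, with the diagonal zeroed."""
    n = patterns.shape[1]
    W = patterns.T @ patterns / n
    np.fill_diagonal(W, 0.0)
    return W

def recall(W, state, max_steps=20):
    """Synchronous sign-threshold updates until a fixed point (or step limit)."""
    s = state.astype(float).copy()
    for _ in range(max_steps):
        nxt = np.where(W @ s >= 0, 1.0, -1.0)
        if np.array_equal(nxt, s):
            break
        s = nxt
    return s.astype(int)

# Store two orthogonal patterns, then recover one from a corrupted probe.
patterns = np.array([[1, 1, 1, 1, -1, -1, -1, -1],
                     [1, -1, 1, -1, 1, -1, 1, -1]])
W = store(patterns)
noisy = patterns[0].copy()
noisy[0] = -1  # flip one bit
restored = recall(W, noisy)
```

The capacity limits the dissertation studies show up when too many (or correlated) patterns are stored: the iteration then converges to spurious states rather than a stored pattern.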
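
The row-oriented and column-oriented orderings that the wavefront triangular-solver abstract above refers to can be sketched as sequential forward substitution. The distributed wavefront scheduling itself is not shown, and the matrix values are illustrative; this only contrasts the two data-access patterns that the two storage schemes parallelize:

```python
def solve_lower_row(L, b):
    """Row-oriented forward substitution: each x[i] is finished by an
    inner product over row i, reading the already-computed x[0..i-1]."""
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        s = sum(L[i][j] * x[j] for j in range(i))
        x[i] = (b[i] - s) / L[i][i]
    return x

def solve_lower_col(L, b):
    """Column-oriented forward substitution: as soon as x[j] is known,
    column j's contribution is subtracted from every later row."""
    n = len(b)
    x = [float(v) for v in b]
    for j in range(n):
        x[j] /= L[j][j]
        for i in range(j + 1, n):
            x[i] -= L[i][j] * x[j]
    return x

L = [[2.0, 0.0, 0.0],
     [1.0, 1.0, 0.0],
     [4.0, 2.0, 2.0]]
b = [2.0, 3.0, 10.0]
```

In the row form a processor owning row i waits for all earlier x values; in the column form a processor owning column j can broadcast x[j] and let later rows update independently, which is what makes wavefront distributions across processors natural.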