OSTI.GOV | U.S. Department of Energy
Office of Scientific and Technical Information

Title: BEC: A Virtual Shared Memory Parallel Programming Environment

Authors: Goudy, Susan Phelps; Brown, Jonathan Leighton; Wen, Zhaofang; Heroux, Michael Allen; Huang, Shan Shan [1]
  1. (Georgia Institute of Technology, Atlanta, GA)
Publication Date: 2006
Research Org.: Sandia National Laboratories
Sponsoring Org.:
OSTI Identifier:
Report Number(s):
DOE Contract Number:
Resource Type: Technical Report
Country of Publication: United States
Subjects: Parallel programming; High performance computing; Parallel computers, programming

Citation Formats

Goudy, Susan Phelps, Brown, Jonathan Leighton, Wen, Zhaofang, Heroux, Michael Allen, and Huang, Shan Shan. BEC: A Virtual Shared Memory Parallel Programming Environment. United States: N. p., 2006. Web. doi:10.2172/882923.
@techreport{osti_882923,
  title  = {BEC: A Virtual Shared Memory Parallel Programming Environment},
  author = {Goudy, Susan Phelps and Brown, Jonathan Leighton and Wen, Zhaofang and Heroux, Michael Allen and Huang, Shan Shan},
  doi    = {10.2172/882923},
  place  = {United States},
  year   = {2006},
  month  = {1}
}

  • The virtues of the different parallel programming models, shared memory and distributed memory, have been much debated. Conventionally the debate could be reduced to programming convenience on the one hand and high scalability on the other. More recently the debate has become somewhat blurred by the provision of virtual shared memory models built on machines with physically distributed memory. The intention of such models/machines is to provide scalable shared memory, i.e. both programmer convenience and high scalability. In this paper, the different models are considered in light of experience gained with a number of systems, ranging from applications in both commerce and science to languages and operating systems. Case studies are introduced as appropriate.
  • DIME (Distributed Irregular Mesh Environment) is a user environment written in C for manipulation of an unstructured triangular mesh in two dimensions. The mesh is distributed among the separate memories of the processors, and communication between processors is handled by DIME; thus the user writes C code referring to the elements and nodes of the mesh and need not be unduly concerned with the parallelism. A tool is provided for the user to make an initial coarse triangulation of a region, which may then be adaptively refined and load-balanced. DIME provides many graphics facilities for examining the mesh, including contouring and a PostScript hard-copy interface. DIME also runs on sequential machines. 8 refs., 18 figs.
  • Many iterative schemes in scientific applications require the multiplication of a sparse matrix by a vector. This kernel has mainly been studied on vector processors and shared-memory parallel computers. In this paper, we address the implementation issues that arise when using a shared virtual memory system on a distributed-memory parallel computer. We study in detail the impact of loop distribution schemes in order to design an efficient algorithm.
  • This report describes the use of shared memory emulation with DOLIB (Distributed Object Library) to simplify parallel programming on the Intel Paragon. A molecular dynamics application is used as an example to illustrate the use of the DOLIB shared memory library. SOTON-PAR, a parallel molecular dynamics code with explicit message-passing using a Lennard-Jones 6-12 potential, is rewritten using DOLIB primitives. The resulting code has no explicit message primitives and resembles a serial code. The new code can perform dynamic load balancing and achieves better performance than the original parallel code with explicit message-passing.
  • Most large parallel computers now built use a hybrid architecture called a shared memory cluster. In this design, a computer consists of several nodes connected by an interconnection network. Each node contains a pool of memory and multiple processors that share direct access to it. Because shared memory clusters combine architectural features of shared memory computers and distributed memory computers, they support several different styles of parallel programming, or programming models. (Further information on the design of these systems and their programming models appears in Section 2.) The purpose of this project was to investigate the programming models available on these systems and to answer three questions: (1) How easy to use are the different programming models in real applications? (2) How do the hardware and system software on different computers affect the performance of these programming models? (3) What are the performance characteristics of different programming models for typical LLNL applications on various shared memory clusters?