Comparative Study of Message Passing and Shared Memory Parallel Programming Models in Neural Network Training
We present a comparative performance study of a coarse-grained parallel neural network training code implemented in both OpenMP and MPI, the standards for shared-memory and message-passing parallel programming, respectively. These versions of the parallel training code are also compared with an implementation using SHMEM, the native SGI/Cray environment for shared-memory programming. The multiprocessor platform used is an SGI/Cray Origin 2000 with up to 32 processors. In this study, the native Cray environment outperforms MPI over the entire range of processors used, while OpenMP outperforms the other two environments when more than 19 processors are used. Parallel efficiency remains above 60% regardless of the programming environment and the number of processors.
- Research Organization:
- Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
- Sponsoring Organization:
- USDOE Office of Defense Programs (DP) (US)
- DOE Contract Number:
- W-7405-Eng-48
- OSTI ID:
- 791052
- Report Number(s):
- UCRL-JC-136867; TRN: US200302%%494
- Resource Relation:
- Conference: High Performance Computing 2000, Washington, DC (US), 04/16/2000--04/20/2000; Other Information: PBD: 14 Dec 1999
- Country of Publication:
- United States
- Language:
- English
Similar Records
MPF: a portable message passing facility for shared memory multiprocessors
Message passing and shared address space parallelism on an SMP cluster