Comparative Study of Message Passing and Shared Memory Parallel Programming Models in Neural Network Training
- LLNL
A comparative performance study is presented of a coarse-grained parallel neural network training code, implemented in both OpenMP and MPI, the standards for shared-memory and message-passing parallel programming environments, respectively. In addition, these versions of the parallel training code are compared to an implementation using SHMEM, the native SGI/Cray environment for shared-memory programming. The multiprocessor platform used is an SGI/Cray Origin 2000 with up to 32 processors. In this study, the native Cray environment outperforms MPI over the entire range of processors used, while OpenMP shows better performance than the other two environments when more than 19 processors are used. Efficiency remains above 60% regardless of the parallel programming environment and the number of processors.
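The record does not reproduce the training code, but a coarse-grained data-parallel scheme of the kind the abstract describes typically partitions the training set across processes, computes local gradients, and combines them with a single collective operation per step. The following C/MPI sketch is illustrative only; the single-layer least-squares model, the constants, and all identifiers are assumptions, not taken from the report:

```c
/* Minimal sketch of coarse-grained data-parallel training with MPI.
 * Assumptions (not from the report): a single-layer linear model with
 * NWEIGHTS weights, squared-error loss, and batch gradient descent.
 * Each rank computes the gradient on its slice of the training set;
 * MPI_Allreduce sums the slices so every rank applies the same update. */
#include <mpi.h>
#include <stdio.h>

#define NWEIGHTS 4
#define NSAMPLES 1024
#define EPOCHS   1000
#define LR       0.1

int main(int argc, char **argv) {
    int rank, nprocs;
    double w[NWEIGHTS] = {0.0}, grad[NWEIGHTS], gsum[NWEIGHTS];
    /* Toy training data, generated identically on every rank. */
    static double x[NSAMPLES][NWEIGHTS], y[NSAMPLES];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &nprocs);

    for (int i = 0; i < NSAMPLES; i++) {        /* target: y = sum of inputs */
        for (int j = 0; j < NWEIGHTS; j++) {
            x[i][j] = (double)((i + j) % 7) / 7.0;
            y[i] += x[i][j];
        }
    }

    int chunk = NSAMPLES / nprocs;              /* this rank's data slice */
    int lo = rank * chunk;
    int hi = (rank == nprocs - 1) ? NSAMPLES : lo + chunk;

    for (int epoch = 0; epoch < EPOCHS; epoch++) {
        for (int j = 0; j < NWEIGHTS; j++) grad[j] = 0.0;
        for (int i = lo; i < hi; i++) {         /* local gradient */
            double pred = 0.0;
            for (int j = 0; j < NWEIGHTS; j++) pred += w[j] * x[i][j];
            double err = pred - y[i];
            for (int j = 0; j < NWEIGHTS; j++) grad[j] += err * x[i][j];
        }
        /* One collective per epoch: the coarse-grained communication step. */
        MPI_Allreduce(grad, gsum, NWEIGHTS, MPI_DOUBLE,
                      MPI_SUM, MPI_COMM_WORLD);
        for (int j = 0; j < NWEIGHTS; j++)
            w[j] -= LR * gsum[j] / NSAMPLES;    /* mean-gradient update */
    }

    if (rank == 0)
        printf("learned w[0] = %f\n", w[0]);
    MPI_Finalize();
    return 0;
}
```

An OpenMP version of the same scheme would typically express the gradient sum as a parallel loop over samples with a reduction clause instead of an explicit collective, while classic SGI/Cray SHMEM provides shmem_double_sum_to_all for the global reduction.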
- Research Organization:
- Lawrence Livermore National Lab., CA (US)
- Sponsoring Organization:
- USDOE Office of Defense Programs (DP) (US)
- DOE Contract Number:
- W-7405-ENG-48
- OSTI ID:
- 791052
- Report Number(s):
- UCRL-JC-136867
- Country of Publication:
- United States
- Language:
- English
Similar Records
MPF: a portable message passing facility for shared memory multiprocessors
Message-passing controller for a shared-memory multiprocessor