A parallel neural network training algorithm for control of discrete dynamical systems.
In this work we present a parallel neural network controller training code that uses MPI, a portable message-passing environment. A comprehensive performance analysis is reported, comparing the results of a performance model with actual measurements. The analysis covers three load assignment schemes: block distribution, strip mining, and a sliding-average bin-packing (best-fit) algorithm. Such analysis is crucial because optimal load balance cannot be achieved when the workload information is not available a priori. The speedup results obtained with these schemes are compared with those of a bin-packing load balance scheme with perfect load prediction, i.e., a priori knowledge of the computing effort. Two multiprocessor platforms, an SGI/Cray Origin 2000 and an IBM SP, were used for this study. For the best load balance scheme, a parallel efficiency of over 50% for the entire computation is achieved on 17 processors of either parallel computer.
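The sliding-average bin-packing idea can be illustrated with a minimal sketch: since per-task costs are unknown a priori, each task's cost is predicted as a sliding average of recently observed costs, and the task is then packed onto the currently least-loaded processor (best fit). The function name, window size, and data layout below are illustrative assumptions, not the paper's actual implementation.

```python
# Hedged sketch of sliding-average best-fit load assignment.
# Assumption: tasks arrive in order and their actual costs are observed
# after execution, so earlier observations predict later costs.
from collections import deque

def sliding_average_best_fit(task_costs, n_procs, window=4):
    """Assign tasks to processors.

    Each task's cost is predicted as the sliding average of the last
    `window` observed costs; the task is placed on the processor with
    the smallest predicted load so far (best fit).
    Returns (assignment, predicted_loads).
    """
    loads = [0.0] * n_procs          # predicted load per processor
    assignment = []                  # processor index chosen per task
    history = deque(maxlen=window)   # recent observed costs
    for actual in task_costs:
        # Predict from recent history; fall back to the actual cost
        # for the very first task (no history yet).
        predicted = sum(history) / len(history) if history else actual
        p = min(range(n_procs), key=loads.__getitem__)  # least-loaded processor
        assignment.append(p)
        loads[p] += predicted
        history.append(actual)       # cost becomes known after execution
    return assignment, loads
```

With perfect load prediction (the comparison baseline in the abstract), `predicted` would simply equal `actual`; the gap between the two runs measures the cost of not knowing the workload a priori.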
- Research Organization:
- Argonne National Lab., IL (US)
- Sponsoring Organization:
- US Department of Energy (US)
- DOE Contract Number:
- W-31-109-ENG-38
- OSTI ID:
- 8131
- Report Number(s):
- ANL/RA/CP-94593; TRN: AH200117%%32
- Resource Relation:
- Conference: High Performance Computing '98, Boston, MA (US), 04/05/1998--04/09/1998; Other Information: PBD: 20 Jan 1998
- Country of Publication:
- United States
- Language:
- English
Similar Records
DANTSYS/MPI: a system for 3-D deterministic transport on parallel architectures
Parallelization of a dynamic unstructured algorithm using three leading programming paradigms