OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Extended Parallelism Models for Optimization on Massively Parallel Computers

Conference

Abstract:

Single-level parallel optimization approaches, in which either the simulation code executes in parallel or the optimization algorithm invokes multiple simultaneous single-processor analyses, have been investigated previously and shown to be effective in reducing the time required to compute optimal solutions. However, these approaches have clear performance limitations that prevent effective scaling to the thousands of processors available in massively parallel supercomputers.

In more recent work, a capability has been developed for multilevel parallelism in which multiple instances of multiprocessor simulations are coordinated simultaneously. This implementation employs a master-slave approach using the Message Passing Interface (MPI) within the DAKOTA software toolkit. Mathematical analysis of peak efficiency in multilevel parallelism has shown that the most effective processor partitioning scheme is the one that limits the size of multiprocessor simulations in favor of concurrent execution of multiple simulations. That is, if both coarse-grained and fine-grained parallelism can be exploited, then preference should be given to the coarse-grained parallelism. This analysis was verified in multilevel parallel computational experiments on networks of workstations (NOWs) and on the Intel TeraFLOPS massively parallel supercomputer.

In current work, methods for exploiting additional coarse-grained parallelism in optimization are being investigated so that fine-grained efficiency losses can be further minimized. These activities focus both on algorithmic coarse-grained parallelism (multiple independent function evaluations), through the development of speculative gradient methods and concurrent iterator strategies, and on function evaluation coarse-grained parallelism (multiple separable simulations within a function evaluation), through the development of general partitioning and nested synchronization facilities. The net result is a total of four separate levels of parallelism which can minimize efficiency losses and achieve near-linear scaling on massively parallel computers.
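The two-level scheme described above can be illustrated with a short MPI sketch. The example below is not DAKOTA code; the partition size, the run_simulation() stub, and all names are assumptions made for illustration. It splits MPI_COMM_WORLD into several small sub-communicators so that multiple multiprocessor simulations execute concurrently, which is the partitioning the efficiency analysis favors: fine-grained simulation speedup is typically sublinear, so adding another concurrent simulation instance usually pays off more than enlarging an existing one.

    /* Minimal sketch of two-level processor partitioning with MPI.
       This illustrates the idea, not the DAKOTA implementation;
       procs_per_sim and run_simulation() are assumptions. */
    #include <mpi.h>
    #include <stdio.h>

    /* Stand-in for a multiprocessor simulation: ranks within one
       partition cooperate (fine-grained level) via a reduction. */
    static double run_simulation(MPI_Comm sim_comm, int eval_id)
    {
        double local = (double)eval_id;
        double total = 0.0;
        MPI_Allreduce(&local, &total, 1, MPI_DOUBLE, MPI_SUM, sim_comm);
        return total;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);

        int world_rank;
        MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

        /* Favor coarse-grained parallelism: many small simulation
           partitions rather than one large one. */
        const int procs_per_sim = 2;            /* assumed partition size */
        int color = world_rank / procs_per_sim; /* partition index */

        MPI_Comm sim_comm;
        MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &sim_comm);

        /* Partitions run concurrently (coarse-grained level), each
           evaluating its own simulation instance. */
        double result = run_simulation(sim_comm, color);

        int sim_rank;
        MPI_Comm_rank(sim_comm, &sim_rank);
        if (sim_rank == 0)
            printf("partition %d returned %g\n", color, result);

        MPI_Comm_free(&sim_comm);
        MPI_Finalize();
        return 0;
    }

Under mpirun -np 8, for example, this yields four two-processor partitions evaluating concurrently; the implementation described in the abstract coordinates such partitions through its MPI master-slave scheduler.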
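Speculative gradient methods add algorithmic coarse-grained parallelism by evaluating the finite-difference points needed for a gradient at the same time as the trial point itself. The following master-slave sketch of that batching is again an illustration under stated assumptions: the toy objective, the message layout, and every name are hypothetical rather than DAKOTA's actual interfaces.

    /* Sketch of speculative gradient scheduling in a master-slave
       pattern over MPI.  The objective, message layout, and names
       are illustrative assumptions, not DAKOTA's API.
       Run with at least N + 2 ranks (e.g., mpirun -np 5). */
    #include <mpi.h>
    #include <stdio.h>

    #define N    3        /* number of design variables (assumed) */
    #define H    1e-6     /* finite-difference step (assumed) */
    #define TERM (N + 1)  /* message tag signaling shutdown */

    static double objective(const double *x)
    {
        double s = 0.0;                      /* toy quadratic */
        for (int i = 0; i < N; ++i)
            s += x[i] * x[i];
        return s;
    }

    int main(int argc, char **argv)
    {
        MPI_Init(&argc, &argv);
        int rank, size;
        MPI_Comm_rank(MPI_COMM_WORLD, &rank);
        MPI_Comm_size(MPI_COMM_WORLD, &size);
        if (size < N + 2)
            MPI_Abort(MPI_COMM_WORLD, 1);    /* need one slave per point */

        if (rank == 0) {
            /* Master: batch the trial point with its N forward
               perturbations so the gradient is already in hand if
               the optimizer accepts the point. */
            double x[N] = {1.0, 2.0, 3.0}, f[N + 1];
            for (int j = 0; j <= N; ++j) {
                double pt[N];
                for (int i = 0; i < N; ++i)
                    pt[i] = x[i] + ((j > 0 && i == j - 1) ? H : 0.0);
                MPI_Send(pt, N, MPI_DOUBLE, 1 + j, j, MPI_COMM_WORLD);
            }
            for (int j = 0; j <= N; ++j) {   /* gather in any order */
                MPI_Status st;
                double val;
                MPI_Recv(&val, 1, MPI_DOUBLE, MPI_ANY_SOURCE, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                f[st.MPI_TAG] = val;
            }
            for (int i = 0; i < N; ++i)      /* forward differences */
                printf("g[%d] = %g\n", i, (f[i + 1] - f[0]) / H);
            for (int s = 1; s < size; ++s) { /* release the slaves */
                double dummy[N] = {0.0};
                MPI_Send(dummy, N, MPI_DOUBLE, s, TERM, MPI_COMM_WORLD);
            }
        } else {
            /* Slave: evaluate points until the termination tag. */
            for (;;) {
                double pt[N];
                MPI_Status st;
                MPI_Recv(pt, N, MPI_DOUBLE, 0, MPI_ANY_TAG,
                         MPI_COMM_WORLD, &st);
                if (st.MPI_TAG == TERM)
                    break;
                double val = objective(pt);
                MPI_Send(&val, 1, MPI_DOUBLE, 0, st.MPI_TAG, MPI_COMM_WORLD);
            }
        }
        MPI_Finalize();
        return 0;
    }

If the optimizer accepts the trial point, the gradient needed for the next step has already been computed at essentially no extra wall-clock cost; if the point is rejected, the speculative evaluations are simply discarded, trading some throughput for reduced latency.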

Research Organization:
Sandia National Laboratories (SNL), Albuquerque, NM, and Livermore, CA (United States)
Sponsoring Organization:
USDOE
DOE Contract Number:
AC04-94AL85000
OSTI ID:
7265
Report Number(s):
SAND99-1295C; ON: DE00007265
Resource Relation:
Conference: Third World Congress of Structural and Multidisciplinary Optimization; Amherst, NY; May 17-21, 1999
Country of Publication:
United States
Language:
English