OSTI.GOV, U.S. Department of Energy
Office of Scientific and Technical Information

Title: Experiences with interior point method across a range of parallel computing architectures

Conference
OSTI ID: 36288

Interior point methods (IPMs) for linear programming share a common characteristic: most of the computational effort is spent factorizing and solving the symmetric positive semi-definite matrix that arises from the sparse least-squares equations. The number of iterations of most interior point methods is fairly small and almost invariant to changes in problem dimensions, and for numerical reasons it is unlikely that the average iteration count can be greatly reduced. Research into speeding up IPM computation therefore concentrates mostly on improving solution methods for the sparse least-squares equations. The symmetric matrix resulting from these equations can be fairly dense, which slows computation considerably. Alternative approaches using the augmented system, column splitting, and the Schur complement have been reported and implemented within commercial systems; these approaches are mostly suitable for matrices with a small number of highly dense columns. Alternatively, iterative or hybrid (direct-iterative) solution methods can be employed. Whatever solution approach is used, the key factor in speeding up the computation is efficient exploitation of the static density structure of the matrices involved. Supernodes, decomposition, dense windows, and elimination-tree techniques for processor scheduling have been successfully employed to speed up the computation, and some of these techniques naturally reveal the parallel nature of the underlying computational model. In this paper, we present a parallel interior point algorithm based on some of the above-mentioned methods. The algorithm is designed for the PVM environment, which can exploit dedicated parallel computers as well as clusters of workstations. We show that although the main computational effort of the algorithm is concentrated in a few steps, a proper speed-up can be achieved only if the whole algorithm is designed to run in parallel.
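To make the dense-column problem and the Schur complement remedy concrete, the following is a minimal sketch (not from the paper; all matrix sizes, densities, and variable names are illustrative assumptions). It forms the normal-equations matrix M = A·Theta·A^T for an IPM iteration, factorizes only its sparse part directly, and folds a few dense columns back in through a small Schur complement via the Sherman-Morrison-Woodbury identity, so the sparse factor never fills in:

```python
# Hypothetical illustration of the Schur-complement treatment of dense
# columns in the IPM normal equations; not the paper's implementation.
import numpy as np
from scipy.sparse import diags, random as sparse_random
from scipy.sparse.linalg import splu

rng = np.random.default_rng(0)
m, n_sparse, n_dense = 60, 150, 2          # illustrative sizes

# Constraint matrix split into a sparse block and a few dense columns.
As = sparse_random(m, n_sparse, density=0.08, random_state=rng, format="csr")
Ad = rng.standard_normal((m, n_dense))

# Diagonal IPM scaling Theta = X S^{-1}, one entry per column.
theta_s = rng.uniform(0.5, 2.0, n_sparse)
theta_d = rng.uniform(0.5, 2.0, n_dense)

# Factorize only the sparse part  Ms = As Theta_s As^T.  Including the
# dense columns here would make the whole matrix (and its factor) dense.
Ms = (As @ diags(theta_s) @ As.T).tocsc()
Ms = Ms + 1e-8 * diags(np.ones(m))         # regularize against singularity
lu = splu(Ms)                              # sparse direct factorization

U = Ad * np.sqrt(theta_d)                  # m x k with k = n_dense, so
b = rng.standard_normal(m)                 # full matrix is  Ms + U U^T

# Woodbury:  (Ms + U U^T)^{-1} b
#   = Ms^{-1} b - Ms^{-1} U (I + U^T Ms^{-1} U)^{-1} U^T Ms^{-1} b,
# where  S = I + U^T Ms^{-1} U  is a small k x k Schur complement.
MinvU = lu.solve(U)
Minvb = lu.solve(b)
S = np.eye(n_dense) + U.T @ MinvU
y = Minvb - MinvU @ np.linalg.solve(S, U.T @ Minvb)

# Verify against the full (dense) normal-equations system.
M_full = Ms.toarray() + U @ U.T
residual = np.linalg.norm(M_full @ y - b)  # should be near machine precision
```

Only the k x k system S is dense here; the large sparse factor of Ms is computed once per iteration, which is the pattern the abstract's "small number of highly dense columns" remark refers to.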

Report Number(s):
CONF-9408161-; TRN: 94:009753-0620
Resource Relation:
Conference: 15th International Symposium on Mathematical Programming, Ann Arbor, MI (United States), 15-19 Aug 1994; Other Information: PBD: 1994; Related Information: Is Part Of Mathematical programming: State of the art 1994; Birge, J.R.; Murty, K.G. [eds.]; PB: 312 p.
Country of Publication:
United States
Language:
English