OSTI.GOV — U.S. Department of Energy, Office of Scientific and Technical Information

Title: Migration of vectorized iterative solvers to distributed memory architectures

Abstract

Both necessity and opportunity motivate the use of high-performance computers for iterative linear solvers. Necessity results from the size of the problems being solved: smaller problems are often better handled by direct methods. Opportunity arises from the formulation of the iterative methods in terms of simple linear algebra operations, even if this "natural" parallelism is not easy to exploit with irregularly structured sparse matrices and good preconditioners. As a result, high-performance implementations of iterative solvers have attracted much interest in recent years. Most efforts aim either to vectorize or parallelize the dominating operation (structured or unstructured sparse matrix-vector multiplication) or to increase locality and parallelism by reformulating the algorithm (reducing global synchronization in inner products or local data exchange in preconditioners). Target architectures for iterative solvers currently include mostly vector supercomputers and architectures with one or a few optimized (e.g., super-scalar and/or super-pipelined RISC) processors and hierarchical memory systems. More recently, vendors have offered parallel computers with physically distributed memory and a better price/performance ratio as a very interesting alternative to vector supercomputers. However, programming comfort on such distributed memory parallel processors (DMPPs) still lags behind. Here the authors are concerned with iterative solvers and their changing computing environment; in particular, they consider migration from traditional vector supercomputers to DMPPs. Application requirements force the use of flexible and portable libraries. The authors want to extend the portability of iterative solvers rather than reimplement everything for each new machine, or even for each new architecture.
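The "dominating operation" the abstract refers to, sparse matrix-vector multiplication, can be illustrated for the common compressed sparse row (CSR) storage layout. This is a minimal sketch for readers unfamiliar with the kernel, not code from the paper; the matrix data below is invented for the example:

```python
# Sparse matrix-vector product y = A @ x, with A in CSR format:
#   values  - nonzero entries, row by row
#   col_idx - column index of each nonzero
#   row_ptr - row_ptr[i]..row_ptr[i+1] delimits row i's nonzeros
def csr_matvec(values, col_idx, row_ptr, x):
    n = len(row_ptr) - 1
    y = [0.0] * n
    for i in range(n):
        # The inner loop over a row's nonzeros is the part that
        # vectorization and parallelization efforts target.
        for k in range(row_ptr[i], row_ptr[i + 1]):
            y[i] += values[k] * x[col_idx[k]]
    return y

# 3x3 example matrix: [[2, 0, 1], [0, 3, 0], [4, 0, 5]]
values  = [2.0, 1.0, 3.0, 4.0, 5.0]
col_idx = [0, 2, 1, 0, 2]
row_ptr = [0, 2, 3, 5]
print(csr_matvec(values, col_idx, row_ptr, [1.0, 1.0, 1.0]))  # [3.0, 3.0, 9.0]
```

The irregular, indirect access through col_idx is what makes this kernel hard to vectorize on traditional supercomputers and hard to partition on distributed memory machines, which is the migration problem the paper addresses.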

Authors:
 Pommerell, C [1]; Ruehl, R [2]
  1. AT&T Bell Labs., Murray Hill, NJ (United States)
  2. CSCS-ETH, Manno (Switzerland)
Publication Date:
Research Org.:
Front Range Scientific Computations, Inc., Boulder, CO (United States); US Department of Energy (USDOE), Washington DC (United States); National Science Foundation, Washington, DC (United States)
OSTI Identifier:
223853
Report Number(s):
CONF-9404305-Vol.1
Journal ID: ISSN 1064-8275; ON: DE96005735; TRN: 96:002320-0027
Resource Type:
Conference
Resource Relation:
Journal Volume: 17; Journal Issue: 1; Conference: Colorado conference on iterative methods, Breckenridge, CO (United States), 5-9 Apr 1994; Other Information: PBD: [1994]; Related Information: Is Part Of Colorado Conference on iterative methods. Volume 1; PB: 203 p.
Country of Publication:
United States
Language:
English
Subject:
99 MATHEMATICS, COMPUTERS, INFORMATION SCIENCE, MANAGEMENT, LAW, MISCELLANEOUS; ALGEBRA; PARALLEL PROCESSING; ITERATIVE METHODS; VECTOR PROCESSING

Citation Formats

Pommerell, C, and Ruehl, R. Migration of vectorized iterative solvers to distributed memory architectures. United States: N. p., 1994. Web. doi:10.1137/0917017.
Pommerell, C, & Ruehl, R. Migration of vectorized iterative solvers to distributed memory architectures. United States. https://doi.org/10.1137/0917017
Pommerell, C, and Ruehl, R. 1994. "Migration of vectorized iterative solvers to distributed memory architectures". United States. https://doi.org/10.1137/0917017. https://www.osti.gov/servlets/purl/223853.
@article{osti_223853,
title = {Migration of vectorized iterative solvers to distributed memory architectures},
author = {Pommerell, C and Ruehl, R},
abstractNote = {Both necessity and opportunity motivate the use of high-performance computers for iterative linear solvers. Necessity results from the size of the problems being solved: smaller problems are often better handled by direct methods. Opportunity arises from the formulation of the iterative methods in terms of simple linear algebra operations, even if this "natural" parallelism is not easy to exploit with irregularly structured sparse matrices and good preconditioners. As a result, high-performance implementations of iterative solvers have attracted much interest in recent years. Most efforts aim either to vectorize or parallelize the dominating operation (structured or unstructured sparse matrix-vector multiplication) or to increase locality and parallelism by reformulating the algorithm (reducing global synchronization in inner products or local data exchange in preconditioners). Target architectures for iterative solvers currently include mostly vector supercomputers and architectures with one or a few optimized (e.g., super-scalar and/or super-pipelined RISC) processors and hierarchical memory systems. More recently, vendors have offered parallel computers with physically distributed memory and a better price/performance ratio as a very interesting alternative to vector supercomputers. However, programming comfort on such distributed memory parallel processors (DMPPs) still lags behind. Here the authors are concerned with iterative solvers and their changing computing environment; in particular, they consider migration from traditional vector supercomputers to DMPPs. Application requirements force the use of flexible and portable libraries. The authors want to extend the portability of iterative solvers rather than reimplement everything for each new machine, or even for each new architecture.},
doi = {10.1137/0917017},
url = {https://www.osti.gov/biblio/223853},
journal = {},
issn = {1064-8275},
number = 1,
volume = 17,
place = {United States},
year = {1994}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
