DOE PAGES — U.S. Department of Energy
Office of Scientific and Technical Information

Title: Alternating Anderson–Richardson method: An efficient alternative to preconditioned Krylov methods for large, sparse linear systems

Abstract

Here, we present the Alternating Anderson–Richardson (AAR) method: an efficient and scalable alternative to preconditioned Krylov solvers for the solution of large, sparse linear systems on high performance computing platforms. Specifically, we generalize the recently proposed Alternating Anderson–Jacobi (AAJ) method (Pratapa et al., 2016) to include preconditioning, discuss efficient parallel implementation, and provide serial MATLAB and parallel C/C++ implementations. In serial applications to nonsymmetric systems, we find that AAR is comparably robust to GMRES, using the same preconditioning, while often outperforming it in time to solution; and find AAR to be more robust than Bi-CGSTAB for the problems considered. In parallel applications to the Helmholtz and Poisson equations, we find that AAR shows superior strong and weak scaling to GMRES, Bi-CGSTAB, and Conjugate Gradient (CG) methods, using the same preconditioning, with consistently shorter times to solution at larger processor counts. Finally, in massively parallel applications to the Poisson equation, on up to 110,592 processors, we find that AAR shows superior strong and weak scaling to CG, with shorter minimum time to solution. We thus find that AAR offers a robust and efficient alternative to current state-of-the-art solvers, with increasing advantages as the number of processors grows.
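The abstract describes the AAR scheme at a high level: weighted Richardson iterations (Jacobi-preconditioned in the original AAJ variant), with an Anderson extrapolation over a short history of iterates applied periodically in place of the plain update. The sketch below illustrates that structure in Python/NumPy. It is a minimal illustration only: the function name aar, the M_inv preconditioner hook, and the parameter defaults (omega, beta, history size m, extrapolation period p) are assumptions for exposition, not the authors' reference implementation, which is provided with the paper in MATLAB and C/C++.

import numpy as np

def aar(A, b, x0=None, omega=0.6, beta=0.6, m=7, p=6,
        tol=1e-8, maxiter=1000, M_inv=None):
    # Minimal Alternating Anderson-Richardson sketch (illustrative defaults;
    # not the authors' reference implementation).
    n = len(b)
    x = np.zeros(n) if x0 is None else np.array(x0, dtype=float)
    if M_inv is None:
        M_inv = lambda r: r              # no preconditioning by default
    X, F = [], []                        # histories of iterate / residual differences
    x_prev, f_prev = None, None
    b_norm = np.linalg.norm(b)
    for k in range(maxiter):
        r = b - A @ x
        if np.linalg.norm(r) <= tol * b_norm:
            return x, k                  # converged
        f = M_inv(r)                     # preconditioned residual
        if f_prev is not None:
            X.append(x - x_prev)
            F.append(f - f_prev)
            if len(X) > m:               # keep only the m most recent differences
                X.pop(0); F.pop(0)
        x_prev, f_prev = x.copy(), f.copy()
        if (k + 1) % p == 0 and X:
            # Anderson extrapolation: least-squares combination of the history
            Xk, Fk = np.column_stack(X), np.column_stack(F)
            gamma, *_ = np.linalg.lstsq(Fk, f, rcond=None)
            x = x + beta * f - (Xk + beta * Fk) @ gamma
        else:
            x = x + omega * f            # weighted Richardson update
    return x, maxiter

As a quick check, the routine can be applied to a small sparse system with M_inv set to division by the matrix diagonal (e.g. lambda r: r / A.diagonal()), which recovers a Jacobi-preconditioned variant in the spirit of the AAJ method named in the abstract.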

Authors:
 Suryanarayana, Phanish [1]; Pratapa, Phanisri P. [1]; Pask, John E. [2]
  1. Georgia Inst. of Technology, Atlanta, GA (United States). College of Engineering
  2. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Physics Division
Publication Date:
July 2018
Research Org.:
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA); National Science Foundation (NSF)
OSTI Identifier:
1497958
Report Number(s):
LLNL-JRNL-757222
Journal ID: ISSN 0010-4655; 942389
Grant/Contract Number:  
AC52-07NA27344; 1333500
Resource Type:
Accepted Manuscript
Journal Name:
Computer Physics Communications
Additional Journal Information:
Journal Volume: 234; Journal Issue: C; Journal ID: ISSN 0010-4655
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING

Citation Formats

Suryanarayana, Phanish, Pratapa, Phanisri P., and Pask, John E. Alternating Anderson–Richardson method: An efficient alternative to preconditioned Krylov methods for large, sparse linear systems. United States: N. p., 2018. Web. doi:10.1016/j.cpc.2018.07.007.
Suryanarayana, Phanish, Pratapa, Phanisri P., & Pask, John E. Alternating Anderson–Richardson method: An efficient alternative to preconditioned Krylov methods for large, sparse linear systems. United States. doi:10.1016/j.cpc.2018.07.007.
Suryanarayana, Phanish, Pratapa, Phanisri P., and Pask, John E. "Alternating Anderson–Richardson method: An efficient alternative to preconditioned Krylov methods for large, sparse linear systems". United States. doi:10.1016/j.cpc.2018.07.007. https://www.osti.gov/servlets/purl/1497958.
@article{osti_1497958,
title = {Alternating Anderson–Richardson method: An efficient alternative to preconditioned Krylov methods for large, sparse linear systems},
author = {Suryanarayana, Phanish and Pratapa, Phanisri P. and Pask, John E.},
abstractNote = {Here, we present the Alternating Anderson–Richardson (AAR) method: an efficient and scalable alternative to preconditioned Krylov solvers for the solution of large, sparse linear systems on high performance computing platforms. Specifically, we generalize the recently proposed Alternating Anderson–Jacobi (AAJ) method (Pratapa et al., 2016) to include preconditioning, discuss efficient parallel implementation, and provide serial MATLAB and parallel C/C++ implementations. In serial applications to nonsymmetric systems, we find that AAR is comparably robust to GMRES, using the same preconditioning, while often outperforming it in time to solution; and find AAR to be more robust than Bi-CGSTAB for the problems considered. In parallel applications to the Helmholtz and Poisson equations, we find that AAR shows superior strong and weak scaling to GMRES, Bi-CGSTAB, and Conjugate Gradient (CG) methods, using the same preconditioning, with consistently shorter times to solution at larger processor counts. Finally, in massively parallel applications to the Poisson equation, on up to 110,592 processors, we find that AAR shows superior strong and weak scaling to CG, with shorter minimum time to solution. We thus find that AAR offers a robust and efficient alternative to current state-of-the-art solvers, with increasing advantages as the number of processors grows.},
doi = {10.1016/j.cpc.2018.07.007},
journal = {Computer Physics Communications},
number = {C},
volume = {234},
place = {United States},
year = {2018},
month = {7}
}

Journal Article:
Free Publicly Available Full Text
Publisher's Version of Record

Citation Metrics:
Cited by: 3 works (citation information provided by Web of Science)
