OSTI.GOV · U.S. Department of Energy
Office of Scientific and Technical Information

Title: Convergence analysis of Anderson-type acceleration of Richardson's iteration

Journal Article · Numerical Linear Algebra with Applications
DOI: https://doi.org/10.1002/nla.2241 · OSTI ID: 1511931
[1]
  1. Emory Univ., Atlanta, GA (United States). Dept. of Mathematics and Computer Science; Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). National Center for Computational Sciences

We consider Anderson extrapolation to accelerate the (stationary) Richardson iterative method for sparse linear systems. Applying Anderson mixing at periodic intervals, we assess how this acceleration benefits convergence to a prescribed accuracy. The method, named alternating Anderson–Richardson, has appealing properties for high-performance computing, such as the potential to reduce communication and storage in comparison with more conventional linear solvers. We establish sufficient conditions for convergence, and we evaluate the performance of this technique combined with various preconditioners through numerical examples. Furthermore, we propose an augmented version of this technique.
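As a concrete illustration of the scheme described in the abstract, the Python sketch below applies Anderson mixing every p-th step of a weighted Richardson iteration for a linear system Ax = b. The parameter names (omega for the Richardson weight, beta for the mixing weight, m for the history depth, p for the mixing period) and the unpreconditioned least-squares update are illustrative assumptions; they do not reproduce the paper's exact formulation or its augmented variant.

```python
import numpy as np

def aar(A, b, x0=None, omega=0.5, beta=0.5, m=5, p=4, tol=1e-8, maxit=1000):
    """Sketch of alternating Anderson-Richardson for A x = b (illustrative only)."""
    x = np.zeros(b.shape[0]) if x0 is None else x0.astype(float).copy()
    r = b - A @ x
    dX, dF = [], []                      # histories of iterate / residual differences
    x_old, r_old = x.copy(), r.copy()
    nb = np.linalg.norm(b)
    for k in range(1, maxit + 1):
        if k % p or not dX:              # plain weighted Richardson sweep
            x = x + omega * r
        else:                            # periodic Anderson mixing over the history
            Xk, Fk = np.column_stack(dX), np.column_stack(dF)
            gamma, *_ = np.linalg.lstsq(Fk, r, rcond=None)
            x = x + beta * r - (Xk + beta * Fk) @ gamma
        r = b - A @ x
        dX.append(x - x_old)
        dF.append(r - r_old)
        if len(dX) > m:                  # keep at most m difference pairs
            dX.pop(0)
            dF.pop(0)
        x_old, r_old = x.copy(), r.copy()
        if np.linalg.norm(r) <= tol * nb:
            break
    return x, k

# Example usage on a 1D Poisson (tridiagonal, symmetric positive definite) matrix
n = 200
A = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)
x, iters = aar(A, b, omega=0.4)
print(iters, np.linalg.norm(b - A @ x))
```

In this sketch the periodic mixing step only requires a small dense least-squares solve over the stored history, which is one reason the method is attractive for reducing communication relative to Krylov solvers; the specific choice of p and m used here is arbitrary.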

Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization:
USDOE Office of Science (SC)
Grant/Contract Number:
AC05-00OR22725
OSTI ID:
1511931
Alternate ID(s):
OSTI ID: 1504993
Journal Information:
Numerical Linear Algebra with Applications, Vol. 26, Issue 4; ISSN 1070-5325
Publisher:
Wiley
Country of Publication:
United States
Language:
English
Citation Metrics:
Cited by: 5 works
Citation information provided by Web of Science
