Analysis of Monte Carlo accelerated iterative methods for sparse linear systems
Abstract
Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners, including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.
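The hybrid scheme described in the abstract combines a deterministic Richardson sweep with a stochastic estimate of the error. A minimal sketch in that spirit (Monte Carlo Synthetic Acceleration style) is shown below; this is our own illustration, not the authors' implementation, and all names and parameters (`mc_correction`, `mcsa_solve`, `n_histories`, `max_steps`) are ours.

```python
import numpy as np

def mc_correction(H, r, n_histories=1000, max_steps=20, rng=None):
    """Estimate delta solving (I - H) delta = r by adjoint random walks,
    i.e. a sampled, truncated Neumann series sum_k H^k r."""
    rng = np.random.default_rng() if rng is None else rng
    n = len(r)
    norm_r = np.abs(r).sum()
    if norm_r == 0.0:
        return np.zeros(n)
    p0 = np.abs(r) / norm_r                  # starting-state distribution
    col_norm = np.abs(H).sum(axis=0)         # 1-norms of columns of H
    delta = np.zeros(n)
    for _ in range(n_histories):
        i = rng.choice(n, p=p0)
        w = np.sign(r[i]) * norm_r           # initial statistical weight
        delta[i] += w                        # collision tally at start
        for _ in range(max_steps):
            if col_norm[i] == 0.0:
                break                        # absorbing state
            probs = np.abs(H[:, i]) / col_norm[i]
            j = rng.choice(n, p=probs)
            w *= H[j, i] / probs[j]          # importance-weight update
            i = j
            delta[i] += w
    return delta / n_histories

def mcsa_solve(A, b, x0=None, tol=1e-8, max_iters=50, rng=None):
    """Richardson iteration on the (already preconditioned) system A x = b,
    accelerated by a Monte Carlo error correction. Assumes rho(I - A) < 1."""
    n = len(b)
    H = np.eye(n) - A                        # iteration matrix of the splitting
    x = np.zeros(n) if x0 is None else x0.copy()
    for _ in range(max_iters):
        x = x + (b - A @ x)                  # deterministic Richardson step
        r = b - A @ x
        x = x + mc_correction(H, r, rng=rng) # stochastic error correction
        if np.linalg.norm(b - A @ x) < tol * np.linalg.norm(b):
            break
    return x
```

Because each random-walk history is independent, the stochastic correction is naturally parallel, and losing a fraction of the histories to a fault degrades only the variance of the estimate, which is the resiliency argument sketched in the abstract.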
 Authors:
 Department of Mathematics and Computer Science, Emory University, Atlanta, GA 30322, USA
 Oak Ridge National Laboratory, 1 Bethel Valley Rd., Oak Ridge, TN 37831, USA
 Publication Date:
 March 5, 2017
 Research Org.:
 Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
 Sponsoring Org.:
 USDOE
 OSTI Identifier:
 1346688
 Alternate Identifier(s):
 OSTI ID: 1400612
 Grant/Contract Number:
 AC05-00OR22725; ERKJ247
 Resource Type:
 Journal Article: Accepted Manuscript
 Journal Name:
 Numerical Linear Algebra with Applications
 Additional Journal Information:
 Journal Volume: 24; Journal Issue: 3; Journal ID: ISSN 1070-5325
 Publisher:
 Wiley
 Country of Publication:
 United States
 Language:
 English
 Subject:
 97 MATHEMATICS AND COMPUTING; iterative methods; Monte Carlo methods; preconditioning; resilience; Richardson iteration; sparse approximate inverse; sparse linear systems
Citation Formats
Benzi, Michele, Evans, Thomas M., Hamilton, Steven P., Lupo Pasini, Massimiliano, and Slattery, Stuart R. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems. United States: N. p., 2017.
Web. doi:10.1002/nla.2088.
Benzi, Michele, Evans, Thomas M., Hamilton, Steven P., Lupo Pasini, Massimiliano, & Slattery, Stuart R. Analysis of Monte Carlo accelerated iterative methods for sparse linear systems. United States. doi:10.1002/nla.2088.
Benzi, Michele, Evans, Thomas M., Hamilton, Steven P., Lupo Pasini, Massimiliano, and Slattery, Stuart R. 2017.
"Analysis of Monte Carlo accelerated iterative methods for sparse linear systems". United States.
doi:10.1002/nla.2088. https://www.osti.gov/servlets/purl/1346688.
@article{osti_1346688,
title = {Analysis of Monte Carlo accelerated iterative methods for sparse linear systems},
author = {Benzi, Michele and Evans, Thomas M. and Hamilton, Steven P. and Lupo Pasini, Massimiliano and Slattery, Stuart R.},
abstractNote = {Here, we consider hybrid deterministic-stochastic iterative algorithms for the solution of large, sparse linear systems. Starting from a convergent splitting of the coefficient matrix, we analyze various types of Monte Carlo acceleration schemes applied to the original preconditioned Richardson (stationary) iteration. We expect that these methods will have considerable potential for resiliency to faults when implemented on massively parallel machines. We also establish sufficient conditions for the convergence of the hybrid schemes, and we investigate different types of preconditioners, including sparse approximate inverses. Numerical experiments on linear systems arising from the discretization of partial differential equations are presented.},
doi = {10.1002/nla.2088},
journal = {Numerical Linear Algebra with Applications},
number = 3,
volume = 24,
place = {United States},
year = {2017},
month = {mar}
}

Implementation of iterative methods for large sparse nonsymmetric linear systems on a parallel vector machine
This paper reports on the restructuring of three prominent iterative methods for large sparse nonsymmetric linear systems: CGS (conjugate gradient squared), CRS (conjugate residual squared), and Orthomin(k). The restructured methods are better suited to vector and parallel processing, and the authors implemented them on a parallel vector system. The linear systems for the numerical tests are obtained by discretizing four two-dimensional elliptic partial differential equations with finite difference and finite element methods. A vectorizable and parallelizable version of incomplete LU preconditioning is used. The authors restructured the subroutines to enhance the data locality on vector machines …
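For reference, the CGS method named above can be sketched as follows. This is the textbook formulation of Sonneveld's algorithm, not the restructured vectorized variant the paper describes, and the function name `cgs` and its parameters are ours.

```python
import numpy as np

def cgs(A, b, x0=None, tol=1e-10, max_iters=100):
    """Textbook Conjugate Gradient Squared (CGS) for nonsymmetric A x = b."""
    n = len(b)
    x = np.zeros(n) if x0 is None else x0.copy()
    r = b - A @ x
    rtld = r.copy()                      # fixed shadow residual
    p = q = np.zeros(n)
    rho_old = 1.0
    for i in range(max_iters):
        rho = rtld @ r
        if rho == 0.0:
            break                        # method breakdown
        if i == 0:
            u = r.copy()
            p = u.copy()
        else:
            beta = rho / rho_old
            u = r + beta * q
            p = u + beta * (q + beta * p)
        vhat = A @ p
        alpha = rho / (rtld @ vhat)
        q = u - alpha * vhat
        uq = u + q                       # the "squared" update direction
        x = x + alpha * uq
        r = r - alpha * (A @ uq)
        rho_old = rho
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
    return x
```

Each iteration needs two matrix-vector products and no transpose product, which is what makes CGS (and CRS) attractive for the vectorized sparse kernels discussed in the paper.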
Anderson acceleration of the Jacobi iterative method: An efficient alternative to Krylov methods for large, sparse linear systems
We employ Anderson extrapolation to accelerate the classical Jacobi iterative method for large, sparse linear systems. Specifically, we apply extrapolation at periodic intervals within the Jacobi iteration to develop the Alternating Anderson-Jacobi (AAJ) method. We verify the accuracy and efficacy of AAJ on a range of test cases, including nonsymmetric systems of equations. We demonstrate that AAJ possesses a favorable scaling with system size, accompanied by a small prefactor, even in the absence of a preconditioner. In particular, we show that AAJ is able to accelerate the classical Jacobi iteration by over four orders of magnitude, with speedups …
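The AAJ idea summarized above can be sketched as weighted Jacobi sweeps with an Anderson least-squares extrapolation every m-th step. The code below is our own minimal illustration under that reading, not the authors' implementation; the name `aaj_solve` and the parameter choices (`m`, `depth`, `omega`) are ours.

```python
import numpy as np

def aaj_solve(A, b, m=5, depth=4, omega=0.5, tol=1e-10, max_iters=500):
    """Alternating Anderson-Jacobi sketch: weighted Jacobi sweeps with an
    Anderson extrapolation over recent iterates every m-th iteration."""
    Dinv = 1.0 / np.diag(A)              # Jacobi preconditioner D^{-1}
    x = np.zeros_like(b)
    X, F = [], []                        # iterate and residual history
    for k in range(max_iters):
        r = b - A @ x
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            break
        f = Dinv * r                     # preconditioned residual
        X.append(x.copy()); F.append(f.copy())
        if len(X) > depth + 1:           # keep a sliding window
            X.pop(0); F.pop(0)
        if (k + 1) % m == 0 and len(X) > 1:
            # Anderson step: least-squares combination of history differences
            dX = np.column_stack([X[i+1] - X[i] for i in range(len(X) - 1)])
            dF = np.column_stack([F[i+1] - F[i] for i in range(len(F) - 1)])
            gamma, *_ = np.linalg.lstsq(dF, f, rcond=None)
            x = x + omega * f - (dX + omega * dF) @ gamma
        else:
            x = x + omega * f            # plain weighted Jacobi sweep
    return x
```

Because the Jacobi sweeps are purely local and the extrapolation needs only a small dense least-squares solve, the method avoids the global inner products that dominate Krylov solvers at scale, which is the efficiency argument the abstract makes.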
A parallel and vectorial implementation of basic linear algebra subroutines in iterative solving of large sparse linear systems of equations
Electromagnetic field analysis by finite element methods requires solving large sparse systems of linear equations. Although no discernible structure in the distribution of nonzero elements may be present (e.g., multidiagonal structures), subsets of independent equations can be determined, and equations within the same subset can then be solved in parallel. A good choice of storage scheme for the sparse matrices is also very important to speed up the solution through vectorization. The modifications the authors made to the data structures are presented, and the possibility of using other schemes is discussed.
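As a concrete example of such a storage scheme, the sketch below shows compressed sparse row (CSR) storage and the row-parallel matrix-vector product it enables. CSR is a standard format chosen here for illustration, not necessarily the scheme the authors adopted; the function names are ours.

```python
import numpy as np

def dense_to_csr(A):
    """Convert a dense matrix to CSR (compressed sparse row) arrays:
    nonzero values, their column indices, and per-row offsets."""
    data, indices, indptr = [], [], [0]
    for row in A:
        nz = np.nonzero(row)[0]
        data.extend(row[nz])
        indices.extend(nz)
        indptr.append(len(indices))
    return np.array(data), np.array(indices), np.array(indptr)

def csr_matvec(data, indices, indptr, x):
    """y = A x using CSR storage; each row's dot product is independent,
    so rows can be processed in parallel or vectorized in chunks."""
    n = len(indptr) - 1
    y = np.zeros(n)
    for i in range(n):
        lo, hi = indptr[i], indptr[i + 1]
        y[i] = data[lo:hi] @ x[indices[lo:hi]]
    return y
```

The contiguous `data` array is what makes the inner dot products vectorizable, while the row independence mirrors the independent-equation subsets described in the paragraph above.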