OSTI.GOV
U.S. Department of Energy
Office of Scientific and Technical Information

Title: A Block-Asynchronous Relaxation Method for Graphics Processing Units

Abstract

In this paper, we analyze the potential of asynchronous relaxation methods on Graphics Processing Units (GPUs). For this purpose, we developed a set of asynchronous iteration algorithms in CUDA and compared them with a parallel implementation of synchronous relaxation methods on CPU-based systems. For a set of test matrices taken from the University of Florida Matrix Collection, we monitor the convergence behavior, the average iteration time, and the total time to solution. Analyzing the results, we observe that even our most basic asynchronous relaxation scheme, despite its (expected) lower convergence rate compared to Gauss-Seidel relaxation, is able, when running on GPUs, to provide solution approximations of a certain accuracy in considerably shorter time than Gauss-Seidel running on CPUs. Hence, it overcompensates for the slower convergence by exploiting the scalability of asynchronous schemes and their good fit to the highly parallel GPU architectures. Further, by enhancing the most basic asynchronous approach with hybrid schemes, which use multiple iterations within the "subdomain" handled by a GPU thread block and Jacobi-like asynchronous updates across the "boundaries" (subject to tuning various parameters), we manage not only to recover the loss of global convergence but often to accelerate convergence by up to two times (compared to the effective but difficult-to-parallelize Gauss-Seidel-type schemes), while keeping the execution time of a global iteration practically the same. This shows the high potential of asynchronous methods not only as stand-alone numerical solvers for linear systems of equations fulfilling certain convergence conditions, but, more importantly, as smoothers in multigrid methods. Due to the explosion of parallelism in today's architecture designs, the significance of, and need for, asynchronous methods such as the ones described in this work is expected to grow.
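For a concrete picture of the hybrid scheme the abstract describes, the following is a minimal, hypothetical CUDA sketch of the block-asynchronous idea: each thread block performs several Jacobi sweeps on its own "subdomain" (the contiguous rows it owns) through a shared-memory copy, while entries owned by other blocks are read from global memory without synchronization and may therefore be stale. The kernel name, the dense row-major matrix layout, and the fixed local-iteration count are illustrative assumptions, not the paper's actual implementation.

#include <cuda_runtime.h>

// Number of local Jacobi sweeps per global update; one of the tuning
// parameters the abstract alludes to (the value here is arbitrary).
#define LOCAL_ITERS 4

// Sketch assumptions: dense row-major matrix A of order n, n a multiple of
// blockDim.x, nonzero diagonal. Each block owns rows [block_lo, block_hi);
// reads outside that range go to global memory and may return stale values
// written by other blocks -- the asynchronous, Jacobi-like coupling across
// the "boundaries".
__global__ void block_async_relax(int n, const double *A, const double *b,
                                  double *x)
{
    int row      = blockIdx.x * blockDim.x + threadIdx.x;
    int block_lo = blockIdx.x * blockDim.x;
    int block_hi = block_lo + blockDim.x;

    extern __shared__ double xs[];      // local copy of this block's subdomain
    xs[threadIdx.x] = x[row];
    __syncthreads();

    for (int it = 0; it < LOCAL_ITERS; ++it) {
        double sigma = 0.0;
        for (int j = 0; j < n; ++j) {
            if (j == row) continue;
            // Fresh values inside the block, possibly stale ones outside.
            double xj = (j >= block_lo && j < block_hi)
                          ? xs[j - block_lo] : x[j];
            sigma += A[(size_t)row * n + j] * xj;
        }
        __syncthreads();                // all reads of xs are done
        xs[threadIdx.x] = (b[row] - sigma) / A[(size_t)row * n + row];
        __syncthreads();                // local update visible block-wide
    }
    x[row] = xs[threadIdx.x];           // publish; no global synchronization
}

// A launch would pass the subdomain copy as dynamic shared memory, e.g.
//   block_async_relax<<<n / 256, 256, 256 * sizeof(double)>>>(n, d_A, d_b, d_x);
// repeated until the residual norm is small enough.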

Authors:
 Antz, Hartwig [1]; Tomov, Stanimire [2]; Dongarra, Jack [3]; Heuveline, Vincent [1]
  1. Karlsruhe Inst. of Technology (KIT) (Germany)
  2. Univ. of Tennessee, Knoxville, TN (United States)
  3. Univ. of Tennessee, Knoxville, TN (United States); Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States); Univ. of Manchester (United Kingdom)
Publication Date:
2011-11-30
Research Org.:
Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
Sponsoring Org.:
USDOE Office of Science (SC)
OSTI Identifier:
1173288
Report Number(s):
LBNL-5784E
DOE Contract Number:  
AC02-05CH11231
Resource Type:
Technical Report
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; Asynchronous Relaxation; Chaotic Iteration; Graphics Processing Units (GPUs); Jacobi Method

Citation Formats

Antz, Hartwig, Tomov, Stanimire, Dongarra, Jack, and Heuveline, Vincent. A Block-Asynchronous Relaxation Method for Graphics Processing Units. United States: N. p., 2011. Web. doi:10.2172/1173288.
Antz, Hartwig, Tomov, Stanimire, Dongarra, Jack, & Heuveline, Vincent. A Block-Asynchronous Relaxation Method for Graphics Processing Units. United States. https://doi.org/10.2172/1173288
Antz, Hartwig, Tomov, Stanimire, Dongarra, Jack, and Heuveline, Vincent. 2011. "A Block-Asynchronous Relaxation Method for Graphics Processing Units". United States. https://doi.org/10.2172/1173288. https://www.osti.gov/servlets/purl/1173288.
@techreport{osti_1173288,
title = {A Block-Asynchronous Relaxation Method for Graphics Processing Units},
author = {Antz, Hartwig and Tomov, Stanimire and Dongarra, Jack and Heuveline, Vincent},
doi = {10.2172/1173288},
url = {https://www.osti.gov/biblio/1173288},
institution = {Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)},
number = {LBNL-5784E},
place = {United States},
year = {2011},
month = {11}
}