Work stealing for GPU-accelerated parallel programs in a global address space framework
Journal Article · Concurrency and Computation: Practice and Experience
- Department of Computer Science and Engineering, The Ohio State University, Columbus, OH, USA
- Mathematics and Computer Science Division, Argonne National Laboratory, Lemont, IL, USA
- Computer Science and Mathematics Division, Pacific Northwest National Laboratory, Richland, WA, USA
Task parallelism is an attractive approach to automatically load balancing computation in a parallel system and adapting to the dynamism such systems exhibit. Exploiting task parallelism through work stealing has been extensively studied in shared- and distributed-memory contexts. In this paper, we study the design of a system that uses work stealing for dynamic load balancing of task-parallel programs executed on hybrid distributed-memory CPU-graphics processing unit (GPU) systems in a global address space framework. We take into account the unique nature of the accelerator model employed by GPUs, the significant performance difference between GPU and CPU execution as a function of problem size, and the distinct CPU and GPU memory domains. We consider various alternatives in designing a distributed work stealing algorithm for CPU-GPU systems, taking into account the impact of task distribution and data movement overheads. These strategies are evaluated using microbenchmarks that capture various execution configurations, as well as the state-of-the-art CCSD(T) application module from the computational chemistry domain.
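The abstract describes distributed work stealing across CPU and GPU workers. The following is a minimal, illustrative sketch of the basic work-stealing pattern such a design builds on (local LIFO pops for locality, FIFO steals from a random victim when a worker runs dry); it is not the paper's actual runtime, and the worker count, task payload, and coarse per-deque locks are simplifying assumptions made for brevity.

```cpp
// Minimal work-stealing sketch (illustrative only).
// Each worker owns a deque of tasks, pops from its own bottom (LIFO),
// and steals from a random victim's top (FIFO) when its deque is empty.
#include <atomic>
#include <deque>
#include <mutex>
#include <random>
#include <thread>
#include <vector>
#include <cstdio>

struct Task { int work; };          // placeholder payload

struct Worker {
    std::deque<Task> dq;
    std::mutex m;                   // coarse lock; real runtimes use lock-free split deques
};

int main() {
    const int nworkers = 4;
    const int ntasks   = 1000;
    std::vector<Worker> workers(nworkers);
    std::atomic<int> done{0};

    // Seed all tasks on worker 0 so the other workers must steal.
    for (int i = 0; i < ntasks; ++i) workers[0].dq.push_back(Task{i});

    auto run = [&](int id) {
        std::mt19937 rng(id + 1);
        while (done.load() < ntasks) {
            Task t{-1};
            bool got = false;
            {   // try a local pop from the bottom of our own deque
                std::lock_guard<std::mutex> g(workers[id].m);
                if (!workers[id].dq.empty()) {
                    t = workers[id].dq.back();
                    workers[id].dq.pop_back();
                    got = true;
                }
            }
            if (!got) {             // otherwise steal from a random victim's top
                int v = static_cast<int>(rng() % nworkers);
                if (v == id) continue;
                std::lock_guard<std::mutex> g(workers[v].m);
                if (!workers[v].dq.empty()) {
                    t = workers[v].dq.front();
                    workers[v].dq.pop_front();
                    got = true;
                }
            }
            if (got) done.fetch_add(1);   // "execute" the task
        }
    };

    std::vector<std::thread> pool;
    for (int i = 0; i < nworkers; ++i) pool.emplace_back(run, i);
    for (auto& th : pool) th.join();
    std::printf("completed %d tasks\n", done.load());
    return 0;
}
```

As the abstract notes, a full CPU-GPU design additionally has to weigh task granularity against GPU efficiency and account for data movement between the distinct CPU and GPU memory domains; those policies are beyond this simple sketch.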
- Research Organization: Argonne National Laboratory (ANL)
- Sponsoring Organization: USDOE Office of Science
- DOE Contract Number: AC02-06CH11357
- OSTI ID: 1393474
- Journal Information: Concurrency and Computation: Practice and Experience, Vol. 28, Issue 13; ISSN 1532-0626
- Publisher: Wiley
- Country of Publication: United States
- Language: English
Similar Records
- Work stealing for GPU-accelerated parallel programs in a global address space framework · Journal Article · 2016 · Concurrency and Computation: Practice and Experience · OSTI ID: 1333989
- Scalable Work Stealing · Conference · 2009 · OSTI ID: 986715
- Data-driven Fault Tolerance for Work Stealing Computations · Conference · 2012 · OSTI ID: 1239507