DOE PAGES
U.S. Department of Energy
Office of Scientific and Technical Information

Title: A performance model for GPUs with caches

Abstract

To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution across the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques that improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies, we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching, as implemented in modern GPUs, have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime estimate for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, the proposed model estimates the runtime with, on average, less than 7 percent error on a second-generation GTX 280 with no on-chip caches and less than 5 percent error on the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On an AMD GPU architecture, the Radeon HD 6970, the model estimates the runtime with an error rate of 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.
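Since the sampling-based approach is described here only in prose, the following is a minimal Python sketch of the core idea as the abstract outlines it: time the kernel at a few small work-group counts, fit runtime ≈ a·n + b by least squares, and extrapolate to the full launch. The function name measure_kernel_runtime, the sampling fractions, and the synthetic coefficients are all illustrative assumptions, not the paper's implementation; the paper derives its earliest valid sampling points from the GPU's scheduling policy.

```python
# Hypothetical sketch of a sampling-based linear runtime model.
import numpy as np

def measure_kernel_runtime(num_workgroups: int) -> float:
    """Stand-in for a real OpenCL timing harness (clEnqueueNDRangeKernel
    with profiling events). Simulates a kernel whose runtime grows
    linearly with the number of work-groups (synthetic coefficients)."""
    return 0.012 * num_workgroups + 0.4  # milliseconds

def predict_full_runtime(total_workgroups: int,
                         sample_fractions=(0.01, 0.02, 0.05)) -> float:
    # Sample the kernel at a few small fractions of the full launch.
    ns = np.array([max(1, int(f * total_workgroups))
                   for f in sample_fractions], dtype=float)
    ts = np.array([measure_kernel_runtime(int(n)) for n in ns])

    # Least-squares fit of t = a*n + b over the sampled points.
    a, b = np.polyfit(ns, ts, deg=1)

    # Extrapolate the linear model to the full work-group count.
    return a * total_workgroups + b

if __name__ == "__main__":
    total = 100_000
    print(f"predicted runtime: {predict_full_runtime(total):.1f} ms")
    print(f"simulated 'true':  {measure_kernel_runtime(total):.1f} ms")
```

The ML-based model in the paper goes further: rather than extrapolating from timing samples alone, it also feeds compiler-generated kernel statistics and hardware performance counters into a learned regressor, which is what allows it to account for coalescing and cache effects that a purely linear fit misses.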

Authors:
 Dao, Thanh Tuan [1]; Kim, Jungwon [2]; Seo, Sangmin [3]; Egger, Bernhard [1]; Lee, Jaejin [1]
  1. Seoul National Univ. (Korea, Republic of)
  2. Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
  3. Argonne National Lab. (ANL), Argonne, IL (United States)
Publication Date:
2014-06-24
Research Org.:
Argonne National Lab. (ANL), Argonne, IL (United States)
Sponsoring Org.:
USDOE Office of Science (SC)
OSTI Identifier:
1333005
Grant/Contract Number:  
AC02-06CH11357
Resource Type:
Accepted Manuscript
Journal Name:
IEEE Transactions on Parallel and Distributed Systems
Additional Journal Information:
Journal Volume: 26; Journal Issue: 7; Journal ID: ISSN 1045-9219
Publisher:
IEEE
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; AMD; GPU; NVIDIA; OpenCL; caches; performance modeling; scheduling; graphics processing units; Kernel; computational modeling; computer architecture; hardware; data models; estimation

Citation Formats

Dao, Thanh Tuan, Kim, Jungwon, Seo, Sangmin, Egger, Bernhard, and Lee, Jaejin. A performance model for GPUs with caches. United States: N. p., 2014. Web. doi:10.1109/TPDS.2014.2333526.
Dao, Thanh Tuan, Kim, Jungwon, Seo, Sangmin, Egger, Bernhard, & Lee, Jaejin. A performance model for GPUs with caches. United States. https://doi.org/10.1109/TPDS.2014.2333526
Dao, Thanh Tuan, Kim, Jungwon, Seo, Sangmin, Egger, Bernhard, and Lee, Jaejin. Tue Jun 24, 2014. "A performance model for GPUs with caches". United States. https://doi.org/10.1109/TPDS.2014.2333526. https://www.osti.gov/servlets/purl/1333005.
@article{osti_1333005,
title = {A performance model for GPUs with caches},
author = {Dao, Thanh Tuan and Kim, Jungwon and Seo, Sangmin and Egger, Bernhard and Lee, Jaejin},
abstractNote = {To exploit the abundant computational power of the world's fastest supercomputers, an even workload distribution across the typically heterogeneous compute devices is necessary. While relatively accurate performance models exist for conventional CPUs, accurate performance estimation models for modern GPUs do not exist. This paper presents two accurate models for modern GPUs: a sampling-based linear model, and a model based on machine-learning (ML) techniques that improves the accuracy of the linear model and is applicable to modern GPUs with and without caches. We first construct the sampling-based linear model to predict the runtime of an arbitrary OpenCL kernel. Based on an analysis of NVIDIA GPUs' scheduling policies, we determine the earliest sampling points that allow an accurate estimation. The linear model cannot capture well the significant effects that memory coalescing or caching, as implemented in modern GPUs, have on performance. We therefore propose a model based on ML techniques that takes several compiler-generated statistics about the kernel as well as the GPU's hardware performance counters as additional inputs to obtain a more accurate runtime estimate for modern GPUs. We demonstrate the effectiveness and broad applicability of the model by applying it to three different NVIDIA GPU architectures and one AMD GPU architecture. On an extensive set of OpenCL benchmarks, the proposed model estimates the runtime with, on average, less than 7 percent error on a second-generation GTX 280 with no on-chip caches and less than 5 percent error on the Fermi-based GTX 580 with hardware caches. On the Kepler-based GTX 680, the linear model has an error of less than 10 percent. On an AMD GPU architecture, the Radeon HD 6970, the model estimates the runtime with an error rate of 8 percent. As a result, the proposed technique outperforms existing models by a factor of 5 to 6 in terms of accuracy.},
doi = {10.1109/TPDS.2014.2333526},
journal = {IEEE Transactions on Parallel and Distributed Systems},
number = 7,
volume = 26,
place = {United States},
year = {2014},
month = {jun}
}

Journal Article:
Free Publicly Available Full Text (Publisher's Version of Record)

Citation Metrics:
Cited by: 25 works (citation information provided by Web of Science)
