Performance Portability Evaluation of OpenCL Benchmarks across Intel and NVIDIA Platforms
Abstract
We evaluate the capabilities of vendor-provided OpenCL implementations for performance portability across multiple computing platforms, using the Rodinia benchmark suite. We apply the metric defined by Pennycook et al., taking roofline efficiency from the Roofline performance model as the "performance efficiency" in the metric's definition. We find that the delivered performance portability is similar for several benchmarks even when their roofline-based performance efficiencies across platforms differ widely. To help distinguish between such cases, we extend the metric with the standard deviation of the performance efficiencies for each benchmark. We argue that the standard deviation gives additional insight into performance portability assessment, since it captures the performance variability across platforms. Additionally, we discuss the challenges of measuring performance portability associated with algorithms and system software. On the algorithm side, the benchmarks must be carefully constructed to use the concurrency available on a platform appropriately. On the system-software side, we depend on vendor performance tools supporting the desired programming model and runtime in order to measure the metrics of interest.
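The quantities the abstract combines can be sketched in a few lines. This is an illustrative sketch, not the paper's code: the function names are invented, efficiencies are assumed to be fractions in [0, 1], and the use of the population standard deviation is an assumption (the paper does not specify the variant here). The Pennycook et al. metric itself is the harmonic mean of per-platform performance efficiencies, defined as 0 if any platform in the set is unsupported.

```python
import statistics

def roofline_efficiency(measured_gflops, arithmetic_intensity,
                        peak_gflops, peak_bandwidth_gbs):
    """Efficiency relative to the Roofline model's attainable performance,
    min(peak compute, arithmetic intensity * peak memory bandwidth)."""
    attainable = min(peak_gflops, arithmetic_intensity * peak_bandwidth_gbs)
    return measured_gflops / attainable

def performance_portability(efficiencies):
    """Pennycook et al. metric: harmonic mean of per-platform performance
    efficiencies over a platform set; 0 if any platform is unsupported
    (i.e., has efficiency 0)."""
    if not efficiencies or any(e == 0 for e in efficiencies):
        return 0.0
    return len(efficiencies) / sum(1.0 / e for e in efficiencies)

def efficiency_spread(efficiencies):
    """The extension discussed in the abstract: the standard deviation of
    the per-platform efficiencies, exposing cross-platform variability
    that the harmonic mean alone can hide."""
    return statistics.pstdev(efficiencies)
```

For example, a benchmark with efficiencies [0.5, 0.5] and one with [0.9, 0.3] have similar portability scores (0.5 vs. 0.45) but very different spreads (0.0 vs. 0.3), which is the distinction the added standard deviation is meant to capture.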
- Authors:
- Bertoni, Colleen; Kwack, Jaehyuk; Applencourt, Thomas; Ghadar, Yasaman; Homerding, Brian; Knight, Christopher; Videau, Brice; Zheng, Huihuo; Morozov, Vitali; Parker, Scott
- Publication Date:
- 2020
- Research Org.:
- Argonne National Lab. (ANL), Argonne, IL (United States)
- Sponsoring Org.:
- USDOE Office of Science - Office of Basic Energy Sciences - Scientific User Facilities Division; USDOE Exascale Computing Project
- OSTI Identifier:
- 1804079
- DOE Contract Number:
- AC02-06CH11357
- Resource Type:
- Conference
- Resource Relation:
- Conference: 34th IEEE International Parallel and Distributed Processing Symposium, 05/18/20 - 05/22/20, New Orleans, LA, US
- Country of Publication:
- United States
- Language:
- English
- Subject:
- GPU; OpenCL; high performance computing; performance efficiency; performance portability; roofline performance analysis
Citation Formats
Bertoni, Colleen, Kwack, Jaehyuk, Applencourt, Thomas, Ghadar, Yasaman, Homerding, Brian, Knight, Christopher, Videau, Brice, Zheng, Huihuo, Morozov, Vitali, and Parker, Scott. Performance Portability Evaluation of OpenCL Benchmarks across Intel and NVIDIA Platforms. United States: N. p., 2020.
Web. doi:10.1109/IPDPSW50202.2020.00067.
Bertoni, Colleen, Kwack, Jaehyuk, Applencourt, Thomas, Ghadar, Yasaman, Homerding, Brian, Knight, Christopher, Videau, Brice, Zheng, Huihuo, Morozov, Vitali, & Parker, Scott. Performance Portability Evaluation of OpenCL Benchmarks across Intel and NVIDIA Platforms. United States. https://doi.org/10.1109/IPDPSW50202.2020.00067
Bertoni, Colleen, Kwack, Jaehyuk, Applencourt, Thomas, Ghadar, Yasaman, Homerding, Brian, Knight, Christopher, Videau, Brice, Zheng, Huihuo, Morozov, Vitali, and Parker, Scott. 2020.
"Performance Portability Evaluation of OpenCL Benchmarks across Intel and NVIDIA Platforms". United States. https://doi.org/10.1109/IPDPSW50202.2020.00067.
@inproceedings{osti_1804079,
title = {Performance Portability Evaluation of OpenCL Benchmarks across Intel and NVIDIA Platforms},
author = {Bertoni, Colleen and Kwack, Jaehyuk and Applencourt, Thomas and Ghadar, Yasaman and Homerding, Brian and Knight, Christopher and Videau, Brice and Zheng, Huihuo and Morozov, Vitali and Parker, Scott},
abstractNote = {We evaluate the capabilities of vendor-provided OpenCL implementations for performance portability across multiple computing platforms, using the Rodinia benchmark suite. We apply the metric defined by Pennycook et al., taking roofline efficiency from the Roofline performance model as the "performance efficiency" in the metric's definition. We find that the delivered performance portability is similar for several benchmarks even when their roofline-based performance efficiencies across platforms differ widely. To help distinguish between such cases, we extend the metric with the standard deviation of the performance efficiencies for each benchmark. We argue that the standard deviation gives additional insight into performance portability assessment, since it captures the performance variability across platforms. Additionally, we discuss the challenges of measuring performance portability associated with algorithms and system software. On the algorithm side, the benchmarks must be carefully constructed to use the concurrency available on a platform appropriately. On the system-software side, we depend on vendor performance tools supporting the desired programming model and runtime in order to measure the metrics of interest.},
doi = {10.1109/IPDPSW50202.2020.00067},
url = {https://www.osti.gov/biblio/1804079},
booktitle = {34th IEEE International Parallel and Distributed Processing Symposium Workshops (IPDPSW), New Orleans, LA, US},
place = {United States},
year = {2020}
}