OSTI.GOV — U.S. Department of Energy, Office of Scientific and Technical Information

Title: Multi-Level Load Balancing with an Integrated Runtime Approach

Abstract

The recent trend of increasing numbers of cores per chip has resulted in vast amounts of on-node parallelism. These high core counts result in hardware variability that introduces imbalance. Applications are also becoming more complex, resulting in dynamic load imbalance. Load imbalance of any kind can result in loss of performance and system utilization. We address the challenge of handling both transient and persistent load imbalances while maintaining locality with low overhead. In this paper, we propose an integrated runtime system that combines the Charm++ distributed programming model with concurrent tasks to mitigate load imbalances within and across shared memory address spaces. It utilizes a periodic assignment of work to cores based on load measurement, in combination with user-created tasks to handle load imbalance. We integrate OpenMP with Charm++ to enable creation of potential tasks via OpenMP's parallel loop construct. This is also available to MPI applications through the Adaptive MPI implementation. We demonstrate the benefits of our work on three applications. We show improvements of Lassen by 29.6% on Cori and 46.5% on Theta. We also demonstrate the benefits on a Charm++ application, ChaNGa, by 25.7% on Theta, as well as an MPI proxy application, Kripke, using Adaptive MPI.
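
Illustrative example (not from the paper): the sketch below shows the style of MPI + OpenMP loop-level code the proposed runtime targets. The kernel, data, and sizes are hypothetical. The idea, per the abstract, is that when such a program is built against Adaptive MPI with the integrated Charm++/OpenMP runtime, the iterations of the parallel loop become potential tasks that idle cores on the same node can pick up, while the runtime periodically remaps work across cores based on measured load.

// Minimal MPI + OpenMP sketch (hypothetical kernel and data); under Adaptive MPI
// with the integrated runtime described above, the parallel-for iterations act
// as potential tasks that other cores in the node can execute.
#include <mpi.h>
#include <omp.h>
#include <cstddef>
#include <cmath>
#include <vector>

// Hypothetical per-rank work routine; per-element cost may vary at runtime,
// which is the kind of transient imbalance the task layer absorbs.
static void compute_chunk(std::vector<double>& data) {
  #pragma omp parallel for schedule(dynamic)
  for (std::size_t i = 0; i < data.size(); ++i) {
    data[i] = std::sqrt(data[i]) * std::sin(data[i]);  // stand-in kernel
  }
}

int main(int argc, char** argv) {
  MPI_Init(&argc, &argv);            // compiled and run against AMPI in the paper's setting
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  std::vector<double> data(1 << 20, rank + 1.0);
  for (int step = 0; step < 10; ++step) {
    compute_chunk(data);             // OpenMP loop -> potential tasks within the node
    MPI_Barrier(MPI_COMM_WORLD);     // between steps, the runtime's periodic,
  }                                  // measurement-based balancing reassigns work

  MPI_Finalize();
  return 0;
}

The same source compiles as ordinary MPI + OpenMP; the benefit claimed in the paper comes from running it unchanged on the integrated runtime, which handles persistent imbalance via periodic remapping and transient imbalance via the loop-generated tasks.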

Authors:
Bak, Seonmyeong; Menon, Harshitha; White, Sam; Diener, Matthias; Kale, Laxmikant
Publication Date:
May 2018
Research Org.:
Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)
Sponsoring Org.:
USDOE Office of Science (SC)
OSTI Identifier:
1544249
Resource Type:
Conference
Resource Relation:
Conference: 2018 18th IEEE/ACM International Symposium on Cluster, Cloud and Grid Computing (CCGRID), Washington, DC, 1-4 May 2018
Country of Publication:
United States
Language:
English

Citation Formats

Bak, Seonmyeong, Menon, Harshitha, White, Sam, Diener, Matthias, and Kale, Laxmikant. Multi-Level Load Balancing with an Integrated Runtime Approach. United States: N. p., 2018. Web. doi:10.1109/CCGRID.2018.00018.
Bak, Seonmyeong, Menon, Harshitha, White, Sam, Diener, Matthias, & Kale, Laxmikant. Multi-Level Load Balancing with an Integrated Runtime Approach. United States. doi:10.1109/CCGRID.2018.00018.
Bak, Seonmyeong, Menon, Harshitha, White, Sam, Diener, Matthias, and Kale, Laxmikant. 2018. "Multi-Level Load Balancing with an Integrated Runtime Approach". United States. doi:10.1109/CCGRID.2018.00018.
@article{osti_1544249,
title = {Multi-Level Load Balancing with an Integrated Runtime Approach},
author = {Bak, Seonmyeong and Menon, Harshitha and White, Sam and Diener, Matthias and Kale, Laxmikant},
abstractNote = {The recent trend of increasing numbers of cores per chip has resulted in vast amounts of on-node parallelism. These high core counts result in hardware variability that introduces imbalance. Applications are also becoming more complex, resulting in dynamic load imbalance. Load imbalance of any kind can result in loss of performance and system utilization. We address the challenge of handling both transient and persistent load imbalances while maintaining locality with low overhead. In this paper, we propose an integrated runtime system that combines the Charm++ distributed programming model with concurrent tasks to mitigate load imbalances within and across shared memory address spaces. It utilizes a periodic assignment of work to cores based on load measurement, in combination with user-created tasks to handle load imbalance. We integrate OpenMP with Charm++ to enable creation of potential tasks via OpenMP's parallel loop construct. This is also available to MPI applications through the Adaptive MPI implementation. We demonstrate the benefits of our work on three applications. We show improvements of Lassen by 29.6% on Cori and 46.5% on Theta. We also demonstrate the benefits on a Charm++ application, ChaNGa, by 25.7% on Theta, as well as an MPI proxy application, Kripke, using Adaptive MPI.},
doi = {10.1109/CCGRID.2018.00018},
place = {United States},
year = {2018},
month = {5}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
