OSTI.GOV · U.S. Department of Energy · Office of Scientific and Technical Information

Title: RADICAL-Pilot and PMIx/PRRTE: Executing Heterogeneous Workloads at Large Scale on Partitioned HPC Resources

Journal Article · Lecture Notes in Computer Science

Execution of heterogeneous workflows on high-performance computing (HPC) platforms presents unprecedented resource management and execution coordination challenges for runtime systems. Task heterogeneity increases the complexity of resource and execution management, limiting the scalability and efficiency of workflow execution. Resource partitioning and distribution of task execution over partitioned resources promise to address these problems, but an experimental evaluation of their performance at scale has been lacking. This paper provides a performance evaluation of the Process Management Interface for Exascale (PMIx) and its reference implementation PRRTE on the leadership-class HPC platform Summit, when integrated into the pilot-based runtime system RADICAL-Pilot. We partition resources across multiple PRRTE Distributed Virtual Machine (DVM) environments, each responsible for launching tasks via the PMIx interface. We experimentally measure workload execution performance in terms of task scheduling/launching rate, the distribution of DVM task placement times, and DVM startup and termination overheads on Summit. The integrated solution with PMIx/PRRTE enables the use of an abstracted, standardized set of interfaces for orchestrating the launch process, along with dynamic process management and monitoring capabilities. It extends scaling capabilities, overcoming limitations of other launching mechanisms (e.g., JSM/LSF). Exploring different DVM setup configurations provides insights into DVM performance and guidance on how to leverage it. Our experimental results show that a heterogeneous workload of 65,500 tasks on 2048 nodes, partitioned across 32 DVMs, runs steadily with resource utilization no lower than 52%. With fewer concurrently executing tasks, resource utilization reaches up to 85%, as shown by a heterogeneous workload of 8200 tasks on 256 nodes with 2 DVMs.
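The partitioning scheme the abstract describes can be sketched in a few lines. The following is a minimal, hypothetical model (the names `partition_nodes` and `assign_tasks` are illustrative, not part of the RADICAL-Pilot or PRRTE APIs) showing how a node pool might be split into equal DVM partitions and tasks distributed among them round-robin, using the paper's largest experiment as the configuration.

```python
# Minimal sketch of resource partitioning across DVMs.
# Illustrative only; this is not the RADICAL-Pilot / PRRTE API.

def partition_nodes(n_nodes, n_dvms):
    """Split node indices 0..n_nodes-1 into n_dvms contiguous,
    near-equal partitions (one partition per DVM)."""
    base, extra = divmod(n_nodes, n_dvms)
    partitions, start = [], 0
    for i in range(n_dvms):
        size = base + (1 if i < extra else 0)
        partitions.append(list(range(start, start + size)))
        start += size
    return partitions

def assign_tasks(n_tasks, n_dvms):
    """Round-robin placement: task i goes to DVM i mod n_dvms."""
    return [i % n_dvms for i in range(n_tasks)]

# Configuration from the paper's largest run:
# 65,500 tasks on 2048 nodes across 32 DVMs.
parts = partition_nodes(2048, 32)
placement = assign_tasks(65500, 32)
print(len(parts), len(parts[0]))  # 32 partitions of 64 nodes each
```

Each DVM then launches only the tasks assigned to its partition via PMIx, which is what allows the runtime to sidestep the scaling limits of a single centralized launcher.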

Research Organization:
Brookhaven National Laboratory (BNL), Upton, NY (United States); Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR); USDOE Office of Science (SC), High Energy Physics (HEP)
Grant/Contract Number:
SC0012704; AC05-00OR22725
OSTI ID:
1963184
Report Number(s):
BNL-224123-2023-JAAM
Journal Information:
Lecture Notes in Computer Science, Vol. 13592; Conference: 25th Workshop on Job Scheduling Strategies for Parallel Processing (JSSPP 2022), held virtually, 3 Jun 2022; ISSN 0302-9743
Publisher:
Springer
Country of Publication:
United States
Language:
English
