MPI Stages: Checkpointing MPI State for Bulk Synchronous Applications
Journal Article · EuroMPI'18: Proceedings of the 25th European MPI Users' Group Meeting, Barcelona, Spain, September 23-26, 2018
- Auburn University, Auburn, AL
- University of Tennessee at Chattanooga, Chattanooga, TN
- Lawrence Livermore National Laboratory, Livermore, CA
When an MPI program experiences a failure, the most common recovery approach is to restart all processes from a previous checkpoint and re-queue the entire job. A disadvantage of this method is that, even though the failure occurred within the main application loop, live processes must start again from the beginning of the program alongside the new replacement processes, which incurs unnecessary overhead for the live processes. To avoid such overheads and the concomitant delays, we introduce the concept of "MPI Stages." MPI Stages saves internal MPI state in a separate checkpoint in conjunction with application state. Upon failure, both MPI and application state are recovered from their last synchronous checkpoints, and execution continues without restarting the overall MPI job. Live processes roll back only a few iterations within the main loop instead of to the beginning of the program, while the replacement for a failed process restarts and reintegrates, thereby achieving faster failure recovery. This approach integrates well with large-scale, bulk synchronous applications and checkpoint/restart. In this paper, we identify the requirements for production MPI implementations to support state checkpointing with MPI Stages; these include capturing and managing internal MPI state, and serializing and deserializing user handles to MPI objects. We evaluate our fault tolerance approach with a proof-of-concept prototype MPI implementation that includes MPI Stages, and we demonstrate its functionality and performance using LULESH and microbenchmarks. Our results show that MPI Stages reduces recovery time by 13× for LULESH in comparison to checkpoint/restart.
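The recovery pattern the abstract describes can be illustrated with a small simulation. This is a hypothetical sketch, not the paper's actual API or implementation: it stands in for the idea of checkpointing library state (a table of user handles to MPI objects) together with application state, so that on failure both are restored from the last synchronous checkpoint and the loop resumes a few iterations back rather than from iteration 0. All names (`checkpoint`, `restore`, `run`, the handle-table contents) are illustrative assumptions.

```python
import pickle

def checkpoint(handle_table, app_state):
    # Serialize library state (handle table) and application state together,
    # mirroring MPI Stages' combined checkpoint of MPI and application state.
    return pickle.dumps((handle_table, app_state))

def restore(blob):
    # Deserialize both states from the last synchronous checkpoint.
    return pickle.loads(blob)

def run(iterations, ckpt_every, fail_at=None):
    # Stand-ins: a handle table for MPI objects and a small application state.
    handle_table = {"MPI_COMM_WORLD": 0}
    app_state = {"iter": 0, "sum": 0}
    last_ckpt = checkpoint(handle_table, app_state)
    recovered = False
    i = 0
    while i < iterations:
        if fail_at is not None and i == fail_at and not recovered:
            # Simulated process failure: roll BOTH states back to the last
            # checkpoint and continue, instead of restarting the whole run.
            handle_table, app_state = restore(last_ckpt)
            i = app_state["iter"]
            recovered = True
            continue
        app_state["sum"] += i          # one iteration of "work"
        i += 1
        app_state["iter"] = i
        if i % ckpt_every == 0:        # periodic synchronous checkpoint
            last_ckpt = checkpoint(handle_table, app_state)
    return app_state["sum"]
```

Because the application state is rolled back along with the library state, a failed-and-recovered run produces the same result as a failure-free run; only the iterations since the last checkpoint are repeated.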
- Research Organization:
- Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)
- Sponsoring Organization:
- USDOE Office of Science (SC)
- OSTI ID:
- 1544207
- Journal Information:
- EuroMPI'18: Proceedings of the 25th European MPI Users' Group Meeting, Barcelona, Spain, September 23-26, 2018
- Country of Publication:
- United States
- Language:
- English
Similar Records

Failure recovery for bulk synchronous applications with MPI stages
Journal Article · February 26, 2019 · Parallel Computing · OSTI ID: 1784608

A Job Pause Service under LAM/MPI+BLCR for Transparent Fault Tolerance
Conference · December 31, 2006 · OSTI ID: 931501

Exploring Automatic, Online Failure Recovery for Scientific Applications at Extreme Scales
Conference · November 1, 2014 · SC '14: Proceedings of the International Conference for High Performance Computing, Networking, Storage and Analysis · OSTI ID: 1567373