OSTI.GOV
U.S. Department of Energy, Office of Scientific and Technical Information

Title: Parsl: Pervasive Parallel Programming in Python

Conference

Abstract: High-level programming languages such as Python are increasingly used to provide intuitive interfaces to libraries written in lower-level languages and for assembling applications from various components. This migration towards orchestration rather than implementation, coupled with the growing need for parallel computing (e.g., due to big data and the end of Moore's law), necessitates rethinking how parallelism is expressed in programs. Here, we present Parsl, a parallel scripting library that augments Python with simple, scalable, and flexible constructs for encoding parallelism. These constructs allow Parsl to construct a dynamic dependency graph of components that it can then execute efficiently on one or many processors. Parsl is designed for scalability, with an extensible set of executors tailored to different use cases, such as low-latency, high-throughput, or extreme-scale execution. We show, via experiments on the Blue Waters supercomputer, that Parsl executors can allow Python scripts to execute components with as little as 5 ms of overhead, scale to more than 250,000 workers across more than 8,000 nodes, and process upward of 1,200 tasks per second. Other Parsl features simplify the construction and execution of composite programs by supporting elastic provisioning and scaling of infrastructure, fault-tolerant execution, and integrated wide-area data management. We show that these capabilities satisfy the needs of many-task, interactive, online, and machine learning applications in fields such as biology, cosmology, and materials science.
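The programming model the abstract describes, in which decorated functions ("apps") return futures and the flow of futures between calls implicitly forms the dependency graph, can be illustrated with a minimal sketch. It uses Parsl's public python_app decorator and local-threads configuration; the example functions double and add are illustrative and not taken from the paper.

# Minimal sketch of the Parsl model described above (illustrative, not
# from the paper): apps return futures, and passing futures between
# apps builds the dynamic dependency graph that Parsl executes.
import parsl
from parsl import python_app
from parsl.configs.local_threads import config  # simple local executor

parsl.load(config)

@python_app
def double(x):
    return 2 * x

@python_app
def add(a, b):
    return a + b

# Each call returns immediately with an AppFuture; passing d1 and d2
# into add() expresses the dependency add -> (double, double), which
# Parsl can schedule in parallel.
d1 = double(3)
d2 = double(4)
total = add(d1, d2)

print(total.result())  # blocks until the graph has executed; prints 14

On a cluster, the same script runs unchanged; only the loaded configuration changes to select one of the executors (e.g., high-throughput or extreme-scale) that the paper evaluates.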

Research Organization: Argonne National Lab. (ANL), Argonne, IL (United States)
Sponsoring Organization: National Science Foundation (NSF)
DOE Contract Number: AC02-06CH11357
OSTI ID: 1558618
Resource Relation: Conference: 28th International ACM Symposium on High-Performance Parallel and Distributed Computing (HPDC '19), June 22-29, 2019, Phoenix, AZ, United States
Country of Publication: United States
Language: English


Similar Records

Parallel, Distributed Scripting with Python
Conference · May 24, 2002

Task Parallelism to Optimize Performance of Environmental Modeling Software
Technical Report · August 17, 2021

The ASC Sequoia Programming Model
Technical Report · August 6, 2008
