DOE PAGES: U.S. Department of Energy
Office of Scientific and Technical Information

Title: Distributed Halide

Abstract

Many image processing tasks are naturally expressed as a pipeline of small computational kernels known as stencils. Halide is a popular domain-specific language and compiler designed to implement image processing algorithms. Halide uses simple language constructs to express what to compute and a separate scheduling co-language for expressing when and where to perform the computation. This approach has demonstrated performance comparable to or better than hand-optimized code. Until now, however, Halide has been restricted to parallel shared-memory execution, limiting its performance for memory-bandwidth-bound pipelines or large-scale image processing tasks. We present an extension to Halide to support distributed-memory parallel execution of complex stencil pipelines. These extensions compose with the existing scheduling constructs in Halide, allowing expression of complex computation and communication strategies. Existing Halide applications can be distributed with minimal changes, allowing programmers to explore the tradeoff between recomputation and communication with little effort. Approximately 10 new lines of code are needed even for a 200-line, 99-stage application. On nine image processing benchmarks, our extensions give up to a 1.4× speedup on a single node over regular multithreaded execution with the same number of cores, by mitigating the effects of non-uniform memory access. The distributed benchmarks achieve up to 18× speedup on a 16-node testing machine and up to 57× speedup on 64 nodes of the NERSC Cori supercomputer.
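
The distribution strategy the abstract describes ultimately rests on the classic ghost-cell (halo) exchange for stencils, which Distributed Halide's scheduling extensions generate automatically. As an illustrative sketch only, not the authors' implementation and without Halide or MPI, the following Python simulates a 3-point blur split across two "ranks", each fetching one boundary cell from its neighbor; the distributed result matches the single-node computation exactly:

```python
# Conceptual sketch (not the paper's implementation): a 1D 3-point box blur
# distributed across simulated ranks via ghost-cell (halo) exchange, the
# communication pattern that Distributed Halide's schedules automate.

def blur3(buf):
    # 3-point average over the interior of buf (excludes the two padding cells).
    return [(buf[i - 1] + buf[i] + buf[i + 1]) / 3 for i in range(1, len(buf) - 1)]

def serial_blur(data):
    # Single-node reference: clamp at the image borders, then blur.
    padded = [data[0]] + data + [data[-1]]
    return blur3(padded)

def distributed_blur(data, ranks=2):
    n = len(data)
    chunk = n // ranks
    out = []
    for r in range(ranks):
        lo = r * chunk
        hi = (r + 1) * chunk if r < ranks - 1 else n
        # Each rank owns data[lo:hi] and fetches one ghost cell per side
        # (clamped at the image border), mimicking an MPI halo exchange.
        left = data[lo - 1] if lo > 0 else data[lo]
        right = data[hi] if hi < n else data[hi - 1]
        local = [left] + data[lo:hi] + [right]
        out.extend(blur3(local))
    return out

data = [float(i * i % 7) for i in range(16)]
assert distributed_blur(data) == serial_blur(data)
```

With real MPI ranks, the two ghost reads would become pairwise sends and receives between neighbors; the recomputation-versus-communication tradeoff mentioned in the abstract arises because a schedule can instead recompute boundary values locally and skip the exchange.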

Authors:
Denniston, Tyler [1]; Kamil, Shoaib [2]; Amarasinghe, Saman [1]
  1. Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
  2. Adobe, Cambridge, MA (United States)
Publication Date:
2016
Research Org.:
Massachusetts Inst. of Technology (MIT), Cambridge, MA (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR)
OSTI Identifier:
1557579
Grant/Contract Number:  
SC0005288
Resource Type:
Accepted Manuscript
Journal Name:
ACM SIGPLAN Notices
Additional Journal Information:
Journal Volume: 51; Journal Issue: 8; Journal ID: ISSN 0362-1340
Publisher:
ACM
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; Distributed memory; Image processing; Stencils

Citation Formats

Denniston, Tyler, Kamil, Shoaib, and Amarasinghe, Saman. Distributed Halide. United States: N. p., 2016. Web. doi:10.1145/2851141.2851157.
Denniston, Tyler, Kamil, Shoaib, & Amarasinghe, Saman. Distributed Halide. United States. doi:10.1145/2851141.2851157.
Denniston, Tyler, Kamil, Shoaib, and Amarasinghe, Saman. 2016. "Distributed Halide". United States. doi:10.1145/2851141.2851157. https://www.osti.gov/servlets/purl/1557579.
@article{osti_1557579,
title = {Distributed Halide},
author = {Denniston, Tyler and Kamil, Shoaib and Amarasinghe, Saman},
abstractNote = {Many image processing tasks are naturally expressed as a pipeline of small computational kernels known as stencils. Halide is a popular domain-specific language and compiler designed to implement image processing algorithms. Halide uses simple language constructs to express what to compute and a separate scheduling co-language for expressing when and where to perform the computation. This approach has demonstrated performance comparable to or better than hand-optimized code. Until now, however, Halide has been restricted to parallel shared-memory execution, limiting its performance for memory-bandwidth-bound pipelines or large-scale image processing tasks. We present an extension to Halide to support distributed-memory parallel execution of complex stencil pipelines. These extensions compose with the existing scheduling constructs in Halide, allowing expression of complex computation and communication strategies. Existing Halide applications can be distributed with minimal changes, allowing programmers to explore the tradeoff between recomputation and communication with little effort. Approximately 10 new lines of code are needed even for a 200-line, 99-stage application. On nine image processing benchmarks, our extensions give up to a 1.4× speedup on a single node over regular multithreaded execution with the same number of cores, by mitigating the effects of non-uniform memory access. The distributed benchmarks achieve up to 18× speedup on a 16-node testing machine and up to 57× speedup on 64 nodes of the NERSC Cori supercomputer.},
doi = {10.1145/2851141.2851157},
journal = {ACM SIGPLAN Notices},
number = 8,
volume = 51,
place = {United States},
year = {2016},
month = {1}
}

Journal Article:
Free Publicly Available Full Text (Publisher's Version of Record)

Citation Metrics:
Cited by: 2 works
Citation information provided by
Web of Science

Works referenced in this record:

Real-time edge-aware image processing with the bilateral grid
conference, January 2007

  • Chen, Jiawen; Paris, Sylvain; Durand, Frédo
  • ACM SIGGRAPH 2007 papers on - SIGGRAPH '07
  • DOI: 10.1145/1275808.1276506

Statistical scalability analysis of communication operations in distributed applications
conference, January 2001

  • Vetter, Jeffrey S.; McCracken, Michael O.
  • Proceedings of the eighth ACM SIGPLAN symposium on Principles and practices of parallel programming - PPoPP '01
  • DOI: 10.1145/379539.379590

Automatic data mapping for distributed-memory parallel computers
conference, January 1992

  • Wholey, Skef
  • Proceedings of the 6th international conference on Supercomputing - ICS '92
  • DOI: 10.1145/143369.143377

An auto-tuning framework for parallel multicore stencil computations
conference, April 2010

  • Kamil, Shoaib; Chan, Cy; Oliker, Leonid
  • 2010 IEEE International Symposium on Parallel & Distributed Processing (IPDPS)
  • DOI: 10.1109/IPDPS.2010.5470421

The pochoir stencil compiler
conference, January 2011

  • Tang, Yuan; Chowdhury, Rezaul Alam; Kuszmaul, Bradley C.
  • Proceedings of the 23rd ACM symposium on Parallelism in algorithms and architectures - SPAA '11
  • DOI: 10.1145/1989493.1989508

A stencil compiler for short-vector SIMD architectures
conference, January 2013

  • Henretty, Tom; Veras, Richard; Franchetti, Franz
  • Proceedings of the 27th international ACM conference on International conference on supercomputing - ICS '13
  • DOI: 10.1145/2464996.2467268

Forma: a DSL for image processing applications to target GPUs and multi-core CPUs
conference, January 2015

  • Ravishankar, Mahesh; Holewinski, Justin; Grover, Vinod
  • Proceedings of the 8th Workshop on General Purpose Processing using GPUs - GPGPU 2015
  • DOI: 10.1145/2716282.2716290

LogGP: incorporating long messages into the LogP model — one step closer towards a realistic model for parallel computation
conference, January 1995

  • Alexandrov, Albert; Ionescu, Mihai F.; Schauser, Klaus E.
  • Proceedings of the seventh annual ACM symposium on Parallel algorithms and architectures - SPAA '95
  • DOI: 10.1145/215399.215427

Ghost Cell Pattern
conference, January 2010

  • Kjolstad, Fredrik Berg; Snir, Marc
  • Proceedings of the 2010 Workshop on Parallel Programming Patterns - ParaPLoP '10
  • DOI: 10.1145/1953611.1953615

Halide: a language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines
conference, January 2013

  • Ragan-Kelley, Jonathan; Barnes, Connelly; Adams, Andrew
  • Proceedings of the 34th ACM SIGPLAN conference on Programming language design and implementation - PLDI '13
  • DOI: 10.1145/2491956.2462176

PATUS: A Code Generation and Autotuning Framework for Parallel Iterative Stencil Computations on Modern Microarchitectures
conference, May 2011

  • Christen, Matthias; Schenk, Olaf; Burkhart, Helmar
  • 2011 IEEE International Parallel & Distributed Processing Symposium (IPDPS)
  • DOI: 10.1109/IPDPS.2011.70

Scheduling Independent Multiprocessor Tasks
journal, February 2002


OpenTuner: an extensible framework for program autotuning
conference, January 2014

  • Ansel, Jason; Kamil, Shoaib; Veeramachaneni, Kalyan
  • Proceedings of the 23rd international conference on Parallel architectures and compilation - PACT '14
  • DOI: 10.1145/2628071.2628092

Distributed processing of very large datasets with DataCutter
journal, October 2001


X10: an object-oriented approach to non-uniform cluster computing
conference, January 2005

  • Charles, Philippe; Grothoff, Christian; Saraswat, Vijay
  • Proceedings of the 20th annual ACM SIGPLAN conference on Object oriented programming systems languages and applications - OOPSLA '05
  • DOI: 10.1145/1094811.1094852

Optimal scheduling algorithm for distributed-memory machines
journal, January 1998

  • Darbha, S.; Agrawal, D. P.
  • IEEE Transactions on Parallel and Distributed Systems, Vol. 9, Issue 1
  • DOI: 10.1109/71.655248

Distributed Image Processing On A Network Of Workstations
journal, January 2003


Physis: an implicitly parallel programming model for stencil computations on large-scale GPU-accelerated supercomputers
conference, January 2011

  • Maruyama, Naoya; Nomura, Tatsuo; Sato, Kento
  • Proceedings of 2011 International Conference for High Performance Computing, Networking, Storage and Analysis on - SC '11
  • DOI: 10.1145/2063384.2063398

PolyMage: Automatic Optimization for Image Processing Pipelines
conference, January 2015

  • Mullapudi, Ravi Teja; Vasista, Vinay; Bondhugula, Uday
  • Proceedings of the Twentieth International Conference on Architectural Support for Programming Languages and Operating Systems - ASPLOS '15
  • DOI: 10.1145/2694344.2694364