OSTI.GOV | U.S. Department of Energy, Office of Scientific and Technical Information

Title: Automatic Differentiation for Adjoint Stencil Loops

Abstract

Stencil loops are a common motif in computations including convolutional neural networks, structured-mesh solvers for partial differential equations, and image processing. Stencil loops are easy to parallelise, and their fast execution is aided by compilers, libraries, and domain-specific languages. Reverse-mode automatic differentiation, also known as algorithmic differentiation, autodiff, adjoint differentiation, or back-propagation, is sometimes used to obtain gradients of programs that contain stencil loops. Unfortunately, conventional automatic differentiation results in a memory access pattern that is not stencil-like and not easily parallelisable. In this paper we present a novel combination of automatic differentiation and loop transformations that preserves the structure and memory access pattern of stencil loops, while computing fully consistent derivatives. The generated loops can be parallelised and optimised for performance in the same way and using the same tools as the original computation. We have implemented this new technique in the Python tool PerforAD, which we release with this paper along with test cases derived from seismic imaging and computational fluid dynamics applications.
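The contrast the abstract draws can be made concrete with a small sketch. The following is an assumed, illustrative example (not the PerforAD implementation): a 1-D three-point stencil, its conventional reverse-mode adjoint (a scatter whose increments to neighbouring cells would race under parallel execution), and an equivalent gather form that is again a stencil loop, in the spirit of the loop transformation the paper describes.

```python
A, B, C = 0.25, 0.5, 0.25  # example stencil coefficients (assumed values)

def stencil(u):
    """Forward stencil loop: v[i] = A*u[i-1] + B*u[i] + C*u[i+1] on interior points."""
    n = len(u)
    v = [0.0] * n
    for i in range(1, n - 1):
        v[i] = A * u[i - 1] + B * u[i] + C * u[i + 1]
    return v

def adjoint_scatter(vbar):
    """Conventional reverse-mode AD: each iteration scatters increments into
    ubar[i-1], ubar[i], ubar[i+1]; parallel iterations would conflict."""
    n = len(vbar)
    ubar = [0.0] * n
    for i in range(1, n - 1):
        ubar[i - 1] += A * vbar[i]
        ubar[i]     += B * vbar[i]
        ubar[i + 1] += C * vbar[i]
    return ubar

def adjoint_gather(vbar):
    """Loop-transformed adjoint: each ubar[i] gathers from vbar with mirrored
    coefficients and is written exactly once, so the loop is stencil-like
    and parallelisable in the same way as the original."""
    n = len(vbar)
    ubar = [0.0] * n
    for i in range(n):
        acc = 0.0
        if 1 <= i + 1 <= n - 2:   # from ubar[i'-1] += A*vbar[i'] with i' = i+1
            acc += A * vbar[i + 1]
        if 1 <= i <= n - 2:       # from ubar[i']   += B*vbar[i'] with i' = i
            acc += B * vbar[i]
        if 1 <= i - 1 <= n - 2:   # from ubar[i'+1] += C*vbar[i'] with i' = i-1
            acc += C * vbar[i - 1]
        ubar[i] = acc
    return ubar
```

Both adjoints compute the same values; the gather form additionally satisfies the adjoint identity <vbar, stencil(u)> = <adjoint(vbar), u>, which is one way to check that the transformed loop still computes consistent derivatives.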

Authors:
Huckelheim, Jan; Kukreja, Navjot; Narayanan, Sri Hari Krishna; Luporini, Fabio; Gorman, Gerard J.; Hovland, Paul
Publication Date:
2019
Research Org.:
Argonne National Lab. (ANL), Argonne, IL (United States)
Sponsoring Org.:
USDOE Office of Science (SC)
OSTI Identifier:
1574311
DOE Contract Number:  
AC02-06CH11357
Resource Type:
Conference
Resource Relation:
Conference: 48th International Conference on Parallel Processing (ICPP 2019), August 5-8, 2019, Kyoto, Japan
Country of Publication:
United States
Language:
English
Subject:
Automatic Differentiation; Back-Propagation; Discrete Adjoints; Loop-Transformation; Shared-Memory Parallel; Stencil Computation

Citation Formats

Huckelheim, Jan, Kukreja, Navjot, Narayanan, Sri Hari Krishna, Luporini, Fabio, Gorman, Gerard J., and Hovland, Paul. Automatic Differentiation for Adjoint Stencil Loops. United States: N. p., 2019. Web. doi:10.1145/3337821.3337906.
Huckelheim, Jan, Kukreja, Navjot, Narayanan, Sri Hari Krishna, Luporini, Fabio, Gorman, Gerard J., & Hovland, Paul. Automatic Differentiation for Adjoint Stencil Loops. United States. doi:10.1145/3337821.3337906.
Huckelheim, Jan, Kukreja, Navjot, Narayanan, Sri Hari Krishna, Luporini, Fabio, Gorman, Gerard J., and Hovland, Paul. 2019. "Automatic Differentiation for Adjoint Stencil Loops". United States. doi:10.1145/3337821.3337906.
@inproceedings{osti_1574311,
title = {Automatic Differentiation for Adjoint Stencil Loops},
author = {Huckelheim, Jan and Kukreja, Navjot and Narayanan, Sri Hari Krishna and Luporini, Fabio and Gorman, Gerard J. and Hovland, Paul},
abstractNote = {Stencil loops are a common motif in computations including convolutional neural networks, structured-mesh solvers for partial differential equations, and image processing. Stencil loops are easy to parallelise, and their fast execution is aided by compilers, libraries, and domain-specific languages. Reverse-mode automatic differentiation, also known as algorithmic differentiation, autodiff, adjoint differentiation, or back-propagation, is sometimes used to obtain gradients of programs that contain stencil loops. Unfortunately, conventional automatic differentiation results in a memory access pattern that is not stencil-like and not easily parallelisable. In this paper we present a novel combination of automatic differentiation and loop transformations that preserves the structure and memory access pattern of stencil loops, while computing fully consistent derivatives. The generated loops can be parallelised and optimised for performance in the same way and using the same tools as the original computation. We have implemented this new technique in the Python tool PerforAD, which we release with this paper along with test cases derived from seismic imaging and computational fluid dynamics applications.},
doi = {10.1145/3337821.3337906},
booktitle = {48th International Conference on Parallel Processing (ICPP)},
place = {United States},
year = {2019},
month = {8}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.

