DOE PAGES | U.S. Department of Energy
Office of Scientific and Technical Information

Title: Avoiding Communication in Primal and Dual Block Coordinate Descent Methods

Abstract

Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular in analyzing large machine learning datasets. However, existing implementations communicate at every iteration, which, on modern data center and supercomputing architectures, often dominates the cost of floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by reorganizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations can lead to primal and dual block coordinate descent methods that only communicate every $s$ iterations---where $s$ is a tuning parameter---instead of every iteration for the regularized least-squares problem. We show that the communication-avoiding variants reduce the number of synchronizations by a factor of $s$ on distributed-memory parallel machines without altering the convergence rate and attain strong scaling speedups of up to $6.1\times$ over the "standard algorithm" on a Cray XC30 supercomputer.
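The core recurrence the paper transforms can be illustrated with a serial sketch of randomized block coordinate descent for ridge regression. In a distributed row-partitioned setting, the residual update at the end of each iteration is the step that forces a synchronization; the paper's $s$-step variants unroll $s$ such updates and communicate once. The function name and dimensions below are illustrative, not taken from the authors' implementation.

```python
import numpy as np

def block_cd_ridge(A, b, lam, block_size=2, iters=100, seed=0):
    """Randomized block coordinate descent for
    min_x 0.5*||Ax - b||^2 + 0.5*lam*||x||^2.

    Serial sketch only: in a distributed-memory implementation the
    residual update (r += A_I @ dx) requires an all-reduce every
    iteration, which is the communication the s-step variants defer.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = A @ x - b  # residual, maintained incrementally
    for _ in range(iters):
        # pick a random block of coordinates
        I = rng.choice(n, size=block_size, replace=False)
        AI = A[:, I]
        g = AI.T @ r + lam * x[I]                  # block gradient
        H = AI.T @ AI + lam * np.eye(block_size)   # block Hessian
        dx = -np.linalg.solve(H, g)                # exact block minimization
        x[I] += dx
        r += AI @ dx  # residual update: the per-iteration synchronization point
    return x
```

Because each step exactly minimizes over its block, the iterates converge to the closed-form ridge solution; the communication-avoiding reformulation in the paper produces the same iterates while synchronizing only every $s$ steps.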

Authors:
Devarakonda, Aditya [1]; Fountoulakis, Kimon [1]; Demmel, James [1]; Mahoney, Michael W. [1]
  1. Univ. of California, Berkeley, CA (United States)
Publication Date:
January 17, 2019
Research Org.:
Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)
Sponsoring Org.:
USDOE
OSTI Identifier:
1544140
Resource Type:
Accepted Manuscript
Journal Name:
SIAM Journal on Scientific Computing
Additional Journal Information:
Journal Volume: 41; Journal Issue: 1; Journal ID: ISSN 1064-8275
Publisher:
SIAM
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING

Citation Formats

Devarakonda, Aditya, Fountoulakis, Kimon, Demmel, James, and Mahoney, Michael W. Avoiding Communication in Primal and Dual Block Coordinate Descent Methods. United States: N. p., 2019. Web. doi:10.1137/17M1134433.
Devarakonda, Aditya, Fountoulakis, Kimon, Demmel, James, & Mahoney, Michael W. Avoiding Communication in Primal and Dual Block Coordinate Descent Methods. United States. https://doi.org/10.1137/17M1134433
Devarakonda, Aditya, Fountoulakis, Kimon, Demmel, James, and Mahoney, Michael W. 2019. "Avoiding Communication in Primal and Dual Block Coordinate Descent Methods". United States. https://doi.org/10.1137/17M1134433. https://www.osti.gov/servlets/purl/1544140.
@article{osti_1544140,
title = {Avoiding Communication in Primal and Dual Block Coordinate Descent Methods},
author = {Devarakonda, Aditya and Fountoulakis, Kimon and Demmel, James and Mahoney, Michael W.},
abstractNote = {Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular in analyzing large machine learning datasets. However, existing implementations communicate at every iteration, which, on modern data center and supercomputing architectures, often dominates the cost of floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by reorganizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations can lead to primal and dual block coordinate descent methods that only communicate every $s$ iterations---where $s$ is a tuning parameter---instead of every iteration for the regularized least-squares problem. We show that the communication-avoiding variants reduce the number of synchronizations by a factor of $s$ on distributed-memory parallel machines without altering the convergence rate and attain strong scaling speedups of up to $6.1\times$ over the "standard algorithm" on a Cray XC30 supercomputer.},
doi = {10.1137/17M1134433},
journal = {SIAM Journal on Scientific Computing},
number = {1},
volume = {41},
place = {United States},
year = {2019},
month = {1}
}

Journal Article:
Free Publicly Available Full Text
Publisher's Version of Record

Citation Metrics:
Cited by: 3 works (citation information provided by Web of Science)


Works referenced in this record:

Communication lower bounds and optimal algorithms for numerical linear algebra
journal, May 2014


A Residual Replacement Strategy for Improving the Maximum Attainable Accuracy of $s$-Step Krylov Subspace Methods
journal, January 2014

  • Carson, Erin; Demmel, James
  • SIAM Journal on Matrix Analysis and Applications, Vol. 35, Issue 1
  • DOI: 10.1137/120893057

Accuracy of the $s$-Step Lanczos Method for the Symmetric Eigenproblem in Finite Precision
journal, January 2015

  • Carson, Erin; Demmel, James W.
  • SIAM Journal on Matrix Analysis and Applications, Vol. 36, Issue 2
  • DOI: 10.1137/140990735

Avoiding Communication in Nonsymmetric Lanczos-Based Krylov Subspace Methods
journal, January 2013

  • Carson, Erin; Knight, Nicholas; Demmel, James
  • SIAM Journal on Scientific Computing, Vol. 35, Issue 5
  • DOI: 10.1137/120881191

LIBSVM: A library for support vector machines
journal, April 2011

  • Chang, Chih-Chung; Lin, Chih-Jen
  • ACM Transactions on Intelligent Systems and Technology, Vol. 2, Issue 3
  • DOI: 10.1145/1961189.1961199

s-step iterative methods for symmetric linear systems
journal, February 1989


A survey of direct methods for sparse linear systems
journal, May 2016

  • Davis, Timothy A.; Rajamanickam, Sivasankaran; Sid-Lakhdar, Wissam M.
  • Acta Numerica, Vol. 25
  • DOI: 10.1017/S0962492916000076

Communication-optimal Parallel and Sequential QR and LU Factorizations
journal, January 2012

  • Demmel, James; Grigori, Laura; Hoemmen, Mark
  • SIAM Journal on Scientific Computing, Vol. 34, Issue 1
  • DOI: 10.1137/080731992

Expected Length of the Longest Probe Sequence in Hash Code Searching
journal, April 1981


An efficient nonsymmetric Lanczos method on parallel vector computers
journal, October 1992


Efficiency of Coordinate Descent Methods on Huge-Scale Optimization Problems
journal, January 2012

  • Nesterov, Yu.
  • SIAM Journal on Optimization, Vol. 22, Issue 2
  • DOI: 10.1137/100802001

Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
journal, December 2012


Implementation of the GMRES Method Using Householder Transformations
journal, January 1988

  • Walker, Homer F.
  • SIAM Journal on Scientific and Statistical Computing, Vol. 9, Issue 1
  • DOI: 10.1137/0909010

Coordinate descent algorithms
journal, March 2015


Works referencing / citing this record:

Numerical algorithms for high-performance computational science
journal, January 2020

  • Dongarra, Jack; Grigori, Laura; Higham, Nicholas J.
  • Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences, Vol. 378, Issue 2166
  • DOI: 10.1098/rsta.2019.0066