DOE PAGES: U.S. Department of Energy
Office of Scientific and Technical Information


Title: Avoiding Communication in Primal and Dual Block Coordinate Descent Methods

Abstract

Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular in analyzing large machine learning datasets. However, existing implementations communicate at every iteration, which, on modern data center and supercomputing architectures, often dominates the cost of floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by re-organizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations can lead to primal and dual block coordinate descent methods that only communicate every $s$ iterations---where $s$ is a tuning parameter---instead of every iteration for the regularized least-squares problem. We show that the communication-avoiding variants reduce the number of synchronizations by a factor of $s$ on distributed-memory parallel machines without altering the convergence rate and attain strong scaling speedups of up to $6.1\times$ over the “standard algorithm” on a Cray XC30 supercomputer.
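To make the abstract's central idea concrete, the following is a minimal, single-process sketch of an s-step block coordinate descent for ridge regression (regularized least squares), written in Python with NumPy. It is an illustration of the communication-avoiding structure described above, not the authors' exact algorithm: the function name ca_bcd_ridge, the randomized block selection, and all parameter names are assumptions made for this sketch. The key point it demonstrates is that one Gram-matrix computation per batch of s block updates (the single step a distributed implementation would perform with one allreduce) replaces the per-iteration communication of the standard algorithm.

import numpy as np


def ca_bcd_ridge(A, b, lam, block_size, s, num_batches, seed=0):
    """s-step block coordinate descent sketch for ridge regression:
        min_x 0.5 * ||A x - b||^2 + 0.5 * lam * ||x||^2
    One Gram computation per batch of s block updates stands in for the
    single communication round a distributed implementation would need.
    """
    rng = np.random.default_rng(seed)
    m, n = A.shape
    x = np.zeros(n)
    r = b.astype(float).copy()          # residual b - A x (x starts at 0)
    blocks = [np.arange(i, min(i + block_size, n))
              for i in range(0, n, block_size)]
    for _ in range(num_batches):
        # Pick the next s (distinct) coordinate blocks up front.
        chosen = rng.choice(len(blocks), size=s, replace=False)
        cols = np.concatenate([blocks[j] for j in chosen])
        Y = A[:, cols]
        # The one "communication" round per s updates: in a row-partitioned
        # distributed run, G and g would each be formed with a single
        # allreduce, instead of one allreduce per iteration.
        G = Y.T @ Y                      # Gram matrix for all s blocks
        g = Y.T @ r                      # Y^T r at the start of the batch
        # s local updates via a Gram-matrix recurrence; no further
        # communication is needed inside this loop.
        deltas = np.zeros(cols.size)
        offset = 0
        for j in chosen:
            k = blocks[j].size
            sl = slice(offset, offset + k)
            # A_J^T r for the current residual, reconstructed from g and
            # the updates already made within this batch.
            rhs = g[sl] - G[sl, :] @ deltas - lam * x[cols[sl]]
            H = G[sl, sl] + lam * np.eye(k)
            d = np.linalg.solve(H, rhs)  # exact block minimizer
            deltas[sl] = d
            x[cols[sl]] += d
            offset += k
        r -= Y @ deltas                  # refresh residual once per batch
    return x


# Tiny smoke test against the closed-form ridge solution.
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 50))
    b = rng.standard_normal(200)
    x = ca_bcd_ridge(A, b, lam=0.1, block_size=5, s=4, num_batches=200)
    x_star = np.linalg.solve(A.T @ A + 0.1 * np.eye(50), A.T @ b)
    print("relative error:", np.linalg.norm(x - x_star) / np.linalg.norm(x_star))

Because the batched updates are algebraically identical to s consecutive standard block updates, the convergence behavior is unchanged; only where the matrix products (and hence the communication) happen is reorganized, which is the mechanism behind the factor-of-$s$ reduction in synchronizations claimed in the abstract.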

Authors:
Devarakonda, Aditya [1];  Fountoulakis, Kimon [1];  Demmel, James [1];  Mahoney, Michael W. [1]
  1. Univ. of California, Berkeley, CA (United States)
Publication Date:
January 17, 2019
Research Org.:
Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC)
Sponsoring Org.:
USDOE
OSTI Identifier:
1544140
Resource Type:
Accepted Manuscript
Journal Name:
SIAM Journal on Scientific Computing
Additional Journal Information:
Journal Volume: 41; Journal Issue: 1; Journal ID: ISSN 1064-8275
Publisher:
SIAM
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING

Citation Formats

Devarakonda, Aditya, Fountoulakis, Kimon, Demmel, James, and Mahoney, Michael W. Avoiding Communication in Primal and Dual Block Coordinate Descent Methods. United States: N. p., 2019. Web. doi:10.1137/17M1134433.
Devarakonda, Aditya, Fountoulakis, Kimon, Demmel, James, & Mahoney, Michael W. Avoiding Communication in Primal and Dual Block Coordinate Descent Methods. United States. doi:10.1137/17M1134433.
Devarakonda, Aditya, Fountoulakis, Kimon, Demmel, James, and Mahoney, Michael W. 2019. "Avoiding Communication in Primal and Dual Block Coordinate Descent Methods". United States. doi:10.1137/17M1134433.
@article{osti_1544140,
title = {Avoiding Communication in Primal and Dual Block Coordinate Descent Methods},
author = {Devarakonda, Aditya and Fountoulakis, Kimon and Demmel, James and Mahoney, Michael W.},
abstractNote = {Primal and dual block coordinate descent methods are iterative methods for solving regularized and unregularized optimization problems. Distributed-memory parallel implementations of these methods have become popular in analyzing large machine learning datasets. However, existing implementations communicate at every iteration, which, on modern data center and supercomputing architectures, often dominates the cost of floating-point computation. Recent results on communication-avoiding Krylov subspace methods suggest that large speedups are possible by re-organizing iterative algorithms to avoid communication. We show how applying similar algorithmic transformations can lead to primal and dual block coordinate descent methods that only communicate every $s$ iterations---where $s$ is a tuning parameter---instead of every iteration for the regularized least-squares problem. We show that the communication-avoiding variants reduce the number of synchronizations by a factor of $s$ on distributed-memory parallel machines without altering the convergence rate and attain strong scaling speedups of up to $6.1\times$ over the ``standard algorithm'' on a Cray XC30 supercomputer.},
doi = {10.1137/17M1134433},
journal = {SIAM Journal on Scientific Computing},
number = 1,
volume = 41,
place = {United States},
year = {2019},
month = {1}
}

Journal Article:
Free Publicly Available Full Text
This content will become publicly available on January 17, 2020
Publisher's Version of Record
