DOE PAGES

Title: OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms

In this paper, we investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nyström extension to calculate eigenvalues and eigenvectors of the graph Laplacian; this step is a self-contained module that can be used in conjunction with other graph-Laplacian-based methods such as spectral clustering. We use performance tools to identify the hotspots and memory-access patterns of the serial codes, and we use OpenMP as the parallelization language to parallelize the most time-consuming parts, drawing on library routines where possible. We then optimize the OpenMP implementations, detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and test the optimization steps on emerging testbed systems based on Intel’s Knights Corner and Knights Landing processors. We show both performance improvement and strong scaling behavior. Finally, a substantial sequence of optimization techniques and analyses is necessary before the algorithms reach nearly ideal scaling.
Authors:
 Meng, Zhaoyi [1]; Koniges, Alice [2]; He, Yun Helen [2]; Williams, Samuel [2]; Kurth, Thorsten [2]; Cook, Brandon [2]; Deslippe, Jack [2]; Bertozzi, Andrea L. [3]
  1. Univ. of California, Los Angeles, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
  2. Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
  3. Univ. of California, Los Angeles, CA (United States)
Publication Date:
September 2016
Grant/Contract Number:
AC02-05CH11231; DMS-1417674; DMS-1045536; FA9550-10-1-0569
Type:
Accepted Manuscript
Journal Name:
Lecture Notes in Computer Science
Additional Journal Information:
Journal Volume: 9903; Journal ID: ISSN 0302-9743
Publisher:
Springer
Research Org:
Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
Sponsoring Org:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21); National Science Foundation (NSF); US Air Force Office of Scientific Research (AFOSR)
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; semi-supervised; unsupervised; data; algorithms; OpenMP; optimization
OSTI Identifier:
1378982

Meng, Zhaoyi, Koniges, Alice, He, Yun Helen, Williams, Samuel, Kurth, Thorsten, Cook, Brandon, Deslippe, Jack, and Bertozzi, Andrea L. OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms. United States: N. p., Web. doi:10.1007/978-3-319-45550-1_2.
Meng, Zhaoyi, Koniges, Alice, He, Yun Helen, Williams, Samuel, Kurth, Thorsten, Cook, Brandon, Deslippe, Jack, & Bertozzi, Andrea L. OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms. United States. doi:10.1007/978-3-319-45550-1_2.
Meng, Zhaoyi, Koniges, Alice, He, Yun Helen, Williams, Samuel, Kurth, Thorsten, Cook, Brandon, Deslippe, Jack, and Bertozzi, Andrea L. 2016. "OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms". United States. doi:10.1007/978-3-319-45550-1_2. https://www.osti.gov/servlets/purl/1378982.
@article{osti_1378982,
title = {OpenMP Parallelization and Optimization of Graph-Based Machine Learning Algorithms},
author = {Meng, Zhaoyi and Koniges, Alice and He, Yun Helen and Williams, Samuel and Kurth, Thorsten and Cook, Brandon and Deslippe, Jack and Bertozzi, Andrea L.},
abstractNote = {In this paper, we investigate the OpenMP parallelization and optimization of two novel data classification algorithms. The new algorithms are based on graph and PDE solution techniques and provide significant accuracy and performance advantages over traditional data classification algorithms in serial mode. The methods leverage the Nyström extension to calculate eigenvalues and eigenvectors of the graph Laplacian; this step is a self-contained module that can be used in conjunction with other graph-Laplacian-based methods such as spectral clustering. We use performance tools to identify the hotspots and memory-access patterns of the serial codes, and we use OpenMP as the parallelization language to parallelize the most time-consuming parts, drawing on library routines where possible. We then optimize the OpenMP implementations, detail the performance on traditional supercomputer nodes (in our case a Cray XC30), and test the optimization steps on emerging testbed systems based on Intel’s Knights Corner and Knights Landing processors. We show both performance improvement and strong scaling behavior. Finally, a substantial sequence of optimization techniques and analyses is necessary before the algorithms reach nearly ideal scaling.},
doi = {10.1007/978-3-319-45550-1_2},
journal = {Lecture Notes in Computer Science},
volume = {9903},
place = {United States},
year = {2016},
month = {9}
}