U.S. Department of Energy
Office of Scientific and Technical Information

Sparse Symmetric Format for Tucker Decomposition

Journal Article · IEEE Transactions on Parallel and Distributed Systems

Tensor-based methods are receiving renewed attention in recent years due to their prevalence in diverse real-world applications. There is considerable literature on tensor representations and algorithms for tensor decompositions, both for dense and sparse tensors. Many applications in hypergraph analytics, machine learning, psychometry, and signal processing produce tensors that are both sparse and symmetric, making them an important class for further study. Similar to the critical Tensor Times Matrix chain (TTMc) operation for general sparse tensors, the Sparse Symmetric Tensor Times Same Matrix chain (S3TTMc) operation is compute- and memory-intensive due to high tensor order and the associated factorial explosion in the number of non-zeros. We present the novel Compressed Sparse Symmetric (CSS) format for sparse symmetric tensors, along with an efficient parallel algorithm for the S3TTMc operation. We theoretically establish that S3TTMc on CSS achieves a better memory versus run-time trade-off than state-of-the-art implementations, and visualize the variation of the performance gap over the parameter space. Our experiments confirm these results, achieving up to 2.72× speedup on synthetic and real datasets. We also show how the algorithm scales on different test architectures to highlight the effect of machine characteristics on performance.
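To make the S3TTMc operation concrete, below is a minimal NumPy sketch of its semantics on a naive symmetric coordinate (COO) representation that stores each unique non-zero once under its sorted index tuple. It is illustrative only: the function name s3ttmc_coo, the canonical-index storage, and the on-the-fly permutation expansion are assumptions made for this sketch, not the paper's CSS format or its parallel algorithm.

# Minimal sketch (assumed names/storage), NOT the paper's CSS format or parallel algorithm.
import itertools
import numpy as np

def s3ttmc_coo(order, dims, idx, vals, U, mode):
    """Multiply a symmetric sparse tensor by the same matrix U on every
    mode except `mode` (a TTMc with identical factors). The tensor is
    stored once per canonical (sorted) index tuple in COO form."""
    n, r = U.shape
    assert dims == (n,) * order  # symmetric tensors have equal mode lengths
    # Result keeps mode `mode` at length n and contracts the other modes to rank r.
    Y = np.zeros((n,) + (r,) * (order - 1))
    for ind, v in zip(idx, vals):
        # Expand the canonical entry to all distinct permutations it represents.
        for p in set(itertools.permutations(ind)):
            rows = [U[p[m]] for m in range(order) if m != mode]
            # Outer product of the matrix rows selected by the contracted modes.
            contrib = rows[0]
            for rvec in rows[1:]:
                contrib = np.multiply.outer(contrib, rvec)
            Y[p[mode]] += v * contrib
    return Y

# Tiny usage example: a symmetric 3rd-order tensor with one unique non-zero.
if __name__ == "__main__":
    dims = (4, 4, 4)
    idx = [(0, 1, 2)]      # canonical (sorted) indices only
    vals = [3.0]           # value shared by all 6 index permutations
    U = np.random.rand(4, 2)
    Y = s3ttmc_coo(3, dims, idx, vals, U, mode=0)
    print(Y.shape)         # (4, 2, 2)

For a third-order symmetric tensor X and mode 0, the result satisfies Y[i] = Σ_{j,k} X[i,j,k] · U[j] ⊗ U[k], i.e., the TTMc input to a Tucker factor update when all factor matrices are identical; the permutation expansion above is exactly the factorial cost that a compressed symmetric format aims to avoid recomputing.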

Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization:
USDOE Office of Science (SC), Basic Energy Sciences (BES), Scientific User Facilities (SUF)
Grant/Contract Number:
AC05-00OR22725
OSTI ID:
1997626
Journal Information:
IEEE Transactions on Parallel and Distributed Systems, Vol. 34, Issue 6; ISSN 1045-9219
Publisher:
IEEE
Country of Publication:
United States
Language:
English


Similar Records

Efficient Parallel Sparse Symmetric Tucker Decomposition for High-Order Tensors
Conference · 2021 · OSTI ID: 1820807

SymProp: Scaling Sparse Symmetric Tucker Decomposition via Symmetry Propagation
Conference · June 2025 · OSTI ID: 3002148