We present ExaTN (Exascale Tensor Networks), a scalable, GPU-accelerated C++ library that can express and process tensor networks on shared- as well as distributed-memory high-performance computing platforms, including those equipped with GPU accelerators. Specifically, ExaTN provides the ability to build, transform, and numerically evaluate tensor networks with arbitrary graph structures and complexity. It also provides algorithmic primitives for the optimization of tensor factors inside a given tensor network in order to find an extremum of a chosen tensor network functional, which is one of the key numerical procedures in quantum many-body theory and quantum-inspired machine learning. The numerical primitives exposed by ExaTN provide the foundation for composing complex tensor network algorithms. We enumerate multiple application domains which can benefit from the capabilities of our library, including condensed matter physics, quantum chemistry, quantum circuit simulation, and quantum and classical machine learning; for some of these we provide preliminary demonstrations and performance benchmarks to emphasize the broad utility of our library.
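The abstract describes the core ExaTN workflow of declaring tensors, composing a tensor network, and evaluating it numerically. The C++ fragment below is a minimal sketch of that workflow, modeled on the usage style of the library's public repository; the call names and signatures shown (exatn::createTensor, exatn::initTensor, exatn::evaluateTensorNetwork, exatn::getLocalTensor, exatn::sync) are assumptions based on that repository and may differ from the released API.

// Minimal sketch (not an official example): declare two matrices, contract them
// into a scalar via a symbolic tensor-network expression, and read back the result.
// Call names and signatures are assumed, see the note above.
#include "exatn.hpp"
#include <iostream>

int main(int argc, char **argv) {
  exatn::initialize();  // start the ExaTN runtime (MPI/GPU backends, if built in)
  {
    // Declare a rank-0 result tensor and two 2x2 double-precision input tensors,
    // then fill them with constants:
    exatn::createTensor("R", exatn::TensorElementType::REAL64);
    exatn::createTensor("A", exatn::TensorElementType::REAL64, exatn::TensorShape{2, 2});
    exatn::createTensor("B", exatn::TensorElementType::REAL64, exatn::TensorShape{2, 2});
    exatn::initTensor("R", 0.0);
    exatn::initTensor("A", 0.5);
    exatn::initTensor("B", 0.25);

    // Evaluate a tensor network given as a symbolic contraction:
    // R() = A(i,j) * B(j,i) is the full trace of the matrix product A*B.
    exatn::evaluateTensorNetwork("MyNetwork", "R() = A(i,j) * B(j,i)");
    exatn::sync();  // execution is asynchronous; wait for completion

    // Fetch a local copy of the result tensor for inspection:
    auto result = exatn::getLocalTensor("R");
    if (result) std::cout << "Tensor network MyNetwork evaluated into R()" << std::endl;

    // Release the tensors:
    exatn::destroyTensor("B");
    exatn::destroyTensor("A");
    exatn::destroyTensor("R");
  }
  exatn::finalize();  // shut down the runtime
  return 0;
}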
Lyakh, Dmitry I., et al. "ExaTN: Scalable GPU-Accelerated High-Performance Processing of General Tensor Networks at Exascale." Frontiers in Applied Mathematics and Statistics, vol. 8, Jul. 2022. https://doi.org/10.3389/fams.2022.838601
Lyakh, Dmitry I., Nguyen, Thien, Claudino, Daniel, Dumitrescu, Eugene, & McCaskey, Alexander J. (2022). ExaTN: Scalable GPU-Accelerated High-Performance Processing of General Tensor Networks at Exascale. Frontiers in Applied Mathematics and Statistics, 8. https://doi.org/10.3389/fams.2022.838601
Lyakh, Dmitry I., Nguyen, Thien, Claudino, Daniel, et al., "ExaTN: Scalable GPU-Accelerated High-Performance Processing of General Tensor Networks at Exascale," Frontiers in Applied Mathematics and Statistics 8 (2022), https://doi.org/10.3389/fams.2022.838601
@article{osti_2325498,
author = {Lyakh, Dmitry I. and Nguyen, Thien and Claudino, Daniel and Dumitrescu, Eugene and McCaskey, Alexander J.},
title = {ExaTN: Scalable GPU-Accelerated High-Performance Processing of General Tensor Networks at Exascale},
annote = {We present ExaTN (Exascale Tensor Networks), a scalable, GPU-accelerated C++ library that can express and process tensor networks on shared- as well as distributed-memory high-performance computing platforms, including those equipped with GPU accelerators. Specifically, ExaTN provides the ability to build, transform, and numerically evaluate tensor networks with arbitrary graph structures and complexity. It also provides algorithmic primitives for the optimization of tensor factors inside a given tensor network in order to find an extremum of a chosen tensor network functional, which is one of the key numerical procedures in quantum many-body theory and quantum-inspired machine learning. The numerical primitives exposed by ExaTN provide the foundation for composing complex tensor network algorithms. We enumerate multiple application domains which can benefit from the capabilities of our library, including condensed matter physics, quantum chemistry, quantum circuit simulation, and quantum and classical machine learning; for some of these we provide preliminary demonstrations and performance benchmarks to emphasize the broad utility of our library.},
doi = {10.3389/fams.2022.838601},
url = {https://www.osti.gov/biblio/2325498},
journal = {Frontiers in Applied Mathematics and Statistics},
issn = {2297-4687},
volume = {8},
place = {Switzerland},
publisher = {Frontiers Media SA},
year = {2022},
month = {07}}
Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization:
USDOE; USDOE Laboratory Directed Research and Development (LDRD) Program; USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR); USDOE Office of Science (SC), Basic Energy Sciences (BES)
Grant/Contract Number:
AC05-00OR22725
OSTI ID:
2325498
Alternate ID(s):
OSTI ID: 1876320
Journal Information:
Frontiers in Applied Mathematics and Statistics, Vol. 8; ISSN 2297-4687