## Efficient tree decomposition of high-rank tensors

## Abstract

Tensors are a natural way to express correlations among many physical variables, but storing tensors in a computer naively requires memory that scales exponentially in the rank of the tensor. This is not optimal, as the required memory is actually set not by the rank but by the mutual information amongst the variables in question. Representations such as the tensor tree perform near-optimally when the tree decomposition is chosen to reflect the correlation structure in question, but making such a choice is non-trivial and good heuristics remain highly context-specific. In this work I present two new algorithms for choosing efficient tree decompositions, independent of the physical context of the tensor. The first is a brute-force algorithm which in most cases produces optimal decompositions but is generally impractical for high-rank tensors, as the number of possible choices grows exponentially in rank. The second is a greedy algorithm; while it is not optimal, it performs extremely well in numerical experiments, and its runtime makes it practical even for tensors of very high rank.
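As a concrete illustration of the memory gap the abstract describes, the sketch below stores a rank-8 tensor naively and then decomposes it into a chain of small cores by sequential SVDs. This is a generic tensor-train-style sketch for illustration only, not the paper's brute-force or greedy tree-selection algorithms; for a tensor with low mutual information between its legs (here an outer product of vectors), the decomposed form needs far fewer entries than the dense array.

```python
import numpy as np

def tt_decompose(T, tol=1e-10):
    """Decompose tensor T into a chain of 3-index cores via sequential SVDs.

    Illustrative only: a generic sequential-SVD ("tensor train") sketch,
    not the tree-selection algorithms of the paper. Each step unfolds the
    remaining tensor into a matrix, takes an SVD, and truncates singular
    values below tol relative to the largest.
    """
    dims = T.shape
    cores = []
    r_prev = 1
    M = T
    for d in dims[:-1]:
        M = M.reshape(r_prev * d, -1)
        U, s, Vt = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))   # truncated bond dimension
        cores.append(U[:, :r].reshape(r_prev, d, r))
        M = s[:r, None] * Vt[:r]                  # absorb weights rightward
        r_prev = r
    cores.append(M.reshape(r_prev, dims[-1], 1))
    return cores

# A rank-8 tensor that is an outer product of vectors: the mutual
# information between its legs is zero, so the decomposed form is tiny.
rng = np.random.default_rng(0)
vecs = [rng.random(2) for _ in range(8)]
T = vecs[0]
for v in vecs[1:]:
    T = np.multiply.outer(T, v)

cores = tt_decompose(T)
naive = T.size                           # dense storage: 2**8 = 256 entries
compressed = sum(c.size for c in cores)  # chain of cores: 16 entries
print(naive, compressed)                 # prints: 256 16
```

Contracting the cores back together (e.g. with `np.tensordot` along the shared bond indices) reproduces `T` to floating-point precision, so the 16-fold saving here costs no accuracy; for tensors with genuine correlations the bond dimensions, and hence the memory, grow with the mutual information rather than the rank.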

- Authors:

- Jermyn, Adam S. (Univ. of Cambridge (United Kingdom))

- Publication Date:
- October 2018

- Research Org.:
- Lawrence Berkeley National Laboratory (LBNL), Berkeley, CA (United States). National Energy Research Scientific Computing Center (NERSC); Univ. of California, Oakland, CA (United States)

- Sponsoring Org.:
- USDOE

- OSTI Identifier:
- 1543563

- Grant/Contract Number:
- AC02-05CH11231

- Resource Type:
- Accepted Manuscript

- Journal Name:
- Journal of Computational Physics

- Additional Journal Information:
- Journal Volume: 377; Journal Issue: C; Journal ID: ISSN 0021-9991

- Publisher:
- Elsevier

- Country of Publication:
- United States

- Language:
- English

- Subject:
- 97 MATHEMATICS AND COMPUTING; Computer Science; Physics; Tensor networks; Singular value decomposition

### Citation Formats

```
Jermyn, Adam S. Efficient tree decomposition of high-rank tensors. United States: N. p., 2018.
Web. doi:10.1016/j.jcp.2018.10.026.
```

```
Jermyn, Adam S. Efficient tree decomposition of high-rank tensors. United States. doi:10.1016/j.jcp.2018.10.026.
```

```
Jermyn, Adam S. 2018.
"Efficient tree decomposition of high-rank tensors". United States. doi:10.1016/j.jcp.2018.10.026. https://www.osti.gov/servlets/purl/1543563.
```

```
@article{osti_1543563,
  title = {Efficient tree decomposition of high-rank tensors},
  author = {Jermyn, Adam S.},
  abstractNote = {Tensors are a natural way to express correlations among many physical variables, but storing tensors in a computer naively requires memory that scales exponentially in the rank of the tensor. This is not optimal, as the required memory is actually set not by the rank but by the mutual information amongst the variables in question. Representations such as the tensor tree perform near-optimally when the tree decomposition is chosen to reflect the correlation structure in question, but making such a choice is non-trivial and good heuristics remain highly context-specific. In this work I present two new algorithms for choosing efficient tree decompositions, independent of the physical context of the tensor. The first is a brute-force algorithm which in most cases produces optimal decompositions but is generally impractical for high-rank tensors, as the number of possible choices grows exponentially in rank. The second is a greedy algorithm; while it is not optimal, it performs extremely well in numerical experiments, and its runtime makes it practical even for tensors of very high rank.},
  doi = {10.1016/j.jcp.2018.10.026},
  journal = {Journal of Computational Physics},
  number = {C},
  volume = {377},
  place = {United States},
  year = {2018},
  month = {10}
}
```

*Citation information provided by*

Web of Science