# Improving the Numerical Stability of Fast Matrix Multiplication

## Abstract

Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than those of the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve their accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. Finally, we benchmark performance and demonstrate our improved numerical accuracy.
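As a concrete illustration of the fast algorithms the abstract contrasts with the classical method, the sketch below implements one such scheme, Strassen's algorithm, in Python with NumPy. This is a minimal sketch, not code from the paper: the function name `strassen`, the `leaf` cutoff, and the power-of-two size restriction are assumptions of this example.

```python
import numpy as np

def strassen(A, B, leaf=64):
    """Multiply square matrices A and B (n a power of two) using
    Strassen's algorithm: 7 recursive block products instead of the
    classical 8, at the cost of extra additions and subtractions."""
    n = A.shape[0]
    if n <= leaf:
        return A @ B  # fall back to the classical algorithm on small blocks
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    # Strassen's seven recursive products
    M1 = strassen(A11 + A22, B11 + B22, leaf)
    M2 = strassen(A21 + A22, B11, leaf)
    M3 = strassen(A11, B12 - B22, leaf)
    M4 = strassen(A22, B21 - B11, leaf)
    M5 = strassen(A11 + A12, B22, leaf)
    M6 = strassen(A21 - A11, B11 + B12, leaf)
    M7 = strassen(A12 - A22, B21 + B22, leaf)
    # Recombine into the four blocks of C
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C
```

The extra additions and subtractions in each recursion step are the source of the slightly weaker error bounds the abstract describes; falling back to the classical algorithm below a cutoff, as above, is the standard way such algorithms are deployed in practice.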

- Authors:

- Sandia National Lab. (SNL-CA), Livermore, CA (United States); Wake Forest Univ., Winston-Salem, NC (United States)
- Stanford Univ., CA (United States). Inst. for Computational and Mathematical Engineering
- Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
- Google, San Bruno, CA (United States)
- Hebrew Univ. of Jerusalem (Israel). School of Computer Science and Engineering

- Publication Date:
- 2016-10

- Research Org.:
- Hebrew Univ. of Jerusalem (Israel); Sandia National Lab. (SNL-CA), Livermore, CA (United States); Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)

- Sponsoring Org.:
- USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21); USDOE National Nuclear Security Administration (NNSA); Israel Science Foundation (ISF); Ministry of Science and Technology (Israel); Einstein Foundation; Minerva Foundation (United States); Intel Collaborative Research Inst. for Computational Intelligence (ICRI-CI) (Israel); United States-Israel Binational Science Foundation (BSF); HUJI Cyber Security Research Center (Israel); Israel National Cyber Bureau

- OSTI Identifier:
- 1356986

- Alternate Identifier(s):
- OSTI ID: 1458476

- Report Number(s):
- SAND2015-5246J

- Journal ID: ISSN 0895-4798; 594464

- Grant/Contract Number:
- AC04-94AL85000; AC02-05CH11231; 1878/14; 1901/14; 3-10891

- Resource Type:
- Accepted Manuscript

- Journal Name:
- SIAM Journal on Matrix Analysis and Applications

- Additional Journal Information:
- Journal Volume: 37; Journal Issue: 4; Journal ID: ISSN 0895-4798

- Publisher:
- SIAM

- Country of Publication:
- United States

- Language:
- English

- Subject:
- 97 MATHEMATICS AND COMPUTING; practical fast matrix multiplication; error bounds; diagonal scaling

### Citation Formats

```
Ballard, Grey, Benson, Austin R., Druinsky, Alex, Lipshitz, Benjamin, and Schwartz, Oded. Improving the Numerical Stability of Fast Matrix Multiplication. United States: N. p., 2016.
Web. doi:10.1137/15M1032168.
```

```
Ballard, Grey, Benson, Austin R., Druinsky, Alex, Lipshitz, Benjamin, & Schwartz, Oded. Improving the Numerical Stability of Fast Matrix Multiplication. United States. doi:10.1137/15M1032168.
```

```
Ballard, Grey, Benson, Austin R., Druinsky, Alex, Lipshitz, Benjamin, and Schwartz, Oded. 2016.
"Improving the Numerical Stability of Fast Matrix Multiplication". United States. doi:10.1137/15M1032168. https://www.osti.gov/servlets/purl/1356986.
```

```
@article{osti_1356986,
  title = {Improving the Numerical Stability of Fast Matrix Multiplication},
  author = {Ballard, Grey and Benson, Austin R. and Druinsky, Alex and Lipshitz, Benjamin and Schwartz, Oded},
  abstractNote = {Fast algorithms for matrix multiplication, namely those that perform asymptotically fewer scalar operations than the classical algorithm, have been considered primarily of theoretical interest. Apart from Strassen's original algorithm, few fast algorithms have been efficiently implemented or used in practical applications. However, there exist many practical alternatives to Strassen's algorithm with varying performance and numerical properties. Fast algorithms are known to be numerically stable, but because their error bounds are slightly weaker than those of the classical algorithm, they are not used even in cases where they provide a performance benefit. We argue in this study that the numerical sacrifice of fast algorithms, particularly for the typical use cases of practical algorithms, is not prohibitive, and we explore ways to improve their accuracy both theoretically and empirically. The numerical accuracy of fast matrix multiplication depends on properties of the algorithm and of the input matrices, and we consider both contributions independently. We generalize and tighten previous error analyses of fast algorithms and compare their properties. We discuss algorithmic techniques for improving the error guarantees from two perspectives: manipulating the algorithms, and reducing input anomalies by various forms of diagonal scaling. Finally, we benchmark performance and demonstrate our improved numerical accuracy.},
  doi = {10.1137/15M1032168},
  journal = {SIAM Journal on Matrix Analysis and Applications},
  number = {4},
  volume = {37},
  place = {United States},
  year = {2016},
  month = {10}
}
```

*Citation information provided by Web of Science*