DOE PAGES · U.S. Department of Energy
Office of Scientific and Technical Information

Title: Towards provably efficient quantum algorithms for large-scale machine-learning models

Journal Article · Nature Communications
Authors: [1]; [2]; [3]; [4]; [5]; [6]; [7]; [8]
  1. University of Chicago, IL (United States); Chicago Quantum Exchange, IL (United States); qBraid Co., Chicago, IL (United States); SeQure, Chicago, IL (United States)
  2. University of Chicago, IL (United States); Argonne National Laboratory (ANL), Argonne, IL (United States)
  3. University of California, Berkeley, CA (United States); Massachusetts Institute of Technology (MIT), Cambridge, MA (United States)
  4. University of Chicago, IL (United States)
  5. Brandeis University, Waltham, MA (United States)
  6. University of Chicago, IL (United States); Chicago Quantum Exchange, IL (United States); Argonne National Laboratory (ANL), Argonne, IL (United States)
  7. Free University, Berlin (Germany)
  8. University of Chicago, IL (United States); Chicago Quantum Exchange, IL (United States)

Large machine-learning models are revolutionary artificial-intelligence technologies whose bottlenecks include the enormous computational expense, power, and time consumed in both pre-training and fine-tuning. In this work, we show that fault-tolerant quantum computing could possibly provide provably efficient resolutions for generic (stochastic) gradient descent algorithms, scaling as $$\mathcal{O}(T^2 \times \mathrm{polylog}(n))$$, where $$n$$ is the size of the model and $$T$$ is the number of training iterations, provided the models are both sufficiently dissipative and sparse, with small learning rates. Building on earlier efficient quantum algorithms for dissipative differential equations, we prove that similar algorithms work for (stochastic) gradient descent, the primary algorithm of machine learning. In practice, we benchmark instances of large machine-learning models ranging from 7 million to 103 million parameters. We find that, in the context of sparse training, a quantum enhancement is possible at the early stage of learning after model pruning, motivating a sparse parameter download-and-re-upload scheme. Our work demonstrates that fault-tolerant quantum algorithms could potentially contribute to most state-of-the-art, large-scale machine-learning problems.
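The sparse-training setting described in the abstract can be illustrated with a minimal classical sketch: magnitude pruning fixes a sparse support, and gradient descent with a small learning rate then updates only the surviving parameters. This is an illustrative assumption, not the paper's quantum algorithm; the toy quadratic loss and all names below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy quadratic loss L(w) = 0.5 * ||A w - b||^2; its gradient flow is dissipative.
n = 64
A = rng.normal(size=(n, n)) / np.sqrt(n)
b = rng.normal(size=n)

def loss(w):
    return 0.5 * np.linalg.norm(A @ w - b) ** 2

def grad(w):
    return A.T @ (A @ w - b)

# Magnitude pruning: keep only the k largest-magnitude initial weights.
w = rng.normal(size=n)
k = 16
mask = np.zeros(n, dtype=bool)
mask[np.argsort(np.abs(w))[-k:]] = True
w *= mask

loss0 = loss(w)
eta = 0.05  # small learning rate, as the dissipativity condition assumes
for _ in range(200):
    w -= eta * grad(w) * mask  # update only the surviving sparse parameters
```

In the paper's scheme, only the sparse masked parameters would need to be downloaded from and re-uploaded to the quantum device, which is what keeps the classical I/O cost manageable.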

Research Organization:
Argonne National Laboratory (ANL), Argonne, IL (United States)
Sponsoring Organization:
David and Lucile Packard Foundation; National Science Foundation (NSF); Simons Foundation; US Air Force Office of Scientific Research (AFOSR); US Army Research Office (ARO); USDOE Office of Science (SC), Basic Energy Sciences (BES), Scientific User Facilities (SUF)
Grant/Contract Number:
AC02-06CH11357
OSTI ID:
2469541
Journal Information:
Nature Communications, Vol. 15, Issue 1; ISSN 2041-1723
Publisher:
Nature Publishing Group
Country of Publication:
United States
Language:
English



Similar Records

Resource frugal optimizer for quantum machine learning
Journal Article · 2023 · Quantum Science and Technology · OSTI ID:2228652

Stochastic Spectral Descent for Discrete Graphical Models
Journal Article · 2015 · IEEE Journal of Selected Topics in Signal Processing · OSTI ID:1367144

A phase transition for finding needles in nonlinear haystacks with LASSO artificial neural networks
Journal Article · 2022 · Statistics and Computing · OSTI ID:2469624