Subtleties in the trainability of quantum machine learning models
Journal Article · Quantum Machine Intelligence
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); National Univ. of Singapore (Singapore); Ecole Polytechnique Federale Lausanne (EPFL) (Switzerland); Ministry of Higher Education, Science, Research and Innovation (Thailand)
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); Imperial College, London (United Kingdom)
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States); State Univ. of New York (SUNY), Stony Brook, NY (United States)
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
A new paradigm for data science has emerged, with quantum data, quantum models, and quantum computational devices. This field, called quantum machine learning (QML), aims to achieve a speedup over traditional machine learning for data analysis. However, its success usually hinges on efficiently training the parameters in quantum neural networks, and the field of QML is still lacking theoretical scaling results for their trainability. Some trainability results have been proven for a closely related field called variational quantum algorithms (VQAs). While both fields involve training a parametrized quantum circuit, there are crucial differences that make the results for one setting not readily applicable to the other. In this work, we bridge the two frameworks and show that gradient scaling results for VQAs can also be applied to study the gradient scaling of QML models. Our results indicate that features deemed detrimental for VQA trainability can also lead to issues such as barren plateaus in QML. Consequently, our work has implications for several QML proposals in the literature. In addition, we provide theoretical and numerical evidence that QML models exhibit further trainability issues not present in VQAs, arising from the use of a training dataset. We refer to these as dataset-induced barren plateaus. These results are most relevant when dealing with classical data, as here the choice of embedding scheme (i.e., the map between classical data and quantum states) can greatly affect the gradient scaling.
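The barren plateau phenomenon described in the abstract is rooted in exponential concentration: as the number of qubits grows, expectation values of local observables over sufficiently random states cluster ever more tightly around their mean, so cost gradients vanish exponentially. As an illustrative sketch only (not code from the paper), the snippet below estimates the variance of a Pauli-Z expectation over Haar-random states with NumPy; analytically this variance is 1/(2^n + 1), mirroring the exponential gradient suppression the paper studies:

```python
import numpy as np

def haar_random_state(n_qubits, rng):
    """Draw a Haar-random pure state on n_qubits qubits."""
    dim = 2 ** n_qubits
    vec = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return vec / np.linalg.norm(vec)

def z0_expectation(state):
    """Expectation of Pauli-Z on the first qubit.

    Basis states whose leading bit is 0 contribute +1, the rest -1.
    """
    dim = state.size
    probs = np.abs(state) ** 2
    signs = np.where(np.arange(dim) < dim // 2, 1.0, -1.0)
    return float(np.sum(signs * probs))

def variance_of_expectation(n_qubits, n_samples=2000, seed=0):
    """Sample variance of <Z_0> over Haar-random states."""
    rng = np.random.default_rng(seed)
    vals = [z0_expectation(haar_random_state(n_qubits, rng))
            for _ in range(n_samples)]
    return float(np.var(vals))

# The variance shrinks roughly as 1/(2^n + 1) with the qubit count n,
# the same exponential concentration behind barren plateau gradients.
for n in (2, 4, 6, 8):
    print(n, variance_of_expectation(n))
```

Running this shows the variance collapsing by roughly a factor of four per added qubit, which is why gradient-based training of unstructured deep circuits becomes infeasible at scale.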
- Research Organization:
- Los Alamos National Laboratory (LANL), Los Alamos, NM (United States)
- Sponsoring Organization:
- USDOE Laboratory Directed Research and Development (LDRD) Program; USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR); USDOE National Nuclear Security Administration (NNSA)
- Grant/Contract Number:
- 89233218CNA000001
- OSTI ID:
- 2305307
- Report Number(s):
- LA-UR--21-30290
- Journal Information:
- Quantum Machine Intelligence, Vol. 5, Issue 1; ISSN 2524-4906
- Publisher:
- Springer Nature
- Country of Publication:
- United States
- Language:
- English
Similar Records
On the practical usefulness of the Hardware Efficient Ansatz
Journal Article · July 3, 2024 · Quantum · OSTI ID: 2391064
Trainability of Dissipative Perceptron-Based Quantum Neural Networks
Journal Article · May 6, 2022 · Physical Review Letters · OSTI ID: 1992248