DOE PAGES
U.S. Department of Energy, Office of Scientific and Technical Information
  1. Effect of Nonunital Noise on Random-Circuit Sampling

    In this work, drawing inspiration from the type of noise present in real hardware, we study the output distribution of random quantum circuits under practical nonunital noise sources with constant noise rates. We show that, even in the presence of unital sources such as the depolarizing channel, the output distribution under the combined noise channel never resembles a maximally entropic distribution at any depth. To show this, we prove that the output distribution of such circuits never anticoncentrates—meaning that it is never too “flat”—regardless of the depth of the circuit. This is in stark contrast to the behavior of noiseless random quantum circuits or those with only unital noise, both of which anticoncentrate at sufficiently large depths. As a consequence, our results show that the complexity of random-circuit sampling under realistic noise remains an open question, since anticoncentration is a critical property exploited by both state-of-the-art classical hardness and easiness results. Published by the American Physical Society 2024
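Anticoncentration is commonly quantified through the collision probability Z = Σ_x p(x)², which equals 1/2ⁿ for the uniform (maximally entropic) distribution and roughly 2/2ⁿ for the Porter-Thomas distribution of ideal deep random circuits. A minimal numerical sketch of this quantity (function and variable names are illustrative, not from the paper):

```python
import numpy as np

def collision_probability(p):
    """Z = sum_x p(x)^2; equals 1/dim for the uniform distribution."""
    return float(np.sum(p**2))

n = 10
dim = 2**n
rng = np.random.default_rng(0)

# Maximally entropic (uniform) distribution: Z * dim = 1.
uniform = np.full(dim, 1.0 / dim)

# Porter-Thomas-like distribution (ideal deep random circuit):
# exponentially distributed probabilities give Z * dim close to 2.
pt = rng.exponential(scale=1.0 / dim, size=dim)
pt /= pt.sum()

print(collision_probability(uniform) * dim)  # -> 1.0
print(collision_probability(pt) * dim)       # close to 2
```

A distribution anticoncentrates when Z stays within a constant factor of the uniform value; the paper's point is that under nonunital noise this ratio diverges at every depth.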
  2. Continuous-Variable Quantum State Designs: Theory and Applications

    We generalize the notion of quantum state designs to infinite-dimensional spaces. We first prove that, under the definition of continuous-variable (CV) state t-designs from [Blume-Kohout, ], no state designs exist for t ≥ 2. Similarly, we prove that no CV unitary t-designs exist for t ≥ 2. We propose an alternative definition for CV state designs, which we call rigged t-designs, and provide explicit constructions for t = 2. As an application of rigged designs, we develop a design-based shadow-tomography protocol for CV states. Using energy-constrained versions of rigged designs, we define an average fidelity for CV quantum channels and relate this fidelity to the CV entanglement fidelity. As an additional result of independent interest, we establish a connection between torus 2-designs and complete sets of mutually unbiased bases. Published by the American Physical Society 2024
  3. Quantum-centric supercomputing for materials science: A perspective on challenges and future directions

    Computational models are an essential tool for the design, characterization, and discovery of novel materials. Computationally hard tasks in materials science stretch the limits of existing high-performance supercomputing centers, consuming much of their resources for simulation, analysis, and data processing. Quantum computing, on the other hand, is an emerging technology with the potential to accelerate many of the computational tasks needed for materials science. To do so, quantum technology must interact with conventional high-performance computing in several ways: validation of approximate results, identification of hard problems, and synergies in quantum-centric supercomputing. Here we provide a perspective on how quantum-centric supercomputing can help address critical computational problems in materials science, the challenges to be faced in solving representative use cases, and suggested new directions.
  4. Analytic Theory for the Dynamics of Wide Quantum Neural Networks

    Parametrized quantum circuits can be used as quantum neural networks and have the potential to outperform their classical counterparts when trained to address learning problems. To date, many of the results on their performance on practical problems are heuristic in nature. In particular, the convergence rate for the training of quantum neural networks is not fully understood. Here, we analyze the dynamics of gradient descent for the training error of a class of variational quantum machine learning models. We define wide quantum neural networks as parametrized quantum circuits in the limit of a large number of qubits and variational parameters. Then, we find a simple analytic formula that captures the average behavior of their loss function and discuss the consequences of our findings. For example, for random quantum circuits, we predict and characterize an exponential decay of the residual training error as a function of the parameters of the system. Finally, we validate our analytic results with numerical experiments.
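The exponential decay of the residual training error can be illustrated with a classical caricature of linearized ("lazy") training, in which a fixed positive semidefinite kernel contracts the residual at every gradient step. This is only an analogy to the wide-network limit described above, not the paper's derivation; the kernel K and all names below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Linearized ("lazy") training caricature: in the wide limit the residual
# training error r evolves as r_{t+1} = (I - eta*K) r_t for a fixed
# positive semidefinite kernel K, so each eigenmode decays exponentially.
P = 50                                   # number of variational parameters
J = rng.normal(size=(P, P)) / np.sqrt(P)
K = J @ J.T                              # fixed PSD "tangent kernel"
eta = 0.1 / np.linalg.eigvalsh(K).max()  # stable learning rate

r = rng.normal(size=P)                   # initial residual error
norms = [np.linalg.norm(r)]
for _ in range(200):
    r = r - eta * (K @ r)
    norms.append(np.linalg.norm(r))

# The residual norm decreases monotonically; each eigenmode of K decays
# exponentially at a rate set by its eigenvalue.
print(norms[-1] < norms[0])  # -> True
```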
  5. Variational quantum state eigensolver

    Extracting eigenvalues and eigenvectors of exponentially large matrices will be an important application of near-term quantum computers. The variational quantum eigensolver (VQE) treats the case when the matrix is a Hamiltonian. Here, we address the case when the matrix is a density matrix ρ. We introduce the variational quantum state eigensolver (VQSE), which is analogous to VQE in that it variationally learns the largest eigenvalues of ρ as well as a gate sequence V that prepares the corresponding eigenvectors. VQSE exploits the connection between diagonalization and majorization to define a cost function C = Tr(ρ̃H), where H is a nondegenerate Hamiltonian. Due to Schur-concavity, C is minimized when ρ̃ = VρV† is diagonal in the eigenbasis of H. VQSE requires only a single copy of ρ (only n qubits) per iteration, making it amenable to near-term implementation. We heuristically demonstrate two applications of VQSE: (1) principal component analysis and (2) error mitigation.
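The Schur-concavity argument can be checked numerically in the smallest case: for a single qubit, sweeping a rotation V shows that C = Tr(VρV†H) is minimized exactly when VρV† is diagonal in the eigenbasis of H, pairing the largest eigenvalue of ρ with the smallest eigenvalue of H. A toy sketch with illustrative values (not the paper's ansatz):

```python
import numpy as np

# Toy VQSE cost C(V) = Tr(V rho V^T H) for one qubit, with a
# nondegenerate H that is diagonal in the computational basis.
H = np.diag([0.0, 1.0])

def ry(t):
    """Real single-qubit rotation used as the variational circuit V."""
    return np.array([[np.cos(t / 2), -np.sin(t / 2)],
                     [np.sin(t / 2),  np.cos(t / 2)]])

# Density matrix with eigenvalues (0.8, 0.2) hidden in a rotated basis.
rho = ry(0.7) @ np.diag([0.8, 0.2]) @ ry(0.7).T

def cost(theta):
    V = ry(theta)
    return float(np.trace(V @ rho @ V.T @ H))

thetas = np.linspace(-np.pi, np.pi, 721)
costs = [cost(t) for t in thetas]

# Schur-concavity: the minimum pairs the largest eigenvalue of rho (0.8)
# with the smallest eigenvalue of H (0), so C_min = 0.8*0 + 0.2*1 = 0.2,
# attained when V diagonalizes rho.
print(round(min(costs), 3))  # -> 0.2
```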
  6. Generalization in quantum machine learning from few training data

    Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on a training data set, and subsequently making predictions on a testing data set (i.e., generalizing). In this work, we provide a comprehensive study of generalization performance in QML after training on a limited number N of training data points. We show that the generalization error of a quantum machine learning model with T trainable gates scales at worst as $$\sqrt{T/N}$$. When only $$K \ll T$$ gates have undergone substantial change in the optimization process, we prove that the generalization error improves to $$\sqrt{K/N}$$. Our results imply that the compiling of unitaries into a polynomial number of native gates, a crucial application for the quantum computing industry that typically uses exponential-size training data, can be sped up significantly. We also show that classification of quantum states across a phase transition with a quantum convolutional neural network requires only a very small training data set. Other potential applications include learning quantum error-correcting codes and quantum dynamical simulation. Our work injects new hope into the field of QML, as good generalization is guaranteed from few training data.
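The scaling of the bound is simple enough to evaluate directly. The helper below (illustrative, with the paper's constants and logarithmic factors omitted) shows how restricting the optimization to K ≪ T active gates tightens the bound:

```python
import numpy as np

def gen_error_bound(T, N, K=None):
    """Scaling of the QML generalization bound: sqrt(T/N) for T trainable
    gates and N training points, improving to sqrt(K/N) when only K gates
    change appreciably. Constants and log factors are omitted."""
    effective = T if K is None else K
    return float(np.sqrt(effective / N))

T, N = 1000, 4000
print(gen_error_bound(T, N))        # -> 0.5
print(gen_error_bound(T, N, K=40))  # -> 0.1
```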
  7. Diagnosing Barren Plateaus with Tools from Quantum Optimal Control

    Variational Quantum Algorithms (VQAs) have received considerable attention due to their potential for achieving near-term quantum advantage. However, more work is needed to understand their scalability. One known scaling result for VQAs is barren plateaus, where certain circumstances lead to exponentially vanishing gradients. It is common folklore that problem-inspired ansatzes avoid barren plateaus, but in fact very little is known about their gradient scaling. In this work we employ tools from quantum optimal control to develop a framework that can diagnose the presence or absence of barren plateaus for problem-inspired ansatzes. Such ansatzes include the Quantum Alternating Operator Ansatz (QAOA), the Hamiltonian Variational Ansatz (HVA), and others. With our framework, we prove that avoiding barren plateaus for these ansatzes is not always guaranteed. Specifically, we show that the gradient scaling of the VQA depends on the degree of controllability of the system, and hence can be diagnosed through the dynamical Lie algebra $$\mathfrak{g}$$ obtained from the generators of the ansatz. We analyze the existence of barren plateaus in QAOA and HVA ansatzes, and we highlight the role of the input state, as different initial states can lead to the presence or absence of barren plateaus. Taken together, our results provide a framework for trainability-aware ansatz design strategies that do not come at the cost of extra quantum resources. Moreover, we prove no-go results for obtaining ground states with variational ansatzes for controllable systems such as spin glasses. Our work establishes a link between the existence of barren plateaus and the scaling of the dimension of $$\mathfrak{g}$$.
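The dynamical Lie algebra $$\mathfrak{g}$$ can be computed for small systems by closing the set of ansatz generators under commutators and tracking the linear span. A minimal single-qubit sketch (generators chosen for illustration, not taken from a specific ansatz in the paper): {iX, iZ} closes to all of su(2), the controllable case.

```python
import numpy as np

X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def dla_dimension(generators, max_rounds=10):
    """Dimension of the dynamical Lie algebra: close the generator set
    under commutators, keeping a commutator only if it enlarges the
    linear span (checked via matrix rank)."""
    basis = [g.flatten() for g in generators]
    ops = list(generators)
    for _ in range(max_rounds):
        new_ops = []
        for i in range(len(ops)):
            for j in range(i + 1, len(ops)):
                c = ops[i] @ ops[j] - ops[j] @ ops[i]
                candidate = np.vstack(basis + [c.flatten()])
                if np.linalg.matrix_rank(candidate, tol=1e-10) > len(basis):
                    basis.append(c.flatten())
                    new_ops.append(c)
        if not new_ops:
            break
        ops.extend(new_ops)
    return len(basis)

# [iX, iZ] is proportional to iY, so {iX, iZ} closes to all of su(2):
print(dla_dimension([1j * X, 1j * Z]))  # -> 3 (controllable qubit)
print(dla_dimension([1j * Z]))          # -> 1 (uncontrollable)
```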
  8. Trainability of Dissipative Perceptron-Based Quantum Neural Networks

    Several architectures have been proposed for quantum neural networks (QNNs), with the goal of efficiently performing machine learning tasks on quantum data. Rigorous scaling results are urgently needed for specific QNN constructions to understand which, if any, will be trainable at a large scale. Here, we analyze the gradient scaling (and hence the trainability) of a recently proposed architecture that we call dissipative QNNs (DQNNs), where the input qubits of each layer are discarded at the layer’s output. We find that DQNNs can exhibit barren plateaus, i.e., gradients that vanish exponentially in the number of qubits. Moreover, we provide quantitative bounds on the scaling of the gradient for DQNNs under different conditions, such as different cost functions and circuit depths, and show that trainability is not always guaranteed. Our work represents the first rigorous analysis of the scalability of a perceptron-based QNN.
  9. Connecting Ansatz Expressibility to Gradient Magnitudes and Barren Plateaus

    Parametrized quantum circuits serve as ansatze for solving variational problems and provide a flexible paradigm for the programming of near-term quantum computers. Ideally, such ansatze should be highly expressive, so that a close approximation of the desired solution can be accessed. On the other hand, the ansatz must also have sufficiently large gradients to allow for training. Here, we derive a fundamental relationship between these two essential properties: expressibility and trainability. This is done by extending the well-established barren plateau phenomenon, which holds for ansatze that form exact 2-designs, to arbitrary ansatze. Specifically, we calculate the variance in the cost gradient in terms of the expressibility of the ansatz, as measured by its distance from being a 2-design. Our resulting bounds indicate that highly expressive ansatze exhibit flatter cost landscapes and will therefore be harder to train. Furthermore, we provide numerics illustrating the effect of expressibility on gradient scalings, and we discuss the implications for designing strategies to avoid barren plateaus.
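The flattening of the cost landscape for expressive circuits can be observed numerically: estimating the variance of a parameter-shift gradient for a generic deep layered ansatz shows the variance shrinking as qubits are added. This is a toy experiment in the spirit of the abstract, not the paper's bound; the circuit and all names are illustrative:

```python
import numpy as np
from functools import reduce

rng = np.random.default_rng(2)

def ry(t):
    c, s = np.cos(t / 2), np.sin(t / 2)
    return np.array([[c, -s], [s, c]])

def cz_chain(n):
    """Fixed entangling layer: CZ gates on a line of n qubits."""
    cz = np.eye(4)
    cz[3, 3] = -1.0
    U = np.eye(2**n)
    for q in range(n - 1):
        U = reduce(np.kron, [np.eye(2)] * q + [cz] + [np.eye(2)] * (n - q - 2)) @ U
    return U

def cost(thetas, n, ent):
    """<psi| Z_1 |psi> for a layered RY + CZ circuit acting on |0...0>."""
    psi = np.zeros(2**n)
    psi[0] = 1.0
    for row in thetas:
        psi = ent @ (reduce(np.kron, [ry(t) for t in row]) @ psi)
    z1 = reduce(np.kron, [np.diag([1.0, -1.0])] + [np.eye(2)] * (n - 1))
    return psi @ (z1 @ psi)

def grad_variance(n, depth=20, samples=200):
    """Variance of the parameter-shift gradient of the first angle,
    estimated over random parameter settings."""
    ent = cz_chain(n)
    grads = []
    for _ in range(samples):
        th = rng.uniform(0, 2 * np.pi, size=(depth, n))
        plus, minus = th.copy(), th.copy()
        plus[0, 0] += np.pi / 2
        minus[0, 0] -= np.pi / 2
        grads.append((cost(plus, n, ent) - cost(minus, n, ent)) / 2)
    return float(np.var(grads))

# Gradient variance shrinks as qubits are added (barren-plateau trend).
v2, v4 = grad_variance(2), grad_variance(4)
print(v2 > v4)  # -> True
```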
  10. Reformulation of the No-Free-Lunch Theorem for Entangled Datasets

    The No-Free-Lunch (NFL) theorem is a celebrated result in learning theory that limits one’s ability to learn a function from a training data set. With the recent rise of quantum machine learning, it is natural to ask whether there is a quantum analog of the NFL theorem, which would restrict a quantum computer’s ability to learn a unitary process from quantum training data. However, in the quantum setting, the training data can possess entanglement, a strong correlation with no classical analog. In this work, we show that entangled data sets lead to an apparent violation of the (classical) NFL theorem. This motivates a reformulation that accounts for the degree of entanglement in the training set. As our main result, we prove a quantum NFL theorem whereby the fundamental limit on the learnability of a unitary is reduced by entanglement. We employ Rigetti's quantum computer to test both the classical and quantum NFL theorems. In conclusion, our work establishes that entanglement is a commodity in quantum machine learning.
...

Search for:
All Records
Creator / Author
"Sharma, Kunal"
