U.S. Department of Energy
Office of Scientific and Technical Information
  1. Quantum Annealing for Real-World Machine Learning Applications

    Optimizing the training of a machine learning pipeline is important for reducing training costs and improving model performance. One such optimization strategy is quantum annealing, an emerging computing paradigm that has shown potential for optimizing the training of machine learning models. A physical quantum annealer has been realized by D-Wave Systems and is available to the research community for experiments. Recent experiments across a variety of machine learning applications have shown promising results, especially under conditions where classical machine learning techniques struggle, such as limited training data and high-dimensional features. This chapter explores the application of D-Wave's quantum annealer to optimizing machine learning pipelines for real-world classification problems. We review the application domains in which a physical quantum annealer has been used to train machine learning classifiers. We discuss and analyze experiments performed on the D-Wave quantum annealer for applications such as image recognition, remote sensing imagery, security, computational biology, biomedical sciences, and physics. We discuss the possible advantages of quantum annealing and the problems for which it is likely to outperform classical computation.
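
    As an illustration of the kind of problem formulation such experiments rely on, the sketch below casts a toy training decision, selecting weak classifiers QBoost-style, as a QUBO and solves it with D-Wave's open-source dimod library. The three-classifier setup and all coefficients are invented for illustration; on hardware, ExactSolver would be replaced by a D-Wave sampler.

        import dimod

        # Hypothetical QUBO: pick a subset of three weak classifiers.
        # Diagonal terms reward individual accuracy; off-diagonal terms
        # penalize selecting correlated pairs. Coefficients are made up.
        Q = {
            (0, 0): -1.0, (1, 1): -0.8, (2, 2): -0.6,   # accuracy rewards
            (0, 1): 0.5, (1, 2): 0.4, (0, 2): 0.3,      # correlation penalties
        }
        bqm = dimod.BinaryQuadraticModel.from_qubo(Q)

        # ExactSolver enumerates all 2^3 assignments; at realistic scale,
        # this is where the quantum annealer would be sampled instead.
        best = dimod.ExactSolver().sample(bqm).first
        print(best.sample, best.energy)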

  2. Rethinking Programming Paradigms in the QC-HPC Context

    Programming for today's quantum computers is making significant strides toward modern workflows compatible with high-performance computing (HPC), but fundamental challenges remain in integrating these vastly different technologies. Quantum computing (QC) programming languages, along with their emerging runtimes and algorithmic modalities, share some common ground. In this short paper, we explore avenues for refining the quantum processing unit (QPU) in the context of many-task management, asynchronous or otherwise, in order to understand the role it can play in linking QC with HPC. Through examples, we illustrate how its potential for scientific discovery might be realized.

  3. Benchmarking characterization methods for noisy quantum circuits

    Effective methods for characterizing the noise in quantum computing devices are essential for programming and debugging circuit performance. Existing approaches vary in the information they obtain and in the quantum and classical resources they require, with more information generally requiring more resources. Here we benchmark three characterization methods, gate set tomography, Pauli channel noise reconstruction, and empirical direct characterization, for developing models that describe noisy quantum circuit performance on a 27-qubit superconducting transmon device. We evaluate these models by comparing the accuracy of noisy circuit simulations with the corresponding experimental observations. We find that the agreement of a noise model with experiment does not correlate with the information gained by characterization, and that the underlying circuit strongly influences the best choice of characterization approach. Empirical direct characterization scaled best of the methods we tested and produced the most accurate characterizations across our benchmarks.
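
    To make concrete what one of these methods estimates, the following sketch (my own, not the paper's code) applies a single-qubit Pauli channel, the object that Pauli channel noise reconstruction recovers, to a density matrix. The error probabilities are illustrative only.

        import numpy as np

        # Single-qubit Pauli operators.
        I = np.eye(2, dtype=complex)
        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Y = np.array([[0, -1j], [1j, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)

        def pauli_channel(rho, p):
            """Apply rho -> sum_i p_i P_i rho P_i with p = (pI, pX, pY, pZ)."""
            return sum(pi * P @ rho @ P for pi, P in zip(p, (I, X, Y, Z)))

        # Example: |0><0| through a channel with 5% X and 2% Z error.
        rho0 = np.array([[1, 0], [0, 0]], dtype=complex)
        print(pauli_channel(rho0, (0.93, 0.05, 0.0, 0.02)).real)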

  4. Reliable Devices Yield Stable Quantum Computations

    Stable quantum computation requires computed results to remain bounded even in the presence of noise fluctuations. Yet non-stationary noise processes lead to drift in the characteristics of a quantum device that can greatly influence circuit outcomes. Here we address how temporal and spatial variations in noise relate device reliability to quantum computing stability. First, our approach quantifies the differences between statistical distributions of characterization metrics collected at different times and locations using the Hellinger distance. We then validate an analytical bound that relates this distance directly to the stability of a computed expectation value. Our demonstration uses numerical simulations with models informed by the Washington superconducting transmon device. We find that the stability metric is consistently bounded from above by the corresponding Hellinger distance, which can be cast as a specified tolerance level. These results underscore the significance of reliable quantum computing devices and their impact on stable quantum computation.
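
    For reference, the Hellinger distance between two discrete distributions p and q is H(p, q) = sqrt(1 - sum_i sqrt(p_i q_i)). The sketch below (mine, with made-up histograms) computes it for characterization metrics collected in two different calibration windows.

        import numpy as np

        def hellinger(p, q):
            """Hellinger distance between discrete distributions p and q."""
            p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
            return np.sqrt(1.0 - np.sum(np.sqrt(p * q)))

        # Hypothetical gate-error histograms from two calibration windows.
        p = [0.70, 0.20, 0.10]
        q = [0.60, 0.25, 0.15]
        print(hellinger(p, q))  # small value -> reliable device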

  5. Adaptive mitigation of time-varying quantum noise

    Current quantum computers suffer from non-stationary noise channels with high error rates, which undermines their reliability and reproducibility. We propose a Bayesian inference-based adaptive algorithm that can learn and mitigate quantum noise in response to changing channel conditions. Our study emphasizes the need for dynamic inference of critical channel parameters to improve program accuracy. We use the Dirichlet distribution to model the stochasticity of the Pauli channel, which allows us to perform Bayesian inference and thereby improve the performance of probabilistic error cancellation (PEC) under time-varying noise. Our work demonstrates the importance of characterizing and mitigating temporal variations in quantum noise, which is crucial for developing more accurate and reliable quantum technologies. Our results show that Bayesian PEC can outperform non-adaptive approaches by a factor of 4.5, measured by the Hellinger distance from the ideal distribution.
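
    The key mechanism is that the Dirichlet distribution is conjugate to the multinomial, so the channel estimate updates in closed form as error counts arrive. The sketch below (assumptions and numbers mine, not the paper's) shows one such update for a single-qubit Pauli channel.

        import numpy as np

        # Prior pseudo-counts over (I, X, Y, Z) outcomes of the Pauli channel.
        alpha = np.array([90.0, 5.0, 2.0, 3.0])

        # Hypothetical error counts observed in the current time window.
        counts = np.array([880, 70, 20, 30])

        # Conjugate update: the posterior is Dirichlet(alpha + counts).
        alpha_post = alpha + counts
        p_hat = alpha_post / alpha_post.sum()   # posterior-mean channel estimate
        print(p_hat)  # would parameterize the PEC quasi-probability weights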

  6. Integrating quantum computing resources into scientific HPC ecosystems

    Quantum computing (QC) offers significant potential to enhance scientific discovery in fields such as quantum chemistry, optimization, and artificial intelligence. Yet QC faces challenges from the high noise levels inherent to the noisy intermediate-scale quantum (NISQ) era. This paper discusses the integration of QC as a computational accelerator within classical scientific high-performance computing (HPC) systems. By leveraging a broad spectrum of simulators and hardware technologies, we propose a hardware-agnostic framework for augmenting classical HPC with QC capabilities. Drawing on the HPC expertise of Oak Ridge National Laboratory (ORNL) and the HPC lifecycle management of the Department of Energy (DOE), our approach focuses on the strategic incorporation of QC capabilities and acceleration into existing scientific HPC workflows. This includes detailed analyses, benchmarks, and code optimization driven by the needs of the DOE and ORNL missions. Our comprehensive framework integrates hardware, software, workflows, and user interfaces to foster a synergistic environment for quantum and classical computing research. This paper outlines plans to unlock new computational possibilities, driving forward scientific inquiry and innovation across a wide array of research domains.

  7. Toward Consistent High-Fidelity Quantum Learning on Unstable Devices via Efficient In-Situ Calibration

    In the near-term noisy intermediate-scale quantum (NISQ) era, high noise significantly reduces the fidelity of quantum computing. Worse, recent works reveal that the noise on quantum devices is not stable: it changes dynamically over time. This leads to a pressing challenge: at run-time, is there a way to efficiently achieve a consistently high-fidelity quantum system on unstable devices? To study this problem, we take quantum learning (a.k.a. variational quantum algorithms) as a vehicle, since it has a wide range of applications such as combinatorial optimization and machine learning. A straightforward approach is to optimize a variational quantum circuit (VQC) with the parameter-shift approach on the target quantum device before using it; however, this optimization has an extremely high time cost, which is not practical at run-time. To address this issue, we propose a novel quantum pulse-based noise adaptation framework, QuPAD. First, we identify the CNOT gate as the fidelity bottleneck of the conventional VQC and replace it with a more robust parameterized multi-qubit gate, the Rzx gate. Second, by benchmarking the Rzx gate with different parameters, we build a fitting function for each coupled qubit pair, so that the deviation between the theoretical output of the Rzx gate and its on-device output under a given pulse amplitude and duration can be efficiently predicted. On top of this, an evolutionary algorithm identifies the pulse amplitude and duration of each Rzx gate (i.e., the calibration) and finds quantum circuits with high fidelity. Experiments show that the runtime of QuPAD on quantum devices with 8–10 qubits is less than 15 minutes, up to 270x faster than the parameter-shift approach. In addition, compared to a vanilla VQC baseline, QuPAD achieves a 59.33% accuracy gain on a classification task and comes, on average, 66.34% closer to the ground-state energy in molecular simulation.
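
    For context, the gate at the center of the framework is Rzx(theta) = exp(-i (theta/2) Z tensor X); at theta = pi/2 it is locally equivalent to CNOT, which is why a well-calibrated Rzx can stand in for the CNOT bottleneck. A minimal numerical sketch (mine, not QuPAD code):

        import numpy as np
        from scipy.linalg import expm

        X = np.array([[0, 1], [1, 0]], dtype=complex)
        Z = np.array([[1, 0], [0, -1]], dtype=complex)

        def rzx(theta):
            """Two-qubit Rzx gate: exp(-i * theta/2 * Z (x) X)."""
            return expm(-1j * (theta / 2) * np.kron(Z, X))

        # At theta = pi/2, Rzx is locally equivalent to CNOT.
        print(np.round(rzx(np.pi / 2), 3))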

  8. Quantum-centric supercomputing for materials science: A perspective on challenges and future directions

    Computational models are an essential tool for the design, characterization, and discovery of novel materials. Computationally hard tasks in materials science stretch the limits of existing high-performance supercomputing centers, consuming much of their resources for simulation, analysis, and data processing. Quantum computing, on the other hand, is an emerging technology with the potential to accelerate many of the computational tasks needed for materials science. To do so, the quantum technology must interact with conventional high-performance computing in several ways: validating approximate results, identifying hard problems, and exploiting synergies in quantum-centric supercomputing. Here we provide a perspective on how quantum-centric supercomputing can help address critical computational problems in materials science, the challenges to be faced in solving representative use cases, and suggested new directions.

  9. Dimensionality Reduction with Variational Encoders Based on Subsystem Purification

    Efficient methods for encoding and compression are likely to pave the way toward efficient trainability on higher-dimensional Hilbert spaces, overcoming the issue of barren plateaus. Here, we propose an alternative approach to variational autoencoders for reducing the dimensionality of states represented in higher-dimensional Hilbert spaces. To this end, we build a variational-algorithm-based autoencoder circuit that takes a dataset as input and optimizes the parameters of a parameterized quantum circuit (PQC) ansatz to produce an output state that can be represented as a tensor product of two subsystems by minimizing $\mathrm{Tr}(\rho^2)$. The output of this circuit is passed through a series of controlled-swap gates and measurements to produce a state with half the number of qubits that retains the features of the input state, in the same spirit as dimension-reduction techniques used in classical algorithms. The output is then used for supervised learning to verify that the encoding procedure works. We use the Bars and Stripes (BAS) dataset on an 8 × 8 grid to create efficient encoding states and report a classification accuracy of 95% on this dataset. The demonstrated example thus shows that the method reduces states represented in large Hilbert spaces while preserving the features required for any subsequent machine learning algorithm.
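
    As a concrete illustration of the quantity in the objective, the purity $\mathrm{Tr}(\rho^2)$ of a subsystem's reduced state distinguishes product states (purity 1) from entangled ones (purity < 1), which is what the autoencoder's cost function exploits. A small sketch (mine, not the paper's implementation):

        import numpy as np

        def reduced_purity(psi, dim_a, dim_b):
            """Purity Tr(rho_A^2) of subsystem A for a pure state on A (x) B."""
            m = psi.reshape(dim_a, dim_b)
            rho_a = m @ m.conj().T          # partial trace over subsystem B
            return float(np.real(np.trace(rho_a @ rho_a)))

        # A product state |00> has purity 1; a Bell state only 0.5.
        product = np.kron([1.0, 0.0], [1.0, 0.0]).astype(complex)
        bell = np.array([1, 0, 0, 1], dtype=complex) / np.sqrt(2)
        print(reduced_purity(product, 2, 2), reduced_purity(bell, 2, 2))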

  10. Quantum Programming Paradigms and Description Languages

    This article offers a perspective on quantum computing programming languages, as well as their emerging runtimes and algorithmic modalities. With the scientific high-performance computing (HPC) community as the target audience, we describe the current state of the art in the field and outline programming paradigms for scientific workflows. One take-home message is that significant work is required to first refine the notion of the quantum processing unit before it can be integrated into HPC environments. Programming for today's quantum computers is making significant strides toward modern HPC-compatible workflows, but key challenges still face the field.

