U.S. Department of Energy
Office of Scientific and Technical Information
  1. Toward Consistent High-Fidelity Quantum Learning on Unstable Devices via Efficient In-Situ Calibration

    In the near-term noisy intermediate-scale quantum (NISQ) era, high noise significantly reduces the fidelity of quantum computing. Worse, recent works reveal that the noise on quantum devices is not stable; it changes dynamically over time. This raises a pressing question: at run time, is there a way to efficiently achieve a consistently high-fidelity quantum system on unstable devices? To study this problem, we take quantum learning (a.k.a. the variational quantum algorithm) as a vehicle, since it has a wide range of applications, such as combinatorial optimization and machine learning. A straightforward approach is to optimize a variational quantum circuit (VQC) with the parameter-shift method on the target quantum device before using it; however, this optimization has an extremely high time cost, which is impractical at run time. To address this issue, in this paper we propose QuPAD, a novel quantum pulse-based noise adaptation framework. First, we identify the CNOT gate as the fidelity bottleneck of the conventional VQC and replace it with a more robust parameterized multi-qubit gate, the Rzx gate. Second, by benchmarking the Rzx gate with different parameters, we build a fitting function for each coupled qubit pair, so that the deviation between the theoretical output of the Rzx gate and its on-device output under a given pulse amplitude and duration can be efficiently predicted. On top of this, an evolutionary algorithm is devised to identify the pulse amplitude and duration of each Rzx gate (i.e., calibration) and to find quantum circuits with high fidelity. Experiments show that the on-device runtime of QuPAD with 8–10 qubits is under 15 minutes, up to 270× faster than the parameter-shift approach. In addition, compared to a vanilla VQC baseline, QuPAD achieves a 59.33% accuracy gain on a classification task and gets, on average, 66.34% closer to the ground-state energy in molecular simulation.
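As a hedged illustration of the two-stage idea in this abstract (benchmark-driven fitting function plus evolutionary pulse search), the sketch below fits a least-squares surface to hypothetical deviation measurements and then runs a tiny (mu+lambda) evolutionary search over pulse amplitude and duration. All numbers, the quadratic model, and the search hyperparameters are assumptions for illustration, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical benchmark data for one coupled qubit pair: measured deviation
# between the ideal Rzx output and the on-device output at several
# (pulse amplitude, duration) settings.  The surface below is synthetic.
amps = np.array([0.1, 0.2, 0.3, 0.4, 0.5])
durs = np.array([160, 320, 480, 640, 800])  # duration in dt units (assumed)
A, D = np.meshgrid(amps, durs)
dev = 0.02 + 0.1 * (A - 0.3) ** 2 + 1e-8 * (D - 400) ** 2

# Fit a quadratic surface dev ~ c0 + c1*a + c2*d + c3*a^2 + c4*d^2 + c5*a*d
X = np.column_stack([np.ones(A.size), A.ravel(), D.ravel(),
                     A.ravel() ** 2, D.ravel() ** 2, (A * D).ravel()])
coef, *_ = np.linalg.lstsq(X, dev.ravel(), rcond=None)

def predicted_dev(a, d):
    """Cheap surrogate for the on-device deviation at pulse setting (a, d)."""
    return coef @ np.array([1.0, a, d, a * a, d * d, a * d])

# Tiny (mu + lambda) evolutionary search for a low-deviation pulse setting.
pop = np.column_stack([rng.uniform(0.1, 0.5, 20), rng.uniform(160, 800, 20)])
for _ in range(50):
    kids = pop + rng.normal(scale=[0.02, 20.0], size=pop.shape)
    kids[:, 0] = kids[:, 0].clip(0.1, 0.5)
    kids[:, 1] = kids[:, 1].clip(160, 800)
    both = np.vstack([pop, kids])
    fitness = np.array([predicted_dev(a, d) for a, d in both])
    pop = both[np.argsort(fitness)[:20]]  # keep the 20 best

best_a, best_d = pop[0]
```

Because the surrogate is cheap to evaluate, the evolutionary search never touches the device after the initial benchmarking, which is the source of the runtime advantage the abstract claims over on-device parameter-shift optimization.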

  2. Dimensionality Reduction with Variational Encoders Based on Subsystem Purification

    Efficient methods for encoding and compression are likely to pave the way toward efficient trainability on higher-dimensional Hilbert spaces, overcoming the issue of barren plateaus. Here, we propose an alternative approach to variational autoencoders for reducing the dimensionality of states represented in higher-dimensional Hilbert spaces. To this end, we build a variational-algorithm-based autoencoder circuit that takes a dataset as input and optimizes the parameters of a Parameterized Quantum Circuit (PQC) ansatz to produce an output state that can be represented as a tensor product of two subsystems, by driving the subsystem purity $Tr(\rho^2)$ toward one. The output of this circuit is passed through a series of controlled-swap gates and measurements to produce a state with half the number of qubits while retaining the features of the starting state, in the same spirit as dimension-reduction techniques in classical algorithms. The resulting output is used for supervised learning to verify that the encoding procedure works. We use the Bars and Stripes (BAS) dataset on an 8 × 8 grid to create efficient encoded states and report a classification accuracy of 95% on this dataset. The demonstrated example thus provides evidence that the method reduces states represented in large Hilbert spaces while maintaining the features required by any machine learning algorithm that follows.
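The subsystem-purity criterion in this abstract can be made concrete with a small numerical check: for a pure state, the purity $Tr(\rho_A^2)$ of a subsystem equals 1 exactly when the state factors as a tensor product across the cut. The sketch below (plain NumPy, not the paper's circuit) computes this quantity via a partial trace:

```python
import numpy as np

def subsystem_purity(state, n_qubits, keep):
    """Tr(rho_A^2) for the first `keep` qubits of an n-qubit pure state."""
    dA = 2 ** keep
    dB = 2 ** (n_qubits - keep)
    psi = state.reshape(dA, dB)
    rho_A = psi @ psi.conj().T          # partial trace over subsystem B
    return float(np.real(np.trace(rho_A @ rho_A)))

# Product state |+>|0>: purity 1, so it factors as a tensor product.
plus = np.array([1.0, 1.0]) / np.sqrt(2)
zero = np.array([1.0, 0.0])
product = np.kron(plus, zero)
print(subsystem_purity(product, 2, 1))   # 1.0

# Bell state: maximally entangled across the cut, purity 0.5.
bell = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)
print(subsystem_purity(bell, 2, 1))      # 0.5
```

A cost function pushing this value toward 1 is what lets the trained circuit's output be split into two independent subsystems, after which half the qubits can be discarded.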

  3. Enabling Scalable VQE Simulation on Leading HPC Systems

    Large-scale simulations of quantum circuits pose significant challenges, especially in the context of quantum chemistry, due to the number of qubits, circuit depth, and the number of circuits needed per problem. High-performance computing (HPC) systems offer massive computational capabilities that could help overcome these obstacles. We developed a high-performance quantum circuit simulator, called NWQ-Sim, and demonstrate its capability to simulate large quantum chemistry problems on NERSC's Perlmutter supercomputer. Integrating NWQ-Sim with XACC, we have executed quantum phase estimation (QPE) and variational quantum eigensolver (VQE) algorithms for downfolded quantum chemistry systems at unprecedented scales. Our work demonstrates the potential of leveraging HPC resources to advance quantum chemistry and other applications of near-term quantum devices.

  4. Towards Redefining the Reproducibility in Quantum Computing: A Data Analysis Approach on NISQ Devices

    Although quantum computer hardware has made rapid progress in recent years, noise remains the main obstacle for any application seeking to leverage the power of quantum computing. Existing works addressing noise in quantum devices propose noise reduction when deploying a quantum algorithm to a specific quantum computer. The reproducibility issue of quantum algorithms has been raised because noise levels vary across quantum computers. Importantly, existing works largely ignore the fact that the noise of a quantum device also varies over time; hence, even reproducing results on the same hardware becomes a problem. We analyze the reproducibility of quantum machine learning (QML) algorithms based on daily model training and execution data collection. Our analysis shows a correlation between our QML models' test accuracy and the quantum computer hardware's calibration features. We also demonstrate that noisy simulators for quantum computers are not a reliable tool for quantum machine learning applications.
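The kind of accuracy-versus-calibration correlation this abstract reports can be sketched in a few lines. The data below is entirely synthetic (the paper's daily logs are not public here); the assumed feature is a daily mean two-qubit gate error, and the sketch simply measures the Pearson correlation against daily test accuracy:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical daily logs over 60 days: one calibration feature (assumed mean
# two-qubit gate error) and the QML model's test accuracy on that day.
days = 60
cx_error = rng.uniform(0.01, 0.04, days)
accuracy = 0.95 - 8.0 * cx_error + rng.normal(0, 0.01, days)  # synthetic link

# Pearson correlation between the calibration feature and test accuracy.
r = np.corrcoef(cx_error, accuracy)[0, 1]
print(f"correlation: {r:.3f}")  # strongly negative for this synthetic data
```

On real logs one would compute `r` per calibration feature (readout error, T1, T2, gate errors) to identify which hardware drifts most strongly predict day-to-day accuracy changes.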

  5. Noise-Resilient and Reduced Depth Approximate Adders for NISQ Quantum Computing

    The noisy intermediate-scale quantum (NISQ) era primarily focuses on mitigating noise, controlling errors, and executing high-fidelity operations, hence requiring shallow circuit depth and noise robustness. Approximate computing is a computing paradigm that produces imprecise results by relaxing the need for fully precise output in error-tolerant applications, including multimedia, data mining, and image processing. We investigate how approximate computing can improve the noise resilience of quantum adder circuits in NISQ quantum computing. We propose five approximate quantum adder designs that reduce depth while making them noise-resilient: three designs with carry-out and two without. Our design approaches include approximating the sum directly from the inputs (pass-through designs), which have zero depth since they need no quantum gates, and a second design style that approximates the sum using a single CNOT gate at a constant depth of O(1). We performed our experiments in IBM Qiskit with noise models including thermal relaxation, depolarizing, amplitude damping, phase damping, and bit-flip: (i) compared to the exact quantum ripple-carry adder without carry-out, the proposed approximate adders without carry-out improve fidelity by 8.34% to 219.22%; and (ii) compared to the exact quantum ripple-carry adder with carry-out, the proposed approximate adders with carry-out improve fidelity by 8.23% to 371%. Further, the proposed approximate quantum adders are evaluated in terms of various error metrics.
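The single-CNOT design style in this abstract amounts to computing each sum bit as an XOR of the inputs and dropping carry propagation entirely, which is what buys the constant depth. The classical sketch below (not the paper's circuits) shows the trade: it compares an exact ripple-carry sum against the carry-free approximation and counts how often they disagree over all 4-bit input pairs, one of the simplest error metrics one could report:

```python
from itertools import product

def exact_add(a_bits, b_bits):
    """n-bit ripple-carry sum (carry-out dropped), bits LSB first."""
    carry, out = 0, []
    for a, b in zip(a_bits, b_bits):
        out.append(a ^ b ^ carry)
        carry = (a & b) | (carry & (a ^ b))
    return out

def approx_add_cnot(a_bits, b_bits):
    """Approximate sum: each bit is a XOR b (one CNOT per bit, no carries)."""
    return [a ^ b for a, b in zip(a_bits, b_bits)]

# Error rate of the approximation over all 4-bit input pairs.
n, total, wrong = 4, 0, 0
for a in product([0, 1], repeat=n):
    for b in product([0, 1], repeat=n):
        total += 1
        if exact_add(list(a), list(b)) != approx_add_cnot(list(a), list(b)):
            wrong += 1
error_rate = wrong / total
print(f"error rate: {error_rate:.4f}")
```

The approximation is exact whenever no carry is generated into a retained bit position, so error-tolerant workloads trade this bounded arithmetic error for a much shallower, more noise-resilient circuit.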

  6. Quantum Programming Paradigms and Description Languages

    This article offers a perspective on quantum computing programming languages, as well as their emerging runtimes and algorithmic modalities. With the scientific high-performance computing (HPC) community as the target audience, we describe the current state of the art in the field and outline programming paradigms for scientific workflows. One take-home message is that significant work is required to refine the notion of the quantum processing unit before it can be integrated into HPC environments. Programming for today's quantum computers is making significant strides toward modern HPC-compatible workflows, but key challenges still face the field.

  7. Enabling Scalable VQE Simulation on Leading HPC Systems

    Large-scale simulations of quantum circuits pose significant challenges, especially in quantum chemistry, due to the number of qubits, circuit depth, and the number of circuits needed per problem. High-performance computing (HPC) systems offer massive computational capabilities that could help overcome these obstacles. We developed a high-performance quantum circuit simulator called NWQ-Sim, and demonstrated its capability to simulate large quantum chemistry problems on NERSC’s Perlmutter supercomputer. Integrating NWQ-Sim with XACC, an open-source programming framework for quantum-classical applications, we have executed quantum phase estimation (QPE) and variational quantum eigensolver (VQE) algorithms for downfolded quantum chemistry systems at unprecedented scales. Our work demonstrates the potential of leveraging HPC resources and optimized simulators to advance quantum chemistry and other applications of near-term quantum devices. By scaling to larger qubit counts and circuit depths, high-performance simulators like NWQ-Sim will be critical for characterizing and validating quantum algorithms before their deployment on actual quantum hardware.
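At the core of statevector simulators like the one this abstract describes is the application of a small gate matrix to one axis of a $2^n$-amplitude tensor. The minimal sketch below (plain NumPy, not NWQ-Sim's implementation) shows that kernel, assuming a little-endian qubit ordering:

```python
import numpy as np

def apply_1q(state, gate, target, n):
    """Apply a 2x2 gate to qubit `target` of an n-qubit statevector.
    Qubit 0 is the least-significant bit of the amplitude index."""
    psi = state.reshape([2] * n)
    axis = n - 1 - target                     # little-endian index convention
    psi = np.moveaxis(psi, axis, 0)
    psi = np.tensordot(gate, psi, axes=([1], [0]))  # contract gate into axis
    psi = np.moveaxis(psi, 0, axis)
    return psi.reshape(-1)

H = np.array([[1.0, 1.0], [1.0, -1.0]]) / np.sqrt(2)

# Hadamard on every qubit of |000> yields the uniform superposition.
n = 3
state = np.zeros(2 ** n)
state[0] = 1.0
for q in range(n):
    state = apply_1q(state, H, q, n)
```

Scaling this to the problem sizes in the abstract means distributing the $2^n$ amplitudes across many GPUs and nodes, which is exactly where HPC systems like Perlmutter come in.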

  8. TDAG: Tree-based Directed Acyclic Graph Partitioning for Quantum Circuits

    We propose Tree-based Directed Acyclic Graph (TDAG) partitioning for quantum circuits, a novel quantum circuit partitioning method that partitions circuits by viewing them as a series of binary trees and selecting the tree containing the most gates. TDAG produces results of comparable quality (number of partitions) to an existing exhaustive-search method, ScanPartitioner, with a 95% average reduction in execution time. Furthermore, TDAG improves on a faster partitioning method, QuickPartitioner, by 38% in result quality with minimal overhead in execution time.
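To make the partitioning problem itself concrete (this is a much simpler greedy stand-in, not the paper's tree-based algorithm), a circuit can be treated as an ordered list of gates, each touching a set of qubits, and split into blocks that each fit on a bounded number of qubits. Quality is then the number of partitions produced:

```python
def greedy_partition(gates, max_qubits):
    """Greedily pack consecutive gates into partitions that each touch at
    most `max_qubits` distinct qubits.  A gate is a tuple of qubit indices."""
    parts, current, used = [], [], set()
    for g in gates:
        if len(used | set(g)) <= max_qubits:
            current.append(g)          # gate fits in the open partition
            used |= set(g)
        else:
            parts.append(current)      # close the partition, start a new one
            current, used = [g], set(g)
    if current:
        parts.append(current)
    return parts

# A toy 6-qubit circuit as a list of two-qubit gates (hypothetical).
circuit = [(0, 1), (1, 2), (2, 3), (0, 1), (3, 4), (4, 5)]
print(greedy_partition(circuit, 3))
```

Methods like ScanPartitioner search this space exhaustively for the fewest partitions; the contribution claimed in the abstract is reaching comparable partition counts at a fraction of that search cost.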

  9. Integrating quantum computing resources into scientific HPC ecosystems

    Quantum computing (QC) offers significant potential to enhance scientific discovery in fields such as quantum chemistry, optimization, and artificial intelligence, yet it faces challenges from the noise inherent to the noisy intermediate-scale quantum era. This paper discusses the integration of QC as a computational accelerator within classical scientific high-performance computing (HPC) systems. By leveraging a broad spectrum of simulators and hardware technologies, we propose a hardware-agnostic framework for augmenting classical HPC with QC capabilities. Drawing on the HPC expertise of Oak Ridge National Laboratory (ORNL) and the HPC lifecycle management of the Department of Energy (DOE), our approach focuses on the strategic incorporation of QC capabilities and acceleration into existing scientific HPC workflows. This includes detailed analyses, benchmarks, and code optimization driven by the needs of DOE and ORNL missions. Our comprehensive framework integrates hardware, software, workflows, and user interfaces to foster a synergistic environment for quantum and classical computing research. This paper outlines plans to unlock new computational possibilities, driving forward scientific inquiry and innovation across a wide array of research domains.

  10. QASMTrans: A QASM Quantum Transpiler Framework for NISQ Devices

    In quantum computing, transpilation plays a crucial role in converting high-level, machine-independent quantum circuits into circuits specialized for a particular quantum device, accounting for factors such as the basis gate set, topology, and error profile. Yet the efficiency of transpilation remains a significant bottleneck, particularly for very large QASM-level input files. In this paper, we present QASMTrans, a C++-based high-performance quantum transpiler framework that demonstrates, on average, 50-100× speedups over Qiskit's internal transpiler. In particular, for large dense circuits such as 'uccsd n24' and 'qft n320', which comprise millions of gates, QASMTrans transpiles successfully in 69 s and 31 s, respectively, whereas Qiskit failed to finish within one hour. With QASMTrans, it becomes more feasible to explore a much larger design space and apply more comprehensive compiler optimizations.
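A core transpilation pass of the kind this abstract mentions is rewriting gates outside the device's basis gate set into equivalent sequences of native gates. The sketch below (plain NumPy, not QASMTrans code) verifies the textbook rewrite of SWAP into three CNOTs by comparing unitaries:

```python
import numpy as np

# Building blocks for two-qubit unitaries (qubit 0 = first tensor factor).
I = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
P0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # projector onto |0>
P1 = np.array([[0.0, 0.0], [0.0, 1.0]])   # projector onto |1>

cnot_01 = np.kron(P0, I) + np.kron(P1, X)  # control qubit 0, target qubit 1
cnot_10 = np.kron(I, P0) + np.kron(X, P1)  # control qubit 1, target qubit 0
swap = np.array([[1, 0, 0, 0],
                 [0, 0, 1, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1]], dtype=float)

# The standard basis rewrite: SWAP = CNOT(0,1) CNOT(1,0) CNOT(0,1).
composed = cnot_01 @ cnot_10 @ cnot_01
```

A transpiler applies thousands to millions of such local rewrites, plus qubit routing over the device topology, which is why pass efficiency dominates at the circuit sizes quoted above.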


Search for:
All Records
Author / Contributor
"Humble, Travis"
