DOE PAGES: U.S. Department of Energy
Office of Scientific and Technical Information

Title: A Study of Complex Deep Learning Networks on High-Performance, Neuromorphic, and Quantum Computers

Abstract

Current deep learning approaches have been very successful using convolutional neural networks trained on large graphical-processing-unit-based computers. Three limitations of this approach are that (1) they are based on a simple layered network topology, i.e., highly connected layers, without intra-layer connections; (2) the networks are manually configured to achieve optimal results; and (3) the implementation of the network model is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high-performance computing to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. We use the MNIST dataset for our experiment due to input size limitations of current quantum computers. Our results show the feasibility of using the three architectures in tandem to address the above deep learning limitations. Finally, we show that a quantum computer can find high-quality values of intra-layer connection weights in a tractable time as the complexity of the network increases, a high-performance computer can find optimal layer-based topologies, and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware.
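To make the abstract's central point about intra-layer connections concrete, the short Python sketch below (illustrative only; it is not taken from the paper, and all variable names and sizes are assumptions) writes down a Boltzmann-machine-style energy that adds a lateral, intra-layer coupling term to the usual inter-layer weights. A restricted Boltzmann machine omits that term so classical training stays tractable; keeping it is what motivates handing the weight search to a quantum annealer, typically by mapping an energy of this form to an Ising/QUBO problem.

# Illustrative sketch, not from the paper: energy of binary units with both
# inter-layer weights W and an intra-layer ("lateral") coupling matrix L.
# All names and sizes here are assumptions chosen for a tiny toy example.
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 16, 8                 # toy sizes, not MNIST dimensions
visible = rng.integers(0, 2, n_visible)     # binary visible units (e.g., pixels)
hidden = rng.integers(0, 2, n_hidden)       # binary hidden units

W = rng.normal(scale=0.1, size=(n_visible, n_hidden))                 # inter-layer weights
L = np.triu(rng.normal(scale=0.1, size=(n_hidden, n_hidden)), k=1)    # intra-layer weights (upper triangle)
b = rng.normal(scale=0.1, size=n_hidden)                              # hidden-unit biases

def energy(v, h):
    # Boltzmann-style energy with an extra intra-layer term h^T L h.
    # Dropping L recovers the restricted (bipartite) case; keeping it
    # makes classical sampling hard, which is the setting where the
    # abstract argues a quantum annealer can help find good weights.
    return -(v @ W @ h + h @ L @ h + b @ h)

print("energy of a random configuration:", energy(visible, hidden))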

Authors:
 Potok, Thomas E. [1]; Schuman, Catherine [1]; Young, Steven [1]; Patton, Robert [1]; Spedalieri, Federico [2]; Liu, Jeremy [2]; Yao, Ke-Thia [2]; Rose, Garrett [3]; Chakma, Gangotree [3]
  1. Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computational Data Analytics Group
  2. Univ. of Southern California, Marina del Rey, CA (United States). Information Sciences Inst.
  3. Univ. of Tennessee, Knoxville, TN (United States). Dept. of Electrical Engineering and Computer Science
Publication Date:
July 2018
Research Org.:
Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
OSTI Identifier:
1474723
Grant/Contract Number:  
AC05-00OR22725
Resource Type:
Accepted Manuscript
Journal Name:
ACM Journal on Emerging Technologies in Computing Systems
Additional Journal Information:
Journal Volume: 14; Journal Issue: 2; Journal ID: ISSN 1550-4832
Publisher:
Association for Computing Machinery
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; deep learning; quantum computing; neuromorphic computing; high-performance computing

Citation Formats

Potok, Thomas E., Schuman, Catherine, Young, Steven, Patton, Robert, Spedalieri, Federico, Liu, Jeremy, Yao, Ke-Thia, Rose, Garrett, and Chakma, Gangotree. A Study of Complex Deep Learning Networks on High-Performance, Neuromorphic, and Quantum Computers. United States: N. p., 2018. Web. doi:10.1145/3178454.
Potok, Thomas E., Schuman, Catherine, Young, Steven, Patton, Robert, Spedalieri, Federico, Liu, Jeremy, Yao, Ke-Thia, Rose, Garrett, & Chakma, Gangotree. A Study of Complex Deep Learning Networks on High-Performance, Neuromorphic, and Quantum Computers. United States. doi:10.1145/3178454.
Potok, Thomas E., Schuman, Catherine, Young, Steven, Patton, Robert, Spedalieri, Federico, Liu, Jeremy, Yao, Ke-Thia, Rose, Garrett, and Chakma, Gangotree. 2018. "A Study of Complex Deep Learning Networks on High-Performance, Neuromorphic, and Quantum Computers". United States. doi:10.1145/3178454. https://www.osti.gov/servlets/purl/1474723.
@article{osti_1474723,
title = {A Study of Complex Deep Learning Networks on High-Performance, Neuromorphic, and Quantum Computers},
author = {Potok, Thomas E. and Schuman, Catherine and Young, Steven and Patton, Robert and Spedalieri, Federico and Liu, Jeremy and Yao, Ke-Thia and Rose, Garrett and Chakma, Gangotree},
abstractNote = {Current deep learning approaches have been very successful using convolutional neural networks trained on large graphical-processing-unit-based computers. Three limitations of this approach are that (1) they are based on a simple layered network topology, i.e., highly connected layers, without intra-layer connections; (2) the networks are manually configured to achieve optimal results, and (3) the implementation of the network model is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. We use the MNIST dataset for our experiment, due to input size limitations of current quantum computers. Our results show the feasibility of using the three architectures in tandem to address the above deep learning limitations. Finally, we show that a quantum computer can find high quality values of intra-layer connection weights in a tractable time as the complexity of the network increases, a high performance computer can find optimal layer-based topologies, and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low power memristive hardware.},
doi = {10.1145/3178454},
journal = {ACM Journal on Emerging Technologies in Computing Systems},
number = 2,
volume = 14,
place = {United States},
year = {2018},
month = {7}
}

Journal Article:
Free Publicly Available Full Text
Publisher's Version of Record

