U.S. Department of Energy
Office of Scientific and Technical Information

A Study of Complex Deep Learning Networks on High-Performance, Neuromorphic, and Quantum Computers

Journal Article · ACM Journal on Emerging Technologies in Computing Systems
DOI: https://doi.org/10.1145/3178454 · OSTI ID: 1474723
  1. Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Computational Data Analytics Group
  2. Univ. of Southern California, Marina del Rey, CA (United States). Information Sciences Inst.
  3. Univ. of Tennessee, Knoxville, TN (United States). Dept. of Electrical Engineering and Computer Science
Current deep learning approaches have been very successful using convolutional neural networks trained on large graphics-processing-unit (GPU)-based computers. Three limitations of this approach are that (1) they are based on a simple layered network topology, i.e., highly connected layers without intra-layer connections; (2) the networks are manually configured to achieve optimal results; and (3) the implementation of the network model is expensive in both cost and power. In this paper, we evaluate deep learning models on three different computing architectures to address these problems: quantum computing to train complex topologies, high-performance computing to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. We use the MNIST dataset for our experiments because of the input-size limitations of current quantum computers. Our results show the feasibility of using the three architectures in tandem to address the above deep learning limitations. Finally, we show that a quantum computer can find high-quality values of intra-layer connection weights in a tractable time as the complexity of the network increases, a high-performance computer can find optimal layer-based topologies, and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware.
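
The sketch below is an illustrative aside, not the authors' code: it writes out the energy of a Boltzmann machine whose hidden layer also carries intra-layer couplings, the kind of complex, non-layered topology the abstract says is trained on a quantum annealer. The NumPy implementation and the names W (visible-hidden weights), L (hidden-hidden weights), a and b (biases) are assumptions made here for illustration; the paper samples such models on quantum hardware rather than evaluating them classically.

# Minimal sketch (illustrative assumption, not from the paper): energy of a
# Boltzmann machine whose hidden layer has intra-layer connections, i.e. the
# non-layered topology described in the abstract.
import numpy as np

def energy(v, h, W, L, a, b):
    """E(v, h) = -(v.W.h + 0.5*h.L.h + a.v + b.h), with L symmetric and zero-diagonal."""
    inter = v @ W @ h        # standard RBM visible-hidden term
    intra = 0.5 * h @ L @ h  # intra-layer hidden-hidden term (absent in a plain RBM)
    bias = a @ v + b @ h
    return -(inter + intra + bias)

# Toy example with random binary states and small random couplings.
rng = np.random.default_rng(0)
n_v, n_h = 6, 4
v = rng.integers(0, 2, size=n_v).astype(float)
h = rng.integers(0, 2, size=n_h).astype(float)
W = 0.1 * rng.standard_normal((n_v, n_h))
L = np.triu(0.1 * rng.standard_normal((n_h, n_h)), k=1)
L = L + L.T                  # symmetric intra-layer couplings, zero diagonal
a = 0.05 * rng.standard_normal(n_v)
b = 0.05 * rng.standard_normal(n_h)
print(energy(v, h, W, L, a, b))

Finding low-energy configurations of such a model is the step a quantum annealer is meant to accelerate; classically, the intra-layer term makes sampling increasingly expensive as connectivity grows, which is the tractability issue the abstract highlights.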
Research Organization:
Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States); Oak Ridge Leadership Computing Facility (OLCF)
Sponsoring Organization:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
Grant/Contract Number:
AC05-00OR22725
OSTI ID:
1474723
Journal Information:
ACM Journal on Emerging Technologies in Computing Systems, Vol. 14, Issue 2; ISSN 1550-4832
Publisher:
Association for Computing Machinery
Country of Publication:
United States
Language:
English

References (39)

Simulating physics with computers journal June 1982
ImageNet Large Scale Visual Recognition Challenge journal April 2015
Darwin: a neuromorphic hardware co-processor based on Spiking Neural Networks journal December 2015
Error-backpropagation in temporally encoded networks of spiking neurons journal October 2002
DANNA: A neuromorphic software ecosystem journal July 2016
Spatiotemporal Classification Using Neuroscience-Inspired Dynamic Architectures journal January 2014
Darwin: A neuromorphic hardware co-processor based on spiking neural networks journal June 2017
A Functional Hybrid Memristor Crossbar-Array/CMOS System for Data Storage and Neuromorphic Applications journal December 2011
Nanoscale Memristor Device as Synapse in Neuromorphic Systems journal April 2010
Competitive Hebbian learning through spike-timing-dependent synaptic plasticity journal September 2000
Quantum annealing with manufactured spins journal May 2011
Training and operation of an integrated neuromorphic network based on metal-oxide memristors journal May 2015
High switching endurance in TaOx memristive devices journal December 2010
Lognormal switching times for titanium dioxide bipolar memristors: origin and resolution journal January 2011
Estimation of effective temperatures in quantum annealers for sampling applications: A case study with possible applications in deep learning journal August 2016
Gradient-based learning applied to document recognition journal January 1998
FaceNet: A unified embedding for face recognition and clustering conference June 2015
What is the best multi-stage architecture for object recognition? conference September 2009
Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification conference December 2015
Neuromorphic architectures for spiking deep neural networks conference December 2015
Extending SpikeProp conference January 2004
Building block of a programmable neuromorphic substrate: A digital neurosynaptic core conference June 2012
Cognitive computing building block: A versatile and efficient digital neuron model for neurosynaptic cores conference August 2013
Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing conference July 2015
A Memristor Crossbar Based Computing Engine Optimized for High Speed and Accuracy conference July 2016
How We Found The Missing Memristor journal December 2008
Harmonica: A Framework of Heterogeneous Computing Systems With Memristor-Based Neuromorphic Computing Accelerators journal May 2016
Reducing the Dimensionality of Data with Neural Networks journal July 2006
Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer journal October 1997
Polynomial-Time Algorithms for Prime Factorization and Discrete Logarithms on a Quantum Computer journal January 1999
Optimizing deep learning hyper-parameters through an evolutionary algorithm conference January 2015
Circuit Techniques for Online Learning of Memristive Synapses in CMOS-Memristor Neuromorphic Systems conference January 2017
Long Short-Term Memory journal November 1997
A Fast Learning Algorithm for Deep Belief Nets journal July 2006
A Learning Algorithm for Boltzmann Machines* journal January 1985
ImageNet Large Scale Visual Recognition Challenge text January 2015
Training and Operation of an Integrated Neuromorphic Network Based on Metal-Oxide Memristors text January 2014
Fast-classifying, high-accuracy spiking deep networks through weight and threshold balancing text January 2015
Neuromorphic Architectures for Spiking Deep Neural Networks text January 2015

Cited By (4)

Performance analysis and optimization for scalable deployment of deep learning models for country‐scale settlement mapping on Titan supercomputer journal May 2019
Quantum machine learning with D‐wave quantum computer journal June 2019
Reliability of analog resistive switching memory for neuromorphic computing journal March 2020
Intelligent video surveillance: a review through deep learning techniques for crowd analysis journal June 2019

Similar Records

A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
Conference · December 2015 · OSTI ID:1335350

A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers
Conference · November 2016 · OSTI ID:1567432

Adiabatic Quantum Computation Applied to Deep Learning Networks
Journal Article · May 2018 · Entropy · OSTI ID:1468121