OSTI.GOV — U.S. Department of Energy
Office of Scientific and Technical Information

Title: A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers

Abstract

Current deep learning models use highly optimized convolutional neural networks (CNNs) trained on large graphics processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers without intra-layer connections. Complex topologies have been proposed, but they are intractable to train on current systems. Building the topology of a deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to the input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable on a von Neumann architecture. We show that a quantum computer can find high-quality values of intra-layer connections and weights while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures, and it potentially enables the solution of very complicated problems unsolvable with current computing technologies.
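The abstract's central distinction is between conventional layered topologies (edges only between adjacent layers) and the "complex" topologies with intra-layer connections that the paper trains on a quantum annealer. A minimal sketch of that distinction, with illustrative names and data not taken from the paper:

```python
# Hypothetical sketch (not from the paper): detecting whether a network
# topology contains intra-layer connections. Unit ids and edges below are
# illustrative assumptions.

def has_intra_layer_edges(layers, edges):
    """Return True if any edge connects two units in the same layer.

    layers: list of lists of unit ids, one list per layer
    edges:  iterable of (u, v) unit-id pairs
    """
    layer_of = {u: i for i, layer in enumerate(layers) for u in layer}
    return any(layer_of[u] == layer_of[v] for u, v in edges)

# A conventional feed-forward topology: edges only between adjacent layers.
layers = [[0, 1], [2, 3], [4]]
feed_forward = [(0, 2), (0, 3), (1, 2), (1, 3), (2, 4), (3, 4)]

# A "complex" topology in the abstract's sense: the same inter-layer edges
# plus one intra-layer connection, (2, 3), within the hidden layer.
complex_edges = feed_forward + [(2, 3)]

print(has_intra_layer_edges(layers, feed_forward))   # False
print(has_intra_layer_edges(layers, complex_edges))  # True
```

Intra-layer edges break the strict layer-by-layer structure that standard GPU training exploits, which is why the paper turns to other architectures to train them.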

Authors:
 Potok, Thomas E. [1]; Schuman, Catherine D. [1]; Young, Steven R. [1]; Patton, Robert M. [1]; Spedalieri, Federico [2]; Liu, Jeremy [2]; Yao, Ke-Thia [2]; Rose, Garrett [3]; Chakma, Gangotree [3]
  1. ORNL
  2. University of Southern California, Information Sciences Institute
  3. University of Tennessee (UT)
Publication Date:
2016
Research Org.:
Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States). Oak Ridge Leadership Computing Facility (OLCF)
Sponsoring Org.:
USDOE Laboratory Directed Research and Development (LDRD) Program; USDOE Office of Science (SC)
OSTI Identifier:
1335350
DOE Contract Number:  
AC05-00OR22725
Resource Type:
Conference
Resource Relation:
Conference: 2nd Workshop on Machine Learning in HPC Environments (held in conjunction with SC16), Salt Lake City, UT, USA, November 14, 2016
Country of Publication:
United States
Language:
English

Citation Formats

Potok, Thomas E, Schuman, Catherine D, Young, Steven R, Patton, Robert M, Spedalieri, Federico, Liu, Jeremy, Yao, Ke-Thia, Rose, Garrett, and Chakma, Gangotree. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers. United States: N. p., 2016. Web.
Potok, Thomas E, Schuman, Catherine D, Young, Steven R, Patton, Robert M, Spedalieri, Federico, Liu, Jeremy, Yao, Ke-Thia, Rose, Garrett, & Chakma, Gangotree. A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers. United States.
Potok, Thomas E, Schuman, Catherine D, Young, Steven R, Patton, Robert M, Spedalieri, Federico, Liu, Jeremy, Yao, Ke-Thia, Rose, Garrett, and Chakma, Gangotree. 2016. "A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers". United States.
@article{osti_1335350,
title = {A Study of Complex Deep Learning Networks on High Performance, Neuromorphic, and Quantum Computers},
author = {Potok, Thomas E and Schuman, Catherine D and Young, Steven R and Patton, Robert M and Spedalieri, Federico and Liu, Jeremy and Yao, Ke-Thia and Rose, Garrett and Chakma, Gangotree},
abstractNote = {Current deep learning models use highly optimized convolutional neural networks (CNNs) trained on large graphics processing unit (GPU)-based computers with a fairly simple layered network topology, i.e., highly connected layers without intra-layer connections. Complex topologies have been proposed, but they are intractable to train on current systems. Building the topology of a deep learning network requires hand tuning, and implementing the network in hardware is expensive in both cost and power. In this paper, we evaluate deep learning models using three different computing architectures to address these problems: quantum computing to train complex topologies, high performance computing (HPC) to automatically determine network topology, and neuromorphic computing for a low-power hardware implementation. Due to the input size limitations of current quantum computers, we use the MNIST dataset for our evaluation. The results show the possibility of using the three architectures in tandem to explore complex deep learning networks that are untrainable on a von Neumann architecture. We show that a quantum computer can find high-quality values of intra-layer connections and weights while yielding a tractable time result as the complexity of the network increases; a high performance computer can find optimal layer-based topologies; and a neuromorphic computer can represent the complex topology and weights derived from the other architectures in low-power memristive hardware. This represents a new capability that is not feasible with current von Neumann architectures, and it potentially enables the solution of very complicated problems unsolvable with current computing technologies.},
place = {United States},
year = {2016},
month = {1}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
