
Evolving Energy Efficient Convolutional Neural Networks

Conference
As deep neural networks have been deployed in a growing range of applications over the past half decade and are finding their way into an ever increasing number of operational systems, their energy consumption becomes a concern, whether they run in the datacenter or on edge devices. Hyperparameter optimization and automated network design for deep learning is a quickly growing field, but much of the focus has remained on optimizing only for the performance of the machine learning task. In this work, we demonstrate that the best performing networks created through this automated network design process have radically different computational characteristics (e.g., energy usage, model size, inference time), presenting the opportunity to use this optimization process to make deep learning networks more energy efficient and deployable to smaller devices. Optimizing for these computational characteristics is critical as the number of applications of deep learning continues to expand.
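
The abstract describes evolving network designs with an eye to computational cost as well as task performance. As a rough illustration only, the sketch below shows one way such a multi-objective evaluation could be wired into an evolutionary hyperparameter search; the genome fields, cost proxies, accuracy stub, and weighting terms are assumptions made for this sketch, not details taken from the paper.

```python
# Minimal sketch (not the paper's implementation): evolutionary search over CNN
# hyperparameters whose fitness trades task performance against computational
# cost proxies (parameter count and estimated multiply-accumulate operations).

import random
from dataclasses import dataclass


@dataclass
class Genome:
    num_layers: int   # number of conv layers
    filters: int      # filters per layer
    kernel_size: int  # square kernel width/height


def conv_cost(g: Genome, in_channels: int = 3, spatial: int = 32):
    """Rough proxies for model size and compute of a plain conv stack."""
    params, macs, c_in = 0, 0, in_channels
    for _ in range(g.num_layers):
        layer_params = g.filters * c_in * g.kernel_size ** 2
        params += layer_params
        macs += layer_params * spatial * spatial  # one MAC per weight per output pixel
        c_in = g.filters
    return params, macs


def evaluate_accuracy(g: Genome) -> float:
    """Stand-in for training and validating the candidate network on the real task."""
    return random.uniform(0.6, 0.95)


def fitness(g: Genome, lambda_macs: float = 1e-9, lambda_params: float = 1e-7) -> float:
    """Composite objective: accuracy penalized by energy/size proxies."""
    params, macs = conv_cost(g)
    return evaluate_accuracy(g) - lambda_macs * macs - lambda_params * params


def mutate(g: Genome) -> Genome:
    return Genome(
        num_layers=max(1, g.num_layers + random.choice([-1, 0, 1])),
        filters=max(8, g.filters + random.choice([-8, 0, 8])),
        kernel_size=random.choice([3, 5, 7]),
    )


def evolve(generations: int = 20, pop_size: int = 16) -> Genome:
    pop = [Genome(random.randint(2, 6), random.choice([16, 32, 64]),
                  random.choice([3, 5, 7])) for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        parents = scored[: pop_size // 4]  # truncation selection
        pop = parents + [mutate(random.choice(parents))
                         for _ in range(pop_size - len(parents))]
    return max(pop, key=fitness)


if __name__ == "__main__":
    best = evolve()
    print(best, fitness(best))
```

In a real search the accuracy stub would be replaced by actual training and validation, and the energy proxy could be replaced by measured power or latency on the target hardware.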
Research Organization: Oak Ridge National Laboratory (ORNL), Oak Ridge, TN (United States)
Sponsoring Organization: USDOE; USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
DOE Contract Number: AC05-00OR22725
OSTI ID: 1606807
Country of Publication: United States
Language: English
