OSTI.GOV: U.S. Department of Energy, Office of Scientific and Technical Information

Title: Doing the impossible: Why neural networks can be trained at all

Abstract

As deep neural networks grow in size, from thousands to millions to billions of weights, their performance becomes limited by our ability to train them accurately. A naive question commonly arises: if a system has billions of degrees of freedom, don't we also need billions of samples to train it? The success of deep learning indicates, of course, that reliable models can be learned with reasonable amounts of data. Similar questions arise in protein folding, spin glasses, and biological neural networks. With effectively infinite potential folding/spin/wiring configurations, how does the system find the precise arrangement that leads to useful and robust results? Simply sampling possible configurations until an optimal one is reached is not viable, even if one waited for the age of the universe. Instead, there appears to be a mechanism in these phenomena that forces them toward configurations that live on a low-dimensional manifold, avoiding the curse of dimensionality. In the present work we use the mutual information between successive layers of a deep neural network to elucidate this mechanism and suggest possible ways of exploiting it to accelerate training.
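
To make the central quantity concrete: the mutual information between two layers, I(X; Y) = H(X) + H(Y) - H(X, Y), measures how much knowing one layer's activations reduces uncertainty about the other's. The sketch below is a minimal, hypothetical illustration, not the estimator used in the paper; it computes a histogram-based estimate of I between a toy pair of successive activations, and the function name, bin count, and toy two-layer setup are all assumptions made for this example.

import numpy as np

def mutual_information(x, y, bins=32):
    """Histogram-based estimate of I(X; Y) = H(X) + H(Y) - H(X, Y), in nats.

    x, y: 1-D arrays of scalar activations (e.g., one unit from each of
    two successive layers, or activations projected to one dimension).
    This is an illustrative estimator, not the one from the paper.
    """
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = joint / joint.sum()          # empirical joint distribution p(x, y)
    px = pxy.sum(axis=1)               # marginal p(x)
    py = pxy.sum(axis=0)               # marginal p(y)
    nz = pxy > 0                       # avoid log(0) on empty bins
    # I(X; Y) = sum over bins of p(x, y) * log[ p(x, y) / (p(x) p(y)) ]
    return np.sum(pxy[nz] * np.log(pxy[nz] / np.outer(px, py)[nz]))

# Toy "successive layers": layer2 is a noisy nonlinear function of layer1,
# so the two layers share information; the noise term reduces it.
rng = np.random.default_rng(0)
layer1 = rng.standard_normal(10_000)
layer2 = np.tanh(layer1) + 0.1 * rng.standard_normal(10_000)
print(f"I(layer1; layer2) ≈ {mutual_information(layer1, layer2):.3f} nats")

Histogram estimators like this are simple but biased for small samples; more careful estimators exist. Even so, the sketch shows how layer-to-layer dependence can be quantified, which is the kind of measurement the abstract's argument rests on.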

Authors:
Hodas, Nathan O. [1]; Stinis, Panagiotis [1]
  1. Battelle (Pacific Northwest National Laboratory)
Publication Date:
July 2018
Research Org.:
Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1525472
Report Number(s):
PNNL-SA-127608
DOE Contract Number:  
AC05-76RL01830
Resource Type:
Journal Article
Journal Name:
Frontiers in Psychology
Additional Journal Information:
Journal Volume: 9; Journal Issue: JUN
Country of Publication:
United States
Language:
English
Subject:
deep learning, training, curse of dimensionality, mutual information, correlation

Citation Formats

Hodas, Nathan O., and Stinis, Panagiotis. Doing the impossible: Why neural networks can be trained at all. United States: N. p., 2018. Web. doi:10.3389/fpsyg.2018.01185.
Hodas, Nathan O., & Stinis, Panagiotis. Doing the impossible: Why neural networks can be trained at all. United States. doi:10.3389/fpsyg.2018.01185.
Hodas, Nathan O., and Stinis, Panagiotis. 2018. "Doing the impossible: Why neural networks can be trained at all". United States. doi:10.3389/fpsyg.2018.01185.
@article{osti_1525472,
title = {Doing the impossible: Why neural networks can be trained at all},
author = {Hodas, Nathan O. and Stinis, Panagiotis},
abstractNote = {As deep neural networks grow in size, from thousands to millions to billions of weights, their performance becomes limited by our ability to train them accurately. A naive question commonly arises: if a system has billions of degrees of freedom, don't we also need billions of samples to train it? The success of deep learning indicates, of course, that reliable models can be learned with reasonable amounts of data. Similar questions arise in protein folding, spin glasses, and biological neural networks. With effectively infinite potential folding/spin/wiring configurations, how does the system find the precise arrangement that leads to useful and robust results? Simply sampling possible configurations until an optimal one is reached is not viable, even if one waited for the age of the universe. Instead, there appears to be a mechanism in these phenomena that forces them toward configurations that live on a low-dimensional manifold, avoiding the curse of dimensionality. In the present work we use the mutual information between successive layers of a deep neural network to elucidate this mechanism and suggest possible ways of exploiting it to accelerate training.},
doi = {10.3389/fpsyg.2018.01185},
journal = {Frontiers in Psychology},
number = {JUN},
volume = 9,
place = {United States},
year = {2018},
month = {7}
}