OSTI.GOV
U.S. Department of Energy, Office of Scientific and Technical Information

Title: Beyond Fine Tuning: Adding capacity to leverage few labels

Abstract

In this paper we present a technique for training neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules, combining pre-trained modules with untrained modules to learn the shift in distributions between data sets. The central impact of this modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able to surpass results from standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.
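The core idea admits a short sketch. The following is a minimal illustration of the modular approach described in the abstract, not the authors' implementation: a frozen pre-trained module and a small untrained module run in parallel on the same input, and their representations are concatenated, so new capacity is added rather than pre-trained features being overwritten. It assumes PyTorch and torchvision; the ResNet-18 backbone, the layer sizes, and the ModularTransfer class name are illustrative choices, not taken from the paper.

import torch
import torch.nn as nn
from torchvision import models

class ModularTransfer(nn.Module):
    def __init__(self, num_classes: int, new_dim: int = 256):
        super().__init__()
        # Pre-trained module: frozen, so its representations are preserved
        # instead of being replaced, as they would be under fine-tuning.
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.pretrained = nn.Sequential(*list(backbone.children())[:-1])
        for p in self.pretrained.parameters():
            p.requires_grad = False
        # Untrained module: new capacity meant to absorb the shift in
        # distribution between the source and target data sets.
        self.new_module = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, new_dim),
            nn.ReLU(),
        )
        # The classifier sees old and new representations side by side.
        self.head = nn.Linear(512 + new_dim, num_classes)

    def forward(self, x):
        old = self.pretrained(x).flatten(1)   # frozen pre-trained features
        new = self.new_module(x)              # freshly learned features
        return self.head(torch.cat([old, new], dim=1))

model = ModularTransfer(num_classes=10)
# Only the new module and the head receive gradients, which keeps the
# number of trainable parameters small when labels are scarce.
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-3
)

Concatenation is one reasonable way to combine module outputs here; summation or gating would also fit the stated goal of adding, rather than replacing, representations.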

Authors:
Hodas, Nathan O.; Shaffer, Kyle J.; Yankov, Artem; Corley, Courtney D.; Anderson, Aryk L.
Publication Date:
December 9, 2017
Research Org.:
Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
Sponsoring Org.:
USDOE
OSTI Identifier:
1415707
Report Number(s):
PNNL-SA-122155
453040300
DOE Contract Number:  
AC05-76RL01830
Resource Type:
Conference
Resource Relation:
Conference: Learning with Limited Labeled Data (LLD Workshop 2017), December 9, 2017, Long Beach, California
Country of Publication:
United States
Language:
English
Subject:
deep learning; machine learning; transfer learning; data science

Citation Formats

Hodas, Nathan O., Shaffer, Kyle J., Yankov, Artem, Corley, Courtney D., and Anderson, Aryk L. Beyond Fine Tuning: Adding capacity to leverage few labels. United States: N. p., 2017. Web.
Hodas, Nathan O., Shaffer, Kyle J., Yankov, Artem, Corley, Courtney D., & Anderson, Aryk L. Beyond Fine Tuning: Adding capacity to leverage few labels. United States.
Hodas, Nathan O., Shaffer, Kyle J., Yankov, Artem, Corley, Courtney D., and Anderson, Aryk L. 2017. "Beyond Fine Tuning: Adding capacity to leverage few labels". United States.
@inproceedings{osti_1415707,
  title = {Beyond Fine Tuning: Adding capacity to leverage few labels},
  author = {Hodas, Nathan O. and Shaffer, Kyle J. and Yankov, Artem and Corley, Courtney D. and Anderson, Aryk L.},
  abstractNote = {In this paper we present a technique for training neural network models on small amounts of data. Current methods for training neural networks on small amounts of rich data typically rely on strategies such as fine-tuning a pre-trained neural network or the use of domain-specific hand-engineered features. Here we take the approach of treating network layers, or entire networks, as modules, combining pre-trained modules with untrained modules to learn the shift in distributions between data sets. The central impact of this modular approach comes from adding new representations to a network, as opposed to replacing representations via fine-tuning. Using this technique, we are able to surpass results from standard fine-tuning transfer learning approaches, and we are also able to significantly increase performance over such approaches when using smaller amounts of data.},
  booktitle = {Learning with Limited Labeled Data (LLD Workshop 2017)},
  place = {United States},
  year = {2017},
  month = {dec}
}

Other Availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
