DOE PAGES
U.S. Department of Energy, Office of Scientific and Technical Information

Title: Autoencoder node saliency: Selecting relevant latent representations

Abstract

The autoencoder is an artificial neural network that performs nonlinear dimension reduction and learns hidden representations of unlabeled data. With a linear transfer function it is similar to principal component analysis (PCA). While both methods use weight vectors for linear transformations, the autoencoder does not provide an indicator, analogous to the eigenvalues paired with eigenvectors in PCA, of how relevant each latent node is. In this work, we propose a novel autoencoder node saliency method that examines whether the features constructed by autoencoders exhibit properties related to known class labels. The supervised node saliency ranks the nodes by their capability of performing a learning task and is coupled with the normalized entropy difference (NED). We establish a property of NED values that verifies classifying behaviors among the top-ranked nodes. Finally, by applying our methods to real datasets, we demonstrate their ability to identify the best-performing nodes and to explain the tasks learned by autoencoders.
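As a rough illustration of the kind of quantity the abstract describes (not the paper's exact formulation), a normalized entropy difference can be sketched as one minus the entropy of a node's binned activation values relative to the maximum possible entropy, so that activations concentrated in few bins score high:

```python
import numpy as np

def normalized_entropy_difference(activations, n_bins=10):
    """Illustrative NED: 1 - H / H_max, where H is the entropy of the
    histogram of a node's activation values over n_bins equal-width bins
    and H_max = log2(n_bins). Values near 1 indicate activations
    concentrated in few bins; values near 0 indicate a flat histogram."""
    counts, _ = np.histogram(activations, bins=n_bins)
    p = counts / counts.sum()
    p = p[p > 0]  # drop empty bins to avoid log(0)
    entropy = -(p * np.log2(p)).sum()
    return 1.0 - entropy / np.log2(n_bins)

rng = np.random.default_rng(0)
print(normalized_entropy_difference(rng.uniform(0, 1, 10_000)))  # close to 0
print(normalized_entropy_difference(np.full(10_000, 0.5)))       # exactly 1.0
```

In the paper's supervised setting, a score of this family is combined with class labels to rank latent nodes by how well their activations separate the classes; the function above only conveys the entropy-based intuition.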

Authors:
Fan, Ya Ju [1]
  1. Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Publication Date:
2018-12-17
Research Org.:
Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
Sponsoring Org.:
USDOE National Nuclear Security Administration (NNSA)
OSTI Identifier:
1491664
Alternate Identifier(s):
OSTI ID: 1636754
Report Number(s):
LLNL-JRNL-741590
Journal ID: ISSN 0031-3203; 896098
Grant/Contract Number:  
AC52-07NA27344
Resource Type:
Accepted Manuscript
Journal Name:
Pattern Recognition
Additional Journal Information:
Journal Volume: 88; Journal Issue: C; Journal ID: ISSN 0031-3203
Publisher:
Elsevier
Country of Publication:
United States
Language:
English
Subject:
97 MATHEMATICS AND COMPUTING; Autoencoder; Latent representations; Unsupervised learning; Neural networks; Node selection; Model interpretation

Citation Formats

Fan, Ya Ju. Autoencoder node saliency: Selecting relevant latent representations. United States: N. p., 2018. Web. doi:10.1016/j.patcog.2018.12.015.
Fan, Ya Ju. Autoencoder node saliency: Selecting relevant latent representations. United States. https://doi.org/10.1016/j.patcog.2018.12.015
Fan, Ya Ju. 2018. "Autoencoder node saliency: Selecting relevant latent representations". United States. https://doi.org/10.1016/j.patcog.2018.12.015. https://www.osti.gov/servlets/purl/1491664.
@article{osti_1491664,
title = {Autoencoder node saliency: Selecting relevant latent representations},
author = {Fan, Ya Ju},
abstractNote = {The autoencoder is an artificial neural network that performs nonlinear dimension reduction and learns hidden representations of unlabeled data. With a linear transfer function it is similar to the principal component analysis (PCA). While both methods use weight vectors for linear transformations, the autoencoder does not come with any indication similar to the eigenvalues in PCA that are paired with eigenvectors. In this work, we propose a novel autoencoder node saliency method that examines whether the features constructed by autoencoders exhibit properties related to known class labels. The supervised node saliency ranks the nodes based on their capability of performing a learning task. It is coupled with the normalized entropy difference (NED). We establish a property for NED values to verify classifying behaviors among the top ranked nodes. Lastly, by applying our methods to real datasets, we demonstrate their ability to provide indications on the performing nodes and explain the learned tasks in autoencoders.},
doi = {10.1016/j.patcog.2018.12.015},
journal = {Pattern Recognition},
number = {C},
volume = {88},
place = {United States},
year = {2018},
month = {dec}
}

Citation Metrics:
Cited by: 18 works
Citation information provided by
Web of Science

Works referenced in this record:

Unsupervised feature extraction with autoencoder trees
journal, October 2017


Reducing the Dimensionality of Data with Neural Networks
journal, July 2006


A Connection Between Score Matching and Denoising Autoencoders
journal, July 2011


Neural networks and principal component analysis: Learning from examples without local minima
journal, January 1989


Consensus self-organized models for fault detection (COSMO)
journal, August 2011

  • Byttner, S.; Rögnvaldsson, T.; Svensson, M.
  • Engineering Applications of Artificial Intelligence, Vol. 24, Issue 5
  • DOI: 10.1016/j.engappai.2011.03.002

Self-monitoring for maintenance of vehicle fleets
journal, August 2017

  • Rögnvaldsson, Thorsteinn; Nowaczyk, Sławomir; Byttner, Stefan
  • Data Mining and Knowledge Discovery, Vol. 32, Issue 2
  • DOI: 10.1007/s10618-017-0538-6

A survey on feature selection methods
journal, January 2014


A Survey on Feature Selection
journal, January 2016


An efficient semi-supervised representatives feature selection algorithm based on information theory
journal, January 2017


Connectionist learning procedures
journal, September 1989


Gradient-based learning applied to document recognition
journal, January 1998

  • Lecun, Y.; Bottou, L.; Bengio, Y.
  • Proceedings of the IEEE, Vol. 86, Issue 11
  • DOI: 10.1109/5.726791

A trainable feature extractor for handwritten digit recognition
journal, June 2007


A novel hybrid CNN–SVM classifier for recognizing handwritten digits
journal, April 2012


The genomic and transcriptomic architecture of 2,000 breast tumours reveals novel subgroups
journal, April 2012

  • Curtis, Christina; Shah, Sohrab P.; Chin, Suet-Feung
  • Nature, Vol. 486, Issue 7403
  • DOI: 10.1038/nature10983

Extremely randomized trees
journal, March 2006


Identification of bursts in spike trains
journal, January 1992


Works referencing / citing this record:

Assessment of Autoencoder Architectures for Data Representation
book, October 2019

  • Pawar, Karishma; Attar, Vahida Z.; Pedrycz, Witold
  • Deep Learning: Concepts and Architectures, p. 101-132
  • DOI: 10.1007/978-3-030-31756-0_4

A Robust Deep Learning Approach for Spatiotemporal Estimation of Satellite AOD and PM2.5
journal, January 2020


Discriminative stacked autoencoder for feature representation and classification
journal, January 2020