OSTI.GOV — U.S. Department of Energy, Office of Scientific and Technical Information

Title: A Novel Pruning Method for Convolutional Neural Networks Based off Identifying Critical Filters

Abstract

Convolutional Neural Networks (CNNs) are one of the most extensively used tools in machine learning, but they are still not well understood, and in many cases they are over-parameterized, leading to slow inference and impeding their deployment on low-power devices. In the last few years, many methods for decreasing the number of parameters in a network by pruning its output channels have been suggested, but a very recent work has argued that random pruning of channels performs on par with state-of-the-art pruning methods. While random and other pruning methods may effectively reduce the number of parameters in a CNN, none of these methods can be used to gain any further understanding of the model that the CNN has built. In this work, we propose a novel method for pruning a network that, at the same time, can lead to a better understanding of what the individual filters of the network learn about the data. The proposed method aims to keep only the filters that are "important" for a class. We define a filter as important for a class if its removal has the highest negative impact on the accuracy for that class. We demonstrate that our method is better than random pruning on two networks used on the EMNIST and CIFAR10 datasets. By analyzing the important filters, we find that the important filters in the pruned networks learn features which are more general across classes. We demonstrate the importance and applicability of that observation in two transfer-learning tasks.
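The selection criterion described in the abstract can be sketched in a few lines: for each class, measure how much accuracy drops when each filter is removed (e.g., zeroed out), and keep the filter whose removal hurts that class most. This is a minimal illustrative sketch, not the authors' implementation; the function names and the toy accuracy numbers below are invented for illustration.

```python
# Illustrative sketch of the filter-importance criterion from the abstract:
# a filter is "important" for a class if its removal causes the largest
# drop in that class's accuracy. The accuracy tables here are hypothetical;
# in practice they would come from evaluating the network with each filter
# ablated in turn.

def important_filters(base_acc, per_filter_acc):
    """Pick, for each class, the filter whose removal hurts it most.

    base_acc:       {class: accuracy of the full network}
    per_filter_acc: {filter_id: {class: accuracy with that filter removed}}
    Returns the set of filters to keep; all others are candidates for pruning.
    """
    keep = set()
    for c in base_acc:
        # Accuracy drop on class c caused by removing each filter.
        drops = {f: base_acc[c] - accs[c] for f, accs in per_filter_acc.items()}
        keep.add(max(drops, key=drops.get))  # most critical filter for class c
    return keep

# Toy example: 3 filters, 2 classes (numbers are made up).
base = {"cat": 0.90, "dog": 0.88}
removed = {
    0: {"cat": 0.60, "dog": 0.87},  # removing filter 0 devastates "cat"
    1: {"cat": 0.89, "dog": 0.55},  # removing filter 1 devastates "dog"
    2: {"cat": 0.90, "dog": 0.88},  # filter 2 has no effect: prunable
}
print(sorted(important_filters(base, removed)))  # [0, 1]
```

Note that a filter kept for one class may matter little for another; the abstract's observation that the surviving filters learn features "more general across classes" is about what these per-class critical filters turn out to encode.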

Authors:
Dimovska, Mihaela [1]; Johnston, Travis [1]
  1. ORNL
Publication Date:
July 2019
Research Org.:
Oak Ridge National Lab. (ORNL), Oak Ridge, TN (United States)
Sponsoring Org.:
USDOE Office of Science (SC), Advanced Scientific Computing Research (ASCR) (SC-21)
OSTI Identifier:
1557493
DOE Contract Number:  
AC05-00OR22725
Resource Type:
Conference
Resource Relation:
Conference: Practice and Experience in Advanced Research Computing (PEARC 2019), Chicago, Illinois, United States of America, 28 Jul - 1 Aug 2019
Country of Publication:
United States
Language:
English

Citation Formats

Dimovska, Mihaela, and Johnston, Travis. A Novel Pruning Method for Convolutional Neural Networks Based off Identifying Critical Filters. United States: N. p., 2019. Web. doi:10.1145/3332186.3333057.
Dimovska, Mihaela, & Johnston, Travis. A Novel Pruning Method for Convolutional Neural Networks Based off Identifying Critical Filters. United States. doi:10.1145/3332186.3333057.
Dimovska, Mihaela, and Johnston, Travis. "A Novel Pruning Method for Convolutional Neural Networks Based off Identifying Critical Filters". United States. doi:10.1145/3332186.3333057. https://www.osti.gov/servlets/purl/1557493.
@article{osti_1557493,
title = {A Novel Pruning Method for Convolutional Neural Networks Based off Identifying Critical Filters},
author = {Dimovska, Mihaela and Johnston, Travis},
abstractNote = {Convolutional Neural Networks (CNNs) are one of the most extensively used tools in machine learning, but they are still not well understood and in many cases they are over-parameterized, leading to slow inference and impeding their deployment on low-power devices. In the last few years, many methods for decreasing the number of parameters in a network by pruning its output channels have been suggested, but a very recent work has argued that random pruning of channels performs on-par with state-of-the-art pruning methods. While random and other pruning methods might be effectively used for lowering the number of parameters in a CNN, none of these methods can be used to gain any further understanding of the model that the CNN has built. In this work, we propose a novel method for pruning a network, that at the same time can lead to a better understanding of what the individual filters of the network learn about the data. The method proposed aims to keep only the filters that are "important" for a class. We define a filter as important for a class if its removal has the highest negative impact on the accuracy for that class. We demonstrate that our method is better than random pruning on two networks used on the EMNIST and CIFAR10 datasets. By analyzing the important filters, we find that the important filters in the pruned networks learn features which are more general across classes. We demonstrate the importance and applicability of that observation in two transfer-learning tasks.},
doi = {10.1145/3332186.3333057},
place = {United States},
year = {2019},
month = {7}
}

Other availability
Please see Document Availability for additional information on obtaining the full-text document. Library patrons may search WorldCat to identify libraries that hold this conference proceeding.
