Compression Using Feature Score and Refinement for Deep Neural Networks
- Resource Type
- Conference
- Authors
- Zhu, Fenghua; Hou, Jiachen; Wei, Yue; Ye, Peijun; Xiong, Gang; Lv, Yisheng
- Source
- 2023 IEEE 26th International Conference on Intelligent Transportation Systems (ITSC), pp. 5300-5305, Sep. 2023
- Subject
- Communication, Networking and Broadcast Technologies; Components, Circuits, Devices and Systems; Computing and Processing; Engineering Profession; Power, Energy and Industry Applications; Robotics and Control Systems; Signal Processing and Analysis; Transportation; Estimation; Feature extraction; Information filters; Convolutional neural networks; Intelligent transportation systems
- ISSN
- 2153-0017
Accelerating convolutional neural networks (CNNs) has recently received ever-increasing research attention, and filter pruning is one of the most effective ways to accelerate and compress them. Most existing approaches identify redundant filters from the network parameters, either through channel sparsity constraints or through the magnitude of the weights, but ignore the feature extraction capability of the filters themselves. The principle behind our pruning is that filters with low feature extraction ability carry less information. We therefore propose the concept of a feature score to evaluate the feature extraction ability of each filter, use the estimated feature scores to rank filter importance globally, and then prune the network by removing the lowest-ranked filters. Extensive experiments on CIFAR-10 and CIFAR-100 demonstrate the effectiveness of our method, which significantly outperforms state-of-the-art methods.
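The global ranking step described above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the feature-score estimator itself is the paper's contribution and is not reproduced here, so the sketch assumes the per-filter scores are already computed and only shows how a global pruning ratio translates into per-layer keep masks.

```python
import numpy as np

def global_filter_pruning(scores_per_layer, prune_ratio):
    """Rank filters across all layers by their feature scores and
    return a boolean keep-mask per layer.

    scores_per_layer: list of 1-D arrays, one per conv layer; entry j
        is the (assumed precomputed) feature score of filter j, where
        a higher score means a stronger feature extraction ability.
    prune_ratio: fraction of all filters to remove globally.
    """
    all_scores = np.concatenate(scores_per_layer)
    n_prune = int(len(all_scores) * prune_ratio)
    if n_prune == 0:
        # Nothing to prune: keep every filter.
        return [np.ones_like(s, dtype=bool) for s in scores_per_layer]
    # Global threshold: the score of the weakest surviving filter.
    # np.partition places the n_prune-th smallest score at that index.
    threshold = np.partition(all_scores, n_prune)[n_prune]
    return [s >= threshold for s in scores_per_layer]

# Toy example: 5 filters across 2 layers, prune the 2 weakest globally.
masks = global_filter_pruning(
    [np.array([0.9, 0.1, 0.5]), np.array([0.2, 0.8])], prune_ratio=0.4
)
```

Because the threshold is global rather than per-layer, layers whose filters score uniformly low are pruned more aggressively than layers with informative filters, which matches the global importance ranking the abstract describes.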