Sparse Convolutional Neural Networks
Abstract
Deep neural networks have achieved remarkable performance in both image classification and object detection problems, at the cost of a large number of parameters and high computational complexity. In this work, we show how to reduce the redundancy in these parameters using a sparse decomposition. Maximum sparsity is obtained by exploiting both inter-channel and intra-channel redundancy, with a fine-tuning step that minimizes the recognition loss caused by maximizing sparsity. This procedure zeros out more than 90% of the parameters, with an accuracy drop of less than 1% on the ILSVRC2012 dataset. We also propose an efficient CPU sparse matrix multiplication algorithm for Sparse Convolutional Neural Network (SCNN) models. Our CPU implementation is substantially more efficient than off-the-shelf sparse matrix libraries and achieves a significant speedup over the original dense network. In addition, we apply the SCNN model to the object detection problem, in conjunction with a cascade model and sparse fully connected layers, to achieve significant speedups.
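To make the idea concrete, below is a minimal illustrative sketch in Python (NumPy/SciPy) of the general pattern the abstract describes: a convolutional layer, flattened to a weight matrix via im2col, has roughly 90% of its weights zeroed and is then applied as a sparse-dense matrix product. The magnitude-pruning step and all names here are assumptions for illustration only; the paper's actual method uses a sparse decomposition with fine-tuning, and its CPU kernel is a custom routine, not SciPy's.

import numpy as np
from scipy.sparse import csr_matrix

rng = np.random.default_rng(0)

# Dense conv weights reshaped to 2-D: (out_channels, in_channels * k * k).
out_ch, in_ch, k = 64, 32, 3
W = rng.standard_normal((out_ch, in_ch * k * k)).astype(np.float32)

# Zero out ~90% of the parameters by magnitude (a simple stand-in for the
# paper's sparse decomposition + fine-tuning), then store in CSR format.
threshold = np.quantile(np.abs(W), 0.9)
W_sparse = csr_matrix(np.where(np.abs(W) >= threshold, W, 0.0))

# Input patches from im2col: (in_channels * k * k, num_patches).
X = rng.standard_normal((in_ch * k * k, 10_000)).astype(np.float32)

Y = W_sparse @ X   # sparse-dense product replaces the dense matmul
Y_ref = W @ X      # dense reference for comparison

print("fraction of weights kept:", W_sparse.nnz / W.size)  # ~0.1

With ~10% of the weights remaining, the sparse product touches an order of magnitude fewer multiply-accumulates than the dense one; the paper's contribution is a CPU kernel that turns this arithmetic saving into actual wall-clock speedup, which generic sparse libraries often fail to do at these sparsity levels.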
Publication Date
10-14-2015
Publication Title
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume
07-12-June-2015
Number of Pages
806-814
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1109/CVPR.2015.7298681
Copyright Status
Unknown
Scopus ID
84959241183 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/84959241183
STARS Citation
Liu, Baoyuan; Wang, Min; Foroosh, Hassan; Tappen, Marshall; and Penksy, Marianna, "Sparse Convolutional Neural Networks" (2015). Scopus Export 2015-2019. 2061.
https://stars.library.ucf.edu/scopus2015/2061