Factorized Convolutional Neural Networks

Abstract

In this paper, we propose to factorize the convolutional layer to reduce its computation. The 3D convolution operation in a convolutional layer can be viewed as performing spatial convolution in each channel and linear projection across channels simultaneously. By unravelling them and arranging the spatial convolutions sequentially, the proposed layer is composed of a low-cost single intra-channel convolution and a linear channel projection. When combined with a residual connection, it can effectively preserve the spatial information and maintain accuracy with significantly less computation. We also introduce topological subdivisioning to reduce the connections between the input and output channels. Our experiments demonstrate that the proposed layers outperform standard convolutional layers in terms of the performance/complexity ratio. Our models achieve performance similar to VGG-16, ResNet-34, ResNet-50, and ResNet-101 while requiring 42x, 7.32x, 4.38x, and 5.85x less computation, respectively.
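The factorization described above can be sketched as a single intra-channel (depthwise) spatial convolution followed by a 1x1 linear channel projection, wrapped with a residual connection. The snippet below is a minimal illustrative sketch in PyTorch, not the authors' implementation; the class name, layer ordering, and use of batch normalization are assumptions. Under the usual accounting, a standard convolution costs roughly k²·C_in·C_out multiply-adds per output position, while this factorized form costs about k²·C + C·C_out, which is where the computational savings come from.

```python
import torch
import torch.nn as nn

class FactorizedConvBlock(nn.Module):
    """Illustrative sketch of a factorized convolutional layer:
    an intra-channel spatial convolution followed by a 1x1 linear
    channel projection, with a residual connection."""

    def __init__(self, channels, kernel_size=3):
        super().__init__()
        # Spatial convolution applied independently within each channel
        # (groups=channels makes it an intra-channel/depthwise convolution).
        self.spatial = nn.Conv2d(channels, channels, kernel_size,
                                 padding=kernel_size // 2,
                                 groups=channels, bias=False)
        # 1x1 convolution acting as a linear projection across channels.
        self.project = nn.Conv2d(channels, channels, kernel_size=1, bias=False)
        self.bn = nn.BatchNorm2d(channels)   # assumption: BN + ReLU, as in typical CNN blocks
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        out = self.project(self.spatial(x))
        # Residual connection helps preserve spatial information.
        return self.act(self.bn(out) + x)

# Usage: process a batch of 64-channel feature maps.
x = torch.randn(8, 64, 32, 32)
block = FactorizedConvBlock(64)
y = block(x)
print(y.shape)  # torch.Size([8, 64, 32, 32])
```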

Publication Date

7-1-2017

Publication Title

Proceedings - 2017 IEEE International Conference on Computer Vision Workshops, ICCVW 2017

Volume

2018-January

Number of Pages

545-553

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/ICCVW.2017.71

Scopus ID

85046291788 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85046291788

