Deep convolutional neural networks (CNNs) have achieved remarkable performance on visual recognition problems and have been widely adopted in real-world applications, such as Apple's Face ID security system, autonomous driving, and automatic image tagging in online album services. One major concern in the development of CNNs is that their computational complexity grows along with their accuracy. There is therefore a continuous demand to find the right balance between accuracy and complexity in the design of CNN models. This dissertation focuses on designing novel structures that enhance both the performance and the efficiency of CNNs. Our efforts fall into two categories. The first is to explore the redundancy in standard convolutional neural networks so that comparable learning capability can be achieved at lower computational complexity. The second is to improve network performance with distinctive structures that learn better feature representations while adding negligible computational complexity themselves.

To explore the redundancy in CNNs and reduce their computational complexity, we propose three designs: the Single Intra-Channel Convolutional (SIC) layer, topological sub-divisioning, and the spatial "bottleneck" structure. The SIC layer reduces redundancy by disentangling spatial 2D convolution from linear channel projection. Topological sub-divisioning reduces the density of connections between input and output channels. The spatial "bottleneck" structure exploits the correlation between adjacent pixels in the spatial dimension to reduce the complexity of linear channel projection without reducing the spatial resolution of the subsequent layer.
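As described above, the SIC layer disentangles spatial 2D convolution from linear channel projection. A minimal NumPy sketch of that factorization, under the assumption that it resembles a per-channel (depthwise) 2D convolution followed by a 1x1 channel projection — function names and shapes here are illustrative, not taken from the dissertation:

```python
import numpy as np

def sic_layer(x, depthwise_k, proj_w):
    """Sketch of a spatial-conv / channel-projection factorization.
    x:           (C, H, W) input feature map
    depthwise_k: (C, k, k) one 2D kernel per channel (intra-channel conv)
    proj_w:      (C_out, C) 1x1 linear channel projection
    """
    C, H, W = x.shape
    k = depthwise_k.shape[-1]
    pad = k // 2
    xp = np.pad(x, ((0, 0), (pad, pad), (pad, pad)))
    # Spatial 2D convolution applied independently within each channel:
    # no cross-channel mixing happens in this step.
    spatial = np.zeros_like(x)
    for c in range(C):
        for i in range(H):
            for j in range(W):
                spatial[c, i, j] = np.sum(xp[c, i:i+k, j:j+k] * depthwise_k[c])
    # Channel mixing happens only here, as a 1x1 linear projection.
    return np.einsum('oc,chw->ohw', proj_w, spatial)
```

The efficiency argument is visible in the parameter counts: a standard 3x3 convolution from 4 to 6 channels needs 6 x 4 x 9 = 216 weights, while the factorized form above needs only 4 x 9 + 6 x 4 = 60.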
Models built from these structures achieve performance comparable to state-of-the-art counterparts on different computer vision tasks with several times lower computational complexity, fewer parameters, and shorter actual running time. Since the most straightforward way to boost network performance from the non-linearity perspective is to design a more powerful activation function, we design a Look-up Table Unit activation function that learns the shape of the activation function from the data and provides sufficient non-linearity for the network to learn more complex feature representations. We also propose a novel layer structure, referred to as the Wide Hidden Expansion (WHE) layer, which substantially increases the number of activation functions through an implicit increase in hidden channels, enhancing the performance of different network architectures.
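An activation function whose shape is read from a look-up table can be viewed as a piecewise-linear function over a set of anchor points whose output values are learned from data. A hedged sketch of that idea using `np.interp` — the anchors below are fixed for illustration, whereas the dissertation's Look-up Table Unit would treat them as trainable parameters:

```python
import numpy as np

def lookup_table_activation(x, anchor_x, anchor_y):
    """Piecewise-linear activation defined by a look-up table.
    anchor_x: sorted anchor positions on the input axis
    anchor_y: output values at those anchors (learnable in practice)
    Inputs between anchors are linearly interpolated; inputs outside
    the anchor range are clamped to the boundary values by np.interp.
    """
    return np.interp(x, anchor_x, anchor_y)
```

With anchors (-1, 0, 1) mapping to (0, 0, 1), the table reproduces a ReLU-like shape on [-1, 1]; learning the `anchor_y` values instead lets the network shape the non-linearity to the data.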
Doctor of Philosophy (Ph.D.)
College of Engineering and Computer Science
Doctoral Dissertation (Open Access)
Wang, Min, "Explore and Design Novel Structures for More Efficient and Better Deep Convolutional Neural Networks" (2020). Electronic Theses and Dissertations, 2020-. 147.