Fusing Two Convolutional Neural Networks For High-Resolution Scene Classification

Keywords

Convolutional neural network; Deep feature extraction; Feature learning; High-resolution image

Abstract

This paper presents a novel deep convolutional feature fusion (ConvFF) approach for high-resolution scene classification, building on the well-known deep convolutional neural network (ConvNet) framework. The proposed ConvFF approach first generates an initial feature representation of the scenes under study using two deep ConvNets pre-trained on two different large labeled datasets, one consisting mainly of objects and the other mainly of scenes. After this pre-training phase, the two ConvNets are fine-tuned in a supervised manner on the target training images. The two types of convolutional features produced by the last fully-connected (FC) layer of each network are then fused. Finally, the fused convolutional features are fed to an SVM classifier for classification. The proposed method is evaluated on two challenging high-resolution scene datasets. Experimental results show that it effectively extracts complementary features of the scenes and captures local spatial patterns, consistently outperforming several state-of-the-art methods.
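
The sketch below illustrates the pipeline the abstract describes: extract last-FC-layer features from two pre-trained ConvNets, concatenate them, and train an SVM on the fused features. It is only a minimal illustration under assumed details (torchvision AlexNet backbones, ImageNet weights for the object network, a placeholder for scene-centric weights, and scikit-learn's LinearSVC); the paper's actual networks, fusion rule, and classifier settings may differ.

# Minimal sketch of the ConvFF pipeline described in the abstract.
# Assumptions (not from the paper): AlexNet backbones from torchvision,
# ImageNet weights for the object network, externally loaded scene-centric
# weights (e.g., Places), and a linear SVM from scikit-learn.
import torch
import torch.nn as nn
from torchvision import models
from sklearn.svm import LinearSVC

# Two ConvNets: one pre-trained on object-centric data (ImageNet), the other
# assumed to be pre-trained on scene-centric data (weights loaded separately).
object_net = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
scene_net = models.alexnet(weights=None)  # load scene-centric weights here

def fc_features(net, images):
    """Return activations of the last FC layer before the class-score layer."""
    net.eval()
    # Keep all classifier layers except the final class-score layer.
    extractor = nn.Sequential(
        net.features,
        nn.AdaptiveAvgPool2d((6, 6)),
        nn.Flatten(),
        *list(net.classifier.children())[:-1],
    )
    with torch.no_grad():
        return extractor(images)

def convff_train(images, labels):
    """Fuse FC features from both nets and fit an SVM on the fused vectors."""
    f_obj = fc_features(object_net, images)   # object-level features
    f_scn = fc_features(scene_net, images)    # scene-level features
    fused = torch.cat([f_obj, f_scn], dim=1)  # concatenation as the fusion step
    clf = LinearSVC()                         # SVM classifier on fused features
    clf.fit(fused.numpy(), labels.numpy())
    return clf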

Publication Date

12-1-2017

Publication Title

International Geoscience and Remote Sensing Symposium (IGARSS)

Volume

2017-July

Number of Pages

3242-3245

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/IGARSS.2017.8127688

Scopus ID

85041833740 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85041833740

