Cross-View Image Matching For Geo-Localization In Urban Environments
Abstract
In this paper, we address the problem of cross-view image geo-localization. Specifically, we aim to estimate the GPS location of a query street view image by finding the matching images in a reference database of geotagged bird's eye view images, or vice versa. To this end, we present a new framework for cross-view image geo-localization that takes advantage of the tremendous success of deep convolutional neural networks (CNNs) in image classification and object detection. First, we employ Faster R-CNN [16] to detect buildings in the query and reference images. Next, for each building in the query image, we retrieve the k nearest neighbors from the reference buildings using a Siamese network trained on both positive matching image pairs and negative pairs. To find the correct nearest neighbor (NN) for each query building, we develop an efficient multiple nearest neighbors matching method based on dominant sets. We evaluate the proposed framework on a new dataset that consists of pairs of street view and bird's eye view images. Experimental results show that the proposed method achieves better geo-localization accuracy than other approaches and is able to generalize to images at unseen locations.
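The sketch below illustrates, in broad strokes, the matching stage described in the abstract: given Siamese-network embeddings for detected buildings, retrieve the k nearest reference candidates per query building and then select a mutually consistent subset of candidate matches with a dominant-set style selection (replicator dynamics on an affinity matrix). All function names, the affinity definition, and the toy data are illustrative assumptions for exposition, not the authors' implementation.

# Minimal sketch (assumed names and affinity), not the paper's code.
import numpy as np


def knn_candidates(query_emb, ref_emb, k=5):
    """Return indices of the k nearest reference embeddings per query (L2 distance)."""
    # Pairwise squared distances between query and reference embeddings.
    d2 = ((query_emb[:, None, :] - ref_emb[None, :, :]) ** 2).sum(-1)
    return np.argsort(d2, axis=1)[:, :k]  # shape: (n_query, k)


def dominant_set(affinity, n_iter=200, tol=1e-8):
    """Extract a dominant set from a symmetric, non-negative affinity matrix
    (zero diagonal) via replicator dynamics; returns a soft membership vector."""
    n = affinity.shape[0]
    x = np.full(n, 1.0 / n)
    for _ in range(n_iter):
        x_new = x * (affinity @ x)
        s = x_new.sum()
        if s <= tol:
            break
        x_new /= s
        if np.linalg.norm(x_new - x, 1) < tol:
            x = x_new
            break
        x = x_new
    return x


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Toy embeddings standing in for Siamese-network building descriptors.
    query_emb = rng.normal(size=(4, 16))
    ref_emb = rng.normal(size=(20, 16))
    cand = knn_candidates(query_emb, ref_emb, k=3)

    # Candidate match nodes (query idx, reference idx) and an assumed pairwise
    # affinity: candidates involving different queries and different references
    # are compatible, weighted by descriptor similarity of both matches.
    nodes = [(q, r) for q in range(cand.shape[0]) for r in cand[q]]
    A = np.zeros((len(nodes), len(nodes)))
    for i, (qi, ri) in enumerate(nodes):
        for j, (qj, rj) in enumerate(nodes):
            if i != j and qi != qj and ri != rj:
                A[i, j] = np.exp(-np.linalg.norm(query_emb[qi] - ref_emb[ri])
                                 - np.linalg.norm(query_emb[qj] - ref_emb[rj]))

    membership = dominant_set(A)
    # Keep, per query building, its highest-membership candidate as the match.
    selected = {}
    for idx in np.argsort(-membership):
        q, r = nodes[idx]
        selected.setdefault(q, r)
    print("selected matches (query -> reference):", selected)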
Publication Date
11-6-2017
Publication Title
Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017
Volume
2017-January
Number of Pages
1998-2006
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1109/CVPR.2017.216
Copyright Status
Unknown
Scopus ID
85044339474 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/85044339474
STARS Citation
Tian, Yicong; Chen, Chen; and Shah, Mubarak, "Cross-View Image Matching For Geo-Localization In Urban Environments" (2017). Scopus Export 2015-2019. 7054.
https://stars.library.ucf.edu/scopus2015/7054