Ego2Top: Matching Viewers In Egocentric And Top-View Videos

Keywords

Cross-domain image understanding; Egocentric vision; Gist; Spectral graph matching; Surveillance

Abstract

Egocentric cameras are becoming increasingly popular and provide us with large amounts of video captured from the first-person perspective. At the same time, surveillance cameras and drones offer an abundance of visual information, often captured from a top view. Although these two sources of information have been studied separately in the past, they have not been studied jointly and related to each other. Given a set of egocentric cameras and a top-view camera capturing the same area, we propose a framework to identify the egocentric viewers in the top-view video. We use two types of features for our assignment procedure. Unary features encode what a viewer (seen from the top view or recording an egocentric video) visually experiences over time. Pairwise features encode the relationship between the visual content of a pair of viewers. Modeling each view (egocentric or top) as a graph, we formulate the assignment process as spectral graph matching. Evaluating our method on a dataset of 50 top-view and 188 egocentric videos taken in different scenarios demonstrates the effectiveness of the proposed approach in assigning egocentric viewers to identities present in the top-view camera. We also study the effect of different parameters, such as the number of egocentric viewers and the choice of visual features.
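The spectral graph matching step mentioned in the abstract can be illustrated with a minimal sketch. The code below is not the authors' implementation: the function name spectral_match, the similarity inputs (unary, pairwise_ego, pairwise_top), and the toy data are assumptions made for illustration. Following the standard spectral matching recipe, it builds an affinity matrix over candidate ego-to-top correspondences from unary and pairwise similarities, takes the principal eigenvector of that matrix, and greedily discretizes it into a one-to-one assignment.

```python
# Illustrative sketch of spectral graph matching for viewer assignment.
# Similarity inputs and names are assumptions, not the paper's features.
import numpy as np

def spectral_match(unary, pairwise_ego, pairwise_top):
    """Assign egocentric viewers to top-view identities.

    unary        : (n_ego, n_top) similarity between each egocentric viewer's
                   visual experience and each top-view track.
    pairwise_ego : (n_ego, n_ego) symmetric similarity between ego viewer pairs.
    pairwise_top : (n_top, n_top) symmetric similarity between top-view track pairs.
    """
    n_ego, n_top = unary.shape
    n = n_ego * n_top  # one node per candidate correspondence (i, a)
    M = np.zeros((n, n))
    for i in range(n_ego):
        for a in range(n_top):
            # Unary term on the diagonal: how well viewer i matches track a.
            M[i * n_top + a, i * n_top + a] = unary[i, a]
            for j in range(n_ego):
                for b in range(n_top):
                    if i != j and a != b:
                        # Pairwise term: correspondences (i,a) and (j,b) agree
                        # when the ego-ego relation resembles the top-top relation.
                        M[i * n_top + a, j * n_top + b] = 1.0 - abs(
                            pairwise_ego[i, j] - pairwise_top[a, b])
    # Principal eigenvector of the affinity matrix scores each correspondence.
    vals, vecs = np.linalg.eigh(M)
    x = np.abs(vecs[:, -1]).reshape(n_ego, n_top)
    # Greedy discretization into a one-to-one assignment.
    assignment = {}
    while np.any(x > 0):
        i, a = np.unravel_index(np.argmax(x), x.shape)
        assignment[i] = a
        x[i, :] = 0
        x[:, a] = 0
    return assignment

# Toy usage with random symmetric similarities (3 ego viewers, 4 top-view tracks).
rng = np.random.default_rng(0)
u = rng.random((3, 4))
pe = rng.random((3, 3)); pe = (pe + pe.T) / 2
pt = rng.random((4, 4)); pt = (pt + pt.T) / 2
print(spectral_match(u, pe, pt))
```

Because the affinity matrix is nonnegative, its principal eigenvector is nonnegative as well, so its entries can be read as soft correspondence scores before the greedy one-to-one discretization.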

Publication Date

1-1-2016

Publication Title

Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics)

Volume

9909 LNCS

Number of Pages

253-268

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

Scopus ID

85007551451 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85007551451
