Title

Cross-View Action Recognition Via View Knowledge Transfer

Abstract

In this paper, we present a novel approach to recognizing human actions from different views by view knowledge transfer. An action is originally modelled as a bag-of-visual-words (BoVW), which is sensitive to view changes. We argue that, as opposed to visual words, there exist higher-level features which can be shared across views and enable the connection of action models for different views. To discover these features, we use a bipartite graph to model two view-dependent vocabularies, then apply bipartite graph partitioning to co-cluster the two vocabularies into visual-word clusters called bilingual words (i.e., high-level features), which can bridge the semantic gap across view-dependent vocabularies. Consequently, we can transfer a BoVW action model into a bag-of-bilingual-words (BoBW) model, which is more discriminative in the presence of view changes. We tested our approach on the IXMAS data set and obtained very promising results. Moreover, to further fuse view knowledge from multiple views, we apply a Locally Weighted Ensemble scheme to dynamically weight transferred models based on the local distribution structure around each test example. This process further improves the average recognition rate by about 7%. © 2011 IEEE.
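The bilingual-word construction described in the abstract is in the spirit of bipartite spectral graph co-clustering. Below is a minimal, self-contained sketch (not the authors' implementation) under assumed inputs: a hypothetical cross-view co-occurrence matrix `A` relating the two view-dependent vocabularies, a two-way partition read off the second singular vectors, and a toy BoVW histogram `h_view1` that is re-binned into the shared BoBW representation.

```python
import numpy as np

# Hypothetical co-occurrence matrix: A[i, j] counts how often visual
# word i (view-1 vocabulary) and visual word j (view-2 vocabulary)
# appear in the same action instance. Real vocabularies are far larger.
A = np.array([
    [8.0, 7.0, 0.0, 1.0],
    [9.0, 6.0, 1.0, 0.0],
    [0.0, 1.0, 7.0, 8.0],
    [1.0, 0.0, 9.0, 7.0],
])

# Degree-normalize the bipartite adjacency: An = D1^{-1/2} A D2^{-1/2}
d1 = A.sum(axis=1)
d2 = A.sum(axis=0)
An = A / np.sqrt(np.outer(d1, d2))

# SVD of the normalized matrix; the second singular vector pair gives
# the best two-way co-clustering of rows and columns jointly.
U, s, Vt = np.linalg.svd(An)
u2 = U[:, 1] / np.sqrt(d1)
v2 = Vt[1, :] / np.sqrt(d2)

# The sign of the embedding assigns every visual word, from either
# view, to one of two bilingual words (shared high-level features).
view1_clusters = (u2 > 0).astype(int)
view2_clusters = (v2 > 0).astype(int)

# Transfer: re-bin a view-1 BoVW histogram over the bilingual words,
# yielding a BoBW histogram comparable across views.
h_view1 = np.array([3.0, 1.0, 0.0, 2.0])
h_bobw = np.bincount(view1_clusters, weights=h_view1, minlength=2)
```

With this block-structured toy matrix, words 0-1 of each view fall into one bilingual word and words 2-3 into the other, so the two view-dependent vocabularies are aligned through the shared clusters. For k > 2 bilingual words, one would instead stack the leading scaled singular vectors and run k-means on them, as in standard bipartite spectral co-clustering.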

Publication Date

1-1-2011

Publication Title

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

Number of Pages

3209-3216

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/CVPR.2011.5995729

Scopus ID

80052904932 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/80052904932

