Title

View Invariant Action Recognition Using Projective Depth

Keywords

Action recognition; Projective depth; View invariance

Abstract

In this paper, we investigate the concept of projective depth and demonstrate its application and significance in view-invariant action recognition. We show that projective depths are invariant to camera internal parameters and orientation, and hence can be used to identify similar motion of body points across varying viewpoints. By representing the human body as a set of points, we decompose a body posture into a set of projective depths; the similarity between two actions is therefore measured by the motion of these projective depths. We exhaustively investigate the different ways of extracting the planes used to estimate projective depths for action recognition, including (i) the ground plane, (ii) body-point triplets, (iii) planes in time, and (iv) planes extracted from mirror symmetry, and we analyze the efficacy of each technique for view-invariant action recognition. Experiments are performed on three categories of data: the CMU MoCap dataset, a Kinect dataset, and the IXMAS dataset. Results evaluated over semi-synthetic video data and real data confirm that our method can recognize actions even when they have dynamic timeline maps and the viewpoints and camera parameters are unknown and entirely different. © 2014 Elsevier Inc. All rights reserved.
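
For background, the notion of projective depth referred to in the abstract can be sketched with the standard plane-plus-parallax relation from multi-view geometry; this is general background, not the paper's exact construction or notation, which may differ in detail.

% Background sketch of projective depth (standard plane-plus-parallax formulation).
% A reference plane \pi induces a homography H_\pi between two views; e' is the
% epipole in the second view. A 3D point projecting to x and x' in the two views
% satisfies, up to scale,
\[
  \mathbf{x}' \;\simeq\; H_{\pi}\,\mathbf{x} \;+\; \kappa\,\mathbf{e}',
\]
% where the scalar \kappa is the projective depth of the point relative to \pi.
% Roughly, \kappa measures how far the point lies off the reference plane relative
% to its depth in the reference view, and (up to a common scale) it does not depend
% on the second camera's internal parameters or orientation. This is the kind of
% invariance the abstract exploits for matching body-point motion across viewpoints.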

Publication Date

1-1-2014

Publication Title

Computer Vision and Image Understanding

Volume

123

Number of Pages

41-52

Document Type

Article

Personal Identifier

scopus

DOI Link

https://doi.org/10.1016/j.cviu.2014.03.005

Scopus ID

84899623692 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/84899623692
