Learning A Deep Model For Human Action Recognition From Novel Viewpoints
Keywords
Cross-view; dense trajectories; view knowledge transfer
Abstract
Recognizing human actions from unknown and unseen (novel) views is a challenging problem. We propose a Robust Non-Linear Knowledge Transfer Model (R-NKTM) for human action recognition from novel views. The proposed R-NKTM is a deep fully-connected neural network that transfers knowledge of human actions from any unknown view to a shared high-level virtual view by finding a set of non-linear transformations that connects the views. The R-NKTM is learned from 2D projections of dense trajectories of synthetic 3D human models fitted to real motion capture data and generalizes to real videos of human actions. The strength of our technique is that we learn a single R-NKTM for all actions and all viewpoints for knowledge transfer of any real human action video without the need for re-training or fine-tuning the model. Thus, R-NKTM can efficiently scale to incorporate new action classes. R-NKTM is learned with dummy labels and does not require knowledge of the camera viewpoint at any stage. Experiments on three benchmark cross-view human action datasets show that our method outperforms the existing state-of-the-art.
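The core idea in the abstract, mapping view-specific action descriptors through stacked non-linear transformations into a shared virtual-view space, can be sketched as follows. This is a minimal illustration only: the layer sizes, random weights, and ReLU activations are assumptions for the sketch, not the architecture or learned parameters from the paper, and the "descriptors" are random stand-ins for real dense-trajectory features.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical layer sizes: a view-specific trajectory descriptor is
# mapped down to a shared 100-D virtual-view representation. The
# paper's actual layer dimensions and trained weights differ.
layer_sizes = [2000, 1000, 500, 100]

weights = [
    rng.standard_normal((m, n)) * np.sqrt(2.0 / m)
    for m, n in zip(layer_sizes[:-1], layer_sizes[1:])
]

def knowledge_transfer(x):
    """Apply a stack of non-linear (here, ReLU) fully-connected
    transformations, mapping an input descriptor to the shared
    high-level virtual-view space."""
    for W in weights:
        x = np.maximum(x @ W, 0.0)  # affine map followed by ReLU
    return x

# Two descriptors of (say) the same action seen from different,
# unknown viewpoints; the same single network maps both into the
# same 100-D shared space, so they become directly comparable.
view_a = rng.standard_normal(2000)
view_b = rng.standard_normal(2000)
shared_a = knowledge_transfer(view_a)
shared_b = knowledge_transfer(view_b)
print(shared_a.shape, shared_b.shape)  # both (100,)
```

Because one network serves every action and viewpoint, adding a new action class only requires extracting its descriptors and classifying in the shared space, with no retraining of the transfer model.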
Publication Date
3-1-2018
Publication Title
IEEE Transactions on Pattern Analysis and Machine Intelligence
Volume
40
Issue
3
Number of Pages
667-681
Document Type
Article
Personal Identifier
scopus
DOI Link
https://doi.org/10.1109/TPAMI.2017.2691768
Copyright Status
Unknown
Scopus ID
85041966324 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/85041966324
STARS Citation
Rahmani, Hossein; Mian, Ajmal; and Shah, Mubarak, "Learning A Deep Model For Human Action Recognition From Novel Viewpoints" (2018). Scopus Export 2015-2019. 10219.
https://stars.library.ucf.edu/scopus2015/10219