Title

View-Invariant Representation And Learning Of Human Action

Keywords

Action Recognition; Activities; Events; Spatiotemporal curvature; Video Understanding; View-invariant Representation

Abstract

Automatically understanding human actions from video sequences is a very challenging problem. It involves extracting relevant visual information from a video sequence, representing that information in a suitable form, and interpreting it for the purpose of recognition and learning. We first present a view-invariant representation of action consisting of dynamic instants and intervals, which is computed using the spatiotemporal curvature of a trajectory. This representation is then used by our system to learn human actions without any training. The system automatically segments video into individual actions and computes a view-invariant representation for each action. It is able to incrementally learn different actions starting with no model, and to discover different instances of the same action performed by different people and from different viewpoints. To validate our approach, we present results on video clips in which roughly 50 actions were performed by five different people from different viewpoints. Our system performed impressively, correctly interpreting most actions.
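The spatiotemporal curvature mentioned in the abstract can be illustrated with a short sketch. For a hand trajectory treated as the 3-D curve (x(t), y(t), t), curvature is k = |v × a| / |v|³ with velocity v = (x′, y′, 1) and acceleration a = (x″, y″, 0); peaks of k mark candidate dynamic instants. The finite-difference derivatives and the simple peak-picking threshold below are illustrative assumptions, not the paper's exact procedure:

```python
import numpy as np

def spatiotemporal_curvature(x, y):
    """Curvature of the spatiotemporal trajectory (x(t), y(t), t),
    sampled at unit time steps (so t' = 1, t'' = 0)."""
    xp, yp = np.gradient(x), np.gradient(y)      # first derivatives
    xpp, ypp = np.gradient(xp), np.gradient(yp)  # second derivatives
    # |v x a| with v = (x', y', 1), a = (x'', y'', 0)
    num = np.sqrt(ypp**2 + xpp**2 + (xp * ypp - yp * xpp)**2)
    den = (xp**2 + yp**2 + 1.0) ** 1.5           # |v|^3
    return num / den

def dynamic_instants(k, min_ratio=2.0):
    """Local maxima of curvature mark candidate dynamic instants
    (simple peak picking; the median-based threshold is an assumption)."""
    return [i for i in range(1, len(k) - 1)
            if k[i] > k[i - 1] and k[i] > k[i + 1]
            and k[i] > min_ratio * np.median(k)]
```

For example, a trajectory that moves right and then abruptly turns upward yields a single curvature peak at the turn, which the peak picker reports as one dynamic instant; the smooth segments on either side are intervals.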

Publication Date

1-1-2001

Publication Title

Proceedings - IEEE Workshop on Detection and Recognition of Events in Video, EVENT 2001

Number of Pages

55-63

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/EVENT.2001.938867

Scopus ID

14244270310 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/14244270310
