View-Invariant Representation And Learning Of Human Action


Action Recognition; Activities; Events; Spatiotemporal curvature; Video Understanding; View-invariant Representation


Automatically understanding human actions from video sequences is a very challenging problem. It involves extracting relevant visual information from a video sequence, representing that information in a suitable form, and interpreting it for recognition and learning. We first present a view-invariant representation of action consisting of dynamic instants and intervals, computed from the spatiotemporal curvature of a trajectory. This representation is then used by our system to learn human actions without any training. The system automatically segments video into individual actions and computes a view-invariant representation for each action. It is able to incrementally learn different actions, starting with no model, and to discover different instances of the same action performed by different people and from different viewpoints. To validate our approach, we present results on video clips in which roughly 50 actions were performed by five different people from different viewpoints. The system correctly interpreted most of these actions.
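The spatiotemporal curvature mentioned above treats a 2D point trajectory as a 3D curve (x(t), y(t), t) and measures how sharply that curve bends; dynamic instants then correspond to curvature peaks. The following is a minimal sketch of that idea, not the authors' implementation; the function names, the unit time step, and the peak threshold are illustrative assumptions.

```python
import numpy as np

def spatiotemporal_curvature(x, y):
    """Curvature of the 3D curve (x(t), y(t), t), sampled at unit time steps.

    Uses the standard space-curve formula k = |r' x r''| / |r'|^3,
    with derivatives estimated by finite differences.
    """
    t = np.arange(len(x), dtype=float)
    r = np.stack([x, y, t], axis=1)       # trajectory as a 3D curve
    d1 = np.gradient(r, axis=0)           # first derivative r'
    d2 = np.gradient(d1, axis=0)          # second derivative r''
    num = np.linalg.norm(np.cross(d1, d2), axis=1)
    den = np.linalg.norm(d1, axis=1) ** 3  # nonzero: dt/dt = 1 always
    return num / den

def dynamic_instants(k, min_k=0.2):
    """Local maxima of curvature above a threshold: candidate dynamic instants.

    The threshold value is an illustrative assumption, not from the paper.
    """
    return [i for i in range(1, len(k) - 1)
            if k[i] >= min_k and k[i] > k[i - 1] and k[i] >= k[i + 1]]
```

For example, a hand trajectory that moves right and then abruptly turns upward produces a single curvature peak at the turn, which this sketch reports as a dynamic instant; a straight, constant-speed trajectory yields zero curvature everywhere.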

Publication Date


Publication Title

Proceedings - IEEE Workshop on Detection and Recognition of Events in Video, EVENT 2001

Number of Pages


Document Type

Article; Proceedings Paper

Personal Identifier


DOI Link


Scopus ID

14244270310 (Scopus)

Source API URL
