WTA Hash-Based Multimodal Feature Fusion for 3D Human Action Recognition

Keywords

hashing; human action recognition; multimodal feature fusion

Abstract

With the prevalence of commodity depth sensors (e.g., Kinect), multimodal data including RGB, depth, and audio streams have been utilized in various applications such as video games, education, and health. Nevertheless, effectively fusing features from multimodal data remains very challenging. In this paper, we propose a WTA (Winner-Take-All) Hash-based feature fusion algorithm and investigate its application to 3D human action recognition. Specifically, WTA hashing is performed to encode features from different modalities into an ordinal space. By leveraging ordinal measures rather than the absolute values of the original features, this embedding provides a degree of resilience to scale and numerical perturbations. We propose a frame-level feature fusion algorithm and develop a WTA Hash-embedded warping algorithm to measure the similarity between two sequences. Experiments on three public 3D human action datasets show that the proposed fusion algorithm achieves state-of-the-art recognition results even with a simple nearest-neighbor search.
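To make the ordinal embedding concrete, the sketch below shows the standard WTA hash construction (random permutations of the feature dimensions; for each permutation, record the index of the maximum among the first k permuted entries) applied independently to two modality features and concatenated. This is only an illustration of the general technique, assuming the full paper's construction; the function names, the permutation count, the window size k, and the concatenation-based fusion are illustrative choices, not the authors' exact scheme.

```python
import numpy as np

def wta_hash(x, permutations, k):
    # Winner-Take-All hash: for each random permutation, look at the
    # first k permuted entries of x and record the index of the maximum.
    # The codes depend only on the ordinal ranking of feature values,
    # not on their magnitudes, which gives resilience to scale changes.
    codes = np.empty(len(permutations), dtype=np.int64)
    for i, perm in enumerate(permutations):
        window = x[perm[:k]]          # first k entries under this permutation
        codes[i] = np.argmax(window)  # winner index in [0, k)
    return codes

rng = np.random.default_rng(0)
dim, n_perms, k = 128, 64, 4                       # illustrative sizes
perms = [rng.permutation(dim) for _ in range(n_perms)]

# Hypothetical per-frame features from two modalities (e.g., a depth
# descriptor and a skeleton descriptor), hashed into the same ordinal
# space; concatenating the codes gives one fused frame-level code.
f_depth = rng.standard_normal(dim)
f_skel = rng.standard_normal(dim)
fused = np.concatenate([wta_hash(f_depth, perms, k),
                        wta_hash(f_skel, perms, k)])

def code_similarity(a, b):
    # Fraction of permutations whose "winner" agrees between two codes.
    return np.mean(a == b)
```

The abstract also mentions a WTA Hash-embedded warping algorithm for comparing two sequences. The paper's exact formulation is not reproduced here; as a rough stand-in, a plain dynamic-time-warping pass over per-frame fused codes, with code mismatch as the local cost, conveys the idea:

```python
def warp_distance(seq_a, seq_b):
    # Classic DTW over lists of per-frame hash codes; local cost is the
    # fraction of disagreeing winners. This is a generic warping sketch,
    # not the paper's specific hash-embedded variant.
    n, m = len(seq_a), len(seq_b)
    D = np.full((n + 1, m + 1), np.inf)
    D[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = 1.0 - code_similarity(seq_a[i - 1], seq_b[j - 1])
            D[i, j] = cost + min(D[i - 1, j], D[i, j - 1], D[i - 1, j - 1])
    return D[n, m]
```

Under this kind of distance, recognition can proceed by nearest-neighbor search over training sequences, consistent with the abstract's observation that strong results are obtained even with a simple nearest-neighbor classifier.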

Publication Date

3-25-2016

Publication Title

Proceedings - 2015 IEEE International Symposium on Multimedia, ISM 2015

Pages

184-190

Document Type

Article; Proceedings Paper

DOI Link

https://doi.org/10.1109/ISM.2015.11

Scopus ID

84969651365 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/84969651365
