Title

Improving Semantic Concept Detection Through The Dictionary Of Visually-Distinct Elements

Keywords

Attribute; Concept Detection; Consensus Regularization; Dictionary Learning; Event Detection; Sparse Representation; TRECVID MED; TRECVID SIN

Abstract

A video captures a sequence of concepts and their interactions; these concepts can be static, for instance objects or scenes, or dynamic, such as actions. For large datasets containing hundreds of thousands of images or videos, it is impractical to manually annotate all the concepts, or even all the instances of a single concept. However, a dictionary of visually-distinct elements (DOVE) can be created automatically from unlabeled videos and can capture and express the entire dataset. The downside of this machine-discovered dictionary is its meaninglessness, i.e., its elements are devoid of semantics and interpretation. In this paper, we present an approach that leverages the strengths of semantic concepts and the machine-discovered DOVE by learning a relationship between them. Since instances of a semantic concept share visual similarity, the proposed approach uses soft-consensus regularization to learn a mapping that enforces similar representations for instances of the same semantic concept. Testing is performed by projecting the query onto the DOVE as well as onto the new representations of semantic concepts obtained during training, with non-negativity and unit-summation constraints for probabilistic interpretation. We tested our formulation on the TRECVID MED and SIN tasks and obtained encouraging results.
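The abstract's test-time step amounts to a simplex-constrained projection: a query feature is represented over dictionary columns with non-negative coefficients that sum to one, so the coefficients read as a probability distribution over elements. Below is a minimal sketch of that projection, not the authors' code; the dictionary matrix `D`, the function name `project_onto_dictionary`, the SLSQP solver, and the toy sizes are illustrative assumptions rather than details taken from the paper.

```python
# Sketch of a simplex-constrained projection of a query feature onto a dictionary:
#     min_w ||x - D w||^2   s.t.   w >= 0,  sum(w) = 1
# The non-negativity and unit-summation constraints give the coefficients a
# probabilistic interpretation, as described in the abstract.
import numpy as np
from scipy.optimize import minimize

def project_onto_dictionary(x, D):
    """Return simplex-constrained coefficients w for query x over dictionary D (d x k)."""
    k = D.shape[1]
    w0 = np.full(k, 1.0 / k)                       # start from the uniform distribution

    def objective(w):
        r = x - D @ w                              # reconstruction residual
        return r @ r

    result = minimize(
        objective,
        w0,
        method="SLSQP",
        bounds=[(0.0, None)] * k,                  # non-negativity
        constraints=[{"type": "eq", "fun": lambda w: w.sum() - 1.0}],  # unit summation
    )
    return result.x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    d, k = 64, 10                                  # toy feature dimension and dictionary size
    D = rng.standard_normal((d, k))                # stand-in for a learned dictionary
    x = D @ rng.dirichlet(np.ones(k))              # synthetic query built from a simplex point
    w = project_onto_dictionary(x, D)
    print(w.round(3), w.sum())                     # coefficients are non-negative and sum to 1
```

The same constrained least-squares form applies whether the query is projected onto the DOVE itself or onto the learned semantic-concept representations; only the dictionary matrix changes.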

Publication Date

9-24-2014

Publication Title

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

Pages

2585-2592

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/CVPR.2014.331

Scopus ID

84911408065 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/84911408065
