Learning A Multi-Concept Video Retrieval Model With Multiple Latent Variables

Keywords

Multi-concept retrieval; Structural learning; Video indexing; Video retrieval

Abstract

Effective and efficient video retrieval has become a pressing need in the “big video” era. The objective of this work is to provide a principled model for computing the ranking score of a video in response to one or more concepts, where the concepts could be directly supplied by users or inferred by the system from the user queries. Indeed, handling multi-concept queries has become a central component in modern video retrieval systems that accept text queries. However, it has long been overlooked and is often implemented simply as a weighted average of the corresponding concept detectors’ scores. Our approach, which can be regarded as a latent ranking SVM, integrates the advantages of various recent works in text and image retrieval, such as choosing ranking over structured prediction and modeling inter-dependencies between the concepts in a query. Videos consist of shots, and we use latent variables to account for the mutually complementary cues within and across shots. Because concept labels of shots are scarce and noisy, we introduce a simple and effective technique to make our model robust to outliers. Our approach gives superior performance when tested not only on the queries seen at training time but also on novel queries, some of which consist of more concepts than the queries used for training.
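The contrast drawn in the abstract between weighted averaging of detector scores and a latent-variable ranking model can be illustrated with a minimal sketch. The function names, data layout, and weights below are hypothetical, not taken from the paper: each shot carries per-concept detector scores, the baseline averages the weighted scores over all shots, and the latent-variable scorer instead lets the best-supporting shot determine the video's score.

```python
# Illustrative sketch only (assumed data layout, not the paper's implementation).
# A video is a list of shots; each shot maps concept -> detector score.

def latent_score(shot_scores, query_weights):
    """Score a video for a multi-concept query with a latent shot variable:
    take the max over shots of the weighted sum of concept detector scores."""
    return max(
        sum(query_weights[c] * shot.get(c, 0.0) for c in query_weights)
        for shot in shot_scores
    )

def baseline_score(shot_scores, query_weights):
    """Baseline discussed in the abstract: a weighted average of the
    concept detectors' scores, pooled uniformly over all shots."""
    total = sum(
        query_weights[c] * shot.get(c, 0.0)
        for shot in shot_scores
        for c in query_weights
    )
    return total / len(shot_scores)
```

For a query such as {dog, beach}, the latent scorer rewards a video containing one shot where both concepts co-occur strongly, whereas uniform averaging dilutes that evidence across unrelated shots.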

Publication Date

4-1-2018

Publication Title

ACM Transactions on Multimedia Computing, Communications and Applications

Volume

14

Issue

2

Document Type

Article

Personal Identifier

scopus

DOI Link

https://doi.org/10.1145/3176647

Scopus ID

85047115902 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85047115902
