Query-Focused Video Summarization: Dataset, Evaluation, And A Memory Network Based Approach

Abstract

Recent years have witnessed a resurgence of interest in video summarization. However, one of the main obstacles to research on video summarization is user subjectivity - users have varying preferences over the summaries. This subjectivity causes at least two problems. First, no single video summarizer fits all users unless it interacts with and adapts to the individual users. Second, it is very challenging to evaluate the performance of a video summarizer. To tackle the first problem, we explore the recently proposed query-focused video summarization, which introduces user preferences in the form of text queries about the video into the summarization process. We propose a memory network parameterized sequential determinantal point process in order to attend the user query to different video frames and shots. To address the second challenge, we contend that a good evaluation metric for video summarization should focus on the semantic information that humans can perceive rather than on visual features or temporal overlaps. To this end, we collect dense per-video-shot concept annotations, compile a new dataset, and suggest an efficient evaluation method defined upon the concept annotations. We conduct extensive experiments comparing our video summarizer with existing ones and present detailed analyses of the dataset and the new evaluation method.
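The abstract proposes scoring summaries by the semantic concepts annotated on each video shot rather than by visual features or temporal overlap. The sketch below illustrates one way such a concept-based score could work; the per-shot intersection-over-union similarity, the greedy shot matching, and the F1 aggregation are illustrative assumptions for this sketch, not the paper's exact metric.

```python
def concept_iou(concepts_a, concepts_b):
    """Intersection-over-union of two shots' concept sets (assumed similarity)."""
    a, b = set(concepts_a), set(concepts_b)
    if not a and not b:
        return 0.0
    return len(a & b) / len(a | b)


def summary_f1(candidate, reference):
    """Score a candidate summary against a reference summary.

    Each summary is a list of shots; each shot is a list of concept labels.
    Greedily matches each candidate shot to its best still-unmatched
    reference shot by concept IoU, then returns a precision/recall F1.
    """
    if not candidate or not reference:
        return 0.0
    unmatched = list(range(len(reference)))
    total_sim = 0.0
    for shot in candidate:
        if not unmatched:
            break
        # Pick the best-matching remaining reference shot for this shot.
        best_idx = max(unmatched, key=lambda j: concept_iou(shot, reference[j]))
        total_sim += concept_iou(shot, reference[best_idx])
        unmatched.remove(best_idx)
    precision = total_sim / len(candidate)
    recall = total_sim / len(reference)
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

For example, a candidate whose shots share most concepts with the reference scores close to 1, while a summary of semantically unrelated shots scores 0 regardless of where its shots fall in the timeline.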

Publication Date

11-6-2017

Publication Title

Proceedings - 30th IEEE Conference on Computer Vision and Pattern Recognition, CVPR 2017

Volume

2017-January

Number of Pages

2127-2136

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/CVPR.2017.229

Scopus ID

85044341537 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85044341537

