Title
Story Segmentation In News Videos Using Visual And Text Cues
Abstract
In this paper, we present a framework for segmenting news programs into story topics. The proposed method utilizes both the visual and text information of the video. We represent the news video by a Shot Connectivity Graph (SCG), where the nodes in the graph represent the shots in the video, and the edges between nodes represent the transitions between shots. The cycles in the graph correspond to the story segments in the news program. We first detect the cycles in the graph by finding the anchor persons in the video, which provides a coarse segmentation of the news video. The initial segmentation is later refined by the detection of weather and sports news and by the merging of similar stories. For weather detection, the global color information of the images and the motion of the shots are considered. We use the text obtained from automatic speech recognition (ASR) to detect potential sports shots and form sports stories. Adjacent stories with similar semantic meaning are further merged based on visual and text similarities. The proposed framework has been tested on a widely used data set provided by NIST, which contains ground-truth story boundaries, and competitive evaluation results have been obtained. © Springer-Verlag Berlin Heidelberg 2005.
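The abstract's central idea is that stories correspond to cycles in the Shot Connectivity Graph: each cycle leaves the anchor-person node, passes through field shots, and returns to the anchor. A minimal sketch of that coarse segmentation step, assuming shots have already been classified as anchor or non-anchor (the `'anchor'`/`'other'` labels and the `segment_stories` helper are hypothetical, not from the paper):

```python
def segment_stories(shot_labels):
    """Coarse story segmentation in the spirit of the SCG cycles:
    a new story begins whenever the video returns to an anchor shot
    after one or more non-anchor (field) shots.

    shot_labels: list of 'anchor' / 'other' strings, one per shot
                 (a hypothetical encoding of the shot classification).
    Returns a list of (start, end) shot-index pairs, end exclusive.
    """
    stories = []
    start = 0
    for i, label in enumerate(shot_labels):
        # A return to the anchor node closes the previous cycle/story.
        if label == 'anchor' and i > start and shot_labels[i - 1] != 'anchor':
            stories.append((start, i))
            start = i
    stories.append((start, len(shot_labels)))  # close the final story
    return stories
```

In the paper this coarse result is then refined by weather/sports detection and story merging; the sketch covers only the cycle-based first pass.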
Publication Date
1-1-2005
Publication Title
Lecture Notes in Computer Science
Volume
3568
Number of Pages
92-102
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1007/11526346_13
Copyright Status
Unknown
Scopus ID
26444593912 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/26444593912
STARS Citation
Zhai, Yun; Yilmaz, Alper; and Shah, Mubarak, "Story Segmentation In News Videos Using Visual And Text Cues" (2005). Scopus Export 2000s. 4459.
https://stars.library.ucf.edu/scopus2000/4459