Title
A General Framework For Temporal Video Scene Segmentation
Abstract
Videos are composed of many shots caused by different camera operations, e.g., on/off operations and switching between cameras. One important goal in video analysis is to group the shots into temporal scenes, such that all the shots in a single scene are related to a particular physical setting, an ongoing action, or a theme. In this paper, we present a general framework for temporal scene segmentation for various video types. The proposed method is formulated in a statistical fashion and uses the Markov chain Monte Carlo (MCMC) technique to determine the boundaries between video scenes. In this approach, an arbitrary number of scene boundaries is randomly initialized and automatically updated using two types of updates: diffusions and jumps. The posterior probability over the number of scenes and their boundary locations is computed from the model priors and the data likelihood. The updates of the model parameters are controlled by the hypothesis ratio test in the MCMC process. The proposed framework has been evaluated on two types of videos, home videos and feature films, and accurate results have been obtained. © 2005 IEEE.
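The jump/diffusion boundary search described in the abstract can be illustrated with a minimal toy sketch. Everything below is an illustrative assumption, not the paper's actual model: shots are reduced to scalar features, the segment likelihood is a unit-variance Gaussian fit (the paper uses visual similarity between shots), and the prior is a simple per-boundary penalty. The move structure, however, mirrors the abstract: diffusion updates shift an existing boundary, jump updates add or remove one, and a Metropolis-Hastings ratio decides acceptance.

```python
import math
import random

def segment_loglik(x, a, b):
    # Gaussian log-likelihood of x[a:b] around its own mean (unit variance);
    # a hypothetical stand-in for the paper's visual-similarity likelihood.
    seg = x[a:b]
    mu = sum(seg) / len(seg)
    return -0.5 * sum((v - mu) ** 2 for v in seg)

def log_posterior(x, bounds, penalty=5.0):
    # Posterior = data likelihood + a prior penalizing extra scenes
    # (penalty weight chosen arbitrarily for this toy example).
    cuts = [0] + sorted(bounds) + [len(x)]
    ll = sum(segment_loglik(x, cuts[i], cuts[i + 1])
             for i in range(len(cuts) - 1))
    return ll - penalty * len(bounds)

def mcmc_segment(x, iters=5000, seed=0):
    # Jump/diffusion MCMC over scene-boundary positions; returns the
    # maximum-a-posteriori boundary configuration visited by the chain.
    rng = random.Random(seed)
    bounds = set()
    cur = log_posterior(x, bounds)
    best, best_lp = [], cur
    for _ in range(iters):
        prop = set(bounds)
        u = rng.random()
        if u < 0.4 and prop:            # diffusion: shift one boundary
            b = rng.choice(sorted(prop))
            nb = b + rng.choice([-1, 1])
            if 0 < nb < len(x) and nb not in prop:
                prop.discard(b)
                prop.add(nb)
        elif u < 0.7:                   # jump: add a boundary
            prop.add(rng.randrange(1, len(x)))
        elif prop:                      # jump: remove a boundary
            prop.discard(rng.choice(sorted(prop)))
        new = log_posterior(x, prop)
        # Metropolis-Hastings acceptance (proposal asymmetry ignored here)
        if new >= cur or rng.random() < math.exp(new - cur):
            bounds, cur = prop, new
            if cur > best_lp:
                best, best_lp = sorted(bounds), cur
    return best

# Two flat "scenes" with a change point at shot 10
shots = [0.0] * 10 + [5.0] * 10
print(mcmc_segment(shots))
```

Because adding a boundary always improves the fit but costs the prior penalty, the jump moves let the chain explore different scene counts while the penalty keeps it from oversegmenting, which is the role the posterior over the number of scenes plays in the paper.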
Publication Date
12-1-2005
Publication Title
Proceedings of the IEEE International Conference on Computer Vision
Volume
II
Number of Pages
1111-1116
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1109/ICCV.2005.6
Copyright Status
Unknown
Scopus ID
33745907274 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/33745907274
STARS Citation
Zhai, Yun and Shah, Mubarak, "A General Framework For Temporal Video Scene Segmentation" (2005). Scopus Export 2000s. 3295.
https://stars.library.ucf.edu/scopus2000/3295