Title
Tracking In Unstructured Crowded Scenes
Abstract
This paper presents a target tracking framework for unstructured crowded scenes. Unstructured crowded scenes are defined as those scenes where the motion of the crowd appears random, with different participants moving in different directions over time. This means each spatial location in such scenes supports more than one, i.e. multi-modal, crowd behavior. The case of tracking in structured crowded scenes, where the crowd moves coherently in a common direction and the direction of motion does not vary over time, was previously handled in [1]. In this work, we propose to model the various crowd behavior (or motion) modalities at different locations of the scene by employing the Correlated Topic Model (CTM) of [16]. In our construction, words correspond to low-level quantized motion features and topics correspond to crowd behaviors. It is then assumed that motion at each location in an unstructured crowd scene is generated by a set of behavior proportions, where behaviors represent distributions over low-level motion features. In this way, any one location in the scene may support multiple crowd behavior modalities, and these learned proportions can be used as prior information for tracking. Our approach enables us to model a diverse set of unstructured crowd domains, ranging from cluttered time-lapse microscopy videos of cell populations in vitro to footage of crowded sporting events. ©2009 IEEE.
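The generative assumption stated in the abstract can be sketched in a few lines: per-location behavior proportions are drawn from a logistic normal (the CTM's device for allowing correlated topics), and the mixture of behavior-specific word distributions yields a predictive distribution over quantized motion features, which serves as the tracking prior. All names, sizes, and parameter values below are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: a codebook of 8 quantized motion directions (the "words")
# and 3 crowd behaviors (the "topics"), each a distribution over the codebook.
n_words, n_behaviors = 8, 3
behaviors = rng.dirichlet(np.ones(n_words) * 0.5, size=n_behaviors)  # (topics x words)

# The CTM draws behavior proportions from a logistic normal, which (unlike
# LDA's Dirichlet) lets behaviors co-occur in a correlated way via Sigma.
mu = np.zeros(n_behaviors)
Sigma = np.array([[1.0, 0.5, 0.0],
                  [0.5, 1.0, 0.0],
                  [0.0, 0.0, 1.0]])
eta = rng.multivariate_normal(mu, Sigma)
theta = np.exp(eta) / np.exp(eta).sum()   # behavior proportions at one location

# Predictive distribution over motion words at this location:
# a multi-modal motion prior usable by a tracker.
word_prior = theta @ behaviors            # shape (n_words,)
```

Because `theta` mixes several behavior-specific distributions, `word_prior` can place mass on more than one motion direction at the same location, which is exactly the multi-modality the abstract describes.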
Publication Date
12-1-2009
Publication Title
Proceedings of the IEEE International Conference on Computer Vision
Number of Pages
1389-1396
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1109/ICCV.2009.5459301
Copyright Status
Unknown
Scopus ID
77953226142 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/77953226142
STARS Citation
Rodriguez, Mikel; Ali, Saad; and Kanade, Takeo, "Tracking In Unstructured Crowded Scenes" (2009). Scopus Export 2000s. 11377.
https://stars.library.ucf.edu/scopus2000/11377