Feature-Independent Context Estimation For Automatic Image Annotation
Abstract
Automatic image annotation is a highly valuable tool for image search, retrieval, and archival systems. In the absence of an annotation tool, such systems have to rely either on users' input or on the text of the webpage hosting the image to acquire a textual description. Users may provide insufficient or noisy tags, and not all of the text on a webpage necessarily describes or explains the accompanying image. It is therefore critically important to develop efficient tools for automatically annotating images with correct and sufficient tags. The context of the image plays a significant role in this process, alongside its content. A suitable quantification of this context can reduce the semantic gap between visual features and an appropriate textual description of the image. In this paper, we present an unsupervised, feature-independent quantification of image context through tensor decomposition, and we incorporate the estimated context as prior knowledge in the automatic image annotation process. Evaluation of the predicted annotations provides evidence of the effectiveness of our feature-independent context estimation method.
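The abstract names tensor decomposition as the mechanism for quantifying image context but does not specify the decomposition model or the data layout. The sketch below is purely illustrative and not the paper's method: it assumes, hypothetically, a 3-way image x word x word co-occurrence tensor and factorizes it with a plain CP/PARAFAC-style alternating-least-squares routine in NumPy, so that each image's row in the image-mode factor matrix could act as a low-dimensional context descriptor. The tensor construction, rank, and variable names are assumptions introduced here for illustration only.

    import numpy as np

    def unfold(T, mode):
        # Matricize a 3-way tensor along `mode` (columns in C order).
        return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

    def khatri_rao(A, B):
        # Column-wise Khatri-Rao product: row (a * B.shape[0] + b) equals A[a, :] * B[b, :].
        return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

    def cp_als(T, rank, n_iter=100, seed=0):
        # Rank-`rank` CP decomposition of a 3-way tensor via alternating least squares.
        rng = np.random.default_rng(seed)
        factors = [rng.standard_normal((dim, rank)) for dim in T.shape]
        for _ in range(n_iter):
            for mode in range(3):
                others = [factors[m] for m in range(3) if m != mode]  # kept in increasing mode order
                kr = khatri_rao(others[0], others[1])
                # Least-squares solve of  unfold(T, mode) ~= factors[mode] @ kr.T
                factors[mode] = np.linalg.lstsq(kr, unfold(T, mode).T, rcond=None)[0].T
        return factors

    # Toy usage on a hypothetical image x word x word co-occurrence count tensor.
    n_images, n_words, rank = 50, 30, 5
    counts = np.random.default_rng(1).poisson(1.0, size=(n_images, n_words, n_words)).astype(float)
    image_factors, word_factors, _ = cp_als(counts, rank)
    # Each row of image_factors is a rank-dimensional "context" descriptor for one image,
    # which could then be used as a prior alongside visual features during annotation.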
Publication Date
10-14-2015
Publication Title
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume
07-12-June-2015
Pages
1958-1965
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1109/CVPR.2015.7298806
Copyright Status
Unknown
Scopus ID
84959201191 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/84959201191
STARS Citation
Tariq, Amara and Foroosh, Hassan, "Feature-Independent Context Estimation For Automatic Image Annotation" (2015). Scopus Export 2015-2019. 1850.
https://stars.library.ucf.edu/scopus2015/1850