Synthesized Classifiers For Zero-Shot Learning
Abstract
Given semantic descriptions of object classes, zero-shot learning aims to accurately recognize objects of the unseen classes, from which no examples are available at the training stage, by associating them with the seen classes, from which labeled examples are provided. We propose to tackle this problem from the perspective of manifold learning. Our main idea is to align the semantic space that is derived from external information to the model space that concerns itself with recognizing visual features. To this end, we introduce a set of 'phantom' object classes whose coordinates live in both the semantic space and the model space. Serving as bases in a dictionary, they can be optimized from labeled data such that the synthesized real object classifiers achieve optimal discriminative performance. We demonstrate superior accuracy of our approach over the state of the art on four benchmark datasets for zero-shot learning, including the full ImageNet Fall 2011 dataset with more than 20,000 unseen classes.
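The abstract describes synthesizing classifiers for real classes from a small set of phantom classes that have coordinates in both the semantic space and the model space. The sketch below illustrates that idea in NumPy, assuming the combination weights are a softmax over semantic-space distances to the phantom coordinates; the paper's exact weighting scheme, and how the phantom coordinates and bases are learned from labeled seen-class data, are not specified in this record, so the function names, the `sigma` bandwidth, and the toy dimensions are purely illustrative.

```python
import numpy as np

def synthesize_classifiers(sem_embeddings, phantom_sem, phantom_bases, sigma=1.0):
    """
    Synthesize per-class linear classifiers as convex combinations of
    phantom base classifiers, weighted by similarity in the semantic space.

    sem_embeddings : (C, d_s) semantic vectors (e.g., attributes) for C real classes
    phantom_sem    : (R, d_s) semantic coordinates of R phantom classes
    phantom_bases  : (R, d_v) model-space bases, one per phantom class
    sigma          : bandwidth of the similarity kernel (an assumed hyperparameter)
    """
    # Squared Euclidean distances between real and phantom semantic coordinates.
    dists = ((sem_embeddings[:, None, :] - phantom_sem[None, :, :]) ** 2).sum(-1)
    # Softmax-style normalized similarities: each real class is expressed
    # in the "dictionary" spanned by the phantom classes.
    logits = -dists / (2.0 * sigma ** 2)
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    sims = np.exp(logits)
    sims /= sims.sum(axis=1, keepdims=True)       # (C, R)
    # Synthesized classifiers live in the model (visual feature) space.
    return sims @ phantom_bases                   # (C, d_v)

def predict(features, classifiers):
    """Assign each visual feature to the class whose synthesized classifier scores highest."""
    return np.argmax(features @ classifiers.T, axis=1)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    C, R, d_s, d_v = 10, 5, 8, 16                 # toy sizes for illustration only
    sem = rng.normal(size=(C, d_s))               # semantic descriptions of real classes
    phantom_sem = rng.normal(size=(R, d_s))       # phantom semantic coordinates
    phantom_bases = rng.normal(size=(R, d_v))     # in practice, learned from seen-class data
    W = synthesize_classifiers(sem, phantom_sem, phantom_bases)
    x = rng.normal(size=(3, d_v))                 # a few test features
    print(predict(x, W))
```

Because unseen classes only require their semantic descriptions at test time, the same `synthesize_classifiers` call produces their classifiers without any retraining, which is the zero-shot property the abstract refers to.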
Publication Date
12-9-2016
Publication Title
Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition
Volume
2016-December
Pages
5327-5336
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1109/CVPR.2016.575
Copyright Status
Unknown
Scopus ID
84986274021 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/84986274021
STARS Citation
Changpinyo, Soravit; Chao, Wei Lun; Gong, Boqing; and Sha, Fei, "Synthesized Classifiers For Zero-Shot Learning" (2016). Scopus Export 2015-2019. 4478.
https://stars.library.ucf.edu/scopus2015/4478