Human Semantic Parsing For Person Re-Identification

Abstract

Person re-identification is a challenging task, mainly due to factors such as background clutter, pose, illumination, and camera viewpoint variations. These elements hinder the extraction of robust and discriminative representations and thus prevent different identities from being successfully distinguished. To improve representation learning, local features are usually extracted from human body parts. However, the common practice for such a process has been based on bounding-box part detection. In this paper, we propose to adopt human semantic parsing which, owing to its pixel-level accuracy and its ability to model arbitrary contours, is naturally a better alternative. Our proposed SPReID integrates human semantic parsing into person re-identification and not only considerably outperforms its baseline counterpart, but also achieves state-of-the-art performance. We also show that, with a simple yet effective training strategy, standard popular deep convolutional architectures such as Inception-V3 and ResNet-152, with no modification and operating solely on the full image, can dramatically outperform the current state of the art. Our proposed methods improve state-of-the-art person re-identification on Market-1501 [48] by ~17% in mAP and ~6% in rank-1, on CUHK03 [24] by ~4% in rank-1, and on DukeMTMC-reID [50] by ~24% in mAP and ~10% in rank-1.
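
To make the core idea concrete, below is a minimal sketch (not the authors' implementation) of how pixel-level parsing masks can replace bounding-box parts when aggregating backbone features: each semantic region's probability map acts as a soft mask over the convolutional feature map, yielding one descriptor per body region. The array names, region count, and shapes are illustrative assumptions.

```python
import numpy as np

def region_pooled_features(feature_map, parsing_probs):
    """Pool a convolutional feature map into one descriptor per semantic region,
    weighting each spatial location by the probability that it belongs to that
    region (soft pixel-level masks instead of rectangular part boxes).

    feature_map   : (C, H, W) activations from a backbone CNN
    parsing_probs : (K, H, W) per-pixel probabilities over K regions
                    (e.g. foreground, head, upper body, lower body, shoes)
    returns       : (K, C) matrix of region descriptors
    """
    C, H, W = feature_map.shape
    K = parsing_probs.shape[0]
    feats = feature_map.reshape(C, H * W)    # (C, HW)
    masks = parsing_probs.reshape(K, H * W)  # (K, HW)
    # Normalize each mask so every region descriptor is a weighted average.
    weights = masks / (masks.sum(axis=1, keepdims=True) + 1e-6)
    return weights @ feats.T                 # (K, C)

# Toy usage with a random backbone output and a softmax-normalized parsing map.
rng = np.random.default_rng(0)
fmap = rng.random((256, 24, 12)).astype(np.float32)
logits = rng.random((5, 24, 12)).astype(np.float32)
probs = np.exp(logits) / np.exp(logits).sum(axis=0, keepdims=True)
descriptors = region_pooled_features(fmap, probs)
print(descriptors.shape)  # (5, 256); region descriptors can be concatenated
```

In this sketch, the per-region descriptors would typically be concatenated with a globally pooled feature to form the final representation used for matching.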

Publication Date

12-14-2018

Publication Title

Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition

Number of Pages

1062-1071

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/CVPR.2018.00117

Scopus ID

85062847826 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85062847826
