Context-Patch Based Face Hallucination Via Thresholding Locality-Constrained Representation And Reproducing Learning

Keywords

Context-patch; Face hallucination; Reproducing learning; Super-resolution; Thresholding

Abstract

Face hallucination, which refers to predicting a High-Resolution (HR) face image from an observed Low-Resolution (LR) one, is a challenging problem. Most state-of-the-art methods employ a local face structure prior to estimate the optimal representation of each patch from training patches at the same position, and achieve good reconstruction performance. However, they do not take into account the contextual information of image patches, which is very useful for representing the human face. Different from position-patch based methods, in this paper we leverage the contextual information and develop a robust and efficient context-patch face hallucination algorithm, called Thresholding Locality-constrained Representation with Reproducing learning (TLcR-RL). In TLcR-RL, we use a thresholding strategy to enhance the stability of the patch representation and the reconstruction accuracy. Additionally, we develop a reproducing learning scheme that iteratively enhances the estimated result by adding the estimated HR face to the training set. Experiments demonstrate that the proposed framework improves substantially over state-of-the-art methods, including a recently proposed deep learning based method.
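The patch-level idea described in the abstract (threshold the training context-patches to the nearest ones, solve a locality-constrained representation for the input patch, then apply the same weights to the corresponding HR training patches) can be illustrated with a minimal sketch. This is a hedged NumPy illustration, not the paper's exact objective: the function name tlcr_patch, the neighbor count K, the regularization weight lam, and the particular locality-constrained least-squares formulation are assumptions for demonstration, and the reproducing-learning loop and the aggregation of overlapping patches are omitted.

```python
import numpy as np

def tlcr_patch(y_lr, D_lr, D_hr, K=360, lam=1e-4):
    """Sketch of one thresholding locality-constrained patch step (illustrative).

    y_lr : (d,)    observed LR context-patch, vectorized
    D_lr : (d, N)  LR training context-patches as columns
    D_hr : (p, N)  corresponding HR training patches as columns
    K    : number of nearest training patches kept by thresholding (assumed value)
    lam  : locality-regularization weight (assumed value)
    """
    # Thresholding: keep only the K training patches closest to the input patch.
    dist = np.linalg.norm(D_lr - y_lr[:, None], axis=0)
    idx = np.argsort(dist)[:K]
    X, d = D_lr[:, idx], dist[idx]

    # Locality-constrained least squares (one common formulation):
    #   min_w ||y - X w||^2 + lam * ||diag(d) w||^2   s.t.  sum(w) = 1
    Z = X - y_lr[:, None]                      # data shifted by the input patch
    C = Z.T @ Z + lam * np.diag(d ** 2)        # locality-regularized covariance
    w = np.linalg.solve(C, np.ones(K))
    w /= w.sum()                               # enforce the sum-to-one constraint

    # Reuse the weights on the HR counterparts to synthesize the HR patch.
    return D_hr[:, idx] @ w
```

In the full method described above, hallucinated HR faces would be fed back into the training set and the process repeated (the reproducing learning step), which this single-patch sketch does not cover.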

Publication Date

8-28-2017

Publication Title

Proceedings - IEEE International Conference on Multimedia and Expo

Number of Pages

469-474

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/ICME.2017.8019459

Scopus ID

85030244954 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85030244954

