Weakly Supervised Facial Attribute Manipulation Via Deep Adversarial Network

Abstract

Automatically manipulating facial attributes is challenging because it requires modifying facial appearance while preserving both the person's identity and the realism of the resulting images. Unlike prior work on facial attribute parsing, we address an inverse and more challenging problem called attribute manipulation: modifying a facial image in line with a reference facial attribute. Given a source input image and reference images with a target attribute, our goal is to generate a new image (i.e., a target image) that not only possesses the new attribute but also keeps the same or similar content as the source image. To generate new facial attributes, we train a deep neural network with a combination of a perceptual content loss and two adversarial losses, which ensure the global consistency of the visual content while implementing the desired attributes, which often affect only local pixels. The model automatically adjusts the visual attributes of facial appearance and keeps the edited images as realistic as possible. The evaluation shows that the proposed model provides a unified solution to both local and global facial attribute manipulation, such as expression change and hair style transfer. Moreover, we further demonstrate that the learned attribute discriminator can be used for attribute localization.
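The training objective described above combines a perceptual content loss with two adversarial losses. The exact formulation and weights are detailed in the paper itself; the following is only a minimal NumPy sketch of how such a weighted combination might look, where the function names, the weights `w_content`, `w_attr`, `w_real`, and the use of a non-saturating generator loss are illustrative assumptions rather than the authors' actual implementation.

```python
import numpy as np

def perceptual_content_loss(feat_src, feat_gen):
    # Mean squared distance between feature representations of the
    # source and generated images (hypothetical perceptual loss).
    return float(np.mean((feat_src - feat_gen) ** 2))

def adversarial_loss(disc_scores):
    # Non-saturating generator-side adversarial loss, -log D(G(x)),
    # assuming discriminator scores lie in (0, 1).
    return float(-np.mean(np.log(disc_scores + 1e-8)))

def total_generator_loss(feat_src, feat_gen, d_attr_scores, d_real_scores,
                         w_content=1.0, w_attr=0.5, w_real=0.5):
    # Weighted sum of one content term and two adversarial terms:
    # one discriminator for the target attribute, one for realism.
    # All weights here are placeholder values, not from the paper.
    return (w_content * perceptual_content_loss(feat_src, feat_gen)
            + w_attr * adversarial_loss(d_attr_scores)
            + w_real * adversarial_loss(d_real_scores))
```

In this sketch, the content term anchors the generated image to the source globally, while the two adversarial terms push the locally edited regions toward the target attribute and toward photorealism, matching the global/local division of labor the abstract describes.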

Publication Date

5-3-2018

Publication Title

Proceedings - 2018 IEEE Winter Conference on Applications of Computer Vision, WACV 2018

Volume

2018-January

Number of Pages

112-121

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/WACV.2018.00019

Scopus ID

85050983870 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85050983870

