Improving The Improved Training Of Wasserstein Gans: A Consistency Term And Its Dual Effect

Abstract

Despite their impact on a variety of problems and applications, generative adversarial networks (GANs) are remarkably difficult to train. This issue is formally analyzed by Arjovsky & Bottou (2017), who also propose an alternative direction that sidesteps the pitfalls of the minimax two-player training of GANs. The corresponding algorithm, called Wasserstein GAN (WGAN), hinges on the 1-Lipschitz continuity of the discriminator. In this paper, we propose a novel approach to enforcing Lipschitz continuity in the training procedure of WGANs. Our approach seamlessly connects WGAN with one of the recent semi-supervised learning methods. As a result, it produces not only more photo-realistic samples than previous methods but also state-of-the-art semi-supervised learning results. In particular, our approach achieves an inception score of more than 5.0 with only 1,000 CIFAR-10 images and is, to the best of our knowledge, the first to exceed 90% accuracy on CIFAR-10 using only 4,000 labeled images.
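The consistency term referenced in the title penalizes the discriminator when two stochastic forward passes over perturbed copies of the same real input disagree by more than a margin. The sketch below is an illustrative simplification, not the authors' implementation: it assumes precomputed discriminator outputs `d_x1` and `d_x2` for the two perturbed passes and a hypothetical margin `m_prime`, and omits the paper's additional distance over the second-to-last discriminator layer.

```python
import numpy as np

def consistency_term(d_x1, d_x2, m_prime=0.0):
    """Hinge penalty on the discrepancy between two discriminator
    passes over perturbed versions of the same real batch.

    d_x1, d_x2: arrays of discriminator outputs (one pass each)
    m_prime:    margin below which discrepancies are not penalized
    """
    # Penalize only the portion of the per-sample gap above the margin.
    gap = np.abs(d_x1 - d_x2) - m_prime
    return float(np.mean(np.maximum(0.0, gap)))
```

In a full WGAN training loop this term would be added, with a weight, to the critic loss alongside the gradient penalty of WGAN-GP; the perturbations are typically realized by running the discriminator with dropout enabled.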

Publication Date

1-1-2018

Publication Title

6th International Conference on Learning Representations, ICLR 2018 - Conference Track Proceedings

Volume

2018-April

Number of Pages

1-17

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

Scopus ID

85076815442 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85076815442

