Unsupervised Feature Learning Through Divergent Discriminative Feature Accumulation

Abstract

Unlike unsupervised approaches such as autoencoders, which learn to reconstruct their inputs, the approach introduced in this paper, divergent discriminative feature accumulation (DDFA), instead continually accumulates features that make novel discriminations among the training set. DDFA features are therefore inherently discriminative from the start, even though they are trained without knowledge of the ultimate classification problem. Interestingly, DDFA also continues to add new features indefinitely (so it does not depend on a hidden layer size), is not based on minimizing error, and is inherently divergent rather than convergent, thereby providing a unique direction of research for unsupervised feature learning. The quality of the learned features is demonstrated on the MNIST dataset, where the results confirm that DDFA is a viable technique for learning useful features.

Copyright (c) 2015, Association for the Advancement of Artificial Intelligence (www.aaai.org). All rights reserved.
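The abstract describes the core loop only at a high level: propose candidate features and keep those whose discriminations over the training set are novel relative to the features accumulated so far. The sketch below illustrates that idea under stated assumptions; it is not the paper's implementation. The candidate representation (random thresholded linear projections), the novelty measure (mean Hamming distance to the k nearest archived response patterns), and the acceptance threshold are all illustrative choices, not taken from the paper.

```python
# Minimal sketch of divergent feature accumulation. Assumptions (not from
# the paper): candidates are random thresholded linear features, and novelty
# is the mean Hamming distance between a candidate's binary response pattern
# over the training set and its k nearest patterns already in the archive.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 784))   # stand-in for unlabeled training data
archive = []                       # accumulated feature parameters (w, b)
patterns = []                      # each accumulated feature's response pattern

def candidate():
    """Propose a random thresholded linear feature."""
    return rng.normal(size=X.shape[1]), rng.normal()

def response(w, b):
    """Binary discrimination pattern of a feature over the training set."""
    return (X @ w + b > 0).astype(np.float64)

def novelty(p, k=15):
    """Mean Hamming distance to the k nearest archived patterns."""
    if not patterns:
        return np.inf
    dists = np.sort([np.mean(p != q) for q in patterns])
    return dists[:k].mean()

NOVELTY_THRESHOLD = 0.25           # assumed acceptance threshold
for _ in range(2000):              # in principle, accumulation never stops
    w, b = candidate()
    p = response(w, b)
    if novelty(p) > NOVELTY_THRESHOLD:
        archive.append((w, b))
        patterns.append(p)

print(f"accumulated {len(archive)} divergent features")
```

The accumulated features could then be used as a fixed representation for a downstream classifier, which is how feature quality is typically evaluated on MNIST.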

Publication Date

6-1-2015

Publication Title

Proceedings of the National Conference on Artificial Intelligence

Volume

4

Number of Pages

2979-2985

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

Scopus ID

84960154388 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/84960154388
