Mitigating Fooling With Competitive Overcomplete Output Layer Neural Networks

Abstract

Although the introduction of deep learning has led to significant performance improvements in many machine learning applications, several recent studies have revealed that deep feedforward models are easily fooled. Fooling, in effect, results from the overgeneralization of neural networks over regions far from the training data. To circumvent this problem, this paper proposes a novel elaboration of standard neural network architectures called the competitive overcomplete output layer (COOL) neural network. Experiments demonstrate the effectiveness of COOL by visualizing the behavior of COOL networks on a low-dimensional artificial classification problem and by applying them to a high-dimensional vision domain (MNIST).
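
As a rough illustration of the idea named in the abstract, the sketch below implements one plausible reading of a competitive overcomplete output layer in PyTorch: each class is assigned several output units that compete through a single shared softmax, and a class's final score aggregates its duplicate units by taking their product, so that high confidence requires all duplicates to agree on an input. The module name COOLOutputLayer, the duplication factor omega, the omega**omega rescaling, and the layer sizes are illustrative assumptions based only on this abstract, not the authors' released code.

    import torch
    import torch.nn as nn

    class COOLOutputLayer(nn.Module):
        """Competitive overcomplete output layer: a sketch based on the
        abstract's description, not the paper's actual implementation."""
        def __init__(self, in_features, num_classes, omega=5):
            super().__init__()
            self.num_classes = num_classes
            self.omega = omega  # assumed name for the per-class duplication factor
            self.fc = nn.Linear(in_features, num_classes * omega)

        def forward(self, x):
            # One softmax over all class duplicates makes the units compete,
            # so each duplicate can dominate only part of the input space.
            probs = torch.softmax(self.fc(x), dim=1)
            probs = probs.view(-1, self.num_classes, self.omega)
            # Aggregate each class's duplicates by their product (rescaled so the
            # maximum attainable score is 1); the product is high only where all
            # duplicates agree, suppressing confidence far from the training data.
            return probs.prod(dim=2) * (self.omega ** self.omega)

    # Illustrative wiring for MNIST-sized inputs (sizes are assumptions).
    model = nn.Sequential(nn.Flatten(),
                          nn.Linear(784, 256), nn.ReLU(),
                          COOLOutputLayer(256, num_classes=10, omega=5))
    scores = model(torch.randn(8, 1, 28, 28))
    print(scores.shape)  # torch.Size([8, 10])

The product aggregation is the point of the sketch: averaging the duplicates would reward any single confident unit, whereas the product stays low unless every duplicate supports the class, which is the behavior the abstract attributes to COOL in regions far from the training data.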

Publication Date

6-30-2017

Publication Title

Proceedings of the International Joint Conference on Neural Networks

Volume

2017-May

Number of Pages

518-525

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/IJCNN.2017.7965897

Scopus ID

85031044119 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85031044119
