Title

Overtraining in Fuzzy ARTMAP: Myth or Reality?

Abstract

In this paper we examine the issue of overtraining in Fuzzy ARTMAP. Overtraining in Fuzzy ARTMAP manifests itself in two ways: (a) it degrades the generalization performance of Fuzzy ARTMAP as training progresses, and (b) it creates unnecessarily large Fuzzy ARTMAP neural network architectures. In this work we demonstrate that overtraining does happen in Fuzzy ARTMAP, and we propose an old remedy for its cure: cross-validation. In our experiments we compare the performance of Fuzzy ARTMAP trained (i) until the completion of training, (ii) for one epoch, and (iii) until its performance on a validation set is maximized. The experiments were performed on artificial and real databases. The conclusion drawn from these experiments is that cross-validation is a useful procedure in Fuzzy ARTMAP, because it produces smaller Fuzzy ARTMAP architectures with improved generalization performance. The trade-off is that cross-validation introduces additional computational complexity into the training phase of Fuzzy ARTMAP.
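As a concrete illustration of the stopping rule the abstract describes (criterion (iii)), the following Python sketch trains an incremental classifier epoch by epoch and halts when its accuracy on a held-out validation set stops improving. Fuzzy ARTMAP itself is not implemented here; scikit-learn's SGDClassifier stands in as a hypothetical example of any classifier trained one epoch at a time, and the patience and max_epochs values are illustrative assumptions, not parameters from the paper.

    import copy

    import numpy as np
    from sklearn.datasets import make_classification
    from sklearn.linear_model import SGDClassifier
    from sklearn.model_selection import train_test_split

    # Synthetic data split into a training set and a validation set.
    X, y = make_classification(n_samples=1000, random_state=0)
    X_train, X_val, y_train, y_val = train_test_split(
        X, y, test_size=0.25, random_state=0
    )

    # Stand-in for Fuzzy ARTMAP: any model with per-epoch incremental training.
    model = SGDClassifier(random_state=0)
    classes = np.unique(y_train)

    best_score, best_model, epochs_since_best = -np.inf, None, 0
    max_epochs, patience = 100, 5  # assumed values for illustration

    for epoch in range(max_epochs):
        # One pass over the training set (one "epoch" of incremental learning).
        model.partial_fit(X_train, y_train, classes=classes)

        # Stop once validation accuracy has not improved for `patience`
        # epochs, keeping a copy of the best-scoring model seen so far.
        score = model.score(X_val, y_val)
        if score > best_score:
            best_score, best_model = score, copy.deepcopy(model)
            epochs_since_best = 0
        else:
            epochs_since_best += 1
            if epochs_since_best >= patience:
                break

    print(f"stopped after epoch {epoch}, best validation accuracy {best_score:.3f}")

The returned best_model corresponds to the training point at which validation performance was maximized, mirroring the paper's comparison against training to completion and training for a single epoch.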

Publication Date

1-1-2001

Publication Title

Proceedings of the International Joint Conference on Neural Networks

Volume

2

Number of Pages

1186-1190

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

Scopus ID

0034863529 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/0034863529
