AG-ART: An adaptive approach to evolving ART architectures

Authors

    A. Kaylani; M. Georgiopoulos; M. Mollaghasemi; G. C. Anagnostopoulos

    Abbreviated Journal Title

    Neurocomputing

    Keywords

    Machine learning; Classification; ARTMAP; Genetic algorithms; Genetic operators; Category proliferation; ARTIFICIAL NEURAL-NETWORKS; FUZZY ARTMAP; MULTIOBJECTIVE OPTIMIZATION; MULTIDIMENSIONAL MAPS; CLASSIFICATION; ALGORITHMS; INFORMATION; PREDICTION; COMPLEXITY; REDUCTION; Computer Science, Artificial Intelligence

    Abstract

    This paper focuses on classification problems, and in particular on the evolution of ARTMAP architectures using genetic algorithms, with the objective of improving generalization performance and alleviating the adaptive resonance theory (ART) category proliferation problem. In a previous effort, we introduced evolutionary fuzzy ARTMAP (FAM), referred to as genetic Fuzzy ARTMAP (GFAM). In this paper we apply an improved genetic algorithm to FAM and extend these ideas to two other ART architectures: ellipsoidal ARTMAP (EAM) and Gaussian ARTMAP (GAM). One of the major advantages of the proposed improved genetic algorithm is that it adapts the GA parameters automatically, and in a way that takes into consideration the intricacies of the classification problem under consideration. The resulting genetically engineered ART architectures are justifiably referred to as AG-FAM, AG-EAM and AG-GAM, or collectively as AG-ART (adaptive genetically engineered ART). We compare the performance (in terms of accuracy, size, and computational cost) of the AG-ART architectures with GFAM, and with other ART architectures that have appeared in the literature and attempted to solve the category proliferation problem. Our results demonstrate that AG-ART architectures exhibit better performance than their other ART counterparts (semi-supervised ART) and better performance than GFAM. We also compare AG-ART's performance to other related results published in the classification literature, and demonstrate that AG-ART architectures exhibit competitive generalization performance and, quite often, produce smaller classifiers in solving the same classification problems. We also show that AG-ART's performance gains are achieved within a reasonable computational budget. (C) 2008 Elsevier B.V. All rights reserved.

    Journal Title

    Neurocomputing

    Volume

    72

    Issue/Number

    10-12

    Publication Date

    1-1-2009

    Document Type

    Article

    Language

    English

    First Page

    2079

    Last Page

    2092

    WOS Identifier

    WOS:000266702300003

    ISSN

    0925-2312
