Title

Learning in the feed-forward random neural network: A critical review

Authors

M. Georgiopoulos; C. Li; T. Kocak

Abbreviated Journal Title

Perform. Eval.

Keywords

Random neural network; Learning; Gradient descent; Multi-layer perceptron; Error functions; Evolutionary neural networks; ART; SVM; CART; Multi-objective optimization; PARTICLE SWARM OPTIMIZATION; MULTIPLE DATA SETS; VIDEO QUALITY; FUZZY-ARTMAP; SYNCHRONIZED INTERACTIONS; STATISTICAL COMPARISONS; MULTILAYER PERCEPTRONS; GLOBAL OPTIMIZATION; PATTERN-RECOGNITION; QUEUING-NETWORKS; Computer Science, Hardware & Architecture; Computer Science, Theory & Methods

Abstract

The Random Neural Network (RNN) has received considerable attention since its inception in 1989 and has been successfully used in a number of applications. In this critical review paper we focus on the feed-forward RNN model and its ability to solve classification problems. In particular, we pay special attention to the RNN literature related to learning algorithms that discover the RNN interconnection weights, suggest other potential algorithms that can be used to find the RNN interconnection weights, and compare the RNN model with other neural-network-based and non-neural-network-based classifier models. In summary, the extensive literature review and experimentation with the feed-forward RNN model provided us with the necessary guidance to introduce six critical review comments that identify gaps in the RNN-related literature and suggest directions for future research. (C) 2010 Elsevier B.V. All rights reserved.

Journal Title

Performance Evaluation

Volume

68

Issue/Number

4

Publication Date

1-1-2011

Document Type

Review

Language

English

First Page

361

Last Page

384

WOS Identifier

WOS:000289541800007

ISSN

0166-5316
