Title

On-Line Gauss-Newton-Based Learning For Fully Recurrent Neural Networks

Abstract

In this paper we propose a novel, Gauss-Newton-based variant of the Real Time Recurrent Learning (RTRL) algorithm by Williams and Zipser (Neural Comput. 1 (1989) 270-280) for on-line training of Fully Recurrent Neural Networks. The new approach stands as a robust and effective compromise between the original, gradient-based RTRL (low computational complexity, slow convergence) and Newton-based variants of RTRL (high computational complexity, fast convergence). By gathering information over time to form Gauss-Newton search vectors, the new learning algorithm, GN-RTRL, converges faster to a better quality solution than the original algorithm. Experimental results reflect these qualities of GN-RTRL and also show that, in practice, GN-RTRL may incur a lower computational cost than the original RTRL. © 2005 Elsevier Ltd. All rights reserved.
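The abstract contrasts gradient-based RTRL with a Gauss-Newton update built from information gathered over time. The sketch below is only an illustration of that general idea, not the paper's formulation: it runs the standard RTRL sensitivity recursion for a small fully recurrent network and, over an assumed accumulation window, forms a damped Gauss-Newton search vector from the sensitivities. All names, the window size, and the damping term `lam` are assumptions for the example.

```python
# Illustrative Gauss-Newton-flavoured RTRL sketch (not the paper's exact method).
import numpy as np

rng = np.random.default_rng(0)

n, m = 4, 2                     # recurrent units, external inputs
n_in = n + m + 1                # each unit sees: recurrent states + inputs + bias
W = 0.1 * rng.standard_normal((n, n_in))
n_w = W.size                    # total number of trainable weights

y = np.zeros(n)                 # unit activations
P = np.zeros((n, n_w))          # RTRL sensitivities d y_k / d w_(i,j)

lam, window = 1e-2, 20          # damping and accumulation window (assumed values)
A = np.zeros((n_w, n_w))        # accumulated P^T P (Gauss-Newton matrix)
b = np.zeros(n_w)               # accumulated P^T e

for t in range(200):
    x = rng.standard_normal(m)                # stand-in input stream
    d = np.zeros(n); d[0] = np.sin(0.1 * t)   # stand-in target for unit 0

    z = np.concatenate([y, x, [1.0]])         # augmented input with bias
    s = W @ z
    y_new = np.tanh(s)
    fprime = 1.0 - y_new ** 2                 # tanh derivative

    # RTRL recursion: P <- diag(f') (W_rec P + dnet/dW),
    # where dnet/dW[k, (i, j)] = delta_{k,i} * z_j
    dnet = np.zeros((n, n_w))
    for i in range(n):
        dnet[i, i * n_in:(i + 1) * n_in] = z
    P = fprime[:, None] * (W[:, :n] @ P + dnet)
    y = y_new

    # Gather Gauss-Newton information over time
    e = d - y                                 # instantaneous error
    A += P.T @ P
    b += P.T @ e

    if (t + 1) % window == 0:
        # Damped Gauss-Newton search vector: (A + lam I) dw = b
        dw = np.linalg.solve(A + lam * np.eye(n_w), b)
        W += dw.reshape(W.shape)
        A[:] = 0.0
        b[:] = 0.0
```

Compared with plain RTRL, which would update the weights along the instantaneous gradient P.T @ e, this sketch spends extra work on the linear solve but reuses the same sensitivity matrix P, which is where the compromise between cost and convergence speed described in the abstract comes from.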

Publication Date

11-30-2005

Publication Title

Nonlinear Analysis, Theory, Methods and Applications

Volume

63

Issue

5-7

Number of Pages

-

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1016/j.na.2005.02.015

Scopus ID

28044460611 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/28044460611
