On-line Gauss-Newton-based learning for fully recurrent neural networks
Abbreviated Journal Title: Nonlinear Anal.-Theory Methods Appl.
Mathematics, Applied; Mathematics
In this paper we propose a novel, Gauss-Newton-based variant of the Real Time Recurrent Learning (RTRL) algorithm by Williams and Zipser (Neural Comput. 1 (1989) 270-280) for on-line training of Fully Recurrent Neural Networks. The new approach stands as a robust and effective compromise between the original, gradient-based RTRL (low computational complexity, slow convergence) and Newton-based variants of RTRL (high computational complexity, fast convergence). By gathering information over time to form Gauss-Newton search vectors, the new learning algorithm, GN-RTRL, converges faster to a better-quality solution than the original algorithm. Experimental results reflect these qualities of GN-RTRL, as well as the fact that, in practice, GN-RTRL may incur a lower computational cost than the original RTRL. (C) 2005 Elsevier Ltd. All rights reserved.
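To make the contrast in the abstract concrete, the following is a minimal NumPy sketch (not the paper's implementation; all names and the toy task are illustrative) of the two ingredients it mentions: RTRL's recursive sensitivity update for a small fully recurrent network, and the difference between a plain gradient step and a damped Gauss-Newton step built from the same Jacobian information.

```python
import numpy as np

rng = np.random.default_rng(0)

n, m = 2, 1  # hidden units, external inputs (toy sizes, chosen arbitrarily)
W = 0.1 * rng.standard_normal((n, n + m + 1))  # recurrent + input + bias weights

def rtrl_jacobian(W, xs):
    """Run the network over a sequence while maintaining the RTRL
    sensitivities p[k, i, j] = d h_k / d W_ij; return the final state
    and the full sensitivity tensor."""
    n = W.shape[0]
    h = np.zeros(n)
    p = np.zeros((n, *W.shape))  # sensitivity of each unit wrt each weight
    for x in xs:
        z = np.concatenate([h, x, [1.0]])  # previous state, input, bias
        h_new = np.tanh(W @ z)
        d = 1.0 - h_new**2                 # tanh'
        # RTRL recursion: p_k <- d_k * (sum_l W_kl p_l + e_k z^T)
        p_new = np.zeros_like(p)
        for k in range(n):
            rec = np.tensordot(W[k, :n], p, axes=(0, 0))  # via previous state
            rec[k] += z                                   # direct dependence
            p_new[k] = d[k] * rec
        h, p = h_new, p_new
    return h, p

# Toy task: drive the output of unit 0 toward a scalar target.
xs = [rng.standard_normal(m) for _ in range(5)]
target = 0.3
h, p = rtrl_jacobian(W, xs)
err = target - h[0]
J = p[0].ravel()  # Jacobian row of the output wrt all weights

# Plain RTRL (gradient) step vs. a damped Gauss-Newton step
# solving (J^T J + lam*I) dw = J^T err.
lr, lam = 0.5, 1e-3
grad_step = lr * err * J
gn_step = np.linalg.solve(np.outer(J, J) + lam * np.eye(J.size), J * err)

results = {}
for name, step in [("gradient", grad_step), ("Gauss-Newton", gn_step)]:
    W_new = W + step.reshape(W.shape)
    h_new, _ = rtrl_jacobian(W_new, xs)
    results[name] = abs(target - h_new[0])
    print(f"{name:12s} |error| {abs(err):.4f} -> {results[name]:.4f}")
```

The Gauss-Newton step reuses exactly the sensitivity information RTRL already computes, which is the sense in which the paper positions GN-RTRL between gradient-based RTRL and full Newton variants: second-order-like search directions without forming true Hessians.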
Nonlinear Analysis-Theory Methods & Applications
"On-line Gauss-Newton-based learning for fully recurrent neural networks" (2005). Faculty Bibliography 2000s. 5742.