Simple Evolutionary Optimization Can Rival Stochastic Gradient Descent In Neural Networks
Keywords
Artificial intelligence; Deep learning; Machine learning; Neural networks; Pattern recognition and classification
Abstract
While evolutionary algorithms (EAs) have long offered an alternative approach to optimization, in recent years back-propagation through stochastic gradient descent (SGD) has come to dominate the fields of neural network optimization and deep learning. One hypothesis for the absence of EAs in deep learning is that modern neural networks have become so high dimensional that evolution with its inexact gradient cannot match the exact gradient calculations of backpropagation. Furthermore, the evaluation of a single individual in evolution on the big data sets now prevalent in deep learning would present a prohibitive obstacle to efficient optimization. This paper challenges these views, suggesting that EAs can be made to run significantly faster than previously thought by evaluating individuals only on a small number of training examples per generation. Surprisingly, using this approach with only a simple EA (called the limited evaluation EA or LEEA) is competitive with the performance of the state-of-the-art SGD variant RMSProp on several benchmarks with neural networks with over 1,000 weights. More investigation is warranted, but these initial results suggest the possibility that EAs could be the first viable training alternative for deep learning outside of SGD, thereby opening up deep learning to all the tools of evolutionary computation.
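The core idea in the abstract, scoring each individual on only a small batch of training examples per generation, can be illustrated with a minimal sketch. The toy regression task, network size, selection scheme, and hyperparameters below are illustrative assumptions and do not reproduce the paper's exact LEEA (which also accumulates fitness across generations).

```python
# Minimal sketch of the limited-evaluation idea: each generation, candidate
# weight vectors are scored on a small random batch of training examples
# rather than the full data set. Task, network, and hyperparameters are
# illustrative assumptions, not the paper's exact LEEA.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data set (assumed for illustration).
X = rng.uniform(-1.0, 1.0, size=(1000, 2))
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]

N_HIDDEN = 16
N_WEIGHTS = 2 * N_HIDDEN + N_HIDDEN + N_HIDDEN + 1  # weights plus biases

def forward(w, x):
    """Evaluate a small 2-16-1 tanh network encoded as a flat weight vector."""
    w1 = w[:2 * N_HIDDEN].reshape(2, N_HIDDEN)
    b1 = w[2 * N_HIDDEN:3 * N_HIDDEN]
    w2 = w[3 * N_HIDDEN:4 * N_HIDDEN]
    b2 = w[-1]
    h = np.tanh(x @ w1 + b1)
    return h @ w2 + b2

def batch_fitness(w, idx):
    """Negative mean squared error on the given subset of training examples."""
    pred = forward(w, X[idx])
    return -np.mean((pred - y[idx]) ** 2)

POP, ELITE, GENS, BATCH, SIGMA = 100, 10, 2000, 20, 0.05
pop = [rng.normal(0.0, 0.5, N_WEIGHTS) for _ in range(POP)]

for gen in range(GENS):
    idx = rng.integers(0, len(X), size=BATCH)  # limited evaluation: tiny batch
    scored = sorted(pop, key=lambda w: batch_fitness(w, idx), reverse=True)
    elites = scored[:ELITE]
    # Next generation: keep elites, fill the rest with mutated elite copies.
    pop = list(elites)
    while len(pop) < POP:
        parent = elites[rng.integers(ELITE)]
        pop.append(parent + rng.normal(0.0, SIGMA, N_WEIGHTS))

# Final selection on the full data set for reporting purposes.
full_idx = np.arange(len(X))
best = max(pop, key=lambda w: batch_fitness(w, full_idx))
print("full-data MSE of best individual:", -batch_fitness(best, full_idx))
```

Because each generation touches only BATCH examples, the per-generation cost is independent of the data set size, which is the speedup the abstract attributes to limited evaluation; the noise this introduces into fitness estimates is what the paper's fitness-accumulation mechanism is meant to dampen.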
Publication Date
7-20-2016
Publication Title
GECCO 2016 - Proceedings of the 2016 Genetic and Evolutionary Computation Conference
Number of Pages
477-484
Document Type
Article; Proceedings Paper
Personal Identifier
scopus
DOI Link
https://doi.org/10.1145/2908812.2908916
Copyright Status
Unknown
Scopus ID
84985943094 (Scopus)
Source API URL
https://api.elsevier.com/content/abstract/scopus_id/84985943094
STARS Citation
Morse, Gregory and Stanley, Kenneth O., "Simple Evolutionary Optimization Can Rival Stochastic Gradient Descent In Neural Networks" (2016). Scopus Export 2015-2019. 4462.
https://stars.library.ucf.edu/scopus2015/4462