Title

Evolving Plastic Neural Networks With Novelty Search

Keywords

adaptation; learning; neural networks; neuroevolution; neuromodulation; novelty search

Abstract

Biological brains can adapt and learn from past experience. Yet neuroevolution, that is, automatically creating artificial neural networks (ANNs) through evolutionary algorithms, has sometimes focused on static ANNs that cannot change their weights during their lifetime. A profound problem with evolving adaptive systems is that learning to learn is highly deceptive. Because it is easier at first to improve fitness without evolving the ability to learn, evolution is likely to exploit domain-dependent static (i.e., nonadaptive) heuristics. This article analyzes this inherent deceptiveness in a variety of dynamic, reward-based learning tasks, and proposes a way to escape the deceptive trap of static policies based on the novelty search algorithm. The main idea in novelty search is to abandon objective-based fitness and instead search only for novel behavior, which avoids deception entirely. A series of experiments and an in-depth analysis show how behaviors that could potentially serve as a stepping stone to finding adaptive solutions are discovered by novelty search yet are missed by fitness-based search. The conclusion is that novelty search has the potential to foster the emergence of adaptive behavior in reward-based learning tasks, thereby opening a new direction for research in evolving plastic ANNs. © The Author(s) 2010.
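
For readers unfamiliar with the method named in the abstract, the following is a minimal illustrative sketch (not code from the article) of the standard novelty metric from Lehman and Stanley's novelty search: an individual's novelty is its average distance to its k nearest neighbors in behavior space, measured against the current population plus an archive of previously novel behaviors. The function names, k, and the archive threshold are placeholder choices.

# Sketch of the novelty-search scoring step; names and parameters are illustrative.
import numpy as np

def novelty_score(behavior, population_behaviors, archive, k=15):
    """Mean distance from `behavior` to its k nearest neighbors in behavior space."""
    pool = np.asarray(list(population_behaviors) + list(archive), dtype=float)
    dists = np.linalg.norm(pool - np.asarray(behavior, dtype=float), axis=1)
    nearest = np.sort(dists)[:k]  # if the pool is smaller than k, use all of it
    return float(nearest.mean())

def maybe_archive(behavior, population_behaviors, archive, threshold=1.0, k=15):
    """One common archiving policy: keep behaviors whose novelty exceeds a threshold."""
    if novelty_score(behavior, population_behaviors, archive, k) > threshold:
        archive.append(list(behavior))

In a neuroevolution loop, this score simply replaces the objective fitness when selecting parents, so search pressure rewards behavioral difference rather than progress toward the task goal.
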

Publication Date

12-1-2010

Publication Title

Adaptive Behavior

Volume

18

Issue

6

Pages

470-491

Document Type

Article

Personal Identifier

scopus

DOI Link

https://doi.org/10.1177/1059712310379923

Scopus ID

78751668734 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/78751668734

