Authors

S. Risi; K. O. Stanley

Abbreviated Journal Title

Artif. Life

Keywords

Compositional pattern-producing networks; indirect encoding; HyperNEAT; neuroevolution; artificial neural networks; generative and developmental systems; ARTIFICIAL NEURAL-NETWORKS; EVOLUTION; EVOLVABILITY; MODULARITY; REPRESENTATION; REGULARITY; SYSTEMS; MODEL; Computer Science, Artificial Intelligence; Computer Science, Theory & Methods

Abstract

Intelligence in nature is the product of living brains, which are themselves the product of natural evolution. Although researchers in the field of neuroevolution (NE) attempt to recapitulate this process, artificial neural networks (ANNs) so far evolved through NE algorithms do not match the distinctive capabilities of biological brains. The recently introduced hypercube-based neuroevolution of augmenting topologies (HyperNEAT) approach narrowed this gap by demonstrating that the pattern of weights across the connectivity of an ANN can be generated as a function of its geometry, thereby allowing large ANNs to be evolved for high-dimensional problems. Yet the positions and number of the neurons connected through this approach must be decided a priori by the user and, unlike in living brains, cannot change during evolution. Evolvable-substrate HyperNEAT (ES-HyperNEAT), introduced in this article, addresses this limitation by automatically deducing the node geometry from implicit information in the pattern of weights encoded by HyperNEAT, thereby avoiding the need to evolve explicit placement. This approach not only can evolve the location of every neuron in the network, but also can represent regions of varying density, which means resolution can increase holistically over evolution. ES-HyperNEAT is demonstrated through multi-task, maze navigation, and modular retina domains, revealing that the ANNs generated by this new approach assume natural properties such as neural topography and geometric regularity. Also importantly, ES-HyperNEAT's compact indirect encoding can be seeded to begin with a bias toward a desired class of ANN topographies, which facilitates the evolutionary search. The main conclusion is that ES-HyperNEAT significantly expands the scope of neural structures that evolution can discover.
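The core idea the abstract describes — generating an ANN's connection weights as a function of neuron geometry — can be illustrated with a minimal sketch. Here a simple closed-form function stands in for an evolved CPPN (a real HyperNEAT CPPN is itself an evolved network); the grid layout, threshold value, and function are illustrative assumptions, not the paper's actual setup.

```python
import math

# Toy stand-in for an evolved CPPN: any smooth function of the source
# and target neuron coordinates serves to illustrate the principle.
def cppn(x1, y1, x2, y2):
    return math.sin(2.0 * (x1 - x2)) * math.exp(-(y1 - y2) ** 2)

def substrate_weights(coords, threshold=0.2):
    """Query the CPPN for every neuron pair; a connection is expressed
    only when the CPPN output magnitude exceeds the threshold."""
    weights = {}
    for (x1, y1) in coords:
        for (x2, y2) in coords:
            w = cppn(x1, y1, x2, y2)
            if abs(w) > threshold:
                weights[((x1, y1), (x2, y2))] = w
    return weights

# A fixed 3x3 grid of neuron positions: in plain HyperNEAT this layout
# must be chosen a priori; ES-HyperNEAT instead derives neuron positions
# from where the CPPN's weight pattern carries information.
grid = [(x / 2.0, y / 2.0) for x in range(-1, 2) for y in range(-1, 2)]
weights = substrate_weights(grid)
```

Because the weight pattern is a function of geometry, regularities such as symmetry or repetition in the CPPN translate directly into regular connectivity across the substrate, which is what lets the encoding scale to large networks.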

Journal Title

Artificial Life

Volume

18

Issue/Number

4

Publication Date

1-1-2012

Document Type

Article

Language

English

First Page

331

Last Page

363

WOS Identifier

WOS:000310571900001

ISSN

1064-5462
