A Demonstration Of Stability-Plasticity Imbalance In Multi-Agent, Decomposition-Based Learning

Keywords

Decomposition-based reinforcement learning; Layered learning; Stability-plasticity dilemma

Abstract

Layered learning is a machine learning paradigm used in conjunction with direct-policy search reinforcement learning methods to find high-performance agent behaviors for complex tasks. At its core, layered learning is a decomposition-based paradigm that shares many characteristics with robot shaping, transfer learning, hierarchical decomposition, and incremental learning. Previous studies have provided evidence that layered learning can outperform standard monolithic learning methods in many cases. The stability-plasticity dilemma is a common problem in machine learning that forces learning agents to compromise between retaining previously learned information and incorporating new incoming information. Although existing work implies that a stability-plasticity imbalance greatly limits layered learning agents' ability to learn optimally, no work explicitly verifies the existence of the imbalance or its causes. This work investigates the stability-plasticity imbalance and demonstrates that layered learning does indeed heavily favor plasticity, which can cause learned subtask proficiency to be lost when new tasks are learned. We conclude by identifying potential causes of the imbalance in layered learning and providing high-level advice about how to mitigate its negative effects.
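
The imbalance the abstract describes can be illustrated with a minimal, hypothetical sketch: a direct-policy search first optimizes shared parameters for a subtask layer, then continues optimizing the same parameters for a later layer, with nothing anchoring the subtask solution. The toy objectives, hill-climbing routine, and hyperparameters below are illustrative assumptions, not the paper's actual experimental setup.

```python
# Minimal sketch of stability-plasticity imbalance in layered learning.
# All names, objectives, and constants here are illustrative assumptions.
import random

random.seed(0)

def fitness_subtask(params):
    # Layer 1 objective: reward peaks when params are near target A.
    target = [1.0, 1.0, 1.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def fitness_full_task(params):
    # Layer 2 objective: reward peaks near a different target B,
    # pulling the shared parameters away from the subtask optimum.
    target = [-1.0, -1.0, -1.0]
    return -sum((p - t) ** 2 for p, t in zip(params, target))

def hill_climb(params, fitness, steps=2000, sigma=0.1):
    # Direct-policy search: keep a random mutation only if it improves fitness.
    best = fitness(params)
    for _ in range(steps):
        candidate = [p + random.gauss(0, sigma) for p in params]
        f = fitness(candidate)
        if f > best:
            params, best = candidate, f
    return params

# Layer 1: learn the subtask.
params = hill_climb([0.0, 0.0, 0.0], fitness_subtask)
print("subtask fitness after layer 1:", round(fitness_subtask(params), 3))

# Layer 2: continue the search on the full task with the same parameters.
# Nothing preserves the subtask solution, so plasticity dominates and
# subtask fitness degrades sharply (the forgetting the paper demonstrates).
params = hill_climb(params, fitness_full_task)
print("subtask fitness after layer 2:", round(fitness_subtask(params), 3))
```

Running the sketch shows subtask fitness near its optimum after layer 1 and substantially degraded after layer 2, since the unconstrained search freely overwrites the earlier layer's solution.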

Publication Date

3-2-2016

Publication Title

Proceedings - 2015 IEEE 14th International Conference on Machine Learning and Applications, ICMLA 2015

Number of Pages

1070-1075

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1109/ICMLA.2015.106

Scopus ID

84969590506 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/84969590506
