Learning Continuous State/Action Models For Humanoid Robots

Abstract

Reinforcement learning (RL) is a popular choice for solving robotic control problems. However, applying RL techniques to the control of humanoid robots with high degrees of freedom remains problematic because sufficient training data is difficult to acquire. The problem is compounded by the fact that most real-world problems involve continuous states and actions. For RL to scale to these settings, the algorithm must be sample efficient. Model-based methods tend to be more data efficient than model-free approaches and have the added advantage that a single learned model can generalize to multiple control problems. This paper proposes a model approximation algorithm for continuous states and actions that integrates case-based reasoning (CBR) with Hidden Markov Models (HMMs) to generalize from a small set of state instances. The paper demonstrates that the performance of the learned model is close to that of the system dynamics it approximates, where performance is measured in terms of sampling error.
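The abstract gives no implementation details, so the following is only a rough, hypothetical sketch of the case-based component it describes: observed (state, action, next state) cases are stored, and the successor of an unseen state-action pair is predicted from its nearest stored cases. The class name, the inverse-distance weighting, and the toy dynamics are all illustrative assumptions, not the authors' algorithm; the HMM component of the paper's approach is omitted entirely.

```python
import numpy as np

class CaseBasedModel:
    """Hypothetical case-based transition model: stores observed
    (state, action, next_state) cases and predicts the successor of a
    new state-action pair by distance-weighted averaging over the
    k nearest stored cases."""

    def __init__(self, k=5):
        self.k = k
        self.cases = []  # list of (state, action, next_state) tuples

    def add_case(self, state, action, next_state):
        self.cases.append((np.asarray(state, float),
                           np.asarray(action, float),
                           np.asarray(next_state, float)))

    def predict(self, state, action):
        # Retrieve the k cases whose (state, action) key is closest
        # to the query, then blend their successors.
        query = np.concatenate([state, action])
        keys = np.array([np.concatenate([s, a]) for s, a, _ in self.cases])
        dists = np.linalg.norm(keys - query, axis=1)
        nearest = np.argsort(dists)[:self.k]
        weights = 1.0 / (dists[nearest] + 1e-8)   # inverse-distance weights
        weights /= weights.sum()
        successors = np.array([self.cases[i][2] for i in nearest])
        return weights @ successors               # weighted successor estimate

# Usage on toy dynamics (purely illustrative):
model = CaseBasedModel(k=3)
rng = np.random.default_rng(0)
for _ in range(200):
    s = rng.uniform(-1, 1, size=2)
    a = rng.uniform(-1, 1, size=1)
    model.add_case(s, a, s + 0.1 * np.concatenate([a, a]))
print(model.predict(np.array([0.2, -0.3]), np.array([0.5])))
```

Inverse-distance weighting is just one plausible choice here; kernel regression or plain k-nearest-neighbor averaging would serve the same generalization role, and the paper's HMM component would add a latent-state layer on top of this kind of case lookup.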

Publication Date

1-1-2016

Publication Title

Proceedings of the 29th International Florida Artificial Intelligence Research Society Conference, FLAIRS 2016

Number of Pages

392-397

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

Scopus ID

85004028231 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85004028231
