Learning From Demonstration: Generalization Via Task Segmentation

Abstract

This paper presents a motion segmentation algorithm that partitions a trajectory learned from demonstration such that each segment is locally maximally different from its neighbors. The segmentation is then exploited to appropriately scale (dilate/squeeze and/or rotate) a nominal trajectory, learned from a few demonstrations on a fixed experimental setup, so that it applies to different experimental settings without expanding the dataset or retraining the robot. The algorithm is computationally efficient, making it easy to transition between different environments. Experimental results on the Baxter robotic platform showcase the algorithm's ability to accurately transfer a feeding task.
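To make the segment-then-rescale idea concrete, here is a minimal hypothetical sketch; it is not the authors' implementation. It assumes a planar (N, 2) trajectory, uses a heading-change heuristic as a stand-in for the paper's "locally maximally different" criterion, and adapts each nominal segment to new endpoint waypoints with a per-segment similarity transform (rotation plus uniform scale), mirroring the dilate/squeeze-and-rotate step described in the abstract. All names, thresholds, and the 2-D restriction are illustrative assumptions.

```python
# Hypothetical sketch (not the paper's algorithm): segment a 2-D demonstrated
# trajectory at sharp heading changes, then map each segment into a new
# workspace with a rotation + uniform scale fitted to new endpoint waypoints.
import numpy as np

def segment_by_direction_change(traj, angle_thresh=np.pi / 6):
    """Split an (N, 2) trajectory where the motion direction changes sharply.

    A stand-in segmentation criterion; the paper's criterion (each segment
    locally maximally different from its neighbors) is not reproduced here.
    """
    vel = np.diff(traj, axis=0)                      # per-step displacement
    heading = np.unwrap(np.arctan2(vel[:, 1], vel[:, 0]))
    dtheta = np.abs(np.diff(heading))                # heading change per step
    breakpoints = np.where(dtheta > angle_thresh)[0] + 1
    bounds = np.concatenate(([0], breakpoints, [len(traj) - 1]))
    return [traj[a:b + 1] for a, b in zip(bounds[:-1], bounds[1:])]

def fit_similarity(p0, p1, q0, q1):
    """Return a map (rotation + uniform scale + translation) sending the
    nominal segment endpoints p0 -> q0 and p1 -> q1."""
    u, v = p1 - p0, q1 - q0
    s = np.linalg.norm(v) / np.linalg.norm(u)        # dilate/squeeze factor
    ang = np.arctan2(v[1], v[0]) - np.arctan2(u[1], u[0])  # rotation angle
    c, si = np.cos(ang), np.sin(ang)
    R = s * np.array([[c, -si], [si, c]])
    return lambda pts: (pts - p0) @ R.T + q0

def adapt_trajectory(segments, new_waypoints):
    """Rescale each nominal segment onto consecutive new waypoints.

    Expects len(new_waypoints) == len(segments) + 1; shared join points are
    duplicated in the output and can be de-duplicated if needed.
    """
    out = []
    for seg, q0, q1 in zip(segments, new_waypoints[:-1], new_waypoints[1:]):
        T = fit_similarity(seg[0], seg[-1], np.asarray(q0), np.asarray(q1))
        out.append(T(seg))
    return np.vstack(out)
```

Because each segment is rescaled independently, only the new segment endpoints must be supplied for a new setting; no additional demonstrations are needed, which is the computational-efficiency point the abstract makes.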

Publication Date

11-6-2017

Publication Title

IOP Conference Series: Materials Science and Engineering

Volume

261

Issue

1

Document Type

Article; Proceedings Paper

Personal Identifier

scopus

DOI Link

https://doi.org/10.1088/1757-899X/261/1/012001

Scopus ID

85035087355 (Scopus)

Source API URL

https://api.elsevier.com/content/abstract/scopus_id/85035087355
