Date of Original Version

12-2010

Type

Conference Proceeding

Rights Management

© 2010 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Abstract or Description

A humanoid robot can perform a task through a policy that maps its sensed state to the appropriate task actions. We assume that a hand-coded controller can capture such a mapping only for the basic cases of the given task. As the complexity of the situation increases, the controller becomes harder to refine, and such refinements are often tedious and error prone. Based on the fact that a human can detect the failures of a robot executing the hand-coded controller, in this paper we present a corrective learning from demonstration approach to improve the robot's performance. Corrections are captured as new state-action pairs, and during autonomous execution the demonstrated corrections replace the controller's output whenever the current state is found to be similar to a corrected state. We focus on the Aldebaran Nao humanoid robot and a concrete, complex ball-dribbling task in an environment with obstacles. We present experimental results showing an improvement in the humanoid's task performance when corrective demonstration is used in addition to the basic hand-coded controller.
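
The correction-reuse rule described above (substituting a demonstrated state-action pair for the controller's output when the current state is similar to a previously corrected state) can be pictured as a nearest-neighbor lookup. The following is a minimal sketch of that idea, not the authors' implementation: the state representation, the Euclidean similarity measure, the threshold value, and all names (hand_coded_controller, corrections, SIMILARITY_THRESHOLD) are illustrative assumptions.

```python
import numpy as np

def hand_coded_controller(state: np.ndarray) -> str:
    """Placeholder for the basic hand-coded dribbling controller."""
    return "dribble_forward"

# Corrective demonstrations stored as (state, action) pairs.
# State vectors and actions here are hypothetical examples.
corrections = [
    (np.array([0.3, -0.1, 0.8]), "sidestep_left"),
    (np.array([0.5,  0.4, 0.2]), "turn_right"),
]

SIMILARITY_THRESHOLD = 0.15  # assumed tuning parameter

def select_action(state: np.ndarray) -> str:
    """Use a demonstrated correction when the current state is close
    enough to a corrected state; otherwise fall back to the controller."""
    if corrections:
        dists = [np.linalg.norm(state - s) for s, _ in corrections]
        i = int(np.argmin(dists))
        if dists[i] < SIMILARITY_THRESHOLD:
            return corrections[i][1]
    return hand_coded_controller(state)

print(select_action(np.array([0.31, -0.12, 0.79])))  # -> "sidestep_left"
```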

DOI

10.1109/ICHR.2010.5686326

Published In

Proceedings of the IEEE-RAS International Conference on Humanoid Robots (Humanoids), 2010, 334-339.