Abstract
Gesture-Based Programming is our paradigm for easing the burden of programming robots. It extends the human-demonstration approach with encapsulated expertise that guides subtask segmentation and robust real-time execution. A variety of human gestures must be recognized to provide a useful and intuitive interface for the human demonstrator. Although the full gesture-based programming environment has not yet been realized, this paper describes a multi-modal gesture recognition system that embodies many of the elements needed to achieve true gesture-based programming. The system begins by recognizing the gestures of a human demonstrating a trajectory. Execution agents then attempt to repeat the trajectory while observing corrective gestures from the teacher. Similar multi-agent networks are used for both training and execution.
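The demonstrate-then-correct loop described above can be sketched in a few lines. This is a hypothetical illustration, not code from the paper: the `Demonstration` and `ExecutionAgent` classes and their methods are invented names, and the "corrective gesture" is reduced to a simple positional offset applied to a waypoint.

```python
# Hypothetical sketch (not from the paper): an execution agent replays a
# demonstrated trajectory while folding in corrective gestures observed
# from the teacher.
from dataclasses import dataclass, field


@dataclass
class Demonstration:
    """Trajectory recorded from the human demonstrator."""
    waypoints: list  # e.g. [(x, y), ...]


@dataclass
class ExecutionAgent:
    demo: Demonstration
    # Maps waypoint index -> (dx, dy) offset from a corrective gesture.
    corrections: dict = field(default_factory=dict)

    def observe_correction(self, index, offset):
        """Record a corrective gesture applied at a given waypoint."""
        self.corrections[index] = offset

    def replay(self):
        """Repeat the trajectory, applying any observed corrections."""
        executed = []
        for i, (x, y) in enumerate(self.demo.waypoints):
            dx, dy = self.corrections.get(i, (0.0, 0.0))
            executed.append((x + dx, y + dy))
        return executed


# Teacher demonstrates; the agent repeats with one corrective gesture.
demo = Demonstration(waypoints=[(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)])
agent = ExecutionAgent(demo)
agent.observe_correction(1, (0.0, 0.1))  # nudge the second waypoint
print(agent.replay())  # -> [(0.0, 0.0), (1.0, 0.1), (1.0, 1.0)]
```

In the actual system, recognition and correction are handled by networks of cooperating agents rather than a single object; this sketch only captures the replay-with-feedback pattern.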