Date of Original Version

2012

Conference Proceeding

Rights Management

© 2012 IEEE. Personal use of this material is permitted. Permission from IEEE must be obtained for all other uses, in any current or future media, including reprinting/republishing this material for advertising or promotional purposes, creating new collective works, for resale or redistribution to servers or lists, or reuse of any copyrighted component of this work in other works.

Abstract or Description

We formalize the representation of gestures and present a model capable of synchronizing expressive, relevant gestures with text-to-speech input. A gesture consists of gesture primitives that are executed simultaneously. We formally define the gesture primitive and introduce the concept of a spatially targeted gesture primitive, i.e., a gesture primitive that is directed at a target of interest. Spatially targeted gesture primitives are useful in situations where the direction of a gesture matters for meaningful human-robot interaction. We contribute an algorithm that generates spatially targeted gesture primitives. We also contribute a process that analyzes the input text, determines relevant gesture primitives, composes gestures from those primitives, and ranks the resulting combinations of gestures. We propose a set of criteria for weighting and ranking the combinations of gestures. Although we illustrate the utility of our model, algorithm, and process on a NAO humanoid robot, our contributions are applicable to other robots.
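The abstract only summarizes the model, so as a concrete illustration, here is a minimal Python sketch of the structures it describes: a gesture as a tuple of simultaneously executed primitives, a spatially targeted primitive carrying a target of interest, and candidate gesture combinations ranked by weighted criteria. All names (GesturePrimitive, aim_at, WEIGHTS, rank_combinations) and the specific criteria and weights are illustrative assumptions, not taken from the paper.

```python
import math
from dataclasses import dataclass
from typing import Optional, Tuple

# Hypothetical data model: a gesture is a set of primitives executed simultaneously.
@dataclass(frozen=True)
class GesturePrimitive:
    name: str                                   # e.g. "point", "wave"
    joints: Tuple[str, ...]                     # actuators the primitive drives
    # Set only for spatially targeted primitives (a target of interest in space).
    target: Optional[Tuple[float, float, float]] = None

@dataclass(frozen=True)
class Gesture:
    primitives: Tuple[GesturePrimitive, ...]    # executed simultaneously

def aim_at(target, origin=(0.0, 0.0, 0.0)):
    """Unit direction vector from origin to target: a crude stand-in for the
    paper's spatial-targeting algorithm, whose details the abstract omits."""
    d = [t - o for t, o in zip(target, origin)]
    norm = math.sqrt(sum(x * x for x in d))
    return tuple(x / norm for x in d)

# Hypothetical criteria and weights; the paper's actual criteria are not
# stated in the abstract.
WEIGHTS = {"relevance": 0.5, "expressiveness": 0.3, "feasibility": 0.2}

def weighted_score(criteria_scores):
    """Weighted sum over the criteria; missing criteria score zero."""
    return sum(w * criteria_scores.get(c, 0.0) for c, w in WEIGHTS.items())

def rank_combinations(candidates):
    """Rank (combination, criteria_scores) pairs by descending weighted score."""
    return sorted(candidates, key=lambda cand: weighted_score(cand[1]), reverse=True)
```

In this sketch, a caller would build candidate gesture combinations from the text-analysis step, score each combination against the criteria, and execute the top-ranked combination in sync with the text-to-speech output.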

Published In

Proceedings of the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2012, 107-112.