The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-20217-9_17
Abstract or Description
Robust walking is a key requirement for soccer-playing humanoid robots, and developing such a biped walk algorithm is non-trivial due to the complex dynamics of the walk process. In this paper, we first present a method for learning a corrective closed-loop policy that improves walk stability for the Aldebaran Nao robot, using real-time human feedback combined with an open-loop walk cycle. The open-loop walk cycle is obtained by recording the joint commands while the robot walks using an existing walk algorithm as a black-box unit. We capture the corrective feedback signals delivered by a human through a wireless feedback mechanism, in the form of corrections to particular joints, and we present experimental results showing that a policy learned from one walk algorithm can be used to improve the stability of another. We then improve the open-loop walk cycle using advice operators before performing real-time human demonstration. During the demonstration, we capture the sensory readings and the corrections, in the form of displacements of the foot positions, while the robot executes the improved open-loop walk cycle. We translate the foot displacement values into individual correction signals for the leg joints using a simplified inverse kinematics calculation, and we use a locally weighted linear regression method to learn a mapping from the recorded sensor values to the correction values. Finally, we apply a simple anomaly detection method: we model the changes in the sensory readings throughout the walk cycle during a stable walk as normal distributions, and execute the correction policy only when a sensory reading deviates from the modeled values. Experimental results demonstrate an improvement in walk stability.
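The two learning components described above — a locally weighted linear regression from sensor values to corrections, and per-phase Gaussian models that gate when the correction policy fires — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the names `lwr_predict` and `PhaseAnomalyGate`, the Gaussian kernel bandwidth `tau`, and the `k`-sigma anomaly threshold are all assumptions introduced here.

```python
import numpy as np

def lwr_predict(X, Y, x_query, tau=0.5):
    """Locally weighted linear regression (illustrative sketch).

    X: (n, d) recorded sensor values; Y: (n, m) recorded corrections.
    Fits a weighted linear model around x_query and returns the
    predicted correction vector for that query point.
    """
    # Gaussian kernel weight for each training sample, centered on the query
    w = np.exp(-np.sum((X - x_query) ** 2, axis=1) / (2.0 * tau ** 2))
    # Augment inputs with a bias column
    Xa = np.hstack([X, np.ones((X.shape[0], 1))])
    W = np.diag(w)
    # Weighted least squares: theta = (Xa^T W Xa)^-1 Xa^T W Y
    theta = np.linalg.solve(Xa.T @ W @ Xa, Xa.T @ W @ Y)
    return np.append(x_query, 1.0) @ theta

class PhaseAnomalyGate:
    """Model each walk-cycle phase's sensor readings during a stable walk
    as a normal distribution; a reading is anomalous (and the correction
    policy should run) only if it falls outside mean +/- k * std."""

    def __init__(self, k=2.0):
        self.k = k
        self.stats = {}  # phase -> (mean, std) over recorded stable readings

    def fit(self, phases, readings):
        for p in set(phases):
            vals = np.array([r for ph, r in zip(phases, readings) if ph == p])
            # Small epsilon keeps the threshold finite for constant signals
            self.stats[p] = (vals.mean(axis=0), vals.std(axis=0) + 1e-9)

    def is_anomalous(self, phase, reading):
        mu, sigma = self.stats[phase]
        return bool(np.any(np.abs(reading - mu) > self.k * sigma))
```

In use, the controller would replay the open-loop walk cycle and, at each step, query `PhaseAnomalyGate.is_anomalous` with the current phase and sensor reading; only when it returns `True` would `lwr_predict` be evaluated and its output added to the leg-joint commands.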
Lecture Notes in Computer Science, 6556, 194-205.