Abstract
Precise knowledge of a robot's ego-motion is a crucial requirement for higher-level tasks like autonomous navigation. Bundle-adjustment-based monocular visual odometry has proven successful at estimating the motion of a robot over short sequences, but it suffers from an ambiguity in scale. Hence, approaches that only optimize locally are prone to scale drift over sequences spanning hundreds of frames.
In this paper we present an approach to monocular visual odometry that compensates for drift in scale by applying constraints imposed by the known camera mounting and assumptions about the environment. To this end, we employ a continuously updated point cloud to estimate the camera poses from 2D-3D correspondences. Within this set of camera poses, we identify keyframes, which are combined into a sliding window and refined by bundle adjustment. Subsequently, we update the scale based on robustly tracked features on the road surface. Results on real datasets demonstrate a significant increase in accuracy compared to the non-scaled scheme.
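To illustrate the scale-update idea, a common way to exploit a known camera mounting is to fit a plane to triangulated road-surface points and compare the camera's (unscaled) distance to that plane against the known mounting height. The sketch below is a minimal, hypothetical illustration of that principle, not the paper's actual implementation; the function name, plane-fitting method, and parameters are assumptions.

```python
import numpy as np

def estimate_scale(road_points, camera_height):
    """Hypothetical sketch: recover metric scale from road-surface points.

    road_points:   (N, 3) array of triangulated road points in the camera
                   frame, at an arbitrary monocular scale.
    camera_height: known mounting height of the camera above the road (metres).
    """
    # Fit a plane n.x + d = 0 to the road points: the normal is the
    # right-singular vector with the smallest singular value.
    centroid = road_points.mean(axis=0)
    _, _, vt = np.linalg.svd(road_points - centroid)
    normal = vt[-1]
    # Unscaled distance from the camera centre (the origin) to the plane.
    dist = abs(normal @ centroid)
    # Ratio of the true mounting height to the estimated plane distance
    # gives the scale correction to apply to the trajectory.
    return camera_height / dist
```

In practice, one would estimate this ratio only from robustly tracked road features (e.g. with a RANSAC-style plane fit to reject outliers) and apply the resulting correction to the locally bundle-adjusted poses.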
Proceedings of the European Conference on Mobile Robots.