Date of Original Version

5-2010

Type

Conference Proceeding

Abstract or Description

We present an approach for efficiently recognizing all objects in a scene and estimating their full pose from multiple views. Our approach builds upon a state-of-the-art single-view algorithm which recognizes and registers learned metric 3D models using local descriptors. We extend this approach to multiple views using a novel multi-step optimization that processes each view individually and feeds consistent hypotheses back to the algorithm for global refinement. We demonstrate that our method produces results comparable to the theoretical optimum, a full multi-view generalized camera approach, while avoiding its combinatorial time complexity. We provide experimental results demonstrating pose accuracy, speed, and robustness to model error using a three-camera rig, as well as a physical implementation in which the pose output is used by an autonomous robot executing grasps in highly cluttered scenes.
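
The abstract describes per-view recognition followed by a cross-view consistency check before global refinement. The sketch below is a hypothetical illustration of that idea only, not the paper's implementation: the function names, tolerances, and data layout are assumptions, and it shows just the step of mapping single-view pose hypotheses into a common world frame and clustering the ones that agree.

```python
import numpy as np

def transform_pose(T_world_cam, T_cam_obj):
    """Express a camera-frame object pose hypothesis in the common world frame."""
    return T_world_cam @ T_cam_obj

def poses_consistent(T_a, T_b, trans_tol=0.02, rot_tol_deg=10.0):
    """Check whether two world-frame pose hypotheses agree within tolerances
    (translation in meters, rotation angle in degrees)."""
    dt = np.linalg.norm(T_a[:3, 3] - T_b[:3, 3])
    R_rel = T_a[:3, :3].T @ T_b[:3, :3]
    angle = np.degrees(np.arccos(np.clip((np.trace(R_rel) - 1.0) / 2.0, -1.0, 1.0)))
    return dt < trans_tol and angle < rot_tol_deg

def cluster_multiview_hypotheses(per_view_hypotheses, extrinsics):
    """
    per_view_hypotheses: one list per camera of (object_id, T_cam_obj) 4x4 poses.
    extrinsics: one T_world_cam 4x4 matrix per camera.
    Returns clusters of mutually consistent hypotheses; a subsequent global
    refinement step could re-optimize each cluster's pose over all views jointly.
    """
    # Map every single-view hypothesis into the shared world frame.
    world_hyps = []
    for cam_idx, hyps in enumerate(per_view_hypotheses):
        for obj_id, T_cam_obj in hyps:
            world_hyps.append((obj_id, transform_pose(extrinsics[cam_idx], T_cam_obj)))

    # Greedily group hypotheses of the same object that agree in pose.
    clusters = []
    for obj_id, T in world_hyps:
        for cluster in clusters:
            ref_id, ref_T = cluster[0]
            if obj_id == ref_id and poses_consistent(ref_T, T):
                cluster.append((obj_id, T))
                break
        else:
            clusters.append([(obj_id, T)])
    return clusters
```

A cluster supported by several cameras would then be fed back to the single-view algorithm as a consistent hypothesis for global refinement, which is the feedback step the abstract refers to.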

DOI

10.1109/ROBOT.2010.5509615

Included in

Robotics Commons

Published In

Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), 2050-2055.