Abstract
This paper describes a system that semi-automatically builds a virtual world for remote operations by constructing 3-D models of a robot’s work environment. With minimal human interaction, planar and quadric surface representations of objects typically found in man-made facilities are generated from laser rangefinder data. These surface representations are used to recognize complex models of objects in the scene. The object models are incorporated into a larger world model that can be viewed and analyzed by the operator, accessed by motion planning and robot safeguarding algorithms, and ultimately used by the operator to command the robot through graphical programming and other high-level constructs. Limited operator interaction, combined with assumptions about the robot’s task environment, makes the problem of modeling and recognizing objects tractable and yields a solution that can be readily incorporated into many telerobotic control schemes.
Proceedings of IEEE Intelligent Robots and Systems, 103-110.