Abstract
Visual servoing is a robust technique for aligning both static and moving parts using imprecisely calibrated camera-lens-manipulator systems. An important limitation of these systems is that the camera's position and orientation restrict the workspace within which the alignment task can be successfully performed. An active camera can extend this region; however, moving the camera changes the visual representation of the task itself, so the reference input that drives the visually servoed manipulator must change accordingly. In this paper, a framework that allows for camera-lens motion during visually servoed manipulation is described. The main components of the framework are object schemas and port-based agents. Object schemas represent the task internally as geometric models with attached sensor mappings. They are dynamically updated by sensor feedback and thus support three-dimensional spatial reasoning during task execution, a capability that traditional image-based visual servoing lacks. Object schemas also dynamically create desired visual representations of the task, from which reference inputs for vision-based control strategies are derived, and their sensor mappings are used to guide camera motion based on task characteristics. Port-based agents execute the visual reference inputs and the camera motion commands, interacting with the real world through visual servoing control laws. Experimental results that demonstrate system capabilities and performance are presented.
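To make the abstract concrete, the following is a minimal sketch of the standard image-based visual servoing (IBVS) law that control strategies like those mentioned above are typically built on: the camera velocity screw is computed as v = -λ L⁺ (s - s*), where s are the current image features, s* the reference features, and L the interaction matrix. This is the textbook formulation, not necessarily the exact controller used in the paper; the function names and the point-feature parameterization are illustrative assumptions.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    # Interaction (image Jacobian) matrix for one normalized image
    # point (x, y) observed at depth Z; maps the 6-DOF camera velocity
    # screw to the image-plane velocity of the point.
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(features, desired, depths, gain=0.5):
    # Classic IBVS law: stack the per-point interaction matrices,
    # form the feature error s - s*, and return the camera velocity
    # screw (vx, vy, vz, wx, wy, wz) = -gain * pinv(L) @ error.
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(features, depths)])
    error = (np.asarray(features, dtype=float)
             - np.asarray(desired, dtype=float)).ravel()
    return -gain * np.linalg.pinv(L) @ error
```

When an active camera moves, s* itself changes; in the framework described here, the object schemas are what regenerate the reference features s* for the new camera pose, so a law of this form can keep driving the manipulator.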