Date of Original Version

6-2011

Type

Conference Proceeding

Published In

Computer Vision and Pattern Recognition (CVPR), 2011 IEEE Conference on, pp. 1961–1968, 20–25 June 2011

Abstract

We present a human-centric paradigm for scene understanding. Our approach goes beyond estimating 3D scene geometry and predicts the "workspace" of a human, represented by a data-driven vocabulary of human interactions. Our method builds upon recent work in indoor scene understanding and the availability of motion capture data to create a joint space of human poses and scene geometry by modeling the physical interactions between the two. This joint space can then be used to predict potential human poses and joint locations from a single image. In a way, this work revisits the principle of Gibsonian affordances, reinterpreting it for the modern, data-driven era.

Included in

Robotics Commons
