Carnegie Mellon University

Learning Shape Proxies from 3D Geometries for Semantic Analysis, Representation and Manipulation of Man-Made Objects

Thesis posted on 2015-05-01, authored by Mehmet Ersin Yumer

In this thesis, we investigate methods for extracting shape proxies that enable perceptually sound and semantically meaningful analysis, representation, and manipulation of man-made shapes. We begin by introducing a co-segmentation method for textured 3D shapes. Our algorithm takes a collection of textured shapes belonging to the same category, together with sparse annotations of foreground segments, and produces a joint dense segmentation of the shapes in the collection. We model the segments with a collectively trained Gaussian mixture model. The final segmentation is formulated as an energy minimization across all models jointly, where intra-model edges control the smoothness and separation of model segments, and inter-model edges impart global consistency.

Second, we present a co-abstraction method that takes as input a collection of 3D objects and produces a mutually consistent, individually identity-preserving abstraction of each object. In general, an abstraction is a simpler version of a shape that preserves its main characteristics. To this end, we introduce a new approach that hierarchically generates a spectrum of abstractions for each model in a shape collection. Given the spectra, we compute the appropriate abstraction level for each model such that shape simplification and inter-set consistency are collectively maximized, while individual shape identities are preserved.

Building on the co-abstraction method, we then introduce a technique that automatically induces a group of deformation handles conforming to the constraint space learned from the shape set. Our approach identifies the intrinsic constraints among a set of shapes and generates a co-constrained set of meta-handles for each model in the collection. In general, these handles allow the user to prescribe arbitrary deformation directives, including affine transformations as well as free-form surface deformations; however, only the subset of deformations admissible under the learned constraint space is exposed to the user. The deformations prescribed to the meta-handles are then transferred to the original model using a physically based space deformation method tailored to preserve man-made shape characteristics.

Finally, we propose a shape editing method in which the user performs geometric deformations through a set of semantic attributes, avoiding the need for detailed geometric manipulation. In contrast to prior work, we focus on continuous deformations instead of discrete part substitutions. Our method provides a platform for quick design exploration and allows non-experts to produce semantically guided shape variations that are otherwise difficult to attain. We crowdsource a large set of pairwise comparisons between the semantic attributes and geometry, and use this data to learn a continuous mapping from the semantic attributes to geometry. The resulting map enables simple and intuitive shape manipulations based solely on the learned attributes.
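
To make the joint co-segmentation formulation concrete, the following is a minimal sketch, not the thesis implementation, of how a collection-wide segmentation energy of this kind could be evaluated: collectively trained per-segment Gaussian mixture models supply the unary terms, intra-model edges penalize label changes between adjacent faces, and inter-model edges penalize disagreement between corresponding faces on different shapes. All function names, data layouts, and weights (lam_intra, lam_inter) are hypothetical.

```python
import numpy as np
from sklearn.mixture import GaussianMixture


def fit_segment_gmms(features, labels, n_components=3):
    """Collectively train one GMM per segment label, pooling faces from all shapes."""
    all_x = np.vstack(features)
    all_lab = np.concatenate([np.asarray(l) for l in labels])
    return [GaussianMixture(n_components=n_components, random_state=0).fit(all_x[all_lab == k])
            for k in np.unique(all_lab)]


def joint_segmentation_energy(features, labels, intra_edges, inter_edges,
                              gmms, lam_intra=1.0, lam_inter=1.0):
    """Evaluate a candidate labeling of every shape in the collection jointly.

    features[m]    : (n_m, d) per-face descriptors of shape m
    labels[m]      : (n_m,) integer segment label per face of shape m
    intra_edges[m] : list of (i, j) adjacent-face pairs within shape m
    inter_edges    : list of (m, i, m2, j) corresponding faces across shapes
    gmms           : fitted GaussianMixture models, one per segment label
    """
    energy = 0.0
    # Unary term: negative log-likelihood of each face under its segment's GMM.
    for m, x in enumerate(features):
        lab = np.asarray(labels[m])
        for k, gmm in enumerate(gmms):
            mask = lab == k
            if mask.any():
                energy -= gmm.score_samples(np.asarray(x)[mask]).sum()
    # Intra-model edges: penalize label changes between adjacent faces
    # (smoothness and separation of segments within one shape).
    for m, edges in enumerate(intra_edges):
        lab = np.asarray(labels[m])
        for i, j in edges:
            energy += lam_intra * (lab[i] != lab[j])
    # Inter-model edges: penalize disagreement between corresponding faces
    # on different shapes (global consistency across the collection).
    for m, i, m2, j in inter_edges:
        energy += lam_inter * (labels[m][i] != labels[m2][j])
    return energy
```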

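The final contribution learns a continuous mapping from semantic attributes to geometry using crowdsourced pairwise comparisons. Below is a minimal sketch of one standard way such a mapping could be learned: a RankSVM-style reduction that fits a linear attribute direction from pairwise judgments and then moves a geometric parameter vector along it. The linear model, the parameter vector, and all names are illustrative assumptions, not the pipeline used in the thesis.

```python
import numpy as np
from sklearn.svm import LinearSVC


def fit_attribute_direction(geom_a, geom_b, a_wins):
    """Fit a direction w so that w . geom scores shapes by a semantic attribute.

    geom_a, geom_b : (n_pairs, d) geometric feature vectors of the compared shapes
    a_wins         : (n_pairs,) True where the crowd judged shape A to have
                     "more" of the attribute than shape B
    RankSVM-style reduction: classify the sign of pairwise feature differences.
    """
    diffs = geom_a - geom_b
    y = np.where(a_wins, 1, -1)
    clf = LinearSVC(C=1.0, fit_intercept=False).fit(diffs, y)
    w = clf.coef_.ravel()
    return w / np.linalg.norm(w)


def edit_along_attribute(geom, w, amount):
    """Push a shape's geometric parameters along the learned attribute axis."""
    return geom + amount * w


# Toy usage with a hidden 1-D attribute (the first geometry coordinate).
rng = np.random.default_rng(0)
A, B = rng.normal(size=(200, 3)), rng.normal(size=(200, 3))
wins = A[:, 0] > B[:, 0]
w = fit_attribute_direction(A, B, wins)
edited = edit_along_attribute(np.zeros(3), w, amount=0.5)
```
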
History

Date

2015-05-01

Degree Type

  • Dissertation

Department

  • Mechanical Engineering

Degree Name

  • Doctor of Philosophy (PhD)

Advisor(s)

Levent Burak Kara
