Date of Original Version

2013

Type

Conference Proceeding

Rights Management

The final publication is available at Springer via http://dx.doi.org/10.1007/978-3-642-40991-2_40

Abstract or Description

We present methods to introduce different forms of supervision into mixed-membership latent variable models. First, we introduce a technique to bias the models to exploit topic-indicative features, i.e., features which are a priori known to be good indicators of the latent topics that generated them. Next, we present methods to modify the Gibbs sampler used for approximate inference in such models to permit injection of stronger forms of supervision in the form of labels for features and documents, along with a description of the corresponding change in the underlying generative process. This ability allows us to span the range from unsupervised topic models to semi-supervised learning in the same mixed-membership model. Experimental results from an entity-clustering task demonstrate that the biasing technique and the introduction of feature and document labels provide a significant increase in clustering performance over baseline mixed-membership methods.
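As a rough, hypothetical sketch of the general idea described in the abstract (not the paper's exact formulation), a collapsed Gibbs sampler for an LDA-style mixed-membership model can incorporate label supervision by clamping the topic assignments of labeled tokens and only resampling the unlabeled ones; all function and variable names below are illustrative assumptions.

```python
import numpy as np

def gibbs_sample(docs, K, V, n_iters=200, alpha=0.1, beta=0.01,
                 token_labels=None):
    """Collapsed Gibbs sampling for an LDA-style model with clamped labels.

    docs: list of documents, each a list of word ids in [0, V).
    token_labels: dict mapping (doc_idx, position) -> fixed topic id
                  for supervised tokens (hypothetical supervision format).
    """
    token_labels = token_labels or {}
    D = len(docs)
    ndk = np.zeros((D, K))   # document-topic counts
    nkw = np.zeros((K, V))   # topic-word counts
    nk = np.zeros(K)         # per-topic totals
    z = []                   # current topic assignments

    # Initialization: labeled tokens start at (and stay in) their given topic.
    for d, doc in enumerate(docs):
        zd = []
        for i, w in enumerate(doc):
            k = token_labels.get((d, i), np.random.randint(K))
            zd.append(k)
            ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
        z.append(zd)

    for _ in range(n_iters):
        for d, doc in enumerate(docs):
            for i, w in enumerate(doc):
                if (d, i) in token_labels:
                    continue  # supervision: assignment is clamped, never resampled
                k = z[d][i]
                ndk[d, k] -= 1; nkw[k, w] -= 1; nk[k] -= 1
                # Standard collapsed-Gibbs conditional for LDA.
                p = (ndk[d] + alpha) * (nkw[:, w] + beta) / (nk + V * beta)
                k = np.random.choice(K, p=p / p.sum())
                z[d][i] = k
                ndk[d, k] += 1; nkw[k, w] += 1; nk[k] += 1
    return z, ndk, nkw
```

Document-level labels could be handled analogously, e.g. by restricting the set of topics available to a labeled document; the paper itself describes the specific modifications to the sampler and the corresponding generative process.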

DOI

10.1007/978-3-642-40991-2_40

Published In

Lecture Notes in Computer Science, 8189, 628-642.