Date of Original Version

6-2014

Type

Conference Proceeding

Journal Title

Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Short Papers)

First Page

828

Last Page

834

Rights Management

Copyright 2014 Association for Computational Linguistics

Abstract or Description

We introduce a model for incorporating contextual information (such as geography) in learning vector-space representations of situated language. In contrast to approaches to multimodal representation learning that have used properties of the object being described (such as its color), our model includes information about the subject (i.e., the speaker), allowing us to learn the contours of a word’s meaning that are shaped by the context in which it is uttered. In a quantitative evaluation on the task of judging geographically informed semantic similarity between representations learned from 1.1 billion words of geo-located tweets, our joint model outperforms comparable independent models that learn meaning in isolation.
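The abstract does not spell out the model's parameterization. As a rough illustration only, the sketch below assumes an additive decomposition in which a word's vector for a given region is a globally shared embedding plus a region-specific deviation, trained with a skip-gram-style negative-sampling objective; the toy vocabulary, data, and hyperparameters are invented for the example and are not the authors' implementation.

# Minimal sketch (not the authors' code) of one way to realize a joint model of
# situated language: a word's vector in a region = shared embedding + regional
# deviation, trained with a skip-gram-style logistic objective. The additive
# decomposition and all hyperparameters here are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

vocab = ["wicked", "good", "city", "park"]   # toy vocabulary
regions = ["MA", "CA"]                       # toy contexts (e.g., US states)
dim = 8                                      # embedding dimensionality
word_id = {w: i for i, w in enumerate(vocab)}
region_id = {r: i for i, r in enumerate(regions)}

# Globally shared input vectors, per-region deviations, and output (context) vectors.
W_global = 0.1 * rng.standard_normal((len(vocab), dim))
W_region = 0.01 * rng.standard_normal((len(regions), len(vocab), dim))
C_out = 0.1 * rng.standard_normal((len(vocab), dim))

def situated_vector(word, region):
    # Representation of `word` as used in `region`: shared part + regional deviation.
    return W_global[word_id[word]] + W_region[region_id[region], word_id[word]]

def sgd_step(word, context, region, label, lr=0.1):
    # One logistic (negative-sampling style) update on a (word, context) pair.
    v = situated_vector(word, region)
    c = C_out[word_id[context]]
    p = 1.0 / (1.0 + np.exp(-v @ c))   # predicted probability the pair is observed
    g = p - label                      # gradient of the logistic loss w.r.t. the score
    # The same gradient flows into both the shared and the regional parameters,
    # which ties the regional variants of a word to one common representation.
    W_global[word_id[word]] -= lr * g * c
    W_region[region_id[region], word_id[word]] -= lr * g * c
    C_out[word_id[context]] -= lr * g * v

# Toy geo-located "tweets": (target word, observed context word, author's region).
data = [("wicked", "good", "MA"), ("good", "wicked", "MA"),
        ("city", "park", "CA"), ("park", "city", "CA")]

for _ in range(200):
    for w, c, r in data:
        sgd_step(w, c, r, label=1.0)                  # observed pair
        neg = vocab[rng.integers(len(vocab))]
        sgd_step(w, neg, r, label=0.0)                # random negative sample

# After training, the same word has distinct, region-specific representations,
# which is what allows geographically informed similarity judgments.
print(situated_vector("wicked", "MA")[:3])
print(situated_vector("wicked", "CA")[:3])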

Published In

Proceedings of the 52nd Annual Meeting of the Association for Computational Linguistics (Short Papers), 828-834.