Date of Original Version

6-2011

Type

Conference Proceeding

Journal Title

Proceedings of the Annual Meeting of the Association for Computational Linguistics

First Page

409

Last Page

419

Rights Management

Copyright 2011 ACL

Abstract or Description

We introduce a discriminatively trained, globally normalized, log-linear variant of the lexical translation models proposed by Brown et al. (1993). In our model, arbitrary, non-independent features may be freely incorporated, thereby overcoming the inherent limitation of generative models, which require that features be sensitive to the conditional independencies of the generative process. However, unlike previous work on discriminative modeling of word alignment (which also permits the use of arbitrary features), the parameters in our models are learned from unannotated parallel sentences, rather than from supervised word alignments. Using a variety of intrinsic and extrinsic measures, including translation performance, we show that our model yields better alignments than generative baselines in a number of language pairs.
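For readers unfamiliar with this model class, the following is a minimal sketch of the general form of a globally normalized, log-linear lexical translation model as characterized in the abstract; the feature function Φ, weight vector θ, and the normalization set are illustrative assumptions rather than the paper's exact formulation:

% Globally normalized log-linear model over a target sentence f and
% latent word alignment a, conditioned on a source sentence e
% (Phi and theta are assumed names for the feature function and weights):
p_\theta(f, a \mid e) = \frac{\exp\{\theta \cdot \Phi(f, a, e)\}}{\sum_{f', a'} \exp\{\theta \cdot \Phi(f', a', e)\}}

% Because training uses unannotated parallel sentences, the alignment a
% is treated as latent and marginalized out of the training objective:
\mathcal{L}(\theta) = \sum_{(e, f)} \log \sum_{a} p_\theta(f, a \mid e)

The key contrast with a generative model such as Brown et al.'s is that Φ may contain arbitrary, overlapping features of the whole sentence pair, since the global normalizer rather than a chain of conditional distributions makes the probabilities sum to one.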

Creative Commons License

This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License.

Published In

Proceedings of the Annual Meeting of the Association for Computational Linguistics, 409-419.