Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies
Copyright 2012 ACL
Abstract
This paper seeks to close the gap between training algorithms used in statistical machine translation and machine learning, specifically the framework of empirical risk minimization. We review well-known algorithms, arguing that they do not optimize the loss functions they are assumed to optimize when applied to machine translation. Instead, most have implicit connections to particular forms of ramp loss. We propose to minimize ramp loss directly and present a training algorithm that is easy to implement and that performs comparably to others. Most notably, our structured ramp loss minimization algorithm, RAMPION, is less sensitive to initialization and random seeds than standard approaches.
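To make the abstract's central object concrete, the following is a minimal sketch of one common form of ramp loss over an n-best list of translation hypotheses: the score gap between a "fear" derivation (high model score, high error) and a "hope" derivation (high model score, low error). The function name, the `(score, cost)` pair representation, and the specific hope/fear combination are illustrative assumptions, not the paper's exact formulation (the paper discusses several ramp-loss variants).

```python
def ramp_loss(hypotheses):
    """Illustrative ramp loss over an n-best list.

    hypotheses: list of (model_score, cost) pairs, where cost measures
    translation error (e.g., 1 - sentence-level BLEU).  Returns the
    score gap between the "fear" hypothesis, which maximizes
    score + cost, and the "hope" hypothesis, which maximizes
    score - cost.  With nonnegative costs this gap is always >= 0.
    """
    hope = max(s - c for s, c in hypotheses)  # hope derivation score
    fear = max(s + c for s, c in hypotheses)  # fear derivation score
    return fear - hope

# Example n-best list: (model score, error cost) per hypothesis.
nbest = [(2.0, 0.1), (1.5, 0.0), (3.0, 0.8)]
loss = ramp_loss(nbest)  # fear = 3.8, hope = 2.2
```

Training would then adjust the model's feature weights to shrink this gap, pushing the model toward scoring low-error hypotheses highest.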
Creative Commons License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License.
Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pages 221-231.