Date of Original Version

6-2011

Type

Conference Proceeding

Journal Title

Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL)

First Page

176

Last Page

181

Rights Management

Copyright 2011 ACL

Abstract or Description

In statistical machine translation, a researcher seeks to determine whether some innovation (e.g., a new feature, model, or inference algorithm) improves translation quality in comparison to a baseline system. To answer this question, he runs an experiment to evaluate the behavior of the two systems on held-out data. In this paper, we consider how to make such experiments more statistically reliable. We provide a systematic analysis of the effects of optimizer instability—an extraneous variable that is seldom controlled for—on experimental outcomes, and make recommendations for reporting results more accurately.

Creative Commons License

Creative Commons Attribution-Noncommercial-Share Alike 3.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License.


Published In

Proceedings of the Annual Meeting of the Association for Computational Linguistics (ACL), 176-181.