Date of Original Version

7-2011

Type

Conference Proceeding

Journal Title

Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP)

First Page

1500

Last Page

1511

Rights Management

Copyright 2011 ACL

Abstract or Description

Linear models have enjoyed great success in structured prediction in NLP. While much progress has been made on efficient training with several loss functions, the problem of endowing learners with a mechanism for feature selection is still unsolved. Common approaches employ ad hoc filtering or L1-regularization; both ignore the structure of the feature space, preventing practitioners from encoding structural prior knowledge. We fill this gap by adopting regularizers that promote structured sparsity, along with efficient algorithms to handle them. Experiments on three tasks (chunking, entity recognition, and dependency parsing) show gains in performance, compactness, and model interpretability.
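The abstract contrasts plain L1-regularization with regularizers that exploit the structure of the feature space. As a rough illustration of the latter idea (not the paper's specific method or algorithm), the sketch below applies the proximal operator of a group-lasso penalty: features are shrunk group by group, and whole groups whose norm falls below the threshold are zeroed out together, which is what yields structured rather than feature-by-feature sparsity. The function name, grouping, and threshold here are illustrative assumptions.

```python
import math

def prox_group_lasso(weights, groups, tau):
    """Proximal step for a mixed L2/L1 (group-lasso) penalty.

    Each group of weights is scaled toward zero; any group whose
    L2 norm is below tau is zeroed out entirely, producing
    group-level (structured) sparsity instead of the scattered
    zeros that plain L1 gives.
    """
    out = list(weights)
    for group in groups:
        norm = math.sqrt(sum(weights[i] ** 2 for i in group))
        scale = max(0.0, 1.0 - tau / norm) if norm > 0 else 0.0
        for i in group:
            out[i] = weights[i] * scale
    return out

# Two hypothetical feature groups: the weak group is removed
# wholesale, while the strong group is only shrunk.
w = [0.1, -0.1, 3.0, 4.0]
groups = [[0, 1], [2, 3]]
print(prox_group_lasso(w, groups, tau=0.5))
```

In a training loop this step would be interleaved with gradient updates on the loss (a proximal-gradient scheme); the point of the sketch is only the group-level zeroing behavior.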

Creative Commons License

Creative Commons Attribution-Noncommercial-Share Alike 3.0 License
This work is licensed under a Creative Commons Attribution-Noncommercial-Share Alike 3.0 License.

Published In

Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP), 1500-1511.