Abstract
Current approaches to semi-supervised incremental learning tend to select unlabeled examples predicted with high confidence for model re-training. However, this strategy can degrade classification performance rather than improve it. We analyze the reasons for this phenomenon, showing that relying solely on high confidence for data selection can lead to an erroneous estimate of the true distribution when the confidence annotator is highly correlated with the classifier in the information they use. We propose a new data selection approach to address this problem and apply it to a variety of applications, including machine learning and speech recognition. Our experiments show encouraging improvements in recognition accuracy.
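To make the baseline strategy concrete, the following is a minimal sketch of confidence-based self-training, the approach the abstract critiques: unlabeled examples whose predicted confidence exceeds a threshold are pseudo-labeled and added to the training pool. The toy nearest-centroid classifier, the margin-based confidence score, and all names here are illustrative assumptions, not the paper's models or its proposed selection method.

```python
def centroids(labeled):
    """Mean feature value per class from (x, y) pairs (toy 1-D model)."""
    sums, counts = {}, {}
    for x, y in labeled:
        sums[y] = sums.get(y, 0.0) + x
        counts[y] = counts.get(y, 0) + 1
    return {y: sums[y] / counts[y] for y in sums}

def predict_with_confidence(cents, x):
    """Label of the nearest centroid; confidence is the normalized
    margin between the two nearest centroid distances."""
    dists = sorted((abs(x - c), y) for y, c in cents.items())
    (d1, y1), (d2, _) = dists[0], dists[1]
    return y1, (d2 - d1) / (d2 + d1 + 1e-9)

def self_train(labeled, unlabeled, threshold=0.5, rounds=3):
    """Repeatedly pseudo-label high-confidence examples and retrain.
    Low-confidence examples stay in the pool and are never used --
    the selection behavior whose pitfalls the abstract analyzes."""
    labeled, pool = list(labeled), list(unlabeled)
    for _ in range(rounds):
        cents = centroids(labeled)
        keep = []
        for x in pool:
            y, conf = predict_with_confidence(cents, x)
            if conf >= threshold:        # high-confidence selection
                labeled.append((x, y))   # pseudo-label for re-training
            else:
                keep.append(x)
        pool = keep
    return centroids(labeled)

labeled = [(0.0, "a"), (1.0, "a"), (9.0, "b"), (10.0, "b")]
unlabeled = [0.5, 1.5, 8.5, 9.5, 5.2]
cents = self_train(labeled, unlabeled)
```

In this toy run the four clearly clustered points are confidently pseudo-labeled, while the ambiguous point 5.2 never clears the threshold and is discarded; the paper's argument is that when the confidence score and the classifier share the same information, the examples selected this way can skew the estimate of the true distribution.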