Abstract
The key asset of semi-supervised learning (SSL) methods is their use of the available unlabeled data together with a much smaller set of labeled examples, so as to increase classification accuracy compared with the default supervised procedure, in which only the labeled data are used during training. Embedding classifier ensembles, which produce diverse models during the training process, into semi-supervised schemes appears to be a promising strategy for enhanced learning ability. In this work, a Self-trained Rotation Forest (Self-RotF) algorithm and a variant of it (Weighted-Self-RotF) are presented. We performed an in-depth comparison with other well-known semi-supervised classification methods on standard benchmark datasets and, after evaluating their performance with statistical tests, we concluded that the presented technique achieved better accuracy in most cases.
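The self-training scheme the abstract refers to can be sketched as follows. This is a minimal illustration only, not the paper's Self-RotF algorithm: a simple nearest-centroid classifier stands in for the Rotation Forest ensemble, and the toy data and confidence threshold are invented for the example.

```python
# Sketch of a generic self-training loop: fit on labeled data, label the
# unlabeled points the model is confident about, add them to the labeled
# set, and refit. A nearest-centroid classifier on 1-D data stands in
# for the base learner (the paper uses a Rotation Forest ensemble).

def fit_centroids(X, y):
    """Return the per-class mean of the labeled 1-D points."""
    cents = {}
    for c in set(y):
        pts = [x for x, lab in zip(X, y) if lab == c]
        cents[c] = sum(pts) / len(pts)
    return cents

def predict_with_conf(cents, x):
    """Predict the nearest centroid; confidence = margin between
    the two smallest distances."""
    d = sorted((abs(x - m), c) for c, m in cents.items())
    conf = d[1][0] - d[0][0]
    return d[0][1], conf

def self_train(X_lab, y_lab, X_unlab, threshold=1.0, max_iter=10):
    X_lab, y_lab, X_unlab = list(X_lab), list(y_lab), list(X_unlab)
    for _ in range(max_iter):
        cents = fit_centroids(X_lab, y_lab)
        confident = []
        for x in X_unlab:
            c, conf = predict_with_conf(cents, x)
            if conf >= threshold:
                confident.append((x, c))
        if not confident:           # no prediction passed the threshold
            break
        for x, c in confident:      # move confident predictions to the labeled set
            X_lab.append(x)
            y_lab.append(c)
            X_unlab.remove(x)
    return fit_centroids(X_lab, y_lab)

# Toy example: two clusters around 0 and 10, mostly unlabeled.
cents = self_train(X_lab=[0.0, 10.0], y_lab=[0, 1],
                   X_unlab=[1.0, 2.0, 8.5, 9.0])
print(predict_with_conf(cents, 1.5)[0])   # -> 0
```

The threshold controls the usual self-training trade-off: a high value admits only near-certain pseudo-labels (slow but safe), while a low value grows the labeled set quickly at the risk of reinforcing early mistakes.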
Keywords
