Abstract
This paper surveys the inverse boosting algorithm, which employs an inverse error vector to produce new sample distributions. Experimental results show that inverse boosting can outperform standard boosting in some cases. Further experiments on several benchmark real-world data sets from the UCI repository are conducted to gain deeper insight into the inverse boosting algorithm; we find that the performance of the ensemble is determined by the performance of its base classifiers. This distinctive feature of inverse boosting indicates that the traditional approach of improving an ensemble by enhancing diversity among its base classifiers is invalid here. Therefore, this paper proposes a targeted method to improve the performance of inverse boosting.
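To make the mechanism concrete, below is a minimal sketch of an inverse-boosting-style loop over decision stumps. It assumes (since the abstract gives no formulas) that the "inverse error vector" update is the reverse of AdaBoost's rule, i.e. correctly classified points are up-weighted and misclassified points are down-weighted; all function names are illustrative, not from the paper.

```python
import numpy as np

def stump_fit(X, y, w):
    """Exhaustively fit a threshold stump on a 1-D feature under weights w."""
    best = None
    for thr in np.unique(X):
        for sign in (1, -1):
            pred = sign * np.where(X >= thr, 1, -1)
            err = np.sum(w[pred != y])
            if best is None or err < best[0]:
                best = (err, thr, sign)
    return best  # (weighted error, threshold, sign)

def stump_predict(thr, sign, X):
    return sign * np.where(X >= thr, 1, -1)

def inverse_boost(X, y, T=10):
    """Boosting loop with an inverted weight update (assumed form)."""
    n = len(y)
    w = np.full(n, 1.0 / n)
    ensemble = []
    for _ in range(T):
        err, thr, sign = stump_fit(X, y, w)
        err = max(err, 1e-10)  # avoid division by zero for a perfect stump
        alpha = 0.5 * np.log((1 - err) / err)
        pred = stump_predict(thr, sign, X)
        # AdaBoost would use exp(-alpha * y * pred); the inverse update
        # flips the sign, concentrating mass on already-correct points.
        w = w * np.exp(alpha * y * pred)
        w /= w.sum()
        ensemble.append((alpha, thr, sign))
    return ensemble

def predict(ensemble, X):
    score = sum(a * stump_predict(thr, s, X) for a, thr, s in ensemble)
    return np.where(score >= 0, 1, -1)
```

Because the update keeps weight on easy examples, each base classifier tends to resemble the previous one; this is consistent with the abstract's observation that the ensemble's performance tracks that of its base classifiers rather than benefiting from diversity.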
