Abstract
We illustrate the danger of using default implementations of learning algorithms by showing that the implementation of RBF networks in the three most popular open-source data mining software packages causes the algorithm to behave and perform like naïve Bayes in most instances. This result has significant implications for practitioners and researchers alike, in terms of computational complexity, ensemble design, and metalearning for algorithm selection. We outline the limits of the similarity between RBF networks and naïve Bayes, and use metalearning to build a selection model that accurately discriminates between the two algorithms, so that extra computation is only incurred when it is likely to yield a significant improvement in predictive accuracy.
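The core of the similarity can be sketched concretely. An RBF network in a "default" configuration with a single Gaussian basis function per class (centered at the class mean, with per-feature variances) and a winner-take-all readout computes essentially the same decision rule as Gaussian naïve Bayes. The following sketch, which assumes scikit-learn and this hypothetical one-basis-per-class configuration (it is not the specific implementation studied in the article), illustrates the agreement:

```python
import numpy as np
from sklearn.datasets import load_iris
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# Hypothetical minimal "default" RBF network: one diagonal Gaussian
# basis function per class, centered at the class mean, with a
# winner-take-all readout over basis activations.
classes = np.unique(y)
means = np.array([X[y == c].mean(axis=0) for c in classes])
vars_ = np.array([X[y == c].var(axis=0) for c in classes])

def rbf_predict(X):
    # Log-activation of each class's diagonal Gaussian basis function;
    # up to class priors, this matches the Gaussian naive Bayes
    # log-likelihood term.
    log_act = -0.5 * (((X[:, None, :] - means) ** 2) / vars_
                      + np.log(2 * np.pi * vars_)).sum(axis=2)
    return classes[np.argmax(log_act, axis=1)]

gnb = GaussianNB().fit(X, y)
agreement = (rbf_predict(X) == gnb.predict(X)).mean()
print(f"prediction agreement with GaussianNB: {agreement:.2%}")
```

On a balanced dataset such as iris, where equal class priors make the two decision rules coincide, the predictions agree almost perfectly; the divergence the article discusses arises once the RBF network is configured with more basis functions per class or non-diagonal widths.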
