Abstract
Recently, many software companies have shifted from the traditional multi-month release cycle to shorter release cycles. This transition between release models may affect the test effort required for a system. This paper analyses 25 traditional releases containing 1210 classes and 69 rapid releases containing 2616 classes across four open-source Java systems. Correlations between 48 object-oriented (OO) metrics and 2 test metrics were evaluated to identify the best indicators of test effort. The results show that (i) the correlations between OO and test metrics hold irrespective of the release model, (ii) the test effort required under Rapid Release (RR) models (shorter release cycles) is slightly higher than under Traditional Release (TR) models, and (iii) out of 18 machine learning algorithms, the instance-based learners IBk and KStar, followed by Multi-Layer Perceptron (MLP) and additive regression, predict class-level test effort most accurately.
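The correlation analysis described above pairs each OO metric with a test metric across classes. As a minimal sketch of this idea (not the paper's actual code or data), the snippet below computes a Spearman rank correlation, a common choice for metric data that need not be normally distributed; the metric names and values are purely illustrative:

```python
def spearman_rho(xs, ys):
    """Spearman rank correlation for two samples without ties."""
    def ranks(vals):
        # 1-based rank of each value within its sample (distinct values assumed).
        order = sorted(vals)
        return [order.index(v) + 1 for v in vals]

    rx, ry = ranks(xs), ranks(ys)
    n = len(xs)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    # Spearman's formula for tie-free data.
    return 1 - 6 * d2 / (n * (n * n - 1))


# Hypothetical per-class measurements: an OO metric (e.g. WMC) and a
# test-effort metric (e.g. test lines of code). Values are invented.
wmc = [5, 12, 3, 20, 8, 15, 7, 25]
test_loc = [40, 65, 25, 180, 70, 140, 60, 230]

print(round(spearman_rho(wmc, test_loc), 3))  # → 0.976
```

A strongly positive rho, as in this toy example, is the kind of evidence used to flag an OO metric as a good indicator of test effort.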
