Abstract
Scientific computing infrastructures must deliver reliable software under pressing deadlines imposed by the communities they support. In the Grid context, researchers have been developing software projects without exploiting solutions for discovering defects early in the implementation process, which has led to substantial effort spent maintaining and correcting software after release. Achieving high reliability is therefore one of the most important challenges facing Grid software development throughout the software life cycle. Although developers often perceive quality-improvement solutions as limiting their productivity, in our opinion enhancing quality eliminates mistakes and, as a consequence, reduces costs and delays; software quality models and metrics represent the mainstream approach to reaching high reliability while balancing effort and results.
In this paper, we provide an extension of a mathematical model that connects software best practices with a set of metrics to periodically predict quality at any stage of code development and to detect problems at an early phase. Given the statistical properties of the data, we used a risk-threshold-based discriminant analysis technique to analyze the defined model and to distinguish fault-prone from non-fault-prone components. Input data for this model were gathered from several European Middleware Initiative packages with different scopes and characteristics, whilst outputs were derived from measures of all specified metrics. Finally, we assessed whether the model gives a true picture of the software under evaluation.