Abstract
The Support Vector Machine (SVM) is a powerful technique for data classification. For linearly separable data, the SVM constructs an optimal separating hyperplane as the decision surface that divides the data points of different categories in the vector space. For non-linearly separable data, kernel functions extend the concept of the optimal separating hyperplane by mapping the data into a space in which it becomes linearly separable. Different kernel functions have different characteristics, so the performance of the SVM is strongly influenced by the choice of kernel function. This paper presents a classification algorithm that uses the SVM in the training phase and the Mahalanobis distance in the testing phase, in order to design a classifier whose classification accuracy is only weakly affected by the choice of kernel function. The Mahalanobis distance replaces the optimal separating hyperplane as the classification decision function of the SVM. The proposed approach is compared with the Euclidean-SVM, which uses the Euclidean distance function in place of the optimal separating hyperplane as the classification boundary, and it is also evaluated against the conventional SVM. The experimental results show that the accuracy of the EuDiC (Euclidean Distance towards the Center of data) SVM classifier is only weakly affected by the kernel function used. The EuDiC SVM also achieves a drastic reduction in classification time, since it depends only on the mean of the support vectors (SVs) of each category for classification. To demonstrate its effectiveness on other types of data, time series data have also been used; owing to its robust design, the EuDiC SVM performs well on these data as well.
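A minimal sketch of the decision rule the abstract describes may help fix ideas: a standard SVM is trained, and test points are then classified by their Euclidean distance to the mean of each class's support vectors rather than by the separating hyperplane. The sketch assumes scikit-learn; the function names (eudic_fit, eudic_predict) and the synthetic data are illustrative, not from the paper.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC


def eudic_fit(X_train, y_train, kernel="rbf"):
    """Train an SVM, then keep only the per-class means of its support vectors."""
    clf = SVC(kernel=kernel)
    clf.fit(X_train, y_train)
    sv = clf.support_vectors_           # support vectors in input space
    sv_labels = y_train[clf.support_]   # class labels of the support vectors
    return {c: sv[sv_labels == c].mean(axis=0) for c in np.unique(sv_labels)}


def eudic_predict(centers, X_test):
    """Assign each test point to the class whose SV mean is nearest (Euclidean)."""
    classes = sorted(centers)
    dists = np.stack(
        [np.linalg.norm(X_test - centers[c], axis=1) for c in classes], axis=1
    )
    return np.array(classes)[dists.argmin(axis=1)]


X, y = make_classification(n_samples=300, n_features=10, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
centers = eudic_fit(X_tr, y_tr)
pred = eudic_predict(centers, X_te)
print("accuracy:", (pred == y_te).mean())
```

Because prediction reduces to comparing each test point against one stored mean per class, its cost is independent of the number of support vectors, which is consistent with the reduction in classification time claimed in the abstract.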
