Abstract
A brain-computer interface (BCI) provides a communication channel for physically disabled individuals who have lost the ability to speak due to severe brain injury, such as that caused by stroke, helping them connect with the outside world. In the proposed work, electroencephalogram (EEG) signals from the Bonn University database, divided into three classes of 247 samples each, are used as the input source. The signals are decomposed by the tunable Q-wavelet transform (TQWT), which splits them into sub-bands according to the Q-factor, the redundancy factor, and the number of sub-bands. A novel custom configuration uses a Q-factor of 3, a redundancy of 3, and 12 sub-bands for high-pass filtering, and a Q-factor of 1, a redundancy of 3, and 7 sub-bands for low-pass filtering, combined with nine statistical measures for feature extraction. Classification with a multi-class support vector machine achieves an accuracy of 99.59%, which compares favorably with existing research results. Furthermore, a comparative study on the same dataset using a deep neural network together with the support vector machine yields an accuracy of 100%. Other evaluation parameters, namely precision, sensitivity, specificity, and F1-score, are also calculated. The classified data are transformed into three communication messages intended to help address the speech impairment of disabled individuals.
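The feature-extraction and classification pipeline summarized above can be sketched in Python. This is a minimal illustration only: the abstract does not name the nine statistical measures, so a common set is assumed, and a simple sub-band split stands in for the TQWT decomposition (Q=3, r=3, 12 sub-bands for high-pass; Q=1, r=3, 7 sub-bands for low-pass in the paper). The data are synthetic stand-ins for the three-class Bonn EEG set of 247 samples per class.

```python
import numpy as np
from scipy.stats import skew, kurtosis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

def statistical_features(subband):
    # Nine statistical measures per sub-band (assumed set; the paper
    # does not enumerate them in the abstract).
    return [
        np.mean(subband), np.std(subband), np.var(subband),
        skew(subband), kurtosis(subband),
        np.min(subband), np.max(subband),
        np.median(subband), np.sqrt(np.mean(subband ** 2)),  # RMS
    ]

def extract_features(signal, n_subbands=12):
    # Placeholder decomposition: the paper uses TQWT; a plain split
    # into sub-bands stands in here for illustration.
    parts = np.array_split(signal, n_subbands)
    return np.concatenate([statistical_features(p) for p in parts])

# Synthetic three-class data, 247 samples per class, 512-point signals.
rng = np.random.default_rng(0)
X = np.array([extract_features(rng.normal(c, 1.0 + c, 512))
              for c in range(3) for _ in range(247)])
y = np.repeat([0, 1, 2], 247)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
clf = SVC(kernel="rbf", decision_function_shape="ovr").fit(X_tr, y_tr)
print(f"accuracy: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```

With 12 sub-bands and nine measures each, every signal yields a 108-dimensional feature vector; the multi-class behavior of `SVC` handles the three classes directly.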
