Abstract
Although electroencephalography (EEG)-based brain-computer interfaces (BCIs) have been quite successful, multi-command control remains one of the key challenges for external applications. Multimodal BCI is a promising direction for addressing this problem. In our study, five healthy subjects performed the experiment, during which EEG and electromyography (EMG) were recorded synchronously. For the EEG, the C3 and C4 channels were selected after Laplacian filtering. The EEG was then decomposed to the third level by the wavelet packet transform (WPT), and the mean, sub-band energy, and mean square deviation were calculated at particular nodes. These features were fed into a support vector machine (SVM), either singly or in combination, to obtain the EEG classification accuracy. For the EMG, the mean absolute value (MAV) and root mean square (RMS) were calculated and fed into a probabilistic neural network (PNN) to obtain the EMG classification accuracy. Different mental and gesture tasks were combined to form multi-class commands, which were ranked by their performance. The results showed that the subjects achieved satisfactory multi-class performance with the multimodal BCI, indicating that the proposed interface could support multi-command control for external applications.
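The EEG and EMG feature computations summarized above can be sketched as follows. This is a minimal illustration on simulated signals, using a Haar wavelet packet decomposition (the abstract does not specify the mother wavelet) and treating "mean square deviation" as the variance; it is an assumption-laden sketch, not the authors' implementation:

```python
import numpy as np

def haar_wpt(signal, level=3):
    """Recursive one-level Haar analysis of every sub-band (wavelet
    packet transform); returns the 2**level node coefficient arrays."""
    bands = [np.asarray(signal, dtype=float)]
    for _ in range(level):
        nxt = []
        for b in bands:
            pairs = b.reshape(-1, 2)
            nxt.append((pairs[:, 0] + pairs[:, 1]) / np.sqrt(2))  # approximation
            nxt.append((pairs[:, 0] - pairs[:, 1]) / np.sqrt(2))  # detail
        bands = nxt
    return bands

def eeg_features(signal):
    """Mean, sub-band energy, and mean square deviation (variance is
    assumed here) for each third-level WPT node."""
    return np.array([[b.mean(), np.sum(b ** 2), b.var()]
                     for b in haar_wpt(signal)]).ravel()

def emg_features(signal):
    """Mean absolute value (MAV) and root mean square (RMS)."""
    x = np.asarray(signal, dtype=float)
    return np.array([np.mean(np.abs(x)), np.sqrt(np.mean(x ** 2))])

rng = np.random.default_rng(0)
eeg_epoch = rng.standard_normal(256)   # simulated single-channel EEG epoch
emg_epoch = rng.standard_normal(256)   # simulated EMG epoch
print(eeg_features(eeg_epoch).shape)   # 8 nodes x 3 features -> (24,)
print(emg_features(emg_epoch))         # [MAV, RMS]
```

In the study these feature vectors would then be passed to an SVM (EEG) and a PNN (EMG) for classification; those classifiers are omitted here.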
