Abstract
The primary objective of this research is to enhance the classification accuracy of motor imagery (MI) electroencephalography (EEG) signals, thereby improving brain–computer interfaces (BCIs) for communication among individuals with mobility limitations. The main challenges are identifying discriminative frequency bands, constructing efficient time-frequency representations, and classifying those representations accurately. Existing classification models have not achieved high accuracy, which is the key evaluation metric. This research presents a method that first identifies the optimal frequency band using several fast-converging optimization approaches. After filtering with the identified band, a continuous wavelet transform with a complex Morlet wavelet produces time-frequency representations (scalograms), which are effective sources of features for MI tasks. These scalograms are then classified with a vision transformer, an advanced deep learning model well known for extracting and selecting features via its attention mechanism. The proposed technique achieved remarkable accuracy, attaining 97.33% on a widely recognized dataset and 89.89% on another, outperforming comparable research. Integrating modern signal processing with a cutting-edge deep model enhances accuracy, enables neuroprosthetic device control, and opens new avenues for research in the BCI arena.
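The scalogram-generation step in the pipeline above can be sketched with a minimal complex-Morlet continuous wavelet transform. This is an illustrative NumPy implementation, not the paper's code: the sampling rate, the 8–30 Hz frequency grid (standing in for the band the paper finds by optimization), and the `n_cycles` parameter are all assumed values.

```python
import numpy as np

def morlet_cwt(x, fs, freqs, n_cycles=7.0):
    """Complex-Morlet CWT magnitude (a scalogram) via direct convolution.

    x      : 1-D signal (e.g. one band-pass-filtered EEG channel)
    fs     : sampling rate in Hz (assumed, not from the paper)
    freqs  : frequencies of interest in Hz
    """
    scalogram = np.empty((len(freqs), x.size))
    for i, f in enumerate(freqs):
        sigma = n_cycles / (2 * np.pi * f)            # Gaussian envelope width (s)
        t = np.arange(-3 * sigma, 3 * sigma, 1.0 / fs)
        # Complex Morlet: complex sinusoid at f Hz under a Gaussian envelope.
        psi = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma**2))
        psi /= np.sqrt(np.sum(np.abs(psi) ** 2))      # unit-energy normalization
        scalogram[i] = np.abs(np.convolve(x, psi, mode="same"))
    return scalogram

# Synthetic 12 Hz oscillation standing in for a mu-band MI signal.
fs = 250.0
t = np.arange(0, 2.0, 1.0 / fs)
x = np.sin(2 * np.pi * 12 * t)
freqs = np.arange(8, 31)                              # assumed 8-30 Hz MI band
sc = morlet_cwt(x, fs, freqs)                         # shape: (n_freqs, n_samples)
```

The resulting `(frequency, time)` magnitude array is the kind of image that, per the abstract, is fed to a vision transformer for classification.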
