Abstract
This work proposes a reinforcement learning (RL) based neural classifier of elbow, finger, and hand movements for a multitasking prosthetic hand. First, key statistical features are extracted from electromyogram (EMG) signals corresponding to elbow angles, finger movements for typing keys, and hand movements. These features are then fed to a neural-network-based reinforcement learning (NNRL) classifier that predicts the elbow angle, the typing-key finger movement, and the hand movement. For the first task (elbow angle), EMG signals were recorded with varying weights at different elbow positions; for the second task (typing keys), EMG data covers four typing keys; and for the third task (hand movement), EMG data covers six hand movements. For elbow angle prediction, two-channel EMG, biceps (channel 1) and triceps (channel 2), was recorded from 10 subjects, and 4 features are extracted from each channel. The classifier achieves an average classification accuracy of 97.51%. For the typing-key and hand-movement classification tasks, two EMG channels are used, right hand (channel 1) and left hand (channel 2), with 4 features extracted for the typing keys and 10 features for the hand movements. The classifier achieved accuracies of 98.73% and 97.6% on the typing-key and hand-movement tasks, respectively. NNRL gives superior results in comparison to existing classifiers, and the high classification accuracy suggests that this approach could serve as a stepping stone toward building a multitasking prosthetic hand.
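The front end of the pipeline described above, extracting a fixed set of statistical features from each EMG channel before classification, can be sketched as follows. The abstract does not name the 4 per-channel features, so mean absolute value, root mean square, variance, and zero-crossing count are illustrative assumptions, not the paper's actual feature set.

```python
import math

def extract_features(channel):
    """Compute 4 assumed statistical features from one EMG channel.

    The paper extracts 4 features per channel but does not specify them
    in the abstract; MAV, RMS, variance, and zero crossings are common
    EMG time-domain features used here as stand-ins.
    """
    n = len(channel)
    mean = sum(channel) / n
    mav = sum(abs(x) for x in channel) / n             # mean absolute value
    rms = math.sqrt(sum(x * x for x in channel) / n)   # root mean square
    var = sum((x - mean) ** 2 for x in channel) / n    # variance
    zc = sum(1 for a, b in zip(channel, channel[1:]) if a * b < 0)  # zero crossings
    return [mav, rms, var, zc]

def feature_vector(ch1, ch2):
    """Concatenate per-channel features into one classifier input,
    e.g. biceps (channel 1) and triceps (channel 2) for elbow angle."""
    return extract_features(ch1) + extract_features(ch2)
```

With two channels and 4 features each, the elbow-angle classifier input is an 8-dimensional vector; the hand-movement task, with 10 features per channel, would yield a 20-dimensional vector by the same construction.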
