Abstract
Here we report and comment on the magnitudes of post-stroke impairment reduction currently observed using new neurotechnologies. We argue that neurotechnology's best use case is impairment reduction, as this is neither the primary strength nor the main goal of conventional rehabilitation, which is better at targeting the activity and participation levels of the International Classification of Functioning, Disability and Health (ICF). The neurotechnologies discussed here can be divided into those that seek to be adjuncts for enhancing conventional rehabilitation and those that seek to introduce a novel behavioral intervention altogether. Examples of the former include invasive and non-invasive brain stimulation. Examples of the latter include robotics and some forms of serious gaming. We argue that motor learning and training-related recovery are conceptually and mechanistically distinct. Based on our survey of recent results, we conclude that large reductions in impairment will need to begin with novel forms of high-dose and high-intensity behavioral intervention that are qualitatively different from conventional rehabilitation. Adjunct forms of neurotechnology, if they are going to be effective, will need to piggyback on these new behavioral interventions.
Introduction
We are at an interesting juncture in medicine. At the time of writing, most of us have had our COVID boosters. We are still not out of the pandemic, but the vaccinations have been another success story for reductionist biology. As much as neurology would like to have similar success, the fact is that neurological conditions such as stroke and Parkinson’s Disease are not honorary infectious diseases and are not likely to yield to monotherapeutic magic bullets. 1 The recent aducanumab fiasco speaks to the reluctance of the biomedical establishment to accept this painful truth. 2 But all is not lost; there are new “system-level,” behavior-focused kids on the block: digital therapies, brain-machine interfaces, implantable neural stimulators, and robots. There is a logic to these behavioral approaches, as the nervous system is unlike other organs in that it undergoes adaptive, experience-dependent plasticity. The question we address here is: how large have reductions in impairment been for these various neuroengineering approaches in the field of neurorehabilitation thus far, and are there prospects for them to get larger?
First, we should say what this perspective will and will not be about. We are in no way seeking to be either encyclopedic or systematic but instead focus on key studies, hopefully in an unbiased way, so as to accentuate principles and conceptual pitfalls. We will not discuss the goal of using technology either as an aid to therapists conducting regular rehabilitation or as a substitute for therapists. On this latter point, we should state that there is a real danger that using technology on the grounds of cost saving or perceived gains in efficiency will actually decrease the quality of care by reducing in-person interaction or by removing human beings from the loop altogether. This could lead to a race to the bottom; the app-ification of healthcare is not a promising default position for medicine. Human beings will always be needed when a behavioral intervention is being given, and technology should be used when it does something that synergizes with them. We prefer the idea of Luke Skywalker and R2D2, not R2D2 alone. We will also not discuss either assistive technology or the use of technology for finer-grained assessment and tracking of responses to interventions.
The goal of this perspective is to determine whether recent trials or scientific work justify the belief that innovations in software and hardware can: (1) produce impairment-reducing effect sizes previously unseen with conventional approaches to neurorehabilitation; or (2) show a promising signal suggestive of a true recovery effect.
Here we define “recovery” as a reduction in impairment that occurs as a result of neurophysiological changes that restore the behavioral phenotype back to, or at least toward, its pre-morbid state. Whenever the observed impairment reduction after a technological intervention is small, the question that needs to be asked is whether this is attributable to a failure of the engineering or a failure of the scientific premise.
The main claim advanced here is that technology-based behavioral interventions are better suited to impairment reduction than mere increases in regular therapy, as the latter is an approach that primarily targets motor learning and the teaching of practical coping strategies, that is, the activity and participation levels of the ICF. 3 Although recent studies suggest that substantially increasing the amount of regular therapy given to a patient can have an effect on impairment,4-9 here we argue that this is suboptimal both for the patient and the therapist; a therapist could take a patient home by carrying them on their back, but either a car or a van would be a better choice. Recent trials have shown parity for increased regular therapy and technology-based approaches, 10 but we speculate that there will soon be a tipping point for technology, after which patients will get equivalent or larger reductions in impairment with less time on task as compared with increased regular therapy. This is very important because patients and therapists will get bored and tired doing repetitive exercises for many hours over many days. We wish to emphasize, however, that therapists, defined broadly as humans with expertise in behavioral intervention, will always be crucial to this enterprise. One can envisage the creation of a new cadre of therapists trained to use technology specifically for impairment reduction within a conceptual framework targeting neuroanatomy and neurophysiology, rather than activities of daily living (ADLs) and the perceived needs of patients. These latter goals are entirely laudable but are already well accomplished by existing approaches to physical, occupational, and speech therapy. Indeed, the ideal scenario would be a dual approach: maximize impairment reduction (restoration) and then follow up with training patients to optimally generalize whatever level of impairment they possess to the real world (rehabilitation).
The critical point about technology in the context of neurorehabilitation is that it can be used to achieve 2 distinctly different goals for people living with neurological injury or disease. Namely, it is both a tool to modulate learning how to use the residual nervous system at a fixed level of impairment, and a tool to provide a novel training approach that is not focused on functional tasks but on increasing residual cognitive and motor capacities. Thus, there is an important distinction to be made between learning (or relearning) a given task within the performance envelope of residual capacities, 11 versus training to enhance the residual capacities themselves, that is, to reduce impairment and thereby allow generalization across tasks. This dichotomy has also been framed as compensation versus restitution. 12 A recent meta-analysis of 32 longitudinal studies of stroke recovery found that none of them explicitly addressed this distinction between behavioral restitution and compensation. 13 One reason for this gap is that the field has not yet reached consensus on what new kinematic measures should be used to track true recovery. The proper evaluation and development of interventional neurotechnologies will require parallel innovations in measurement. Crucial to our argument is that technology for restitution and technology as an aid for therapists should not be conflated with each other, as they require distinct behavioral protocols and expertise. At the very least, the goal of any neurotechnology with regard to this dichotomy should always be explicitly stated. Our focus here is on the potential for technology to bring about meaningful neurological restitution.
A consequence of the reasoning outlined above is that if a technology-based intervention has a marked effect on reduction in impairment, but the control intervention does also, leading to no significant difference between the trial groups, then this should not necessarily be taken as a negative result against the technological intervention. First of all, the control group in most neurorehabilitation trials does not in fact receive standard of care but doses and intensities of protocolized occupational therapy (OT) and physical therapy (PT) that are rarely given in regular clinical practice; in essence, the control group is receiving a novel intervention in and of itself. Second, parity for the technology might be a first step in it becoming superior to matched OT/PT or at least easier to scale and economically more efficient. The first automobile ever built had a maximum speed of 10 mph. The Model T Ford had a maximum speed of 40 to 45 mph. A horse can reach 55 mph. The transition is obvious in retrospect, and we predict that it is likely to occur for impairment reduction technology in the same way. In contrast, we suspect that the activity and participation levels of the ICF will remain better targeted by some form of hands-on therapy. What is written below should be viewed through this lens: does any form of intervention, high-dose conventional versus novel technological, or a combination of both, lead to large reductions in impairment? This is a more important question than whether a technological intervention was statistically superior to a control when the impairment reduction in both groups remained small.
Stimulation: Technology That Targets Physiology Directly
Non-invasive Brain Stimulation
The 2 principal non-invasive brain stimulation (NIBS) approaches are transcranial magnetic stimulation (TMS) and transcranial direct current stimulation (tDCS), and although there are ever more variants, the claims made for them are essentially the same: that synaptic connections can be strengthened and that these changes outlast the period of stimulation. TMS directly stimulates and induces action potentials in cortical or cerebellar neurons. 14 The idea is that if this is done repetitively then synaptic strengths change in a manner analogous to long-term potentiation (LTP). In the case of tDCS, neurons are not induced to fire but their membranes are brought closer to threshold for endogenous inputs. For TMS, the rehabilitative principle is to potentiate residual circuitry and connections, which in turn may make concomitant training more effective. For tDCS, the idea is that learning may be made more efficient by making synaptic change more probable on any given learning trial. The question, then, is whether these ideas centered on synaptic plasticity have been borne out in terms of behavioral effect sizes. We will limit our answer to the 3 deficits that have been studied the most to date: aphasia, arm paresis, and walking.
Aphasia
As of now, both forms of NIBS seem to have an additional effect on naming when combined with speech and language therapy (SLT).15-17 Interestingly, when compared to SLT itself, NIBS, particularly tDCS, seems to be more effective in promoting generalization from trained items to untrained items. For instance, a recent randomized controlled trial in stroke survivors showed that participants who received anodal tDCS paired with SLT experienced an average 5.7 word increase in a naming task relative to the group that received a sham NIBS intervention paired with SLT. 18 This suggests that NIBS, although not restorative per se, is potentiating retrieval of already known but untrained words as well as learning of trained items. Generalization should always be interpreted with caution, however, as alternative explanations exist. For example, NIBS might increase the likelihood of a patient taking the risk of being wrong, but thereby creating the opportunity to be right. This is in contrast to the most common response in aphasic patients when asked to picture-name, which is to refrain from saying anything at all. This possibility should always be considered given that absolute effect sizes of NIBS in aphasia are not large in real-world communication settings, as the most recent meta-analyses suggest. 19 Nevertheless, even if NIBS is not bringing patients back toward a pre-injured state, it seems to be potentiating convergence on an alternative strategy whereby patients can do better than just painstakingly learning to retrieve word-by-word with SLT alone. It is an open question whether this suggestion that NIBS does something qualitatively different from SLT alone will be confirmed by additional studies, and there is a need for a stronger conceptual framework to explain why this should be the case. For example, it is possible that tDCS operates via brain-derived neurotrophic factor (BDNF) whereas regular training does not. 20
Arm Paresis
NIBS for the recovery of upper extremity function after stroke has been an area of hopeful investigation for many years, with several early feasibility studies suggesting that NIBS could enhance the efficacy of conventional rehabilitation following stroke and induce neurophysiological changes that accompany modest improvements in function and prehension. 21 Unfortunately, the current body of literature concerning NIBS and post-stroke upper extremity recovery indicates a lack of any additional gains in either impairment or function in larger clinical trials using either tDCS 22 or TMS 23 in conjunction with physical therapy. A recent meta-analysis advises against the addition of supplementary NIBS for upper extremity rehabilitation due to the lack of any meaningful effect size when applied as an adjunct in well-controlled clinical trials.24,25 As with NIBS for aphasia, much of the hope and disappointment surrounding the various research endeavors involving NIBS and recovery from arm paresis stems from a paucity of specific hypotheses regarding the potential mechanism of action of these interventions. Indeed, many studies investigating recovery of the upper extremity following stroke using NIBS will invariably cite the existing motor learning literature involving healthy controls as a proposed justification for the stimulation protocol being explored, indicating that enhanced compensatory learning, not restoration, was always the goal, albeit only implicitly in some cases.
Walking
In contrast to the literature on NIBS for aphasia and upper extremity recovery, there have been some encouraging studies showing that NIBS may drive meaningful restorative gains when properly paired with gait training. Notably, for the lower extremity the approach has been intentionally focused on restoration of damaged neurological pathways or activation of new pathways, not on enhancing compensatory learning. That is to say, there is a distinction to be made between using NIBS to bring circuitry on-line so that training can then increase the efficiency of residual pathways for control, and using NIBS primarily to target learning, as discussed in the sections above on aphasia and the upper limb. For instance, one NIBS technique that has shown promise in enhancing the effect of rehabilitation on gait and balance in stroke survivors has been contralesional theta-burst TMS, with participants who received the interventional NIBS protocol showing a 13-point improvement on the Berg Balance Scale compared with a 6-point improvement in the sham group. 26 In contrast to NIBS approaches that target the ipsilesional motor cortex, which have been associated with more modest improvements in gait and balance, 27 there is emerging evidence to suggest that both non-invasive and invasive stimulation of the contralesional cerebellum may promote robust improvements in post-stroke lower extremity function. While the reasons why gait might be more responsive to NIBS in the restorative sense remain to be determined, as alluded to above, functional ambulation is less amenable to qualitative changes in strategy, and so training plus NIBS may be better at targeting residual neural architecture to restore a previous behavior than at teaching a new compensatory behavior altogether.
Invasive Brain Stimulation
With the rapid growth of neurotechnology entrepreneurship in the past decade, we have also seen significant interest in the development of technologies that are designed for direct stimulation of the central nervous system in order to facilitate recovery from neurological injury. When one considers therapeutic invasive brain stimulation as a topic, it is almost impossible not to think of deep brain stimulation (DBS) for Parkinson’s Disease, a technique which has enjoyed decades of success in alleviating the severely debilitating symptoms of the disease 28 in a manner that appears nothing short of miraculous to an outside observer. While the theoretical framework for DBS is relatively well understood (high frequency stimulation parameters are used to reduce the influence of overactive nuclei in basal ganglia output), the new generation of invasive brain stimulation techniques has mechanisms of action that are far less well defined. As a field, this fact should give us pause from an ethical standpoint as we consider the implications of permanent implantation of invasive devices into and around the central nervous system. In addition, unlike DBS, it appears to be an inescapable fact that the technologies to be discussed require the pairing of a stimulation protocol with intensive rehabilitation in order to be optimally efficacious. With these factors in mind, it becomes important to critically evaluate the effect sizes and meaningful functional improvement that these invasive interventions yield before pushing for widespread adoption.
Spinal Cord Epidural Stimulation
Spinal cord epidural stimulation (scES) is amongst the most mature of the invasive central nervous system stimulation techniques discussed in this section. Since it was first observed that scES could be used in conjunction with rehabilitation to elicit voluntary movements in people living with motor complete spinal cord injury (SCI), it has been the topic of intensive investigation. 29 Work to replicate and scale these initial findings has been aggressively pursued, and people with SCI have been experiencing significant improvements in both voluntary movement production and autonomic function.30,31 In addition to this assistive effect, scES appears particularly promising because it results in true, albeit partial, restoration of function: some voluntary muscle activation returns to patients who previously had none and, with rehabilitation, these gains can often persist without the need for continued use of the stimulator. 31 Despite these encouraging findings, the mechanism of action that is driving these restorative changes remains elusive, which means that scES researchers must rely on observations from existing data to drive best practice for participant selection and surgical and stimulation procedures. For instance, while the amount and location of spared spinal cord tissue in scES research participants with motor complete SCI was found to be only conditionally predictive of their ability to generate volitional limb movements following stimulation, 32 positioning of the stimulating electrodes was well correlated with outcome. 33 In addition, stimulation must be precisely timed at different electrode locations in order to be optimally effective, with continuous stimulation of all electrodes failing to produce comparable results. 31 All told, scES remains an incredibly promising restorative technique for individuals with SCI. Indeed, a case has been made for cervical cord stimulation for upper limb paresis in chronic stroke 34 and initial results are exciting. 35 Further research investigating the potential mechanism of action in more detail and optimizing stimulation protocols is required, not to mention exploration of the role of scES in other forms of neurological injury and disease.
Vagus Nerve Stimulation
Recently, vagus nerve stimulation (VNS) has emerged 36 as an invasive stimulation technique targeting motor recovery from stroke. Already a staple of epilepsy management, having first been introduced in 1988, VNS is viewed as safe and efficacious even though the exact mechanism of action in reducing seizure activity is unclear.37,38 Given pre-clinical evidence that VNS paired with behavioral protocols promotes functional recovery in neurological conditions, 36 there was certainly a good-faith basis to suspect that VNS may have a role in enhancing motor recovery in stroke survivors. Accordingly, in 2021 a pivotal clinical trial showed that invasive VNS paired with intensive (270 minutes per week for 6 weeks, with manually activated VNS) upper extremity rehabilitation produced significant improvements in the upper extremity Fugl–Meyer Assessment (FMA) (5.0 ± 4.4 points), compared with a group that received sham VNS and intensive rehabilitation (2.4 ± 3.8 points; Dawson et al 39 ). Although these VNS findings are seemingly encouraging, it is important to view them in the context of the overall thesis of this perspective piece: the effect sizes were small. The explanation for the small effect is suggested by recent work showing that VNS enhances motor learning via reinforcement by phasic cholinergic signaling. 40 As we have shown, the capacity for motor skill learning after stroke is intact within the constraints of the impairment envelope; that is to say, learning cannot break through this envelope. Thus VNS may primarily be bringing patients to the lower bound of their impairment via reinforcement learning but no further. It would therefore be unfortunate if VNS becomes the new invasive version of tDCS and TMS, with the same premature enthusiasm and attendant confusion with regard to whether the effect is just task-specific learning within the impairment envelope of the patient or whether it changes the envelope itself. 11 Given the seriousness of an invasive central nervous system implant, consideration should be given to the risk/benefit ratio of invasive VNS as a desirable restorative therapy for stroke recovery, especially since purely behavioral interventions in chronic stroke patients have yielded larger effect sizes in upper extremity recovery (average changes in FMA ranging from 9.8 to 11).4,9 Again, the intellectual trend when it comes to neurotechnology is apparent: the excitement is almost always about an intervention that piggybacks on a regular behavioral intervention rather than about changing the behavioral intervention itself.
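To make “the effect sizes were small” concrete, the arithmetic implied by the trial numbers cited above can be written out explicitly (an illustrative back-of-envelope calculation; 66 points is the maximum of the upper extremity FMA):

\[
\Delta_{\text{VNS}-\text{sham}} \approx 5.0 - 2.4 = 2.6 \ \text{FMA points} \approx 4\% \ \text{of the 66-point scale},
\]

and even the within-group gain of 5.0 points in the VNS arm falls well short of the 9.8 to 11 point average changes cited above for purely behavioral interventions in chronic stroke.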
Motor Cortex Stimulation
Invasive epidural electrical cortical stimulation (EECS) of cortical motor areas in stroke survivors has rarely been attempted due to the risks and challenges associated with the procedure. The proposed mechanism of action for EECS is akin to that of NIBS, with the relatively simple rationale that it will be better than NIBS because increasing the proximity of the stimulating electrode to the cortex should improve the accuracy and efficacy of the stimulation protocol. The most recent EECS trial, and the largest of its type, failed to show significant between-group differences when EECS plus upper extremity rehabilitation was compared with a control group that received matched rehabilitation alone. 41 Although subgroup analysis in this cohort slightly favored the EECS group over the control, one simply cannot ignore that EECS still yielded relatively small effect sizes, nor the fact that more than 10% of the EECS cohort experienced a serious adverse event from either the implantation procedure or the anesthesia.
Deep Cerebellar Stimulation
The core idea underlying this approach for stroke recovery is that DBS of the dentate nucleus will lead to increased excitability and reorganization of the contralateral perilesional motor cortex via the dentatothalamocortical pathway, which in turn will lead to gains in motor function. 42 A number of questionable assumptions form the basis for this rationale. First, that increased cortical excitability is behaviorally relevant (see Bestmann and Krakauer 43 for a critique of this overly simplistic position). Second, that there is convincing evidence that crossed-cerebellar diaschisis is associated with worse stroke outcomes (there isn’t; see Krakauer and Carmichael 44 ). Third, that changes occurring in perilesional cortical maps indicate that functionally significant reorganization has occurred, 45 but the available evidence does not unequivocally support this (see Krakauer and Carmichael 44 ). The original work suggesting a potential stroke recovery benefit for deep cerebellar stimulation was done in rats and showed only modest behavioral gains when chronic 20 or 30 Hz stimulation of the lateral cerebellar nucleus was given for several weeks with and without concomitant training. Indeed, the gains seen are comparable to those seen with cortical stimulation protocols in animal models, which does not bode well for a human study that has just begun, Electrical Stimulation of the Dentate Nucleus Area (EDEN). The objective of this study (n = 12) is to document the safety and patient outcomes of EDEN for the management of chronic, moderate to severe upper extremity hemiparesis due to ischemic stroke. 46
Overall, the promise of invasive approaches will ultimately depend on first developing better behavioral interventions with good effect sizes on their own. Physiological approaches would then modulate these new training paradigms. To hope that invasive approaches will substitute for or enhance inadequate behavioral interventions is not, in our view, a promising direction.
Brain Computer Interface
Brain Computer Interface (BCI)-assisted rehabilitation approaches are a largely unexplored area of neurorehabilitation. Traditionally, BCI technology has been most widely applied in people with severe paralysis as a neuromotor prosthesis or augmentative communication technology. However, the use of BCI technology as an adjunct to neurorehabilitation to drive neurological restoration has been little explored. 47 Pairing electroencephalography-based BCI technology with upper extremity rehabilitation has yielded mixed results, and whether BCI-assisted rehabilitation significantly outperforms rehabilitation alone is questionable because of small between-group effect sizes and insufficient power to detect such differences.48,49 Similarly, BCI-assisted lower extremity neurorehabilitation has also shown some promise, but without a control group it is difficult to know whether the gains should instead be attributed to the intensive robotic rehabilitation that was also part of the protocol. 50 A significant challenge related to the integration of BCI technology into neurorehabilitation stems from the fact that BCI technology still does not provide continuous sensory feedback during ongoing movement, which would seem to be more important for a restorative role than for a purely assistive one. As such, current approaches to BCI-assisted neurorehabilitation have yet to provide a compelling case for bringing about true recovery.
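To make concrete what “electroencephalography-based BCI technology” typically entails in these rehabilitation studies, the sketch below illustrates the usual decoding loop in schematic form: band power over the sensorimotor mu and beta rhythms is extracted from short EEG epochs and classified to detect attempted movement, and a positive detection then triggers feedback such as an orthosis, functional electrical stimulation, or a game event. This is a minimal, illustrative Python example on synthetic data; it is not drawn from any of the cited trials, and the specific feature and classifier choices are assumptions for illustration only.

```python
# Illustrative sketch of a motor-imagery EEG decoding loop of the kind used in
# BCI-assisted rehabilitation (synthetic data; not from the cited studies).
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

fs = 250  # EEG sampling rate in Hz (a common amplifier setting)

def band_power(epoch, lo, hi):
    """Mean power per channel in the [lo, hi] Hz band; epoch is channels x samples."""
    freqs, psd = welch(epoch, fs=fs, nperseg=fs)
    band = (freqs >= lo) & (freqs <= hi)
    return psd[:, band].mean(axis=1)

def features(epoch):
    # Mu (8-13 Hz) and beta (13-30 Hz) band power for each channel
    return np.concatenate([band_power(epoch, 8, 13), band_power(epoch, 13, 30)])

# Simulate 80 two-second, 8-channel epochs alternating between rest and attempted
# movement; attempted movement is modeled crudely as a drop in sensorimotor power
# (event-related desynchronization).
rng = np.random.default_rng(0)
X, y = [], []
for trial in range(80):
    attempted = trial % 2
    epoch = rng.standard_normal((8, 2 * fs))
    if attempted:
        epoch *= 0.7
    X.append(features(epoch))
    y.append(attempted)
X, y = np.array(X), np.array(y)

# Train a linear classifier on the first 60 epochs, test on the remaining 20.
clf = LinearDiscriminantAnalysis().fit(X[:60], y[:60])
print("held-out detection accuracy:", clf.score(X[60:], y[60:]))

# In a closed-loop session, each incoming epoch would be classified online and a
# positive detection would trigger the feedback device (orthosis, FES, or game).
```

The relevant point for the argument above is that what such systems decode is a discrete, low-bandwidth detection of movement intent, which is precisely why continuous sensory feedback during ongoing movement remains out of reach for current BCI-assisted rehabilitation.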
Robotics
Upper Limb. The story (so far) of rehabilitation robotics has been sobering. It can be seen as a cautionary tale of what happens when engineering and clinical practice get locked into a codependent relationship with not enough science in between. The motivation for robotics seems commendable if it is not scrutinized too closely: a robot can allow delivery of higher doses and intensities of rehabilitation than a therapist. Unfortunately, this is where the conceptual confusion begins. As we stated at the beginning of this piece, current upper limb therapy is based on the premise that task-oriented training is the best way to make functional gains that generalize to ADLs. Indeed, using robots to do repetitive task practice is exactly the study rationale provided in the introduction to the paper reporting the results of the largest robotics trial to date, RATULS. 51 The implication is that a robot is most usefully considered a mechanical therapist, with a focus on function rather than impairment. Accordingly, the Action Research Arm Test (ARAT), an activity-level scale, was the primary outcome measure for RATULS. The same rationale emphasizing function was given in the 2 previous large trials of robotics.52,53 At the end of the introduction to the paper reporting on the Veterans Affairs (VA) Robotic-Assisted Upper-Limb Neurorehabilitation in Stroke Patients study, it is stated that the goal was to determine whether a robotic protocol could improve functioning and quality of life of stroke survivors with long-term upper-limb deficits. Again the emphasis, like conventional therapy, was on function not impairment. Oddly, however, the primary outcome measure in the VA study and the other previous large robotics study was the Fugl–Meyer scale, which is an impairment measure that was devised to quantify abnormal post-stroke synergies. The reason for the choice of outcome measure is never explicitly given in any of the studies, but at least the ARAT is consistent with a focus on the activity (function) level of the ICF. It is important to emphasize that this is not just a quibble over semantics; a secondary outcome measure at the activity level of the ICF, the Wolf Motor Function Test (WMFT), which is similar in conception to the ARAT, was also included. It should be noted that the FMA and ARAT are correlated, but of course this does not mean they are measuring the same thing. Height and weight are also highly correlated but obviously measure distinct things. Indeed, there are many examples of the FMA and ARAT dissociating in intervention studies.51,54,55 Ultimately, the field will need to transition from these old scales to finer-grained kinematic measures of impairment to reduce ambiguity in the interpretation of results. 13
So, what is going on in these disappointing robot studies? The culprit, once again, is a paucity of biological thinking, resulting in a superficial conflation of learning, training, repetition, and practice (terms that should not be used interchangeably) and in vague appeals to “neuroplasticity.” This is the example par excellence of what happens when the technological tail wags the conceptual dog. Interventions with expensive pieces of hardware based on vague premises are compared to conventional therapy based on the same or similar vague premises. The result, unsurprisingly, was 3 negative trials: at best there was no significant difference on the primary outcome measure between the novel intervention and the control, and the magnitudes of impairment reduction in the various groups were small. There was a gain of less than 3 points on the ARAT in RATULS, 51 and although FMA gains were slightly larger in the VA trials, there were few clinically important differences between the interventional groups and the control groups, with no greater than a 2.17-point change in the FMA seen in these comparisons across both trials. In the case of RATULS, the control intervention was superior to regular care on the primary outcome measure (ARAT), whereas the robotic intervention was not. It should be made clear that negative results can be very illuminating if a clear hypothesis based on well-reasoned concepts is proven wrong. If, however, there is no such hypothesis, and outcome measures are chosen mainly because they already exist, then it becomes very difficult to interpret the results of negative trials.
We will not go into detail here as to why we think that assistive robotics for the upper limb is not the optimal way either to teach compensatory strategies or to reduce impairment (see Krakauer and Carmichael 44 ), but in our view it was a conceptual misunderstanding about the relationship between assistive robotics and true recovery that led to the (inevitable) negative trials, not a failure of either the technology per se or of trial design. To the degree that some robotic studies show small effects, as a recent meta-analysis suggested, 56 it is possible that these are due to motivational effects, learning, increases in strength, and peripheral changes rather than to true restoration of motor control. 57 Our conclusion does not preclude the possibility that there will be a future role for robotics in rehabilitation, for example by providing weight support so that self-initiated practice can occur despite weakness, but the overall conceptual framework must change.
Lower Limb. It is crucially important to the topic of this review to discuss how the outcomes of clinical trials of lower extremity robotics have differed from those of upper extremity robotics. Multiple reviews have highlighted the benefits of lower extremity robotic technology, even compared with conventional gait rehabilitation approaches, particularly for stroke and SCI survivors.58-60 Unlike upper extremity robotics, many clinical trials of lower extremity robotics show reliable, measurable, and lasting improvement of motor function in the lower extremities in response to intensive robotic rehabilitation.61-63 In addition, lower extremity robotic rehabilitation, especially robotic interventions that focus on walking, may produce broader, systemic benefits such as improvements in bladder and bowel function, 64 cardiovascular and pulmonary health,65,66 slowing of bone density loss, 67 and improved psychosocial outcomes.68-70 Although these findings are not the immediate focus of this review, it should be noted that these factors should contribute to therapeutic decision-making when it comes to identifying viable rehabilitation strategies for people with neurological disabilities. For instance, the ability of an exoskeletal robotic rehabilitation program to reduce the duration of a bowel routine by an average of 24%, while additionally normalizing stool consistency, 61 is a restorative quality of exoskeletal robotic rehabilitation that does not have an equivalent in the upper extremity robotic literature. There is a critical need to understand why lower extremity robotic devices appear to produce more reliable motor improvements (when compared with non-robotic therapeutic approaches) than upper extremity robotic rehabilitation. It is likely that the differences in efficacy of upper and lower extremity robotic neurorehabilitation can be attributed at least in part to physiological differences. Locomotion is evolutionarily much older and its circuitry is more innate and distributed across the neuraxis down to the spinal cord.71,72 In contrast, prehension is heavily cortex-dependent. 44 Thus, using upper extremity robots as though they are just treadmills for the arm is almost certainly too simplistic.
Video Gaming, Virtual and Augmented Reality
Video games and virtual reality (VR) experiences aim to create more immersive, stimulating, and mood-enhancing training experiences than conventional therapy. They can also esthetically enhance the clinical environment and offer patients the opportunity for multiplayer interpersonal engagement, either in their rooms, where they spend a lot of time alone or in silence with bored relatives and friends, or via a remote connection. Almost everyone knows what a videogame is. VR is defined in the Oxford English Dictionary as “The computer-generated simulation of a three-dimensional image or environment that can be interacted with in a seemingly real or physical way by a person using special electronic equipment, such as a helmet with a screen inside or gloves fitted with sensors.” Augmented reality (AR), in contrast, does not take the user out of the real world, but instead superimposes a computer-generated image on the user’s view of the real world. Mixed reality refers to combining elements of AR and VR. In addition to the motivational and fun aspects of VR and AR, they also provide a means to control and weight multimodal feedback, and allow patients to perform activities that might be unsafe in reality.
A Cochrane review from 2017 reported that the quality of the evidence for the effectiveness of VR and video gaming is low but nevertheless concluded that they may be beneficial in improving upper limb activity-level performance measures and global ADLs when used as an adjunct to usual care or when compared to equivalently-dosed conventional therapy. 73 A more recent meta-analysis aimed specifically at investigating the efficacy of immersive VR in stroke rehabilitation similarly found promise when the therapy was applied in conjunction with conventional rehabilitation. 74
The role of gamification in upper extremity rehabilitation tells a similar tale to the use of VR and AR in neurorehabilitation. A multi-center trial called EVREST, 75 conducted in patients within 3 months of stroke, compared 2 kinds of 2-week intervention: gaming with the Nintendo Wii versus recreational therapy (playing cards, bingo, music, etc). Both interventions were added on to conventional therapy. The result was that there was no difference in upper extremity outcome assessed with the WMFT at 4 weeks. The authors of the study concluded that the type of task used in rehabilitation might not matter as long as it is given at high intensity and is task specific. Similarly, a recent meta-analysis of 42 studies 76 of gaming for neurorehabilitation comes to generally favorable conclusions, but the reported effect sizes are still small (this meta-analysis reported a Cohen’s d effect size of 0.42 for the effect of gamified rehabilitation on upper extremity function, which is considered to be on the high end of a “small” effect size). None of these papers provides any conceptual argument or background as to why games and VR are being used in the first place. Instead they just take the studies that they review at face value and make no attempt either to critique or to differentiate between the chosen outcomes. This “old wine in new bottles” use of neurotechnology will not take gamified rehabilitation very far beyond the benefits of regular OT and PT. As stated earlier more generally, but here in the context of gaming, one must not conflate gamified physical and occupational therapies focused on function and ADLs with novel, immersive gaming interventions focused on impairment reduction and training of capacities.
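For reference, Cohen’s d is simply the between-group difference in means scaled by the pooled standard deviation; the formula and benchmarks below are the standard conventions and are not specific to the meta-analysis cited above:

\[
d = \frac{\bar{x}_{\text{gaming}} - \bar{x}_{\text{control}}}{s_{\text{pooled}}}, \qquad s_{\text{pooled}} = \sqrt{\frac{(n_1 - 1)s_1^{2} + (n_2 - 1)s_2^{2}}{n_1 + n_2 - 2}},
\]

with values of roughly 0.2, 0.5, and 0.8 conventionally read as small, medium, and large effects; a d of 0.42 therefore sits between the small and medium benchmarks, consistent with the characterization above.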
As with neurostimulation approaches and robotics, for progress to be made it will be necessary to have a proper conceptual framework for what games and VR uniquely offer so that they go beyond just being a way to provide conventional therapy with a few bells and whistles. After all, it should be asked why regular task-oriented therapy given in VR is likely to be any different from giving it in reality. We argue instead that the correct way to think about video-gaming and VR is as the human analog of the enriched environments provided in animal models of disease,77,78 namely, as a way to promote training that focuses on inducing reparative change in the nervous system, disease modification, and impairment reduction, not just improved task accomplishment through learning. That is to say, video-gaming and VR should be designed to be complementary to conventional therapy. The mechanisms of enrichment are not precisely known but are multiscale, working at social, psychological, behavioral, neurophysiological, and molecular levels. For instance, there has been compelling research to suggest that when gamification promotes enjoyment of rehabilitative exercises, it correlates with both improved upper extremity outcomes and greater rehabilitation intensity.79-81 Gamification in immersive esthetic environments can promote adherence to training, add variety, increase the gain on training through motivational effects, and allow for a focus on playful exploration of capacities outside of a task context.
A recent pilot study tried to apply this human enrichment plus non-task-based training approach to gaming. 54 Patients with arm paresis within 6 weeks of their stroke made 3D movements of the paretic arm by controlling the movement of a virtual dolphin. They had two 1-hour sessions per day, 5 days a week, for 3 weeks. A large screen displayed the dolphin in its oceanic environment, oceanic sounds and music were played, and the lights were dimmed for the entirety of each session. A licensed physical or occupational therapist was present throughout each session and provided verbal and tactile feedback to ensure high-quality movements (ie, normal non-synergistic movement patterns), exploration of the full workspace, and minimal use of compensatory strategies. The gaming intervention was as effective as conventional therapy when the latter was provided at the same very high intensity and dose. Both interventions were twice as good, as measured with the ARAT, as standard of care therapy. It should be noted that this was not the case with the FMA. 82 Thus it remains to be determined how much of this is true impairment reduction, but the magnitude of the effect suggests that a component of it is. These results are promising as they suggest a way to provide a gaming approach for interventions focused on movement quality outside of a task context.
Conclusions
Here we have conducted a brief survey of technology-based approaches, focusing on their potential to significantly reduce impairment in neurological conditions. Unfortunately, to date, the majority of studies of neurotechnology fail to clearly state what level of the ICF they are targeting or what neurobiological principles their design is predicated on. Instead, vague references are made to function, plasticity, and learning. This is not just an academic concern, because if we are going to achieve larger effect sizes then a finer-grained correspondence is required between neuroscience, technology design, and the clinical impairment being targeted. To date, the most promising approaches using neurotechnology are the following: (1) spinal cord stimulation in conjunction with intensive treadmill training, an approach for which, although studies still have small sample sizes, the effect sizes can be impressive; (2) tDCS for aphasia, mainly because it appears to be having a complementary effect to SLT, although admittedly the effect sizes remain small to medium; and (3) immersive gaming for upper extremity paresis, as it may be a more efficient way to achieve the large effect sizes that have recently been reported with higher doses and intensities of regular therapy. It is to be hoped that the lessons learned from these successes will lead to further novel interventions for neurological impairment. The common principle that seems to underlie the successes is to have the technology either promote or accompany high doses and intensities of training focused on lost capacities rather than on accomplishment of ADLs. This suggests that large reductions in impairment will not be achieved if neurotechnologies are just piggy-backed as adjuncts onto regular therapy.
Footnotes
Declaration of Conflicting Interests
The author(s) declared the following potential conflicts of interest with respect to the research, authorship, and/or publication of this article: JK is an inventor of and has equity in MindPod Dolphin (licensed to the company MindMaze), a technology used in one of the cited studies. DP declared no potential conflicts of interest with respect to the research, authorship, and/or publication of this article.
Funding
The author(s) received no financial support for the research, authorship, and/or publication of this article.
