Abstract

Abstracts Approved for Oral Presentation
An Examination of the Potential for Autonomic Nervous System Responses and Postural Sway to Serve as Indicators of Visual-Vestibular Mismatch
Doaa S. Al Sharif, PT, MS1, Emily A. Keshner, PT, EdD1, Carole Tucker, PhD1, Donna Coffman, PhD2, Pamela Roehm, MD, PhD3, Zachary Kane4
1 Department of Physical Therapy, Temple University, Philadelphia PA
2 Dept. of Epidemiology and Biostatistics, Temple Univ., Philadelphia PA
3 Dept. of Otolaryngology, Temple Univ. School of Medicine, Philadelphia PA
4 Dept. of Electrical and Computer Engineering, Temple Univ., Philadelphia PA
(1) Newman-Toker D, et al. (2015). Neurologic Clinics, 33(3), 577–599.
(2) Ventre-Dominey J, et al. (2014). Frontiers in Integrative Neuroscience, 8, 1–13.
(3) Mallinson AI, Longridge NS (2005). A new set of criteria for evaluating malingering in work-related vestibular injury. Otology & Neurotology, 26, 686–691.
An Internal Model of Gravity and Its Role in Action, Perception, and Spatial Orientation
Angelaki, Dora PhD
Center for Neural Science, New York University
Whether running to catch a ball or turning to reach for a cup of coffee, the ability to navigate in the world and interact with the environment depends critically on knowing our current motion and allocentric orientation in the world. Motion sensors in the vestibular inner ear play a particularly important role in this process. However, moving in a gravitational environment complicates estimation of these signals. As pointed out by Einstein over a century ago, all acceleration sensors, including the otolith organs, also respond to the force of gravity. Although illusions can occur when there are insufficient sensory cues available, under most circumstances the brain can accurately distinguish between tilting relative to gravity and translating through space, even in the absence of vision. We have identified a network of neurons in the macaque vestibulo-cerebellum that appears to perform the required computations by using multimodal sensory information from both sets of vestibular sensors to compute an internal model of gravity. Gravity signals have also been found in anterior thalamus neurons that encode 3D head orientation. These gravity signals are used to estimate visual orientation in the allocentric world, and bilateral labyrinthectomy causes deficits in both allocentric visual orientation perception and vertical arm movement planning and execution.
Links Between Vestibular Function, Aging, and Balance
Bermúdez Rey MC1, Karmali F1, Clark TK2, Beylergil SB3, Wang W1, Merfeld DM4
1 Harvard Medical School
2 University of Colorado – Boulder
3 Case Western Reserve University
4 The Ohio State University
Vestibular dysfunction has long been known to contribute to imbalance. This study was designed to quantify the links between vestibular function and balance in healthy asymptomatic individuals. We measured five self-motion thresholds (0.2 Hz roll tilt; 1 Hz roll tilt, yaw rotation, y-translation, and z-translation) using standard methods in a population of 105 humans aged 18 to 80. Ninety-nine subjects also participated in a standard Romberg balance test. Failing the 4th condition (eyes closed, on foam) of this exact test had previously been shown to correlate with a more than six times higher chance of having fallen in the past year [1]. We found a substantive and significant correlation between increasing age and increasing vestibular threshold [2]. We also found significant correlations between: (a) increasing age and imbalance, (b) increasing vestibular thresholds and imbalance, as well as (c) the combined effect of increasing vestibular thresholds and age on imbalance [3]. We also performed mediation analyses to quantify whether vestibular function might be a causative mediator of imbalance in normal asymptomatic humans and found that 46% of the decline in balance with age in adults above the age of 40 was mediated by vestibular function [4]. Vestibular function seems to explain a large fraction of age-related balance declines as assayed via a Romberg balance test. This is surprising, since balance declines are known to be multi-factorial, including declines in all physiologic contributors to balance (e.g., kinesthesia, vision, motor control, strength, vestibular function, etc.) as well as external environmental factors. This is important because identifying a predominant physiologic cause of imbalance provides an opportunity for a targeted intervention.
[1] Agrawal, Y., et al. (2009).
[2] Bermúdez Rey, M. C., et al. (2016).
[3] Karmali, F., et al. (2017).
[4] Beylergil, S. B., et al. (2019). Progress in Brain Research
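The mediation analysis described above can be illustrated with a minimal product-of-coefficients sketch on simulated data. All variables, effect sizes, and noise levels below are hypothetical and are not the study's data; the study itself used formal causal mediation methods.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 99

# Simulated illustration (values arbitrary, not the study's data):
age = rng.uniform(40, 80, n)
threshold = 0.5 + 0.02 * age + rng.normal(0, 0.2, n)      # threshold rises with age
imbalance = 0.01 * age + 1.0 * threshold + rng.normal(0, 0.3, n)

def ols(y, X):
    """Least-squares coefficients for y = X @ beta (X includes an intercept column)."""
    return np.linalg.lstsq(X, y, rcond=None)[0]

ones = np.ones(n)
# Path a: age -> mediator (vestibular threshold)
a = ols(threshold, np.column_stack([ones, age]))[1]
# Paths c' and b: age and mediator -> outcome (imbalance)
bc = ols(imbalance, np.column_stack([ones, age, threshold]))
c_prime, b = bc[1], bc[2]
# Total effect c: age -> outcome
c = ols(imbalance, np.column_stack([ones, age]))[1]

# Proportion of the age effect on imbalance carried through the mediator
prop_mediated = (a * b) / c
print(f"proportion of age effect mediated: {prop_mediated:.2f}")
```

For linear models the identity c = c' + a*b holds exactly, which makes the proportion-mediated interpretation straightforward; in practice confidence intervals (e.g., by bootstrapping) would accompany the point estimate.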
Quantifying Peripheral Vestibular and Balance Abnormalities in People with Chronic Dizziness and Imbalance Following Mild Traumatic Brain Injury
Kody R. Campbell, PhD1, Lucy Parrington, PhD1, Timothy E. Hullar, MD1, Fay B. Horak, PhD1, Laurie A. King, PhD1, Robert J Peterka, PhD2
1 Oregon Health and Science University, Portland OR
2 Veteran Affairs Portland Health Care System, Portland OR
[1] Theadom et al. (2016). Br J Gen Pract 66(642)
[2] Peterka et al. (2018). Front Neurol 9:1045
Differences in vestibular perceptual thresholds between roll, pitch, and yaw axes
Torin K. Clark, Ph.D.
University of Colorado-Boulder, Boulder, CO
Vestibular perceptual thresholds quantify the smallest self-motions that can reliably be perceived in the dark. Vestibular thresholds increase with age over about 40 years of age (Bermúdez Rey et al., 2016), are a measure of vestibular sensory noise (Nouri & Karmali, 2018), and roll tilt thresholds have been found to significantly mediate the relationship between age and balance (Karmali et al., 2017) and to be reduced in vestibular migraine patients (King et al., 2019). Given this clinical and operational relevance, it is important to understand differences in vestibular perceptual thresholds for the roll vs. pitch vs. yaw axes. Thirty years ago, Benson and colleagues (1989) quantified thresholds for rotation about an Earth-vertical axis, in yaw (subject seated upright, rotation about the z-axis), roll (supine, about the x-axis), and pitch (lateral recumbent, about the y-axis). They found yaw rotation thresholds were significantly lower (1.5 degrees per second) compared to roll and pitch, which did not differ (2.04 and 2.07 degrees per second, respectively). To validate this finding, we measured thresholds for roll, pitch, and yaw rotation about a head-centered axis (2 second motion duration) using standard, modern psychophysical techniques. While roll and pitch thresholds continued to not differ, surprisingly, we found that yaw rotation thresholds were significantly higher than for roll or pitch. As this outcome contradicts Benson’s findings, we explored potential explanations. First, we suspected that the subject configuration for roll and pitch produces inertial stimulation to the lower extremities, providing an additional cue, which may lower these thresholds. To test this, we retested roll and pitch thresholds with 1) the subject configured with legs bent and restrained to minimize the maximum radius for the head-centered rotation and 2) the roll/pitch rotation axis approximately 20 cm below the center of the head, to replicate Benson’s configuration.
In each case, this did not significantly change the roll or pitch thresholds, and pitch and roll remained lower than the yaw rotation thresholds. We conclude that, contrary to previous findings, humans are actually less sensitive to yaw rotation (i.e., higher thresholds) than to roll or pitch rotation. We speculate on potential functional implications of this finding.
(1) Bermúdez Rey et al (2016).
(2) Nouri & Karmali (2018).
(3) Karmali et al (2017).
(4) King et al (2019).
(5) Benson et al (1989).
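One common example of the "standard, modern psychophysical techniques" referred to above is an adaptive staircase for a direction-recognition task. The sketch below runs a 3-down/1-up staircase, which converges near the 79.4%-correct point, against a simulated observer; the observer's noise level, step sizes, and reversal count are all hypothetical choices, not the study's protocol.

```python
import numpy as np
from math import erf

rng = np.random.default_rng(1)

true_sigma = 1.0  # deg/s; hypothetical observer vestibular noise

def p_correct(peak_velocity):
    """Direction-recognition accuracy for a cumulative-Gaussian observer."""
    return 0.5 * (1 + erf(peak_velocity / (true_sigma * np.sqrt(2))))

# 3-down / 1-up staircase: step down after 3 correct, up after 1 error
level, correct_streak, reversals, last_dir = 4.0, 0, [], 0
while len(reversals) < 12:
    trial_correct = rng.random() < p_correct(level)
    if trial_correct:
        correct_streak += 1
        if correct_streak == 3:
            correct_streak = 0
            if last_dir == +1:          # direction change -> record a reversal
                reversals.append(level)
            level *= 0.8                # decrease stimulus after 3 correct
            last_dir = -1
    else:
        correct_streak = 0
        if last_dir == -1:
            reversals.append(level)
        level /= 0.8                    # increase stimulus after an error
        last_dir = +1

# Average late reversals as the threshold estimate
threshold_estimate = np.mean(reversals[2:])
print(f"estimated threshold ~ {threshold_estimate:.2f} deg/s")
```

In practice the staircase data are usually followed by a maximum-likelihood psychometric fit rather than a simple reversal average.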
Effect of limiting visual field on common causation perception during visual-inertial heading estimation
Benjamin T. Crane, MD, PhD, Raul Rodriquez
Departments of Otolaryngology, Bioengineering, and Neuroscience, University of Rochester
Visual and inertial cues are the sensory modalities for heading determination. The visual cue is ambiguous, as it can represent either self-motion through a fixed environment or environmental motion. When there are differences in visual and inertial direction, it is only appropriate to integrate them when they are both due to motion through a fixed environment, a situation known as common causation. A difference in heading direction is one factor that makes common causation less likely to be perceived, although surprisingly large differences can be perceived as similar. This project tests the hypothesis that visual field size is a significant factor in determining common causation. Previous experiments used 102° of the horizontal visual field and 70° of the vertical visual field. The current experiments examine the potential for visual field size to influence common causation by limiting the visual field to 38° in both directions, thus effectively cutting the screen down to 11% of the original size and the visual field to 16% of the original size.
Both inertial and visual stimuli consisted of 2 s of synchronized motion. The visual stimulus consisted of a 70% coherence star field. Trial blocks included 12 possible visual and inertial headings, which covered the full 360° range in the horizontal plane in 30° increments. Every heading combination was presented in random order, with 144 stimuli in each block. During each block a mechanical dial was used to report the perceived direction of the visual (Vp) or inertial (Ip) heading, and buttons were used to report whether the headings were the same or different. Six trial blocks were performed by each subject: in 3 blocks the inertial heading was reported, and in the other 3 the visual heading was reported. In all 6 blocks subjects reported whether the headings were the same or different.
Greatly diminishing the visual field size and removing peripheral vision had a surprisingly small effect on visual direction determination or common causation perception. The lateral component of non-cardinal visual headings (e.g., 30°, 60°) was overestimated by about 20°. Perception of common causation was also very similar to a full visual field, with common causation highest when stimuli were aligned in cardinal directions and very low when stimuli were separated by 90° or more. When offset, visual headings continued to have a large influence on inertial heading perception: 10° with a 30° offset, 8° with 60-90° offsets, and 3° with a 120-150° offset. These were smaller than the offsets seen with the full visual field (13° with a 30° offset, and 13-19° with 60-120° offsets). The inertial stimulus influence on the visual stimulus was small, 1-2° in both conditions.
Examining the relationship between visual-vestibular deficits and mobility in adults with persistent symptoms after a mild traumatic brain injury
Linda D’Silva, PT, PhD, Sakher Obaidat, PT, Prabhakar Chalise, PhD, Michael Rippee, MD
Department of Physical Therapy and Rehabilitation Science, Biostatistics, and Neurology
University of Kansas Medical Center, Kansas City, KS
Does Age Matter? A Fifteen-Year Review of a Vestibular Rehabilitation Program
Elizabeth Dannebaum, Physiotherapist1, Madia Rehwald2, Samir Sangani3, Joyce Fung4
1 Jewish Rehabilitation Hospital
2 Department of Biology, Concordia University, Montreal
3 Feil and Oberfeld Research Center, Jewish Rehabilitation Hospital, Research site of CRIR, Laval, QC, Canada
4 School of Physical & Occupational Therapy, McGill University, Montreal, QC, Canada
A Wearable System Which Reduces Motion Sickness and Improves Recovery of Balance
Didier A. Depireux1,2, Cooper Pearson1, Emma Boguski1, Zachary Williams1, Caitlyn Pratt1, Samuel Owen1
1 OtolithLabs, Washington, DC 20009; Didier/Cooper/Emma/Zack/Caitlyn@otolithlabs.com
2 Department of Otorhinolaryngology/Head and Neck Surgery, University of Maryland School of Medicine, Baltimore, MD 21201;
Here we will present the results of three placebo-controlled studies, independently conducted by three automotive companies, which quantified the safety of our nVSM and its effectiveness at preventing motion sickness in participants who are prone to motion sickness (as determined by their score on the Motion Sickness Susceptibility Questionnaire).
We will present results showing that our nVSM was uniformly found to significantly mitigate motion sickness and/or increase the time to discomfort and nausea. The nVSM was not found to significantly influence performance on visual and cognitive tasks (reading, number search, video game and others).
I’m so Dizzy, my head is Spinning… Dizziness After Concussion
Giza CC, Snyder A, Pearson R, Patel M, Baham M, Sheridan C, Choe MC.
UCLA
Dizziness is one of the most common symptoms after mild traumatic brain injury (mTBI) and concussion. Because dizziness has many potential biological mechanisms, it warrants a thoughtful diagnostic and treatment approach. Acute symptoms of concussion commonly include headache, nonspecific dizziness, nausea, and vomiting, and these largely subside over time if the patient is protected from additional injury. However, dizziness that persists or worsens merits additional workup. Descriptions of the nature and acuity of the dizziness may help in the differential diagnosis. Toward that end, characterizing the primary symptom of dizziness (vertigo, lightheadedness, unsteadiness) and considering associated post-concussive symptoms (headache, incoordination, photo/phonophobia, nausea/vomiting) can be useful. Labyrinthine causes of dizziness after concussion include benign paroxysmal positional vertigo, labyrinthine concussion and other less common etiologies. Although often associated with temporal bone fracture, direct trauma to the labyrinth is uncommon after mTBI. Direct central nervous system damage and axonal injury to the vestibular and cerebellar pathways can occur in more severe TBI, but evidence for macrostructural damage to these brain regions is lacking in most cases of mTBI and concussion. Dizziness, vertigo, nausea and vomiting may occur in conjunction with post-traumatic migraine, while lightheadedness, non-vertiginous unsteadiness and exercise intolerance frequently occur subacutely and chronically (as part of deconditioning in athletes). Autonomic instability is not uncommon in patients with persistent post-concussive symptoms (PPCS), and positional dizziness associated with postural orthostatic tachycardia (POT) or orthostatic intolerance has been described. Interactions between POT, exercise intolerance and anxiety add further complexity to the evaluation and treatment of patients with PPCS.
Each of these dizziness phenotypes may suggest treatment interventions directed towards the underlying neurobiology. A thorough and organized approach to persistent post-concussive dizziness is necessary to identify the underlying diagnosis and inform an optimal treatment plan.
Visual-vestibular conflict detection is modulated by motor signals
Savannah Halow, BS1, Paul MacNeilage, Ph.D1, James Liu, MS2, Eelke Folmer, Ph.D2.
1 Department of Psychology, University of Nevada, Reno NV
2 Department of Computer Science and Engineering, University of Nevada, Reno NV
Head movement relative to the stationary environment gives rise to congruent vestibular and visual optic flow signals. The resulting percept of a stationary visual environment depends on mechanisms that compare visual and vestibular signals to evaluate their congruence. Here we investigate the efficiency of these mechanisms and how it depends on fixation behavior as well as on the active versus passive nature of the head movement. Sensitivity to conflict was measured by modifying the gain on visual motion relative to head movement on individual trials and asking subjects to report whether the gain was too low or too high. Low and high gains result in percepts of the environment moving with or against head movement, respectively. Fitting a psychometric function to the resulting data yields two key parameters to characterize performance: the standard deviation (SD) and mean of the cumulative Gaussian fit. The mean indicates the single visual gain value that is perceived to match head movement. The SD indicates the range of gains that are compatible with perception of a stationary visual environment, referred to by Wallach as the Range of Immobility (Wallach, 1985). Experiments were conducted using a head-mounted display capable of rendering visual scene motion contingent on head motion, with fixation behavior monitored by an embedded eye tracker. The experimental design included combinations of active or passive head movement together with head-fixed or scene-fixed fixation. During active conditions, subjects rotated their heads in yaw ~15 degs over ~1 sec. Each subject’s movements were recorded and played back via rotating chair during the passive condition. During head-fixed and scene-fixed fixation the target moved with the head or scene, respectively. Sensitivity (quantified by SD) was better during active than passive head movement, likely due to increased precision on the head movement estimate arising from motor prediction and neck proprioception.
Sensitivity was also better during scene-fixed than head-fixed fixation, perhaps due to decreased velocity of retinal image motion and increased precision on the estimate of retinal image motion under these conditions. The gain perceived as matching (quantified by the mean) also depended on motor signals. Gains were closer to unity during scene-fixed fixation and during active head movement, and decreased in the other conditions. These findings quantify how visual-vestibular conflict detection is modulated by eye and neck motor signals.
(1) Wallach (1985). Perceiving a stable environment. Scientific American
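The cumulative Gaussian fit described above can be sketched as a maximum-likelihood fit to simulated too-low/too-high judgments. The observer's matching gain, SD, trial counts, and tested gain values below are hypothetical, not the study's data.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import norm

rng = np.random.default_rng(2)

# Hypothetical observer: probability of judging a gain "too high" follows
# a cumulative Gaussian in visual gain.
true_mean, true_sd = 0.95, 0.15      # matching gain; SD ~ Range of Immobility width
gains = np.tile(np.linspace(0.5, 1.4, 10), 20)          # 200 simulated trials
resp_high = rng.random(gains.size) < norm.cdf(gains, true_mean, true_sd)

def neg_log_lik(params):
    """Negative log-likelihood of the binary responses under a cumulative Gaussian."""
    mu, sd = params
    p = norm.cdf(gains, mu, np.abs(sd)).clip(1e-6, 1 - 1e-6)
    return -np.sum(np.where(resp_high, np.log(p), np.log(1 - p)))

fit = minimize(neg_log_lik, x0=[1.0, 0.2], method="Nelder-Mead")
mu_hat, sd_hat = fit.x[0], abs(fit.x[1])
print(f"matching gain ~ {mu_hat:.2f}, SD ~ {sd_hat:.2f}")
```

The fitted mean recovers the gain perceived as matching head movement, and the fitted SD quantifies conflict sensitivity, so a narrower SD corresponds to better conflict detection.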
Sensory Contribution to Spatial Orientation in Patients with Vestibular Migraine
Amir Kheradmand, MD, Shirin Sadeghpour, MD, Jing Tian, PhD, Jorge Otero-Millan, PhD, Tzu-Chou Huang, MD.
VOR Lab, Johns Hopkins, Baltimore, MD
Living Water Neurological Clinic, Tainan, Taiwan
Vestibular migraine (VM) is among the leading causes of dizziness in the general population. The VM pathophysiology is unknown, a major gap being the lack of understanding of the neural mechanisms underlying dizziness and spatial disorientation in these patients (Huang et al, 2020). VM patients usually do not have signs of peripheral vestibular dysfunction, and their daily symptoms are triggered by changes in the head position or the visual surroundings, which indicate dysfunction of spatial perception in these patients. We have studied spatial orientation in the novel context of a Bayesian spatial model (BSM), which is built on the neurophysiology of multisensory processing and integration for spatial orientation. Within this framework, sensory components that encode head and eye positions are taken into account for perceived spatial orientation, measured by the subjective visual vertical (SVV). We have applied this framework to investigate distinct mechanisms related to spatial disorientation in VM patients in comparison with healthy controls. In the upright head position, SVV accuracy was within the normal range for VM patients and healthy controls (within two degrees of earth vertical). During a static head tilt of 20°, VM patients showed larger SVV error in the direction opposite the head tilt (Winnick et al, 2018). These findings, interpreted within the BSM framework, suggest that in the process of sensory integration for spatial orientation, VM patients, compared with controls, have a larger neural estimate of head position, resulting in larger errors of spatial orientation.
(1) Huang et al (2020). Cephalalgia 40(1): 107-121.
(2) Winnick et al (2018). Frontiers in Neurology 9(892).
Effects of perceived self-motion on cognitive task performance
Kio, Onoise Gerald Ph.D.
York University
Vection is the visually evoked illusion of self-motion in a stationary observer. Compelling vection can be produced in spite of visual-vestibular sensory conflict, but it is possible that this sensory conflict impacts other perceptual and cognitive tasks. Previous literature has shown that the intensity of self-motion perceived by observers is lower when they perform attentionally demanding cognitive tasks than in the absence of attentional demands. We are starting new experiments to explore these questions. In this study, therefore, we investigate how well observers perform cognitive tasks while experiencing various levels of visual self-motion. We measure and compare observers’ accuracy and completion time on tasks requiring logical reasoning and auditory processing while they remain stationary and while they experience different rates of movement through a virtual environment rendered in a head-mounted display. We hypothesize that the perceived sense of motion might induce a sense of urgency, producing quicker but perhaps less accurate responses during self-motion than otherwise. The analysis will be designed to separate the relative importance of cognitive ability and of divided attention due to vection on the observers’ accuracy on these cognitive tasks.
Visual-vestibular sensory integration during congruent and incongruent self-rotation percepts
Ramy Kirollos, Ph.D., Chris M. Herdman, Ph.D.
Visualization and Simulation Center, Carleton University, Ottawa, ON
The value of motion bases in vehicle simulators continues to be a critical topic of debate in academia, industry, and the military. The objective of the current research program was to better understand visual-vestibular sensory integration and to determine if one system (visual or vestibular) is relied upon more for deciding perceived self-motion direction. The present study combines the use of a virtual reality (VR) headset with caloric irrigation of the vestibular system’s horizontal semicircular canals to induce illusory self-rotation percepts. In Experiment set 1, we validated a method to measure circular vection speed using a knob that can be rotated clockwise or counter-clockwise while viewing an optokinetic drum presented in a VR headset. Findings revealed that faster drum speeds induced faster knob speeds (p < .001, R2 = .70). In Experiment set 2, caloric vestibular stimulation of the horizontal semicircular canal was used to induce illusory self-rotation percepts while participants used the knob to index perceived self-rotation speeds and durations. Participants performed this experiment with their eyes closed and while a visual stimulus signaled no self-motion (i.e., eyes open while looking at a stationary display). Results indicated slower (p < .001, R2 = .56) and shorter (p < .001, R2 = .79) self-rotation perception when a stationary visual stimulus was present than when participants had their eyes closed. These results indicated that neither the visual nor the vestibular system dominates the other during sensory conflict. In Experiment set 3, self-rotation was signaled in the same direction by the VR headset and by calorics in a congruent condition. In an incongruent condition, self-rotation signaled by the VR headset and by calorics induced self-rotation in opposite directions at estimated perceptually equivalent speeds.
Findings indicated that during the incongruent condition, participants reported self-rotation consistent with the visual and with the vestibular stimulus in equal proportions of trials. Findings from this research program can inform the design of high-fidelity simulators, as they indicate that perceived self-motion direction is critically tied to cue reliability.
Pathophysiology of Vestibular Migraine
Richard Lewis, MD
Harvard
Vestibular migraine (VM) is the most common cause of spontaneous vertigo but remains poorly understood. We investigated the hypothesis that central vestibular pathways are sensitized in VM by measuring self-motion perceptual thresholds in patients and control subjects and by characterizing the vestibulo-ocular reflex (VOR) and vestibular and headache symptom severity. VM patients were abnormally sensitive to roll tilt, which co-modulates semicircular canal and otolith organ activity, but not to motions that activate the canals or otolith organs in isolation, implying sensitization of canal-otolith integration. When tilt thresholds were considered together with vestibular symptom severity or VOR dynamics, VM patients segregated into two clusters. Thresholds in one cluster correlated positively with symptoms and with the VOR time constant; thresholds in the second cluster were uniformly low and independent of symptoms and the time constant. The VM threshold abnormality showed a frequency-dependence that paralleled the brainstem velocity storage mechanism. These results support a pathogenic model where vestibular symptoms emanate from the vestibular nuclei, which are sensitized by migraine-related brainstem regions and simultaneously suppressed by inhibitory feedback from the cerebellar nodulus and uvula, the site of canal-otolith integration. This conceptual framework elucidates VM pathophysiology and could potentially facilitate its diagnosis and treatment.
How Real and Perceived Tilt Affect Visual Self-Motion Processing
Meaghan McManus, Dr. Laurence R. Harris
Centre for Vision Research, York University
The visual environment plays an important role in perceived orientation. Regardless of the actual body posture of a person, when immersed in an upright (relative to them) visual scene, viewers who are tilted can experience a visual reorientation illusion (VRI) in which they actually feel upright (Howard & Hu, 2001). When people report a VRI, visually induced self-motion (vection) is enhanced (McManus & Harris, 2019). This might suggest that participants who report a VRI (1) are ignoring the gravity vector, resulting in a higher visual weighting, or (2) have greater sensitivity to visual-vestibular conflict, compared to those who do not report VRIs. Both of these could lead to enhanced vection.
Here we investigated the connection between VRIs and sensory weighting using virtual reality. Vection experience was measured by having participants complete a visual self-motion task in which they visually moved to previously seen target locations while standing, supine, and prone. Shorter travel distances indicated a stronger vection experience. Participants’ sensitivity to VRIs was measured over 1 minute during which they continuously pressed a button if they perceived themselves as upright while lying supine with an upright display (VRI). They were divided into VRI and non-VRI groups. The perceptual upright (PU) was then measured while sitting or lying on their side to obtain the weightings of vision, body, and gravity. Participants reported whether an ambiguous symbol in various orientations appeared as a “p” or “d” as the visual background orientation was varied. The PU was defined as midway between the orientations of maximum ambiguity, and the weighting of each cue was determined.
The VRI group had shorter travel distances compared to the no-VRI group (mean difference= 5.85%, SE= 0.83%, p=0.024). The weightings of vision or body did not differ between the VRI and non-VRI groups, however the VRI group had a significantly higher weighting of gravity (mean difference= 10.67%, SE= 4.23%, p=0.03).
It appears that despite their reported orientation being more influenced by visual cues and their enhanced vection, VRI-sensitive people’s perceptual upright is actually more influenced by gravity. This finding is counter to the conclusion of Howard and Hu (2001), who supposed that during a VRI participants must be ignoring the gravity vector, and is perhaps indicative of greater sensitivity to conflict.
(1) Howard, I. P., & Hu, G. (2001). Visually induced reorientation illusions. Perception, 30(5), 583–600.
(2) McManus, M., & Harris, L. R. (2019). When Gravity Is Not Where It Should Be: Effects On Perceived Self-Motion. Journal of Vision, 19(10), 237.
Gravity Affects Vestibular Adaptation to Magnetic Vestibular Stimulation
Jacob M Pogson, Dale C Roberts, Jorge Otero-Millan, David S Zee, Bryan K Ward
Neurology and Otolaryngology, School of Medicine, The Johns Hopkins University
Acute symptoms and signs of unilateral vestibular loss (UVL) include vertigo, tilting sensations, and spontaneous nystagmus. These signs resolve over time as vestibular adaptation restores balance between the vestibular nuclei, although peripheral vestibular function may not actually recover. An unresolved question is what influences the time-course of vestibular adaptation.
Long-duration magnetic vestibular stimulation (MVS) allows the time-course of vestibular adaptation to be studied by artificially inducing a sustained vestibular asymmetry that mimics UVL. MVS is thought to generate a constant fluid force on both lateral semicircular canal cupulae, equivalent to a constant acceleration that activates the vestibulo-ocular reflex (VOR) pathway. Vestibular adaptation can be measured through changes in the velocity of the primary VOR nystagmus and the presence of a secondary response. Previous studies reported an effect of head position on the response to MVS. We sought to further study this effect by changing static head position in pitch, roll, and yaw, thereby studying constant linear and rotational accelerations. Five normal subjects were recruited to maintain their head in one of four orientations about the y-axis (long axis of the MRI): supine, prone, left ear down, or right ear down positions, while in or out of a 7 Tesla magnetic field. During each trial three-dimensional binocular video-oculography was recorded at 100 Hz before (for two minutes), during (five minutes), and after (four minutes) entering the magnet. Head position was monitored using the position of the VOG goggles, with a 3D accelerometer and 3D magnetometer. Control trials were also performed away from the magnetic field (n=2). In addition, a three-dimensional linear control system model was tested with Matlab Simulink.
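The adaptation time-course that MVS makes observable, a primary nystagmus that decays during sustained stimulation and a reversed secondary response on exit, can be caricatured with a first-order adaptation operator driven by a step stimulus. This is only a sketch: the time constant, stimulus amplitude, and timing below are hypothetical, and the study's actual model was a three-dimensional linear control system in Matlab Simulink.

```python
import numpy as np

dt = 0.1                           # s, Euler step
Ta = 80.0                          # s; hypothetical adaptation time constant
t = np.arange(0.0, 660.0, dt)      # 2 min baseline, 5 min in field, 4 min after

# Step input: constant cupular stimulation while inside the bore (arbitrary units)
stim = np.where((t >= 120.0) & (t < 420.0), 10.0, 0.0)

slow_phase = np.zeros_like(t)
adapt = 0.0
for i, s in enumerate(stim):
    r = s - adapt                  # adapted drive to slow-phase eye velocity
    adapt += (r / Ta) * dt         # adaptation slowly nulls a sustained response
    slow_phase[i] = r

primary = slow_phase[int(round(120.0 / dt)) + 1]    # just after entering the field
secondary = slow_phase[int(round(420.0 / dt)) + 1]  # just after leaving the field
print(f"primary ~ {primary:.1f}, secondary ~ {secondary:.1f} (reversed sign)")
```

The same operator predicts the key qualitative features: nystagmus velocity decays toward zero during the five minutes in the field, and a secondary response of opposite sign appears on exit as the adapted state discharges.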
Effects of simulated brownout on task performance and postural sway
Anil K Raj, MD, Margaret Freyaldenhoven, BS, Maria Boolos, J. Blake Bullwinkel
Florida Institute for Human and Machine Cognition
Biophysical Models of Ion Transport in the Vestibular System
Robert M. Raphael1, Aravind Chenrayan Govindaraju1, Imran Quraishi2
1 Rice University
2 Yale University
The maintenance of a high potassium concentration (~150 mM) in the endolymphatic fluid in the inner ear is essential for hearing and balance. During sensory transduction, hair cell mechanotransduction channels are continually draining potassium ions from the endolymph. The resupply of potassium is an energy-intensive process carried out by specialized epithelial cells - marginal cells in the cochlea and vestibular dark cells in the vestibule. These cells have extensive basolateral infoldings rich in mitochondria and a high density of the Na+-K+-ATPase pump. The biophysics of vestibular dark cell ion transport is not fully understood. To advance this research, we extended a previously developed integrated mathematical model of ion transport across the marginal/dark cells (Quraishi et al., Am. J. Physiol. 2007) by implementing a 15-state Post-Albers model of the Na+-K+-ATPase that includes explicit affinities for Na+ and K+ on both sides of the membrane and voltage-dependent dissociation constants. The model contains mathematical expressions for known ion transporters at the basal and apical faces of the dark cell. This extended model allows us to simulate the effects of energetic depletion by studying how potassium transport across the epithelium depends on ATP concentration. The results indicate that the current carried by the Na+-K+-ATPase, the K+ carried by the Na+-K+-Cl– cotransporter (NKCC1), and the net K+ current across the epithelium (iKte) all begin to decline when the ATP concentration on the basolateral side falls. Of particular physiological significance is that the model predicts that iKte reverses direction, meaning that potassium will be transported out of the endolymph. The influences of extracellular K+ and Cl– on the transepithelial K+ current can also be simulated, advancing our understanding of the function and dysfunction of ion transport in the inner ear.
The transepithelial model can be linked to existing models of hair cell mechanotransduction, providing a multiscale model of ion homeostasis in the vestibular endolymph. The model is able to make quantitative predictions of how alterations in the conductance of specific channels and transporters, such as those resulting from genetic mutations or drug exposure, affect ion transport and sensory transduction. These predictions may provide important clues to mechanisms of hidden vestibular loss and suggest strategies for pharmacological intervention in vestibular disorders.
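The predicted decline and reversal of iKte with falling ATP can be illustrated with a deliberately minimal sketch. This is not the 15-state Post-Albers model described above: pump kinetics are reduced to a Michaelis-Menten dependence on ATP, the back-leak is a constant, and all values are hypothetical.

```python
import numpy as np

# Toy illustration only -- NOT the 15-state Post-Albers model: the pump's
# K+ secretion saturates with basolateral [ATP] (Michaelis-Menten), while a
# constant leak carries K+ back out of the endolymph.  All constants are
# hypothetical.
K_M_ATP = 0.4     # mM, assumed half-activation of the Na+-K+-ATPase
I_PUMP_MAX = 1.0  # normalized maximal pump-driven K+ secretion
I_LEAK = 0.25     # normalized constant K+ back-leak

def i_kte(atp_mm):
    """Net transepithelial K+ current (secretion positive)."""
    i_pump = I_PUMP_MAX * atp_mm / (K_M_ATP + atp_mm)
    return i_pump - I_LEAK

atp = np.linspace(0.01, 3.0, 300)
ikte = i_kte(atp)
# At normal ATP the epithelium secretes K+; as ATP falls, iKte declines and
# eventually reverses sign (K+ is drained from the endolymph).
reversal_atp = float(atp[np.argmin(np.abs(ikte))])
```

With these toy constants the current reverses near [ATP] ≈ 0.13 mM; in the full model the reversal point emerges from the coupled transporter kinetics rather than from a single fitted constant.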
Modeling the interaction among three complicated cerebellar disorders of eye movements: Periodic Alternating, Gaze-evoked and Rebound Nystagmus
Ari A Shemesh1, Koray Kocoglu2, G. Michael Halmagyi3, Gülden Akdal2, David S Zee1,4 and Jorge Otero-Millan1
1 Department of Neurology, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
2 Department of Neurology, Dokuz Eylül University, Izmir, Turkey.
3 Department of Neurology, Royal Prince Alfred Hospital and University of Sydney, Sydney, Australia
4 Departments of Ophthalmology, Otolaryngology-Head and Neck Surgery and Neuroscience, The Johns Hopkins University School of Medicine, Baltimore, Maryland, USA
Gaze-evoked nystagmus (GEN) is the signature deficit when a positional (step) command is not generated properly to counteract the orbital elasticity that tends to restore the eye to central position. This ocular motor disorder occurs when the neural integrator becomes leaky, meaning it cannot hold a constant output (a position signal, and hence a constant gaze deviation) in the absence of new information. A normal adaptation mechanism that acts to counteract GEN and produce more stable eccentric gaze leads to rebound nystagmus (RN), with slow phases directed toward the previously held eccentric gaze position.
A previous computational model of periodic alternating nystagmus (PAN) included velocity storage through positive feedback and central adaptation through negative feedback, producing a second-order dynamic vestibular system that is driven to oscillation by varying the time constant of the velocity storage (Leigh et al., 1981). Our patient demonstrated periodicity of both the vestibular system (PAN) and the gaze-holding system (GEN and RN).
Could the unique periodicity of GEN and RN be due to an additional oscillator in the gaze-holding system, or instead to the interplay between an oscillatory vestibular system and a non-oscillatory gaze-holding system?
Based on, and to challenge, our current understanding of how each nystagmus arises in isolation, we developed a mathematical model to address the potential interactions among PAN, GEN, and RN. Our emphasis was on the integration circuits important for normal function of the vestibulo-ocular reflex and gaze holding, and on the interaction of these circuits with adaptive mechanisms.
(1) Leigh RJ, Robinson DA, Zee DS. A hypothetical explanation for periodic alternating nystagmus: instability in the optokinetic-vestibular system. Ann N Y Acad Sci. 1981;374:619-35. PubMed PMID: 6978650.
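The instability underlying this account can be illustrated with a toy simulation in the spirit of the Leigh, Robinson and Zee (1981) model: excess positive feedback in velocity storage, bounded by a saturating nonlinearity and opposed by a slow adaptation loop, yields sustained alternation of slow-phase eye velocity. All parameters below are illustrative, not fitted to the patient.

```python
import numpy as np

# Toy PAN-style oscillator (illustrative parameters, not patient data).
# Linearized about zero, the positive-feedback gain G > 1 makes the origin
# unstable; the tanh saturation bounds the growth and the slow adaptation
# loop converts the instability into sustained oscillation.
G = 1.2        # velocity-storage positive-feedback gain (>1 -> unstable)
T_VS = 20.0    # s, velocity-storage time constant
T_AD = 1200.0  # s, adaptation time constant (sets the alternation period)
DT = 0.1       # s, Euler integration step

def simulate(duration=1000.0, v0=1.0):
    v, a = v0, 0.0                             # slow-phase velocity, adaptation
    trace = []
    for _ in range(int(duration / DT)):
        dv = (G * np.tanh(v) - v) / T_VS - a   # saturating positive feedback
        da = v / T_AD                          # slow negative-feedback loop
        v += DT * dv
        a += DT * da
        trace.append(v)
    return np.array(trace)

v_trace = simulate()
# The slow-phase velocity should repeatedly reverse direction (alternation).
n_reversals = int(np.sum(np.diff(np.sign(v_trace)) != 0))
```

Adding a leaky gaze-holding integrator with its own adaptation loop to such a skeleton is one way to pose the question above: whether GEN/RN periodicity requires a second oscillator or can be inherited from the vestibular one.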
Predicting Individual Differences and Identifying Suboptimal Strategies in a Dynamic Stabilization Task with Degraded Gravitational Cues
Vivekanand Pandey Vimal1,3, James R Lackner1,2,3, Paul DiZio1,2,3
1 Ashton Graybiel Spatial Orientation Laboratory (somde@brandeis.edu)
2 Psychology Department
3 Volen Center for Complex Systems, Brandeis University.
Our prior work shows that when subjects are deprived of gravitationally dependent vestibular and somatosensory cues, such as in low-g, 0-g, and spaceflight analog environments, they easily become spatially disoriented and show poor learning and performance in a stabilization task [1-3]. In these experiments we secured subjects into a Multi-Axis Rotation System (MARS) device that was programmed to behave like an inverted pendulum, and participants were instructed to use an attached joystick to stabilize around the balance point. We created the spaceflight analog condition by having subjects dynamically balance in the Horizontal Roll Plane, where they did not tilt relative to the gravitational vertical and therefore could not use gravitational cues to determine their position and had to rely only on motion cues. Ninety percent of subjects in our spaceflight analog condition reported spatial disorientation, and all subjects showed it in their data. Compared to the control condition (Vertical Roll Plane), all subjects showed significant deficits in performance and learning. Nevertheless, there was a wide range of individual differences. Could we predict learning and performance in the spaceflight analog condition early on? We used a Bayesian Gaussian mixture method to cluster subjects into three statistically distinct groups representing Proficient, Somewhat-Proficient, and Not-Proficient performance. We then used a Gaussian naive Bayes method to create predictive classifiers that allowed us to predict a subject's final group with 80% accuracy, as early as the second block of experimentation (out of 10).
We also found that subjects in the Not-Proficient group were not simply poor performers but rather exhibited a suboptimal strategy of using very stereotyped, large-magnitude joystick deflections that reduced the number of times they hit the crash boundaries at the cost of wild movements. Could training help subjects avoid this suboptimal strategy? We found that providing subjects with verbal instructions on optimal joystick use was ineffective. Instead, we developed a training program that reinforced optimal joystick use while also teaching subjects how to dynamically stabilize without aligning to the gravitational vertical. This training program allowed every subject to learn and improve their performance [4].
[1] Vimal, V. P., Lackner, J. R., & DiZio, P. (2016). Experimental Brain Research.
[2] Vimal, V. P., DiZio, P., & Lackner, J. R. (2017). Experimental Brain Research.
[3] Vimal, V. P., Lackner, J. R., & DiZio, P. (2018). Experimental Brain Research.
[4] Vimal, V. P., DiZio, P., & Lackner, J. R. (2019). Experimental Brain Research, 1-13.
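The two-stage analysis described above (clustering, then early prediction of the final group) can be sketched with synthetic data standing in for MARS performance scores. The feature, group means, and sample sizes below are invented for illustration, and the classifier is a hand-rolled Gaussian naive Bayes rather than the authors' exact pipeline.

```python
import numpy as np

# Synthetic stand-in data: one early-block feature (e.g. mean angular
# deviation, arbitrary units) per subject, drawn from three hypothetical
# proficiency groups.  All numbers are made up for illustration.
rng = np.random.default_rng(0)
groups = {0: (5.0, 1.0), 1: (10.0, 1.5), 2: (18.0, 2.0)}  # (mean, sd)
X = np.concatenate([rng.normal(m, s, 40) for m, s in groups.values()])
y = np.repeat([0, 1, 2], 40)

# "Train" a Gaussian naive Bayes: per-class mean, sd, and prior.
params = {c: (X[y == c].mean(), X[y == c].std(), float(np.mean(y == c)))
          for c in groups}

def predict(x):
    """Assign the class with the highest Gaussian log-likelihood + log-prior."""
    def log_post(c):
        m, s, p = params[c]
        return -0.5 * ((x - m) / s) ** 2 - np.log(s) + np.log(p)
    return max(params, key=log_post)

accuracy = float(np.mean([predict(x) == c for x, c in zip(X, y)]))
```

In the study the class labels themselves came from a Bayesian Gaussian mixture fit to final performance; here they are given, since the point is only the shape of the early-prediction step.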
Vestibular Diagnosis: Modern Technology vs. Clinical Judgement
David Zee, MD
Johns Hopkins
Modern technology is having a major impact on the diagnosis and management of the vestibular patient. Video-oculography (VOG), vestibular-evoked myogenic potentials, and improved imaging of the brain and ear are benefiting patients daily, in private offices, hospital clinics, and emergency departments. But there is still much need for caution in relying on results generated by computer algorithms, for being vigilant for artifacts, and for not forgetting the rules for understanding vestibular pathophysiology laid out by the 19th-century masters, including Alexander, Bárány, Bechterev, Ewald, Flourens, and Hogyes. And there is still room for innovation and for learning something new at the bedside, from single patients. Here I will take you through examples emphasizing that one must not forget the fundamentals of physiology and anatomy needed to arrive at the correct diagnosis, fundamentals that can also bring new understanding to perplexing vestibular disorders.
Abstracts Approved for Poster Presentation
Cognitive Impairment in Patients with a Clinical Vestibular Diagnosis
Corey Ambrose, Yewubnesh Hailu, Jennifer Stone, Clifford Hume, James Phillips
University of Washington, Department of Otolaryngology - Head and Neck Surgery
[1] Anson E, Jeka J. Perspectives on Aging Vestibular Function. Front Neurol. 2016;6:269. Published 2016 Jan 6. doi:10.3389/fneur.2015.00269
[2] Harun A, Oh ES, Bigelow RT, Studenski S, Agrawal Y. Vestibular Impairment in Dementia. Otol Neurotol. 2016;37(8):1137-1142. doi:10.1097/MAO.0000000000001157
Perceptual timing of passive rotational and auditory stimuli in virtual reality
William Chung, MSc., Michael Barnett-Cowan, Ph.D.
University of Waterloo, Waterloo, ON
Temporal integration of vestibular events with other sensory information is necessary for navigation and for maintaining perceptual stability. Past research has shown that, compared to other senses, the perceived onset of vestibular cues to self-motion is delayed. However, these results were obtained with eyes closed, omitting visual information that can provide important self-motion cues. Previously we found that the perceived onset of active head movement paired with sound does not change when visual cues to self-motion are available (Chung & Barnett-Cowan, 2017). Here we extend this work by investigating whether the perceived timing of passive self-motion paired with sound changes when viewing a visually rich virtual scene. A temporal order judgement task between passive whole-body rotation and an auditory tone at various stimulus onset asynchronies (-600 to 600 ms) was completed by 25 participants. The rotational stimuli were presented on a Moog 6DOF motion platform following a raised-cosine trajectory with a peak velocity of 20 deg/s at both 1 Hz and 0.5 Hz rotational frequency. A naturalistic virtual forest environment was created in Unreal Engine (version 4.16) and presented using the Oculus Rift CV1 head-mounted display (HMD). As a secondary goal of the study, the rotational gain of the visual scene relative to the rotation of the HMD was manipulated (0.5, 1, 2, -1) to examine whether the velocity or direction of the visual motion would have any effect on the perceived timing of the rotation. We replicate previous reports that vestibular stimuli must occur before an auditory stimulus in order to be perceived as occurring simultaneously, with a greater delay found when passively rotated at 0.5 Hz compared to 1 Hz (p<0.001) (Chang, Uchanski & Hullar, 2012).
There was a tendency for the delay to move closer to true simultaneity when vision was present and congruent with self-motion (visual gain of 1) and to increase when the visual gain was incongruent (2, -1, and 0.5) with the motion; however, this trend was not statistically significant. While these findings suggest that the presence of visual cues may have a modulating effect on the perceived timing of passive rotation, visual feedback does not reduce the perceived delay of the onset of self-motion.
(1) Chung & Barnett-Cowan (2017) Experimental Brain Research 235(10).
(2) Chang, Uchanski & Hullar (2012) Laryngoscope 122(6).
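How a point of subjective simultaneity (PSS) is recovered from temporal order judgements can be sketched as follows. The SOA grid, PSS, and slope values are illustrative, not the study's data, and a simple least-squares grid search stands in for a full maximum-likelihood fit.

```python
import numpy as np
from math import erf, sqrt

def cum_gauss(x, mu, sigma):
    """Cumulative-Gaussian psychometric function, evaluated elementwise."""
    return np.array([0.5 * (1 + erf((xi - mu) / (sigma * sqrt(2))))
                     for xi in x])

# Hypothetical noiseless "proportion tone judged first" data: a negative PSS
# means the rotation must lead the tone to be perceived as simultaneous.
soas = np.arange(-600, 601, 100)        # ms, tone onset relative to rotation
true_pss, true_sigma = -80.0, 120.0     # illustrative values only
p_tone_first = cum_gauss(soas, true_pss, true_sigma)

# Coarse grid search: pick the (mu, sigma) minimizing squared error.
best_sse, pss_hat, sigma_hat = min(
    (float(np.sum((cum_gauss(soas, m, s) - p_tone_first) ** 2)), m, s)
    for m in np.arange(-300, 301, 5.0)
    for s in np.arange(50, 301, 5.0))
```

With real binary responses one would maximize the binomial likelihood instead of minimizing squared error, but the PSS read-out (the 50% point of the fitted curve) is the same idea.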
Distance perception when real and virtual head motion do not match.
Matthew D. Cutone, M.A., Laurie M. Wilcox, Ph.D., Robert S. Allison, Ph.D.
York University, Toronto ON
For self-generated motion parallax, a sense of head velocity is needed to estimate distance from object motion (1). This information can be obtained from vestibular, proprioceptive, and visual sources. If the magnitude of the afferent signals produced by head motion does not correlate with the velocity gradient of the visible optic flow pattern, a conflict arises that leads to a breakdown of motion-distance invariance. This potentially results in distortions of the perceived distances to objects, as visual and vestibular signals are non-concordant. We assessed this prediction by varying the gain between the observer's physical head motion and the simulated motion. Given that the relative and absolute motion parallax would be greater than expected from head motion when the gain was greater than 1.0, we anticipated that this manipulation would result in objects appearing closer to the observer. Using an HMD, we presented targets 1 to 3 meters away from the observer within a cue-rich environment with textured walls and floors. Participants stood and swayed laterally at a rate of 0.5 Hz. Lateral gain was applied by amplifying their real position by factors of 1.0 to 3.0 and using the result to set the instantaneous viewpoint within the virtual environment. After presentation, the target disappeared, and the participant performed a blind walk and reached for it. Their hand position was recorded, and we computed positional errors relative to the target. We found no effect of our motion parallax gain manipulation on binocular reaching accuracy. To evaluate the role of stereopsis in counteracting the anticipated distortion in perceived space, we tested observers on the same task monocularly. In this case, distances were perceived as nearer as gain increased, but the effects were relatively small. Taken together, our results suggest that observers are flexible in their interpretation of observer-produced motion parallax during active head movement.
This provides considerable tolerance of spatial perception to mismatches between physical and virtual motion in rich virtual environments.
(1) Howard & Rogers (2002).
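The geometric prediction being tested can be made concrete with a few lines of arithmetic: for lateral head velocity v and a target at distance d, motion parallax specifies d = v/ω, where ω is the target's relative angular velocity, so amplifying the viewpoint by a gain g shrinks the parallax-specified distance to d/g. The numbers below are purely illustrative.

```python
# Small-angle parallax geometry (illustrative values, not the experiment's).
# Amplifying the virtual viewpoint by gain g multiplies the optic-flow
# angular velocity by g, so the distance specified by parallax becomes d/g,
# predicting that targets should appear nearer at higher gains.

def parallax_distance(head_velocity, angular_velocity):
    """Distance specified by motion parallax (small-angle approximation)."""
    return head_velocity / angular_velocity

v_head = 0.10   # m/s, hypothetical lateral sway velocity
d_true = 2.0    # m, actual target distance
d_specified = {g: parallax_distance(v_head, g * v_head / d_true)
               for g in (1.0, 2.0, 3.0)}
# gain 1 -> 2.0 m, gain 2 -> 1.0 m, gain 3 -> ~0.67 m
```

The binocular result above suggests that stereopsis (and other cues) can largely override this parallax-specified compression; the small monocular effect is in the predicted direction but far from the full d/g prediction.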
Contributions of motion parallax and stereopsis to cybersickness in VR
Siavash Eftekharifar, PhD candidate1, Adam O. Bebko, PhD2, Nikolaus F. Troje, PhD2
1 Centre for Neuroscience Studies, Queen’s University, Kingston, Ontario, Canada
2 Centre for Vision Research, York University, Toronto, Ontario, Canada
Cybersickness (or visually induced motion sickness) is a common and unpleasant side effect associated with virtual reality (VR). The symptoms of cybersickness include nausea, dizziness, headache, and disorientation. While the mismatch between the sensory information received by the visual and vestibular systems is known to be the main cause of cybersickness in VR, many individual and technological factors have also been found to influence the likelihood that users develop symptoms of cybersickness, including sex, stereoscopic viewing, field of view, and refresh rate.
In this study, we investigated the contribution of two visual cues to cybersickness, namely motion parallax and stereopsis. We simulated a 10-minute rollercoaster ride presented on a head-mounted display. Observers could see the track through a special opening inside the rollercoaster cart, which enabled us to independently manipulate the availability of parallax and stereopsis. There were four conditions: (1) SWPW: the opening was a normal window, as indicated by normal stereo cues and normal motion parallax; (2) SCPC: the opening was covered by a canvas showing a picture onto which a cart-fixed camera projected its view; (3) SWPC: the opening behaved like a window with respect to stereopsis, but like a canvas with respect to parallax; (4) SCPW: the opening behaved like a canvas with respect to stereopsis, but like a window with respect to parallax. Sixty subjects participated in this study and were randomly assigned to one of the four conditions. Participants responded to a simulator sickness questionnaire (SSQ) before and after the experiment, and their electrodermal activity (EDA) was recorded during the experiment. The SSQ revealed a main effect of condition. Participants reported the highest
SSQ score in the window condition (SWPW: s=59.1) and the lowest score in the picture condition (SCPC: s=12.46). The two other conditions resulted in intermediate scores (SWPC: s=42.44; SCPW: s=42.38); both differed significantly from the picture condition but not significantly from the window condition. We did not find a significant effect of condition on EDA.
The impact of training on measures of balance and visual-vestibular integration
Grace Gabriel, MA1,2, Laurence Harris, PhD3, Denise Henriques, PhD3, Maryam Pandi, MSc2, Robert Shewaga, MSc2, Jennifer Campos, PhD1,2
1 University of Toronto
2 The KITE Research Institute, UHN
3 York University
When navigating our environments, our brains actively process and integrate several different sources of sensory input at any given moment, including dynamic visual and vestibular inputs. This process of multisensory integration during self-motion allows us to make sense of the world around us and gives us a better gauge of how to navigate effectively and safely. In this study we investigated how younger and older adults integrate visual and vestibular information (alone and in combination) in order to perceive the heading direction of their own movement. We also investigated whether training can improve the accuracy and precision of heading estimates by providing participants with feedback on their responses ("correct"/"incorrect"). Participants were seated in a state-of-the-art motion simulator and were moved forward and to the left or right in three movement conditions: 1) physically (vestibular alone), 2) visually (through a virtual cloud of dots via head-mounted display; visual alone), or 3) bimodally (vestibular and visual combined). Transfer-of-training effects were also explored by evaluating the effects of self-motion training on a standing balance task. Preliminary analyses suggest that older adults are indeed less precise than younger adults when estimating the direction of their own movements across unimodal and bimodal conditions. Training effects were observed in the form of reduced heading biases pre- vs. post-training, but no improvements in precision.
Developing methods to reduce motion sickness in Virtual Reality and its effect on human performance
Elizaveta Igoshina, BSc1, Dr. Frank Russo, PhD2, Dr. Behrang Keshavarz, PhD3
1 Multisensory Integration in Virtual Environments Lab, KITE-Toronto Rehabilitation Institute; Science of Music, Auditory Research, and Technology Lab, Ryerson University
2 Professor, Department of Psychology, Ryerson University; Affiliate Scientist, KITE-Toronto Rehabilitation Institute
3 Scientist, KITE-Toronto Rehabilitation Institute; Assistant Professor (adjunct), Department of Psychology, Ryerson University
Virtual reality (VR) technologies have myriad applications, from entertainment to scientific and medical research. One particular area in which VR technologies have a long tradition is driving simulation. Technological advancement has increased the accuracy and fidelity of driving simulators and reduced their operating costs. However, they are also known to cause simulator sickness (or visually induced motion sickness, VIMS), a special form of traditional motion sickness. The occurrence of VIMS can jeopardize the validity of driving simulators and limit their applicability. In addition, the presence of VIMS may affect user perception and behavior during a simulated driving task and bias driving performance. However, the severity of this bias is not well understood. We aim to (1) investigate how VIMS affects performance in a simulated driving task and (2) examine a potential treatment to reduce VIMS through in-vehicle ventilation. Participants will be engaged in a 30-minute driving task in which they react to hazards, obey speed limits, and complete common driving maneuvers. To study the effect of airflow on VIMS (Objective 2), for half of the participants the car vents will be positioned to face the driver's head and torso so that airflow directly contacts the driver's skin. The level of VIMS will be measured before and after the simulated drive using the Simulator Sickness Questionnaire and during the simulated drive using the Fast Motion Sickness Scale. Driving performance will be evaluated on various criteria, including the standard deviation of lane position, speed maintenance, and reaction time to events, and regressed against the level of VIMS. The results of this study will determine the impact of VIMS on performance in a simulated driving task and will indicate whether exposure to airflow could be a potential countermeasure against VIMS. Preliminary results will be presented.
Incomplete compensation for self-motion in the visual perception of object velocity during a visual-vestibular conflict
Björn Jörges, PhD, Laurence R. Harris, PhD
Center for Vision Research, York University
When an observer is moving while observing a moving target, the same retinal speeds can correspond to vastly different physical velocities. When the observer moves in the same direction as the target, the retinal speed of the object is partially cancelled out, and vice versa. Observers must thus obtain an accurate estimate of their own velocity and subtract it from, or add it to, the retinal speed elicited by the target to obtain an accurate estimate of object velocity. Estimates of an observer's speed should be facilitated when visual and vestibular cues are congruent and can be integrated without multisensory conflict (Harris, Jenkin, & Zikovitz, 2000). When self-motion is experienced only visually, while undergoing no physical motion, compensation is likely to be incomplete, leading to biases in judgments of object speed (Hypothesis 1). Furthermore, it has been argued that self-motion information is noisier than retinal information concerning object motion (Dokka, MacNeilage, DeAngelis, & Angelaki, 2015), especially when observers have only visual information about their own movement at their disposal (Fetsch, Deangelis, & Angelaki, 2010). Subtracting noisy self-motion information from retinal motion to obtain an estimate of target velocity should thus decrease precision (Hypothesis 2). To test these hypotheses, we presented two motion intervals in a 3D virtual environment and asked participants which motion was faster: one in which a single target moved linearly to the left or right in the fronto-parallel plane, and one consisting of a cloud of smaller targets travelling in the same direction. The single target moved at one of two constant speeds (6.6 or 8 m/s, 6 m from the observer), while the speed of the cloud was determined by a PEST staircase. While observing the single moving target, participants were moved visually in the same direction, in the opposite direction, or remained static.
In support of Hypothesis 1, we found differences in accuracy between static, congruent and incongruent motion; target motion during congruent self-motion was judged as slower than in the static condition and faster in the incongruent condition, indicating inadequate compensation for the observer’s motion. Self-motion during target motion observation decreased precision compared to the static condition in support of Hypothesis 2. Further research is necessary to determine how the availability of vestibular cues can remedy accuracy or precision losses during self-motion.
(1) Dokka, K., MacNeilage, P. R., DeAngelis, G. C., & Angelaki, D. E. (2015). Multisensory self-motion compensation during object trajectory judgments. Cerebral Cortex,
(2) Fetsch, C. R., Deangelis, G. C., & Angelaki, D. E. (2010). Visual-vestibular cue integration for heading perception: Applications of optimal cue integration theory. European Journal of Neuroscience,
(3) Harris, L. R., Jenkin, M., & Zikovitz, D. C. (2000). Visual and non-visual cues in the perception of linear self motion. Experimental Brain Research,
LRH is supported by an NSERC discovery grant. BJ is supported by the Canadian Space Agency.
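The compensation account behind Hypotheses 1 and 2 can be sketched as a toy simulation (not the study's analysis): perceived target speed is retinal speed plus a self-motion estimate scaled by a compensation gain. A gain below 1 makes targets look slower during congruent self-motion and faster during incongruent self-motion, and the noise of the self-motion estimate lowers precision. All gains and noise levels below are hypothetical.

```python
import numpy as np

# Toy compensation model with hypothetical parameters.
rng = np.random.default_rng(1)

def perceived_speed(target_v, self_v, comp_gain=0.7,
                    retinal_sd=0.2, self_sd=0.5, n=10_000):
    """Simulated perceived target speed (m/s) over n trials."""
    retinal = target_v - self_v + rng.normal(0, retinal_sd, n)
    if self_v == 0.0:
        return retinal                       # nothing to compensate for
    self_est = comp_gain * self_v + rng.normal(0, self_sd, n)
    return retinal + self_est                # add the (imperfect) estimate back

static = perceived_speed(6.6, 0.0)           # no self-motion
congruent = perceived_speed(6.6, 2.0)        # moving with the target
incongruent = perceived_speed(6.6, -2.0)     # moving against the target
# Mean perceived speeds: ~6.6 (static), ~6.0 (congruent, judged slower),
# ~7.2 (incongruent, judged faster); the moving conditions are also noisier.
```

This reproduces the qualitative pattern reported above; comparing a visual-only gain with a combined visual-vestibular gain in such a framework is one way to frame the proposed follow-up.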
Effects of simulated head motion and saccade direction on sensitivity to transsaccadic image motion
Maryam Keyvanara, Robert S. Allison
York University, Canada
Saccadic suppression of image displacement (SSD) is a perceptual feature of our visual system that occurs when we move our gaze from one fixation to another. SSD has mostly been studied with the head fixed. Normally, when we move about, we move our head as well as our eyes, although in virtual reality the virtual head movements may not correspond to the physical head movements, producing a conflict between vision and the vestibular sense. Here we investigated the SSD effect during simulated head movements. Participants' eyes were tracked as they viewed a set of 3D scenes with a constant (rightward) camera pan. They produced a horizontal (rightward) saccade upon displacement of an object in the scene, during which a sudden shift of the scene occurred in one of 10 different directions. Using a Bayesian adaptive procedure, we estimated thresholds for detection of these sudden camera movements. Within-subjects analysis showed that when users made horizontal saccades, horizontal image translations were significantly less detectable than vertical image translations and also less noticeable than in-depth translations. Likewise, horizontal transsaccadic rotations were significantly less detectable than vertical image rotations. These results imply that in a 3D virtual environment, when users pan their head while making a horizontal saccade, they will be less likely to notice horizontal changes to their viewpoint that occur during a saccade than vertical or in-depth changes. We are currently extending these studies to measure SSD during actual head motions in immersive VR, allowing us to assess the contributions of the visual, vestibular, and proprioceptive senses. The interaction between head motion, eye movement, and suppression of graphical updates during saccades can provide insight into designing better VR experiences.
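A Bayesian adaptive threshold procedure of the kind mentioned above can be sketched with a toy QUEST-style loop. This is a generic illustration, not the authors' implementation; the simulated observer, psychometric slope, and grid are all hypothetical.

```python
import numpy as np
from math import erf, sqrt

# Toy Bayesian adaptive threshold estimation (hypothetical parameters):
# keep a posterior over the detection threshold, update it after each
# simulated yes/no response, and place the next test at the posterior mean.
rng = np.random.default_rng(2)

def p_detect(displacement, threshold, slope=0.5):
    """Probability of detecting a transsaccadic shift of a given size (deg)."""
    return 0.5 * (1 + erf(slope * (displacement - threshold) / sqrt(2)))

true_threshold = 2.0                        # deg, simulated observer
grid = np.linspace(0.1, 6.0, 300)           # candidate thresholds
posterior = np.ones_like(grid) / grid.size  # flat prior

level = 3.0                                 # first test level
for _ in range(80):
    detected = rng.random() < p_detect(level, true_threshold)
    likelihood = np.array([p_detect(level, t) for t in grid])
    posterior *= likelihood if detected else (1 - likelihood)
    posterior /= posterior.sum()
    level = float(np.sum(grid * posterior))  # next trial at posterior mean

threshold_hat = float(np.sum(grid * posterior))
```

After a few dozen trials the posterior mean converges near the simulated threshold; in the experiment one such threshold is estimated per shift direction.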
Updating using visual and vestibular cues during linear lateral translation
John Jong-Jin Kim, M.A., Laurence R. Harris, Ph.D.
Center for Vision Research, York University, Toronto, ON, Canada
Updating the egocentric positions of objects of interest during self-motion is fundamental to our daily navigation and effective interaction with the world. And yet people make systematic errors in the direction of their movement when updating these positions after lateral self-motion (Klier, Hess, & Angelaki, 2008). The source of these errors is still largely unknown. When updating the position of surrounding objects, a person first needs to know their own movement through space, which requires integrating information from various senses, including the visual, vestibular, somatosensory, and motor systems (Harris et al., 2002). To explore the contribution of visual and vestibular motion cues to these errors, we compared the errors people make when updating target positions during passive linear lateral translation with a) visual cues only, b) vestibular cues only, or c) both visual and vestibular cues. As a control condition, we also measured the errors people make when remembering the target location without self-motion, i.e., while stationary for a comparable period of time. We used an Oculus Rift (CV1) to provide the visual cues (optic flow and visual targets) and a Moog 6DOF motion platform to provide the vestibular cues. Targets (lateral positions: ±0.46 m, ±0.23 m, or 0 m from the center of the screen; simulated viewing distance 1.75 m) were presented briefly (0.5 s) on a simulated projector screen while participants fixated a cross. After an idle period (7 s) or a lateral translation (left or right at ~0.07 m/s for 7 s; lateral distance of 0.46 m), they positioned a dot at the remembered target positions by pointing a hand-held controller. In general, participants underestimated target eccentricity when pointing at the remembered target positions, with greater errors for more eccentric targets. Participants made greater errors with only vestibular cues than with only visual cues.
However, when both visual and vestibular cues were available, they did not perform better than with only visual cues. Based on these findings, our ability to update a remembered target’s position appears to be affected by a target’s eccentricity, and visual motion cues alone are enough to evoke updating. Physical cues may not be needed when visual cues are available.
(1) Harris et al (2002). Virtual Reality 6(2),
(2) Klier et al (2008). Journal of Neurophysiology 99.
Is home exercise for dizziness after mild traumatic brain injury enough? Could wearable sensors help?
Martini DN, Pettigrew NC, Wilhelm JL, Scanlan KT, King LA
Oregon Health and Science University, Portland, Oregon
(1) Grabowski P, et al., Multimodal impairment-based physical therapy for the treatment of patients with post-concussion syndrome: A retrospective analysis on safety and feasibility. Phys Ther Sport. 2017;23:22-30.
(2) Argent R, et al., Patient Involvement With Home-Based Exercise Programs: Can Connected Health Interventions Influence Adherence?. JMIR Mhealth Uhealth. 2018;6(3):e47.
(3) Fino PC, et al., Inertial Sensors Reveal Subtle Motor Deficits When Walking With Horizontal Head Turns After Concussion. J Head Trauma Rehabil. 2019;34(2):E74-E81.
(4) Shull PB, et al., Quantified self and human movement: a review on the clinical impact of wearable sensing and feedback for gait analysis and intervention. Gait Posture. 2014;40(1):11-9.
(5) Wang Q, et al., A. Interactive wearable systems for upper body rehabilitation: a systematic review. J Neuroeng Rehabil. 2017;14(1):20.
Otopathologic Findings in the Peripheral Vestibular System Following Head Injury
Renata M. Knoll, MD1, Reuven Ishai, MD1, Rory J. Lubner, BA1, David H. Jung, MD, PhD1, Aaron K. Remenschneider, MD, MPH1, Elliott D. Kozin, MD1, Joseph B. Nadol Jr., MD2, Danielle R. Trakimas, MSE3
1 Department of Otolaryngology, Harvard Medical School/Massachusetts Eye and Ear, Boston, MA
2 Otopathology Laboratory, Department of Otolaryngology, Massachusetts Eye and Ear, Boston, MA
3 Department of Otolaryngology, Johns Hopkins School of Medicine, Baltimore, MD
Head injury is a major public health concern worldwide. It is estimated that more than 5.3 million individuals in the United States live with a head injury-related disability. Vestibular dysfunction has long been recognized as one of the possible sequelae of head injury. However, while the clinical findings of dizziness, disequilibrium, and vertigo after head injury are well described, less is known about the pathophysiology of vestibular dysfunction. Herein, we aimed to use human otopathologic techniques to analyze the histopathology of the peripheral vestibular system in patients with a history of head injury. Human temporal bones (TBs) from the National Temporal Bone Pathology Registry with a history of head injury, with or without temporal bone fracture (TBF), were included. Cases were categorized into head injury with TBF (Group A) and head injury without TBF (Group B). Specimens were evaluated for qualitative and quantitative characteristics, such as the number of Scarpa ganglion neurons (ScGN) in the superior and inferior vestibular nerves, degeneration of vestibular hair cells (HCs) and/or dendrites in the otolithic organs and semicircular canals, the presence of vestibular endolymphatic hydrops, and obstruction of the endolymphatic duct. Cases were compared to age-matched controls (Group C) without a history of head injury. A total of 14 TBs corresponding to 10 cases (90% male) with a history of head injury were identified. An additional seven normal TBs from six patients were included as age-matched controls (p=.817). Five TBs had evidence of a transverse TBF (Group A), while nine TBs had no evidence of fracture (Group B). Group A demonstrated severe degeneration of the vestibular membranous labyrinth in the semicircular canals (100%, n=5 TBs) and mild to severe degeneration of the maculae utriculi and sacculi (100%, n=5 TBs).
Group B showed moderate to severe degeneration of the vestibular membranous labyrinth in the semicircular canals (44%, n=4 TBs) and moderate to severe degeneration of the maculae utriculi and sacculi (22%, n=2 TBs). Vestibular hydrops was present in Group A (40%, n=2 TBs) and Group B (22%, n=2 TBs). Blockage of the endolymphatic duct was identified in Group A (60%, n=3 TBs) and Group B (11%, n=1 TB). The mean total ScGN count was decreased by 52.6% and 40.3% relative to age-matched controls (n=7) in Groups A and B, respectively (p=.013 and p=.017). This is the first histopathological study of human temporal bones to examine the peripheral vestibular system in patients with a history of head injury with and without temporal bone fractures. Otopathologic analysis in patients with a history of head injury demonstrated distinct peripheral vestibular pathology, including reduction of ScGN even in cases without TBF.
Translation Perception and the Impact of Orientation and Gravity
Megan Kobel, Au.D., Daniel Merfeld, Ph.D.
The Ohio State University, Columbus OH
Accurate perception of translational acceleration is fundamental for balance. The otolith organs detect both gravity and linear acceleration, and these must be disambiguated for accurate perception of either. While subjects without vestibular dysfunction show similar sensitivity to earth-vertical and earth-horizontal motions (MacNeilage et al., 2010), patients with complete bilateral vestibular loss display a larger impact of vestibular loss on earth-vertical superior-inferior translations than on earth-horizontal inter-aural translations (Valko et al., 2012). This suggests that the use of vestibular thresholds for accurate identification of impaired models of gravity and underlying vestibular dysfunction depends on orientation relative to gravity. Given the somewhat differing conclusions of these previous studies, our goal was to comprehensively assess how translation perception is affected by the direction of movement in world coordinates (i.e., relative to gravity), in head coordinates (i.e., relative to the otolith organs), and by body orientation (i.e., gravity relative to the otoliths). This requires at least six test conditions (listed below) to test the following hypotheses:
1) Thresholds measured while upright (where most motion is experienced) are lower than those measured when tilted 90°.
2) Translations aligned with inter-aural axis (y-axis) yield smaller thresholds than translations aligned with superior-inferior axis (z-axis).
3) Earth-vertical (up/down) motions parallel to gravity yield higher thresholds than earth-horizontal motions perpendicular to gravity.
Vestibular thresholds for 1 Hz inter-aural (y-axis) and superior-inferior (z-axis) translations were determined using standard methods in a Moog 6DOF motion platform in normal subjects (n=12). Trial order was randomized and counter-balanced to the extent possible to assess thresholds for both axes in three body orientations: upright, ear-down, and supine. Crossing the y-axis and z-axis translations with the three orientations yields 6 motion conditions. Repeated measures analyses were performed to test these three hypotheses, and evidence supporting all three was found. By assessing the impact of the orientation of gravity relative to the otoliths and of movement direction, this study provides fundamental insights into vestibular processing and essential normative data for future implementation of perceptual thresholds in vestibular diagnosis.
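The "standard methods" for estimating direction-recognition thresholds in this literature typically fit a cumulative-Gaussian psychometric function to binary left/right responses, with the threshold defined as the one-sigma point of the fit. The sketch below is an editorial illustration only; the stimulus units, trial count, and grid values are assumptions, not this study's parameters.

```python
import math
import random

def p_rightward(x, sigma):
    # Cumulative Gaussian (mean 0): probability of a "rightward" response
    # to a stimulus of signed magnitude x, given one-sigma threshold sigma.
    return 0.5 * (1.0 + math.erf(x / (sigma * math.sqrt(2.0))))

def fit_threshold(stimuli, responses, sigma_grid):
    # Grid-search maximum-likelihood estimate of sigma from binary responses.
    best_sigma, best_ll = None, -math.inf
    for s in sigma_grid:
        ll = 0.0
        for x, r in zip(stimuli, responses):
            p = min(max(p_rightward(x, s), 1e-9), 1.0 - 1e-9)
            ll += math.log(p if r else 1.0 - p)
        if ll > best_ll:
            best_sigma, best_ll = s, ll
    return best_sigma

# Simulate one session of a direction-recognition task with a true
# one-sigma threshold of 1.0 (arbitrary units), then recover it.
random.seed(1)
true_sigma = 1.0
stimuli = [random.uniform(-3.0, 3.0) for _ in range(2000)]
responses = [random.random() < p_rightward(x, true_sigma) for x in stimuli]
estimate = fit_threshold(stimuli, responses, [0.1 * k for k in range(1, 51)])
```

With enough trials the fit recovers the generating sigma; in practice adaptive staircases concentrate trials near threshold to reach similar precision with far fewer trials.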
1. MacNeilage et al, J. Neurosci., 2010.
2. Valko et al, J. Neurosci., 2012.
The Role of Visual, Auditory, and Tactile Cues in the Perception of Self-Motion (Vection) in Younger and Older Adults
Brandy Murovec, BSc1, Behrang Keshavarz, PhD2
1 Ryerson University, Toronto, ON
2 Toronto Rehabilitation Institute, Toronto, ON
Virtual reality (VR) is advancing as a tool in a variety of domains, such as training, research, and entertainment. One of the most critical components of an immersive experience in VR is vection, defined as the illusion of self-motion. Vection has been demonstrated to be a multisensory phenomenon, relying on cues from multiple sensory modalities, including visual, auditory, and tactile cues. As a natural result of aging, the sensory systems that detect and process these cues have been shown to decline. The objective of the present study is to investigate vection in the context of age, to see whether these declining sensory systems influence the perception of vection. To investigate this research question, 30 younger adults and 30 older adults will be recruited to participate in a study at the Toronto Rehabilitation Institute’s StreetLab. Participants will be seated in a chair in StreetLab and exposed to a revolving stimulus inducing the illusion that they are rotating on the chair although they remain stationary (i.e., circular vection). The rotating stimulus will contain visual (a photorealistic virtual city scene), auditory (three stationary sound sources placed within the same virtual city scene), and/or tactile (a circular handrail within reach that rotates around the participant) cues. All participants will be exposed to trials that include a single sensory input (visual-only, auditory-only, tactile-only), a combination of two (audio-visual, audio-tactile, visual-tactile), or all three sensory cues (audio-visual-tactile). Vection onset, duration, and intensity will be measured using subjective ratings and a button press system. The outcome of this study will help us understand how to improve VR applications for older adults and optimize this technology for rehabilitation, training, and entertainment purposes.
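The seven cue conditions described above are simply the non-empty subsets of the three modalities; a quick enumeration (an editorial illustration, not the study's code) confirms the count:

```python
from itertools import combinations

modalities = ("visual", "auditory", "tactile")
# All non-empty combinations: 3 single-cue + 3 two-cue + 1 three-cue = 7 conditions.
conditions = [c for r in range(1, len(modalities) + 1)
              for c in combinations(modalities, r)]
```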
Effects of Motion on Simulated Driving Performance in Younger and Older Adults
Robert J. Nowosielski, MSc., Jennifer Campos, PhD
University of Toronto, Toronto Rehabilitation Institute, Toronto, ON
Vestibular function is known to change with age, but the effects of these changes on functional activities requiring self-motion perception are largely unknown. Driving is a complex task that involves the use of vestibular inputs to guide self-motion perception and behaviours. However, the degree to which age-related changes in vestibular function affect driving performance has yet to be studied in an experimental setting. Driving simulators are an increasingly common tool for examining driving performance in a safe and controlled way, yet the degree to which these simulators approximate real driving performance remains elusive, largely due to the variability of motion capabilities across different types of driving simulators. Using the Toronto Rehabilitation Institute’s state-of-the-art driving simulator, we measured the driving performance of older and younger drivers across three physical motion conditions: no motion, rotational motion (yaw) only, and full motion (yaw, pitch, roll, and translational motion) using a between-subjects design (age and type of motion). We tested 34 younger adults aged 18–35 and 32 adults aged 65 and older using three 15-minute driving scenarios for each motion condition, with driving performance measured across 14 variables (e.g., mean speed, lateral acceleration, lane deviation). We hypothesized an additive and beneficial effect of motion (no motion to yaw, yaw to full motion) on driving performance over time (e.g., reduced speed variability, reduced lane deviations), with older adults being more sensitive to the effects of motion. Our results, however, demonstrate a more nuanced effect of motion on driving performance, with younger and older adults responding to motion cues significantly differently and adjusting to these cues in different ways over drives/exposure time.
These findings suggest that age-related changes in vestibular functioning should not be viewed simply as decrements in function, but rather as prompting unique perceptual and cognitive strategies for integrating multisensory information.
Automatic Quantification of Nystagmus in Bedside Recordings from Patients with Acute Dizziness
Sai Akanksha Punuganti1, Tzu-Pu Chang, MD2, Jing Tian, Ph.D3, David Newman-Toker, M.D, Ph.D4, Jorge Otero-Millan, Ph.D5
1 Department of Biomedical Engineering, Johns Hopkins University Baltimore, USA
2 Department of Neurology/Neuromedical Scientific Center, Taichung Tzu Chi Hospital, Buddhist Tzu Chi Medical Foundation, Taichung, Taiwan
3 Department of Neurology, Johns Hopkins University, Baltimore, USA
4 Department of Neurology, Johns Hopkins University, Baltimore, USA
5 Department of Neurology, Department of Otolaryngology-Head and Neck Surgery, Armstrong Institute Center for Diagnostic Excellence, Johns Hopkins University, Baltimore, USA
Nystagmus is a pattern of involuntary eye movements typically composed of alternating slow-phases of eye drift in a constant direction and quick-phases, like saccades, where the eye jumps in the opposite direction. Evaluation of nystagmus during the Dix-Hallpike maneuver is key to diagnosing Benign Paroxysmal Positional Vertigo (BPPV) since it elicits a typical pattern with intensity that first peaks after a few seconds and then decays within approximately 30s. BPPV is the most common peripheral cause of nystagmus, especially among patients above 60 years of age, and is caused by the presence of debris (otoconia) within the semicircular canals. Even though BPPV can be diagnosed and treated with simple maneuvers done by vestibular experts, there is a high rate of misdiagnosis that results in high medical costs when using expensive and time-consuming neuroimaging techniques. In order to address the high rate of misdiagnosis in patients suffering from BPPV, there is a need for accurate and automated nystagmus detection methods. Here we will focus on automatic quantification of nystagmus recorded at the bedside during the Dix-Hallpike Maneuver with the objective of identifying patients suffering from BPPV. Specifically, we adapt saccade detection methods to identify quick-phases of nystagmus and introduce new methods to detect and remove artifacts and noise in the data caused, for example, by partial eyelid closure, poor pupil detection, or undesired reflections. We show how our method can outperform a commercially available solution when comparing the presence or absence of nystagmus with the reports of a vestibular expert.
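The quick-phase detection the authors describe, adapted from saccade-detection methods, commonly reduces to a velocity-threshold rule with artifact rejection layered on top. The sketch below shows only the velocity-threshold core; the 50 deg/s cutoff, the sampling rate, and the differentiation scheme are illustrative assumptions, not the authors' actual pipeline (which additionally removes eyelid, pupil-loss, and reflection artifacts).

```python
def detect_quick_phases(eye_pos, fs, vel_threshold=50.0):
    """Return (start, end) sample intervals of candidate quick-phases.

    eye_pos: eye position trace in degrees, sampled at fs Hz
    vel_threshold: velocity cutoff in deg/s (an assumed, typical value)
    """
    # Two-point central-difference velocity estimate (deg/s).
    vel = [0.0] + [(eye_pos[i + 1] - eye_pos[i - 1]) * fs / 2.0
                   for i in range(1, len(eye_pos) - 1)] + [0.0]
    flags = [abs(v) > vel_threshold for v in vel]
    # Group consecutive supra-threshold samples into intervals.
    intervals, start = [], None
    for i, f in enumerate(flags):
        if f and start is None:
            start = i
        elif not f and start is not None:
            intervals.append((start, i - 1))
            start = None
    if start is not None:
        intervals.append((start, len(flags) - 1))
    return intervals
```

Slow-phase drift stays below the cutoff, while quick phases, like saccades, exceed it; interval durations and directions can then be used to quantify nystagmus intensity over the maneuver.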
Systemic injection of Calcitonin Gene-Related Peptide (CGRP) prolongs a nausea-like state in mice
Shafaqat Rahman, Benjamin Liang, Catherine Hauser, Stefanie Faucher, Raajan Jonnala, Anne Luebke
Departments of Biomedical Engineering and Neuroscience
Del Monte Institute of Neuroscience, University of Rochester Medical Center
Nausea is a prominent symptom and major cause of complaint for patients with migraine and specifically vestibular migraine (VM). As a readout of the nausea-like state present in migraine and VM, we assessed hypothermic responses to provocative motion. Recent studies have demonstrated that provocative motion causes robust and prominent hypothermic responses in rats, humans, house musk shrews, and mice, and that the hypothermic responses of animals and humans share a clear parallel in the underlying physiological mechanism: cutaneous vasodilatation that favors heat loss. Additionally, because systemic CGRP injection has been shown to cause light-aversion (photosensitivity) in mice, we wondered what effect systemic CGRP injection would have on these nausea-like states in wildtype mice.
We carried out these studies on 40 wildtype C57BL/6J (JAX 664) mice (20F/20M). Head and tail temperatures were measured using an FLIR E60 IR camera before, during, and after a 20 min orbital rotation (0.75 Hz, 4 cm displacement). One week later, the same mice were injected systemically with 0.1 mg/kg rat α-CGRP (Sigma) and were retested.
We confirmed that, in both female and male C57BL/6J mice, provocative motion produces a decrease in head temperature (hypothermia) of ~1.5 °C that recovers and is associated with a short-lasting tail-skin vasodilation (a tail-skin temperature increase of ~4 °C). Interestingly, systemic CGRP injection caused a similar reduction in head temperature, yet the hypothermia did not recover. Moreover, there was no associated tail-skin vasodilation in CGRP-injected mice.
In conclusion, provocative motion in wildtype mice is accompanied by hypothermia that involves both autonomic and thermo-effector mechanisms. Moreover, a systemic CGRP injection prolongs the hypothermia and eliminates the tail-skin vasodilation. Experiments are underway to determine what effects CGRP antagonists and triptans may have on these physiological correlates of nausea.
The effect of small asynchronies of visual stimulus on inertial heading perception
Raul Rodriguez, MS, Benjamin T. Crane, MD, PhD
University of Rochester, Rochester, NY
Misaligned visual and inertial sensory perception can result in feelings of dizziness. Unexpected timing delays in the nervous system can be a major contributing factor. A body of literature demonstrates the effect of visual stimuli on inertial heading perception; those experiments aim to discover how sensory integration operates. However, other factors may play a role in the ability of those sensory modalities to integrate. The effect of presenting a delayed visual stimulus on inertial heading perception has not yet been thoroughly investigated. Preliminary data found a significant difference between the effect of a non-delayed visual stimulus and that of a visual stimulus delayed by 100 ms. Auditory-visual studies demonstrate effects on perception across a range of time delays from -100 ms to 100 ms [1]. This experiment explores that range by presenting the subject with timing delays from -100 ms to 100 ms at 25 ms intervals. Inertial motion is provided by a 6-DOF motion platform, and a visual stimulus is presented concurrently. The inertial heading directions range from -140° to 140° in 35° increments, while the visual stimulus ranges from -120° to 120° in 30° increments relative to the inertial heading direction. Therefore, 81 different stimulus combinations are presented randomly, twice to each subject at every timing delay. We found an increase in the variability of responses as the offset increases and a statistically significant difference between certain time delays (e.g., -20 ms vs. 20 ms visual delay at 120° offset; p<0.05; Kruskal-Wallis; Wilcoxon method). Visual influence on inertial heading perception is dependent on offset size and timing delay.
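The trial arithmetic implied by this design can be checked directly; a short enumeration (an editorial illustration) reproduces the 81 stimulus combinations and the 9 delay levels:

```python
# 9 inertial headings x 9 visual offsets = 81 combinations, each shown
# twice per timing delay; delays run -100 ms to +100 ms in 25 ms steps.
headings = list(range(-140, 141, 35))        # degrees
visual_offsets = list(range(-120, 121, 30))  # degrees, relative to heading
delays_ms = list(range(-100, 101, 25))       # milliseconds

combos = [(h, v) for h in headings for v in visual_offsets]
trials_per_delay = 2 * len(combos)
```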
[1] Hear Res. 2009 Dec;258(1-2):89-99. doi: 10.1016/j.heares.2009.04.009. Epub 2009 Apr 22.
Assessment of Inter-rater Reliability in Oculomotor, Vestibular and Reaction Time Tests following Traumatic Brain Injury in U.S Military Service Members
Daniel S. Talian, AuD1, Megan M. Eitel, AuD2, Danielle J. Zion, AuD3, Stefanie E. Kuchinsky, PhD4, Louis M. French, Psy.D5, Tracey A. Brickell, D.Psych6, Sara M. Lippa, Ph.D7, Rael T. Lange, Ph.D8, Douglas S. Brungart, PhD9
1 Army Hearing Program, U.S. Army Public Health Center, Aberdeen, MD, USA
Walter Reed National Military Medical Center, Bethesda, MD, USA
2 Defense and Veterans Brain Injury Center, Silver Spring, MD, USA
Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
Walter Reed National Military Medical Center, Bethesda, MD, USA
3 Henry M. Jackson Foundation for the Advancement of Military Medicine, Bethesda, MD, USA
Walter Reed National Military Medical Center, Bethesda, MD, USA
4 Walter Reed National Military Medical Center, Bethesda, MD, USA
5 National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA
Defense and Veterans Brain Injury Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
Uniformed Services University of the Health Sciences, Bethesda, MD, USA
6 Defense and Veterans Brain Injury Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
Uniformed Services University of the Health Sciences, Bethesda, MD, USA
7 National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA
8 Defense and Veterans Brain Injury Center, Walter Reed National Military Medical Center, Bethesda, MD, USA
National Intrepid Center of Excellence, Walter Reed National Military Medical Center, Bethesda, MD, USA
University of British Columbia, Vancouver, BC, Canada
9 Walter Reed National Military Medical Center, Bethesda, MD, USA
Training Roll Tilt Self-Motion Perception
Andrew Wagner, PT, DPT, NCS1, Daniel M. Merfeld, PhD1, Manuel Klaus, PhD2, Fred W. Mast, PhD2
1 The Ohio State University, Columbus, OH
2 University of Bern, Bern Switzerland
Elevated roll tilt perceptual thresholds have recently been shown to be predictive of age-related balance impairment [1]; thus, studying the capacity to induce roll tilt perceptual learning (i.e., to reduce vestibular noise) has the potential to inform future efforts that aim to probe the relationship between balance, falls, and vestibular function among older adults. Repeated exposure to various sensory stimuli has been shown to induce a learning effect for multiple sensory modalities; however, evidence for perceptual learning in the vestibular system is sparse. In one previous study of vestibular perceptual learning, Hartmann et al. (2013) showed that in the absence of visual cues, interaural translation and yaw rotation direction-recognition thresholds were unchanged following a perceptual learning intervention [2]. Using roll tilt, which has been shown to be physiologically relevant to balance [1], Klaus et al. (2020) recently showed evidence of a robust capacity for improving self-motion perception, reducing roll tilt perceptual thresholds by 33% after 9 hours of training [3]. Building on this work, our current goal is to determine whether similar results can be achieved in a shortened time period (5.5 vs. 9 hours) using an automated protocol. We hypothesized that roll tilt perceptual thresholds would be significantly improved after 5.5 hours of training.
We measured 0.2 Hz roll tilt perceptual thresholds before and after a vestibular perceptual learning intervention. Using a six-degree-of-freedom motion platform (Moog, East Aurora, NY), subjects completed 1800 trials of passive, 0.2 Hz head-centered roll tilt over a period of three days (5 to 6 hours total training time). Using baseline threshold measures, the roll tilt stimulus was selected to target 70.7% accuracy. During training, subjects were provided with an auditory cue after each trial notifying them whether their answer (right vs. left roll tilt) was correct or incorrect. A control group received only vestibular perceptual threshold testing on two occasions, separated by approximately 48 hours. We used a two-sample t-test to compare the change in perceptual thresholds between the experimental and control groups to determine the existence of a training effect. This study assesses whether perceptual learning can be attained with greater efficiency (i.e., in less time) than in a previous roll tilt perceptual learning paradigm.
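Targeting 70.7% accuracy from a baseline threshold has a simple closed form if one assumes, as is standard in this literature, a cumulative-Gaussian psychometric function (70.7% is also the convergence point of a 2-down/1-up staircase). The function and example values below are an editorial sketch, not the study's code:

```python
from statistics import NormalDist

def training_tilt(baseline_sigma_deg, target_accuracy=0.707):
    # Tilt magnitude at which a cumulative-Gaussian observer with
    # one-sigma threshold baseline_sigma_deg answers correctly at the
    # target rate: x = sigma * Phi^-1(target_accuracy).
    return baseline_sigma_deg * NormalDist().inv_cdf(target_accuracy)

# A baseline threshold of 1.0 deg implies training at roughly 0.54 deg.
```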
[1] Beylergil, S.B., Karmali, F., Wang, W., Bermúdez Rey, M.C., Merfeld, D.M.: Vestibular roll tilt thresholds partially mediate age-related effects on balance. In: Progress in Brain Research. pp. 249–267. Elsevier (2019)
[2] Hartmann, M., Furrer, S., Herzog, M.H., Merfeld, D.M., Mast, F.W.: Self-motion perception training: thresholds improve in the light but not in the dark. Exp Brain Res. 226, 231–240 (2013).
[3] Klaus, M., Schone, C., Hartmann, M., Merfeld, D.M., Schubert, M.C., Mast, F.: Roll tilt self-motion direction discrimination: First evidence for perceptual learning. Attention, Perception, & Psychophysics. (2020).
Self-Motion as a Link between Stereopsis and Motion Parallax
Xiaoye Michael Wang, Ph.D.1
Anne Thaler, Ph.D.1
Siavash Eftekharifar2
Adam O. Bebko, Ph.D.1
Nikolaus F. Troje, Ph.D.1
1 Centre for Vision Research, York University, Toronto, ON, Canada
2 Centre for Neuroscience Studies, Queen’s University, Kingston, ON, Canada
Motion parallax aids depth perception in everyday environments [2]. It requires a coupling between one’s own bodily movements and retinal image updates. Such a coupling, as a form of sensorimotor contingency [1], gives rise to a sense of occupying a location in visual space [3][4]. Although the perceptual changes from active and passive motion are geometrically identical, the latter does not contain the efferent motor information that produces such changes, and therefore breaks the perception-action coupling. In this study, we investigated the perceptual consequences of viewing a natural scene when observers moved either actively or passively, monocularly or binocularly, in virtual reality. Using a head-mounted display, we placed participants in a hexagonal gazebo in the middle of a forest. Two adjacent openings of the gazebo were used to display the forest. The left one behaved like a window that provided both stereoscopic and motion parallax information about the forest (SWPW). The right opening behaved like a window with regard to motion parallax but like a flat picture with regard to stereopsis (SPPW). In each trial, participants adjusted the motion parallax gain in the right opening so that it would look the same as what they saw on the left. Changing the gain changed the mapping between the observer’s head movement and the viewpoint from which the scene was rendered. To produce motion parallax, participants either moved actively by swaying laterally, or we simulated equivalent visual flow while participants remained stationary. They also viewed the entire scene with either both eyes or only their dominant eye. Results showed that in the binocular condition, participants set the parallax gain to around 0.5 when they moved actively, but to around 1 when they only passively received visual motion. In the monocular condition, the gain was around 1 in both movement conditions.
This study showed that stereopsis shapes expectations about motion parallax: SPPW and SWPW contain the same amount of parallactic motion, but because stereopsis specifies a flat screen in SPPW, observers did not expect to see such motion and therefore perceived the same visual motion as larger than in SWPW. In the passive visual flow condition, however, stereopsis did not affect perceived motion.
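The gain manipulation described above amounts to a linear remapping of the observer's lateral head position onto the position of the rendering viewpoint. A minimal one-dimensional sketch (the function name and simplification are editorial assumptions):

```python
def rendered_viewpoint(head_x, gain, origin_x=0.0):
    # A gain of 1.0 reproduces the physical head position (full motion
    # parallax); a gain of 0.5 halves the parallactic motion relative
    # to the trial origin; a gain of 0.0 freezes the viewpoint.
    return origin_x + gain * (head_x - origin_x)
```

Participants' adjustments then correspond to choosing the gain at which the right opening's parallactic motion looked equivalent to the left one's.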
[1] O’Regan, J. K., & Noë, A. (2001). A sensorimotor account of vision and visual consciousness. Behavioral and brain sciences, 24(5), 939-973.
[2] Rogers, B., & Graham, M. (1979). Motion parallax as an independent cue for depth perception. Perception, 8(2), 125-134.
[3] Slater, M. (2009). Place illusion and plausibility can lead to realistic behaviour in immersive virtual environments. Philosophical Transactions of the Royal Society B: Biological Sciences, 364(1535), 3549-3557.
[4] Troje, N. F. (2019). Reality Check. Perception, 48(11), 1033-1038.
The Soul of Spatial Orientation – the Internal Model
Laurence R. Young
MIT, Cambridge, MA
The physical inputs to the vestibular system are now well understood. The manner in which angular acceleration of the head is sensed by the semicircular canals is straightforward, and was described by Jongkees and colleagues nearly a century ago. The encoding of cupula deflection and its transmission to the vestibular nuclei and beyond was elegantly described by Fernandez and Goldberg, who even deigned to refer to differential equations to describe the process. The physical outputs of the sensorimotor system, particularly the angular velocity of eye movements in the vestibulo-ocular reflex, were mysteriously different, however. The lengthening of the VOR time constant beyond that of the cupula's mechanical deflection required some further neural manipulation. Raphan and Cohen termed it “velocity storage,” but that didn’t explain either its mechanism or its purpose. And even longer time constants were needed to describe the “adaptive response” to sustained stimulation, as considered by Young and Oman and by Melvill Jones and Malcolm. Once again, the mathematical model described but did not explain the phenomena. Complex physical stimuli, such as the responses to head tilt while rotating (Coriolis cross-coupling), were known to cause motion sickness and even vertigo, but the basis for habituation remained elusive. The interpretation of gravito-inertial stimuli, as transduced by the otolith organs, was readily understood at the physical transducer level, but there remained the unsolved issue of how the brain interpreted the otolith signals: as tilt relative to gravity or as linear acceleration. Our OTTR (Otolith Tilt-Translation Reinterpretation) hypothesis, along with that of Parker and Reschke, was, once again, descriptive but not explanatory.
Numerous other phenomena illustrated the way in which multiple sensory modalities, including possible graviceptors near the kidney, foot pressure, light finger touch and other tactile sensing, contributed to the egocentric sense that down was in the direction of the feet. Psychophysical measures of visually induced motion (vection) amply demonstrated how the response to one physical stimulus, such as a rotating or tilted visual field, could be drastically altered by a confirming or contradicting signal from another sensory system. And finally, the familiarity or novelty of an environment could unleash a host of previously learned sensory-motor reflexes, invoking context specific adaptation.
The key to understanding how all of these complex, multi-dimensional relationships operate is the concept of the INTERNAL MODEL. As incorporated in the optimal estimator/Kalman filter model, or its offspring, the observer model, the various sensory measures are compared to expected responses. Most importantly, comparisons are made to the internal model’s expectations, based on continuing prediction of the sensory input. Among the examples used to illustrate this concept will be space sickness and, of increasing practical concern, earth sickness.
