Abstract
How does the central nervous system (CNS) combine sensory information from the semicircular canal, otolith, and visual systems into perceptions of rotation, translation, and tilt? Over the past four decades, a variety of input-output ("black box") mathematical models have been proposed to predict human dynamic spatial orientation perception and eye movements. The models have proved useful in vestibular diagnosis, aircraft accident investigation, and flight simulator design, and experimental refinement continues. This paper briefly reviews the history of two widely known model families, the linear "Kalman Filter" and the nonlinear "Observer". Recent physiologic data support the internal model assumptions common to both. We derive simple 1-D and 3-D examples of each model for vestibular inputs, and show why, despite apparently different structure and assumptions, the linearized model predictions are dynamically equivalent when the four free model parameters are adjusted to fit the same empirical data and perceived head orientation remains near upright. We argue that the motion disturbance and sensor noise spectra employed in the Kalman Filter formulation may reflect normal movements in daily life and perceptual thresholds, and thus justify the interpretation that the CNS cue-blending scheme may well minimize least-squares angular velocity perceptual errors.
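The 1-D Kalman Filter idea summarized above can be sketched numerically. The snippet below is a minimal illustration, not the paper's fitted model: it assumes a random-walk motion disturbance (variance `Q`) and additive canal sensor noise (variance `R`), both hypothetical values, computes the steady-state scalar Kalman gain by iterating the Riccati recursion, and shows that the resulting estimate of angular velocity has lower error than the raw noisy signal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical per-step variances (illustrative only): random-walk
# motion disturbance Q and canal sensor noise R.
Q, R = 0.01, 1.0

# Steady-state error variance via fixed-point iteration on the
# scalar Riccati recursion: P <- (P + Q) - (P + Q)^2 / (P + Q + R)
P = 1.0
for _ in range(1000):
    Pp = P + Q                    # predicted error variance
    P = Pp - Pp**2 / (Pp + R)     # measurement update
K = (P + Q) / (P + Q + R)         # steady-state Kalman gain

# Simulate a slowly drifting true angular velocity and a noisy
# canal-like measurement of it.
n = 2000
omega = 30.0 + np.cumsum(rng.normal(0.0, np.sqrt(Q), n))  # deg/s
y = omega + rng.normal(0.0, np.sqrt(R), n)                # noisy afferent

# Run the steady-state filter: predict (identity model) + correct.
omega_hat = np.zeros(n)
est = 0.0
for k in range(n):
    est = est + K * (y[k] - est)
    omega_hat[k] = est

rmse_raw = np.sqrt(np.mean((y - omega) ** 2))
rmse_kf = np.sqrt(np.mean((omega_hat[200:] - omega[200:]) ** 2))
print(f"gain K = {K:.3f}, raw RMSE = {rmse_raw:.2f}, "
      f"filtered RMSE = {rmse_kf:.2f}")
```

With a small disturbance variance relative to the sensor noise, the gain is small and the filter averages heavily over the noisy afferent; this is the sense in which the cue-blending scheme minimizes least-squares angular velocity error under the assumed spectra.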
