Abstract
Head-coupled simulators consist of a helmet-mounted display, an image generator, and a head position sensor. By measuring where the head is pointing and displaying the appropriate visual information, the system presents the wearer with a simulated visual environment. Integrating devices that display to other sensory modalities means the user could also be presented with simulated auditory, tactile, and kinesthetic information. The technologies used in head-coupled simulators are evolving to make them cheaper, lighter, and more readily available.
The objective of the symposium is to serve as a forum for the presentation and discussion of some of the many current applications of head-coupled simulators. The range of empirical studies covered will demonstrate the flexibility and applicability of these devices. It is envisioned that the reduced need for the expensive and cumbersome equipment associated with traditional simulators will make head-coupled simulators of interest to the human factors community.
The four papers in the symposium may be divided into those that investigate the use and design of head-coupled simulators (3) and those that use head-coupled simulators to conduct research (1). The paper by Tom Bennet presents the results of a study that investigated the perception and transformation of imagery presented on a head-coupled simulator. Ron Kruk and David Runnings examined flying performance in an F-16 simulator, focusing on the effects of scene luminance on performance. The work by Loran Haworth, Nancy Bucher, Robert Hennessy, and David Runnings addressed some of the issues involved in compensating for lags in image generation systems in motion environments. The work by Max Wells and Mike Venturino attempted to answer the question of the optimum field-of-view size for head-coupled displays in operational use.