Abstract
Interest in the use of sound as a means of information display in human-machine systems has surged in recent years. While researchers have begun to address issues surrounding good auditory display design as well as potential domains of application, little is known about the cognitive processes involved in interpreting auditory displays. In multi-tasking scenarios, dividing concurrent information display across modalities (e.g., vision and audition) may allow the human operator to receive (i.e., to sense and perceive) more information, yet higher-level conflicts in the encoding and representation of information may persist. Surprisingly few studies to date have examined auditory information display in dual-task scenarios. This study examined the flexibility of encoding of information and processing code conflicts in a dual-task paradigm with auditory graphs—a specific class of auditory displays that represent quantitative information with sound. Results showed that 1) patterns of dual-task interference were task dependent, and 2) a verbal interference task was relatively more disruptive to auditory graph performance than a visuospatial interference task, particularly for point estimation.