Abstract
In many data-rich and safety-critical environments, multimodal displays (i.e., displays that present information in the visual, auditory, and tactile channels) are employed to support operators in dividing their attention across numerous tasks and sources of information. However, the limitations of this approach are not well understood. Specifically, most research on the effectiveness of multimodal interfaces has examined the processing of only two concurrent signals in different modalities, mainly in vision and hearing. Anecdotal evidence and one empirical study (Hecht & Reiner, 2009) suggest that a person will likely fail to notice one or more signals when presented simultaneously with three or more unrelated multimodal stimuli. Therefore, the goals of this study were to (1) determine the extent to which people can notice and process three unrelated concurrent signals in different sensory channels and (2) examine whether and how this ability is modulated by age. Adults aged 65 years and older were of particular interest because individuals in this category represent the fastest growing segment of the U.S. population (U.S. Census Bureau, 2008), are known to suffer from various declines in sensory abilities (e.g., Li & Lindenberger, 2002; Stuart-Hamilton, 2012), and experience difficulties with divided attention in general (e.g., McDowd, Vercruyssen & Birren, 1991; Somberg & Salthouse, 1982). Twelve younger (mean age: 23 years) and twelve older (mean age: 68 years) adults were presented with a series of singles, pairs, and triplets of visual, auditory, and tactile stimuli and asked to verbally indicate the modality of the cue(s) they detected. The duration of each signal combination was one second. Prior to the task, crossmodal matching was performed to ensure that, subjectively, the stimuli were of equal intensity. Overall, the error rate of the older adult group was higher than that of the younger group (3.3% vs. 0.98%).
In particular, older adults often failed to notice the tactile cue when all three cues were combined. They also sometimes falsely reported the presence of a visual cue when presented with a combination of auditory and tactile cues. The findings from this work will be discussed in terms of underlying sensory and perceptual mechanisms. They will help inform the design of multimodal displays and ensure their usefulness across different age groups.
