Abstract
University of Michigan

Operators in many complex event-driven domains face the challenge of data overload. Two major contributors to this problem are an over-reliance in display design on a single sensory channel (vision, in most domains) and the fact that the presentation of data and information does not vary to account for changing task contexts and operator states. These problems call for the introduction of context-sensitive multimodal displays. There is a substantial and growing body of research on multisensory information processing and presentation. However, little guidance is available for the design of flexible displays that take context into consideration. Two important research questions are: 1) who should be in control of the adaptation of information presentation – the user, the system, or perhaps both? – and 2) what factors should drive display adaptation? This article reviews two approaches to context-sensitive display design: adaptive and adaptable. The benefits and disadvantages of each approach are discussed, and a recently developed hybrid adaptive-adaptable multimodal interface is described. To our knowledge, this is the first display design that combines both approaches to context-sensitivity and employs a wide range of drivers, ranging from environmental conditions to operator states and performance.