Abstract
As the technology supporting interactive systems advances, leveraging multiple sensory channels becomes increasingly feasible. By engaging multiple sensory processors, substantial gains in the information management capacity of the human-computer system should be realized, and users with sensory losses can be better accommodated. The question then becomes: when multimodal information is presented, how should these multiple sources of information be coordinated, particularly when two or more tasks are performed simultaneously? While current design theories, developed primarily for unimodal interaction, can be drawn on, additional research is required to fully guide multimodal multitask interaction design. The current study seeks to extend unimodal design theories to multimodal systems and identifies some notable differences between unimodal and multimodal multitask interaction.