Abstract
Objective
The present work investigates how task modality (visual vs. auditory), task load level (low vs. high), and task load type (target-distractor similarity, display rate) influence dual-task interference in virtual reality (VR).
Background
Dual-task interference is influenced by various factors including perceptual modality, where tasks that share the same modality may yield larger performance decrements than tasks that do not. Task load (i.e., level and type) can also reduce performance in one or both tasks. However, the interaction between these factors in immersive environments like VR is still unclear.
Method
Across two experiments, participants performed (a) a visual tracking task, (b) a visual or auditory detection task, or (c) both tasks concurrently. In Experiment 1, visual tracking load was manipulated by increasing target-distractor similarity, while detection load was manipulated by increasing display rate. In Experiment 2, detection load was manipulated by increasing target-distractor similarity.
Results
In Experiment 1, higher detection task loads induced greater dual-task costs (DTC) in the detection task regardless of task modality, whereas tracking task DTC was not influenced by higher task loads. In Experiment 2, higher detection task loads induced greater DTC in the detection task only when it was presented visually.
Conclusion
The findings suggest that tasks presented in the same modality may experience greater dual-task interference in one or both tasks depending on the task load level and type.
Application
These findings can inform the design of multimodal interfaces in complex multitasking environments like military operations or emergency response where minimizing dual-task interference at varying workloads is crucial.
