Abstract
There is little guidance for designers on how to map information requirements to tactile displays. In this paper, we propose new directions for carrying out this mapping based on semantic mapping techniques used in auditory and visual displays. We discuss these techniques in relation to the design of a multimodal ground control station (GCS) for unmanned aerial vehicles (UAVs), with the goal of improving the visually dominated GCS interface. We hope that this approach will encourage the design of better, safer, and more intuitive UAV GCS interfaces and reduce the frequency of mishaps related to human error.
