Abstract
Virtual reality (VR) provides an immersive medium for information processing. Gestures offer a natural and intuitive means of translating cognitive processes into physical actions, making them a promising interaction method for supporting information processing in VR. However, how gestures map abstract cognitive tasks onto physical interactions remains underexplored. To address this gap, this study investigated the mental models by which end-users design gestural interactions to support information processing in VR. Using a gesture elicitation method, 8 participants created 445 gestures representing 19 cognitive processes in Bloom’s taxonomy. Five categories of mental models were identified: linguistic-symbolic, spatial-manipulative, metaphoric, social-conventional, and traditional graphical user interface-derived. Furthermore, the study found that the category of cognitive process affected mental model adoption, with higher-order cognitive processes prompting more human-like interactions. These findings suggest the potential for developing more natural interactions for cognitive tasks and offer guidance for designing gestural interactions in VR.