Abstract
Learning visual locations is a critical skill that allows people to understand and act in their environment, especially as part of a person’s situation awareness during complex tasks such as air-traffic control. Although a number of studies have examined visual location learning from a theoretical and/or empirical perspective, the goal of developing a rigorous computational model of this process remains elusive. In this work, we develop an initial model focused on acquiring, rehearsing, and recalling the locations of static objects in the visual field. We build this model on the foundation of the ACT-R cognitive architecture, specifically using its memory and vision components to better understand and specify the processes of visual scanning and location memory. We then demonstrate how this model accounts for human behavior and performance in two recent empirical studies. The resulting model can serve as the basis for future efforts to build more rigorous, general computational models of more complex tasks that involve visual location learning as a central component.