Abstract
Our ability to visualize things outside our field of view often feels similar to visual perception. However, while perception is grounded in real-world sensory input, imagery may rely on information stored in long-term memory. This contrast raises key questions about the nature of these processes: Do visual mental imagery and visual perception depend on distinct neural mechanisms, or do they share access to a common representational space that encodes semantic content capable of supporting both internally generated images and externally driven perception? To what extent are individual differences in imagery vividness related to behavioral performance? To answer these questions, we developed a computerized battery eliciting visual perception and visual mental imagery across five semantic domains: Colors, Faces, Letters, Maps, and Shapes. Seventy-five typical imagers (based on their scores on the Vividness of Visual Imagery Questionnaire, VVIQ; 49 in-person, 26 online) completed this battery. On each trial, we recorded two performance measures, accuracy and response times, and two subjective measures: vividness on imagery trials and confidence on perception trials. We found performance differences between imagery and perception in the Colors, Faces, and Maps domains, but not in the Letters or Shapes domains, suggesting the existence of domain-specific processes that differentiate imagery from perception. Additionally, logistic regression models predicted performance in a domain-specific manner, further indicating that distinct mechanisms may support imagery and perception depending on the semantic content. Furthermore, unsupervised clustering analysis revealed a latent space of four clusters, fewer than the theoretically possible 10 (2 modalities × 5 domains). This reduction suggests shared features across domains and modalities, pointing to a modality-independent semantic code, likely stored in higher-order cortical regions, that supports common high-level conceptual representations engaged by both imagery and perception. Trial-by-trial vividness ratings were stronger predictors of imagery task performance than the off-task VVIQ, indicating that in-the-moment vividness more accurately reflects individual imagery ability.
