Abstract
Digital Chinese landscape painting can be enhanced through multisensory fusion, yet culturally adapted cross-modal mappings and real-time adaptation remain under-tested. This laboratory, between-subjects virtual reality experiment (N = 80) compared a visual-only condition with two multisensory conditions delivering synchronized auditory, haptic, and olfactory cues, whose cue mappings were either culturally congruent or deliberately incongruent. Outcomes included standardized immersion/presence measures and affect ratings; behavioral logs of interaction and exploration (including time-on-task and coverage); and physiological indices of arousal derived from electrodermal activity and photoplethysmography-based pulse metrics, assessed immediately after exposure and at one-week and one-month follow-ups. Culturally congruent multisensory cueing produced higher immersion/presence and engagement than both the visual-only presentation and the incongruent multisensory format, alongside more frequent and diverse interaction behavior, longer time-on-task, and broader exploration of the virtual scenes. Self-reported emotion showed higher positive valence and arousal under congruent cueing, with convergent physiological arousal patterns, and these differences remained observable at follow-up assessments.