Abstract
To examine the impact of music as a cross-modal prime on facial emotion recognition in autistic children, this study compares the priming effect of music with that of faces (an intramodal prime) and nonverbal sounds (another cross-modal prime). Response time and accuracy in recognizing facial emotions (happy and sad) were compared between 21 neurotypical children and 17 autistic children under the various priming stimuli. The analysis revealed that autistic children recognized facial emotional expressions less accurately and responded more slowly than neurotypical children. Unlike the other two primes (facial expressions and nonverbal sounds), music as a cross-modal prime yielded slightly higher response accuracy in emotionally congruent conditions than in emotionally incongruent conditions among autistic children. Furthermore, reaction time was significantly longer under emotionally congruent conditions than under incongruent conditions, suggesting that autistic children are more inclined to allocate additional time to improve the accuracy of their judgments when the prime and the target are emotionally congruent. Music intervention therefore has considerable potential for helping autistic children empathize with facial expressions by pairing them with emotionally congruent music, thereby improving their facial emotion recognition abilities.