Abstract
As ChatGPT continues to impress with its ability to generate human-like text, its capabilities in emotion recognition remain an open question. Unlike previous research comparing ChatGPT and humans on tasks with objective answers, we explored an affective domain where no single correct answer exists: emotional ratings of images, a task requiring visual-perceptual analysis of complex input to arrive at an affective judgment. Using images from the MATTER database, previously rated by humans on the dimensions of valence and arousal, we prompted ChatGPT-4 to provide the same ratings. The results revealed that ChatGPT rated images as less positive and less arousing than humans on average, particularly images categorized as ‘mirthful,’ ‘fearful,’ and ‘disgusting.’ These findings suggest that while ChatGPT can process affective information, its responses reflect an analytical rather than experiential framework, diverging from human interpretations.