Abstract
Art appreciation relies heavily on imagery as its primary medium of information dissemination, enabling teachers and students alike to communicate a gamut of emotions through visuals. Nevertheless, the abstract and subjective nature of emotion, coupled with the intricate and nonlinear relationship between image characteristics and image sentiment, presents a formidable challenge for image emotion classification. This study therefore proposes a novel image emotion classification model built on depthwise separable convolution. First, the RGB features of the image are extracted through gamma correction, brightness adjustment, and size scaling, while color is encoded in the YCrCb space to optimize color video signal transmission. Second, deep semantic features of the image are extracted as multi-scale fusion features using an improved FPN model, with pre-trained ResNet101 parameters transferred into the model. Finally, the emotional semantics of art images are classified by combining a convolutional block attention module (CBAM) with depthwise separable convolution. Experimental results reveal that the 33 image evaluation features obtained through training correlate strongly with the expressed emotional semantics, and the model's predictions align with the true polarity of the samples, reaching an accuracy of 93.31% in emotion classification.
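The preprocessing step described above (gamma correction, brightness adjustment, size scaling, and RGB-to-YCrCb conversion) can be sketched generically as follows. This is not the paper's code; the function name, default parameters, and nearest-neighbour resizing are illustrative assumptions, and the YCrCb conversion uses the standard ITU-R BT.601 coefficients.

```python
import numpy as np

def preprocess(image, gamma=2.2, brightness=1.0, size=(224, 224)):
    """Hypothetical preprocessing sketch: gamma correction, brightness
    adjustment, nearest-neighbour resize, and RGB -> YCrCb conversion."""
    img = image.astype(np.float32) / 255.0
    # gamma correction followed by a simple multiplicative brightness adjustment
    img = np.clip(img ** (1.0 / gamma) * brightness, 0.0, 1.0)
    # nearest-neighbour size scaling to the target resolution
    h, w = img.shape[:2]
    rows = np.arange(size[0]) * h // size[0]
    cols = np.arange(size[1]) * w // size[1]
    img = img[rows][:, cols]
    # RGB -> YCrCb (ITU-R BT.601 coefficients, chroma centred at 0.5)
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cr = 0.5 + 0.713 * (r - y)
    cb = 0.5 + 0.564 * (b - y)
    return np.stack([y, cr, cb], axis=-1)
```

In practice a library routine such as OpenCV's color conversion and interpolated resizing would replace the hand-rolled steps; the sketch only makes the order of operations concrete.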
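The depthwise separable convolution at the core of the classifier factorizes a standard convolution into a per-channel (depthwise) spatial filter followed by a 1x1 (pointwise) channel-mixing step, cutting parameters from k²·C_in·C_out to k²·C_in + C_in·C_out. A minimal NumPy sketch, not the paper's implementation (stride 1, no padding, illustrative names):

```python
import numpy as np

def depthwise_separable_conv(x, depthwise_k, pointwise_k):
    """x: (H, W, C_in); depthwise_k: (k, k, C_in); pointwise_k: (C_in, C_out).
    'Valid' padding, stride 1 -- a teaching sketch, not an optimized kernel."""
    H, W, C = x.shape
    k = depthwise_k.shape[0]
    out_h, out_w = H - k + 1, W - k + 1
    # depthwise stage: each input channel is convolved with its own k x k filter
    dw = np.zeros((out_h, out_w, C))
    for i in range(out_h):
        for j in range(out_w):
            patch = x[i:i + k, j:j + k, :]            # (k, k, C)
            dw[i, j] = np.sum(patch * depthwise_k, axis=(0, 1))
    # pointwise stage: a 1x1 convolution mixes information across channels
    return dw @ pointwise_k                            # (out_h, out_w, C_out)
```

For example, with k=3, C_in=3, C_out=16 a standard convolution needs 3·3·3·16 = 432 weights, while the separable form needs only 3·3·3 + 3·16 = 75, which is the efficiency the abstract's model exploits.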
