Abstract
Technological progress has driven the rapid development of multimedia technology, and massive amounts of multimedia data are generated every moment. Efficient sentiment analysis algorithms help people understand and use these data, reduce production and management costs, and improve the efficiency of human-computer interaction. Extracting emotional features from multimedia information is a crucial step in capturing its semantic content, and accurately identifying emotional states in multimedia content has become an important focus of information processing. Traditional feature extraction methods rely on a single modality or representation, which limits how accurately they reveal the underlying information and leaves a significant gap between the extracted content and actual human cognition. To address this issue, a multimedia emotion representation method combining graph convolutional adversarial learning with an attention mechanism was proposed. The method built the final multimedia emotion model from three components: an emotion representation feature model, an adversarial design of multidimensional emotion labels, and attention modules for local and overall emotions. Testing of the proposed hybrid model showed that the average loss value of the multimedia emotion fusion algorithm was below 0.3 and that its recognition accuracy on video data reached 90.47%. The recognition accuracy for the neutral, angry, happy, and sad emotion labels exceeded 85%, with a highest value of 92.30%, significantly better than the comparison algorithms. In addition, the improved hybrid algorithm performed better in information representation and extraction, increasing emotional information interactivity by more than 40% while keeping the overall average time consumption below 1.5 s. The study analyzes multimedia emotional data from two dimensions, features and labels, and provides a valuable reference for emotional data mining and emotional content capture.
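To make the architecture named in the abstract concrete, the sketch below illustrates, under assumptions, how a graph convolution layer and an attention module for local and overall emotions might be combined into an emotion representation model. This is not the authors' implementation: the use of PyTorch, the tensor shapes, the layer sizes, and all class and variable names are illustrative choices, and the adversarial training of multidimensional emotion labels is only indicated in comments rather than implemented.

```python
# Illustrative sketch only; assumes PyTorch and made-up dimensions.
import torch
import torch.nn as nn
import torch.nn.functional as F


class GraphConvLayer(nn.Module):
    """One graph convolution step: H' = ReLU(A_hat @ H @ W)."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)

    def forward(self, h: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        # h: (num_nodes, in_dim); a_hat: (num_nodes, num_nodes) normalized adjacency
        return F.relu(a_hat @ self.linear(h))


class AttentionPooling(nn.Module):
    """Weights local (per-node) emotion features to form an overall representation."""

    def __init__(self, dim: int):
        super().__init__()
        self.score = nn.Linear(dim, 1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Attention weights over nodes, then a weighted sum into a single vector.
        alpha = torch.softmax(self.score(h), dim=0)  # (num_nodes, 1)
        return (alpha * h).sum(dim=0)                # (dim,)


class EmotionRepresentation(nn.Module):
    """Graph convolution + attention pooling + label classifier.

    In the paper's setting, an adversarial discriminator over the
    multidimensional emotion labels would be trained against this encoder;
    that component is omitted here.
    """

    def __init__(self, in_dim: int, hid_dim: int, num_labels: int):
        super().__init__()
        self.gc = GraphConvLayer(in_dim, hid_dim)
        self.pool = AttentionPooling(hid_dim)
        self.classifier = nn.Linear(hid_dim, num_labels)

    def forward(self, h: torch.Tensor, a_hat: torch.Tensor) -> torch.Tensor:
        z = self.pool(self.gc(h, a_hat))
        return self.classifier(z)


# Hypothetical usage: 6 feature nodes with 16-dim features, 4 emotion labels
# (neutral, angry, happy, sad).
x = torch.randn(6, 16)
a = torch.eye(6)  # placeholder normalized adjacency matrix
model = EmotionRepresentation(in_dim=16, hid_dim=32, num_labels=4)
logits = model(x, a)
```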
