Abstract
In recent years, the rapid rise of technologies such as the Internet of Things (IoT) and Artificial Intelligence (AI) has transformed numerous domains, particularly smart homes. As people experience greater material comfort, they increasingly seek deeper, more emotionally intelligent ways to interact with technology. Music, rich in emotional content, serves as a powerful medium for interpersonal communication and is increasingly regarded as a natural channel for intelligent human-computer interaction. However, traditional music emotion recognition techniques suffer from low recognition accuracy and high computational cost. To address these limitations, we propose an efficient deep learning-based music emotion recognition system that integrates generative adversarial networks (GANs) within an IoT framework. The system employs a convolutional neural network (CNN) to extract both local and global features from musical signals using Mel-frequency representations such as the mel-spectrogram. These features enhance the GAN's ability to detect complex emotional expressions in music. Experimental results demonstrate that the proposed model achieves significantly lower error rates and higher recognition accuracy than state-of-the-art methods. Specifically, it attains an accuracy of 94.06%, confirming its effectiveness and suitability for real-time, emotion-aware music recommendation in IoT applications.
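The Mel-frequency front end mentioned above maps linear frequencies onto the perceptually motivated mel scale before features reach the CNN. The paper does not give its implementation, so the following is only a minimal illustrative sketch of that standard step: the mel/Hz conversion formulas and a small triangular mel filterbank, written in pure Python (the filter count, FFT size, and sample rate are arbitrary placeholder values, not the authors' settings).

```python
import math

def hz_to_mel(hz: float) -> float:
    """Convert a frequency in Hz to the mel scale (O'Shaughnessy formula)."""
    return 2595.0 * math.log10(1.0 + hz / 700.0)

def mel_to_hz(mel: float) -> float:
    """Inverse of hz_to_mel."""
    return 700.0 * (10.0 ** (mel / 2595.0) - 1.0)

def mel_filterbank(n_filters: int = 8, n_fft: int = 512,
                   sample_rate: int = 16000) -> list[list[float]]:
    """Build triangular filters evenly spaced on the mel scale.

    Returns n_filters rows, each of length n_fft // 2 + 1, that would
    be applied to an FFT power spectrum to produce mel-band energies.
    """
    low, high = hz_to_mel(0.0), hz_to_mel(sample_rate / 2.0)
    # n_filters + 2 edge points: each filter spans three consecutive points.
    mels = [low + i * (high - low) / (n_filters + 1)
            for i in range(n_filters + 2)]
    bins = [int((n_fft + 1) * mel_to_hz(m) / sample_rate) for m in mels]
    bank = [[0.0] * (n_fft // 2 + 1) for _ in range(n_filters)]
    for i in range(n_filters):
        left, center, right = bins[i], bins[i + 1], bins[i + 2]
        for k in range(left, center):        # rising slope
            if center > left:
                bank[i][k] = (k - left) / (center - left)
        for k in range(center, right + 1):   # peak and falling slope
            if right > center:
                bank[i][k] = (right - k) / (right - center)
    return bank
```

In a full pipeline, each filter row is dotted with the frame's FFT power spectrum and the log of the result yields the mel-spectrogram frame that a CNN consumes; libraries such as librosa provide optimized equivalents of this construction.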
