Abstract
Traditional music emotion recognition (MER) suffers from a lack of contextual information, inaccurate recognition of musical emotions, and difficulty in modeling nonlinear relationships. This article first used a long short-term memory (LSTM) network to capture the global information and contextual relationships of music. A deep convolutional neural network (DCNN) was then employed to process the sequence data and capture global dependencies, improving the accuracy of MER. Finally, an MER model was constructed based on the DCNN to recognize and classify musical emotions. By adjusting the training-related hyperparameters, this article examined how different parameter values affect the number of training iterations. The optimal values for learning rate
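The hybrid architecture described above (an LSTM stage for temporal context followed by a convolutional stage and a classifier) can be sketched roughly as follows. This is a minimal illustrative sketch in PyTorch: the layer sizes, the 40-dimensional frame features, and the 4-class emotion output are assumptions for illustration, not details taken from the article.

```python
import torch
import torch.nn as nn

class MERModel(nn.Module):
    """Hypothetical LSTM + 1-D CNN music emotion classifier (illustrative only).

    All dimensions below are assumptions, not values from the article.
    """
    def __init__(self, n_features=40, hidden=64, n_classes=4):
        super().__init__()
        # LSTM captures contextual relationships across the frame sequence
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        # 1-D convolutions over the LSTM outputs (stand-in for the DCNN stage)
        self.conv = nn.Sequential(
            nn.Conv1d(hidden, 128, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time to a fixed-size vector
        )
        self.fc = nn.Linear(128, n_classes)  # emotion class logits

    def forward(self, x):                      # x: (batch, time, n_features)
        out, _ = self.lstm(x)                  # (batch, time, hidden)
        out = self.conv(out.transpose(1, 2))   # (batch, 128, 1)
        return self.fc(out.squeeze(-1))        # (batch, n_classes)

model = MERModel()
logits = model(torch.randn(2, 100, 40))  # 2 clips, 100 frames, 40 features each
print(tuple(logits.shape))               # (2, 4): one logit per emotion class
```

In practice the frame features would come from a spectral front end (e.g. mel-spectrogram frames), and the logits would be trained with a cross-entropy loss against emotion labels.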
