Abstract
With the growing demand for personalized music, using AI technology to achieve an accurate understanding and creative transformation of musical styles has become an important research topic. This study designs a transfer learning algorithm based on a deep learning framework that automatically identifies and simulates the characteristics of different musical styles, aiming to move beyond traditional modes of music creation. By pre-training on a large-scale multi-style music library and then fine-tuning for a specific target style, effective transfer of musical style is achieved. Experimental data show that this method significantly improves the accuracy of style conversion: the generated works reach more than 92% similarity to the target style in timbre, melody, rhythm, and other dimensions while maintaining good novelty and diversity. To verify audience acceptance of the generated works, participants of different age groups and musical preferences were invited to a comparative listening experiment. The results show that, compared with music produced by non-transfer-learning models or created manually, works generated by the transfer learning algorithm received higher approval ratings, improving on the two key indicators of innovation and emotional resonance by about 23% and 16%, respectively.
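The pre-train-then-fine-tune scheme described above can be sketched in miniature. The following is an illustrative example only, not the paper's implementation: a frozen "pretrained" feature extractor (standing in for weights learned on a large multi-style corpus) is combined with a new head that alone is updated on toy "target style" data. All names, dimensions, and data here are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" feature extractor: weights that would have been
# learned on a large multi-style music corpus (random stand-ins here).
# They stay frozen during fine-tuning, as in the pre-train/fine-tune scheme.
W_pre = rng.normal(size=(16, 8))

def features(x):
    # Frozen pretrained layer: map raw inputs to style features.
    return np.tanh(x @ W_pre)

# Toy "target style" data: 64 samples, 16-dim inputs, scalar targets.
X = rng.normal(size=(64, 16))
y = rng.normal(size=(64, 1))

# New head trained from scratch on the target style -- the only part updated.
W_head = np.zeros((8, 1))

def mse(W):
    return float(np.mean((features(X) @ W - y) ** 2))

loss_before = mse(W_head)
lr = 0.05
for _ in range(200):
    F = features(X)                              # frozen features
    grad = 2 * F.T @ (F @ W_head - y) / len(X)   # gradient of MSE w.r.t. head
    W_head -= lr * grad                          # update head only
loss_after = mse(W_head)
print(loss_before, loss_after)
```

In a real system the extractor would be a deep generative music model and fine-tuning might also unfreeze some upper layers, but the division of labor is the same: general style knowledge is reused, and only a small target-specific portion is retrained.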
