Abstract
Dance is an art of human movement whose primary medium of expression is trained, professional motion. As computational technologies become deeply integrated into education, intelligent recognition and analysis of dance motion have become a key direction for the digital transformation of dance education. To address three core challenges of traditional online teaching (the difficulty of capturing precise movement data from video alone, the inability to localize key defects in movement technique, and the lack of interactive feedback and quantitative evaluation of movement quality), this study proposes POSE-LSTM, a computational model that integrates action recognition algorithms. The model performs pose estimation with the AlphaPose algorithm, transforms the resulting spatio-temporal information into action feature vectors, and feeds the normalized features into a pre-trained LSTM neural network for classification. Experiments on the iDance dataset show that the algorithm achieves an accuracy of 0.9825 and a recall of 0.9802 on dance movements. Verification in teaching practice shows that with the POSE-LSTM model, action-scoring accuracy rises to 98%, average teaching time per movement falls by 23%, and instructor-directed practice time per movement falls by 39%. The study concludes by discussing development trends for computational methods in dance education and the prospects of motion recognition systems in engineering applications.
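The abstract describes a pipeline of normalized per-frame pose features fed into an LSTM classifier, but does not give the paper's exact feature construction or network configuration. The following is a minimal NumPy sketch of that pipeline shape only: per-frame keypoint vectors (assumed here to be 17 AlphaPose-style keypoints, i.e. 34 values per frame) are normalized and passed through a single-layer LSTM whose final hidden state is mapped to softmax class probabilities. All dimensions, parameter shapes, and function names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def normalize_keypoints(frames):
    # Center each frame's feature vector and scale to unit variance.
    # Stand-in for the paper's (unspecified) normalization step.
    frames = frames - frames.mean(axis=1, keepdims=True)
    return frames / (frames.std(axis=1, keepdims=True) + 1e-8)

def lstm_classify(frames, params):
    # Run a single-layer LSTM over the frame sequence and return
    # softmax class probabilities from the final hidden state.
    Wx, Wh, b, Wo, bo = params
    hidden = Wh.shape[1]
    h = np.zeros(hidden)
    c = np.zeros(hidden)
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    for x in frames:
        z = Wx @ x + Wh @ h + b            # stacked gates: i, f, o, g
        i, f, o, g = np.split(z, 4)
        c = sigmoid(f) * c + sigmoid(i) * np.tanh(g)
        h = sigmoid(o) * np.tanh(c)
    logits = Wo @ h + bo
    e = np.exp(logits - logits.max())      # numerically stable softmax
    return e / e.sum()

# Hypothetical dimensions: 34-dim pose features, 16 hidden units, 5 classes.
rng = np.random.default_rng(0)
D, H, C = 34, 16, 5
params = (rng.normal(size=(4 * H, D)) * 0.1,   # input weights Wx
          rng.normal(size=(4 * H, H)) * 0.1,   # recurrent weights Wh
          np.zeros(4 * H),                      # gate biases b
          rng.normal(size=(C, H)) * 0.1,        # output weights Wo
          np.zeros(C))                          # output biases bo
seq = normalize_keypoints(rng.normal(size=(30, D)))  # 30 frames of pose data
probs = lstm_classify(seq, params)                    # one probability per class
```

In a trained system the parameters would come from fitting on labeled sequences (e.g. the iDance dataset); here they are random, so the sketch only demonstrates the data flow from pose features to a class distribution.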
