Abstract
This study addresses the digital-preservation needs of ancient Chinese mural art by constructing an intelligent recognition system based on deep transfer learning and multi-modal feature fusion. To cope with scarce and uneven-quality mural image data, the system adopts a two-stage progressive training strategy: a deep convolutional neural network is first pre-trained on a large general-purpose image dataset to learn basic visual feature representations, and is then fine-tuned on a specialized mural dataset using a domain-adaptation method. In the feature extraction stage, the system combines the high-level semantic features of deep neural networks with the low-level artistic features of traditional image processing: the deep features are extracted by an improved residual network architecture, while the artistic features combine color-distribution features in the HSV color space with an improved rotation-invariant LBP texture descriptor. A feature fusion module based on the attention mechanism dynamically weights and combines these multi-source features through learnable feature weights. Experiments show that, compared with mainstream deep network models, the system achieves a significant improvement in mural era (dynasty) recognition accuracy, and it generalizes well in cross-dataset tests, providing reliable technical support and new methodological guidance for intelligent cultural-heritage research.
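The abstract does not give the fusion module's exact formulation, but the idea of weighting multi-source features with learnable, softmax-normalized attention scores can be sketched as follows. This is a hypothetical illustration, not the paper's implementation; the function name `attention_fuse` and the scalar-score design are assumptions.

```python
import numpy as np

def attention_fuse(deep_feat, art_feat, w_deep, w_art):
    """Fuse deep and artistic feature vectors with attention weights.

    Hypothetical sketch: each feature source gets a scalar relevance
    score from a (notionally learnable) projection vector; the scores
    are softmax-normalized into dynamic, input-dependent weights that
    scale each stream before concatenation.
    """
    # Scalar attention score per source (w_deep / w_art would be
    # learned jointly with the network in the actual system)
    scores = np.array([float(deep_feat @ w_deep),
                       float(art_feat @ w_art)])
    # Softmax over the two scores (shifted for numerical stability)
    scores = scores - scores.max()
    weights = np.exp(scores) / np.exp(scores).sum()
    # Weighted concatenation of the two feature streams
    fused = np.concatenate([weights[0] * deep_feat,
                            weights[1] * art_feat])
    return fused, weights
```

In practice the deep features would come from the residual backbone and the artistic features from HSV color histograms plus rotation-invariant LBP descriptors; training would backpropagate through the attention scores so the weighting adapts per image.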
