Abstract
Portrait painting, as a traditional art form, has attracted increasing research interest in automating its creation as computer technology advances. Current research on computer-generated portrait painting focuses primarily on deep learning models that generate high-quality artistic works from input images. However, despite the application of advanced techniques such as generative adversarial networks (GANs), existing methods still fall short in preserving image details and features. To address this, this study proposes a portrait design model based on dynamic receptive fields (DyRF). Its core innovation is the introduction of multi-scale convolutional modules and an adaptive receptive field mechanism, enabling dynamic perception of features and precise reconstruction of details across different image regions, and thereby significantly improving detail retention and artistic expressiveness. Experimental results show that DyRF significantly outperforms comparison models on multiple key metrics: it achieves a Fréchet inception distance (FID) of 12.21, approximately 30% lower than DyMo (17.65) and roughly half that of FSGAN (23.66), and a feature similarity index (FSIM) of 0.85, substantially higher than SCA-GAN (0.71) and FSGAN (0.68), indicating superior image realism and feature consistency. These results validate the effectiveness of the DyRF model in detail reconstruction and feature retention, bringing a new breakthrough to the field of computer-generated art.
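The abstract does not specify how the adaptive receptive field mechanism is implemented. As an illustrative sketch only (not the authors' method), the general idea of fusing multi-scale branches with learned or data-dependent attention weights can be shown with a minimal NumPy example, where average filters of different sizes stand in for multi-scale convolution branches and a softmax over each branch's global response selects the effective receptive field per input; all function names here are hypothetical.

```python
import numpy as np

def box_filter(x, k):
    """Average filter of odd size k with edge padding -- a stand-in for a
    k x k convolution branch with a fixed receptive field."""
    pad = k // 2
    xp = np.pad(x, pad, mode="edge")
    out = np.zeros_like(x, dtype=float)
    h, w = x.shape
    for i in range(h):
        for j in range(w):
            out[i, j] = xp[i:i + k, j:j + k].mean()
    return out

def dynamic_receptive_field(x, kernel_sizes=(3, 5, 7)):
    """Illustrative sketch: compute multi-scale branches, derive softmax
    attention weights from each branch's global average response, and
    return the weighted fusion plus the weights (selective-kernel-style
    gating; an assumption, not the paper's exact mechanism)."""
    branches = [box_filter(x, k) for k in kernel_sizes]
    scores = np.array([b.mean() for b in branches])
    weights = np.exp(scores - scores.max())   # numerically stable softmax
    weights /= weights.sum()
    fused = sum(w * b for w, b in zip(weights, branches))
    return fused, weights
```

In a trained model the gating would be produced by learned layers rather than raw branch means, so that different image regions (e.g., fine facial detail vs. smooth background) receive different effective receptive fields.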