Abstract
As research and applications of artificial intelligence generated content (AIGC) in fashion design continue to advance, existing intelligent garment generation models commonly face challenges such as high computational cost, monotonous texture generation, and training instability, which often lead to blurred details when generating diverse garment forms (e.g., wide sleeves in Hanfu, pleats in dresses). To address these issues, this paper proposes an intelligent garment generation method based on progressive training and a contrastive learning strategy, which we call PT-CLGAN. First, a lightweight progressive generator is designed to reduce the computational load while preserving high-resolution detail in the generated garments. Second, a contrastive learning strategy is used to sharpen the decision boundary of the discriminator and to capture feature contrasts in the latent space, thereby improving the training stability of PT-CLGAN. Finally, to verify the effectiveness of the proposed method, experiments are conducted on the GarmentDesign, ExpandFashion, and DeepFashion2 datasets; PSNR, LPIPS, and MS-SSIM are used as objective evaluation metrics for the generated garment images, and the proposed method is compared with existing methods such as the generative adversarial network (GAN), DCGAN, and TexGAN. The comprehensive experimental evaluation confirms the superiority of the proposed approach over traditional generative models, with significant improvements in garment synthesis quality. The approach enables designers to generate previously unexplored combinations of pattern, cut, and color, thereby transcending the limits of human experience. For instance, GANs can effectively integrate traditional embroidery techniques with contemporary minimalism to produce distinctive visual vocabularies.
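The abstract does not specify the exact form of the contrastive objective used to sharpen the discriminator; a common choice for latent-space feature contrast is an InfoNCE-style loss, where matching feature pairs (e.g., two views of the same garment) are pulled together and all other pairs in the batch act as negatives. The sketch below is a minimal NumPy illustration of that idea, not the paper's actual loss; the function name, batch layout, and temperature value are assumptions for illustration.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Minimal InfoNCE-style contrastive loss over a batch of feature vectors.

    Row i of `positives` is the positive pair for row i of `anchors`;
    every other row in the batch serves as a negative.
    (Illustrative sketch only -- not the loss used in PT-CLGAN.)
    """
    # L2-normalise so the dot product is cosine similarity
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                 # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)    # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # cross-entropy with the diagonal (matching pairs) as targets
    return -np.mean(np.diag(log_probs))
```

When anchors and positives are identical, the loss is near zero; mismatched pairs drive it toward log N, which is the intuition behind using the contrast to stabilise discriminator training.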
