Abstract
Yarn-level reproduction of fabric appearance enables the digital generation of fabric textures, effectively replacing physical trial weaving and sampling processes and thereby reducing production labor and material waste. However, most existing methods suffer from visual distortion or high computational cost. The visual appearance of woven fabrics is influenced by multiple factors, including yarn diameter, yarn density, and weave pattern. To predict fabric appearance based on real yarns, this study proposes a weave-pattern-guided image-to-image translation framework for woven fabric appearance generation. To improve the quality of the generated fabric appearance, a multiscale group convolution (MSGC) module is introduced to capture fine-grained multiscale features. In addition, a new dataset containing diverse fabric types is constructed to provide a comprehensive basis for performance evaluation. The proposed method achieves improvements over the baseline pix2pix model, with the learned perceptual image patch similarity (LPIPS) improving by 6.02%, the structural similarity (SSIM) increasing by 0.61%, and the Fréchet inception distance (FID) improving by 30.86%. These improvements indicate that the generated appearances are visually more plausible, with enhanced perceptual fidelity and structural consistency. Experimental results demonstrate that the proposed method can effectively generate the appearance of specific woven fabrics according to given weave patterns and exhibits adaptability across different weave structures.
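The abstract names a multiscale group convolution (MSGC) module for capturing fine-grained multiscale features inside a pix2pix-style generator, but gives no implementation details. The sketch below is one plausible reading of such a block, assuming the channels are split into groups that each receive a convolution at a different kernel size before being fused; the kernel sizes, group count, residual fusion, and all names are illustrative assumptions, not the authors' exact design.

```python
# Minimal, hypothetical sketch of a multiscale group convolution (MSGC) block.
# Assumptions: channels split evenly across scales, 1x1 fusion, residual connection.
import torch
import torch.nn as nn


class MSGC(nn.Module):
    """Illustrative multiscale group convolution block (not the paper's exact spec)."""

    def __init__(self, channels: int, kernel_sizes=(1, 3, 5, 7)):
        super().__init__()
        assert channels % len(kernel_sizes) == 0, "channels must split evenly across scales"
        group_ch = channels // len(kernel_sizes)
        # One branch per scale; padding keeps the spatial size unchanged.
        self.branches = nn.ModuleList(
            nn.Conv2d(group_ch, group_ch, k, padding=k // 2) for k in kernel_sizes
        )
        # 1x1 convolution fuses the concatenated multiscale features.
        self.fuse = nn.Conv2d(channels, channels, kernel_size=1)
        self.act = nn.ReLU(inplace=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Split the feature map channel-wise into one chunk per scale.
        chunks = torch.chunk(x, len(self.branches), dim=1)
        multiscale = [branch(c) for branch, c in zip(self.branches, chunks)]
        out = self.fuse(torch.cat(multiscale, dim=1))
        return self.act(out + x)  # residual connection keeps the block drop-in


# Usage: apply to a 64-channel feature map inside a pix2pix-style generator.
feats = torch.randn(1, 64, 128, 128)
print(MSGC(64)(feats).shape)  # torch.Size([1, 64, 128, 128])
```

Such a block could be inserted after existing encoder or decoder stages of a pix2pix generator without changing feature-map dimensions, which is why the residual, size-preserving form is used here.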
