Abstract
To meet the fashion industry’s growing demand for intelligent tools that can manipulate and retexture fashion items while preserving structural integrity, a setting where existing methods often lack photorealistic quality and intuitive control, we introduce iRetexturing, a framework that integrates diffusion models with geometric priors. iRetexturing employs a three-stage pipeline: high-resolution preprocessing with masked super-resolution and semantic segmentation; quadripartite texture synthesis using grid-based tiling and boundary-aware regeneration; and ControlNet-guided diffusion under dual spatial constraints (Canny edges and depth maps). The pipeline further incorporates parametric texture modulation, partial repainting with seam refinement, and real-time adaptability across diverse materials. In evaluations on 4400 fashion images, iRetexturing outperforms state-of-the-art methods including DiffuseIT, achieving a learned perceptual image patch similarity (LPIPS) of 0.1385 and a structural similarity (SSIM) of 0.8323, versus DiffuseIT’s 0.1618 and 0.8074. Although DiffuseIT attains a lower Fréchet inception distance (54.96 versus 75.25), the LPIPS and SSIM gains highlight iRetexturing’s strength in fine-grained texture replacement and high-fidelity textile design. By combining diffusion-based generation with geometric priors, iRetexturing enables precise manipulation of surface characteristics, bridging the gap between conceptual prototypes and production-ready assets and streamlining the design-to-production process for the fashion industry.
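The pipeline’s third stage, ControlNet-guided diffusion under dual spatial constraints, can be illustrated concretely. The sketch below uses the Hugging Face diffusers multi-ControlNet API; the checkpoint names, prompt, and conditioning scales are illustrative assumptions rather than the paper’s released configuration.

```python
# Minimal sketch of dual-constraint ControlNet guidance (Canny + depth).
# Checkpoints, prompt, and scales are assumptions, not the paper's exact setup.
import cv2
import numpy as np
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# One ControlNet per spatial constraint: edge structure and depth/drape.
controlnets = [
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16),
    ControlNetModel.from_pretrained("lllyasviel/sd-controlnet-depth", torch_dtype=torch.float16),
]
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5", controlnet=controlnets, torch_dtype=torch.float16
).to("cuda")

garment = np.array(Image.open("garment.png").convert("RGB"))

# Canny edge map anchors silhouette and seam lines.
edges = cv2.Canny(garment, 100, 200)
canny_image = Image.fromarray(np.stack([edges] * 3, axis=-1))

# Depth map (assumed precomputed, e.g. by a monocular depth estimator)
# preserves drape and fold geometry.
depth_image = Image.open("garment_depth.png").convert("RGB")

result = pipe(
    "a dress retextured with woven tweed fabric",
    image=[canny_image, depth_image],
    controlnet_conditioning_scale=[1.0, 0.7],  # relative edge vs. depth influence
    num_inference_steps=30,
).images[0]
result.save("retextured.png")
```

In this sketch, the edge branch constrains silhouette while the depth branch preserves drape, so the text prompt can drive the texture change without distorting garment geometry.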
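The reported LPIPS and SSIM figures correspond to standard full-reference metrics. Below is a hedged sketch of how such scores are typically computed, assuming the lpips and scikit-image packages; the paper’s exact evaluation code is not shown here.

```python
# Hedged sketch: computing LPIPS and SSIM for a generated/reference pair.
# Package choices (lpips, scikit-image) are assumptions about tooling.
import lpips
import numpy as np
import torch
from skimage.metrics import structural_similarity

lpips_fn = lpips.LPIPS(net="alex")  # AlexNet backbone, a common default

def evaluate_pair(generated: np.ndarray, reference: np.ndarray):
    """Inputs are uint8 HxWx3 images; lower LPIPS and higher SSIM are better."""
    def to_tensor(img: np.ndarray) -> torch.Tensor:
        # LPIPS expects NCHW tensors scaled to [-1, 1].
        t = torch.from_numpy(img).permute(2, 0, 1).float() / 127.5 - 1.0
        return t.unsqueeze(0)

    lpips_score = lpips_fn(to_tensor(generated), to_tensor(reference)).item()
    ssim_score = structural_similarity(generated, reference, channel_axis=-1, data_range=255)
    return lpips_score, ssim_score
```

Averaged over a test set such as the 4400 images used here, these per-pair scores yield the kind of comparison reported above (e.g., LPIPS 0.1385 versus 0.1618).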
