Abstract
Rapid advances in image generation technology have made it feasible to automatically generate batches of fashion design images. However, quickly and accurately extracting the corresponding garment patterns from these personalized fashion images remains a challenge. This study proposes a 2D–3D–2D pattern generation method. The approach first uses a fine-tuned diffusion model to generate high-quality garment images, then applies a large image-to-3D reconstruction model to automatically reconstruct 3D garment models from those images, and finally uses 3D reverse engineering techniques to convert the 3D garment model into the corresponding garment pattern. We generated a dress pattern as an example to validate the method. Experimental results show that the method can convert fashion images generated by the diffusion model into their corresponding 2D patterns. The average error between the 2D pattern and the 3D model was less than 5 mm, and the 3D virtual try-on result of the generated pattern matched the style of the generated image, verifying the effectiveness of the method. Compared with manual pattern drafting, the time required was significantly reduced, providing strong support for the automated coupling of fashion design and pattern production.
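
For illustration only, the three-stage 2D–3D–2D pipeline summarized above can be sketched as follows. This is a minimal sketch, not the authors' implementation: the checkpoint path is hypothetical, and the 3D reconstruction and pattern-flattening stages are shown as placeholder functions to be filled with whichever image-to-3D model and reverse-engineering tooling are used.

```python
# Illustrative sketch of the 2D-3D-2D pipeline described in the abstract.
# Stage names follow the paper; function bodies and paths are assumptions.

import torch
from diffusers import StableDiffusionPipeline  # assumed fine-tuned diffusion backbone


def generate_garment_image(prompt: str):
    """Stage 1 (2D): generate a garment image with a fine-tuned diffusion model."""
    pipe = StableDiffusionPipeline.from_pretrained(
        "path/to/fine-tuned-garment-checkpoint",  # hypothetical checkpoint
        torch_dtype=torch.float16,
    ).to("cuda")
    return pipe(prompt).images[0]


def reconstruct_3d(image):
    """Stage 2 (3D): single-image 3D reconstruction of the garment.
    Placeholder for a large image-to-3D reconstruction model."""
    raise NotImplementedError("plug in an image-to-3D reconstruction model here")


def flatten_to_pattern(mesh):
    """Stage 3 (2D): reverse-engineer the 3D garment mesh into a 2D sewing pattern
    via surface flattening. Placeholder for CAD / reverse-engineering tooling."""
    raise NotImplementedError("plug in mesh-flattening / pattern-drafting tooling here")


if __name__ == "__main__":
    image = generate_garment_image("an A-line dress with short sleeves")
    mesh = reconstruct_3d(image)
    pattern = flatten_to_pattern(mesh)
```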
