Abstract
Accurately capturing the three-dimensional configuration of fabric drape is essential for applications such as garment computer-aided design (CAD), digital clothing simulation, and material property prediction. However, current approaches for acquiring and processing 3D drape data are often costly, labor-intensive, and poorly standardized, limiting scalability and integration with data-driven workflows. To address this gap, we propose a fully automated pipeline that standardizes 3D fabric meshes reconstructed from multi-view images without manual intervention or additional hardware. Our approach achieves consistent mesh representations across diverse samples by enforcing boundary and topological constraints through a combination of geometric projection and horizontal adjustment, resulting in a Hausdorff distance approaching zero. Experiments on a dataset of 92 fabrics, with five representative samples selected for demonstration, show that the standardized meshes retain structural features with a per-triangle area variation of less than 4.5% and exhibit smoother boundary contours. In addition, the standardized inputs improve the training efficiency and prediction accuracy of deep learning models for fabric property regression. This method provides a practical solution for improving the usability of reconstructed drape data, thereby supporting more reliable textile simulation and data-driven fabric analysis.
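The Hausdorff distance mentioned above measures the worst-case deviation between two boundary point sets, so a value near zero indicates the standardized boundary closely matches its target. As a minimal illustrative sketch (not the authors' implementation; the function name and the synthetic circular boundaries are assumptions for demonstration), the symmetric Hausdorff distance between two 2D point sets can be computed with NumPy:

```python
import numpy as np

def hausdorff_distance(A, B):
    """Symmetric Hausdorff distance between point sets A (m, d) and B (n, d)."""
    # Pairwise Euclidean distance matrix via broadcasting: shape (m, n).
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Largest nearest-neighbor distance in each direction, then the max of the two.
    return max(D.min(axis=1).max(), D.min(axis=0).max())

# Hypothetical example: a reference circular boundary vs. a slightly
# perturbed boundary, mimicking a standardized vs. target drape contour.
theta = np.linspace(0.0, 2.0 * np.pi, 200, endpoint=False)
ref = np.stack([np.cos(theta), np.sin(theta)], axis=1)
proj = ref + 1e-3 * np.random.default_rng(0).standard_normal(ref.shape)

d = hausdorff_distance(ref, proj)  # small value, on the order of the perturbation
```

A distance "approaching zero," as reported in the abstract, would correspond to `d` shrinking toward the numerical noise floor as the boundary constraints are enforced.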
