Abstract
Clothing simulation, particularly the modeling of clothing deformation on the human body, has long been a significant research focus. State-of-the-art methods typically apply neural architectures in a generic manner, without adapting them to the characteristics of garment deformation, and therefore require additional post-processing to refine their outputs. To overcome this limitation, this paper introduces a transformer-based approach combined with graph neural networks to predict clothing deformations and generate images that accurately reflect human motion. Compared with existing methods, our approach captures finer details of clothing deformation while maintaining physical plausibility. Quantitative experiments demonstrate superior performance across multiple evaluation metrics, and qualitative assessments show that our method outperforms current state-of-the-art techniques.
