Abstract
Defect detection is crucial for controlling product quality in textile production. However, existing detection techniques still struggle to identify defects of different forms, as well as small defects, within the same category. To address this issue, we propose a fabric defect detection model called MAS-YOLO. The model is based on YOLOv8n and incorporates several key innovations. First, we designed a multi-branch coordinate attention module to capture directional and positional information. Second, we designed an adaptive weighted downsampling module based on grouped convolution, which emphasizes defective features and reduces background interference by weighting features. Finally, we introduced a sliding loss to address the imbalance between easy and difficult samples. The experimental results show that the mean average precisions on a customized fabric defect dataset and the AliCloud Tianchi dataset were 96.3% and 51.6%, respectively, 6.9% and 7.8% higher than those of the original YOLOv8n. The detection speeds on GTX1050ti and RTX3070ti graphics cards are 57.3 frames per second (fps) and 154.3 fps, respectively, which meets the real-time requirements of defect detection at most industrial sites and provides technical support for the application of lightweight network models in industry.
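To make the easy/difficult sample reweighting concrete, the following is a minimal sketch of a slide-style sample weighting function of the kind used in Slide-Loss formulations; the threshold parameter `mu` and the exact piecewise form are assumptions, and the parameterization actually used in MAS-YOLO may differ:

```python
import math

def slide_weight(iou: float, mu: float = 0.5) -> float:
    """Slide-style sample weight as a function of predicted-box IoU.

    Sketch only: `mu` is an assumed IoU threshold separating easy from
    difficult samples. Samples well below the threshold keep weight 1,
    samples just below it receive a boosted constant weight e^(1-mu),
    and samples above it receive an exponentially decaying boost e^(1-iou),
    so training emphasizes samples near the easy/difficult boundary.
    """
    if iou <= mu - 0.1:
        return 1.0
    elif iou < mu:
        return math.exp(1.0 - mu)
    else:
        return math.exp(1.0 - iou)
```

In such schemes the weight multiplies the per-sample classification loss, so difficult samples near the decision boundary contribute more to the gradient than abundant easy negatives.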
