Abstract
To address the challenges of detecting tiny and extreme-aspect-ratio defects in fabrics, as well as the problems of high missed-detection and false-detection rates, we proposed CA-P2-YOLOv8s (CP-YOLOv8s), an improved YOLOv8 fabric defect detection algorithm. It comprises three strategies. First, the coordinate attention (CA) mechanism was integrated after the Spatial Pyramid Pooling Fast (SPPF) module, embedding location information into channel attention to enhance image features. Second, a P2 detection layer was added at the detection head to form a four-scale pyramid network, improving the detection accuracy for tiny defects. Finally, a Wise-ShapeIoU loss function was designed: Wise-IoU directs the model's focus toward ordinary, medium-quality anchor boxes, while Shape-IoU keeps the model attentive to the intrinsic shape and scale of the bounding box. Acting together, these loss functions reduce the influence of low-quality samples and improve the network's generalization. Experiments on the AliCloud Tianchi public fabric defect dataset validated that the proposed CP-YOLOv8s algorithm achieves a precision, recall, and mean average precision (mAP) of 95.0%, 83.1%, and 89.5%, respectively, representing improvements of 4.8%, 6.5%, and 5.1% over the baseline YOLOv8s model while reducing the model's parameters by 4.2%. On a self-constructed silk fabric dataset, the algorithm outperformed other algorithms, achieving a precision of 92.8%, a recall of 86.0%, and an mAP of 91.4%. In addition, its detection speed meets the requirements of real-time industrial inspection.
