Abstract
Objective
Although convolutional neural networks (CNNs) achieve high accuracy on fabric defect segmentation tasks, their robustness and reliability in complex real-world applications pose significant challenges. Segmentation uncertainty estimation and predictive confidence calibration are commonly used to assess and improve these properties. This study investigates the effectiveness of the single-model strategy alongside Bayesian and non-Bayesian techniques in improving the robustness and reliability of predictions in fabric defect segmentation.
Methods
Three methods were assessed for fabric defect segmentation: the single-model strategy, Monte Carlo (MC) Dropout (a Bayesian method), and Deep Ensembles (a non-Bayesian method). The reliability and robustness of these methods were evaluated in terms of segmentation accuracy, probability calibration, uncertainty estimation, and identification of out-of-distribution samples. In addition, the effect of different loss functions on segmentation performance and uncertainty estimation was investigated.
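For concreteness, the two uncertainty-aware prediction strategies can be sketched as follows. This is a minimal NumPy illustration, not the paper's implementation; the `model` callables are hypothetical stand-ins for a segmentation network that returns softmax class probabilities (with dropout kept active at inference time in the MC Dropout case):

```python
import numpy as np

def mc_dropout_predict(model, x, n_samples=20):
    """Bayesian approximation: keep dropout active at test time and
    average the softmax outputs of repeated stochastic forward passes.
    `model` is a hypothetical callable returning class probabilities,
    with dropout applied on every call."""
    probs = np.stack([model(x) for _ in range(n_samples)])
    return probs.mean(axis=0)

def deep_ensemble_predict(models, x):
    """Non-Bayesian alternative: average the softmax outputs of
    several independently trained networks."""
    probs = np.stack([m(x) for m in models])
    return probs.mean(axis=0)
```

In both cases the spread across the individual predictions, not just their mean, is what carries the uncertainty signal.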
Results
The results on four datasets indicate that Deep Ensembles significantly enhance segmentation accuracy compared with the single-model strategy, with an increase of 1.5% to 5.3% in the Dice coefficient. In contrast, the segmentation performance of MC Dropout was inferior to that of the single-model strategy. In terms of confidence calibration, Deep Ensembles improved the expected calibration error (ECE) by 0.1 to 2.2, indicating better-calibrated model confidence. In addition, using information entropy for uncertainty estimation, both MC Dropout and Deep Ensembles showed a strong negative correlation (ranging from −0.72 to −0.92) between estimated uncertainty and segmentation accuracy.
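The two evaluation quantities used above, entropy-based uncertainty and the ECE, can be sketched in NumPy. This follows the standard definitions (predictive entropy of the mean probability map; binned ECE as the weighted gap between confidence and accuracy) and is an illustration under those common conventions, not the paper's exact evaluation code:

```python
import numpy as np

def pixel_entropy(probs, eps=1e-12):
    """Per-pixel predictive entropy of a (C, H, W) probability map;
    higher entropy means higher segmentation uncertainty."""
    return -(probs * np.log(probs + eps)).sum(axis=0)

def expected_calibration_error(confidences, correct, n_bins=10):
    """Binned ECE: for each confidence bin, take the absolute gap
    between mean confidence and empirical accuracy, weighted by the
    fraction of samples falling in that bin."""
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            gap = abs(correct[mask].mean() - confidences[mask].mean())
            ece += mask.mean() * gap
    return ece
```

A perfectly calibrated model yields an ECE of zero; an overconfident one accumulates positive bin gaps.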
Conclusion
Our analysis confirms that Deep Ensembles excels in segmentation accuracy, confidence calibration, and uncertainty estimation, outperforming the alternatives. In addition, both MC Dropout and Deep Ensembles provide meaningful uncertainty estimates, which prove valuable in identifying out-of-distribution samples. This insight underscores the importance of uncertainty estimation in enhancing model robustness and reliability for industrial applications.