Abstract
Contrastive learning (CL) is a widely studied strategy in machine learning that trains models to distinguish between pairs of positive and negative samples. Unfortunately, when the data contain a large amount of noise, such models struggle to extract key features and to differentiate false negative samples from hard negative samples. To address this, a Twin Contrastive Learning based on Negative Sample Attention (TCL-NSA) method is proposed in this paper. In TCL-NSA, features are clustered to determine the importance of hard negative samples, and different attention weights are assigned to the negative samples, allowing the model to learn more discriminative features. Meanwhile, a shrinking residual sub-model is constructed to effectively pre-process noisy signals, so that the encoder extracts accurate representations of the data and the clustering results become more accurate. In addition, a projection head is designed to better map the representation extracted by the backbone network into the feature space of the contrastive loss. Experiments on two rolling bearing datasets achieve an average accuracy of 99.45%, and comparisons with other popular methods demonstrate the superior performance of TCL-NSA.
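The negative-sample attention idea described above (re-weighting negatives inside a contrastive loss) can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function name `weighted_info_nce`, the InfoNCE-style formulation, and the way weights enter the denominator are all assumptions for exposition.

```python
import numpy as np

def weighted_info_nce(anchor, positive, negatives, neg_weights, temperature=0.5):
    """InfoNCE-style contrastive loss in which each negative's similarity
    is scaled by an attention weight before entering the softmax denominator.

    anchor, positive : (d,) L2-normalised feature vectors.
    negatives        : (n, d) L2-normalised negative feature vectors.
    neg_weights      : (n,) attention weights, e.g. large for hard negatives
                       and small for suspected false negatives.
    """
    pos_sim = np.exp(np.dot(anchor, positive) / temperature)
    neg_sims = np.exp(negatives @ anchor / temperature)
    # Attention re-weights each negative's contribution to the denominator,
    # emphasising hard negatives and down-weighting likely false negatives.
    denom = pos_sim + np.sum(neg_weights * neg_sims)
    return -np.log(pos_sim / denom)

# Hypothetical usage: down-weighting all negatives shrinks the denominator,
# so the loss decreases relative to uniform unit weights.
rng = np.random.default_rng(0)
d = 8
a = rng.normal(size=d); a /= np.linalg.norm(a)
negs = rng.normal(size=(4, d))
negs /= np.linalg.norm(negs, axis=1, keepdims=True)
loss_uniform = weighted_info_nce(a, a, negs, np.ones(4))
loss_damped = weighted_info_nce(a, a, negs, 0.5 * np.ones(4))
```

In a full method, the weights would come from a learned attention module driven by the clustering step; here they are supplied by hand purely to show where they enter the loss.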
