Abstract
Contrastive learning (CL) is a learning strategy that trains a model to distinguish between positive and negative sample pairs, enabling it to learn key features of the data effectively. However, the data augmentation methods used in classical CL often generate semantically independent views, making it difficult to distinguish false negative samples from hard negative samples under strong noise interference. To address this, a variational graph CL method based on dynamic negative sample optimization (VGCL-DNSO) is proposed in this paper. In VGCL-DNSO, soft attention values are generated from the similarity matrix to assign different weights to negative samples, effectively improving the discriminability of key features under strong noise interference. Meanwhile, to retain as much of the data's feature information as possible, the latent probability distribution of the data in the hidden space is learned, from which a more representative feature view is obtained. Results from two different types of rolling bearing fault diagnosis experiments show that VGCL-DNSO offers significant advantages in generalization and robustness.
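The core idea of reweighting negative samples by similarity-derived soft attention can be illustrated with a minimal InfoNCE-style loss. This is a sketch under stated assumptions: the function name, the softmax weighting scheme, and the scaling choice are illustrative, not the paper's exact formulation.

```python
import numpy as np

def soft_weighted_nce(anchor, positive, negatives, temperature=0.5):
    """Sketch of an InfoNCE-style contrastive loss in which each negative
    sample's contribution is reweighted by a soft attention value computed
    from anchor-negative similarities (illustrative, not VGCL-DNSO's exact
    loss)."""
    def cos(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))

    pos = np.exp(cos(anchor, positive) / temperature)
    neg_sims = np.array([cos(anchor, n) / temperature for n in negatives])

    # Soft attention over negatives: negatives that look similar to the
    # anchor (candidate hard negatives) receive larger weights, while easy
    # negatives are down-weighted.
    attn = np.exp(neg_sims) / np.exp(neg_sims).sum()

    # Scale by the number of negatives so the weighted sum stays on the
    # same magnitude as the usual unweighted denominator.
    neg = len(negatives) * np.sum(attn * np.exp(neg_sims))
    return -np.log(pos / (pos + neg))
```

In this sketch, a closer positive pair lowers the loss, while hard negatives dominate the denominator through their attention weights, which is the mechanism the abstract attributes to the soft attention values.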
