Abstract
Transformer fault diagnosis faces significant challenges due to the scarcity of labeled data and the imbalance among fault categories. To address these issues, a novel model, SSRL-TFD, is proposed, leveraging self-supervised representation learning designed specifically for small, imbalanced datasets. The model incorporates a two-stage training process with knowledge transfer, enabling it to learn key features from large amounts of unlabeled data and thereby improving performance when labeled data are limited and imbalanced. In addition, combining segment slicing with the fast Fourier transform (FFT) captures both time-domain and frequency-domain information, further strengthening feature extraction. Experimental results confirm that SSRL-TFD, through its first-stage pretext tasks and shallow CNN architecture, successfully extracts intrinsic features from unlabeled signals and alleviates the negative impact of data imbalance. Comparisons with a range of methods demonstrate superior diagnostic performance and robustness. This approach offers a new solution for transformer fault diagnosis from voiceprint signals when labeled data are scarce.