Abstract
In unsupervised domain adaptation (UDA), existing methods primarily focus on reducing the distributional discrepancy between domains, often neglecting domain shifts and intra-domain variances. A lightweight intra-class and inter-class domain adaptive network (LIIDAN) is proposed to overcome these challenges and facilitate fault diagnosis across varying working conditions. The model consists of two main components: the lightweight and optimized DarkNet19 (LODN19) module for fault feature extraction and the intra-class and inter-class domain adaptation module (IIDAM). The DarkNet19 model is optimized through lightweight processing that removes selected convolutional and max-pooling layers, which simplifies the network structure. In addition, a gated recurrent unit layer is added to capture time-related features, while a multi-head self-attention mechanism is applied to boost the network's capability to efficiently capture global dependencies. Subsequently, the IIDAM incorporates an improved adversarial domain classifier, in which the domain loss function is defined using the Jensen–Shannon divergence to determine whether the extracted features originate from the source or target domain. Next, intra-class and inter-class loss functions are defined using weighted conditional maximum mean discrepancy to minimize intra-class distances and enhance inter-class separation. To balance the model's transfer and recognition capabilities, an adaptive factor is introduced to refine the overall loss function. Experiments on two bearing datasets demonstrate the effectiveness of the algorithm, with an average recognition rate of 99.4% across 12 transfer tasks on the Case Western Reserve University (CWRU) dataset and a 44.9% improvement in execution time. An average recognition rate of 97.8% and a 46.7% improvement in execution time were obtained on a dataset containing composite faults from a self-constructed test bench.
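As a point of reference for the domain loss described above, the following is a minimal NumPy sketch of the Jensen–Shannon divergence between two discrete distributions (e.g., the domain classifier's predicted source/target distributions). This is an illustration under generic assumptions, not the paper's implementation; function and variable names are hypothetical.

```python
import numpy as np

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions p and q.

    JS(p, q) = 0.5 * KL(p || m) + 0.5 * KL(q || m), where m = (p + q) / 2.
    Symmetric and bounded by log(2), unlike the plain KL divergence.
    """
    # Add a small epsilon and renormalize to avoid log(0).
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log(a / b))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

# Identical distributions give zero divergence; disjoint ones approach log(2).
print(round(js_divergence([0.5, 0.5], [0.5, 0.5]), 6))  # → 0.0
```

Because the JS divergence is symmetric in its arguments, it provides a balanced measure of how distinguishable source-domain and target-domain feature distributions are, which is the role the abstract assigns to the adversarial domain loss.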
Keywords
