Abstract
Valves aboard submarines operate in a harsh environment and are prone to leaks, and leak signals must typically be detected against complex, noisy backgrounds. Because a single sensor cannot fully characterize leak features, which are often indistinct, detection accuracy suffers; to address this, this study proposes a multi-sensor enhancement scheme and a diagnosis method based on multi-sensor data fusion and deep learning (DL). First, distributed optical fiber temperature sensors, distributed optical fiber acoustic sensors, and acoustic emission sensors are used to comprehensively collect leak signals. Adaptive enhancement strategies are then tailored to each signal type: temperature signals are converted into high-feature-intensity images using entropy values combined with Euclidean distance maps (SEED); acoustic vibration signals are transformed into Markov images based on the fast Fourier transform (FFT) and fused across measurement points to highlight shared features; and acoustic emission signals are converted into time-frequency images with highly concentrated energy using the multi-scale synchroextracting transform (MSST). The three enhanced images are then fed into a multi-scale dilated convolutional network for feature extraction, and feature-level fusion is combined with an attention mechanism (a three-branch multi-scale fusion attention network) to select key features. Finally, the jellyfish search optimizer is introduced to tune hyperparameters, and gradient-weighted class activation mapping (Grad-CAM) is used to visualize the regions the model attends to. Experiments show that the proposed signal enhancement method effectively highlights signal features and improves the reliability of the multimodal processing scheme. Compared with a traditional convolutional neural network and image-combination methods, the multi-scale feature extraction network achieves markedly higher accuracy. In this setting, combining feature-level fusion with an attention mechanism outperforms decision-level and hybrid fusion: it better exploits key information from each modality, promotes deep interaction between modalities and global feature selection, and reduces information loss, raising overall accuracy above 99.5%.
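The abstract does not give the exact construction of the "Markov images" derived from the FFT. Assuming a standard Markov transition field computed over the quantized FFT magnitude spectrum, a minimal sketch might look like the following; `n_bins` and the quantile binning are illustrative choices, not the paper's stated parameters.

```python
import numpy as np

def fft_markov_image(signal, n_bins=32):
    """Sketch: quantize the FFT magnitude spectrum into n_bins levels and
    build a Markov transition matrix, used here as a 2-D image."""
    spectrum = np.abs(np.fft.rfft(signal))
    # Assign each spectral sample to a quantile bin (0 .. n_bins - 1).
    edges = np.quantile(spectrum, np.linspace(0, 1, n_bins + 1)[1:-1])
    states = np.digitize(spectrum, edges)
    # Count transitions between consecutive spectral samples.
    img = np.zeros((n_bins, n_bins))
    for a, b in zip(states[:-1], states[1:]):
        img[a, b] += 1
    row_sums = img.sum(axis=1, keepdims=True)
    return img / np.where(row_sums == 0, 1, row_sums)  # row-normalize

# Multi-point fusion: averaging the Markov images from several sensors
# emphasizes transition patterns shared across measurement points.
# fused = np.mean([fft_markov_image(s) for s in signals], axis=0)
```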
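The multi-scale dilated convolutional network is likewise not specified in detail in the abstract. A minimal sketch of one plausible building block, with parallel 3x3 convolutions at dilation rates 1, 2, and 4 (rates chosen for illustration), is:

```python
import torch
import torch.nn as nn

class MultiScaleDilatedBlock(nn.Module):
    """Sketch of a multi-scale feature extraction block: parallel 3x3
    convolutions with dilation rates 1, 2, and 4 capture receptive fields
    of different sizes; their outputs are concatenated along channels."""
    def __init__(self, in_ch, out_ch):
        super().__init__()
        self.branches = nn.ModuleList([
            nn.Sequential(
                nn.Conv2d(in_ch, out_ch, kernel_size=3,
                          padding=d, dilation=d),  # preserves spatial size
                nn.BatchNorm2d(out_ch),
                nn.ReLU(inplace=True),
            )
            for d in (1, 2, 4)
        ])

    def forward(self, x):
        return torch.cat([b(x) for b in self.branches], dim=1)

# x = torch.randn(8, 3, 224, 224)
# y = MultiScaleDilatedBlock(3, 16)(x)   # -> (8, 48, 224, 224)
```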
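The three-branch fusion attention network's internals are also not detailed here. As a stand-in for whatever attention the paper actually uses, a sketch of feature-level fusion with a squeeze-and-excitation style channel gate over the concatenated branch features is:

```python
import torch
import torch.nn as nn

class AttentionFeatureFusion(nn.Module):
    """Sketch: concatenate feature maps from the three modality branches,
    then reweight channels with a squeeze-and-excitation style gate so the
    network can emphasize the most informative modality features."""
    def __init__(self, channels_per_branch, reduction=4):
        super().__init__()
        total = 3 * channels_per_branch
        self.gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),               # squeeze: global context
            nn.Conv2d(total, total // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(total // reduction, total, 1),
            nn.Sigmoid(),                          # per-channel weights
        )

    def forward(self, f_temp, f_vib, f_ae):
        fused = torch.cat([f_temp, f_vib, f_ae], dim=1)
        return fused * self.gate(fused)  # attention-weighted fusion
```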
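For hyperparameter tuning, a simplified sketch of the jellyfish search optimizer follows, based on the commonly cited formulation (time-control switch between ocean-current drift and in-swarm passive/active moves; the constants 3 and 0.1 are that formulation's defaults, and the paper's exact variant may differ):

```python
import numpy as np

def jellyfish_search(fitness, lb, ub, pop=20, iters=50, seed=0):
    """Simplified jellyfish search sketch; minimizes `fitness` over a box."""
    rng = np.random.default_rng(seed)
    lb, ub = np.asarray(lb, float), np.asarray(ub, float)
    dim = len(lb)
    X = rng.uniform(lb, ub, (pop, dim))
    fit = np.apply_along_axis(fitness, 1, X)
    for t in range(1, iters + 1):
        best = X[fit.argmin()]
        c = abs((1 - t / iters) * (2 * rng.random() - 1))  # time control
        for i in range(pop):
            if c >= 0.5:  # drift with the ocean current toward the best
                trend = best - 3 * rng.random() * X.mean(axis=0)
                cand = X[i] + rng.random(dim) * trend
            elif rng.random() > 1 - c:  # passive motion around own position
                cand = X[i] + 0.1 * rng.random(dim) * (ub - lb)
            else:  # active motion relative to a random jellyfish
                j = rng.integers(pop)
                step = (X[j] - X[i]) if fit[j] < fit[i] else (X[i] - X[j])
                cand = X[i] + rng.random(dim) * step
            cand = np.clip(cand, lb, ub)
            f = fitness(cand)
            if f < fit[i]:
                X[i], fit[i] = cand, f
    return X[fit.argmin()], fit.min()
```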
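Grad-CAM itself is a standard technique. A minimal PyTorch sketch (the `model`, `target_layer`, and `class_idx` arguments are hypothetical placeholders, not names from the paper) is:

```python
import torch
import torch.nn.functional as F

def grad_cam(model, x, target_layer, class_idx):
    """Minimal Grad-CAM sketch: weight the target layer's activations by
    the spatially averaged gradients of the class score, then apply ReLU."""
    acts, grads = {}, {}
    h1 = target_layer.register_forward_hook(
        lambda m, i, o: acts.update(a=o))
    h2 = target_layer.register_full_backward_hook(
        lambda m, gi, go: grads.update(g=go[0]))
    score = model(x)[0, class_idx]
    model.zero_grad()
    score.backward()
    h1.remove(); h2.remove()
    weights = grads["g"].mean(dim=(2, 3), keepdim=True)  # GAP of gradients
    cam = F.relu((weights * acts["a"]).sum(dim=1))       # weighted sum
    return cam / (cam.max() + 1e-8)                      # normalize to [0, 1]
```

Upsampled to the input resolution, the resulting map highlights the image regions driving the predicted class, which is how the attention regions described above can be visualized.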
