Abstract
The lack of interpretability in neural networks hinders fault diagnosis and optimal feature selection, leading to ineffective anomaly detection for gas turbines. This study proposes an anomaly detection framework for gas turbines that integrates explainable artificial intelligence with the physical intelligence of expert systems. The framework aims to enhance the interpretability of a neural network in order to optimize the input features and localize faults. It has two key characteristics. First, the explainable training phase determines optimal features through recursive feature elimination based on reconstruction error, verified against expert systems. Second, the explainable inference phase performs real-time fault localization with an extremely low false alarm rate by combining a cumulative alarm criterion with feature-wise reconstruction errors. The framework is validated on extensive operational data, covering various fault modes, collected from combined cycle power plants in the Republic of Korea. Extensive experiments demonstrate that it achieves exceptional accuracy, with an F1-score of 0.961 and a false alarm rate of 1.98e-4, outperforming other anomaly detection methods. The framework also effectively localizes the modes and causal effects of each fault, even though the underlying neural network is trained only on normal data. Case studies, including sensitivity analysis and variations in training data size, clearly demonstrate the framework's reliability. These systematic analyses confirm that the proposed framework is effective for detecting anomalies in real-world applications, supporting efficient operation and maintenance of gas turbines.
