Abstract
This study introduces the deep-learning model TabTransformer to electric load analysis and forecasting, comparing the performance of traditional machine learning, deep learning, and Transformer-based sequence-prediction algorithms on heterogeneous tabular data. We propose an interpretable deep-learning architecture, the Electric Power Load TabTransformer (EPLTT), tailored specifically to electric load forecasting with mixed heterogeneous tabular data. Like the GPT series of deep neural networks, EPLTT leverages a Transformer architecture, but one designed explicitly for tabular data, handling complex structured data efficiently and offering an effective alternative to conventional models. We first discuss the strengths and weaknesses of existing algorithms for tabular learning problems. We then demonstrate the configuration and training of EPLTT on real-world forecasting datasets. Finally, in support of interpretability, we apply class-weight adjustments during training to compensate for dataset imbalance. Experimental results show that EPLTT outperforms traditional models such as LightGBM in accuracy, precision, and specificity. EPLTT also exhibits greater adaptability and robustness in dataset preparation and complexity management, meeting practical requirements for speed, generalization, and interpretability. The model's competitiveness in this vertical domain is validated on real-world power load forecasting scenarios involving class imbalance, missing values, and categorical data.
