Abstract
Time series classification (TSC) is an important task in time series analysis and has been studied for decades in the data-mining and machine-learning communities. Traditional supervised learning models require large amounts of labeled training data, while unsupervised models perform poorly on complex time series. In this paper, we propose a semi-supervised training framework based on the Mean Teacher model. Using no more than 10% labeled data and dynamically adjusting the smoothing coefficient, the framework reduces training time and improves the accuracy of multivariate TSC. To address the difficulty of choosing the receptive field (RF) size of convolutional neural networks (CNNs), we use multiple prime numbers as kernel sizes for the Omni-Scale CNN (OS_CNN), minimizing the overlap of RFs between different convolution kernels so that the kernels cover the time series comprehensively and improve the diversity and effectiveness of feature extraction. Finally, a convolution-based enhanced self-attention mechanism is used to improve the convergence of model training. Experiments on 42 datasets show that our method outperforms 7 state-of-the-art baselines by more than 10%.
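The two core ingredients named above, prime-number kernel sizes for the OS_CNN and the Mean Teacher exponential-moving-average update with a smoothing coefficient, can be sketched as follows. This is a minimal illustration, not the paper's implementation: the exact prime-selection rule, the EMA schedule, and the function names here are assumptions.

```python
import math

def prime_kernel_sizes(max_rf: int) -> list[int]:
    """Return candidate kernel sizes 1, 2, and all primes up to max_rf.

    Prime lengths keep the receptive fields of different kernels from
    overlapping redundantly. (This selection rule is an assumption;
    the paper's exact OS_CNN configuration may differ.)
    """
    sizes = [1, 2]
    for n in range(3, max_rf + 1):
        if all(n % p for p in range(2, math.isqrt(n) + 1)):
            sizes.append(n)
    return sizes

def mean_teacher_update(teacher, student, alpha):
    """Mean Teacher EMA step: teacher <- alpha*teacher + (1-alpha)*student.

    `alpha` is the smoothing coefficient; the paper adjusts it
    dynamically during training, so a fixed value here is only
    illustrative.
    """
    return [alpha * t + (1 - alpha) * s for t, s in zip(teacher, student)]
```

For example, `prime_kernel_sizes(10)` yields `[1, 2, 3, 5, 7]`, and a single EMA step with `alpha = 0.99` moves the teacher weights only 1% of the way toward the student, which is what stabilizes the teacher's predictions on unlabeled data.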
