Abstract
In distributed manufacturing, each manufacturer’s production control system captures operational data from devices spread across multiple factories. These devices share the same sample space of products but exhibit distinct feature spaces of manufacturing processes, generating what is termed Vertical Multi-variate Time-series Data (VMTD). VMTD is characterized by its distributed nature, feature heterogeneity, and state correlation. This paper presents a vertical federated learning framework tailored for assembly quality prediction, addressing the unique challenges posed by these three characteristics. To protect privacy while exploiting VMTD, we propose a training data sample alignment technique based on the intersection of the participants’ private datasets, ensuring the confidentiality of sensitive information and enabling secure aggregation of disparate data. Furthermore, to account for VMTD’s state correlation, we refine the architecture of the Vision Transformer (ViT), a robust feature extraction model. The resulting Multi-Layer Parallel Pooling-based Vision Transformer (MLP-PVT) decouples the strong correlations between devices across different participants in the distributed manufacturing process. These innovations circumvent the limitations of traditional centralized quality inspection methods, improve the model’s generalization and robustness, and enable highly accurate product quality predictions. A comparative analysis against state-of-the-art algorithms substantiates the viability and efficacy of our approach.
