Abstract
Accurate 3D scene reconstruction of substations is critical for digital twin engineering, but existing point cloud segmentation methods suffer from poor semantic integration and high computational overhead. This study proposes a computational framework for semantic-driven 3D reconstruction, centered on the DI-PointNet algorithm (an enhancement of PointNet++), to address these engineering challenges. The framework's core computational modules are:
(1) Point cloud preprocessing. An improved RANSAC algorithm with adaptive thresholding (iterative tolerance adjustment) performs ground filtering, reducing the false positive rate by 37% compared with standard RANSAC; power line features are extracted via spatial clustering (DBSCAN with ε = 0.8 m), achieving 96.3% key equipment extraction accuracy.
(2) Semantic-geometric fusion network. A two-layer continuous transformer module with a cross-window attention mechanism (window size 32 × 32) enhances feature interaction between adjacent equipment, reducing semantic ambiguity by 29%; hierarchical key sampling with progressive farthest point sampling (FPS) downsampling (from 1024 to 128 points) reduces computational complexity from O(n²) to O(n log n); an inverted residual module built on depth-wise separable convolutions optimizes multi-scale feature extraction, cutting memory usage by 41%.
(3) Engineering performance validation. On a 220 kV substation dataset (1.2 M points), the framework achieves 92.4% scene completeness, a 4.2 mm geometric fidelity error, and 92.1% semantic segmentation accuracy; real-time rendering optimization via level-of-detail (LOD) scheduling sustains 34.6 FPS at 4K resolution, outperforming PointNet++ by 18.3 FPS.
This computational solution advances 3D reconstruction methodology for industrial scenes, providing technical support for substation digital twin development and demonstrating scalable value in power system engineering.
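The abstract's ground-filtering step (RANSAC with an iteratively adjusted inlier tolerance) can be illustrated with a minimal sketch. The paper does not specify its adaptive rule, so the schedule below (tightening the tolerance each time a better plane is found) and all parameter values are hypothetical placeholders, not the authors' method:

```python
import numpy as np

def ransac_plane(points, n_iters=200, tol0=0.05, shrink=0.9, seed=0):
    """Plane fit via RANSAC with an iteratively tightened inlier tolerance.

    The adaptive schedule (tol *= shrink on each improvement) is a
    hypothetical stand-in for the paper's unspecified thresholding rule.
    Returns a boolean mask of inlier (ground) points.
    """
    rng = np.random.default_rng(seed)
    best_inliers, tol = None, tol0
    for _ in range(n_iters):
        sample = points[rng.choice(len(points), 3, replace=False)]
        normal = np.cross(sample[1] - sample[0], sample[2] - sample[0])
        norm = np.linalg.norm(normal)
        if norm < 1e-9:
            continue  # degenerate (collinear) sample
        normal /= norm
        dist = np.abs((points - sample[0]) @ normal)
        inliers = dist < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
            tol *= shrink  # tighten the tolerance as the fit improves
    return best_inliers

# Synthetic scene: a noisy ground plane (z ~ 0) plus equipment points above it.
rng = np.random.default_rng(1)
ground = np.column_stack([rng.random((500, 2)) * 10, rng.normal(0, 0.01, 500)])
equip = rng.random((100, 3)) * [10, 10, 5] + [0, 0, 0.5]
cloud = np.vstack([ground, equip])
mask = ransac_plane(cloud)
print(mask.sum(), "points flagged as ground")
```

Filtering the cloud with `cloud[~mask]` would then leave the above-ground equipment points for the downstream clustering stage.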
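The hierarchical key sampling described above uses farthest point sampling (FPS) to downsample progressively from 1024 to 128 points. As a rough sketch of that idea, here is a greedy FPS in NumPy with an assumed 1024 → 512 → 256 → 128 schedule (the intermediate stage sizes are not given in the abstract); note that this plain greedy form costs O(nk) per stage, and the quoted O(n log n) bound would require an accelerated variant such as a spatial index:

```python
import numpy as np

def farthest_point_sampling(points: np.ndarray, k: int) -> np.ndarray:
    """Greedy FPS: repeatedly pick the point farthest from the selected set."""
    n = points.shape[0]
    selected = np.zeros(k, dtype=np.int64)  # indices of chosen points
    dist = np.full(n, np.inf)  # distance of each point to nearest chosen point
    selected[0] = 0  # arbitrary seed point
    for i in range(1, k):
        # update nearest-chosen distances using the most recently chosen point
        dist = np.minimum(
            dist, np.linalg.norm(points - points[selected[i - 1]], axis=1)
        )
        selected[i] = int(np.argmax(dist))  # farthest remaining point
    return selected

# Progressive downsampling, 1024 -> 512 -> 256 -> 128 points.
rng = np.random.default_rng(0)
cloud = rng.random((1024, 3))
for target in (512, 256, 128):
    cloud = cloud[farthest_point_sampling(cloud, target)]
print(cloud.shape)  # (128, 3)
```

FPS is preferred over random sampling here because it preserves spatial coverage, which matters when thin structures such as power lines must survive downsampling.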
