Abstract
As a fundamental task in graph analysis, node classification aims to predict the class label of each node in a given graph. To address the challenge of label scarcity, graph contrastive learning (GCL) has emerged as a mainstream approach to unsupervised node representation learning. By training graph neural networks (GNNs) to distinguish between positive and negative sample pairs across different augmented views, GCL enables effective feature learning. However, most view augmentation strategies in existing GCL methods inevitably introduce additional noise, degrading the quality of node representations. Moreover, these methods often overlook the hierarchical community structures inherent in graphs, which may lead to mislabeling closely connected node pairs as negative samples, contradicting the graph homophily assumption. To tackle these issues, we propose a GCL-based node classification method rooted in structural entropy. Specifically, we use an encoding tree constructed by minimizing structural entropy, together with an edge-reweighted view generated via an attention mechanism, as the augmented views for GCL. This design preserves the essential structural information of the input view. Furthermore, by exploiting the hierarchical community characteristics of the encoding tree, we design a graph-tree contrastive loss that strengthens the ability of node representations to capture hierarchical community structures. Extensive experiments show that our method outperforms state-of-the-art node classification methods in both effectiveness and robustness.
