Indexed by:
Abstract:
Distribution shift widely exists in graph representation learning and often reduces model performance. This work investigates how to improve the performance of a graph neural network (GNN) on a single graph by controlling distribution shift between embedding spaces. Specifically, we provide an upper error-bound estimation, which quantitatively analyzes how distribution shift affects GNNs' performance on a single graph. Considering that there is no natural domain division in a single graph, we propose PW-GNN to simultaneously learn discriminative embeddings and reduce distribution shift. PW-GNN measures distribution discrepancy using the distance between test embeddings and prototypes, and recasts minimizing distribution shift as minimizing the power of the Wasserstein distance, which is introduced into GNNs as a regularizer. A series of theoretical analyses are carried out to demonstrate the effectiveness of PW-GNN. Besides, a low-complexity training algorithm is designed by exploring an entropy-regularized strategy and the block coordinate descent method. Extensive numerical experiments are conducted on different datasets with both biased and unbiased splits. We empirically test our model equipped with four backbone models. Results show that PW-GNN outperforms state-of-the-art baselines and mitigates up to 8% of the negative effects of distribution shift on backbones.
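The abstract describes a regularizer based on an entropy-regularized power-of-Wasserstein distance between test embeddings and prototypes. The following is a minimal sketch of such a regularizer using standard Sinkhorn iterations; the function name, the uniform-marginal choice, and all parameter values are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def sinkhorn_wasserstein_reg(X, protos, eps=0.1, n_iters=200, p=2):
    """Entropy-regularized Wasserstein-style cost (approx. W_p^p) between
    embeddings X (n x d) and prototypes protos (m x d).
    Assumes uniform marginals; all hyperparameters are illustrative."""
    n, m = X.shape[0], protos.shape[0]
    # Pairwise ground cost: Euclidean distance raised to the power p
    C = np.linalg.norm(X[:, None, :] - protos[None, :, :], axis=-1) ** p
    K = np.exp(-C / eps)               # Gibbs kernel from entropy regularization
    a = np.full(n, 1.0 / n)            # uniform mass on test embeddings
    b = np.full(m, 1.0 / m)            # uniform mass on prototypes
    u = np.ones(n)
    for _ in range(n_iters):           # alternating Sinkhorn scaling updates
        v = b / (K.T @ u)
        u = a / (K @ v)
    T = u[:, None] * K * v[None, :]    # approximate optimal transport plan
    return float(np.sum(T * C))        # regularizer value added to the GNN loss

# Example: regularizer over random embeddings and 5 hypothetical prototypes
reg = sinkhorn_wasserstein_reg(np.random.randn(100, 16), np.random.randn(5, 16))
```

In a training loop this value would typically be weighted and added to the task loss; the block-coordinate-descent scheme mentioned in the abstract is not reproduced here.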
Keywords:
Corresponding author:
Source:
INFORMATION SCIENCES
ISSN: 0020-0255
Year: 2024
Volume: 670
8.100 (JCR@2022)
Affiliated department: