Ensuring fairness in Graph Neural Networks (GNNs) is fundamental to building trustworthy and socially responsible machine learning systems, and numerous fair graph learning methods have been proposed in recent years. However, most of them assume full access to demographic information, a requirement rarely met in practice due to privacy, legal, or regulatory restrictions. To address this limitation, this paper introduces FairGLite, a novel fair graph learning framework that mitigates bias under limited demographic information. Specifically, we propose a mechanism, guided by partial demographic data, that generates proxies for the missing demographic information, and we design a strategy that enforces consistent node embeddings across demographic groups. In addition, we develop an adaptive confidence strategy that dynamically adjusts each node's contribution to fairness and utility based on prediction confidence. We further provide theoretical analysis showing that FairGLite achieves provable upper bounds on group fairness metrics, offering formal guarantees for bias mitigation. Extensive experiments on multiple datasets and fair graph learning frameworks demonstrate that FairGLite mitigates bias while maintaining model utility.
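To make the adaptive confidence idea concrete, below is a minimal sketch, assuming a PyTorch setting, of one way per-node prediction confidence could modulate each node's contribution to the utility and fairness terms. The entropy-based weighting, the helper names `confidence_weights` and `combined_loss`, and the trade-off parameter `lam` are illustrative assumptions, not the paper's actual formulation.

```python
import torch
import torch.nn.functional as F

def confidence_weights(logits: torch.Tensor) -> torch.Tensor:
    # Normalized-entropy confidence in [0, 1]; higher = more confident.
    # (Assumed scheme; FairGLite's actual confidence measure may differ.)
    probs = F.softmax(logits, dim=-1)
    entropy = -(probs * torch.log(probs.clamp_min(1e-12))).sum(dim=-1)
    max_entropy = torch.log(torch.tensor(float(logits.size(-1))))
    return 1.0 - entropy / max_entropy

def combined_loss(utility_per_node: torch.Tensor,
                  fairness_per_node: torch.Tensor,
                  logits: torch.Tensor,
                  lam: float = 0.5) -> torch.Tensor:
    # Scale each node's utility and fairness terms by its confidence,
    # so uncertain predictions contribute less to both objectives.
    w = confidence_weights(logits).detach()  # treat weights as constants
    utility = (w * utility_per_node).mean()
    fairness = (w * fairness_per_node).mean()
    return utility + lam * fairness
```

The `.detach()` call reflects one plausible design choice: the weights rescale the per-node losses without themselves receiving gradients, so the model cannot lower the loss simply by becoming less confident.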