Ensuring fairness in Graph Neural Networks is fundamental to building trustworthy and socially responsible machine learning systems. In response, numerous fair graph learning methods have been proposed in recent years. However, most of them assume full access to demographic information, a requirement rarely met in practice due to privacy, legal, or regulatory restrictions. To address this limitation, this paper introduces FairGLite, a novel fair graph learning framework that mitigates bias in graph learning under limited demographic information. Specifically, we propose a mechanism, guided by the partially observed demographic data, that generates proxies for the missing demographic information, and we design a strategy that enforces consistent node embeddings across demographic groups. In addition, we develop an adaptive confidence strategy that dynamically adjusts each node's contribution to fairness and utility based on prediction confidence. We further provide theoretical analysis showing that FairGLite achieves provable upper bounds on group fairness metrics, offering formal guarantees for bias mitigation. Through extensive experiments on multiple datasets and fair graph learning frameworks, we demonstrate the framework's effectiveness in both mitigating bias and preserving model utility.
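To make the adaptive confidence idea concrete, the following is a minimal sketch, not the authors' implementation: per-node prediction confidence (taken here as the maximum softmax probability) weights each node's contribution to both the supervised utility loss and a fairness surrogate computed over proxy demographic groups. The function name `confidence_weighted_loss`, the demographic-parity-style gap used as the fairness term, and the `fairness_coef` trade-off parameter are all illustrative assumptions, not details from the paper.

```python
# Illustrative sketch of confidence-weighted fairness/utility trade-off.
# All specifics (loss form, weighting scheme, proxy groups) are assumptions.
import torch
import torch.nn.functional as F

def confidence_weighted_loss(logits, labels, proxy_groups, fairness_coef=1.0):
    """logits:       (N, C) raw class scores for N nodes
    labels:       (N,) ground-truth class indices
    proxy_groups: (N,) proxy demographic group ids (0/1), e.g. inferred
                  from the partially observed demographic attribute
    """
    probs = F.softmax(logits, dim=1)
    confidence = probs.max(dim=1).values  # per-node confidence in [1/C, 1]

    # Utility term: confident nodes contribute more to the supervised loss.
    per_node_ce = F.cross_entropy(logits, labels, reduction="none")
    utility = (confidence * per_node_ce).mean()

    # Fairness surrogate: a differentiable demographic-parity-style gap,
    # the difference in mean positive-class probability between the two
    # proxy groups, with low-confidence nodes down-weighted as well.
    pos = probs[:, 1] * confidence
    g0, g1 = proxy_groups == 0, proxy_groups == 1
    fairness_gap = (pos[g0].mean() - pos[g1].mean()).abs()

    return utility + fairness_coef * fairness_gap

if __name__ == "__main__":
    torch.manual_seed(0)
    logits = torch.randn(8, 2)
    labels = torch.randint(0, 2, (8,))
    groups = torch.tensor([0, 0, 0, 0, 1, 1, 1, 1])  # toy proxy groups
    print(confidence_weighted_loss(logits, labels, groups).item())
```

Under this sketch, a node the model is unsure about influences neither the utility nor the fairness objective strongly, which is one plausible way to realize the "dynamic adjustment" the abstract describes; the paper's actual weighting scheme may differ.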