Model fairness is becoming increasingly important in class-incremental learning for Trustworthy AI. While accuracy has been a central focus in class-incremental learning, fairness has been relatively understudied. However, naively using all the samples of the current task for training results in unfair catastrophic forgetting for certain sensitive groups, including classes. We theoretically show that forgetting occurs when the average gradient vector of the current task data points in an "opposite direction" to the average gradient vector of a sensitive group, i.e., their inner product is negative. We then propose a fair class-incremental learning framework that adjusts the training weights of current task samples to change the direction of the average gradient vector, thereby reducing the forgetting of underperforming groups and achieving fairness. For various group fairness measures, we formulate optimization problems that minimize the overall losses of sensitive groups while minimizing the disparities among them. We also show that these problems can be solved with linear programming and propose an efficient Fairness-aware Sample Weighting (FSW) algorithm. Experiments show that FSW achieves better accuracy-fairness trade-off results than state-of-the-art approaches on real datasets.
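The following is a minimal, hypothetical sketch of the weighting idea under a first-order approximation, not the authors' exact formulation. Per-sample weights for the current task are chosen via linear programming so that the weighted average gradient no longer points opposite to the average gradients of sensitive groups; the per-sample gradients `g`, group gradients `G`, group losses `L0`, step size `eta`, and disparity penalty `lam` are all illustrative placeholders.

```python
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n, d, K = 8, 5, 3                 # current-task samples, gradient dim, sensitive groups
g = rng.normal(size=(n, d))       # per-sample gradients of current-task data (assumed given)
G = rng.normal(size=(K, d))       # average gradient per sensitive group (assumed given)
L0 = rng.uniform(1.0, 2.0, K)     # current losses of the sensitive groups (assumed given)
eta, lam = 0.1, 1.0               # step size and disparity penalty (illustrative)

# First-order estimate of each group's loss after one weighted update step:
#   L_k(w) ~= L0_k - eta * <sum_i w_i * g_i, G_k>,  which is linear in w.
A = -eta * (g @ G.T)              # A[i, k] = -eta * <g_i, G_k>

# Variables x = [w_1..w_n, t_max, t_min];
# objective: minimize sum_k L_k(w) + lam * (t_max - t_min)   (overall losses + disparity)
c = np.concatenate([A.sum(axis=1), [lam, -lam]])

# Constraints enforcing t_max >= L_k(w) and t_min <= L_k(w) for every group k.
A_ub = np.vstack([
    np.hstack([A.T, -np.ones((K, 1)), np.zeros((K, 1))]),   # L_k(w) - t_max <= 0
    np.hstack([-A.T, np.zeros((K, 1)), np.ones((K, 1))]),   # t_min - L_k(w) <= 0
])
b_ub = np.concatenate([-L0, L0])

bounds = [(0, 1)] * n + [(None, None), (None, None)]         # sample weights kept in [0, 1]
res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
w = res.x[:n]

# With these weights, the weighted average gradient should have non-negative
# inner products with the group gradients more often, reducing their forgetting.
print("sample weights:", np.round(w, 2))
print("inner products with group gradients:", np.round((w @ g) @ G.T, 2))
```

In this sketch the disparity term is modeled as the gap between the largest and smallest estimated group losses; the actual framework formulates separate problems for different group fairness measures.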