Fairness in machine learning has been extensively studied in single-task settings, while fair multi-task learning (MTL), especially with heterogeneous tasks (classification, detection, regression) and partially missing labels, remains largely unexplored. Existing fairness methods are predominantly classification-oriented and fail to extend to continuous outputs, making a unified fairness objective difficult to formulate. Further, existing MTL optimization is structurally misaligned with fairness: it constrains only the shared representation, allowing task heads to absorb bias and leading to uncontrolled task-specific disparities. Finally, most work treats fairness as a zero-sum trade-off with utility, enforcing symmetric constraints that achieve parity by degrading well-served groups. We introduce FairMT, a unified fairness-aware MTL framework that accommodates all three task types under incomplete supervision. At its core is an Asymmetric Heterogeneous Fairness Constraint Aggregation mechanism, which consolidates task-dependent asymmetric violations into a unified fairness constraint. Utility and fairness are jointly optimized via a primal--dual formulation, while a head-aware multi-objective optimization proxy provides a tractable descent geometry that explicitly accounts for head-induced anisotropy. Across three homogeneous and heterogeneous MTL benchmarks encompassing diverse modalities and supervision regimes, FairMT consistently achieves substantial fairness gains while maintaining superior task utility. Code will be released upon paper acceptance.
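The primal--dual formulation mentioned above can be illustrated with a generic Lagrangian update: descend on the shared parameters, ascend on non-negative dual variables attached to the aggregated fairness constraints. This is a minimal sketch of the standard primal--dual scheme only; the function and variable names (`primal_dual_step`, `violations`, etc.) are hypothetical and FairMT's actual constraint aggregation and head-aware proxy are not reproduced here.

```python
import numpy as np

def primal_dual_step(theta, lam, utility_grad, violations, violation_grads,
                     eta_primal=0.01, eta_dual=0.1):
    """One generic primal-dual step (illustrative; not the FairMT algorithm).

    theta           : shared parameters, flattened (np.ndarray, shape [d])
    lam             : dual variables, one per aggregated fairness constraint (>= 0)
    utility_grad    : gradient of the task-utility loss at theta (shape [d])
    violations      : g_k(theta); g_k <= 0 means constraint k is satisfied
    violation_grads : gradients of each g_k at theta (shape [K, d])
    """
    # Primal descent on L(theta, lam) = utility(theta) + sum_k lam_k * g_k(theta)
    theta = theta - eta_primal * (utility_grad + violation_grads.T @ lam)
    # Dual ascent: increase pressure on violated constraints, project onto lam >= 0
    lam = np.maximum(0.0, lam + eta_dual * violations)
    return theta, lam
```

Asymmetric constraints fit naturally here: a g_k that penalizes only harm to an under-served group simply stays non-positive (and its multiplier at zero) for groups already well served, rather than degrading them to enforce parity.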