Discrimination mitigation within machine learning (ML) models can be complicated because multiple factors may be interwoven hierarchically and historically. Yet few existing fairness measures can capture the discrimination level within ML models in the face of multiple sensitive attributes (SAs). To bridge this gap, we propose a fairness measure based on distances between sets from a manifold perspective, named 'Harmonic Fairness measure via Manifolds (HFM)', with two optional versions, which supports fine-grained discrimination evaluation across several SAs with multiple values. Because computing HFM directly may be costly, we further propose two approximation algorithms to accelerate its core subprocedure -- the computation of distances between sets: 'Approximation of distance between sets for one sensitive attribute with multiple values (ApproxDist)' and 'Approximation of extended distance between sets for several sensitive attributes with multiple values (ExtendDist)', which respectively handle bias evaluation for a single SA with multiple values and for several SAs with multiple values. Moreover, we provide an algorithmic effectiveness analysis for ApproxDist under certain assumptions to explain how well it can work. Empirical results demonstrate that the proposed fairness measure HFM is valid and that the approximation algorithms (ApproxDist and ExtendDist) are effective and efficient.
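The abstract does not spell out how HFM or its underlying set distance are defined, so the sketch below is only a loose illustration of the kind of pairwise set-distance computation that ApproxDist and ExtendDist are designed to accelerate. The symmetric average nearest-neighbour (Chamfer-style) distance, the max-over-pairs aggregation, and the names `set_distance` and `subgroup_bias` are all illustrative assumptions, not the paper's definitions.

```python
# Illustrative sketch only: the set distance (Chamfer-style) and the
# max-over-subgroup-pairs aggregation are assumptions, not the HFM
# formulation from the paper.
from itertools import combinations
import numpy as np

def set_distance(A: np.ndarray, B: np.ndarray) -> float:
    """Symmetric average nearest-neighbour distance between two point sets."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return 0.5 * (d.min(axis=1).mean() + d.min(axis=0).mean())

def subgroup_bias(X: np.ndarray, sa: np.ndarray) -> float:
    """Largest set distance over all pairs of subgroups induced by one
    multi-valued sensitive attribute `sa` (hypothetical aggregation)."""
    groups = [X[sa == v] for v in np.unique(sa)]
    return max(set_distance(a, b) for a, b in combinations(groups, 2))

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 2))      # e.g. model outputs or representations
sa = rng.integers(0, 3, size=300)  # one SA taking three values
print(subgroup_bias(X, sa))
```

Note that the brute-force pairwise computation in `set_distance` scales with the product of the subgroup sizes; this cost is precisely what motivates approximation algorithms such as ApproxDist and ExtendDist in the abstract.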