We show how to take a regression function $\hat{f}$ that is appropriately ``multicalibrated'' and efficiently post-process it into an approximately error-minimizing classifier satisfying a large variety of fairness constraints. The post-processing requires no labeled data, and only a modest amount of unlabeled data and computation. The computational and sample complexity requirements of computing $\hat{f}$ are comparable to the requirements for solving a single fair learning task optimally, but the same $\hat{f}$ can in fact be used to efficiently solve many different downstream fairness-constrained learning problems. Our post-processing method easily handles intersecting groups, generalizing prior work on post-processing regression functions to satisfy fairness constraints, which applied only to disjoint groups. Our work extends recent work showing that multicalibrated regression functions are ``omnipredictors'' (i.e., they can be post-processed to optimally solve unconstrained ERM problems) to constrained optimization.
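To make the post-processing step concrete, below is a minimal illustrative sketch (not the paper's actual algorithm) of why no labels are needed: because a multicalibrated score behaves like the true conditional label probability $\Pr[y = 1 \mid x]$, the classification error of any thresholding rule can be estimated from the scores alone on unlabeled data. The sketch searches over per-group thresholds subject to a statistical-parity constraint on disjoint groups; the names `f_hat`, `fair_thresholds`, and the tolerance `eps` are hypothetical, and the paper's method handles intersecting groups and a broader class of constraints.

```python
# Minimal sketch, assuming disjoint groups and a statistical-parity constraint.
# This is NOT the paper's algorithm: it illustrates how calibrated scores let
# us estimate error and search for fair thresholds using only unlabeled data.
import itertools
import numpy as np

def estimated_error(scores, preds):
    # E[ f_hat(x)*1{h(x)=0} + (1 - f_hat(x))*1{h(x)=1} ]: the expected 0/1
    # error, using calibrated scores in place of true label probabilities.
    return np.mean(scores * (1 - preds) + (1 - scores) * preds)

def fair_thresholds(scores, group_ids, eps=0.05, grid=np.linspace(0, 1, 101)):
    """Grid-search per-group thresholds minimizing estimated error subject to
    positive rates differing by at most eps across groups. Exponential in the
    number of groups; a sketch only, not an efficient implementation."""
    groups = np.unique(group_ids)
    best, best_err = None, np.inf
    for ts in itertools.product(grid, repeat=len(groups)):
        preds = np.zeros_like(scores)
        rates = []
        for g, t in zip(groups, ts):
            mask = group_ids == g
            preds[mask] = (scores[mask] >= t).astype(float)
            rates.append(preds[mask].mean())
        if max(rates) - min(rates) > eps:  # fairness constraint violated
            continue
        err = estimated_error(scores, preds)
        if err < best_err:
            best, best_err = dict(zip(groups, ts)), err
    return best, best_err
```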