We consider the task of training machine learning models with data-dependent constraints. Such constraints often arise as empirical versions of expected-value constraints that enforce fairness or stability goals. We reformulate data-dependent constraints so that they are calibrated: enforcing the reformulated constraints guarantees that their expected-value counterparts are satisfied with a user-prescribed probability. The resulting optimization problem is amenable to standard stochastic optimization algorithms, and we demonstrate the efficacy of our method on a fairness-sensitive classification task where we wish to guarantee the classifier's fairness at test time.