Recommender systems are being employed across an increasingly diverse set of domains where they can have significant social and individual impact. For this reason, considering fairness is a critical step in the design and evaluation of such systems. In this paper, we introduce HyperFair, a general framework for enforcing soft fairness constraints in a hybrid recommender system. HyperFair models integrate variations of fairness metrics as a regularization of a joint inference objective function. We implement our approach using probabilistic soft logic and show that it is particularly well-suited for this task, as it is expressive and allows structural constraints to be added to the system in a concise and interpretable manner. We propose two ways to employ the methods we introduce: first, as an extension of a probabilistic soft logic recommender system template; second, as a fair retrofitting technique that can be used to improve the fairness of predictions from a black-box model. We empirically validate our approach by implementing multiple HyperFair hybrid recommenders and comparing them to a state-of-the-art fair recommender. We also run experiments showing the effectiveness of our methods for the task of retrofitting a black-box model, as well as the trade-off between the amount of fairness enforced and prediction performance.
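To make the core idea concrete, the following is a minimal sketch (not the paper's probabilistic soft logic implementation) of treating a fairness metric as a soft regularizer on a recommender's objective. The metric, group labels, and offsets below are hypothetical: the penalty is a "non-parity" term, the absolute gap between the mean predicted ratings of two user groups, added to the squared prediction error.

```python
import numpy as np

def regularized_objective(preds, targets, group, lam=1.0):
    """Squared error plus a soft fairness penalty (non-parity):
    the absolute gap between mean predictions for the two groups."""
    mse = np.mean((preds - targets) ** 2)
    non_parity = abs(preds[group == 0].mean() - preds[group == 1].mean())
    return mse + lam * non_parity

# Toy data: two user groups with identical true rating distributions.
group = np.repeat([0, 1], 50)
targets = np.concatenate([np.linspace(1, 5, 50), np.linspace(1, 5, 50)])

# A biased model inflates group-0 scores and deflates group-1 scores;
# a debiased model shrinks that gap at a small cost in raw accuracy.
biased = targets + np.where(group == 0, 0.5, -0.5)
debiased = biased - np.where(group == 0, 0.4, -0.4)

# The regularized objective prefers the fairer predictions: the drop in
# the non-parity penalty outweighs the small increase tolerated in error.
print(regularized_objective(biased, targets, group, lam=1.0))
print(regularized_objective(debiased, targets, group, lam=1.0))
```

The trade-off the abstract refers to is governed here by `lam`: at `lam=0` the objective reduces to pure accuracy, and larger values enforce more fairness at the cost of prediction performance.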