Despite their enormous predictive power, machine learning models are often unsuitable for applications in regulated industries such as finance because of their limited capacity to provide explanations. While model-agnostic frameworks such as Shapley values have proved convenient and popular, they rarely align with the causal explanations that practitioners typically seek. Counterfactual case-based explanations, in which an individual is told what circumstances would need to differ to change the outcome, may be more intuitive and actionable. However, finding appropriate counterfactual cases remains an open challenge, as does interpreting which features are most critical to the change in outcome. Here, we frame counterfactual search and interpretation as a similarity-learning problem, exploiting the representation learned by the random forest predictive model itself. Once a counterfactual is found, the feature importance of the explanation is computed from which random forest partitions must be crossed to reach it from the original instance. We demonstrate the method on both the MNIST hand-drawn digit dataset and the German credit dataset, finding that it generates explanations that are sparser and more useful than Shapley values.
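The abstract's core idea can be illustrated with a minimal sketch: use the leaf assignments of a trained random forest as a similarity measure (two instances are similar when many trees route them to the same leaf), pick the most similar instance predicted in the opposite class as a counterfactual, and then count, per feature, how many split thresholds separate the original instance from its counterfactual. This is an illustrative reconstruction under stated assumptions, not the authors' exact algorithm; the synthetic data, function names, and scoring are all assumptions standing in for the paper's datasets and method.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier

# Synthetic binary-classification data standing in for e.g. the
# German credit dataset (an assumption for this sketch).
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

def proximity(rf, a, b):
    """Fraction of trees in which instances a and b fall in the same leaf."""
    leaves_a = rf.apply(a.reshape(1, -1))[0]  # leaf index per tree
    leaves_b = rf.apply(b.reshape(1, -1))[0]
    return np.mean(leaves_a == leaves_b)

def nearest_counterfactual(rf, x, X_pool):
    """Among pool instances the forest assigns the opposite class,
    return the one most similar to x under leaf proximity."""
    target = 1 - rf.predict(x.reshape(1, -1))[0]
    candidates = X_pool[rf.predict(X_pool) == target]
    sims = [proximity(rf, x, c) for c in candidates]
    return candidates[int(np.argmax(sims))]

x = X[0]
cf = nearest_counterfactual(rf, x, X)

# Feature-importance sketch: count, per feature, how many tree split
# thresholds ("partitions") place x and its counterfactual on opposite
# sides -- i.e. partitions that must be crossed to reach the counterfactual.
counts = np.zeros(X.shape[1])
for tree in rf.estimators_:
    t = tree.tree_
    for node in range(t.node_count):
        f = t.feature[node]
        if f >= 0:  # internal (split) node; leaves have feature == -2
            if (x[f] <= t.threshold[node]) != (cf[f] <= t.threshold[node]):
                counts[f] += 1
```

Features with the highest counts are those whose values most often determine which side of the forest's partitions the instance falls on, giving a sparse, model-aligned importance ranking for the explanation.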