We develop the use of mutual information (MI), a well-established metric in information theory, to interpret the inner workings of deep learning models. To accurately estimate MI from a finite number of samples, we present GMM-MI (pronounced ``Jimmie''), an algorithm based on Gaussian mixture models that can be applied to both discrete and continuous settings. GMM-MI is computationally efficient and robust to the choice of hyperparameters, and provides the uncertainty on the MI estimate due to the finite sample size. We extensively validate GMM-MI on toy data for which the ground truth MI is known, comparing its performance against established mutual information estimators. We then demonstrate the use of our MI estimator in the context of representation learning, working with synthetic data and physical datasets describing highly non-linear processes. We train deep learning models to encode high-dimensional data within a meaningful compressed (latent) representation, and use GMM-MI to quantify both the level of disentanglement between the latent variables and their association with relevant physical quantities, thus unlocking the interpretability of the latent representation. We make GMM-MI publicly available.
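As a sketch of the quantity GMM-MI targets (the MI definition and the Gaussian-mixture density below are standard; the specific evaluation strategy described here is an illustrative assumption, not a statement of the exact GMM-MI procedure): for two variables $X$ and $Y$ with joint density $p(x,y)$,
\[
I(X;Y) \;=\; \int p(x,y)\,\log\frac{p(x,y)}{p(x)\,p(y)}\,\mathrm{d}x\,\mathrm{d}y,
\qquad
p(x,y)\;\approx\;\sum_{k=1}^{K} w_k\,\mathcal{N}\!\big((x,y)\,\big|\,\mu_k,\Sigma_k\big),
\]
where the mixture with weights $w_k$, means $\mu_k$ and covariances $\Sigma_k$ is fitted to the available samples, and the marginals $p(x)$ and $p(y)$ follow in closed form from the fitted components. The integral can then be evaluated numerically, e.g. by Monte Carlo over draws from the fitted mixture, and one way to obtain the finite-sample uncertainty the abstract refers to is to repeat the fit over resampled versions of the data.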