Persistent homology (PH) is a central concept in computational topology, providing a multiscale topological description of a space. It is particularly significant in topological data analysis, which aims to draw statistical inferences from a topological perspective. In this work, we introduce a new topological summary for Bayesian neural networks, termed the predictive topological uncertainty (pTU). The proposed pTU measures the uncertainty in the interaction between the model and its inputs. It provides insight from the model's perspective: if two samples interact with a model in a similar way, they are considered identically distributed. We also show that the pTU is insensitive to the model architecture. As an application, the pTU is used to address out-of-distribution (OOD) detection, which is critical for ensuring model reliability: failure to detect OOD inputs can lead to incorrect and unreliable predictions. To this end, we propose a significance test for OOD detection based on the pTU, providing a principled statistical framework for the problem. The effectiveness of the framework is validated through extensive experiments assessing its statistical power, sensitivity, and robustness.
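To make the flavor of such a test concrete, the following is a minimal sketch of a two-sample permutation significance test driven by a topological summary. The summary used here (total 0-dimensional persistence, which for a Vietoris–Rips filtration equals the sum of minimum-spanning-tree edge lengths) and the function names are illustrative placeholders, not the paper's pTU statistic:

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform
from scipy.sparse.csgraph import minimum_spanning_tree

def h0_persistence(points):
    """0-dimensional persistence (death times) of the Vietoris-Rips
    filtration; these equal the minimum-spanning-tree edge lengths."""
    dist = squareform(pdist(points))
    mst = minimum_spanning_tree(dist)
    return np.sort(mst.data)

def topo_statistic(points):
    # Total 0-dim persistence: sum of all bar lengths.
    # A stand-in for the pTU summary, chosen for simplicity.
    return h0_persistence(points).sum()

def ood_permutation_test(in_dist, test_batch, n_perm=500, seed=0):
    """Permutation test of H0: both batches share a distribution,
    using the gap between their topological summaries as the statistic."""
    rng = np.random.default_rng(seed)
    obs = abs(topo_statistic(in_dist) - topo_statistic(test_batch))
    pooled = np.vstack([in_dist, test_batch])
    n = len(in_dist)
    exceed = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(pooled))
        a, b = pooled[perm[:n]], pooled[perm[n:]]
        if abs(topo_statistic(a) - topo_statistic(b)) >= obs:
            exceed += 1
    # Add-one correction keeps the p-value strictly positive.
    return (exceed + 1) / (n_perm + 1)
```

A small p-value indicates that the test batch interacts with the reference geometry differently from in-distribution data, so it would be flagged as OOD.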