Subset selection-based methods are widely used to explain deep vision models: they attribute predictions by highlighting the most influential image regions and support object-level explanations. While these methods perform well in in-distribution (ID) settings, their behavior under out-of-distribution (OOD) conditions remains poorly understood. Through extensive experiments across multiple ID-OOD dataset pairs, we find that the reliability of existing subset-based methods degrades markedly, yielding redundant, unstable, and uncertainty-sensitive explanations. To address these shortcomings, we introduce a framework that combines submodular subset selection with layer-wise, gradient-based uncertainty estimation to improve robustness and fidelity without additional training or auxiliary models. Our approach estimates uncertainty via adaptive weight perturbations and uses these estimates to guide submodular optimization, ensuring diverse and informative subset selection. Empirical evaluations show that, beyond mitigating the weaknesses of existing methods in OOD scenarios, our framework also yields improvements in ID settings. These findings highlight the limitations of current subset-based approaches and demonstrate how uncertainty-driven optimization can enhance attribution and object-level interpretability, paving the way for more transparent and trustworthy AI in real-world vision applications.
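To make the core idea concrete, the sketch below shows one plausible form of uncertainty-guided greedy submodular selection: candidate image regions are scored by relevance, high-uncertainty regions are down-weighted, and a diversity penalty discourages redundant picks. This is a minimal illustration under assumed inputs (`scores`, `similarity`, `uncertainty` are hypothetical precomputed arrays); it is not the paper's exact objective or algorithm.

```python
import numpy as np

def uncertainty_weighted_greedy(scores, similarity, uncertainty, k, lam=0.5):
    """Greedily pick k regions, trading off relevance against redundancy.

    scores      : (n,) relevance of each candidate region to the prediction
    similarity  : (n, n) pairwise region similarity in [0, 1]
    uncertainty : (n,) per-region uncertainty estimate in [0, 1]
    lam         : weight of the diversity (redundancy) penalty

    All quantities are illustrative placeholders, not the actual method.
    """
    n = len(scores)
    selected = []
    for _ in range(k):
        best_i, best_gain = None, -np.inf
        for i in range(n):
            if i in selected:
                continue
            # Relevance discounted by the region's estimated uncertainty.
            rel = scores[i] * (1.0 - uncertainty[i])
            # Redundancy: closest similarity to any already-selected region.
            red = max((similarity[i, j] for j in selected), default=0.0)
            gain = rel - lam * red
            if gain > best_gain:
                best_i, best_gain = i, gain
        selected.append(best_i)
    return selected

# Toy usage: region 1 is relevant but highly uncertain, so it is skipped.
scores = np.array([0.9, 0.8, 0.1])
uncertainty = np.array([0.0, 0.9, 0.0])
similarity = np.eye(3)
print(uncertainty_weighted_greedy(scores, similarity, uncertainty, k=2))
```

For a monotone submodular objective, this greedy scheme carries the classic (1 − 1/e) approximation guarantee; with the subtracted redundancy term the guarantee no longer applies directly, so the sketch should be read as a heuristic illustration of how uncertainty estimates can steer subset selection.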