Feature-attribution methods (e.g., SHAP, LIME) explain individual predictions but often miss higher-order structure: sets of features that act in concert. We propose Modules of Influence (MoI), a framework that (i) constructs a model explanation graph from per-instance attributions, (ii) applies community detection to find feature modules that jointly affect predictions, and (iii) quantifies how these modules relate to patterns of bias, redundancy, and causal structure. Across synthetic and real datasets, MoI uncovers correlated feature groups, improves model debugging via module-level ablations, and localizes bias exposure to specific modules. We release stability and synergy metrics, a reference implementation, and evaluation protocols for benchmarking module discovery in XAI.
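To make steps (i) and (ii) concrete, here is a minimal sketch in Python. It assumes per-instance attributions (e.g., SHAP values) are already available as an (instances × features) matrix; the synthetic attribution matrix, the absolute-correlation edge weights, and the 0.3 sparsification threshold are illustrative choices for this sketch, not the paper's specified construction.

```python
# Minimal MoI-style sketch: build an explanation graph from per-instance
# attributions, then find feature modules via community detection.
import numpy as np
import networkx as nx
from networkx.algorithms import community

rng = np.random.default_rng(0)

# Stand-in attribution matrix with two planted feature modules plus noise.
# In practice this would come from an attribution method such as SHAP.
n, d = 500, 8
latent_a = rng.normal(size=(n, 1))
latent_b = rng.normal(size=(n, 1))
attributions = np.hstack([
    latent_a + 0.1 * rng.normal(size=(n, 4)),  # features 0-3 co-vary
    latent_b + 0.1 * rng.normal(size=(n, 4)),  # features 4-7 co-vary
])

# (i) Explanation graph: nodes are features; edge weights are the absolute
# correlation of attribution magnitudes across instances.
corr = np.corrcoef(np.abs(attributions), rowvar=False)
G = nx.Graph()
for i in range(d):
    for j in range(i + 1, d):
        w = abs(corr[i, j])
        if w > 0.3:  # drop weak edges; threshold is a free parameter here
            G.add_edge(i, j, weight=w)

# (ii) Community detection on the graph yields candidate feature modules.
modules = community.greedy_modularity_communities(G, weight="weight")
print([sorted(m) for m in modules])  # expected: [[0, 1, 2, 3], [4, 5, 6, 7]]
```

On this toy input the two planted groups are recovered as separate modules; step (iii), the module-level bias, redundancy, and synergy metrics, would then be computed per recovered module.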