Explainable Artificial Intelligence (XAI) is increasingly required in computational economics, where machine-learning forecasters can outperform classical econometric models but remain difficult to audit and use for policy. This survey reviews and organizes the growing literature on XAI for economic time series, where autocorrelation, non-stationarity, seasonality, mixed frequencies, and regime shifts can make standard explanation techniques unreliable or economically implausible. We propose a taxonomy that classifies methods by (i) explanation mechanism: propagation-based approaches (e.g., Integrated Gradients, Layer-wise Relevance Propagation), perturbation and game-theoretic attribution (e.g., permutation importance, LIME, SHAP), and function-based global tools (e.g., Accumulated Local Effects); and (ii) time-series compatibility, including preservation of temporal dependence, stability over time, and respect for data-generating constraints. We synthesize time-series-specific adaptations such as vector- and window-based formulations (e.g., Vector SHAP, WindowSHAP) that reduce lag fragmentation and computational cost while improving interpretability. We also connect explainability to causal inference and policy analysis through interventional attributions (Causal Shapley values) and constrained counterfactual reasoning. Finally, we discuss intrinsically interpretable architectures (notably attention-based transformers) and provide guidance for decision-grade applications such as nowcasting, stress testing, and regime monitoring, emphasizing attribution uncertainty and explanation dynamics as indicators of structural change.
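To make the window-based attribution idea concrete, the sketch below groups contiguous lags of a univariate forecaster into windows and computes exact Shapley values over those windows, in the spirit of WindowSHAP. It is a minimal illustration under stated assumptions, not the implementation of any surveyed method: the forecaster `predict`, the zero background used to mask absent windows, and the feature-independence assumption behind that masking are all choices made for this example.

```python
# Minimal sketch of window-grouped Shapley attribution for a lagged
# time-series forecaster (in the spirit of WindowSHAP). All names here
# (predict, background, n_windows) are illustrative assumptions.
import itertools
import math
import numpy as np

def window_shapley(predict, x, background, n_windows):
    """Exact Shapley values where the 'players' are contiguous windows
    of lags rather than individual lags, shrinking 2^T coalitions to
    2^n_windows and avoiding fragmented per-lag attributions.

    predict    : callable mapping (n, T) arrays to (n,) forecasts
    x          : (T,) instance to explain
    background : (T,) reference values (e.g., per-lag training means)
    """
    T = x.shape[0]
    bounds = np.linspace(0, T, n_windows + 1).astype(int)
    windows = [slice(bounds[i], bounds[i + 1]) for i in range(n_windows)]

    def value(coalition):
        # Windows in the coalition keep their observed values; all other
        # lags are masked with the background (an independence assumption).
        z = background.copy()
        for w in coalition:
            z[windows[w]] = x[windows[w]]
        return predict(z[None, :])[0]

    phi = np.zeros(n_windows)
    players = range(n_windows)
    for w in players:
        others = [p for p in players if p != w]
        # Exact Shapley value: weighted marginal contribution of window w
        # over every coalition S of the remaining windows.
        for r in range(len(others) + 1):
            for S in itertools.combinations(others, r):
                weight = (math.factorial(len(S))
                          * math.factorial(n_windows - len(S) - 1)
                          / math.factorial(n_windows))
                phi[w] += weight * (value(S + (w,)) - value(S))
    return phi  # one attribution per window of lags

# Toy usage: a linear AR-style forecaster over 12 lags, explained in 4 windows.
rng = np.random.default_rng(0)
coefs = rng.normal(size=12)
predict = lambda X: X @ coefs
x = rng.normal(size=12)
background = np.zeros(12)
print(window_shapley(predict, x, background, n_windows=4))
```

Treating each window, rather than each lag, as a Shapley player is the design choice that matters here: it reduces the coalition space from 2^T to 2^W and returns one coherent attribution per block of adjacent lags, which is how window-based formulations curb both lag fragmentation and computational cost.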