Explainable Artificial Intelligence (XAI) seeks to make the reasoning processes of AI models transparent and interpretable, particularly in complex decision-making environments. In the construction industry, where AI-based decision support systems (DSS) are increasingly adopted, limited attention has been paid to integrating the supporting evidence that underpins the reliability and accountability of AI-generated outputs. The absence of such evidence undermines the validity of explanations and the trustworthiness of system recommendations. This paper addresses this gap by introducing a theoretical, evidence-based means-end framework developed through a narrative review. The framework offers an epistemic foundation for designing XAI-enabled DSS that generate meaningful explanations tailored to users' knowledge needs and decision contexts. It focuses on evaluating the strength, relevance, and utility of the different types of evidence that support AI-generated explanations. While developed with construction professionals as the primary end users, the framework is also applicable to developers, regulators, and project managers with varying epistemic goals.