Explainable Artificial Intelligence (XAI) seeks to make the reasoning processes of AI models transparent and interpretable, particularly in complex decision-making environments. In the construction industry, where AI-based decision support systems (DSS) are increasingly adopted, limited attention has been paid to integrating the supporting evidence that underpins the reliability and accountability of AI-generated outputs. The absence of such evidence undermines the validity of explanations and the trustworthiness of system recommendations. This paper addresses this gap by introducing a theoretical, evidence-based means-end framework developed through a narrative review. The framework offers an epistemic foundation for designing XAI-enabled DSS that generate meaningful explanations tailored to users' knowledge needs and decision contexts. It focuses on evaluating the strength, relevance, and utility of the different types of evidence supporting AI-generated explanations. While developed with construction professionals as the primary end users, the framework is also applicable to developers, regulators, and project managers with varying epistemic goals.