This paper aims to clarify the representational status of Deep Learning Models (DLMs). While DLMs are commonly referred to as 'representations', what this entails is ambiguous due to a conflation of functional and relational conceptions of representation. This paper argues that while DLMs represent their targets in a relational sense, we have, in general, no good reason to believe that DLMs encode locally semantically decomposable representations of their targets. That is, the representational capacity of these models is largely global rather than decomposable into stable, local subrepresentations. This result has immediate implications for explainable AI (XAI), and it directs attention toward exploring the global relational nature of deep learning representations and their relationship to models more generally, in order to understand their potential role in future scientific inquiry.