Understanding deep learning models is important for EEG-based brain-computer interfaces (BCI), since interpretability can not only increase end users' trust but also shed light on the reasons a model fails. However, deep learning interpretability has not yet received wide attention in this field. It remains unknown how reliably existing interpretation techniques can be applied and to what extent they reflect model decisions. To fill this research gap, we conduct the first quantitative evaluation and explore best practices for interpreting deep learning models designed for EEG-based BCI. We design metrics and test seven well-known interpretation techniques on benchmark deep learning models. Results show that gradient × input, DeepLIFT, integrated gradients, and layer-wise relevance propagation (LRP) perform similarly to one another and better than the saliency map, deconvolution, and guided backpropagation methods for interpreting model decisions. In addition, we propose a set of processing steps that allow the interpretation results to be visualized in an understandable and trustworthy way. Finally, we illustrate with examples how deep learning interpretability can benefit the domain of EEG-based BCI. Our work presents a promising direction for introducing deep learning interpretability to EEG-based BCI.
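To make the attribution techniques named above concrete, the following is a minimal sketch of gradient × input attribution for an EEG classifier. It is not the paper's evaluation pipeline: the `TinyEEGNet` architecture, the input shape (batch, channels, time), and all parameter values are illustrative assumptions, and the same pattern would apply to any differentiable EEG model.

```python
# Minimal sketch of gradient x input attribution for an EEG model.
# Assumes a PyTorch classifier taking input shaped (batch, channels, time).
# TinyEEGNet is a hypothetical stand-in, not one of the benchmark models
# evaluated in the paper.
import torch
import torch.nn as nn


class TinyEEGNet(nn.Module):
    """Hypothetical EEG-BCI classifier used only for illustration."""

    def __init__(self, n_channels=22, n_samples=256, n_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=25, padding=12),
            nn.BatchNorm1d(16),
            nn.ELU(),
            nn.AdaptiveAvgPool1d(8),
        )
        self.classifier = nn.Linear(16 * 8, n_classes)

    def forward(self, x):
        z = self.features(x)
        return self.classifier(z.flatten(1))


def gradient_x_input(model, x, target_class):
    """Attribution = gradient of the target-class logit w.r.t. the input,
    multiplied elementwise by the input itself."""
    model.eval()
    x = x.clone().requires_grad_(True)
    logits = model(x)
    logits[:, target_class].sum().backward()
    return (x.grad * x).detach()  # same shape as x: (batch, channels, time)


if __name__ == "__main__":
    model = TinyEEGNet()
    trial = torch.randn(1, 22, 256)  # one simulated EEG trial
    attribution = gradient_x_input(model, trial, target_class=0)
    print(attribution.shape)  # torch.Size([1, 22, 256])
```

The resulting attribution map has the same channel-by-time layout as the EEG trial, which is what allows interpretation results to be visualized (e.g., as topographic or time-course heatmaps) after the post-processing steps discussed in the paper.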