While many deep learning (DL)-based networking systems have demonstrated superior performance, the underlying Deep Neural Networks (DNNs) remain black boxes that network operators cannot interpret. This lack of interpretability makes DL-based networking systems impractical to deploy. In this paper, we propose Metis, a framework that provides interpretability for two general categories of networking problems, spanning local and global control. Accordingly, Metis introduces two interpretation methods based on decision trees and hypergraphs: it converts DNN policies into interpretable rule-based controllers and highlights critical components through analysis over hypergraphs. We evaluate Metis on several state-of-the-art DL-based networking systems and show that it provides human-readable interpretations with nearly no degradation in performance. We further present four concrete use cases of Metis, showcasing how it helps network operators design, debug, deploy, and ad-hoc adjust DL-based networking systems.