Prototype-based methods are of particular interest to domain specialists and practitioners because they summarize a dataset by a small set of representatives. In a classification setting, the interpretability of the prototypes is therefore as important as the prediction accuracy of the algorithm. Nevertheless, state-of-the-art methods make inefficient trade-offs between these concerns, sacrificing one in favor of the other, especially when the given data has a kernel-based representation. In this paper, we propose interpretable multiple-kernel prototype learning (IMKPL), a novel method that constructs highly interpretable prototypes in the feature space which are also efficient for the discriminative representation of the data. Our method focuses on the local discrimination of the classes in the feature space and shapes the prototypes based on condensed class-homogeneous neighborhoods of the data. In addition, IMKPL learns a combined embedding in the feature space in which the above objectives are better fulfilled. When the base kernels coincide with the data dimensions, this embedding results in a discriminative feature selection. We evaluate IMKPL on several benchmarks from different domains, demonstrating its superiority to related state-of-the-art methods regarding both interpretability and discriminative representation.
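To make the setting concrete, the following is a minimal, hypothetical sketch of the two ingredients the abstract names: a combined (multiple-kernel) embedding built as a non-negative weighted sum of base kernels, and prototypes represented as convex combinations of training points in the feature space, so that test points can be classified by their feature-space distance to each prototype. The kernel choices, weights, and uniform class-mean prototypes below are illustrative assumptions, not the IMKPL learning procedure itself (which optimizes the prototype weights and kernel combination).

```python
import numpy as np

def rbf_kernel(X, Y, gamma):
    # Gram matrix of a Gaussian (RBF) base kernel between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

def combined_kernel(X, Y, gammas, betas):
    # Multiple-kernel embedding: non-negative combination of base kernels.
    # In IMKPL these weights would be learned; here they are fixed by hand.
    return sum(b * rbf_kernel(X, Y, g) for b, g in zip(betas, gammas))

def prototype_distances(A, K_train, K_test_train, k_test_diag):
    # Prototypes in feature space: p_j = sum_i A[j, i] * phi(x_i),
    # with A[j] a convex-combination weight vector over training points.
    # Squared distance expands via the kernel trick:
    #   ||phi(x) - p_j||^2 = k(x, x) - 2 * A[j] @ K(x, .) + A[j] @ K @ A[j]
    cross = K_test_train @ A.T                        # (n_test, n_proto)
    quad = np.einsum('ji,ik,jk->j', A, K_train, A)    # diag(A K A^T)
    return k_test_diag[:, None] - 2.0 * cross + quad[None, :]
```

A nearest-prototype rule then assigns each test point the label of its closest prototype; with one prototype per class built from uniform weights over that class's training points, this reduces to a kernel class-mean classifier, which the learned, sparse prototypes of a prototype-learning method would refine.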