Out-of-distribution (OOD) detection is a critical task for reliable machine learning. Recent advances in representation learning have driven progress in distance-based OOD detection, where test samples are flagged as OOD if they lie relatively far from the centroids or prototypes of in-distribution (ID) classes. However, prior methods directly adopt off-the-shelf loss functions that suffice for classifying ID samples but are not optimally designed for OOD detection. In this paper, we propose CIDER, a simple and effective representation learning framework that exploits hyperspherical embeddings for OOD detection. CIDER jointly optimizes two losses to promote strong ID-OOD separability: (1) a dispersion loss that promotes large angular distances among different class prototypes, and (2) a compactness loss that encourages samples to be close to their class prototypes. We show that CIDER is effective under various settings and establishes state-of-the-art performance. On the hard OOD detection task CIFAR-100 vs. CIFAR-10, our method substantially improves AUROC by 14.20% compared to embeddings learned with the cross-entropy loss.
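The two losses described above can be illustrated with a minimal NumPy sketch. This is not the authors' reference implementation: the exact functional forms assumed here (a cosine-similarity softmax for compactness, and a log-mean-exp over pairwise prototype similarities for dispersion), along with the function names and the temperature value `tau`, are illustrative assumptions consistent with the abstract's description of angular distances on the hypersphere.

```python
import numpy as np

def normalize(x, axis=-1):
    # Project vectors onto the unit hypersphere.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def compactness_loss(z, labels, prototypes, tau=0.1):
    # Cross-entropy over cosine similarities to class prototypes:
    # pulls each normalized embedding toward its own class prototype.
    # z: (N, D) unit-norm embeddings; prototypes: (C, D) unit-norm.
    logits = z @ prototypes.T / tau                   # (N, C) scaled cosines
    logits -= logits.max(axis=1, keepdims=True)       # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_probs[np.arange(len(labels)), labels].mean()

def dispersion_loss(prototypes, tau=0.1):
    # Penalizes high pairwise cosine similarity between class prototypes,
    # pushing them apart (large angular distances) on the hypersphere.
    C = prototypes.shape[0]
    sim = prototypes @ prototypes.T / tau             # (C, C) scaled cosines
    off_diag = sim[~np.eye(C, dtype=bool)].reshape(C, C - 1)
    return np.log(np.exp(off_diag).mean(axis=1)).mean()
```

Under this sketch, a set of mutually orthogonal prototypes yields a lower dispersion loss than near-identical ones, matching the intuition that the loss rewards spread-out prototypes; the joint objective would be a weighted sum of the two terms.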