Supervised anomaly detection methods perform well in identifying known anomalies that are well represented in the training set. However, they often struggle to generalise beyond the training distribution because their decision boundaries carry no explicit notion of normality. Existing approaches typically address this by regularising the representation space during training, which decouples optimisation in the latent space from optimisation in the label space. The learned normality is therefore not directly used at inference, and the resulting anomaly scores often lie in arbitrary ranges that require explicit mapping or calibration before they can be interpreted probabilistically. To unify the learning of geometric normality and label discrimination, we propose Centre-Enhanced Discriminative Learning (CEDL), a novel supervised anomaly detection framework that embeds geometric normality directly into the discriminative objective. CEDL reparameterises the logit of the conventional sigmoid prediction as a centre-based radial distance function, unifying geometric and discriminative learning in a single end-to-end formulation. This design enables interpretable, geometry-aware anomaly scoring without post-hoc thresholding or reference calibration. Extensive experiments on tabular, time-series, and image data demonstrate that CEDL achieves competitive and balanced performance across diverse real-world anomaly detection tasks, validating its effectiveness and broad applicability.
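To make the centre-based reparameterisation concrete, the sketch below shows one plausible realisation of the idea described above: the logit fed to the sigmoid is defined as a function of the radial distance between an encoded sample and a learnable centre, so a single binary cross-entropy objective trains the geometry and the discriminator jointly. The class name `CEDLHead`, the encoder, the learnable radius and temperature, and the exact form of the logit are all illustrative assumptions, not the paper's specification.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class CEDLHead(nn.Module):
    """Illustrative centre-based radial logit (not the paper's exact formulation):
    logit(x) = (||f(x) - c||^2 - r^2) / T, so sigmoid(logit) grows towards 1 as the
    embedding moves away from the learned normality centre c."""

    def __init__(self, encoder: nn.Module, feat_dim: int):
        super().__init__()
        self.encoder = encoder
        self.centre = nn.Parameter(torch.zeros(feat_dim))  # learnable normality centre c
        self.log_radius = nn.Parameter(torch.zeros(()))    # learnable radius r (log-scale)
        self.log_temp = nn.Parameter(torch.zeros(()))      # temperature T (log-scale)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        z = self.encoder(x)                                 # (batch, feat_dim) embedding
        dist_sq = ((z - self.centre) ** 2).sum(dim=1)       # squared distance to centre
        radius_sq = self.log_radius.exp() ** 2
        return (dist_sq - radius_sq) / self.log_temp.exp()  # radial logit per sample


# Usage sketch with dummy data: one BCE objective drives both the geometry and the labels.
encoder = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 8))
head = CEDLHead(encoder, feat_dim=8)
x_batch = torch.randn(4, 16)
y_batch = torch.tensor([0.0, 0.0, 1.0, 1.0])  # 0 = normal, 1 = anomaly
loss = F.binary_cross_entropy_with_logits(head(x_batch), y_batch)
loss.backward()
```

Because the logit is already a calibrated function of the distance to the centre, `torch.sigmoid(head(x))` can be read directly as a geometry-aware anomaly probability, which is the property the abstract attributes to CEDL's unified formulation.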