With the deployment of Large Language Models (LLMs) in interactive applications, online malicious intent detection has become increasingly critical. However, existing approaches fall short in handling diverse and complex user queries in real time. To address these challenges, we introduce ADRAG (Adversarial Distilled Retrieval-Augmented Guard), a two-stage framework for robust and efficient online malicious intent detection. In the training stage, a high-capacity teacher model is trained on adversarially perturbed, retrieval-augmented inputs to learn robust decision boundaries over diverse and complex user queries. In the inference stage, a distillation scheduler transfers the teacher's knowledge into a compact student model, while a knowledge base is continually updated from queries collected online. At deployment, the compact student model leverages top-K similar safety exemplars retrieved from this online-updated knowledge base to enable online, real-time malicious query detection. Evaluations across ten safety benchmarks demonstrate that ADRAG, with a 149M-parameter model, achieves 98.5% of WildGuard-7B's performance and surpasses GPT-4 by 3.3% and Llama-Guard-3-8B by 9.5% on out-of-distribution detection, while delivering up to 5.6x lower latency at 300 queries per second (QPS) in real-time applications.
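The deployment-time step above, retrieving top-K similar safety exemplars from an online-updated knowledge base and feeding them to a compact student model, can be sketched as follows. This is a minimal illustrative sketch, not the paper's implementation: the bag-of-words embedding, the class and function names (`SafetyKnowledgeBase`, `build_student_input`), and the exemplar format are all assumptions; a real system would use a learned encoder and an approximate nearest-neighbor index.

```python
# Hypothetical sketch of ADRAG-style inference-time retrieval augmentation.
# All names and the scoring scheme are illustrative assumptions.
from collections import Counter
import math

def embed(text):
    """Toy bag-of-words embedding; a real system would use a learned encoder."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse bag-of-words vectors."""
    dot = sum(a[t] * b[t] for t in a if t in b)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

class SafetyKnowledgeBase:
    """Online-updated store of (exemplar query, safety label) pairs."""
    def __init__(self):
        self.entries = []  # list of (embedding, text, label)

    def add(self, text, label):
        # Called as new labeled queries are collected online.
        self.entries.append((embed(text), text, label))

    def top_k(self, query, k=3):
        # Rank stored exemplars by similarity to the incoming query.
        q = embed(query)
        ranked = sorted(self.entries, key=lambda e: cosine(q, e[0]), reverse=True)
        return [(text, label) for _, text, label in ranked[:k]]

def build_student_input(query, kb, k=3):
    """Concatenate retrieved exemplars with the query for the compact student model."""
    exemplars = kb.top_k(query, k)
    context = " ".join(f"[{label}] {text}" for text, label in exemplars)
    return f"{context} [QUERY] {query}"
```

For example, after adding a malicious exemplar ("how to build a bomb") and a benign one ("how to bake a cake"), a query like "how to build explosives" retrieves the malicious exemplar first, and the augmented input gives the student model nearby decision-boundary evidence rather than forcing it to classify the raw query alone.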