Adversarial learning (adversarial machine learning) is a machine learning technique concerned with fooling models by supplying deceptive inputs, most commonly with the goal of causing the model to malfunction. Most machine learning techniques are designed for a specific problem setting in which the training and test data are drawn from the same statistical distribution (the IID assumption). When such models are deployed in the real world, an adversary can supply data that violates this assumption; the data can be crafted to exploit specific vulnerabilities of the model and corrupt its output, as in the sketch below.
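As one illustration of how a deceptive input can be crafted, here is a minimal sketch of the fast gradient sign method (FGSM), a standard attack chosen for the example rather than one named in the text above; the model, label, loss, and epsilon budget are all placeholder assumptions.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, x, y, epsilon=0.03):
    """Craft an adversarial example with the fast gradient sign method (FGSM).

    Each input value is shifted by +/- epsilon in the direction that increases
    the classification loss, so a small, hard-to-see perturbation can flip the
    model's prediction.
    """
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the gradient, then clamp back to the valid input range.
    x_adv = x_adv + epsilon * x_adv.grad.sign()
    return x_adv.clamp(0.0, 1.0).detach()
```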

VIP Content

Classical machine learning algorithms assume that the training data and the test data share the same input feature space and the same data distribution. In many real-world problems this assumption does not hold, causing classical algorithms to fail. Domain adaptation is a learning paradigm whose key technique is to learn a new feature representation that aligns the data distributions of the source and target domains, so that a model trained on a labeled source domain can be transferred directly to an unlabeled target domain without a significant drop in performance. This article introduces the definition, taxonomy, and representative algorithms of domain adaptation, focusing on metric learning-based and adversarial learning-based domain adaptation algorithms (a sketch of the latter follows below). Finally, it analyzes typical applications and open challenges of domain adaptation, identifies its development trends, and suggests possible directions for future research.
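As a concrete example of the adversarial learning-based family mentioned above, the sketch below shows a DANN-style gradient reversal layer, one common way to align source and target feature distributions; it is an assumption for illustration, not necessarily the algorithm surveyed here, and the feature dimension and domain classifier are placeholders.

```python
import torch
import torch.nn as nn

class GradReverse(torch.autograd.Function):
    """Identity on the forward pass; flips the gradient sign on the backward pass.

    Training a domain classifier through this layer pushes the feature extractor
    toward representations the domain classifier cannot distinguish, which aligns
    the source and target feature distributions.
    """
    @staticmethod
    def forward(ctx, x, lambda_):
        ctx.lambda_ = lambda_
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lambda_ * grad_output, None

def grad_reverse(x, lambda_=1.0):
    return GradReverse.apply(x, lambda_)

# Placeholder domain classifier: predicts whether a 256-dim feature vector
# comes from the source domain (label 0) or the target domain (label 1).
domain_classifier = nn.Sequential(
    nn.Linear(256, 64), nn.ReLU(), nn.Linear(64, 2)
)

def domain_adversarial_loss(features, domain_labels, lambda_=1.0):
    """Cross-entropy loss of the domain classifier on gradient-reversed features."""
    logits = domain_classifier(grad_reverse(features, lambda_))
    return nn.functional.cross_entropy(logits, domain_labels)
```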


Latest Content

The robustness of deep neural networks (DNNs) against adversarial example attacks has attracted wide attention. For smoothed classifiers, we propose the worst-case adversarial loss over input distributions as a robustness certificate. Compared with previous certificates, our certificate better describes the empirical performance of smoothed classifiers. By exploiting duality and the smoothness property, we provide an easy-to-compute upper bound as a surrogate for the certificate. We adopt a noisy adversarial learning procedure that minimizes the surrogate loss to improve model robustness. We show that our training method provides a theoretically tighter bound than distributionally robust base classifiers. Experiments on a variety of datasets further demonstrate the superior robustness of our method over state-of-the-art certified and heuristic methods.
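The abstract above gives no code; as a rough, hedged illustration of the kind of noisy adversarial learning procedure it describes, the sketch below adds Gaussian input noise (as used for smoothed classifiers) before computing a one-step adversarial perturbation and training on it. The noise level sigma, the attack step, and the model are placeholder assumptions, not the authors' exact procedure.

```python
import torch
import torch.nn.functional as F

def noisy_adversarial_step(model, optimizer, x, y, sigma=0.25, epsilon=0.1):
    """One training step combining Gaussian input noise with a single-step
    adversarial perturbation of the noisy input.
    """
    # Add the smoothing noise, then perturb in the direction that increases the loss.
    x_noisy = x + sigma * torch.randn_like(x)
    x_noisy.requires_grad_(True)
    loss = F.cross_entropy(model(x_noisy), y)
    grad = torch.autograd.grad(loss, x_noisy)[0]
    x_adv = (x_noisy + epsilon * grad.sign()).detach()

    # Train the base classifier on the noisy adversarial example.
    optimizer.zero_grad()
    F.cross_entropy(model(x_adv), y).backward()
    optimizer.step()
```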

