Deep neural networks usually require large labeled datasets to achieve state-of-the-art performance in tasks such as image classification and natural language processing. Although large volumes of data are created every day by active Internet users through various distributed systems across the world, most of these data are unlabeled and are vulnerable to data poisoning attacks. In this paper, we develop an efficient active learning method that requires fewer labeled instances and incorporates adversarial retraining, in which additional labeled artificial data are generated without increasing the labeling budget. The generated adversarial examples also provide a way to measure the vulnerability of the model. To evaluate the proposed method under an adversarial setting, i.e., malicious mislabeling and data poisoning attacks, we perform an extensive evaluation on a reduced CIFAR-10 dataset containing only two classes, 'airplane' and 'frog', using the private cloud on campus. Our experimental results demonstrate that the proposed active learning method is effective in defending against malicious mislabeling and data poisoning attacks. Specifically, whereas the baseline active learning method based on random sampling performs poorly (about 50% accuracy) under a malicious mislabeling attack, the proposed active learning method achieves the desired accuracy of 89% using, on average, only one-third of the dataset.
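The core idea of generating labeled artificial data at no extra labeling cost can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a logistic-regression "network" on synthetic 2-D data in place of the paper's CNN on CIFAR-10, and the fast gradient sign method (FGSM) as the adversarial-example generator; all names and parameters are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(X, y, lr=0.1, epochs=200):
    """Fit a logistic-regression model by gradient descent."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        p = sigmoid(X @ w + b)
        w -= lr * (X.T @ (p - y)) / len(y)
        b -= lr * np.mean(p - y)
    return w, b

def fgsm(X, y, w, b, eps):
    """FGSM: perturb inputs along the sign of the input-gradient of the loss.

    For logistic regression with cross-entropy loss, d(loss)/dx = (p - y) * w.
    The original labels y are reused for the perturbed points, so the
    artificial data add no labeling cost.
    """
    p = sigmoid(X @ w + b)
    grad_x = (p - y)[:, None] * w[None, :]
    return X + eps * np.sign(grad_x)

# Two well-separated 2-D clusters stand in for 'airplane' and 'frog'.
X = np.vstack([rng.normal(-2, 0.5, (50, 2)), rng.normal(2, 0.5, (50, 2))])
y = np.concatenate([np.zeros(50), np.ones(50)])

# Train on the clean data, then generate labeled adversarial examples.
w, b = train(X, y)
X_adv = fgsm(X, y, w, b, eps=1.0)

# Adversarial retraining: refit on clean + adversarial data with reused labels.
X_aug = np.vstack([X, X_adv])
y_aug = np.concatenate([y, y])
w2, b2 = train(X_aug, y_aug)
```

The error rate of the current model on the generated examples (before retraining) is one simple proxy for the vulnerability the abstract mentions: the more adversarial examples the model misclassifies, the more fragile it is.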