Convolutional Neural Networks (CNNs) have advanced existing medical systems for automatic disease diagnosis. However, these systems face a threat: adversarial attacks to which CNNs are vulnerable. Inaccurate diagnoses can negatively affect human healthcare, so potential adversarial attacks need to be investigated in order to robustify deep medical diagnosis systems. On the other hand, medical images come in several modalities (e.g., CT, fundus, and endoscopic images), each of which differs significantly from the others, making it more challenging to generate adversarial perturbations that work across different types of medical images. In this paper, we propose an image-based medical adversarial attack method that consistently produces adversarial perturbations on medical images. The objective function of our method consists of a loss deviation term and a loss stabilization term. The loss deviation term increases the divergence between the CNN's prediction for an adversarial example and its ground-truth label, while the loss stabilization term encourages the CNN to produce similar predictions for the adversarial example and a smoothed version of its input. Viewed over the whole sequence of perturbation-generation iterations, the proposed loss stabilization term exhaustively searches the perturbation space, smoothing out single spots so the optimization can escape local optima. We further analyze the KL-divergence of the proposed loss function and find that the loss stabilization term drives the perturbation updates toward a fixed objective spot while deviating from the ground truth. This stabilization keeps the proposed medical attack effective across different types of medical images while producing perturbations with small variance. Experiments on several medical image analysis benchmarks, including the recent COVID-19 dataset, show the stability of the proposed method.
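
To make the two-term objective concrete, below is a minimal PGD-style sketch in PyTorch. The abstract does not specify the smoothing operator, the loss weighting, the KL direction, or any hyper-parameters, so the Gaussian blur, the weight `lambda_stab`, and the `eps`/`alpha`/`steps` values here are illustrative assumptions, not the paper's exact configuration.

```python
# Sketch of an iterative attack with a loss deviation term plus a loss
# stabilization term, under the assumptions stated above.
import torch
import torch.nn.functional as F
import torchvision.transforms.functional as TF


def stabilized_attack(model, x, y, eps=8 / 255, alpha=2 / 255,
                      steps=10, lambda_stab=1.0):
    """Generate adversarial examples for a batch (x, y); model is in eval mode."""
    x_adv = x.clone().detach()
    for _ in range(steps):
        x_adv.requires_grad_(True)
        logits = model(x_adv)

        # Loss deviation term: push the prediction away from the ground truth.
        loss_dev = F.cross_entropy(logits, y)

        # Loss stabilization term: keep the prediction on the adversarial
        # example close to the prediction on its smoothed input.
        # (Gaussian blur and kernel size are assumed, not from the paper.)
        x_smooth = TF.gaussian_blur(x_adv, kernel_size=5)
        logits_smooth = model(x_smooth)
        loss_stab = F.kl_div(F.log_softmax(logits, dim=1),
                             F.softmax(logits_smooth, dim=1),
                             reduction="batchmean")

        # Ascend the deviation term while suppressing instability.
        loss = loss_dev - lambda_stab * loss_stab
        grad = torch.autograd.grad(loss, x_adv)[0]

        # Signed gradient step, projected back into the L-inf ball around x.
        x_adv = x_adv.detach() + alpha * grad.sign()
        x_adv = torch.clamp(x_adv, x - eps, x + eps).clamp(0.0, 1.0)
    return x_adv.detach()
```

In this reading, maximizing `loss_dev - lambda_stab * loss_stab` deviates the prediction from the label while penalizing perturbations whose effect vanishes under smoothing, which is one plausible way the stabilization term could steer updates toward a consistent objective spot.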

