Title: Locally Masked Convolution for Autoregressive Models
Abstract: High-dimensional generative models have many applications, including image compression, multimedia generation, anomaly detection, and data completion. State-of-the-art estimators for natural images are autoregressive, decomposing the joint distribution over pixels into a product of conditionals parameterized by a deep neural network, e.g., a convolutional neural network such as PixelCNN. However, PixelCNN models only a single decomposition of the joint distribution, and only a single generation order is efficient. For tasks such as image completion, these models are unable to use much of the observed context. To generate data in arbitrary orders, we introduce LMConv: a simple modification to the standard 2D convolution that allows arbitrary masks to be applied to the weights at each location in the image. Using LMConv, we learn an ensemble of distribution estimators that share parameters but differ in generation order, improving performance on whole-image density estimation (2.89 bpd on unconditional CIFAR10) and producing globally coherent image completions.
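To make the core idea concrete, here is a minimal sketch of a locally masked convolution via im2col, assuming PyTorch; the function name `locally_masked_conv2d` and the mask layout are illustrative choices, not the paper's exact API.

```python
# A minimal sketch of locally masked 2D convolution, assuming PyTorch.
import torch
import torch.nn.functional as F

def locally_masked_conv2d(x, weight, mask, bias=None):
    """Conv2d where a different binary mask is applied to the receptive
    field at every spatial location (stride 1, odd kernel, 'same' padding).

    x:      (B, C_in, H, W) input
    weight: (C_out, C_in, kH, kW) shared convolution weights
    mask:   (B, C_in * kH * kW, H * W) per-location binary mask over the
            flattened patch entries (1 = visible, 0 = masked out)
    """
    B, C_in, H, W = x.shape
    C_out, _, kH, kW = weight.shape
    # Extract sliding patches: (B, C_in*kH*kW, H*W).
    patches = F.unfold(x, (kH, kW), padding=(kH // 2, kW // 2))
    # Zero out the masked entries of each patch individually; this is
    # what lets every location see a different context.
    patches = patches * mask
    # Apply the shared weights to every masked patch: (B, C_out, H*W).
    out = weight.view(C_out, -1) @ patches
    if bias is not None:
        out = out + bias.view(1, C_out, 1)
    return out.view(B, C_out, H, W)
```

Because the mask is an input rather than a fixed buffer, different mask tensors can encode different generation orders (e.g., raster scan vs. an order that keeps observed pixels early), so a single set of weights can serve an ensemble of orderings.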
Despite the success of convolutional neural networks (CNNs) in many computer vision and image analysis tasks, they remain vulnerable to so-called adversarial attacks: small, crafted perturbations of the input images can lead to false predictions. A possible defense is to detect adversarial examples. In this work, we show how analysis of input images and feature maps in the Fourier domain can be used to distinguish benign test samples from adversarial images. We propose two novel detection methods: the first employs the magnitude spectrum of the input images to detect an adversarial attack. This simple and robust classifier successfully detects adversarial perturbations from three commonly used attack methods. The second method builds upon the first and additionally extracts the phase of the Fourier coefficients of feature maps at different layers of the network. With this extension, we improve adversarial detection rates compared to state-of-the-art detectors on five different attack methods.
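As a rough illustration of the first detector, the sketch below computes log-magnitude spectra of input images and fits a simple binary classifier to separate benign from adversarial samples. The NumPy/scikit-learn setup and the logistic-regression choice are assumptions for illustration, not the authors' exact pipeline.

```python
# A hypothetical sketch of a magnitude-spectrum adversarial detector.
import numpy as np
from sklearn.linear_model import LogisticRegression

def magnitude_spectrum_features(images):
    """images: (N, H, W) grayscale batch -> (N, H*W) log-magnitude features."""
    spectra = np.fft.fft2(images, axes=(-2, -1))
    spectra = np.fft.fftshift(spectra, axes=(-2, -1))  # center low frequencies
    return np.log1p(np.abs(spectra)).reshape(len(images), -1)

def train_detector(benign, adversarial):
    """benign, adversarial: (N, H, W) arrays of clean and attacked images."""
    X = np.concatenate([magnitude_spectrum_features(benign),
                        magnitude_spectrum_features(adversarial)])
    y = np.concatenate([np.zeros(len(benign)), np.ones(len(adversarial))])
    return LogisticRegression(max_iter=1000).fit(X, y)
```

The second method described in the abstract would extend the feature vector with the phase of Fourier coefficients taken from intermediate feature maps rather than operating on the input image alone.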