In mathematics (particularly functional analysis), convolution is a mathematical operation on two functions, f and g, that produces a third function expressing how the shape of one is modified by the other. The term convolution refers both to the resulting function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected and shifted; the integral is evaluated for all values of the shift, producing the convolution function.
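The discrete analogue replaces the integral with a sum over all shifts. A minimal NumPy sketch of this definition (the helper name `convolve` is ours, for illustration):

```python
import numpy as np

def convolve(f, g):
    """Discrete 1D convolution: (f * g)[n] = sum_m f[m] * g[n - m].
    Mirrors the continuous definition: g is reflected and shifted,
    and the product is accumulated over every valid overlap."""
    n = len(f) + len(g) - 1
    out = np.zeros(n)
    for i in range(n):
        for m in range(len(f)):
            j = i - m  # index into the flipped-and-shifted g
            if 0 <= j < len(g):
                out[i] += f[m] * g[j]
    return out

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.0, 1.0, 0.5])
print(convolve(f, g))  # matches np.convolve(f, g)
```

The same flip-and-shift structure underlies the 2D convolutions used in CNNs, with the sum taken over a small spatial neighborhood instead of the whole signal.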


Topic: Locally Masked Convolution for Autoregressive Models

Abstract: High-dimensional generative models have many applications, including image compression, multimedia generation, anomaly detection, and data completion. State-of-the-art estimators for natural images are autoregressive, decomposing the joint distribution over pixels into a product of conditionals parameterized by deep neural networks, e.g., convolutional neural networks such as PixelCNN. However, PixelCNN models only a single decomposition of the joint, and only a single generation order is efficient. For tasks such as image completion, these models are unable to use much of the observed context. To generate data in arbitrary orders, we introduce LMConv: a simple modification to the standard 2D convolution that allows arbitrary masks to be applied to the weights at each location in the image. Using LMConv, we learn an ensemble of distribution estimators that share parameters but differ in generation order, achieving improved performance on whole-image density estimation (2.89 bpd on unconditional CIFAR10), as well as globally coherent image completions.
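The core idea of LMConv is that the kernel mask varies per output location, rather than being fixed for the whole layer as in PixelCNN. A minimal single-channel NumPy sketch of that idea (an illustration under our own simplifications, not the authors' implementation):

```python
import numpy as np

def locally_masked_conv2d(x, w, masks):
    """Locally masked 2D convolution, single channel, zero padding.
    masks[i, j] is a k-by-k binary mask applied to the kernel weights
    at output location (i, j), so each position may attend to a
    different subset of its neighborhood."""
    H, W = x.shape
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, pad)
    out = np.zeros((H, W))
    for i in range(H):
        for j in range(W):
            patch = xp[i:i + k, j:j + k]
            out[i, j] = np.sum(patch * (w * masks[i, j]))
    return out

x = np.arange(16, dtype=float).reshape(4, 4)
w = np.ones((3, 3))
# Example: the same raster-scan causal mask at every location
# (only "past" pixels visible); LMConv allows this to differ per pixel.
m = np.array([[1, 1, 1], [1, 0, 0], [0, 0, 0]], dtype=float)
masks = np.broadcast_to(m, (4, 4, 3, 3))
out = locally_masked_conv2d(x, w, masks)
```

Because the mask is an input rather than a constant, the same shared weights `w` can be reused under many different generation orders, which is what enables the order-ensembling described in the abstract.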

Latest Content

It is widely agreed that feature maps in shallow layers relate more to image attributes such as texture and shape, whereas abstract semantic representations reside in deep layers. Meanwhile, some image information is lost during convolution operations. A natural, direct remedy is to combine the two to recover the lost detail through concatenation or addition. In practice, however, the image representation flowing through feature fusion does not fully match the semantic representation, and semantic deviation across layers undermines information purification, so useless information is mixed into the fusion layers. It is therefore crucial to narrow the gap among the fused layers and reduce the impact of noise during fusion. In this paper, we propose a method named the weight mechanism to reduce the gap between feature maps in the concatenation of series connections: by changing the weight of this concatenation in a residual U-Net, we obtain a 0.80% mIoU improvement on the Massachusetts buildings dataset. We also design a new architecture, named fused U-Net, to test the weight mechanism; it gains a further 0.12% mIoU improvement.
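One simple form such a weight mechanism can take is scaling each branch before the skip-connection concatenation, so the fusion layer can down-weight a mismatched representation. A sketch of that assumed form (the paper's exact formulation may differ; `weighted_concat`, `alpha`, and `beta` are our illustrative names):

```python
import numpy as np

def weighted_concat(shallow, deep, alpha, beta):
    """Weighted skip-connection fusion: scale the encoder (shallow)
    and decoder (deep) feature maps by scalars before concatenating
    along the channel axis. shallow, deep: (C, H, W) arrays."""
    return np.concatenate([alpha * shallow, beta * deep], axis=0)

shallow = np.ones((16, 8, 8))  # encoder (skip) features
deep = np.ones((16, 8, 8))     # decoder features
fused = weighted_concat(shallow, deep, alpha=0.7, beta=1.0)
print(fused.shape)  # (32, 8, 8)
```

In a trainable network, `alpha` and `beta` would be learnable parameters rather than fixed constants, letting the model itself decide how much shallow detail to mix into the fusion layer.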
