Deep learning has been at the foundation of large improvements in image classification. To improve the robustness of predictions, Bayesian approximations have been used to learn parameters in deep neural networks. We follow an alternative approach, using Gaussian processes as building blocks for Bayesian deep learning models, which has recently become viable due to advances in inference for convolutional and deep structures. We investigate deep convolutional Gaussian processes and identify a problem that holds back current performance. To remedy the issue, we introduce a translation-insensitive convolutional kernel, which removes the restriction of requiring identical outputs for identical patch inputs. We show empirically that this convolutional kernel improves performance in both shallow and deep models. On MNIST, FASHION-MNIST and CIFAR-10 we improve on previous GP models in terms of accuracy, while also obtaining better-calibrated predictive probabilities than simple DNN models.
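The core idea can be illustrated with a minimal sketch (not the authors' implementation, which in practice lives inside a sparse variational GP framework): a plain convolutional GP kernel sums a base kernel over all pairs of image patches, so two identical patches at different positions must contribute identically. A translation-insensitive variant additionally weights each pair of patches by a kernel on their locations. The kernel forms, lengthscales, and patch-extraction details below are illustrative assumptions.

```python
import math

def rbf(a, b, lengthscale=1.0):
    # Squared-exponential kernel between two equal-length vectors.
    d2 = sum((x - y) ** 2 for x, y in zip(a, b))
    return math.exp(-0.5 * d2 / lengthscale ** 2)

def extract_patches(img, size):
    # All overlapping size x size patches (flattened), with their
    # (row, col) locations in the image.
    H, W = len(img), len(img[0])
    patches, locs = [], []
    for i in range(H - size + 1):
        for j in range(W - size + 1):
            patch = [img[i + di][j + dj] for di in range(size) for dj in range(size)]
            patches.append(patch)
            locs.append((float(i), float(j)))
    return patches, locs

def conv_kernel(x1, x2, size=3, loc_lengthscale=None):
    # Plain convolutional kernel: average patch-response kernel over all
    # patch pairs. With loc_lengthscale set, each pair is also weighted
    # by an RBF kernel on the patch locations, so identical patch
    # contents at different positions no longer contribute identically
    # (the "translation-insensitive" idea, sketched loosely here).
    p1, l1 = extract_patches(x1, size)
    p2, l2 = extract_patches(x2, size)
    k = 0.0
    for a, la in zip(p1, l1):
        for b, lb in zip(p2, l2):
            w = 1.0 if loc_lengthscale is None else rbf(la, lb, loc_lengthscale)
            k += w * rbf(a, b)
    return k / (len(p1) * len(p2))
```

Because the location weights lie in (0, 1], the translation-insensitive value is never larger than the plain convolutional value, and both remain symmetric positive kernels; the location lengthscale controls how quickly the response to a patch is allowed to vary with its position.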