Convolutional Neural Networks (CNNs) filter the input data using a series of spatial convolution operators with compact stencils and point-wise non-linearities. Commonly, the convolution operators couple features from all channels, which leads to immense computational cost in the training of and prediction with CNNs. To improve the efficiency of CNNs, we introduce lean convolution operators that reduce the number of parameters and the computational complexity. Our new operators can be used in a wide range of existing CNNs. Here, we exemplify their use in residual networks (ResNets), which have proven reliable in recent years and have been analyzed intensively. In our experiments on three image classification problems, the proposed LeanResNet yields results comparable to other recently proposed reduced architectures with a similar number of parameters.
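To see why coupling all channels dominates the parameter count, compare a standard convolution against a channel-decoupled alternative. The abstract does not specify the exact form of the lean operator, so the sketch below uses a depthwise-plus-pointwise decomposition purely as an illustrative assumption of how a reduced operator cuts parameters:

```python
# Parameter counts for a 2-D convolution layer with c_in input channels,
# c_out output channels, and a k x k stencil (biases ignored).

def full_conv_params(c_in, c_out, k):
    # A standard convolution couples every input channel to every
    # output channel: one k x k stencil per (input, output) pair.
    return c_in * c_out * k * k

def depthwise_separable_params(c_in, c_out, k):
    # Illustrative reduced operator (not necessarily the paper's):
    # one spatial k x k stencil per input channel, followed by a
    # 1 x 1 convolution that mixes channels point-wise.
    return c_in * k * k + c_in * c_out

c_in = c_out = 256
k = 3
print(full_conv_params(c_in, c_out, k))             # 589824
print(depthwise_separable_params(c_in, c_out, k))   # 67840
```

For a 256-channel layer with a 3x3 stencil, the decoupled form needs roughly 9x fewer parameters, which is the kind of saving lean operators target.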