Deep learning is a subset of a broader family of machine learning methods based on learning data representations. These models are inspired by biological nervous systems, although they differ in various structural and functional properties from biological brains. The elementary constituents of deep learning models are neurons, which can be regarded as functions that receive inputs and produce an output equal to a weighted sum of those inputs fed through an activation function. Several neuron models have been proposed over the years, all based on learnable parameters called weights. In this paper we present a new type of artificial neuron, the double-weight neuron, characterized by additional learnable weights that lead to a more complex and accurate system. We tested a feed-forward and a convolutional neural network consisting of double-weight neurons on the MNIST dataset, and we tested a convolutional network on the CIFAR-10 dataset. For MNIST we find a $\approx 4\%$ and $\approx 1\%$ improvement in classification accuracy, respectively, when compared to a standard feed-forward and convolutional neural network built with the same sets of hyperparameters. For CIFAR-10 we find a $\approx 12\%$ improvement in classification accuracy. We thus conclude that this novel artificial neuron can be considered a valuable alternative to common ones.
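The standard neuron described above can be sketched directly, together with one plausible reading of a "double-weight" variant. The exact formulation of the double-weight neuron is defined later in the paper; here the second weight vector `v` is assumed, purely for illustration, to modulate each input elementwise before the usual weighted sum.

```python
import numpy as np

def standard_neuron(x, w, b):
    # Classic artificial neuron: an activation function applied to the
    # weighted sum of the inputs plus a bias.
    return np.tanh(np.dot(w, x) + b)

def double_weight_neuron(x, w, v, b):
    # Hypothetical sketch only: a second learnable weight vector v scales
    # each input before the usual weighted sum. This is one possible
    # interpretation of "additional learnable weights"; the paper's exact
    # definition may differ.
    return np.tanh(np.dot(w, v * x) + b)
```

With `v` set to all ones, the hypothetical double-weight neuron reduces to the standard neuron, which is a natural sanity check for any such extension.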