Equilibrium propagation has been proposed as a biologically plausible alternative to the backpropagation algorithm. The local nature of its gradient computations, combined with the use of convergent RNNs to reach equilibrium states, makes this approach well-suited for implementation on neuromorphic hardware. However, previous studies on equilibrium propagation have been restricted to networks containing only dense layers, or to relatively small architectures with a few convolutional layers followed by a final dense layer. These networks exhibit a significant accuracy gap compared to similarly sized feedforward networks trained with backpropagation. In this work, we introduce the Hopfield-Resnet architecture, which incorporates residual (or skip) connections in Hopfield networks with clipped $\mathrm{ReLU}$ as the activation function. The proposed architectural enhancements enable the training of networks with nearly twice the number of layers reported in prior works. For example, Hopfield-Resnet13 achieves 93.92\% accuracy on CIFAR-10, which is $\approx$3.5\% higher than the previous best result and comparable to that of Resnet13 trained using backpropagation.
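As a minimal illustration of the clipped $\mathrm{ReLU}$ activation mentioned above: it behaves like a standard ReLU but saturates at an upper bound, keeping neuron states confined to a bounded interval (a property convergent Hopfield-style dynamics typically rely on). The upper bound of 1.0 below is an assumption for illustration; the abstract does not specify the clipping ceiling used in the paper.

```python
import numpy as np

def clipped_relu(x, upper=1.0):
    """Clipped ReLU: identity on [0, upper], saturating outside.

    `upper=1.0` is an assumed default; the paper's actual bound
    may differ.
    """
    return np.clip(x, 0.0, upper)

x = np.array([-0.5, 0.3, 0.9, 1.7])
y = clipped_relu(x)  # values below 0 and above 1 are saturated
```

Because the activation is bounded on both sides, its derivative is zero outside $[0, \mathrm{upper}]$, which keeps equilibrium states from drifting unboundedly during the relaxation phase.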