How recurrently connected networks of spiking neurons in the brain acquire powerful information processing capabilities through learning has remained a mystery. This lack of understanding is linked to a lack of learning algorithms for recurrent networks of spiking neurons (RSNNs) that are both functionally powerful and implementable by known biological mechanisms. Since RSNNs are also a primary target for implementations of brain-inspired circuits in neuromorphic hardware, this lack of algorithmic insight hinders technological progress in that area as well. The gold standard for learning in recurrent neural networks in machine learning is back-propagation through time (BPTT), which implements stochastic gradient descent with regard to a given loss function. But BPTT is unrealistic from a biological perspective, since it requires transmitting error signals backwards in time and in space, i.e., from post- to presynaptic neurons. We show that merging locally available information online, during the computation, with suitable top-down learning signals in real time yields highly capable approximations to BPTT. For tasks where information about errors arises only late during a network computation, we enrich the locally available information with feedforward eligibility traces of synapses that can easily be computed in an online manner. The resulting new generation of learning algorithms for recurrent neural networks provides a new understanding of network learning in the brain that can be tested experimentally. In addition, these algorithms provide efficient methods for on-chip training of RSNNs in neuromorphic hardware.
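To make the core idea concrete, the following is a minimal sketch of one plausible instantiation, not the paper's exact algorithm: a leaky integrate-and-fire RSNN in which each synapse maintains an eligibility trace computed online from locally available quantities, and a top-down learning signal (here, output errors broadcast through fixed random feedback weights) is merged with that trace at every time step to form a gradient estimate. All network sizes, constants, and the specific trace and pseudo-derivative definitions are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes and constants (assumed for this sketch, not from the paper).
n_in, n_rec, n_out, T = 20, 50, 2, 100
alpha, v_th, gamma, eta = 0.9, 1.0, 0.3, 1e-3

W_in = rng.normal(0.0, 1.0 / np.sqrt(n_in), (n_rec, n_in))
W_rec = rng.normal(0.0, 1.0 / np.sqrt(n_rec), (n_rec, n_rec))
W_out = rng.normal(0.0, 1.0 / np.sqrt(n_rec), (n_out, n_rec))
B = rng.normal(0.0, 1.0, (n_rec, n_out))  # fixed random feedback weights

x = rng.binomial(1, 0.05, (T, n_in)).astype(float)  # toy input spike trains
y_target = rng.normal(0.0, 1.0, (T, n_out))         # toy regression target

v = np.zeros(n_rec)      # membrane potentials
z = np.zeros(n_rec)      # recurrent spikes from the previous step
zbar = np.zeros(n_rec)   # low-pass filtered presynaptic spike trains
dW_rec = np.zeros_like(W_rec)

for t in range(T):
    # Leaky integrate-and-fire dynamics with a soft reset after each spike.
    v = alpha * v + W_in @ x[t] + W_rec @ z - z * v_th
    z = (v > v_th).astype(float)

    # Eligibility trace: a pseudo-derivative of the spike nonlinearity
    # times the filtered presynaptic activity (local in space and time).
    psi = gamma * np.maximum(0.0, 1.0 - np.abs((v - v_th) / v_th))
    e = np.outer(psi, zbar)      # e[j, i] for the synapse from i to j
    zbar = alpha * zbar + z      # advance the presynaptic trace online

    # Top-down learning signal: output error broadcast through fixed weights.
    y = W_out @ z
    L = B @ (y - y_target[t])

    # Online merge of learning signal and eligibility trace (gradient estimate).
    dW_rec += L[:, None] * e

W_rec -= eta * dW_rec  # one weight update at the end of the trial
```

In this sketch the fixed random matrix B stands in for the "suitable top-down learning signals" of the abstract; replacing it with the transpose of the output weights would bring the estimate closer to the true gradient, at the cost of requiring weight symmetry.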