AI systems on edge devices face a critical challenge in open-world environments: adapting when data distributions shift and novel classes emerge. While offline training dominates current paradigms, online continual learning (OCL), in which models learn incrementally from non-stationary streams without catastrophic forgetting, remains challenging in power-constrained settings. We present a neuromorphic solution called CLP-SNN: a spiking neural network architecture for Continually Learning Prototypes, together with its implementation on Intel's Loihi 2 chip. Our approach introduces three innovations: (1) event-driven and spatiotemporally sparse local learning, (2) a self-normalizing three-factor learning rule that keeps weights normalized during learning, and (3) integrated neurogenesis and metaplasticity for capacity expansion and forgetting mitigation. In OpenLORIS few-shot learning experiments, CLP-SNN achieves accuracy competitive with replay-based methods while remaining rehearsal-free. CLP-SNN also delivers transformative efficiency gains: it is $70\times$ faster ($0.33$ ms vs. $23.2$ ms) and $5{,}600\times$ more energy efficient ($0.05$ mJ vs. $281$ mJ) than the best alternative OCL method on an edge GPU. These results demonstrate that co-designed brain-inspired algorithms and neuromorphic hardware can break traditional accuracy-efficiency trade-offs for future edge AI systems.
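To make the three abstract ingredients concrete, the sketch below shows a minimal, rate-coded stand-in for a prototype learner with a self-normalizing local update and simple neurogenesis. It is not the authors' CLP-SNN rule or the Loihi 2 implementation; the class name `PrototypePool`, the cosine-similarity matching, and the parameters `match_threshold` and `lr` are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's exact rule): prototype learning with
# a three-factor-style gated update, weight self-normalization, and neurogenesis.
import numpy as np


class PrototypePool:
    def __init__(self, dim, match_threshold=0.6, lr=0.1):
        self.dim = dim                        # input feature dimension
        self.match_threshold = match_threshold  # below this, allocate a new prototype
        self.lr = lr                          # local learning rate
        self.prototypes = np.empty((0, dim))  # unit-norm prototype weight vectors
        self.labels = []                      # class label per prototype

    def _similarity(self, x):
        # cosine similarity; valid because prototypes are kept at unit norm
        return self.prototypes @ (x / (np.linalg.norm(x) + 1e-9))

    def update(self, x, label):
        """Three factors: pre-synaptic input x, post-synaptic winner activity,
        and a modulatory (label-derived) third factor gating the update."""
        x = x / (np.linalg.norm(x) + 1e-9)
        if not self.labels:
            return self._neurogenesis(x, label)
        sims = self._similarity(x)
        winner = int(np.argmax(sims))
        if sims[winner] < self.match_threshold or self.labels[winner] != label:
            # no adequate same-class match -> grow capacity (neurogenesis)
            return self._neurogenesis(x, label)
        # local Hebbian-style move of the winning prototype toward the input
        w = self.prototypes[winner] + self.lr * (x - self.prototypes[winner])
        # self-normalization: renormalize so the weight vector stays unit norm
        self.prototypes[winner] = w / (np.linalg.norm(w) + 1e-9)
        return winner

    def _neurogenesis(self, x, label):
        # allocate a new prototype initialized at the (normalized) input
        self.prototypes = np.vstack([self.prototypes, x[None, :]])
        self.labels.append(label)
        return len(self.labels) - 1

    def predict(self, x):
        sims = self._similarity(x)
        return self.labels[int(np.argmax(sims))]
```

In a spiking, event-driven realization such as the one the abstract describes, the normalized input would instead be a sparse spike train and the third factor a modulatory signal delivered on-chip; the sketch only illustrates the prototype update and growth logic at a high level.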