The Forward-Forward algorithm eliminates backpropagation's memory constraints and biological implausibility by replacing the backward pass with two forward passes, one over positive data and one over negative data. Conventional implementations, however, suffer from a critical limitation: inter-layer isolation, in which each layer optimizes its goodness function independently and collective learning dynamics go unexploited. This isolation constrains representational coordination and limits convergence efficiency in deeper architectures. This paper introduces Collaborative Forward-Forward (CFF) learning, which extends the original algorithm with inter-layer cooperation mechanisms that preserve forward-only computation while enabling global context integration. Our framework implements two collaborative paradigms: Fixed CFF (F-CFF), with constant inter-layer coupling, and Adaptive CFF (A-CFF), with learnable collaboration parameters that evolve during training. The collaborative goodness function incorporates weighted contributions from all layers, enabling coordinated feature learning while maintaining memory efficiency and biological plausibility. Comprehensive evaluation on MNIST and Fashion-MNIST demonstrates significant performance improvements over baseline Forward-Forward implementations. These findings establish inter-layer collaboration as a fundamental enhancement to Forward-Forward learning, with immediate applicability to neuromorphic computing architectures and energy-constrained AI systems.
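To make the collaborative goodness function concrete, the following is a minimal PyTorch sketch, assuming the collaborative goodness is a weighted sum of each layer's standard squared-activation goodness; the function names, weight values, and softmax normalization are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn

def layer_goodness(h: torch.Tensor) -> torch.Tensor:
    # Standard Forward-Forward goodness: mean squared activation per sample.
    return h.pow(2).mean(dim=1)

def collaborative_goodness(activations, weights):
    # Weighted sum of per-layer goodness values, shape (batch,),
    # so every layer's signal contributes to the shared objective.
    return sum(w * layer_goodness(h) for w, h in zip(weights, activations))

# Toy activations from a hypothetical 3-layer network on a batch of 4 samples.
activations = [torch.randn(4, 64), torch.randn(4, 32), torch.randn(4, 16)]

# F-CFF: constant inter-layer coupling (weight values are illustrative).
f_weights = torch.tensor([0.5, 0.3, 0.2])
g_fixed = collaborative_goodness(activations, f_weights)

# A-CFF: learnable collaboration parameters; softmax keeps them a
# convex weighting as they evolve during training (an assumed choice).
logits = nn.Parameter(torch.zeros(3))
a_weights = torch.softmax(logits, dim=0)
g_adaptive = collaborative_goodness(activations, a_weights)
```

Under this reading, F-CFF keeps the coupling weights fixed throughout training, while A-CFF would update the logits alongside each layer's local objective, letting the degree of collaboration itself be learned.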