The application of deep learning to communication systems has been a growing field of interest in recent years. Forward-forward (FF) learning is an efficient alternative to backpropagation (BP), the standard training procedure for neural networks. Among its several advantages, FF learning does not require the communication channel to be differentiable and does not rely on the global availability of partial derivatives, allowing for an energy-efficient implementation. In this work, we design end-to-end learned autoencoders using the FF algorithm and numerically evaluate their performance for the additive white Gaussian noise and Rayleigh block fading channels. We demonstrate their competitiveness with BP-trained systems in the case of joint coding and modulation, and in a scenario where a fixed, non-differentiable modulation stage is applied. Moreover, we provide further insights into the design principles of the FF network, its training convergence behavior, and significant memory and processing time savings compared to BP-based approaches.
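To illustrate the local, derivative-free-across-layers training that FF learning relies on, the following is a minimal NumPy sketch of a single forward-forward layer in the style of Hinton's FF algorithm: the layer is trained purely from its own "goodness" (sum of squared activations) on positive versus negative samples, with no gradient flowing from other layers. All names, data distributions, and hyperparameters here are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def relu(x):
    return np.maximum(x, 0.0)

def sigmoid(x):
    # Clipped for numerical stability.
    x = np.clip(x, -30.0, 30.0)
    return 1.0 / (1.0 + np.exp(-x))

class FFLayer:
    """One layer trained with a local forward-forward objective."""

    def __init__(self, d_in, d_out, theta=2.0, lr=0.03):
        self.W = rng.normal(scale=0.1, size=(d_out, d_in))
        self.theta, self.lr = theta, lr

    def goodness(self, x):
        y = relu(self.W @ x)
        return y, np.sum(y ** 2)

    def update(self, x, positive):
        # Local objective: push goodness above theta for positive
        # samples and below theta for negative samples, i.e. minimize
        # softplus(theta - g) or softplus(g - theta) respectively.
        y, g = self.goodness(x)
        sign = 1.0 if positive else -1.0
        dL_dg = -sign * sigmoid(sign * (self.theta - g))
        dL_dy = dL_dg * 2.0 * y      # dg/dy = 2y; zero where ReLU is off
        self.W -= self.lr * np.outer(dL_dy, x)

# Toy data: two Gaussian clusters standing in for positive/negative samples.
layer = FFLayer(d_in=8, d_out=16)
for _ in range(2000):
    layer.update(rng.normal(loc=1.0, size=8), positive=True)
    layer.update(rng.normal(loc=-1.0, size=8), positive=False)

g_pos = np.mean([layer.goodness(rng.normal(loc=1.0, size=8))[1] for _ in range(200)])
g_neg = np.mean([layer.goodness(rng.normal(loc=-1.0, size=8))[1] for _ in range(200)])
```

Because each layer only needs its own forward pass and a local update rule, nothing in this scheme requires the operations between layers (e.g. a communication channel or a fixed modulation stage) to be differentiable.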