Channel coding is vital for reliable sixth-generation (6G) data transmission, which employs diverse error correction codes across application scenarios. Traditional decoders require dedicated hardware for each code, leading to high hardware costs. Recently, artificial intelligence (AI)-driven approaches, such as the error correction code Transformer (ECCT) and its enhanced version, the foundation error correction code Transformer (FECCT), have been proposed to reduce hardware cost by leveraging a single Transformer to decode multiple codes. However, the $\mathcal{O}(N^2)$ computational complexity of the Transformer's self-attention mechanism, where $N$ denotes the sequence length, limits their scalability. To reduce this complexity, we propose a unified Transformer-based decoder that handles multiple linear block codes within a single framework. Specifically, a standardized unit aligns code length and code rate across different code types, while a redesigned low-rank unified attention module with $\mathcal{O}(N)$ computational complexity is shared across the Transformer's attention heads. Additionally, a sparse mask derived from the sparsity of the parity-check matrix is introduced to strengthen the decoder's ability to capture the inherent constraints between information and parity-check bits, improving decoding accuracy while reducing computational complexity by a further $86\%$. Extensive experimental results demonstrate that the proposed unified Transformer-based decoder outperforms existing methods and provides a high-performance, low-complexity solution for next-generation wireless communication systems.
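The two complexity-reduction ideas above can be illustrated with a minimal sketch. This is not the paper's implementation: it builds a boolean attention mask from a toy (7,4) Hamming parity-check matrix, under the assumption that two bit positions attend to each other only if they share a parity check, and it uses a kernel-based linear attention (elu + 1 feature map, in the style of linear Transformers) as a stand-in for the low-rank unified attention module. The helper names `parity_sparse_mask` and `linear_attention` are illustrative.

```python
import numpy as np

rng = np.random.default_rng(0)

def parity_sparse_mask(H: np.ndarray) -> np.ndarray:
    """Boolean mask: positions i and j may attend iff they share a parity check.

    Illustrative construction derived from the parity-check matrix's sparsity;
    the paper's exact mask may differ.
    """
    n = H.shape[1]
    mask = np.eye(n, dtype=bool)          # every position attends to itself
    for check in H:                       # each row of H is one parity check
        idx = np.flatnonzero(check)       # bit positions in this check
        mask[np.ix_(idx, idx)] = True     # connect all pairs within the check
    return mask

def linear_attention(Q, K, V):
    """Kernelized attention whose cost is O(N) in sequence length.

    Uses the feature map phi(x) = elu(x) + 1; a stand-in for the paper's
    low-rank unified attention, not its actual module.
    """
    phi = lambda x: np.where(x > 0, x + 1.0, np.exp(x))  # elu(x) + 1 > 0
    Qf, Kf = phi(Q), phi(K)
    kv = Kf.T @ V                         # (d, d): size independent of N
    z = Qf @ Kf.sum(axis=0)               # (N,): positive normalizer
    return (Qf @ kv) / z[:, None]

# Toy example: parity-check matrix of the (7,4) Hamming code
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])
mask = parity_sparse_mask(H)
print(mask.sum(), "of", mask.size, "attention pairs kept")

X = rng.standard_normal((7, 8))
out = linear_attention(X, X, X)
print(out.shape)  # (7, 8)
```

For a short code like this toy Hamming example the mask is still fairly dense; the 86% complexity reduction reported in the abstract applies to the longer, sparser parity-check matrices of practical codes.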