A general form of codebook design for code-domain non-orthogonal multiple access (CD-NOMA) can be considered equivalent to an autoencoder (AE)-based constellation design for multi-user multidimensional modulation (MU-MDM). However, because existing AE architectures constrain the design space for an optimal constellation, e.g., by fixing the resource mapping and allocating equal power to all codebooks, they produce constellations with suboptimal bit-error-rate (BER) performance. Accordingly, we propose a new MU-MDM AE architecture and an underlying training methodology for the joint optimization of resource mapping and constellation design with bit-to-symbol mapping, aiming to approach the BER performance of a single-user MDM (SU-MDM) AE model with the same spectral efficiency. The core of the proposed AE architecture is dense resource mapping combined with a novel power allocation layer that normalizes the sum of user codebook power across all resources. This globalizes the domain of the constellation design by enabling flexible resource mapping and power allocation, allowing the AE-based training to approach globally optimal MU-MDM constellations for CD-NOMA. Extensive BER simulations demonstrate that the proposed design outperforms existing CD-NOMA designs while approaching, within 0.3 dB, the single-user BER performance achieved by the equivalent SU-MDM AE over the additive white Gaussian noise channel.
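To make the role of the described power allocation layer concrete, below is a minimal PyTorch sketch of a sum-power normalization layer. The module name, tensor shapes, and the unit-average-power-per-resource convention are illustrative assumptions, not the authors' implementation: the layer simply rescales the stacked user codewords so that their total power is fixed, leaving the per-user and per-resource power split free for training to optimize instead of hard-coding equal power per codebook.

```python
import torch
import torch.nn as nn


class SumPowerNormalization(nn.Module):
    """Hypothetical sketch of a joint power allocation layer.

    Rescales the stacked per-user codewords so the total transmit power
    summed over all users and resources equals the number of resources
    (i.e., unit average power per resource). Because the constraint is
    global rather than per-codebook, gradient-based training can move
    power between users and resources freely.
    """

    def __init__(self, num_resources: int):
        super().__init__()
        self.num_resources = num_resources

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_users, num_resources, 2), real/imag codeword
        # components produced by per-user dense (fully connected)
        # resource-mapping encoders.
        total_power = x.pow(2).sum(dim=(1, 2, 3), keepdim=True)
        # Scale so sum power per sample equals num_resources.
        scale = torch.sqrt(self.num_resources / total_power.clamp_min(1e-12))
        return x * scale


# Usage sketch: 4 users superimposed on 8 shared resources.
layer = SumPowerNormalization(num_resources=8)
x = torch.randn(32, 4, 8, 2)   # raw per-user codewords from the encoders
x_norm = layer(x)              # jointly power-normalized codewords
y = x_norm.sum(dim=1)          # superimposed CD-NOMA transmit signal
```

Under these assumptions, the normalization replaces the usual per-codebook constraint with a single sum-power constraint, which is what lets the AE search the globalized constellation design space described above.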

