Cooperative perception plays a vital role in extending a vehicle's sensing range beyond its line of sight. However, exchanging raw sensory data under limited communication resources is infeasible. To enable efficient cooperative perception, vehicles need to answer the following fundamental questions: what sensory data needs to be shared, at which resolution, and with which vehicles? To answer these questions, this paper proposes a novel framework that enables reinforcement learning (RL)-based vehicular association, resource block (RB) allocation, and content selection of cooperative perception messages (CPMs) by utilizing a quadtree-based point cloud compression mechanism. Furthermore, a federated RL approach is introduced to speed up the training process across vehicles. Simulation results show that the RL agents can efficiently learn vehicular association, RB allocation, and message content selection while maximizing vehicles' satisfaction in terms of the received sensory information. The results also show that federated RL improves the training process: better policies can be achieved within the same amount of time compared with the non-federated approach.
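To give a concrete sense of how a quadtree-based point cloud compression can expose a resolution knob for CPM content selection, the following is a minimal sketch, not the paper's actual mechanism: it recursively encodes the occupancy of a 2D sensing area as a depth-first bit string, where the maximum subdivision depth controls the resolution and hence the message size. The function name quadtree_encode, the max_depth parameter, and the 100 m x 100 m toy area are illustrative assumptions.

```python
import numpy as np

def quadtree_encode(points, x0, y0, size, max_depth):
    """Encode 2D point occupancy as a quadtree bit string (depth-first).

    points:    (N, 2) array of planar point coordinates.
    (x0, y0):  lower-left corner of the current square cell.
    size:      side length of the current cell.
    max_depth: remaining subdivision levels; acts as the resolution knob.
    Returns a list of occupancy bits.
    """
    inside = points[
        (points[:, 0] >= x0) & (points[:, 0] < x0 + size) &
        (points[:, 1] >= y0) & (points[:, 1] < y0 + size)
    ]
    if inside.shape[0] == 0:
        return [0]          # empty cell: single 0 bit, no children encoded
    if max_depth == 0:
        return [1]          # occupied leaf at the chosen resolution
    bits = [1]              # occupied internal cell, followed by its 4 quadrants
    half = size / 2.0
    for dx in (0.0, half):
        for dy in (0.0, half):
            bits += quadtree_encode(inside, x0 + dx, y0 + dy, half, max_depth - 1)
    return bits

# Hypothetical example: compress the same toy point cloud at two resolutions.
cloud = np.random.rand(500, 2) * 100.0                       # 100 m x 100 m area
coarse = quadtree_encode(cloud, 0.0, 0.0, 100.0, max_depth=3)
fine = quadtree_encode(cloud, 0.0, 0.0, 100.0, max_depth=6)
print(len(coarse), len(fine))                                # coarser depth -> fewer bits
```

In this sketch, an RL-based content selection policy could pick the subdivision depth (and hence the transmitted bit budget) per receiving vehicle, which is the kind of resolution trade-off the abstract alludes to.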