Collective Perception (CP) has emerged as a promising approach to overcome the limitations of individual perception in the context of autonomous driving. Various approaches have been proposed to realize collective perception; however, the Sensor2Sensor domain gap that arises from the use of different sensor systems in Connected and Automated Vehicles (CAVs) remains largely unaddressed. This is primarily due to the scarcity of datasets containing heterogeneous sensor setups among the CAVs. The recently released SCOPE dataset addresses this issue by providing data from three different LiDAR sensors for each CAV. This study is the first to address the Sensor2Sensor domain gap in vehicle-to-vehicle (V2V) collective perception. First, we present our sensor-domain-robust architecture S2S-Net. Then, an in-depth analysis of the Sensor2Sensor domain adaptation capabilities of state-of-the-art CP methods and of S2S-Net is conducted on the SCOPE dataset. The analysis shows that all evaluated state-of-the-art collective perception methods suffer severely from the Sensor2Sensor domain gap, whereas S2S-Net maintains very high performance in unseen sensor domains and outperforms the evaluated state-of-the-art methods by up to 44 percentage points.