Biologically inspired algorithms for simultaneous localization and mapping (SLAM), such as RatSLAM, have been shown to yield effective and robust robot navigation in both indoor and outdoor environments. One drawback, however, is their sensitivity to perceptual aliasing caused by the template matching of low-dimensional sensory templates. In this paper, we propose an unsupervised representation learning method that yields low-dimensional latent state descriptors that can be used for RatSLAM. Our method is sensor-agnostic and can be applied to any sensor modality, as we illustrate for camera images, radar range-Doppler maps, and lidar scans. We also show how combining multiple sensors can increase robustness by reducing the number of false matches. We evaluate on a dataset captured with a mobile robot navigating a warehouse-like environment, moving through different aisles with similar appearance, which makes it hard for SLAM algorithms to disambiguate locations.