In this paper, we introduce AE-FABMAP, a new self-supervised bag-of-words (BoW) based SLAM method. We also present AE-ORB-SLAM, a modified version of the current state-of-the-art BoW-based SLAM algorithm. In both, a deep convolutional autoencoder is used to find loop closures. In BoW-based visual SLAM, vector quantization (VQ) is considered the most time-consuming part of the pipeline; it is usually performed in the offline phase of the SLAM algorithm using unsupervised algorithms such as k-means++. We address the loop-closure detection stage of BoW-based SLAM methods in a self-supervised manner by integrating an autoencoder into the vector-quantization step. This approach can increase the accuracy of large-scale SLAM, where plenty of unlabeled data is available. The main advantage of a self-supervised approach is that it reduces the amount of labeling required. Furthermore, experiments show that autoencoders are far more efficient than semi-supervised methods such as graph convolutional neural networks in terms of both speed and memory consumption. We integrated this method into the state-of-the-art long-range appearance-based visual BoW SLAM system FABMAP2, as well as into ORB-SLAM. Experiments on indoor and outdoor datasets demonstrate the superiority of this approach over regular FABMAP2 in all cases, achieving higher accuracy in loop-closure detection and trajectory generation.
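To make the idea concrete, the sketch below shows how an autoencoder's latent space can stand in for raw-descriptor clustering when building a visual vocabulary. This is a minimal illustration under simplifying assumptions, not the paper's actual network: the linear autoencoder, the toy random "descriptors", and all dimensions and names here are hypothetical, and the latent-space clustering uses a few plain k-means iterations in place of k-means++.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data standing in for local image descriptors (e.g. ORB); purely illustrative.
descriptors = rng.normal(size=(500, 32)).astype(np.float64)

# --- Tiny linear autoencoder trained by gradient descent on reconstruction loss ---
d_in, d_lat = 32, 8
W_enc = rng.normal(scale=0.1, size=(d_in, d_lat))
W_dec = rng.normal(scale=0.1, size=(d_lat, d_in))
lr = 1e-3
for _ in range(200):
    z = descriptors @ W_enc            # encode to low-dimensional latent space
    x_hat = z @ W_dec                  # decode back to descriptor space
    err = x_hat - descriptors          # reconstruction error
    # gradients of the mean-squared reconstruction loss
    g_dec = z.T @ err / len(descriptors)
    g_enc = descriptors.T @ (err @ W_dec.T) / len(descriptors)
    W_dec -= lr * g_dec
    W_enc -= lr * g_enc

# --- Vector quantization in the learned latent space (plain k-means iterations) ---
k = 16
latent = descriptors @ W_enc
centers = latent[rng.choice(len(latent), k, replace=False)]
for _ in range(10):
    dists = ((latent[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    labels = dists.argmin(1)           # each descriptor's visual-word index
    for j in range(k):
        if (labels == j).any():
            centers[j] = latent[labels == j].mean(0)

# Bag-of-words histogram for one "image" (its first 100 descriptors)
bow = np.bincount(labels[:100], minlength=k)
print(bow.sum())
```

Because quantization happens in the compressed latent space rather than on the full descriptors, the distance computations are cheaper, which is one way the autoencoder can speed up the offline vocabulary-building phase.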