The sense of touch is fundamental to many manipulation tasks, yet it is rarely exploited in robot manipulation. In this work we tackle the problem of learning rich touch features through cross-modal self-supervision, and we evaluate these features on identifying objects and their properties in a few-shot classification setting. We introduce two new datasets, collected with a simulated anthropomorphic robotic hand equipped with tactile sensors, covering both synthetic and everyday objects. Several self-supervised learning methods are benchmarked on these datasets by evaluating few-shot classification on unseen objects and poses. Our experiments indicate that cross-modal self-supervision effectively improves touch representations and, in turn, has great potential to enhance robot manipulation skills.
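The cross-modal self-supervision described above can be illustrated with a minimal sketch of a contrastive (InfoNCE-style) objective that aligns touch embeddings with the vision embeddings of the same object instance. Everything here is an assumption for illustration: the linear encoders, the feature sizes, and the pairing of touch and vision observations are hypothetical and not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, w):
    # Hypothetical linear encoder followed by L2 normalisation
    # (a stand-in for whatever network the method actually uses).
    z = x @ w
    return z / np.linalg.norm(z, axis=1, keepdims=True)

def infonce_loss(z_touch, z_vision, temperature=0.1):
    # Cross-modal contrastive loss: each touch embedding should score
    # highest against the vision embedding of the same observation
    # (the diagonal of the similarity matrix); other pairs are negatives.
    logits = (z_touch @ z_vision.T) / temperature
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy batch: 8 paired touch/vision observations with made-up feature sizes.
touch = rng.normal(size=(8, 32))
vision = rng.normal(size=(8, 64))
w_touch = rng.normal(size=(32, 16))
w_vision = rng.normal(size=(64, 16))

loss = infonce_loss(encode(touch, w_touch), encode(vision, w_vision))
print(loss)
```

After such pretraining, the touch encoder would be frozen and its embeddings used for few-shot classification of unseen objects and poses, e.g. by nearest-neighbour matching against a handful of labelled support examples.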