Imitation Learning (IL) holds great potential for learning repetitive manipulation tasks, such as those in industrial assembly. However, its effectiveness is often limited by insufficient trajectory precision due to compounding errors. In this paper, we introduce Grasped Object Manifold Projection (GOMP), an interactive method that mitigates these errors by constraining a non-rigidly grasped object to a lower-dimensional manifold. GOMP assumes a precision task in which a manipulator holds an object that may shift within the grasp in an observable manner and must be mated with a grounded part. Crucially, all GOMP enhancements are learned from the same expert dataset used to train the base IL policy, and are adjusted with an n-armed-bandit-based interactive component. We provide a theoretical basis for GOMP's improvement upon the well-known compounding error bound in the IL literature. We demonstrate the framework on four precise assembly tasks using tactile feedback, and note that the approach remains modality-agnostic. Data and videos are available at williamvdb.github.io/GOMPsite.