Single-view, model-based RGB object pose estimation methods achieve strong generalization but are fundamentally limited by depth ambiguity, clutter, and occlusions. Multi-view pose estimation methods have the potential to resolve these issues, but existing works either rely on precise single-view pose estimates or lack generalization to unseen objects. We address these challenges via three contributions. First, we introduce AlignPose, a 6D object pose estimation method that aggregates information from multiple extrinsically calibrated RGB views and requires no object-specific training or symmetry annotation. Second, the key component of this approach is a new multi-view feature-metric refinement designed specifically for object pose. It optimizes a single, consistent world-frame object pose by minimizing the discrepancy between on-the-fly rendered object features and observed image features across all views simultaneously. Third, we report extensive experiments on four datasets (YCB-V, T-LESS, ITODD-MV, HouseCat6D) under the BOP benchmark evaluation protocol and show that AlignPose outperforms other published methods, especially on challenging industrial datasets, where multiple views are readily available in practice.
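To make the refinement objective concrete, the following is a minimal sketch of the kind of world-frame optimization the abstract describes; the notation (the feature extractor $F$, the feature renderer $R$, and the pose symbols) is ours for illustration and is not taken from the paper. Given $V$ views with known extrinsics $T_{c_v w} \in SE(3)$, observed feature maps $F(I_v)$ extracted from each image $I_v$, and a renderer $R(\cdot)$ that produces object features for a camera-frame pose, a single world-frame object pose $T_{wo}$ could be estimated as
\[
\hat{T}_{wo} \;=\; \operatorname*{arg\,min}_{T_{wo} \in SE(3)} \;\sum_{v=1}^{V} \big\lVert F(I_v) - R\!\left(T_{c_v w}\, T_{wo}\right) \big\rVert^{2},
\]
where the pose would typically be parameterized by a local update in the Lie algebra $\mathfrak{se}(3)$ and the sum over views minimized jointly, so that every calibrated camera constrains the same world-frame pose rather than each view producing an independent estimate.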