Articulated objects are pervasive in daily life. However, due to their intrinsically high-DoF structure, the joint states of articulated objects are hard to estimate. To model articulated objects, two kinds of shape deformation, namely geometric deformation and pose deformation, should be considered. In this work, we present a novel category-specific parametric representation called Object Model with Articulated Deformations (OMAD) to explicitly model articulated objects. In OMAD, each category is associated with a linear shape function with a shared shape basis and a non-linear joint function. Both functions can be learned from a large-scale object model dataset and fixed as category-specific priors. We then propose OMADNet, which predicts the shape parameters and joint states from a single observation of an object. With this full representation of object shape and joint states, we can address several tasks, including category-level object pose estimation and articulated object retrieval. To evaluate these tasks, we create a synthetic dataset based on PartNet-Mobility. Extensive experiments show that our simple OMADNet serves as a strong baseline for both tasks.
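To make the linear shape function concrete, the sketch below shows one common way such a model can be formed: an instance shape is the category mean shape plus a linear combination of shared basis components weighted by per-instance shape parameters. All dimensions and names here are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

# Hypothetical sizes (not from the paper): points per shape and basis components.
N_POINTS = 1024
N_BASIS = 10

rng = np.random.default_rng(0)
mean_shape = rng.normal(size=(N_POINTS, 3))            # category mean shape
shape_basis = rng.normal(size=(N_BASIS, N_POINTS, 3))  # shared shape basis

def shape_function(beta):
    """Linear shape function: mean_shape + sum_i beta[i] * shape_basis[i]."""
    return mean_shape + np.tensordot(beta, shape_basis, axes=1)

# Zero shape parameters recover the category mean shape.
assert np.allclose(shape_function(np.zeros(N_BASIS)), mean_shape)
```

A non-linear joint function would then pose each part of this rest shape according to the predicted joint states; that step is category-specific and is learned in OMAD rather than hand-written.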