Point cloud completion, which aims to complete 3D shapes from partial 3D point clouds, is a fundamental problem in 3D point cloud analysis. Benefiting from the development of deep neural networks, research on point cloud completion has made great progress in recent years. However, the explicit local region partition (e.g., kNN) involved in existing methods makes them sensitive to the density distribution of point clouds. Moreover, it yields limited receptive fields that prevent capturing features from long-range context. To solve these problems, we leverage cross-attention and self-attention mechanisms to design a novel neural network that processes point clouds in a per-point manner, eliminating kNN. Two essential blocks, Geometric Details Perception (GDP) and Self-Feature Augment (SFA), are proposed to establish short-range and long-range structural relationships directly among points in a simple yet effective way via the attention mechanism. Then, based on GDP and SFA, we construct a new framework with the popular encoder-decoder architecture for point cloud completion. The proposed framework, namely PointAttN, is simple, neat and effective; it can precisely capture the structural information of 3D shapes and predict complete point clouds with highly detailed geometries. Experimental results demonstrate that our PointAttN outperforms state-of-the-art methods by a large margin on popular benchmarks such as Completion3D and PCN. Code is available at: https://github.com/ohhhyeahhh/PointAttN
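The per-point attention idea described above can be sketched as follows. This is a minimal illustration of scaled dot-product attention applied directly to per-point feature vectors, with no kNN grouping, so every point attends to every other point; the function name `point_attention` and the toy dimensions are hypothetical and do not reproduce the actual GDP/SFA block designs from the paper.

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def point_attention(query_feats, key_feats):
    """Scaled dot-product attention between per-point features.

    query_feats: (N, C), key_feats: (M, C).
    With query == key this acts as self-attention (long-range context,
    in the spirit of SFA); with different inputs it acts as
    cross-attention (in the spirit of GDP). No local region partition:
    each of the N query points aggregates from all M key points.
    """
    c = query_feats.shape[-1]
    scores = query_feats @ key_feats.T / np.sqrt(c)  # (N, M) affinities
    weights = softmax(scores, axis=-1)               # each row sums to 1
    return weights @ key_feats                       # (N, C) aggregated features

# toy example: 4 points with 8-dim features attending to themselves
rng = np.random.default_rng(0)
feats = rng.standard_normal((4, 8))
out = point_attention(feats, feats)
```

Because the attention weights are computed from feature affinities rather than a fixed spatial neighborhood, the operation is insensitive to the sampling density of the input cloud, which is the property the abstract highlights.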