We introduce the SE(3)-Transformer, a variant of the self-attention module for 3D point clouds and graphs, which is equivariant under continuous 3D roto-translations. Equivariance is important to ensure stable and predictable performance in the presence of nuisance transformations of the input data. A positive corollary of equivariance is increased weight-tying within the model. The SE(3)-Transformer leverages the benefits of self-attention to operate on large point clouds and graphs with a varying number of points, while guaranteeing SE(3)-equivariance for robustness. We evaluate our model on a toy N-body particle simulation dataset, showcasing the robustness of the predictions under rotations of the input. We further achieve competitive performance on two real-world datasets, ScanObjectNN and QM9. In all cases, our model outperforms a strong, non-equivariant attention baseline and an equivariant model without attention.
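As a minimal illustration of the equivariance property stated above (not of the SE(3)-Transformer itself), the sketch below checks numerically that a toy point-cloud function f satisfies f(Rx + t) = R f(x) + t for a random rotation R and translation t; the function and all names are illustrative assumptions, not part of the paper.

```python
# Minimal sketch of an SE(3)-equivariance check on a toy point-cloud function.
# The toy layer (shift each point halfway toward the centroid) commutes with
# any roto-translation by construction; it stands in for an equivariant model.
import numpy as np

def toy_equivariant_layer(points):
    # Move each point halfway toward the cloud's centroid.
    centroid = points.mean(axis=0, keepdims=True)
    return 0.5 * (points + centroid)

rng = np.random.default_rng(0)
x = rng.normal(size=(16, 3))                # toy point cloud: 16 points in R^3

# Random rotation R in SO(3) (via QR) and random translation t.
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))
R = q * np.sign(np.linalg.det(q))           # flip sign if needed so det(R) = +1
t = rng.normal(size=(1, 3))

transform_then_f = toy_equivariant_layer(x @ R.T + t)   # f(Rx + t)
f_then_transform = toy_equivariant_layer(x) @ R.T + t   # R f(x) + t

# Equivariance means both orders agree (up to floating-point precision).
print(np.allclose(transform_then_f, f_then_transform))  # True
```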