Group equivariant neural networks are used as building blocks of group invariant neural networks, which have been shown to improve generalisation performance and data efficiency through principled parameter sharing. Such works have mostly focused on group equivariant convolutions, building on the result that group equivariant linear maps are necessarily convolutions. In this work, we extend the scope of the literature to self-attention, which is emerging as a prominent building block of deep learning models. We propose the LieTransformer, an architecture composed of LieSelfAttention layers that are equivariant to arbitrary Lie groups and their discrete subgroups. We demonstrate the generality of our approach by showing experimental results that are competitive with baseline methods on a wide range of tasks: shape counting on point clouds, molecular property regression and modelling particle trajectories under Hamiltonian dynamics.
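To make the equivariance property referenced above concrete, the following is a minimal numerical sketch (not the LieSelfAttention layer itself): a toy map on point clouds, here called `toy_equivariant_layer`, built from rotation-invariant pairwise distances and relative positions, so that rotating the input rotates the output by the same matrix. The function names and the NumPy implementation are illustrative assumptions, not code from the paper.

```python
import numpy as np

def toy_equivariant_layer(x):
    """A toy SO(3)-equivariant map on a point cloud x of shape (n, 3).

    Each output point is a distance-weighted sum of relative positions,
    so rotating the input rotates the output by the same rotation matrix.
    This only illustrates the equivariance property; it is not the
    LieSelfAttention layer proposed in the paper.
    """
    diffs = x[:, None, :] - x[None, :, :]           # (n, n, 3) relative positions
    dists = np.linalg.norm(diffs, axis=-1)          # (n, n) rotation-invariant distances
    weights = np.exp(-dists)                        # rotation-invariant weights
    return (weights[..., None] * diffs).sum(axis=1) # (n, 3) equivariant output

def random_rotation(rng):
    """Sample a random 3x3 rotation matrix via QR decomposition."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))   # fix column signs so q is orthogonal with a canonical form
    if np.linalg.det(q) < 0:   # ensure det = +1 (a rotation, not a reflection)
        q[:, 0] *= -1
    return q

rng = np.random.default_rng(0)
x = rng.normal(size=(5, 3))
R = random_rotation(rng)

# Equivariance check: f(x R^T) should equal f(x) R^T.
lhs = toy_equivariant_layer(x @ R.T)
rhs = toy_equivariant_layer(x) @ R.T
print(np.allclose(lhs, rhs))  # True (up to floating point error)
```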