Learning predictive models for unlabeled spatiotemporal data is challenging in part because visual dynamics can be highly entangled in real scenes, making existing approaches prone to overfitting partial modes of physical processes while neglecting to reason about others. We name this phenomenon spatiotemporal mode collapse and explore it for the first time in predictive learning. The key is to provide the model with a strong inductive bias to discover the compositional structures of latent modes. To this end, we propose ModeRNN, which introduces a novel method to learn structured hidden representations between recurrent states. The core idea of this framework is to first extract the various components of visual dynamics using a set of spatiotemporal slots with independent parameters. Considering that multiple space-time patterns may co-exist in a sequence, we leverage learnable importance weights to adaptively aggregate slot features into a unified hidden representation, which is then used to update the recurrent states. Across the entire dataset, different modes elicit different responses on the mixture of slots, which enhances the ability of ModeRNN to build structured representations and thus prevents spatiotemporal mode collapse. Unlike existing models, ModeRNN avoids this collapse and further benefits from learning mixed visual dynamics.
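The slot-based recurrent update described above can be sketched in a few lines. The following is a minimal, hypothetical NumPy illustration, not the authors' implementation: each of K slots has independent parameters that extract its own view of the input and hidden state, a gating network produces learnable importance weights, and a softmax-weighted sum aggregates the slot features into the unified hidden representation used to update the recurrent state. The class and parameter names are assumptions made for illustration.

```python
import numpy as np

class SlotAggregationCell:
    """Hypothetical sketch of a ModeRNN-style recurrent update:
    K spatiotemporal slots with independent parameters, plus
    learnable importance weights for adaptive aggregation."""

    def __init__(self, dim, num_slots, seed=0):
        rng = np.random.default_rng(seed)
        # Independent parameters per slot (assumption: simple linear slots).
        self.W_slots = rng.standard_normal((num_slots, dim, dim)) * 0.1
        # Parameters producing per-slot importance logits from the input.
        self.W_gate = rng.standard_normal((num_slots, dim)) * 0.1

    def step(self, x, h):
        # Each slot extracts its own component of the visual dynamics
        # from the current input and hidden state.
        slot_feats = np.tanh(np.einsum('kij,j->ki', self.W_slots, x + h))
        # Importance weights, softmax-normalized so that co-existing
        # space-time patterns can mix adaptively.
        logits = self.W_gate @ x
        w = np.exp(logits - logits.max())
        w /= w.sum()
        # Weighted aggregation into a unified hidden representation,
        # which becomes the next recurrent state.
        return (w[:, None] * slot_feats).sum(axis=0)
```

Because the aggregation is a convex combination of tanh-bounded slot features, the hidden state stays bounded; different input modes shift the softmax weights toward different slot mixtures, which is the mechanism the abstract credits with preventing mode collapse.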