Solving time-dependent parametric partial differential equations (PDEs) remains a fundamental challenge for neural solvers, particularly when generalizing across a wide range of physical parameters and dynamics. When data is uncertain or incomplete, as is often the case, a natural approach is to turn to generative models. We introduce ENMA, a generative neural operator designed to model spatio-temporal dynamics arising from physical phenomena. ENMA predicts future dynamics in a compressed latent space using a generative masked autoregressive transformer trained with a flow matching loss, enabling tokenwise generation. Irregularly sampled spatial observations are encoded into uniform latent representations via attention mechanisms and further compressed through a spatio-temporal convolutional encoder. This allows ENMA to perform in-context learning at inference time by conditioning on either past states of the target trajectory or auxiliary context trajectories with similar dynamics. The result is a robust and adaptable framework that generalizes to new PDE regimes and supports one-shot surrogate modeling of time-dependent parametric PDEs.
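For concreteness, a standard conditional flow-matching objective with a linear interpolation path is sketched below; this is an illustrative form rather than the paper's exact loss, and the symbols $z_0$, $z_1$, $v_\theta$, and the conditioning $c$ are assumptions introduced here for exposition.

\[
z_t = (1 - t)\, z_0 + t\, z_1, \qquad
\mathcal{L}_{\mathrm{FM}}(\theta) =
\mathbb{E}_{\,t \sim \mathcal{U}[0,1],\; z_0 \sim \mathcal{N}(0, I),\; z_1}
\Bigl[ \bigl\| v_\theta(z_t, t \mid c) - (z_1 - z_0) \bigr\|^2 \Bigr],
\]

where $z_1$ denotes a target latent token, $z_0$ a Gaussian noise sample, $v_\theta$ the learned velocity field, and $c$ the conditioning produced by the masked autoregressive transformer from observed tokens; sampling a new token amounts to integrating $\dot{z}_t = v_\theta(z_t, t \mid c)$ from $t = 0$ to $t = 1$.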