Physics-Informed Neural Networks with hard constraints (HC-PINNs) are increasingly favored for their ability to strictly enforce boundary conditions via a trial-function ansatz $\tilde{u} = A + B \cdot N$, yet the theoretical mechanisms governing their training dynamics have remained largely unexplored. Unlike soft-constrained formulations, where boundary terms act as additive penalties, this work reveals that the boundary function $B$ introduces a multiplicative spatial modulation that fundamentally alters the learning landscape. A rigorous Neural Tangent Kernel (NTK) framework for HC-PINNs is established, yielding an explicit kernel composition law which demonstrates that the boundary function $B(\vec{x})$ acts as a spectral filter, reshaping the eigenspectrum of the neural network's native kernel. Through spectral analysis, the effective rank of the residual kernel is identified as a deterministic predictor of training convergence, superior to the classical condition number. It is shown that widely used boundary functions can inadvertently induce spectral collapse, leading to optimization stagnation even when boundary conditions are satisfied exactly. Validated across multi-dimensional benchmarks, this framework transforms the design of boundary functions from a heuristic choice into a principled spectral optimization problem, providing a solid theoretical foundation for geometric hard constraints in scientific machine learning.
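For concreteness, a minimal derivation sketch of the composition law implied by the ansatz (the symbols $\Theta_N$ for the network's native NTK, $\Theta_{\tilde{u}}$ for the constrained kernel, and $D_B$ for the diagonal boundary matrix are illustrative notation, not necessarily the paper's): since $A$ and $B$ carry no trainable parameters, the chain rule gives $\nabla_\theta \tilde{u}(\vec{x}) = B(\vec{x})\,\nabla_\theta N(\vec{x};\theta)$, hence

$$\Theta_{\tilde{u}}(\vec{x},\vec{x}') \;=\; \nabla_\theta \tilde{u}(\vec{x})^{\top}\,\nabla_\theta \tilde{u}(\vec{x}') \;=\; B(\vec{x})\,\Theta_N(\vec{x},\vec{x}')\,B(\vec{x}').$$

Over a sample set $\{\vec{x}_i\}_{i=1}^{n}$ the kernel matrix factors as the congruence $\Theta_{\tilde{u}} = D_B\,\Theta_N\,D_B$ with $D_B = \operatorname{diag}\big(B(\vec{x}_1),\dots,B(\vec{x}_n)\big)$, so wherever $B$ is small the corresponding rows and columns are damped, which is one mechanism behind the effective-rank collapse described above; a residual-level law for the PDE operator would additionally involve derivatives of $B$.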