Higher-Order Hypergraph Learning (HOHL) was recently introduced as a principled alternative to classical hypergraph regularization, enforcing higher-order smoothness via powers of multiscale Laplacians induced by the hypergraph structure. Prior work established the well- and ill-posedness of HOHL through an asymptotic consistency analysis in geometric settings. We extend this theoretical foundation by proving the consistency of a truncated version of HOHL and deriving explicit convergence rates when HOHL is used as a regularizer in fully supervised learning. We further demonstrate its strong empirical performance in active learning and on datasets lacking an underlying geometric structure, highlighting HOHL's versatility and robustness across diverse learning settings.
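To make the penalty described above concrete, the following is a minimal sketch of a HOHL-style regularizer of the form $\sum_k w_k\, u^\top L_k^{s_k} u$, where each $L_k$ is a Laplacian associated with one scale of the hypergraph. The use of a clique-expansion Laplacian per hyperedge-size scale is an illustrative assumption, not necessarily the multiscale construction used in the paper, and the function names (`clique_expansion_laplacian`, `hohl_regularizer`) are hypothetical.

```python
import numpy as np

def clique_expansion_laplacian(n, hyperedges):
    """Unnormalized Laplacian of the clique expansion of a set of hyperedges.
    Illustrative choice of scale-wise Laplacian; the paper's construction may differ."""
    W = np.zeros((n, n))
    for e in hyperedges:
        for i in e:
            for j in e:
                if i != j:
                    W[i, j] += 1.0 / (len(e) - 1)
    return np.diag(W.sum(axis=1)) - W

def hohl_regularizer(u, laplacians, powers, weights):
    """Higher-order smoothness penalty: sum_k w_k * u^T L_k^{s_k} u."""
    return sum(w * u @ np.linalg.matrix_power(L, s) @ u
               for L, s, w in zip(laplacians, powers, weights))

# Toy example: 6 nodes, hyperedges grouped by size to mimic "scales".
hyperedges = [(0, 1, 2), (2, 3, 4, 5), (0, 5)]
n = 6
by_size = {}
for e in hyperedges:
    by_size.setdefault(len(e), []).append(e)
laplacians = [clique_expansion_laplacian(n, es) for es in by_size.values()]

u = np.array([0.0, 0.1, 0.2, 0.8, 0.9, 1.0])   # candidate label function on the nodes
penalty = hohl_regularizer(u, laplacians,
                           powers=[2] * len(laplacians),
                           weights=[1.0] * len(laplacians))
print(penalty)
```

In a fully supervised setting, a penalty of this form would simply be added to an empirical loss over labeled nodes; the Laplacian powers `s_k` control the order of smoothness enforced at each scale.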